\section{Introduction} \noindent Clustering is an important but very challenging unsupervised task, the goal of which is to partition a set of samples into homogeneous groups \cite{8486482}. Numerous applications can be formulated as a clustering problem, such as recommender systems \cite{song2014online}, community detection \cite{wu2018nonnegative}, and image segmentation \cite{li2019superpixel}. Over the past decades, a large number of clustering techniques have been proposed, e.g., K-means \cite{jain2010data}, spectral clustering \cite{vonLuxburg2007}, matrix factorization \cite{jia2019semi,8361078,9013063}, hierarchical clustering \cite{johnson1967hierarchical}, Gaussian mixture models \cite{moore1999very}, and so on. As each method has its own advantages as well as drawbacks, no single method always outperforms the others \cite{vega2011survey}. Additionally, a clustering method usually contains a few hyper-parameters, on which its performance heavily depends \cite{9178787}. Moreover, the hyper-parameters are difficult to tune, and some methods, like K-means, are quite sensitive to initialization. These dilemmas increase the difficulty of choosing an appropriate clustering method for a given clustering task. To this end, clustering ensemble was introduced: given a set of base clusterings produced by different methods or by the same method with different hyper-parameters/initializations, it aims to generate a consensus clustering with better clustering performance than the base clusterings \cite{sagi2018ensemble,boongoen2018cluster}. Clustering ensemble is more difficult than supervised ensemble learning \cite{tao2017ensemble,tao2016robust}, as the strategies commonly used in supervised ensemble learning, such as voting, cannot be directly applied when sample labels are unavailable. To realize clustering ensemble, existing methods generally first learn a pairwise relationship matrix from the base clusterings, and then apply an off-the-shelf clustering method like spectral clustering to the resulting matrix to produce the final clustering result \cite{tao2017simultaneous}. Based on how the pairwise relationship matrix is generated, we roughly divide the existing methods into two categories. ($1$) The first kind of methods treats the base clusterings as new feature representations (as shown in Fig. 1-A) to learn a pairwise relationship matrix. For example, \cite{gao2016robust} formulated clustering ensemble as a convex low-rank matrix representation problem. \cite{zhou2019ensemble} used a Frobenius norm regularized self-representation model to seek a dense affinity matrix for clustering ensemble. ($2$) The second kind of methods relies on the co-association matrix (as shown in Fig. 1-C), which summarizes the co-occurrence of samples in the same cluster across the base clusterings. The co-association matrix was first proposed by \cite{fred2005combining}, and has since become a fundamental tool in clustering ensemble. \cite{7811216} theoretically bridged the co-association based method to weighted K-means clustering, which largely reduces the computational complexity. Recently, many advanced co-association matrix construction methods were proposed. For example, \cite{huang2017locally} considered the uncertainty of each base clustering and proposed a locally weighted co-association matrix. \cite{huang2018enhanced} used the cluster-wise similarities to enhance the traditional co-association matrix.
\cite{zhou2020self} proposed a self-paced strategy to learn the co-association matrix. See the detailed discussion of the related works in the next section. We observe that the co-association matrices constructed in prior works are variants of a weighted linear combination of the connective matrices (as shown in Fig. 1-B) from different base clusterings. When the performance of some base clusterings is poor, they will dominate the co-association matrix and degrade the clustering performance severely. In this paper, we propose a novel constrained low-rank tensor approximation (LTA) model to refine the co-association matrix from a global perspective. Specifically, as shown in Fig. 1-D, we first construct a coherent-link matrix, whose elements examine whether two samples are from the same cluster in all the base clusterings or not. We then stack the coherent-link matrix and the conventional co-association matrix to form a $3$-dimensional (3-D) tensor, shown in Fig. 1-E, on which a low-rank approximation is then performed. By exploring the low-rankness, the proposed model can propagate the highly reliable information of the coherent-link matrix to the co-association matrix, producing a refined co-association matrix, which is adopted as the input of an off-the-shelf clustering method to produce the final clustering result. Technically, the proposed model is formulated as a convex optimization problem and solved by an alternating iterative method. We evaluate the proposed model on $7$ benchmark data sets, and compare it with $12$ state-of-the-art clustering ensemble methods. The experimental comparisons substantiate that the proposed model significantly outperforms state-of-the-art methods. \textit{To the best of our knowledge, this is the first work to explore the potential of low-rank tensors for clustering ensemble. } \begin{figure}[!t] \centering \centerline{\epsfig{figure=FW5-c.pdf,width=8.5cm}} \caption{Illustration of the proposed method, taking $3$ base clusterings denoted by $\mathbf{\pi}_1$, $\mathbf{\pi}_2$ and $\mathbf{\pi}_3$, and $6$ input samples denoted by $\mathbf{x}_1,\cdots,\mathbf{x}_6$ as an example. By exploring the low-rankness of the formed $3$-D tensor, the limited but highly reliable information contained in the coherent-link matrix can be leveraged to enhance the quality of the co-association matrix.} \label{fig:framework-s} \end{figure} \section{Related Work} \subsubsection{Notation.} We denote tensors by boldface swash letters, e.g., $\bm{\mathcal{A}}$, matrices by boldface capital letters, e.g., $\mathbf{A}$, vectors by boldface lowercase letters, e.g., $\mathbf{a}$, and scalars by lowercase letters, e.g., $a$. Let $\bm{\mathcal{A}}(i,j,k)$ denote the $(i,j,k)$-th element of a $3$-D tensor $\bm{\mathcal{A}}$, $\mathbf{A}(i,j)$ denote the $(i,j)$-th element of matrix $\mathbf{A}$, and $\mathbf{a}(i)$ denote the $i$-th entry of vector $\mathbf{a}$. The $i$-th frontal slice of tensor $\bm{\mathcal{A}}$ is denoted as $\bm{\mathcal{A}}(:,:,i)$. \subsubsection{Rank of tensors.} In this paper, we use the tensor nuclear norm induced by the tensor singular value decomposition (t-SVD) \cite{kilmer2013third} to measure the rank of a tensor.
Specifically, the t-SVD of a $3$-D tensor $\bm{\mathcal{A}}\in\mathbb{R}^{n_1\times n_2\times n_3}$ can be represented as \begin{equation} \bm{\mathcal{A}}=\bm{\mathcal{U}}* \bm{\mathcal{S}} * \bm{\mathcal{V}}^\mathsf{T}, \label{t-SVD-form} \end{equation} where $\bm{\mathcal{U}}\in\mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\bm{\mathcal{V}}\in\mathbb{R}^{n_2\times n_2\times n_3}$ are two orthogonal tensors, $\bm{\mathcal{S}}\in\mathbb{R}^{n_1\times n_1\times n_3}$ is an f-diagonal tensor, and $*$ and $\cdot^\mathsf{T}$ denote the tensor product and tensor transpose, respectively. The detailed definitions of the above-mentioned tensor related operators can be found in \cite{zhang2014novel}. Since the tensor product can be efficiently computed in the Fourier domain \cite{kilmer2013third}, the t-SVD form of a tensor can be obtained efficiently with the fast Fourier transform (FFT), as shown in Algorithm 1. Given the t-SVD, the tensor nuclear norm \cite{zhang2014novel} is defined as the sum of the absolute values of the diagonal entries of $\bm{\mathcal{S}}$, i.e., \begin{equation} \|\bm{\mathcal{A}}\|_{\star} =\sum_{i=1}^{{\rm min}(n_1,n_2)} \sum_{k=1}^{n_3}|\bm{\mathcal{S}}(i,i,k)|. \end{equation} \begin{algorithm}[!t] \caption{t-SVD of a $3$-D tensor \cite{zhang2014novel}} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Initialize:}} \REQUIRE $3$-D tensor $\bm{\mathcal{A}}\in\mathbb{R}^{n_1 \times n_2 \times n_3}$. \STATE Perform FFT on $\bm{\mathcal{A}}$, i.e., $\bm{\mathcal{A}}_f={\rm fft}(\bm{\mathcal{A}},[~],3)$; \STATE \textbf{for} $k=1:n_3$ \textbf{do} \STATE ~~Perform SVD on each frontal slice of $\bm{\mathcal{A}}_f$, i.e., ~~[$\mathbf{U}$,$\mathbf{S}$,$\mathbf{V}$]=SVD($\bm{\mathcal{A}}_f(:,:,k)$); \STATE ~~~$\bm{\mathcal{U}}_f(:,:,k)=\mathbf{U}$, $\bm{\mathcal{S}}_f(:,:,k)=\mathbf{S}$, $\bm{\mathcal{V}}_f(:,:,k)=\mathbf{V}$; \STATE \textbf{end} \STATE Perform inverse FFT on $\bm{\mathcal{U}}_f$, $\bm{\mathcal{S}}_f$ and $\bm{\mathcal{V}}_f$, i.e., $\bm{\mathcal{U}}=\rm{ifft}(\bm{\mathcal{U}}_f, [ ],3)$, $\bm{\mathcal{S}}=\rm{ifft}(\bm{\mathcal{S}}_f, [ ],3)$ and $\bm{\mathcal{V}}=\rm{ifft}(\bm{\mathcal{V}}_f, [ ],3)$; \end{algorithmic} \textbf{Output:} $\bm{\mathcal{U}}$, $\bm{\mathcal{S}}$ and $\bm{\mathcal{V}}$. \label{t-SVD} \end{algorithm} \subsubsection{Formulation of Clustering Ensemble.} Consider a data set $\mathbf{X}=[\mathbf{x}_1,\mathbf{x}_2, \cdots, \mathbf{x}_n]\in\mathbb{R}^{d\times n}$ of $n$ samples with each sample $\mathbf{x}_i\in\mathbb{R}^{d\times 1}$, and $m$ base clusterings $\mathbf{\Pi}=[\bm{\pi}_1,\bm{\pi}_2,\cdots,\bm{\pi}_m]\in\mathbb{R}^{n\times m}$, where each base clustering $\bm{\pi}_i\in\mathbb{R}^{n\times 1}$ is an $n$-dimensional vector whose $j$-th element $\bm{\pi}_i(j)$ indicates the cluster membership of the $j$-th sample $\mathbf{x}_j$ in $\bm{\pi}_i$. For clustering ensemble, the cluster indicators in different base clusterings are generally different. Fig. 1-A shows a toy example of 6 samples and 3 base clusterings. The objective of clustering ensemble is to combine multiple base clusterings to produce better performance than that of any individual one. \subsubsection{Prior Art.} Based on how $\bm{\Pi}$ is used, we roughly divide previous clustering ensemble methods into two categories.
The methods in the first category treat $\mathbf{\Pi}$ as a representation of samples and then construct a pairwise affinity matrix $\mathbf{P}\in\mathbb{R}^{n\times n}$ accordingly, which can be generally expressed as \begin{equation} \min_{\mathbf{P}}f(\mathbf{\Pi},\mathbf{P})+\lambda\phi(\mathbf{P}), \end{equation} where $f(\bm{\Pi}, \mathbf{P})$ is the fidelity term and $\phi(\mathbf{P})$ imposes specific regularization on $\mathbf{P}$. For example, in \cite{gao2016robust}, $f(\cdot,\cdot)$ and $\phi(\cdot)$ denote the Frobenius norm and the nuclear norm, respectively, while those in \cite{zhou2019ensemble} are both the Frobenius norm. The methods in the second category first transform each base clustering into a connective matrix (as shown in Fig. 1-B), i.e., \begin{equation} \mathbf{A}_k(i,j)=\delta(\pi_k(i),\pi_k(j)), \end{equation} where $\mathbf{A}_k\in\mathbb{R}^{n\times n}$ is the $k$-th connective matrix constructed from $\bm{\pi}_k$, and \begin{equation} \delta(\pi_k(i),\pi_k(j))=\begin{cases} 1&{\rm if~}\pi_k(i)=\pi_k(j)\\ 0&{\rm otherwise}. \end{cases} \end{equation} Then, the methods in the second category build a co-association matrix $\mathbf{A}\in\mathbb{R}^{n\times n}$ \cite{fred2005combining} from the connective matrices, i.e., \begin{equation} \mathbf{A}(i,j)=\frac{1}{m}\sum_{k=1}^m\mathbf{A}_k(i,j). \label{CA} \end{equation} As the co-association matrix naturally converts the base clusterings to a pairwise similarity measure, it has become the cornerstone of clustering ensemble. Recently, many advanced co-association matrix construction methods were proposed to enhance the clustering performance, which can be generally unified in the following formula: \begin{equation} \mathbf{A}(i,j)=\sum_{k=1}^m \bm{\omega}(k) \times \mathbf{A}_k(i,j), \label{WCA} \end{equation} where $\bm{\omega}\in\mathbb{R}^{m\times 1}$ is the weight vector constructed with different strategies. For example, \cite{zhou2020self} used a self-paced learning strategy to construct $\bm{\omega}$. \cite{huang2017locally} considered the uncertainties of the base clusterings, and proposed a locally-weighted weight vector. \cite{huang2018enhanced} used the cluster-wise similarities to construct the weight vector. \section{Proposed Method} As shown in Eq. \eqref{WCA}, the previous methods construct a co-association matrix as a linear combination of connective matrices, and are thus vulnerable to poor base clusterings. To this end, we propose a novel low-rank tensor approximation based method to refine the initial co-association matrix from a global perspective. \subsection{Problem Formulation} To refine the co-association matrix, we first construct a coherent-link matrix (as shown in Fig. 1-D), which inspects whether two samples are clustered into the same category under all the base clusterings. It is worth pointing out that the elements of the coherent-link matrix are the most reliable information we can infer from the base clusterings. Specifically, we can directly obtain the coherent-link matrix $\mathbf{M}\in\mathbb{R}^{n\times n}$ from the co-association matrix in Eq. \eqref{CA}, i.e., \begin{equation} \mathbf{M}(i,j)=\begin{cases} 1&{\rm if~}\mathbf{A}(i,j)=1\\ 0&{\rm otherwise.} \end{cases} \label{CLM} \end{equation} We then stack the coherent-link matrix and the co-association matrix to form a $3$-D tensor $\bm{\mathcal{P}}\in\mathbb{R}^{n\times n \times 2}$, with $\bm{\mathcal{P}}(:,:,1)=\mathbf{M}$ and $\bm{\mathcal{P}}(:,:,2)=\mathbf{A}$.
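For concreteness, the constructions of $\mathbf{A}$, $\mathbf{M}$, and $\bm{\mathcal{P}}$ admit a direct implementation. The following is a minimal NumPy sketch, assuming \texttt{Pi} is an $n\times m$ integer label matrix with one column per base clustering (the helper name \texttt{build\_tensor} is illustrative only):
\begin{verbatim}
import numpy as np

def build_tensor(Pi):
    """Build the co-association matrix A, the coherent-link matrix M,
    and the stacked n x n x 2 tensor P from base clusterings Pi."""
    n, m = Pi.shape
    counts = np.zeros((n, n), dtype=int)
    for k in range(m):
        labels = Pi[:, k]
        # Connective matrix A_k: 1 iff samples i, j share a cluster in pi_k.
        counts += (labels[:, None] == labels[None, :]).astype(int)
    A = counts / m                      # co-association: average agreement
    M = (counts == m).astype(float)     # coherent-link: all clusterings agree
    P = np.stack([M, A], axis=2)        # frontal slice 1 is M, slice 2 is A
    return A, M, P
\end{verbatim}
Accumulating integer agreement counts first makes the test \texttt{counts == m} exact, so $\mathbf{M}$ is not affected by floating-point rounding of the averaged entries.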
As the elements of both the coherent-link matrix and the co-association matrix express the pairwise similarity between samples, ideally, the formed tensor should be low-rank. Moreover, the non-zero elements of $\mathbf{M}$ are few but express highly reliable similarity between samples, and we thus try to complement the zero elements with reference to the non-zero ones and the co-association matrix. On the contrary, the elements of the co-association matrix are dense but contain many erroneous connections, and we try to refine it by removing the incorrect connections, which are collected in $\mathbf{E}\in\mathbb{R}^{n\times n}$, by leveraging the information from the coherent-link matrix. In addition, the elements of $\bm{\mathcal{P}}$ should be bounded in $[0,1]$, and each frontal slice of $\bm{\mathcal{P}}$ should be symmetric. Taking all the above analyses into account, the proposed method is mathematically formulated as a constrained optimization problem, written as \begin{equation} \begin{split} &\min_{\bm{\mathcal{P}},\mathbf{E}}\|\bm{\mathcal{P}}\|_{\star}+\lambda\|\mathbf{E}\|_F^2\\ &{\rm s.t.}~\bm{\mathcal{P}}(i,j,1)=\mathbf{M}(i,j),~{\rm if}~\mathbf{M}(i,j)=1,\\ &~~~~~~\bm{\mathcal{P}}(:,:,1)=\bm{\mathcal{P}}(:,:,1)^\mathsf{T},0\leq\bm{\mathcal{P}}(i,j,1)\leq1,\forall i,j,\\ &~~~~~~\bm{\mathcal{P}}(:,:,2)+\mathbf{E}=\mathbf{A},\\ &~~~~~~\bm{\mathcal{P}}(:,:,2)=\bm{\mathcal{P}}(:,:,2)^\mathsf{T},0\leq\bm{\mathcal{P}}(i,j,2)\leq1,\forall i,j, \end{split} \label{model} \end{equation} where $\lambda>0$ is the coefficient to balance the error matrix, and a Frobenius norm is imposed on $\mathbf{E}$ to avoid the trivial solution $\bm{\mathcal{P}}(:,:,2)=0$. By optimizing Eq. \eqref{model}, it is expected that the limited but highly reliable information in $\mathbf{M}$ can be propagated to the co-association matrix, while the coherent-link matrix is complemented according to the information from the co-association matrix at the same time. After solving the problem in Eq. \eqref{model}, we obtain a refined co-association matrix $\bm{\mathcal{P}}^*(:,:,2)$, with $\bm{\mathcal{P}}^*$ being the optimized solution. Then, one can apply any clustering method based on pairwise similarities to $\bm{\mathcal{P}}^*(:,:,2)$ to generate the final clustering result. In this paper, we investigate two popular clustering methods, i.e., spectral clustering \cite{ng2002spectral} and agglomerative hierarchical clustering \cite{fred2005combining}. \subsection{Numerical Solution} We propose an optimization method to solve Eq. \eqref{model}, based on the inexact augmented Lagrangian method \cite{8253493}. Specifically, we first introduce two auxiliary matrices $\mathbf{B}, \mathbf{C}\in\mathbb{R}^{n\times n}$ to deal with the bounded and symmetric constraints on $\bm{\mathcal{P}}(:,:,1)$ and $\bm{\mathcal{P}}(:,:,2)$, respectively, and Eq. \eqref{model} can be equivalently rewritten as \begin{equation} \begin{split} &\argmin_{\bm{\mathcal{P}},\mathbf{E},\mathbf{B},\mathbf{C}}\|\bm{\mathcal{P}}\|_{\star}+\lambda\|\mathbf{E}\|_F^2\\ &{\rm s.t.}~\mathbf{B}(i,j)=\mathbf{M}(i,j),~{\rm if}~\mathbf{M}(i,j)=1,~\mathbf{B}=\mathbf{B}^\mathsf{T},\\ &~~~~~~0\leq\mathbf{B}(i,j)\leq1,\forall i,j,~\mathbf{B}=\bm{\mathcal{P}}(:,:,1),\\ &~~~~~~\bm{\mathcal{P}}(:,:,2)+\mathbf{E}=\mathbf{A},~\mathbf{C}=\bm{\mathcal{P}}(:,:,2),\\ &~~~~~~\mathbf{C}=\mathbf{C}^\mathsf{T},~0\leq\mathbf{C}(i,j)\leq1,\forall i,j.
\end{split} \label{BC} \end{equation} To handle the equality constraints, we introduce three Lagrange multipliers $\bm{\Lambda}_1, \bm{\Lambda}_2$ and $\bm{\Lambda}_3\in\mathbb{R}^{n\times n}$, and the augmented Lagrangian form of Eq. \eqref{BC} becomes \begin{align} &\argmin_{\bm{\mathcal{P}},\mathbf{E},\mathbf{B},\mathbf{C}}\|\bm{\mathcal{P}}\|_{\star}+\lambda\|\mathbf{E}\|_F^2+\frac{\mu}{2}\left\|\bm{\mathcal{P}}(:,:,2)+\mathbf{E}-\mathbf{A}+\frac{\mathbf{\Lambda}_2}{\mu}\right\|_F^2\nonumber\\ &+\frac{\mu}{2}\left\|\bm{\mathcal{P}}(:,:,1)-\mathbf{B}+\frac{\mathbf{\Lambda}_1}{\mu}\right\|_F^2+\frac{\mu}{2}\left\|\bm{\mathcal{P}}(:,:,2)-\mathbf{C}+\frac{\mathbf{\Lambda}_3}{\mu}\right\|_F^2\nonumber\\ &{\rm s.t.}~\mathbf{B}(i,j)=\mathbf{M}(i,j),~{\rm if}~\mathbf{M}(i,j)=1,0\leq\mathbf{B}(i,j)\leq1,\forall i,j, \nonumber\\ &~~~~~~~\mathbf{B}=\mathbf{B}^\mathsf{T}, \mathbf{C}=\mathbf{C}^\mathsf{T},~0\leq\mathbf{C}(i,j)\leq1,\forall i,j, \label{ALM} \end{align} where $\mu>0$ is the penalty coefficient. Then Eq. \eqref{ALM} can be optimized by solving the following four subproblems iteratively and alternately, i.e., only one variable is updated at a time with the remaining ones fixed. \subsubsection{The $\bm{\mathcal{P}}$ subproblem.} Removing the irrelevant terms, Eq. \eqref{ALM} with respect to $\bm{\mathcal{P}}$ is written as \begin{equation} \begin{split} &\argmin_{\bm{\mathcal{P}}}\frac{1}{\mu}\|\bm{\mathcal{P}}\|_{\star}+\frac{1}{2}\left\|\bm{\mathcal{P}}-\bm{\mathcal{T}}\right\|_F^2,\\ \end{split} \label{P-sub1} \end{equation} where \begin{equation} \begin{cases} \bm{\mathcal{T}}(:,:,1)=\mathbf{B}-\frac{\mathbf{\Lambda}_1}{\mu}\\ \bm{\mathcal{T}}(:,:,2)=\frac{1}{2}\left(\mathbf{A}+\mathbf{C}-\mathbf{E}-\frac{\mathbf{\Lambda}_2+\mathbf{\Lambda}_3}{\mu}\right). \end{cases} \end{equation} According to \cite{zhang2014novel}, Eq. \eqref{P-sub1} has a closed-form solution given by the soft-thresholding operator applied to the tensor singular values. Moreover, according to Algorithm 1, the t-SVD applies the FFT to the input $3$-D tensor and the SVD to the frontal slices $\bm{\mathcal{T}}_f(:,:,i)$ of its FFT version, which mainly emphasizes the low-rankness of the frontal slices. In contrast, we aim to take advantage of the correlation between the original co-association matrix and the coherent-link matrix. Therefore, we perform the FFT and SVD on the lateral slices $\bm{\mathcal{T}}(:,i,:)$ and $\bm{\mathcal{T}}_f(:,i,:)$ of the tensors, respectively, to obtain the t-SVD representation. \subsubsection{The $\mathbf{E}$ subproblem.} Without the irrelevant terms, the $\mathbf{E}$ subproblem becomes: \begin{equation} \begin{split} &\min_{\mathbf{E}}\lambda\|\mathbf{E}\|_F^2+\frac{\mu}{2}\left\|\bm{\mathcal{P}}(:,:,2)+\mathbf{E}-\mathbf{A}+\frac{\mathbf{\Lambda}_2}{\mu}\right\|_F^2.\\ \end{split} \label{E-sub} \end{equation} Since Eq. \eqref{E-sub} is a quadratic function of $\mathbf{E}$, we can obtain its global minimum by setting its derivative to $0$, i.e., \begin{equation} \mathbf{E}=\frac{\mu\mathbf{A}-\mathbf{\Lambda}_2-\mu\bm{\mathcal{P}}(:,:,2)}{2\lambda+\mu}.
\label{E-sol} \end{equation} \subsubsection{The $\mathbf{B}$ subproblem.} The $\mathbf{B}$ subproblem is written as \begin{equation} \begin{split} &\min_{\mathbf{B}}\frac{\mu}{2}\left\|\mathbf{B}-\left(\bm{\mathcal{P}}(:,:,1)+\frac{\mathbf{\Lambda}_1}{\mu}\right)\right\|_F^2\\ &{\rm s.t.}~\mathbf{B}(i,j)=\mathbf{M}(i,j),~{\rm if}~\mathbf{M}(i,j)=1,\\ &~~~~~~~\mathbf{B}=\mathbf{B}^\mathsf{T},~0\leq\mathbf{B}(i,j)\leq1,\forall i,j,\\ \end{split} \label{B-sub} \end{equation} which is a symmetric and bounded constrained least squares problem, and has an element-wise optimal solution \cite{9154586}, i.e., \begin{equation} \mathbf{B}(i,j)=\begin{cases} \mathbf{M}(i,j)&{\rm if~}\mathbf{M}(i,j)=1,\\ 0&{\rm if~}\mathbf{T}_1(i,j)\leq0~\&~ \mathbf{M}(i,j)\neq 1,\\ 1&{\rm if~}\mathbf{T}_1(i,j)\geq1~\&~ \mathbf{M}(i,j)\neq 1,\\ \mathbf{T}_1(i,j)&{\rm if~}0\leq\mathbf{T}_1(i,j)\leq1~\&~ \mathbf{M}(i,j)\neq 1,\\ \end{cases} \label{B-sol} \end{equation} where \begin{equation} \mathbf{T}_1=\frac{1}{2}\left(\bm{\mathcal{P}}(:,:,1)+\bm{\mathcal{P}}(:,:,1)^\mathsf{T}+\frac{\mathbf{\Lambda}_1+\mathbf{\Lambda}^\mathsf{T}_1}{\mu}\right). \end{equation} \subsubsection{The $\mathbf{C}$ subproblem.} The $\mathbf{C}$ subproblem is the same as the $\mathbf{B}$ subproblem but without the element-wise equality constraints, and is written as \begin{equation} \begin{split} &\min_{\mathbf{C}}\frac{\mu}{2}\left\|\mathbf{C}-\left(\bm{\mathcal{P}}(:,:,2)+\frac{\mathbf{\Lambda}_3}{\mu}\right)\right\|_F^2\\ &{\rm s.t.}~\mathbf{C}=\mathbf{C}^\mathsf{T},~0\leq\mathbf{C}(i,j)\leq1,\forall i,j, \end{split} \label{C-sub} \end{equation} and its optimal solution is \begin{equation} \mathbf{C}(i,j)=\begin{cases} \mathbf{T}_2(i,j)&{\rm~if~}0\leq\mathbf{T}_2(i,j)\leq1,\\ 0&{\rm~if~}\mathbf{T}_2(i,j)\leq0,\\ 1&{\rm~if~}\mathbf{T}_2(i,j)\geq1,\\ \end{cases} \label{C-sol} \end{equation} where \begin{equation} \mathbf{T}_2=\frac{1}{2}\left(\bm{\mathcal{P}}(:,:,2)+\bm{\mathcal{P}}(:,:,2)^\mathsf{T}+\frac{\mathbf{\Lambda}_3+\mathbf{\Lambda}^\mathsf{T}_3}{\mu}\right). \end{equation} \subsubsection{Update $\mathbf{\Lambda}_1$, $\mathbf{\Lambda}_2$, $\mathbf{\Lambda}_3$ and $\mu$.} The Lagrange multipliers and $\mu$ are updated by \begin{equation} \begin{cases} \mathbf{\Lambda}_1=\mathbf{\Lambda}_1+\mu(\bm{\mathcal{P}}(:,:,1)-\mathbf{B})\\ \mathbf{\Lambda}_2=\mathbf{\Lambda}_2+\mu(\bm{\mathcal{P}}(:,:,2)+\mathbf{E}-\mathbf{A})\\ \mathbf{\Lambda}_3=\mathbf{\Lambda}_3+\mu(\bm{\mathcal{P}}(:,:,2)-\mathbf{C})\\ \mu={\rm min}(1.1\mu,\mu_{\rm max}),\\ \end{cases} \label{mu-sol} \end{equation} where $\mu$ is initialized to $0.0001$ \cite{liu2019imbalance}, and $\mu_{\rm max}$ is the upper bound for $\mu$. The overall numerical solution is summarized in Algorithm \ref{TLA}, where the stopping condition is ${\rm max}(\|\mathbf{B}-\bm{\mathcal{P}}(:,:,1)\|_\infty, \|\mathbf{C}-\bm{\mathcal{P}}(:,:,2)\|_\infty, \|\mathbf{A}-\mathbf{E}-\bm{\mathcal{P}}(:,:,2)\|_\infty )<10^{-8}$ with $\|\cdot\|_\infty$ denoting the maximum absolute value of the entries of a matrix. \begin{algorithm}[!t] \caption{Numerical solution to Eq. \eqref{model}} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Initialize:}} \REQUIRE Base clusterings matrix $\mathbf{\Pi}$; \ENSURE $\bm{\mathcal{P}}=0$, $\mathbf{E}=0$, $\mathbf{B}=0$, $\mathbf{C}=0$, and $\mu_{\rm max}=10^8$; \STATE Construct the co-association matrix $\mathbf{A}$ by Eq. \eqref{CA}; \STATE Construct the coherent-link matrix $\mathbf{M}$ by Eq.
\eqref{CLM}; \WHILE{not converged } \STATE Update $\bm{\mathcal{P}}$ by solving Eq. \eqref{P-sub1}; \STATE Update $\mathbf{E}$ by Eq. \eqref{E-sol}; \STATE Update $\mathbf{B}$ by Eq. \eqref{B-sol}; \STATE Update $\mathbf{C}$ by Eq. \eqref{C-sol}; \STATE Update $\mathbf{\Lambda}_1$, $\mathbf{\Lambda}_2$, $\mathbf{\Lambda}_3$ and $\mu$ by Eq. \eqref{mu-sol}; \STATE Check the convergence conditions; \ENDWHILE \end{algorithmic} \textbf{Output:} $\bm{\mathcal{P}}(:,:,2)$ as the refined co-association matrix. \label{TLA} \end{algorithm} \section{Experiment} We conducted extensive experiments to evaluate the proposed model. To facilitate reproduction of the results, we have made the code publicly available at https://github.com/jyh-learning/TensorClusteringEnsemble. \subsubsection{Data Sets.} Following recent clustering ensemble papers \cite{huang2017locally, huang2015robust, zhou2019ensemble}, we adopted $7$ commonly used data sets, i.e., BinAlpha, Multiple features (MF), MNIST, Semeion, CalTech, Texture and ISOLET. Following \cite{huang2017locally}, we randomly selected 5000 samples from MNIST and used this subset in the experiments; for CalTech, we used 20 representative categories out of the $101$ categories and denote the resulting subset as CalTech20. \subsubsection{Generation of Base Clusterings.} Following \cite{huang2017locally}, we first generated a pool of $100$ candidate base clusterings for all the data sets by applying the K-means algorithm with the value of K randomly varying in the range $[2,\sqrt{n}]$, where $n$ is the number of input data samples. \subsubsection{Methods under Comparison.} We compared the proposed model with $12$ state-of-the-art clustering ensemble methods, including PTA-AL, PTA-CL, PTA-SL and PTGP \cite{ huang2015robust}, LWSC, LWEA and LWGP \cite{huang2017locally}, ECPCS-HC and ECPCS-MC \cite{huang2018enhanced}, DREC \cite{zhou2019ensemble}, SPCE \cite{zhou2020self}, and SEC \cite{7811216}. The codes of all the compared methods were provided by their authors. Ours-EA and Ours-SC denote the proposed model equipped with agglomerative hierarchical clustering and spectral clustering, respectively, to generate the final clustering result. \subsubsection{Evaluation Metrics.} We adopted $7$ commonly used metrics to evaluate clustering performance, i.e., clustering accuracy (ACC), normalized mutual information (NMI), purity, adjusted Rand index (ARI), F$1$-score, precision, and recall. For all the metrics, a larger value indicates better clustering performance, and the values of all the metrics are upper-bounded by $1$. The detailed definitions of these metrics can be found in \cite{8502831,9072553}. \subsubsection{Experiment Settings.} For each data set, we randomly selected $10$ base clusterings from the candidate base clustering pool, and performed the different clustering ensemble methods on the selected base clusterings. To reduce the influence of the selected base clusterings, we repeated the random selection $20$ times, and report the average performance over the $20$ repetitions. For the compared methods, we set the hyper-parameters according to their original papers. If no values were suggested, we exhaustively searched the hyper-parameters and used the ones producing the best performance. The proposed model only contains one hyper-parameter $\lambda$, which was set to $0.002$ for all the data sets.
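For reference, Algorithm 2 together with the lateral-slice tensor singular value thresholding used in the $\bm{\mathcal{P}}$ subproblem can be sketched in NumPy as follows. This is a minimal illustration under the settings above ($\lambda=0.002$, $\mu$ initialized to $10^{-4}$, growth factor $1.1$); the function names are ours, and the released code linked above remains the authoritative implementation.
\begin{verbatim}
import numpy as np

def tsvt_lateral(T, tau):
    # Proximal operator of the tensor nuclear norm, with the t-SVD taken
    # along the *lateral* slices as described for the P subproblem:
    # permute so lateral slices become frontal, apply Fourier-domain
    # singular value soft-thresholding (cf. Algorithm 1), permute back.
    Tp = np.transpose(T, (0, 2, 1))           # lateral -> frontal slices
    Tf = np.fft.fft(Tp, axis=2)
    out = np.zeros_like(Tf)
    for k in range(Tf.shape[2]):
        U, s, Vh = np.linalg.svd(Tf[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(out, axis=2)).transpose(0, 2, 1)

def lta_solver(A, M, lam=0.002, mu=1e-4, mu_max=1e8, rho=1.1,
               tol=1e-8, max_iter=500):
    # Simplified sketch of Algorithm 2 (inexact augmented Lagrangian).
    n = A.shape[0]
    P = np.zeros((n, n, 2))
    E = np.zeros((n, n)); B = np.zeros((n, n)); C = np.zeros((n, n))
    L1 = np.zeros((n, n)); L2 = np.zeros((n, n)); L3 = np.zeros((n, n))
    pinned = (M == 1)                         # entries fixed to 1 in B
    for _ in range(max_iter):
        # P subproblem: singular value thresholding of the stacked target.
        T = np.empty((n, n, 2))
        T[:, :, 0] = B - L1 / mu
        T[:, :, 1] = 0.5 * (A + C - E - (L2 + L3) / mu)
        P = tsvt_lateral(T, 1.0 / mu)
        # E subproblem: closed-form quadratic minimizer.
        E = (mu * A - L2 - mu * P[:, :, 1]) / (2.0 * lam + mu)
        # B subproblem: symmetrize, clip to [0, 1], pin coherent links.
        T1 = 0.5 * (P[:, :, 0] + P[:, :, 0].T + (L1 + L1.T) / mu)
        B = np.clip(T1, 0.0, 1.0)
        B[pinned] = 1.0
        # C subproblem: symmetrize and clip to [0, 1].
        T2 = 0.5 * (P[:, :, 1] + P[:, :, 1].T + (L3 + L3.T) / mu)
        C = np.clip(T2, 0.0, 1.0)
        # Multiplier and penalty updates; residuals drive the stopping rule.
        r1 = P[:, :, 0] - B
        r2 = P[:, :, 1] + E - A
        r3 = P[:, :, 1] - C
        L1 += mu * r1; L2 += mu * r2; L3 += mu * r3
        mu = min(rho * mu, mu_max)
        if max(np.abs(r1).max(), np.abs(r2).max(), np.abs(r3).max()) < tol:
            break
    return P[:, :, 1]                         # refined co-association matrix
\end{verbatim}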
\begin{table*}[tph] \caption{Clustering Performance on BinAlpha (\# samples: $1404$, dimension: $320$, \# clusters: $36$) }\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c} \hline \textbf{BinAlpha} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.430 $&$ 0.429 $&$ 0.186 $&$ 0.429 $&$ 0.424 $&$ 0.403 $&$ 0.431 $&$ 0.375 $&$ 0.454 $&$ 0.375 $&$ 0.298 $&$ 0.443 $&$\underline{ 0.712 }$&$\mathbf{ 0.858 }$\\ NMI &$ 0.574 $&$ 0.577 $&$ 0.300 $&$ 0.574 $&$ 0.570 $&$ 0.553 $&$ 0.575 $&$ 0.537 $&$ 0.592 $&$ 0.518 $&$ 0.541 $&$ 0.585 $&$\underline{ 0.824 }$&$\mathbf{ 0.916 }$\\ Purity &$ 0.447 $&$ 0.451 $&$ 0.197 $&$ 0.446 $&$ 0.444 $&$ 0.413 $&$ 0.457 $&$ 0.383 $&$ 0.478 $&$ 0.396 $&$ 0.285 $&$ 0.470 $&$\underline{ 0.718 }$&$\mathbf{ 0.876 }$\\ ARI &$ 0.292 $&$ 0.291 $&$ 0.081 $&$ 0.291 $&$ 0.284 $&$ 0.289 $&$ 0.287 $&$ 0.269 $&$ 0.300 $&$ 0.248 $&$ 0.227 $&$ 0.291 $&$\underline{ 0.643 }$&$\mathbf{ 0.817 }$\\ F1-score &$ 0.314 $&$ 0.312 $&$ 0.126 $&$ 0.313 $&$ 0.306 $&$ 0.313 $&$ 0.308 $&$ 0.295 $&$ 0.320 $&$ 0.271 $&$ 0.302 $&$ 0.311 $&$\underline{ 0.654 }$&$\mathbf{ 0.822 }$\\ Precision &$ 0.275 $&$ 0.276 $&$ 0.071 $&$ 0.277 $&$ 0.272 $&$ 0.248 $&$ 0.277 $&$ 0.220 $&$ 0.305 $&$ 0.238 $&$ 0.294 $&$ 0.296 $&$\underline{ 0.559 }$&$\mathbf{ 0.801 }$\\ Recall &$ 0.366 $&$ 0.361 $&$ 0.635 $&$ 0.361 $&$ 0.349 $&$ 0.426 $&$ 0.348 $&$ 0.451 $&$ 0.337 $&$ 0.323 $&$ 0.314 $&$ 0.327 $&$\underline{ 0.791 }$&$\mathbf{ 0.845 }$\\ \hline \end{tabular} } \begin{tablenotes} \item[*] \tiny{The highest value in each row is bolded, and the second highest one is underlined.}\ \end{tablenotes} \label{table-BA} \end{table*} \begin{table*}[tph] \caption{Clustering Performance on MF (\# samples: 2000, dimension: 649, \# clusters: 10)}\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c} \hline \textbf{MF} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.647 $&$ 0.606 $&$ 0.507 $&$ 0.648 $&$ 0.671 $&$ 0.609 $&$ 0.649 $&$ 0.589 $&$ 0.652 $&$ 0.362 $&$ 0.581 $&$ 0.592 $&$\underline{ 0.718 }$&$\mathbf{ 0.990 }$\\ NMI &$ 0.655 $&$ 0.638 $&$ 0.536 $&$ 0.654 $&$ 0.655 $&$ 0.650 $&$ 0.655 $&$ 0.618 $&$ 0.652 $&$ 0.347 $&$ 0.621 $&$ 0.602 $&$\underline{ 0.790 }$&$\mathbf{ 0.979 }$\\ Purity &$ 0.675 $&$ 0.644 $&$ 0.533 $&$ 0.677 $&$ 0.690 $&$ 0.650 $&$ 0.673 $&$ 0.616 $&$ 0.676 $&$ 0.387 $&$ 0.615 $&$ 0.623 $&$\underline{ 0.719 }$&$\mathbf{ 0.990 }$\\ ARI &$ 0.523 $&$ 0.500 $&$ 0.371 $&$ 0.523 $&$ 0.533 $&$ 0.514 $&$ 0.530 $&$ 0.481 $&$ 0.526 $&$ 0.257 $&$ 0.459 $&$ 0.472 $&$\underline{ 0.685 }$&$\mathbf{ 0.979 }$\\ F1-score &$ 0.574 $&$ 0.554 $&$ 0.457 $&$ 0.575 $&$ 0.583 $&$ 0.567 $&$ 0.582 $&$ 0.541 $&$ 0.576 $&$ 0.370 $&$ 0.527 $&$ 0.528 $&$\underline{ 0.724 }$&$\mathbf{ 0.981 }$\\ Precision &$ 0.535 $&$ 0.511 $&$ 0.344 $&$ 0.534 $&$ 0.551 $&$ 0.517 $&$ 0.530 $&$ 0.472 $&$ 0.541 $&$ 0.311 $&$ 0.424 $&$ 0.496 $&$\underline{ 0.586 }$&$\mathbf{ 0.981 }$\\ Recall &$ 0.625 $&$ 0.608 $&$ 0.712 $&$ 0.627 $&$ 0.619 $&$ 0.628 $&$ 0.647 $&$ 0.637 $&$ 0.618 $&$ 0.739 $&$ 0.713 $&$ 0.566 $&$\underline{ 0.960 }$&$\mathbf{ 0.981 }$\\ \hline \end{tabular} } \label{table-MF} \end{table*} \begin{table*}[tph] \caption{Clustering Performance on MNIST (\# samples: 5000, dimension: 784, \# clusters: 
10)}\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c} \hline \textbf{MNIST} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.663 $&$ 0.654 $&$ 0.207 $&$ 0.665 $&$ 0.613 $&$ 0.658 $&$ 0.573 $&$ 0.609 $&$ 0.656 $&$ 0.480 $&$ 0.543 $&$ 0.539 $&$\underline{ 0.797 }$&$\mathbf{ 0.977 }$\\ NMI &$ 0.618 $&$ 0.610 $&$ 0.133 $&$ 0.622 $&$ 0.612 $&$ 0.635 $&$ 0.594 $&$ 0.608 $&$ 0.635 $&$ 0.434 $&$ 0.482 $&$ 0.521 $&$\underline{ 0.806 }$&$\mathbf{ 0.979 }$\\ Purity &$ 0.682 $&$ 0.668 $&$ 0.209 $&$ 0.685 $&$ 0.663 $&$ 0.676 $&$ 0.626 $&$ 0.624 $&$ 0.691 $&$ 0.498 $&$ 0.557 $&$ 0.585 $&$\underline{ 0.798 }$&$\mathbf{ 0.980 }$\\ ARI &$ 0.513 $&$ 0.504 $&$ 0.051 $&$ 0.522 $&$ 0.483 $&$ 0.531 $&$ 0.460 $&$ 0.495 $&$ 0.524 $&$ 0.342 $&$ 0.429 $&$ 0.384 $&$\underline{ 0.735 }$&$\mathbf{ 0.969 }$\\ F1-score &$ 0.566 $&$ 0.557 $&$ 0.219 $&$ 0.572 $&$ 0.540 $&$ 0.582 $&$ 0.522 $&$ 0.558 $&$ 0.574 $&$ 0.427 $&$ 0.445 $&$ 0.450 $&$\underline{ 0.767 }$&$\mathbf{ 0.972 }$\\ Precision &$ 0.520 $&$ 0.523 $&$ 0.124 $&$ 0.541 $&$ 0.490 $&$ 0.536 $&$ 0.459 $&$ 0.448 $&$ 0.543 $&$ 0.373 $&$ 0.316 $&$ 0.420 $&$\underline{ 0.666 }$&$\mathbf{ 0.968 }$\\ Recall &$ 0.624 $&$ 0.596 $&$ 0.952 $&$ 0.607 $&$ 0.603 $&$ 0.641 $&$ 0.609 $&$ 0.745 $&$ 0.610 $&$ 0.576 $&$ 0.831 $&$ 0.485 $&$\underline{ 0.918 }$&$\mathbf{ 0.977 }$\\ \hline \end{tabular} } \label{table-MNIST} \end{table*} \begin{table*}[tph] \caption{Clustering Performance on Semeion (\# samples: 1593, dimension: 256, \# clusters: 10)}\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c}\hline \textbf{Semeion} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.688 $&$ 0.700 $&$ 0.425 $&$ 0.692 $&$ 0.682 $&$ 0.739 $&$ 0.620 $&$ 0.638 $&$ 0.679 $&$ 0.450 $&$ 0.571 $&$ 0.594 $&$\underline{ 0.846 }$&$\mathbf{ 0.983 }$\\ NMI &$ 0.633 $&$ 0.634 $&$ 0.418 $&$ 0.631 $&$ 0.630 $&$ 0.656 $&$ 0.598 $&$ 0.601 $&$ 0.635 $&$ 0.386 $&$ 0.571 $&$ 0.569 $&$\underline{ 0.831 }$&$\mathbf{ 0.962 }$\\ Purity &$ 0.698 $&$ 0.707 $&$ 0.449 $&$ 0.703 $&$ 0.702 $&$ 0.739 $&$ 0.651 $&$ 0.645 $&$ 0.705 $&$ 0.460 $&$ 0.607 $&$ 0.634 $&$\underline{ 0.847 }$&$\mathbf{ 0.983 }$\\ ARI &$ 0.507 $&$ 0.510 $&$ 0.248 $&$ 0.507 $&$ 0.507 $&$ 0.540 $&$ 0.465 $&$ 0.480 $&$ 0.508 $&$ 0.290 $&$ 0.401 $&$ 0.418 $&$\underline{ 0.790 }$&$\mathbf{ 0.962 }$\\ F1-score &$ 0.561 $&$ 0.563 $&$ 0.360 $&$ 0.560 $&$ 0.559 $&$ 0.588 $&$ 0.525 $&$ 0.540 $&$ 0.560 $&$ 0.391 $&$ 0.477 $&$ 0.481 $&$\underline{ 0.813 }$&$\mathbf{ 0.966 }$\\ Precision &$ 0.513 $&$ 0.522 $&$ 0.246 $&$ 0.522 $&$ 0.523 $&$ 0.552 $&$ 0.466 $&$ 0.468 $&$ 0.527 $&$ 0.329 $&$ 0.381 $&$ 0.448 $&$\underline{ 0.748 }$&$\mathbf{ 0.966 }$\\ Recall &$ 0.620 $&$ 0.611 $&$ 0.712 $&$ 0.606 $&$ 0.601 $&$ 0.631 $&$ 0.603 $&$ 0.644 $&$ 0.599 $&$ 0.664 $&$ 0.660 $&$ 0.524 $&$\underline{ 0.893 }$&$\mathbf{ 0.966 }$\\ \hline \end{tabular} } \label{table-Semeion} \end{table*} \begin{table*}[tph] \caption{Clustering Performance on CalTech20 (\# samples: 2386, dimension: 30,000, \# clusters: 20)}\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c} \hline \textbf{CalTech20} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & 
ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.345 $&$ 0.343 $&$ {0.421} $&$ 0.345 $&$ 0.324 $&$ 0.423 $&$ 0.336 $&$ 0.450 $&$ 0.363 $&$ 0.340 $&$ \underline{0.495} $&$ 0.297 $&$\mathbf{ 0.726 }$&${ 0.418 }$\\ NMI &$ 0.404 $&$ 0.402 $&$ 0.269 $&$ 0.401 $&$ 0.396 $&$ 0.454 $&$ 0.406 $&$ 0.455 $&$ 0.428 $&$ 0.350 $&$ 0.452 $&$ 0.381 $&$\underline{ 0.620 }$&$\mathbf{ 0.621 }$\\ Purity &$ 0.641 $&$ 0.639 $&$ 0.520 $&$ 0.637 $&$ 0.642 $&$ 0.665 $&$ 0.646 $&$ 0.645 $&$ 0.660 $&$ 0.590 $&$ 0.664 $&$ 0.633 $&$\underline{ 0.730 }$&$\mathbf{ 0.788 }$\\ ARI &$ 0.261 $&$ 0.265 $&$ 0.184 $&$ 0.267 $&$ 0.222 $&$ {0.359} $&$ 0.224 $&$ 0.351 $&$ 0.258 $&$ 0.225 $&$ \underline{0.395} $&$ 0.202 $&$\mathbf{ 0.785 }$&${ 0.328 }$\\ F1-score &$ 0.334 $&$ 0.337 $&$ 0.363 $&$ 0.338 $&$ 0.291 $&$ 0.432 $&$ 0.298 $&$ 0.437 $&$ 0.332 $&$ 0.316 $&$ \underline{0.457} $&$ 0.269 $&$\mathbf{ 0.823 }$&${ 0.384 }$\\ Precision &$ 0.553 $&$ 0.561 $&$ 0.284 $&$ 0.563 $&$ 0.529 $&$ 0.612 $&$ 0.510 $&$ 0.538 $&$ 0.543 $&$ 0.479 $&$ 0.503 $&$ 0.525 $&$\mathbf{ 0.764 }$&$\underline{ 0.743 }$\\ Recall &$ 0.239 $&$ 0.241 $&$ \underline{0.562} $&$ 0.243 $&$ 0.201 $&$ 0.335 $&$ 0.211 $&$ 0.373 $&$ 0.239 $&$ 0.253 $&$ 0.449 $&$ 0.181 $&$\mathbf{ 0.898 }$&${ 0.259 }$\\ \hline \end{tabular} } \label{CalTech} \end{table*} \begin{table*}[tph] \caption{Clustering Performance on Texture (\# samples: 5500, dimension: 20, \# clusters: 11)}\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c} \hline \textbf{Texture} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.741 $&$ 0.714 $&$ 0.410 $&$ 0.732 $&$ 0.719 $&$ 0.793 $&$ 0.686 $&$ 0.675 $&$ 0.675 $&$ 0.416 $&$ 0.634 $&$ 0.614 $&$\underline{ 0.863 }$&$\mathbf{ 0.993 }$\\ NMI &$ 0.742 $&$ 0.721 $&$ 0.438 $&$ 0.731 $&$ 0.742 $&$ 0.782 $&$ 0.739 $&$ 0.703 $&$ 0.718 $&$ 0.419 $&$ 0.693 $&$ 0.638 $&$\underline{ 0.868 }$&$\mathbf{ 0.995 }$\\ Purity &$ 0.751 $&$ 0.729 $&$ 0.427 $&$ 0.746 $&$ 0.744 $&$ 0.798 $&$ 0.728 $&$ 0.685 $&$ 0.698 $&$ 0.441 $&$ 0.658 $&$ 0.647 $&$\underline{ 0.864 }$&$\mathbf{ 0.995 }$\\ ARI &$ 0.628 $&$ 0.600 $&$ 0.237 $&$ 0.619 $&$ 0.628 $&$ 0.696 $&$ 0.609 $&$ 0.569 $&$ 0.585 $&$ 0.298 $&$ 0.534 $&$ 0.486 $&$\underline{ 0.816 }$&$\mathbf{ 0.993 }$\\ F1-score &$ 0.664 $&$ 0.639 $&$ 0.350 $&$ 0.656 $&$ 0.663 $&$ 0.724 $&$ 0.648 $&$ 0.614 $&$ 0.626 $&$ 0.397 $&$ 0.590 $&$ 0.537 $&$\underline{ 0.834 }$&$\mathbf{ 0.993 }$\\ Precision &$ 0.624 $&$ 0.598 $&$ 0.228 $&$ 0.627 $&$ 0.631 $&$ 0.700 $&$ 0.592 $&$ 0.543 $&$ 0.582 $&$ 0.337 $&$0.465 $&$ 0.500 $&$\underline{ 0.780 }$&$\mathbf{ 0.991 }$\\ Recall &$ 0.712 $&$ 0.690 $&$ \underline{0.897} $&$ 0.689 $&$ 0.700 $&$ 0.752 $&$ 0.719 $&$ 0.710 $&$ 0.677 $&$ 0.752 $&$ 0.827 $&$ 0.586 $&${ 0.895 }$&$\mathbf{ 0.996 }$\\ \hline \end{tabular} } \label{Texture} \end{table*} \begin{table*}[tph] \caption{Clustering Performance on ISOLET (\# samples: 7791, dimension: 617, \# clusters: 26)}\smallskip \vspace{-0.5\baselineskip} \centering \resizebox{2.05\columnwidth}{!}{ \smallskip\begin{tabular}{l c c c c c c c c c c c c c c} \hline \textbf{ISOLET} & PTA-AL & PTA-CL & PTA-SL & PTGP & LWSC & LWEA & LWGP & ECPCS-HC & ECPCS-MC & DREC & SPCE & SEC & Ours-EA & Ours-SC \\ \hline ACC &$ 0.551 $&$ 0.540 $&$ 0.394 $&$ 0.539 $&$ 0.556 $&$ 0.578 $&$ 0.527 $&$ 0.451 $&$ \underline{0.581} $&$ 0.324 $&$ 0.574 $&$ 0.554 $&${ 0.575 }$&$\mathbf{ 0.675 }$\\ NMI &$ 0.722 
$&$ 0.718 $&$ 0.587 $&$ 0.715 $&$ 0.721 $&$ 0.743 $&$ 0.710 $&$ 0.667 $&$ 0.743 $&$ 0.413 $&$\underline{ 0.818} $&$ 0.719 $&${ 0.752 }$&$\mathbf{ 0.831 }$\\ Purity &$ 0.580 $&$ 0.572 $&$ 0.407 $&$ 0.567 $&$ 0.594 $&$ 0.605 $&$ 0.564 $&$ 0.467 $&$ \underline{0.619} $&$ 0.350 $&$ 0.301 $&$ 0.590 $&${ 0.583 }$&$\mathbf{ 0.707 }$\\ ARI &$ 0.507 $&$ 0.498 $&$ 0.316 $&$ 0.495 $&$ 0.485 $&$ 0.552 $&$ 0.472 $&$ 0.449 $&$ 0.516 $&$ 0.251 $&$ 0.367 $&$ 0.483 $&$\underline{ 0.563 }$&$\mathbf{ 0.639 }$\\ F1-score &$ 0.529 $&$ 0.520 $&$ 0.356 $&$ 0.517 $&$ 0.506 $&$ 0.571 $&$ 0.495 $&$ 0.477 $&$ 0.536 $&$ 0.303 $&$ 0.384 $&$ 0.505 $&$\underline{ 0.584 }$&$\mathbf{ 0.654 }$\\ Precision &$ 0.479 $&$ 0.467 $&$ 0.237 $&$ 0.462 $&$ 0.458 $&$ 0.511 $&$ 0.437 $&$ 0.352 $&$ {0.496} $&$ 0.253 $&$ \underline{0.584} $&$ 0.463 $&${ 0.481 }$&$\mathbf{ 0.625 }$\\ Recall &$ 0.593 $&$ 0.591 $&$ \mathbf{0.787} $&$ 0.590 $&$ 0.568 $&$ 0.648 $&$ 0.573 $&$ 0.748 $&$ 0.584 $&$ 0.690 $&$ 0.327 $&$ {0.555} $&$\underline{ 0.752 }$&${ 0.685 }$\\ \hline \end{tabular} } \label{ISOLET} \end{table*} \subsection{Analysis of the Clustering Performance} \begin{figure}[!t] \centering \centerline{\epsfig{figure=BPnmi_ours_cc.pdf,width=7.8cm}} \caption{The NMI of our methods against the average NMI of the base clusterings in the candidate base clustering pool. } \label{fig:BPnmi} \end{figure} \begin{figure}[!t] \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\epsfig{figure=LTA_EA_lambda_nM_ours_c.pdf,width=4.2cm}} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\epsfig{figure=LTA_SC_lambda_nM_ours_c.pdf,width=4.2cm}} \end{minipage} \caption{The NMI of our methods against different $\lambda$. } \label{fig:lambda} \end{figure} \begin{figure}[!t] \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\epsfig{figure=LTA_EA_M_errorbar_ours_c.pdf,width=4.2cm}} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \centerline{\epsfig{figure=LTA_SC_M_errorbar_ours_c.pdf,width=4.2cm}} \end{minipage} \caption{The NMI of our methods with different numbers of base clusterings, where the vertical error bar indicates the standard deviation over $20$ repetitions. } \label{fig:M} \end{figure} \begin{figure*}[!t] \begin{minipage}[b]{0.16\linewidth} \centering \centerline{\epsfig{figure=af_coherent.pdf,width=2.8cm}} \end{minipage} \begin{minipage}[b]{0.16\linewidth} \centering \centerline{\epsfig{figure=af_CA.pdf,width=2.8cm}} \end{minipage} \begin{minipage}[b]{0.16\linewidth} \centering \centerline{\epsfig{figure=af_LWCA.pdf,width=2.8cm}} \end{minipage} \begin{minipage}[b]{0.16\linewidth} \centering \centerline{\epsfig{figure=af_spce.pdf,width=2.8cm}} \end{minipage} \begin{minipage}[b]{0.162\linewidth} \centering \centerline{\epsfig{figure=af_proposed.pdf,width=2.8cm}} \end{minipage} \begin{minipage}[b]{0.162\linewidth} \centering \centerline{\epsfig{figure=af_ideal_cb.pdf,width=3.54cm}} \end{minipage} \caption{Visual comparison of the learned pairwise similarity matrices for different methods. All the matrices share the same color bar, and a brighter color indicates a larger value.} \label{fig:latent-visual} \end{figure*} Tables \ref{table-BA}-\ref{ISOLET} show the clustering performance of all the methods over $7$ data sets, where we have the following observations.
First, the proposed methods, including both Ours-EA and Ours-SC, almost always outperform all the compared methods under various metrics, which demonstrates the universality of the refined co-association matrix of the proposed model with respect to different clustering methods. Moreover, Ours-SC usually performs better than Ours-EA, which means the refined co-association matrix is more suitable for spectral clustering. Second, the improvements of the proposed methods are significant. For example, on BinAlpha, compared with the best method under comparison, Ours-SC increases the ACC from $0.454$ to $0.858$. On CalTech20, the highest ACC of the compared methods is $0.495$, while the ACC of Ours-EA is $0.726$. The improvements of the proposed methods in terms of the other metrics are also significant. Moreover, the performance of Ours-SC on MF, MNIST, Semeion, and Texture is extremely good, i.e., all the metrics are quite close to $1$. These phenomena suggest that the proposed model brings a breakthrough in clustering ensemble. Third, the highly competitive performance of the proposed model is achieved with a fixed hyper-parameter, demonstrating the practicality of the proposed model. Besides, the proposed model is also robust to different data sets, as both Ours-EA and Ours-SC consistently produce superior clustering performance on all the data sets. \subsubsection{Comparison Against Base Clusterings.} We compared the average NMI of our methods with that of all the base clusterings from the candidate clustering pool in Fig. \ref{fig:BPnmi}. It is clear that, on all the data sets, both Ours-SC and Ours-EA significantly improve the NMI over the base clusterings, and Ours-SC outperforms Ours-EA in the majority of cases. \subsubsection{Sensitivity to Hyper-parameter.} Fig. \ref{fig:lambda} shows the NMI of the proposed methods with different $\lambda$ on all the data sets, from which we can conclude that: first, a smaller $\lambda$ usually leads to better clustering performance for both Ours-EA and Ours-SC, which demonstrates the importance of removing the incorrect connections from the original co-association matrix; and second, for the majority of data sets, the highest NMI occurs at $\lambda=0.002$ for both Ours-EA and Ours-SC, which demonstrates the high robustness of the proposed model to different data sets. \subsubsection{Performance with Different Numbers of Base Clusterings.} Fig. \ref{fig:M} illustrates the influence of the number of base clusterings on the proposed model, where we have the following observations. First, as the number of base clusterings increases, the NMIs of both Ours-EA and Ours-SC generally increase, indicating that more base clusterings are beneficial to the clustering performance. Second, with more base clusterings, the standard deviations generally become smaller for all the data sets, which suggests that more base clusterings can enhance the stability of our methods. Third, for the majority of data sets, $20$ base clusterings are sufficient for our methods to achieve a high NMI. \subsubsection{Comparison of the Learned Pairwise Similarity Matrix.} Fig. \ref{fig:latent-visual} presents the coherent-link matrix, the traditional co-association matrix, the co-association matrices learned by LWCA \cite{huang2017locally}, SPCE \cite{zhou2020self} and the proposed model, and the ideal affinity matrix of BinAlpha, where all the matrices are normalized to $[0,1]$ and share the same color bar.
From Fig. \ref{fig:latent-visual}, we can observe that the coherent-link matrix is sparse, but the majority of its connections are correct; on the contrary, the co-association matrix is dense, but contains many incorrect connections. By exploiting the low-rankness of the $3$-D tensor formed by stacking the coherent-link matrix and the co-association matrix, the refined co-association matrix of the proposed model is quite close to the ideal one. Although there are some erroneous connections in it, almost all the relationships between two samples belonging to the same cluster have been correctly recovered, leading to high clustering performance. In contrast, the affinity matrices of both LWCA and SPCE contain many incorrect connections without enough correct ones, which explains why they produce inferior clustering performance compared with the proposed model. \section{Conclusion} This is the first work to introduce low-rank tensor approximation into clustering ensemble. Different from previous methods, the proposed model solves clustering ensemble from a global perspective, i.e., by exploiting the low-rankness of a $3$-D tensor formed by the coherent-link matrix and the co-association matrix, such that the valuable information of the coherent-link matrix can be effectively propagated to the co-association matrix. Extensive experiments have shown that $i$) the proposed model improves the current state-of-the-art performance of clustering ensemble to a new level; $ii$) the recommended value for the hyper-parameter of the proposed model is robust to different data sets; and $iii$) only a few base clusterings are required to generate high clustering performance. \bibliographystyle{IEEEtran}
\section{Introduction} The gap rigidity problem for locally Hermitian symmetric spaces of noncompact type was raised by Mok \cite{MR1918134} and Eyssidieux-Mok \cite{MR2127948}. Write a pair of bounded symmetric domains as $i:D\hookrightarrow\Omega$ where $i$ is a geodesic embedding. Write $\Omega=G_0/K$ where $G_0=Aut(\Omega)$ and $K$ is the isotropy subgroup with respect to a reference point $o$; there is a $K^{\mathbb{C}}$-invariant Zariski open subset $\mathcal{O}_o$ in $Gr(\dim(D),T_o(\Omega))$ such that $[T_o(D)]\in \mathcal{O}_o$. If $\Gamma$ is a torsion-free discrete subgroup of automorphisms, we say that gap rigidity holds in the Zariski topology if for any compact complex submanifold $S$ in the quotient $\Omega/\Gamma$ with $[T_x(S)]\in \mathcal{O}_o$ (by some lifting) at every point $x\in S$, $S$ must be totally geodesic. \par In \cite{MR1918134}, a typical example is given using the Poincar\'e-Lelong equation, where $\Omega$ is an irreducible bounded symmetric domain of tube type with rank $\geq 2$ and $D$ is a diagonal curve in a maximal polydisk; in this case $\mathcal{O}_o$ is given by the complement of the highest characteristic subvariety in $\mathbb{P}(T_o(\Omega))$, which is a hypersurface when $\Omega$ is of tube type. More precisely, they obtained \begin{theorem}\label{thm polydisk}\rm(\textbf{Mok, \cite[Theorem~1]{MR1918134}}) Suppose that $\Omega$ is an irreducible bounded symmetric domain of tube type with rank $\geq 2$ and $\Gamma$ is a torsion-free discrete subgroup of automorphisms. If $C$ is a compact smooth curve in $\Omega/\Gamma$ such that $C$ is tangent to some diagonal curves in maximal polydisks at every point (or equivalently, every tangent vector on $C$ is of maximal rank), then $C$ is totally geodesic in $\Omega/\Gamma$. \end{theorem} By using a similar method on higher dimensional submanifolds, Eyssidieux-Mok \cite{MR2127948} generalized the result to any ($H_3$)-holomorphic geodesic cycle, where ($H_3$) is defined in \cite[p.10]{MR2127948} as a condition stronger than total geodesy. The theorem can be stated as follows. \begin{theorem}\label{EyssidieuxMok}\rm(\textbf{Eyssidieux-Mok, \cite[Theorem~3]{MR2127948}}) Suppose that $\Omega=G_0/K$ is an irreducible bounded symmetric domain and $\Gamma$ is a torsion-free discrete subgroup of automorphisms, $D\subset\Omega$ is an ($H_3$)-embedding, and there is a $K^{\mathbb{C}}$-invariant hypersurface $\mathcal{Z}_o$ in $Gr(\dim(D),T_o(\Omega))$ such that $[T_o(D)]\notin \mathcal{Z}_o$. If $S\subset \Omega/\Gamma$ is a compact complex submanifold with $[T_x(S)]\notin \mathcal{Z}_x$ at every $x\in S$, then $S$ is an $(H_3)$-holomorphic geodesic cycle. \end{theorem} \par In fact, the gap rigidity problem was originally studied by Eyssidieux-Mok \cite{MR1369135} in the complex topology, i.e., in the differential-geometric sense, which is a weaker form of the gap phenomenon. More precisely, gap rigidity holds in the sense of complex topology for a pair of bounded symmetric domains $(\Omega, D;i)$ if a compact complex submanifold $S\subset \Omega/\Gamma$ modelled on $(\Omega, D;i)$ (roughly speaking, locally approximated by $i(D)$) with uniformly sufficiently small norm of the second fundamental form is necessarily totally geodesic. Simple examples, such as the diagonal embedding of the disk into a polydisk, were considered. Examples of gap rigidity concerning period domains from Hodge theory were studied in \cite{MR1468929}\cite{MR1714821}.
\par The dual analogue of the gap rigidity problem can be easily formulated: let $M=M_1\times \cdots \times M_k=G_1/P_1\times \cdots \times G_k/P_k (k\geq 1)$ be a product of irreducible compact Hermitian symmetric spaces, where $G_i=Aut(M_i)$ and $P_i$ are the maximal parabolic subgroups. Write $P=P_1\times \cdots \times P_k$, assume that $i:X\hookrightarrow M$ is an equivariant geodesic embedding, and suppose there is a $P$-invariant Zariski open subset $\mathcal{O}_o\subset Gr(\dim(X),T_o(M))$ such that $[T_o(X)]\in \mathcal{O}_o$. We say that gap rigidity holds for the pair of Hermitian symmetric spaces $(M,X,i)$ (in the Zariski topology) if any compact complex submanifold $S$ whose tangent space $[T_x(S)]$ lifts to an element of $\mathcal{O}_o$ at every point $x\in S$ must be a standard model of $X$, where the standard models of $X$ are defined as $g\circ i(X) \subset M$ for all $g\in G$. \par In this article, we are going to prove that gap rigidity holds for $(M,X,i)$ where $M$ is an irreducible compact Hermitian symmetric space of tube type and $X$ is a diagonal curve in a maximal polysphere, giving a dual analogue of Theorem 1 in \cite{MR1918134}. We have \begin{theorem}\label{diagonal curve} Let $M$ be an irreducible compact Hermitian symmetric space of tube type with $rank(M)\geq 2$ and let $C\subset M$ be a compact curve with generic tangent vectors at every point, i.e., $C$ is tangent to some standard models of diagonal curves in maximal polyspheres at every point. Then $C$ itself must be a standard model. \end{theorem} The theorem is not true in general when $M$ is a non-tube type irreducible compact Hermitian symmetric space. For instance, for the Grassmannian $G(2,3)$ (denoting all 2-planes in a 5-dimensional complex vector space), there is a standard embedding from $\mathbb{P}^1\times \mathbb{P}^2$ to $G(2,3)$; taking the graph of the Veronese embedding from $\mathbb{P}^1$ to $\mathbb{P}^2$ gives a compact curve with generic (rank two) tangent vectors which is not a standard model of the diagonal curve. \par Motivated by the proof for curves, we consider a weaker gap rigidity problem for higher dimensional submanifolds satisfying the so-called $(H_2)$-condition (stronger than total geodesy), and we obtain \begin{mainthm}\label{general} Suppose that $M=M_1\times \cdots \times M_k$ is a (possibly reducible) compact Hermitian symmetric space other than a projective space, where $M_i=G_i/P_i(1\leq i\leq k)$ are all irreducible, and $X\subset M$ is a totally geodesic equivariant $(H_2)$-subspace with respect to some choice of K\"ahler-Einstein metric on $M$, where $\dim(M)=n,\dim(X)=m$. Suppose there is a $P$-invariant hypersurface $\mathcal{Z}_o$ in $Gr(m,T_o(M))$ such that $P.[T_o(X)]\subset \mathcal{O}_o:=Gr(m,T_o(M))-\mathcal{Z}_o$, where $P=P_1\times \cdots \times P_k$. If $S\subset M$ is a compact complex submanifold with $[T_x(S)] \in P.[T_o(X)]$ (by some lifting) at every $x\in S$, and moreover $S$ is tangent to some standard model of $X$ to the second order at every point, then $S$ itself is a standard model. \end{mainthm} \begin{remark}\label{reducible} The $(H_2)$-condition is required to satisfy a Lie bracket generating condition. We know that if $X$ is $H_2$ in $M=M_1\times \cdots \times M_k$, then $X$ is $H_2$ in each factor $M_i (1\leq i\leq k)$. For convenience we will assume that $M=G/P$ is irreducible in the following discussion in this article; the case when $M$ is reducible follows by the same procedure, checked step by step.
A simple example satisfying the (weaker) gap rigidity for a reducible ambient space is the diagonal embedding of a Riemann sphere $X=\mathbb{P}^1$ in a polysphere $M=(\mathbb{P}^1)^r$, where the $P$-action on $T_o(M)$ is actually the $\mathbb{C}^*$-action on tangent vectors of each sphere, and the $P$-invariant hypersurface in the projectivized tangent space can be chosen as a hyperplane. \end{remark} \begin{remark} This theorem is weaker than a general gap rigidity theorem, as there is a restriction on the tangent spaces of $S$: they are required to be tangent to some standard models to the second order, not just contained in the Zariski open set $\mathcal{O}_o \subset Gr(m,T_o(M))$. If $P.[T_o(X)]$ coincides with $\mathcal{O}_o$ and the second order condition is automatically satisfied, as in the case where we consider a diagonal curve in a tube type ambient space, then it is the same as general gap rigidity. \end{remark} \begin{remark} We are still curious about whether the second order tangency can be removed from the condition of the Main Theorem. When $X$ is a curve, we will find by our proof that the second order tangency is an empty condition, but for the general case we still do not know. Since we consider the submanifold $S$ globally, the second order condition may be implicitly contained in the global condition. In fact, we will find that when $M$ is a hyperquadric and $X$ is a smooth linear section subquadric, the second order tangency is actually redundant: in this case, if a compact submanifold $S$ is tangent to some standard models of the smooth linear section subquadric only to the first order at every point, then $S$ is a submanifold with splitting tangent sequence in $M$, where the direct sum $T(M)=T(S)\oplus N_{o,S|M}$ is given by the annihilator with respect to the holomorphic conformal structure of the hyperquadric. Therefore, by the result of Jahnke (cf. \cite[Theorem~4.7]{MR2190340}) we know that $S$ itself must be a smooth linear section subquadric. From this example we may conjecture that the second order tangency is not necessary, but we have no further evidence at present. \end{remark} Our proof is to construct a holomorphic map from the compact submanifold $S$ to the moduli space of standard models and show that the moduli space is affine algebraic. For each point $x$ on $S$, there is a family of standard models tangent to $S$ to the first order; we only need to show that there is a unique standard model tangent to $S$ to the second order, and then the holomorphic map can be constructed. The affineness comes from the fact that the isotropy group of the moduli space is reductive, by a double fibration construction and a dimension-counting argument. The $(H_2)$-condition is needed in the dimension-counting argument. \section{Notations and preliminaries} \subsection{Hermitian symmetric spaces} We write an irreducible compact Hermitian symmetric space as $M=G/P=G_c/K$, where $G$ is a connected complex simple Lie group, $P$ is a maximal parabolic subgroup, $G_c$ is the isometry group of $M$, which is a compact real form of $G$, and $K$ is a maximal compact subgroup of $G_c$. Let $\mathfrak{g}=Lie(G)$; then the Harish-Chandra decomposition can be written as $\mathfrak{g}=\mathfrak{m}^++\mathfrak{l}^\mathbb{C}+\mathfrak{m}^-$ where $\mathfrak{l}=Lie(K)$ and $\mathfrak{l}^\mathbb{C}+\mathfrak{m}^-=Lie(P)$. The holomorphic tangent space at a reference point $o$ is $T_{o}(M)\cong\mathfrak{m}^+$.
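As a standard illustration of this setup (included here for the reader's convenience), consider the Grassmannian $M=G(p,q)$ of $p$-planes in $\mathbb{C}^{p+q}$, with $\mathfrak{g}=\mathfrak{sl}(p+q,\mathbb{C})$ in block form
\[
\mathfrak{m}^+=\left\{\begin{pmatrix}0&B\\0&0\end{pmatrix}\right\},\qquad \mathfrak{l}^{\mathbb{C}}=\left\{\begin{pmatrix}A&0\\0&D\end{pmatrix}:{\rm tr}(A)+{\rm tr}(D)=0\right\},\qquad \mathfrak{m}^-=\left\{\begin{pmatrix}0&0\\C&0\end{pmatrix}\right\},
\]
so that $T_o(M)\cong\mathfrak{m}^+$ is the space of $p\times q$ matrices $B$. The rank of a tangent vector in the sense used below is the matrix rank of $B$, and $rank(M)={\rm min}(p,q)$; in the tube type case $G(n,n)$, the generic (rank $n$) vectors are exactly the invertible matrices, whose complement $\{\det B=0\}$ is a hypersurface in $\mathbb{P}T_o(M)$.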
Consider the isotropy action of $P$ on the projectivized tangent space $\mathbb{P}T_o(M)$. Since we have the Levi decomposition $P=K^{\mathbb{C}}M^-$, where $M^-$ is the unipotent radical and acts trivially on $T_o(M)$, the $P$-orbits on $\mathbb{P}T_o(M)$ are the same as the $K^{\mathbb{C}}$-orbits, and similarly on $Gr(p,T_o(M))$, the Grassmannian of $p$-planes in the tangent space $T_o(M)$, for $p<\dim(M)$. Let $r=rank(M)\geq 2$; by the Polysphere Theorem and the Restricted Root Theorem we know that there are precisely $r$ $K^{\mathbb{C}}$-orbits on $\mathbb{P}T_o(M)$, according to the rank of the vector $\eta\in T_o(M)$. \par More precisely, we fix a Cartan subalgebra $\mathfrak{h}^{\mathbb{C}}$ of $\mathfrak{l}^{\mathbb{C}}$ and a root system $\Delta\subset( \mathfrak{h}^{\mathbb{C}})^*$ for $\mathfrak{g}$. Let $z$ be a central element of $\mathfrak{l}$ such that $ad(z)$ gives the almost complex structure of $M$, and let $iy\in i\mathfrak{h}$ be in the interior of a Weyl chamber whose closure contains $iz$; then we can define a positive root system $\Delta^+=\{\alpha\in \Delta:\alpha(iy)>0\}$ and we denote the set of negative roots by $\Delta^-$ correspondingly. Then we can write $\mathfrak{m}^+=\sum_{\alpha \in \Delta^+_{M}}\mathfrak{g}_{\alpha}$, $\mathfrak{m}^-=\sum_{\alpha \in \Delta^-_{M}}\mathfrak{g}_{\alpha}$, where $\mathfrak{g}_{\alpha}$ is the corresponding root space. The roots in the set $\Delta_{M}^+$ ($\Delta_{M}^-$) are called noncompact positive (negative) roots, and the other roots in $\Delta^+$ ($\Delta^-$) are called compact positive (negative) roots, the set of which we denote by $\Delta^{+}_K$ ($\Delta^-_K$). For each root $\alpha$, $e_{\alpha}$ denotes the normalized root vector of $\alpha$ such that $[e_{\alpha},e_{-\alpha}]=H_{\alpha}$, where $H_{\alpha}$ is the dual element of $\alpha$ in $\mathfrak{h}^{\mathbb{C}}$ with respect to the Killing form of $\mathfrak{g}$, and let $h_\alpha$ be the corresponding coroot. There is a maximal set of strongly orthogonal positive noncompact roots $\Pi=\{\alpha_1,\ldots,\alpha_r\}$, and the corresponding root space $\mathfrak{a}^{+}=\mathbb{C}\{e_{\alpha_1},\ldots,e_{\alpha_r}\}$ is a maximal abelian subspace of $\mathfrak{m}^+$. Representatives of the $K^{\mathbb{C}}$-orbits are then given by $\sum_{i=1}^pe_{\alpha_i}$ for $1\leq p\leq r$. \par We will say that a vector $\eta\in T_o(M)$ is generic if $\eta$ is of rank $r$. In our case, each tangent vector of a diagonal curve is generic. By a case-by-case study, it was observed in \cite{MR1918134} that the $K^{\mathbb{C}}$-orbit of a maximal rank vector is equal to the complement of a hypersurface in $\mathbb{P}T_o(M)$ if and only if $M$ is dual to an irreducible bounded symmetric domain of \textbf{tube type}, i.e., they are \begin{enumerate} \item $G(n,n)$ with $n\geq 2$; \item $G^{II}(n,n)$ with $n$ even and $n\geq 4$; \item $G^{III}(n,n)$ with $n\geq 2$; \item $Q^n$ with $n\geq 3$; \item $E_7$-type. \end{enumerate} We will call them irreducible compact Hermitian symmetric spaces of tube type. The Restricted Root Theorem (cf. \cite{MR0161943}) gives a description of the roots restricted to the real vector space $\mathfrak{h}^-=\sum_{\alpha\in \Pi}\sqrt{-1}H_{\alpha}\mathbb{R}$, corresponding to tube type and non-tube type Hermitian symmetric spaces.
\begin{theorem}[The Restricted Root Theorem] Let $\rho$ denote the restriction of roots from $\mathfrak{h}^{\mathbb{C}}$ to $\mathfrak{h}^-$, and identify the elements of $\Delta$ with their $\rho$-images. Then either $\rho(\Delta)\cup\{0\}=\{\pm\frac{1}{2}\alpha_i\pm\frac{1}{2}\alpha_j:1\leq i,j\leq r\}$ or $\rho(\Delta)\cup\{0\}=\{\pm\frac{1}{2}\alpha_i\pm\frac{1}{2}\alpha_j, \pm\frac{1}{2}\alpha_i:1\leq i,j\leq r\}$. Accordingly, $\rho(\Delta^+_M)=\{\frac{1}{2}\alpha_i+\frac{1}{2}\alpha_j:1\leq i,j\leq r\}$ or $\rho(\Delta^+_M)=\{\frac{1}{2}\alpha_i+\frac{1}{2}\alpha_j, \frac{1}{2}\alpha_i:1\leq i,j\leq r\}$. Moreover, all $\alpha_i$ have the same length, and the subgroup of the Weyl group of $G$ preserving the compact roots and fixing $\Pi$ as a set induces all signed permutations $\alpha_i\rightarrow \pm\alpha_j$ of $\Pi$. \end{theorem} From the perspective of the Restricted Root Theorem, $M$ is of tube type if and only if the first case in the theorem occurs. \subsection{$(H_2)$-embeddings} The reader may refer to \cite{MR0196134} for more details. Suppose we have a pair of semisimple Lie algebras of Hermitian type $(\mathfrak{g}_c, H_0)$, $(\mathfrak{g}_c', H'_0)$ corresponding to two Hermitian symmetric spaces of compact type $G_c/K$, $G_c'/K'$ respectively, where $H_0$ and $H'_0$ are the central elements in the associated maximal compact subgroups $K,K'$ determining the complex structures of $G_c/K$ and $G_c'/K'$ respectively. We call a Lie algebra homomorphism $\rho :\mathfrak{g} \longrightarrow \mathfrak{g}'$ satisfying $\rho\circ ad(H_0)=ad(H'_0)\circ\rho$ an $(H_1)$-homomorphism; injective $(H_1)$-homomorphisms correspond one-to-one to totally geodesic complex submanifolds, which are called $(H_1)$-subspaces. If furthermore $\rho(H_0)=H'_0$, the Lie algebra homomorphism is called an $(H_2)$-homomorphism, and injective $(H_2)$-homomorphisms give the so-called $(H_2)$-subspaces. There are simple examples which are $(H_1)$ but not $(H_2)$, such as $\mathbb{P}^1\times \mathbb{P}^2 \subset G(2,3)$ via the standard equivariant embedding. The full classification of $(H_2)$-subspaces in Hermitian symmetric spaces is given in \cite{MR0196134} and \cite{MR0214807}. In this article we will refer to the table given in \cite[pp.27-28]{MR2127948} for maximal $(H_2)$-subspaces in irreducible Hermitian symmetric spaces.
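To illustrate the distinction (an elementary observation recorded for the reader's convenience): for a diagonal embedding $X\hookrightarrow X\times X$, the associated homomorphism $\rho(\xi)=(\xi,\xi)$ satisfies $\rho(H_0)=(H_0,H_0)=H'_0$, since the complex structure of the product is determined by the pair of central elements; hence diagonal embeddings, such as the diagonal $\mathbb{P}^1$ in a polysphere, are $(H_2)$. By contrast, in the table below the product embedding $G(r,s)\times G(p-r,q-s)\subset G(p,q)$ appears as a maximal $(H_2)$-subspace only under the condition $\frac{r}{s}=\frac{p}{q}$; this is consistent with $\mathbb{P}^1\times \mathbb{P}^2=G(1,1)\times G(1,2)\subset G(2,3)$ being only $(H_1)$, as $\frac{1}{1}\neq \frac{2}{3}$.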
For readers' convenience, we give the table when $M$ is of classical type (with some corrections): \begin{longtable}{|p{4em}|p{10em}|p{5em}|p{10em}|} \caption{ Maximal $(H_2)$-subspaces $X$ of a compact irreducible Hermitian symmetric space of classical type $M$} \\ \hline $M$ & $X\subset M$ & maximal & Additional conditions \\ \hline $G(p,q)$ & $G(r,s)\times G(p-r,q-s)$ & * & $\frac{r}{s}=\frac{p}{q}$ \\ \hline & $G^{II}(n,n)$ & * & $p=q=n$ \\ \hline & $G^{III}(n,n)$ & * & $p=q=n$ \\ \hline & $\mathbb{P}^m$ & $m\equiv 0[2]$ & $p=\binom{m}{r-1}, q=\binom{m}{r}, r\in \mathbb{N}$ \\ \hline & $Q^{2\ell}$ & $\ell \equiv 0[2]$ & $p=q=2^{\ell-1}, \ell\geq 3$ \\ \hline $G^{II}(n,n)$ &$G(r,r)$ & * &$n=2r$ \\ \hline & $G^{II}(r,r)\times G^{II}(n-r,n-r)$ & * & $n>r$ \\ \hline & $\mathbb{P}^m$ & * & $n=\binom{m}{\frac{m+1}{2}},m\equiv 3[4]$ \\ \hline & $Q^{2\ell}$ & * & $n=2^{\ell-1}, \ell\equiv 3[4], \ell\geq 3$ \\ \hline & $Q^{2\ell-1}$ & * & $n=2^{\ell-1}, \ell\equiv 0, 3[4], \ell\geq 3$ \\ \hline $G^{III}(n,n)$ &$G(r,r)$ & * &$n=2r$\\ \hline & $G^{III}(r,r)\times G^{III}(n-r,n-r)$ & * & $n>r$ \\ \hline & $\mathbb{P}^m$ & * & $n=\binom{m}{\frac{m+1}{2}},m\equiv 1[4]$ \\ \hline & $Q^{2\ell}$ & * & $n=2^{\ell-1}, \ell\equiv 1[4], \ell\geq 3$ \\ \hline & $Q^{2\ell-1}$ & * & $n=2^{\ell-1}, \ell\equiv 1, 2[4], \ell\geq 3$ \\ \hline $Q^{2\ell}$ & $Q^{2\ell-1}$ & *& $\ell \geq 3$\\ \hline $Q^{2\ell-1}$ & $Q^{2\ell-2}$ & *& $\ell \geq 3$ \\ \hline \end{longtable} \section{Gap rigidity for diagonal curves} \par Let $M=G/P=G_c/K$ be an irreducible compact Hermitian symmetric space of tube type with rank $r\geq 2$, and let $C\subset M$ be a compact curve with generic (rank $r$) tangent vectors at every point. Our proof is to construct a holomorphic map from the compact curve $C$ to the moduli space of standard models of diagonal curves in maximal polyspheres and to show that the moduli space is affine algebraic; then the map must be constant and $C$ itself must be a standard model. For each point $x$ on $C$, there is a family of standard models tangent to $C$ to the first order; we only need to show that there is a unique standard model tangent to $C$ to the second order, and then the holomorphic map can be constructed. The affineness comes from the fact that the isotropy group of the moduli space is reductive, which is proved by a double fibration construction and a dimension counting argument. \subsection{Construction of a holomorphic map from the curve to the moduli space of standard models.} We will give a proof of the existence and uniqueness of a standard model which is tangent to $C$ at a point $o$ to the second order. Let $M^-=\exp(\mathfrak{m}^-)$; the $M^-$-action transforms the standard models while fixing the tangent space at $o$. For $\eta\in T_{o}(M)\cong \mathfrak{m}^+$, consider the curve $\eta(t)=t\eta\subset \mathfrak{m}^+$. Fix $\xi\in \mathfrak{m}^-$; under the $M^-$-action $\exp(\xi)$, the second order term of the curve is given by $ \exp\left(\frac{t^2}{2}[\eta,[\xi,\eta]]\right)\cdot o$ (cf. \cite[Lemma 4.3]{MR1198602}). Also, the second fundamental form of the standard model after the $M^-$-action is given by \[\sigma(\eta,\eta)=[\eta,[\xi,\eta]] \mod \mathbb{C}\eta.\] Now we fix a root system and a maximal set of strongly orthogonal positive noncompact roots $\Pi=\{\alpha_1,\dots,\alpha_r\}$; the normalized root vectors are denoted by $e_{\alpha_i}$.
Consider the case $\eta=\sum_{i=1}^re_{\alpha_i}$. We prove the following lemma. \begin{lemma}{\label{bijection}} The linear map $H:\mathfrak{m}^- \rightarrow \mathfrak{m}^+$ given by \[H(v)=[\sum_{i=1}^re_{\alpha_i},[v,\sum_{i=1}^re_{\alpha_i}]]\] is a bijection. \end{lemma} \begin{proof} We check this case by case. $M$ is one of: (1) $G(n,n)$ with $n\geq 2$; (2) $G^{II}(n,n)$ with $n$ even and $n\geq 4$; (3) $G^{III}(n,n)$ with $n\geq 2$; (4) $Q^n$ with $n\geq 3$; or (5) the $E_7$-type irreducible compact Hermitian symmetric space. \par For the cases (1), (2) and (3), it is straightforward by matrix computation. Write $\sum_{i=1}^re_{\alpha_i}$ as $\begin{bmatrix} 0 & D \\ 0 & 0 \end{bmatrix}$ and $v$ as $\begin{bmatrix} 0 & 0 \\ A & 0 \end{bmatrix}$. Then $H(v)$ can be written as $\begin{bmatrix} 0 & 2DAD \\ 0 & 0 \end{bmatrix}$. For $G(n,n)$ or $G^{III}(n,n)$ with $n\geq 2$, we have $r=n$ and $D=I_r$, where $I_r$ is the $r\times r$ identity matrix. For $G^{II}(n,n)$ with $n$ even and $n\geq 4$, we have $r=\frac{n}{2}$ and $ D=J_r=diag(J_1,\dots,J_1) $ where $J_1=\begin{bmatrix} 0&1\\ -1&0 \end{bmatrix}$. Hence in these cases $H$ is clearly a bijection. \par As the hyperquadric case has already been proven in \cite[Proposition 2.3]{Zhang2015}, it remains to check the $E_7$ case, where the strongly orthogonal roots are written as $\Pi=\{\alpha_1,\alpha_2, \alpha_3\}$. It suffices to show that $H$ is a surjection, i.e. for any root vector $e_{\gamma}\in \mathfrak{m}^+$, there exists $v\in \mathfrak{m}^-$ such that $[e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3},[v,e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3}]]=ce_{\gamma}$ for some nonzero constant $c$. If $\gamma=\alpha_i\in \Pi$, we can just choose $v=e_{-\alpha_i}$. If $\gamma\notin \Pi$, for simplicity we introduce \begin{definition} For any positive noncompact root $\gamma$ we will call a triple of positive noncompact roots $(\alpha_i,\alpha_j,\beta)$ a \textbf{compatible triple for} $\mathbf{\gamma}$ if $\beta=\alpha_i+\alpha_j-\gamma$ and $\alpha_i-\gamma, \alpha_j-\gamma$ are roots, where $\alpha_i,\alpha_j\in \Pi$. \end{definition} Then it suffices to show that for any positive noncompact root $\gamma\notin \Pi$ there exists a compatible triple $(\alpha_i, \alpha_j, \beta)$ for $\gamma$, and that there are no $\alpha_{i'},\alpha_{j'}\in \Pi$ such that $(\alpha_{i'},\alpha_{j'},\beta)$ is a compatible triple for another positive noncompact root $\gamma'\neq \gamma$. Then $v$ can be chosen as $e_{-(\alpha_i+\alpha_j-\gamma)}$. This property is straightforward to check when $M$ is classical. \par We now check the root system of $E_7$. Let $x_i\ (1\leq i\leq 7)$ be the standard basis of $\mathbb{R}^7$. The positive roots are $x_i-x_j\ (1\leq i<j\leq 7)$, $x_i+x_j+x_k\ (1\leq i<j<k\leq 7)$ and $d-x_i\ (1\leq i\leq 7)$, where $d=\sum_{i=1}^7x_i$. The positive noncompact roots are $x_1-x_i\ (2\leq i\leq 7)$, $x_1+x_i+x_j\ (2\leq i<j\leq 7)$ and $d-x_i\ (2\leq i\leq 7)$. The maximal strongly orthogonal roots can be chosen as $\alpha_1=x_1-x_2, \alpha_2=x_1+x_2+x_3, \alpha_3=d-x_3$. For $\gamma=x_1-x_3$, we have the triple $(\alpha_1,\alpha_3,d-x_2)$; for $\gamma=x_1-x_i$ with $4\leq i\leq 7$, the triple $(\alpha_1,\alpha_2,x_1+x_3+x_i)$; for $\gamma=x_1+x_2+x_j$ with $4\leq j\leq 7$, the triple $(\alpha_2,\alpha_3,d-x_j)$; for $\gamma=x_1+x_3+x_j$ with $4\leq j\leq 7$, the triple $(\alpha_1,\alpha_2,x_1-x_j)$; for $\gamma=x_1+x_i+x_j$ with $4\leq i<j\leq 7$, the triple $(\alpha_1,\alpha_3,x_1+x_k+x_\ell)$ with $\{k,\ell\} =\{4,5,6,7\}-\{i,j\}$.
For $\gamma=d-x_2$, we have the triple $(\alpha_1,\alpha_3,x_1-x_3)$; for $\gamma=d-x_i$ with $4\leq i\leq 7$, the triple $(\alpha_2,\alpha_3,x_1+x_2+x_i)$. This list satisfies the desired property; hence $H$ is a bijection. \end{proof} \begin{remark} We observe that in our situation only the uniqueness of the compatible triple for a noncompact positive root $\gamma \notin \Pi$ can be obtained directly from the Restricted Root Theorem. \end{remark} \par From the lemma we easily obtain the following. \begin{proposition}\label{unique} If $C$ is a curve with generic tangent vectors in $M$, then for every point $x\in C$, $C$ is tangent to a unique standard model of diagonal curves in maximal polyspheres to the second order at $x$. \end{proposition} \begin{remark} From the above lemma we can also compute the splitting type of $T(M)$ over the diagonal curve $C$. The coroot corresponding to $\alpha_i$ is denoted by $h_{\alpha_i}$. The tangent vector of $C$ at $o$ can be written as $\sum_{i=1}^re_{\alpha_i}$. By Grothendieck's theorem we know that $T(M)|_{C}=\bigoplus_{\gamma}\mathcal{O}(a_\gamma)$, where $a_{\gamma}=\sum_{i=1}^r\gamma(h_{\alpha_i})$ for $\gamma$ a positive noncompact root. \par It is easy to see that when $\gamma=\alpha_i$ for some $1\leq i\leq r$, we have $a_{\gamma}=2$. For the other roots we can use the proof of the above lemma: it tells us that for a tube type irreducible compact Hermitian symmetric space, the compatible triple defined above exists for $\gamma$ and is unique. Hence there exist exactly two roots $\alpha_k,\alpha_\ell\in \Pi$ such that $\gamma(h_{\alpha_k})=\gamma(h_{\alpha_\ell})=1$, while $\gamma(h_{\alpha_i})=0$ for the other $\alpha_i\in \Pi$. Hence the splitting type must be $T(M)|_{C}=\mathcal{O}(2)^n$. The local uniqueness of the standard model tangent to $C$ to the second order at one point is then guaranteed by deformation theory. \end{remark} \subsection{Affineness of the moduli space} \par From the previous construction, we obtain a holomorphic map from $C$ to the moduli space of standard models (as tangency to the second order is a holomorphic condition). We now prove that the moduli space $\mathcal{M}$ is affine, and hence that the holomorphic map is actually constant. We know $G$ acts transitively on $\mathcal{M}$. Denoting the stabilizer of the diagonal curve by $H$, we need the following theorem (cf. \cite{MR0437549} and \cite[p.162]{MR1334091}). \begin{theorem} Let $H$ be a closed subgroup of the complex connected reductive affine algebraic group $G$. Then $G/H$ is an affine variety if and only if it is Stein, if and only if $H$ is a reductive group. \end{theorem} \par Write $v=\sum_{i=1}^re_{\alpha_i}$. We have the following lemma on the $K$-orbit $K.[v]\subset \mathbb{P}T_o(M)$. \begin{lemma} When $M$ is an irreducible compact Hermitian symmetric space of tube type, the $K$-orbit $\mathcal{S}=K.[v]$ in the projectivized tangent space $\mathbb{P}T_o(M)$ is an $(n-1)$-dimensional totally real submanifold, where $n=\dim(M)$. \end{lemma} \begin{proof} We use $\mathfrak{l}$ to denote the Lie algebra of $K$.
Then the tangent space of $\mathcal{S}$ is identified with \[T^{\mathbb{R}}_{[v]}(\mathcal{S})=([\mathfrak{l},v]+\mathbb{C}v)/\mathbb{C}v.\] On the other hand, we write \[\mathfrak{l}=\sum_{\phi\in \Delta}\sqrt{-1}h_{\phi}\mathbb{R}+\sum_{\phi\in \Delta_K^+}\sqrt{-1}(e_{\phi}+e_{-\phi})\mathbb{R}+\sum_{\phi\in \Delta_K^+}-(e_{\phi}-e_{-\phi})\mathbb{R},\] where $\Delta$ denotes the set of all roots and $ \Delta_K^+$ denotes the set of all positive compact roots. \par For the first part, we have \[[\sum_{\phi\in \Delta}\sqrt{-1}h_{\phi}\mathbb{R},v]=\sum_{i=1}^r\sqrt{-1}e_{\alpha_i}\mathbb{R}.\] For the second and third parts, we use the following special property of the root system of a Hermitian symmetric space of tube type, which can be obtained from the proof of Lemma \ref{bijection} together with the Restricted Root Theorem. \par ($\star$) For any positive compact root $\phi$, if $\phi+\alpha_i$ is a positive noncompact root for some $\alpha_i\in \Pi$, then there exists a unique pair $\alpha_i,\alpha_j\in \Pi$ with $j\neq i$ such that $\alpha_i+\phi$ and $\alpha_j-\phi$ are noncompact positive roots. Moreover, $(\alpha_i,\alpha_j, \alpha_i+\phi)$ is the unique compatible triple for $\alpha_j-\phi$. \par We know that for any roots $\alpha,\beta$ whose sum is also a root, $[e_{\alpha},e_{\beta}]=N_{\alpha,\beta}e_{\alpha+\beta}$ for some nonzero real number $N_{\alpha,\beta}$ (cf. Theorem 5.5 in \cite[Chapter III]{MR1834454}). Let $\beta_1=\alpha_i+\phi$ and $\beta_2=\alpha_j-\phi$. \par If $\beta_1\neq \beta_2$, then $[\sqrt{-1}(e_{\phi}+e_{-\phi}),v]=\sqrt{-1}( N_{\phi,\alpha_i}e_{\beta_1}+N_{-\phi,\alpha_j}e_{\beta_2})$ and $[e_{\phi}-e_{-\phi},v]= N_{\phi,\alpha_i}e_{\beta_1}-N_{-\phi,\alpha_j}e_{\beta_2}$; the set of such compact roots $\phi$ is denoted by $\Delta_{K,1}$. \par If $\beta_1=\beta_2$, then $[\sqrt{-1}(e_{\phi}+e_{-\phi}),v]=\sqrt{-1}( N_{\phi,\alpha_i}+N_{-\phi,\alpha_j})e_{\beta_1}$ and $[e_{\phi}-e_{-\phi},v]= (N_{\phi,\alpha_i}-N_{-\phi,\alpha_j})e_{\beta_1}$. Note that $\{\alpha_i,\beta_1,\alpha_j\}$ is a maximal $\phi$-chain; also, from Theorem 5.5 in \cite[Chapter III]{MR1834454} we have $N_{\phi,\alpha_i}^2=\phi(H_{\phi})=N_{-\phi,\alpha_j}^2$, hence either $[\sqrt{-1}(e_{\phi}+e_{-\phi}),v]=0$ or $[e_{\phi}-e_{-\phi},v]=0$. The set of compact roots $\phi$ with $[e_{\phi}-e_{-\phi},v]=0$ is denoted by $\Delta_{K,2}$, and the set of compact roots $\phi$ with $[\sqrt{-1}(e_{\phi}+e_{-\phi}),v]=0$ is denoted by $\Delta_{K,3}$. \par Since all $N_{\phi,\alpha_i}$ are real, from the above we have \begin{align*} [\mathfrak{l},v]&=\sum_{i=1}^r\sqrt{-1}e_{\alpha_i}\mathbb{R}+\sum_{\phi\in \Delta_{K,1}}\sqrt{-1}( N_{\phi,\alpha_{i_\phi}}e_{\alpha_{i_\phi}+\phi}+N_{-\phi,\alpha_{j_\phi}}e_{\alpha_{j_\phi}-\phi})\mathbb{R}+ \\&\sum_{\phi\in \Delta_{K,1}}( N_{\phi,\alpha_{i_\phi}}e_{\alpha_{i_\phi}+\phi}-N_{-\phi,\alpha_{j_\phi}}e_{\alpha_{j_\phi}-\phi})\mathbb{R}+\sum_{\phi\in \Delta_{K,2}}\sqrt{-1}e_{\alpha_{i_\phi}+\phi}\mathbb{R}+\sum_{\phi\in\Delta_{K,3}}e_{\alpha_{i_\phi}+\phi}\mathbb{R}, \end{align*} where $\alpha_{i_\phi},\alpha_{j_\phi}$ are uniquely determined by the choice of $\phi$. \par Since the complex structure $J$ on $\mathbb{P}T_o(M)$ acts on $\mathfrak{m}^+$ by $\sqrt{-1}$, and the compatible triple exists uniquely for each positive noncompact root, we easily obtain that $[\mathfrak{l},v]^{\mathbb{C}}=\mathfrak{m}^+$ and that $[\mathfrak{l},v]$ is totally real, i.e. $J[\mathfrak{l},v]\cap [\mathfrak{l},v]=\{0\}$.
Therefore $\mathcal{S}$ is an $(n-1)$-dimensional totally real submanifold in $\mathbb{P}T_o(M)$. \end{proof} \begin{remark} This lemma is actually equivalent to the fact that the Bergman-\v{S}ilov boundary of a tube type irreducible bounded symmetric domain is totally real and has half the real dimension of the whole space. Although the latter is a well-known result in \cite{MR0174787}, we have given a self-contained proof here. \end{remark} \par Now we want to prove that $H$ is a reductive Lie group. We need to show that $H$ has a compact real form and that $H$ has finitely many connected components. The latter will be explained uniformly in Proposition \ref{finite connected}, so in this section we only show that $H$ has a compact real form. \par We consider the $G_c$-action on a fixed diagonal curve $C_d$ and denote the stabilizer by $H_0$. We claim that $Lie(H)=Lie(H_0)^{\mathbb{C}}$; since $H_0$ is a compact real Lie group, this implies that $H$ is reductive. \par The claim $Lie(H)=Lie(H_0)^{\mathbb{C}}$ is proved by dimension counting. We consider two double fibrations. The first one is given by \[M=G_c/K\stackrel{\rho_1}\longleftarrow \mathcal{U}_1 \stackrel{\mu_1}\longrightarrow G_c/H_0, \] where $\rho_1^{-1}(o)\cong K.[v]$ and $\mu_1^{-1}(\kappa)\cong \mathbb{P}^1$ for a standard model of diagonal curve $\kappa$. Here $G_c/H_0$ is a subspace of the moduli space of all standard models of the diagonal curve. The map $\mu_1$ is well-defined for the following reason: choosing a canonical K\"{a}hler-Einstein metric on $M$ such that the diagonal curve $C_d$ is totally geodesic (so that $G_c/H_0$ is actually the moduli space of all standard models of the diagonal curve which are totally geodesic with respect to this metric), $K$ lies in the isometry group, so the $K$-action on $C_d$ preserving the tangent vector $v$ fixes the standard model. From this double fibration we easily compute that \[ \dim_{\mathbb{R}}(H_0)=-\dim_{\mathbb{R}}(M)-\dim_{\mathbb{R}}(K.[v])+\dim_{\mathbb{R}}(G_c)+2=\dim_{\mathbb{R}}(G_c)-(3n-3). \] \par On the other hand, from Proposition \ref{unique} we have another double fibration \[ \mathcal{D}(M) \stackrel{\rho_2}\longleftarrow \mathcal{U}_2 \stackrel{\mu_2}\longrightarrow G/H, \] where $\mathcal{D}(M)$ is a fibration over $M=G/P$ with fibre isomorphic to $P.[v]$, $\rho_2^{-1}(o,[w])=[w,[\mathfrak{m}^-,w]]/\mathbb{C}w $, and $\mu_2^{-1}(\kappa)\cong \mathbb{P}^1$ for a standard model of diagonal curve $\kappa$. Thus we have \[ \dim_{\mathbb{C}}(H)=-\dim_{\mathbb{C}}(M)-\dim_{\mathbb{C}}(P.[v])-(n-1)+\dim_{\mathbb{C}}(G)+1=\dim_{\mathbb{C}}(G)-(3n-3). \] Combining the dimensions above, we obtain \[\dim_{\mathbb{C}}(H)=\dim_{\mathbb{R}}(H_0).\] Since $H_0^{\mathbb{C}}\subset H$, we conclude $Lie(H_0)^{\mathbb{C}}=Lie(H)$, and then $H$ is a reductive group. This proves Theorem \ref{diagonal curve}. \begin{remark} The (complex) dimension of $G/H$ is $3n-3$; this is compatible with the computation from deformation theory, as $T(M)|_C=\mathcal{O}(2)^n$ and $\dim H^0(C,N_{C|M})=3(n-1)$. \end{remark} \section{Generalization} In this part, as in Remark \ref{reducible}, we assume that the ambient space $M$ is irreducible. We can actually prove the weaker gap rigidity for all $(H_2)$-subspaces $X$ in $M=G/P$ such that $P.[T_o(X)]$ is contained in the complement of a $P$-invariant hypersurface in $Gr(m,T_o(M))$, where $\dim(M)=n$ and $\dim(X)=m$. For the higher dimensional generalization we assume the existence of the standard models satisfying the second order tangency.
The idea is basically similar to the diagonal curve case; we still need two steps. \begin{enumerate} \item For every point $x\in S$, the standard model tangent to $S$ can be uniquely determined, so that the holomorphic map from $S$ to the moduli space of standard models can be constructed. \item Show that the moduli space is affine algebraic. \end{enumerate} Again we write $P=K^{\mathbb{C}}M^-$, and $S$ is a submanifold of $M$ tangent to some standard model at every point. Then it suffices to prove the following two facts. \begin{enumerate} \item The standard models tangent to $T_o(S)$ form an $(n-m)$-dimensional family, and for given $S$ a corresponding standard model can be uniquely determined by an $(n-m)$-dimensional subspace $Q\subset S^2T_o(S)\otimes N_{o,S|M}$, where $Q$ is spanned by the second fundamental forms of all standard models. \item There is a totally real $K$-orbit on $P.[T_o(X)]\subset Gr(m, T_o(M))$ such that $\dim_{\mathbb{C}}(P.[V])=\dim_{\mathbb{C}}(K^{\mathbb{C}}.[V])=\dim_{\mathbb{R}}(K.[V])$, where $[V]$ denotes a reference point on the totally real $K$-orbit. Also, the isotropy group $H$ of the $G$-action on the space of standard models has finitely many connected components. \end{enumerate} Assuming these two facts, we can give a proof of the Main Theorem. \begin{proof}[Proof of the Main Theorem] From the first fact we know that for each $x\in S$ there exists exactly one standard model tangent to $S$ at $x$ to the second order, and hence the holomorphic map from $S$ to the moduli space of standard models can be constructed. On the other hand, $V$ corresponds to some choice of standard models; we fix one as $g.X$ and choose a K\"{a}hler-Einstein metric on $M$ such that $g.X$ is totally geodesic with respect to this metric. Using the same argument and notation as in the diagonal curve case, we again have two double fibrations. The first one is given by \[M=G_c/K\stackrel{\rho_1}\longleftarrow \mathcal{U}_1 \stackrel{\mu_1}\longrightarrow G_c/H_0, \] where $\rho_1^{-1}(o)\cong K.[V]$ and $\mu_1^{-1}(\kappa)\cong X$ for a standard model $\kappa$, and $H_0$ is the isotropy group of the $G_c$-action on the moduli space of standard models. The second double fibration is \[ \mathcal{D}(M) \stackrel{\rho_2}\longleftarrow \mathcal{U}_2 \stackrel{\mu_2}\longrightarrow G/H, \] where $\mathcal{D}(M)$ is a fibration over $M=G/P$ with fibre isomorphic to $P.[V]$, $\rho_2^{-1}(o,[V])=Q$, and $\mu_2^{-1}(\kappa)\cong X$ for a standard model $\kappa$; here $G/H$ is the moduli space of standard models. \par We still have the dimension counts \[ \dim_{\mathbb{R}}(H_0)=-\dim_{\mathbb{R}}(M)-\dim_{\mathbb{R}}(K.[V])+\dim_{\mathbb{R}}(G_c)+2m \] and \[ \dim_{\mathbb{C}}(H)=-\dim_{\mathbb{C}}(M)-\dim_{\mathbb{C}}(P.[V])-(n-m)+\dim_{\mathbb{C}}(G)+m. \] Therefore $\dim_{\mathbb{R}}(H_0)=\dim_{\mathbb{C}}(H)$ holds; together with the fact that $H$ has finitely many connected components, this shows that $H$ is reductive, and the conclusion easily follows. \end{proof} \begin{remark} For the curve case, i.e. $M$ irreducible and of tube type and $X$ a diagonal curve in $M$, the two conditions are satisfied, including $P.[T_o(X)]=\mathcal{O}_o$ and $Q=S^2T_o(S)\otimes N_{o,S|M}$ by dimension counting; the second condition automatically gives the existence of standard models with second order tangency. Hence the ``weaker'' gap rigidity is actually the original gap rigidity in this case. In the general situation one would like to remove the extra condition on second order tangency, i.e.
to construct a holomorphic map from $S$ to the moduli space of standard models with only the first order tangency. This may need further study. \end{remark} We first give an affirmative answer to the second statement. \begin{lemma}\label{totally real} If the $P$-orbit $P.[T_o(X)]\subset Gr(m, T_o(M))$ is contained in the complement of a $P$-invariant hypersurface, then there is a totally real $K$-orbit in $P.[T_o(X)]\subset Gr(m, T_o(M))$; denoting the reference point by $[V]$, we have \[\dim_{\mathbb{C}}(K^{\mathbb{C}}.[V])=\dim_{\mathbb{R}}(K.[V]).\] \end{lemma} \begin{proof} Since $K^{\mathbb{C}}.[V]$ lies in the complement of a hypersurface in $Gr(m,T_o(M))$, it is affine algebraic, and hence the isotropy group $J$ of this orbit is reductive. Since $K$ is a compact real form of $K^{\mathbb{C}}$, there is a totally real $K$-orbit on $K^{\mathbb{C}}/J$ (cf. \cite[p.161]{MR1334091}). We let $L=K\cap J$; then \[\dim_{\mathbb{R}}K/L\leq \dim_{\mathbb{C}}K^{\mathbb{C}}/J,\] so $\dim_{\mathbb{R}}(L)\geq \dim_{\mathbb{C}}(J)$. On the other hand, we know $Lie(L)\oplus \sqrt{-1}Lie(L) \subset Lie(J)$; then $L$ is a maximal compact real form of $J$ and \[\dim_{\mathbb{R}}K/L= \dim_{\mathbb{C}}K^{\mathbb{C}}/J.\] \end{proof} \begin{remark} This lemma gives a uniform conceptual proof of the existence of totally real $K$-orbits, while the computation using root vectors in the curve case gives an explicit example. \end{remark} We can also prove the following. \begin{proposition}\label{finite connected} The isotropy group $H$ has finitely many connected components. \end{proposition} \begin{proof} We consider the double fibration \[ \mathcal{D}(M) \stackrel{\rho_2}\longleftarrow \mathcal{U}_2 \stackrel{\mu_2}\longrightarrow G/H. \] We know $\mathcal{D}(M)$ is a fibration over $M=G/P$ with fibre isomorphic to $P.[V]=K^{\mathbb{C}}.[V]$, and the fibre of $\rho_2$ is an affine space. On the other hand, the fibre of $\mu_2$ is a standard model of an $(H_2)$-subspace. We will use some exact sequences of fundamental groups associated to these fibrations. The first exact sequence is \[ \pi_1(K^{\mathbb{C}}.[V])\longrightarrow \pi_1(\mathcal{D}(M)) \longrightarrow \pi_1(M). \] Denoting the connected semisimple part of $K^{\mathbb{C}}$ by $\tilde{K}^{\mathbb{C}} $, we know $K^{\mathbb{C}}.[V]=\tilde{K}^{\mathbb{C}}.[V]$ is Stein, and the isotropy group $J$ under the $\tilde{K}^{\mathbb{C}}$-action is reductive and hence has finitely many connected components. Then from $\pi_{1}(\tilde{K}^{\mathbb{C}})\longrightarrow\pi_1(K^{\mathbb{C}}.[V])\longrightarrow \pi_0(J)\longrightarrow 0$ we know $\pi_1(K^{\mathbb{C}}.[V])$ is finite (although $\pi_0$ is not a group, the sequence is still exact in the sense that kernel equals image, and the ``zero'' in $\pi_{0}(J)$ can be chosen as any element of $\pi_0(J)$). On the other hand $\pi_1(M)=0$, so $\pi_1(\mathcal{D}(M))$ is finite. \par The second exact sequence is \[ 0 \longrightarrow \pi_1(\mathcal{U}_2) \longrightarrow \pi_1(\mathcal{D}(M)), \] since the fibre of $\rho_2$ is an affine space. Thus $\pi_1(\mathcal{U}_2)$ is finite. \par The last exact sequence is \[ 0 \longrightarrow \pi_1(\mathcal{U}_2) \longrightarrow \pi_1(G/H) \longrightarrow 0,\] since any $(H_2)$-subspace is path-connected and simply connected. Hence $\pi_1(G/H)$ is finite, and from $ \pi_1(G) \longrightarrow \pi_1(G/H) \longrightarrow \pi_0(H) \longrightarrow 0$ we know $H$ has finitely many connected components.
\end{proof} \par The remaining task is to prove the first statement. As in the diagonal curve case, we have the following lemma; hereafter all complex conjugations are taken with respect to the compact real form. \begin{lemma}\label{2nd fundamental form under M-} Suppose that $X\subset M$ is totally geodesic with respect to some choice of K\"{a}hler-Einstein metric. Then under the $M^-$-action given by $\exp(\overline{u})$ for some $u\in \mathfrak{m}^+$, the second fundamental form of $\exp(\overline{u}).X$ in $M$ is \[\sigma_{o}(v_1,v_2)=[v_1,[\overline{u},v_2]] \mod T_{o}(S)\] for any $v_1,v_2\in T_ o(S)=T_o(X)$. \end{lemma} \begin{proof} For any $v\in T_o(S)=T_o(X)$, from \cite[Lemma 4.3]{MR1198602} we know $\sigma_{o}(v,v)=[v,[\overline{u},v]] \mod T_{o}(S)$. By a polarization argument the conclusion follows. \end{proof} Then we can prove the following lemma, which reduces the proof of the first statement to checking a Lie bracket generating condition. \begin{lemma} If $\text{span}[T_o(X),[\mathfrak{m}^-,T_o(X)]]=\mathfrak{m^+}=T_o(M)$, then for $u\in \mathfrak{m}^+$, $\exp(\overline{u}).X=X$ if and only if $u\in T_o(X)$, and then $\exp(\mathfrak{m}^-).X$ gives an $(n-m)$-dimensional family of standard models. \end{lemma} \begin{proof} Since $X$ is a geodesic model, we know $[T_o(X),[\overline{T}_o(X),T_o(X)]] \subset T_o(X)$, and hence if $u\in T_o(X)$, then $\exp(\overline{u}).X=X$. For the other direction, suppose $\exp(\overline{u}).X=X$; then $[v,[\overline{u},v]]\in T_o(X)$ for all $v\in T_o(X)$. Under the metric induced from the Killing form $B$ we decompose $\mathfrak{m}^+=T_o(X)\oplus T_o(X)^{\bot}$, and denote the projection of $u$ to $T_o(X)^{\bot}$ by $u'$. Then we have $B([v,[\overline{u'},v]], \overline{w})=0$ for all $w\in T_o(X)^\bot$, hence $B([v,[\overline{w},v]], \overline{u'})=0$ for all $w\in T_o(X)^\bot$. Since $\text{span}[T_o(X),[\mathfrak{m}^-,T_o(X)]]=\mathfrak{m^+}$ and $[T_o(X),[\overline{T}_o(X),T_o(X)]] \subset T_o(X)$, we conclude that $u'$ is orthogonal to the whole of $\mathfrak{m}^+$; thus $u'=0$ and $u\in T_o(X)$. \end{proof} Therefore it suffices to show the following Lie bracket generating condition: \[(\dag):\quad\text{span}[T_o(S),[\mathfrak{m}^-,T_o(S)]]=\mathfrak{m}^+.\] This condition is invariant under the $P$-action, so without loss of generality we may assume that $V=T_o(X)=T_o(S)$. We will check that all $(H_2)$-subspaces in irreducible compact Hermitian symmetric spaces of rank $\geq 2$ satisfy this condition. The classification of maximal $(H_2)$-subspaces is given in the table in \cite[pp.27-28]{MR2127948}. An easy corollary obtained from the diagonal curve case shows that if $V$ contains a tangent vector of maximal rank on $M$ and $M$ is of tube type, then the Lie bracket generating condition $(\dag)$ holds. Typical examples of higher dimensional submanifolds covered by this corollary are linear section subquadrics in a hyperquadric and maximal polyspheres in tube type symmetric spaces. Motivated by this, if $T_o(X)$ contains such a tangent vector, we will say the embedding $X\subset M$ is of diagonal type; the others are called of non-diagonal type. We want to see which maximal $(H_2)$-subspaces are of diagonal type.
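As an illustration of condition $(\dag)$ we record a minimal numerical sketch (not part of the proof; it assumes \texttt{numpy} and checks the diagonal curve in $G(2,2)$, with $\mathfrak{m}^{\pm}$ realized as the off-diagonal blocks of $\mathfrak{sl}(4,\mathbb{C})$):
\begin{verbatim}
# Sketch: verify span[V, [m^-, V]] = m^+ for the diagonal curve in G(2,2).
import numpy as np
from itertools import product

n = 2

def plus(A):   # embed a 2x2 matrix into m^+ (upper-right block)
    M = np.zeros((2*n, 2*n)); M[:n, n:] = A; return M

def minus(E):  # embed a 2x2 matrix into m^- (lower-left block)
    M = np.zeros((2*n, 2*n)); M[n:, :n] = E; return M

def br(X, Y):  # Lie bracket
    return X @ Y - Y @ X

v = plus(np.eye(n))  # generic (rank 2) tangent vector, V = C.v

rows = []
for i, j in product(range(n), repeat=2):
    E = np.zeros((n, n)); E[i, j] = 1.0
    w = br(v, br(minus(E), v))      # [v, [xi, v]] with xi running over m^-
    rows.append(w[:n, n:].ravel())  # read off its m^+ component

# prints 4 = dim m^+, so condition (dag) holds in this example
print(np.linalg.matrix_rank(np.array(rows)))
\end{verbatim}
The computed rank equals $4=\dim\mathfrak{m}^+$, in accordance with the formula $H(v)=2DAD$ from the proof of Lemma \ref{bijection}.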
In the following, we divide the cases into two categories according to the classification: when the ambient space is of Type I, II, III, we prove the generating condition based on matrix expressions; when the ambient space is of Type IV or of type $E_6$, $E_7$, all $(H_2)$-subspaces can be written down explicitly and the checking is straightforward. \subsection{When $M$ is a hyperquadric} All $(H_2)$-subspaces in a hyperquadric can be classified as linear section subquadrics and diagonal curves in geodesic two-spheres. All of these are of diagonal type, and hyperquadrics are of tube type, so the Lie bracket generating condition $(\dag)$ is already satisfied. \subsection{When $M$ is of $E_6$-type} From the classification in \cite{MR2127948} we know all $(H_2)$-embeddings in an $E_6$-type ambient space are of diagonal type (but $E_6$ is not of tube type), so root systems will be used in this case. We have \begin{lemma}\label{E6 span} When $M$ is the $E_6$-type irreducible compact Hermitian symmetric space, for any $(H_2)$-embedding $X\subset M$, denoting the tangent space of $X$ at the reference point $o$ by $V$, the space $[V,[\mathfrak{m}^-,V]]$ spans the whole tangent space $T_o(M)\cong \mathfrak{m}^+$. \end{lemma} \begin{proof} We refer to the table given in \cite[p.28]{MR2127948}; chains of $(H_2)$-subspaces can be classified as $\mathbb{P}^2 \stackrel{diag}{\hookrightarrow}\mathbb{P}^2\times \mathbb{P}^2 \hookrightarrow G(2,4) \hookrightarrow M$ or $\mathbb{P}^5\times \mathbb{P}^1\hookrightarrow M$. It suffices to check the cases $\mathbb{P}^2$ and $\mathbb{P}^5\times \mathbb{P}^1\hookrightarrow M$. We adopt the root system notations of \cite[p.290]{MR0214807}, where the extended Dynkin diagram is as follows. \begin{figure}[h] \begin{tikzpicture} \draw[thick] (0,0) -- (6,0) (3,0) -- (3,-1.5) (3,-1.5) -- (1.5,-1.5); \draw[ thick, fill=black] (0,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_1$}; \draw[ thick, fill=white] (1.5,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_2$}; \draw[ thick, fill=white] (3,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_3$}; \draw[ thick, fill=white] (4.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_4$}; \draw[ thick, fill=white] (6,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_5$}; \draw[ thick, fill=white] (3,-1.5) circle (3pt) node[right,outer sep=3pt]{$\alpha_6$}; \draw[ thick, fill=black] (1.5,-1.5) circle (3pt) node[above,outer sep=3pt]{$-\gamma$}; \end{tikzpicture} \caption{Extended Dynkin diagram of $E_6$} \label{fig:E6} \end{figure} Here the highest root is $\gamma=\alpha_1+2\alpha_2+3\alpha_3+2\alpha_4+\alpha_5+2\alpha_6$. Then the tangent space of $X=\mathbb{P}^2$ is spanned by $e_{\alpha_1}+e_{\gamma-\alpha_6}$ and $e_{\alpha_1+\alpha_2}+e_{\gamma}$. All the positive noncompact roots can be listed according to the coefficients of $\alpha_1,\dots,\alpha_6$ in order, which are \[ \begin{gathered} (100000)=\alpha_1, (110000)=\alpha_1+\alpha_2, (111000), (111001) \\ (111100), (111101), (111110), (112101) \\ (111111), (122101), (112111), (122111) \\ (112211), (122211), (123211)=\gamma-\alpha_6, (123212)=\gamma \\ \end{gathered}\] One can easily check that any positive noncompact root $\delta$ has a pairing $(\delta,\delta')$ such that $\delta+\delta'=\alpha_1+\gamma-\alpha_6$ or $\delta+\delta'=\alpha_1+\alpha_2+\gamma$; thus every positive noncompact root vector is in the space span$[V,[\mathfrak{m}^-,V]]$.
\begin{figure}[h] \begin{tikzpicture} \draw[thick] (0,0) -- (6,0) (3,0) -- (3,-1.5) (3,-1.5) -- (1.5,-1.5); \draw[dashed](-0.75,0.7)--(2.25,0.7) (-0.75,-0.7)--(2.25,-0.7) (-0.75,0.7)--(-0.75,-0.7) (2.25,0.7)--(2.25,-0.7); \draw[dashed](0.75,-0.8)--(3.75,-0.8) (0.75,-2.2)--(3.75,-2.2) (0.75,-0.8)--(0.75,-2.2) (3.75,-0.8)--(3.75,-2.2); \draw[ thick, fill=black] (0,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_1$}; \draw[ thick, fill=white] (1.5,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_2$}; \draw[ thick, fill=white] (3,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_3$}; \draw[ thick, fill=white] (4.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_4$}; \draw[ thick, fill=white] (6,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_5$}; \draw[ thick, fill=white] (3,-1.5) circle (3pt) node[right,outer sep=3pt]{$\alpha_6$}; \draw[ thick, fill=black] (1.5,-1.5) circle (3pt) node[above,outer sep=3pt]{$-\gamma$}; \end{tikzpicture} \caption{$(H_2)$-embedding of $\mathbb{P}^2\times \mathbb{P}^2$} \label{fig:P2} \end{figure} \par For the embedding $X=\mathbb{P}^5\times \mathbb{P}^1\hookrightarrow M$, we know the tangent space of $\mathbb{P}^5\times \mathbb{P}^1$ is spanned by the root vectors whose roots contain no $\alpha_6$ term, together with the root vector $e_{\gamma}$; we can check the conclusion in a similar manner. \begin{figure}[h] \begin{tikzpicture} \draw[thick] (0,0) -- (6,0) (3,0) -- (3,-1.5) (3,-1.5) -- (1.5,-1.5); \draw[dashed](-0.75,0.7)--(6.75,0.7) (-0.75,-0.7)--(6.75,-0.7) (-0.75,0.7)--(-0.75,-0.7) (6.75,0.7)--(6.75,-0.7); \draw[dashed](0.75,-0.8)--(2.25,-0.8) (0.75,-2.2)--(2.25,-2.2) (0.75,-0.8)--(0.75,-2.2) (2.25,-0.8)--(2.25,-2.2); \draw[ thick, fill=black] (0,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_1$}; \draw[ thick, fill=white] (1.5,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_2$}; \draw[ thick, fill=white] (3,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_3$}; \draw[ thick, fill=white] (4.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_4$}; \draw[ thick, fill=white] (6,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_5$}; \draw[ thick, fill=white] (3,-1.5) circle (3pt) node[right,outer sep=3pt]{$\alpha_6$}; \draw[ thick, fill=black] (1.5,-1.5) circle (3pt) node[above,outer sep=3pt]{$-\gamma$}; \end{tikzpicture} \caption{$(H_2)$-embedding of $\mathbb{P}^5\times \mathbb{P}^1$} \label{fig:P5} \end{figure} \end{proof} \subsection{When $M$ is of $E_7$-type} All $(H_2)$-subspaces have been classified in \cite{MR2127948}. As $E_7$ is of tube type, the diagonal type cases have been settled by the previous discussion; we only need to check the non-diagonal type cases. We have \begin{lemma}\label{E7 span} When $M$ is the $E_7$-type irreducible compact Hermitian symmetric space, for any $(H_2)$-embedding $X\subset M$, denoting the tangent space of $X$ at the reference point $o$ by $V$, the space $[V,[\mathfrak{m}^-,V]]$ spans the whole tangent space $T_o(M)\cong \mathfrak{m}^+$. \end{lemma} \begin{proof} We refer to the table given in \cite[p.28]{MR2127948}; we only need to consider the non-diagonal type. Chains of such $(H_2)$-subspaces can be classified as $\mathbb{P}^3 \stackrel{diag}{\hookrightarrow}\mathbb{P}^3\times \mathbb{P}^3 \hookrightarrow G(2,6) \hookrightarrow M$ or $\mathbb{P}^5\times \mathbb{P}^2\hookrightarrow M$. It suffices to check the cases $\mathbb{P}^3$ and $\mathbb{P}^5\times \mathbb{P}^2\hookrightarrow M$.
We adopt the root system notations of \cite[p.291]{MR0214807}; the extended Dynkin diagram is as follows. \begin{figure}[h] \begin{tikzpicture} \draw[thick] (0,0) -- (9,0) (4.5,0) -- (4.5,-1.5) ; \draw[ thick, fill=black] (0,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_1$}; \draw[ thick, fill=white] (1.5,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_2$}; \draw[ thick, fill=white] (3,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_3$}; \draw[ thick, fill=white] (4.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_4$}; \draw[ thick, fill=white] (6,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_5$}; \draw[ thick, fill=white] (7.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_6$}; \draw[ thick, fill=black] (9,0) circle (3pt) node[above,outer sep=3pt]{$-\gamma$}; \draw[ thick, fill=white] (4.5,-1.5) circle (3pt) node[right,outer sep=3pt]{$\alpha_7$}; \end{tikzpicture} \caption{Extended Dynkin diagram of $E_7$} \label{fig:E7} \end{figure} Here the highest root is $\gamma=\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+2\alpha_7$. Then the tangent space of $X=\mathbb{P}^3$ is spanned by $e_{\alpha_1}+e_{\gamma}$, $e_{\alpha_1+\alpha_2}+e_{\gamma-\alpha_6}$ and $e_{\alpha_1+\alpha_2+\alpha_3}+e_{\gamma-\alpha_5-\alpha_6}$. All the positive noncompact roots can be listed according to the coefficients of $\alpha_1,\dots,\alpha_7$ in order, which are \[ \begin{gathered} (1000000)=\alpha_1, (1100000)=\alpha_1+\alpha_2, (1110000)=\alpha_1+\alpha_2+\alpha_3\\ (1111000), (1111001), (1111100), (1111110) \\ (1111101), (1111111), (1112101), (1112111) \\ (1122101), (1122111), (1112211), (1222101) \\ (1122211), (1222111), (1123211), (1222211)\\ (1123212), (1223211), (1223212), (1233211), (1233212), \\ (1234212)=\gamma-\alpha_6-\alpha_5, (1234312)=\gamma-\alpha_6, (1234322)=\gamma \\ \end{gathered}\] One can easily check that any positive noncompact root $\delta$ has a pairing $(\delta,\delta')$ such that $\delta+\delta'=\alpha_1+\gamma$ or $\delta+\delta'=\alpha_1+\alpha_2+\gamma-\alpha_6$ or $\delta+\delta'=\alpha_1+\alpha_2+\alpha_3+\gamma-\alpha_5-\alpha_6$; thus every positive noncompact root vector is in the space span$[V,[\mathfrak{m}^-,V]]$.
\begin{figure}[h] \begin{tikzpicture} \draw[thick] (0,0) -- (9,0) (4.5,0) -- (4.5,-1.5) ; \draw[dashed](-0.75,0.7)--(3.75,0.7) (-0.75,-0.7)--(3.75,-0.7) (-0.75,0.7)--(-0.75,-0.7) (3.75,0.7)--(3.75,-0.7); \draw[dashed](5.25,0.7)--(9.75,0.7) (5.25,-0.7)--(9.75,-0.7) (5.25,0.7)--(5.25,-0.7) (9.75,0.7)--(9.75,-0.7); \draw[ thick, fill=black] (0,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_1$}; \draw[ thick, fill=white] (1.5,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_2$}; \draw[ thick, fill=white] (3,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_3$}; \draw[ thick, fill=white] (4.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_4$}; \draw[ thick, fill=white] (6,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_5$}; \draw[ thick, fill=white] (7.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_6$}; \draw[ thick, fill=black] (9,0) circle (3pt) node[above,outer sep=3pt]{$-\gamma$}; \draw[ thick, fill=white] (4.5,-1.5) circle (3pt) node[right,outer sep=3pt]{$\alpha_7$}; \end{tikzpicture} \caption{$(H_2)$-embedding of $\mathbb{P}^3\times \mathbb{P}^3$} \label{fig:P3} \end{figure} \par For the embedding $X=\mathbb{P}^5\times \mathbb{P}^2\hookrightarrow M$, we know the tangent space of $\mathbb{P}^5\times \mathbb{P}^2$ is spanned by the root vectors whose roots contain no $\alpha_5$ term, together with the root vector $e_{\gamma}$; a similar procedure applies. \begin{figure}[h] \begin{tikzpicture} \draw[thick] (0,0) -- (9,0) (4.5,0) -- (4.5,-1.5) ; \draw[dashed](-0.75,0.7)--(5.25,0.7) (-0.75,-0.7)--(3.75,-0.7) (-0.75,0.7)--(-0.75,-0.7) (5.25,0.7)--(5.25,-2.2) (3.75,-0.7)--(3.75,-2.2) (3.75,-2.2)--(5.25,-2.2) ; \draw[dashed](6.75,0.7)--(9.75,0.7) (6.75,-0.7)--(9.75,-0.7) (6.75,0.7)--(6.75,-0.7) (9.75,0.7)--(9.75,-0.7); \draw[ thick, fill=black] (0,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_1$}; \draw[ thick, fill=white] (1.5,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_2$}; \draw[ thick, fill=white] (3,0) circle (3pt) node[above, outer sep=3pt]{$\alpha_3$}; \draw[ thick, fill=white] (4.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_4$}; \draw[ thick, fill=white] (6,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_5$}; \draw[ thick, fill=white] (7.5,0) circle (3pt) node[above,outer sep=3pt]{$\alpha_6$}; \draw[ thick, fill=black] (9,0) circle (3pt) node[above,outer sep=3pt]{$-\gamma$}; \draw[ thick, fill=white] (4.5,-1.5) circle (3pt) node[right,outer sep=3pt]{$\alpha_7$}; \end{tikzpicture} \caption{$(H_2)$-embedding of $\mathbb{P}^5\times \mathbb{P}^2$} \label{fig:P5P2} \end{figure} \end{proof} \subsection{When $M$ is of Type I, II, III} The remaining cases are dealt with by matrix expressions. We will check whether some of the maximal $(H_2)$-subspaces are of diagonal type and then prove the Lie bracket generating condition $(\dag)$.
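The matrix computations below all rest on the following elementary block identity, recorded here for the reader's convenience (a direct computation): for
\[
v=\begin{bmatrix} 0 & A \\ 0 & 0 \end{bmatrix}\in\mathfrak{m}^+,\qquad
\xi=\begin{bmatrix} 0 & 0 \\ E & 0 \end{bmatrix}\in\mathfrak{m}^-,
\]
we have
\[
[v,[\xi,v]]=\begin{bmatrix} 0 & 2AEA \\ 0 & 0 \end{bmatrix},
\]
which is the higher dimensional analogue of the formula $H(v)=2DAD$ in the proof of Lemma \ref{bijection}.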
When $X\subset M$ is of non-diagonal type, or when the ambient space is not of tube type, the corollary obtained from the diagonal curve case no longer works. However, we can obtain some ideas from the tube type case. For instance, when $M$ is a Grassmannian $G(n,n)\ (n\geq 2)$, the tangent space can be identified with $n\times n$ matrices, and we know a tangent vector $v$ of maximal rank, corresponding to a matrix of full rank, generates all $n\times n$ matrices under the Lie bracket $[v,[\mathfrak{m}^-,v]]$. In general, for $G(p,q)\ (p\geq q\geq 2)$ we identify the tangent space as \[ T_o(M)\cong \mathfrak{m}^+=\{\begin{bmatrix} 0 & A \\ 0 & 0\\ \end{bmatrix}:A\ \text{is a $p\times q$ matrix}\}. \] Suppose a tangent vector $v$ is identified with a $p\times q$ matrix with only one maximal principal submatrix of full rank, for instance $A$ of the form \[ \left[ \begin{BMAT}(e)[2pt,3cm,2cm]{c.c.c}{c.c} 0 & A' & 0\\ 0 & 0 &0 \end{BMAT} \right] \] where $A'$ is an $r\times r$ matrix of full rank. Then the Lie bracket $[v,[\mathfrak{m}^-,v]]$ will generate all matrices of the form \[ \left[ \begin{BMAT}(e)[2pt,3cm,2cm]{c.c.c}{c.c} 0 & * & 0\\ 0 & 0 &0 \end{BMAT} \right]. \] If such matrices, corresponding to tangent vectors in $V=T_o(X)$, can cover all the $pq$ positions in the matrix, we say the $(H_2)$-embedding satisfies \textbf{Condition C}. One easily obtains that if \textbf{Condition C} is satisfied, then $[V,[\mathfrak{m}^-,V]]$ generates $\mathfrak{m}^+\cong T_o(M)$. When the ambient space is of Type II or Type III, in a similar manner we just replace the matrices $A$ and the principal submatrices by antisymmetric matrices and symmetric matrices respectively. Thus we reduce the problem to showing that all $(H_2)$-embeddings in Type I, II, III ambient spaces satisfy \textbf{Condition C}. We first check whether some maximal $(H_2)$-subspaces in the Grassmannian are of diagonal type, and furthermore whether they satisfy \textbf{Condition C}. \subsubsection{Even dimensional hyperquadric into Grassmannian} In this and the next part, we use \cite{MR1031992} and \cite{MR0196134} as general references for spin representations and the embedding of a hyperquadric into a Grassmannian. We check that the image of the embedding $Q^{2\ell} \rightarrow G(2^{\ell-1},2^{\ell-1})$ induced by the spin representation has a tangent vector of maximal rank in $ G(2^{\ell-1},2^{\ell-1})$, i.e. the embedding is of diagonal type.
Let $\mathfrak{g}_1=\mathfrak{so}(2\ell+2,\mathbb{C})$ be the Lie algebra of the automorphism group of $Q^{2\ell}$, in block form \[\mathfrak{g}_1=\{\begin{bmatrix} A & B \\ C & D\\ \end{bmatrix}:A=-A^t,D=-D^t,B=-C^t\}.\] The holomorphic tangent space can be identified with \[\mathfrak{m}_1^+=\{\begin{bmatrix} 0 & B \\ -B^t & 0\\ \end{bmatrix}:B=(\sqrt{-1}Z,Z), Z\ \text{is a $2\ell\times 1$ matrix}\}.\] On the other hand, we consider the Clifford algebra $C=C^{\mathbb{C}}(2\ell+2)$ associated to a complex vector space $E=\mathbb{C}^{2\ell+2}$ and the standard nondegenerate quadratic form $q$ such that $q(v,v)=-v^2$ for any $v\in E$. Choosing a basis $\{e_1,\dots,e_{2\ell+2}\}$ with $q(e_i,e_j)=\delta_{ij}$, we know that as a vector space \[\mathfrak{spin}(2\ell+2)=\text{span}\{e_ie_j\in C:1\leq i<j\leq 2\ell+2\},\] and together with the operation of taking commutators it is isomorphic to $\mathfrak{so}(2\ell+2,\mathbb{C})$; more precisely the isomorphism is given by \[ L_{ij}=E_{ij}-E_{ji}\longleftrightarrow \frac{1}{2}e_ie_j,\] where the $L_{ij}$ give a basis for antisymmetric complex $(2\ell+2)\times (2\ell+2)$ matrices. \par We also introduce an explicit spinor module using the exterior algebra $S=\bigwedge^*W$, where $W$ is an $(\ell+1)$-dimensional complex vector space with an orthonormal basis $\{w_1,\dots,w_{\ell+1}\}$ (for some Hermitian inner product on $W$). Any $w\in W$ induces two operations on $S$: the first is given by (left) exterior multiplication, which we denote by $w\wedge$; the second is the adjoint operation of $w\wedge$ on $S$ with respect to the Hermitian inner product induced by that of $W$, which we denote by $w\lfloor$. Then the $w_i\wedge,w_i\lfloor$ generate the algebra $\text{End}(S)$, which also carries a Clifford algebra structure; for simplicity we use the notation $a_i^\dagger=w_i\wedge$, $a_i=w_i\lfloor$. Let $\{ , \}$ be the nondegenerate quadratic form on the complex vector space spanned by all $a_i^{\dagger},a_i$ satisfying $\{a_i,a_j\}=\{a_i^\dagger,a_j^\dagger\}=0$ and $\{a_i,a^{\dagger}_j\}=\delta_{ij}$. The relation $ab+ba=-\{a,b\}$ together with the nondegenerate quadratic form $\frac{1}{2}\{,\}$ gives the Clifford algebra structure of $\text{End}(S)$. \par In fact, $W$ can be constructed by letting $w_i=\frac{1}{2}(e_{2i-1}+\sqrt{-1}e_{2i})$, where the Hermitian inner product can be chosen such that $\{w_i\}_i$ is orthonormal. In this setting we know that in the Clifford algebra $C$ \[ \begin{gathered} w^2_i=0,\quad w_iw_j=-w_jw_i,\\ \overline{w}_iw_j=-w_{j}\overline{w}_i-\delta_{ij}. \end{gathered} \] Let $\overline{w}_{N}=\overline{w}_1\cdots\overline{w}_{\ell+1}$. Canonically identifying $S$ with the subalgebra of $C$ generated by $W$ (still denoted by $S$), we know $S\overline{w}_N$ is a left ideal of the Clifford algebra, and for each $e\in C$ there is a unique linear transformation $L(e)$ on $S$ such that $ex\overline{w}_N=L(e)x\overline{w}_N$. Through $L$ we can identify $C\cong \text{End}(S)$ as an isomorphism of Clifford algebras, and this gives the spin representation by restriction to $\mathfrak{spin}(2\ell+2)$. More precisely, one can easily check that the actions $L(w_i)$ and $L(\overline{w}_i)$ are exactly the operators $a^\dagger_i$ and $-a_i$ on the exterior algebra $S$.
\par Therefore the generators of the Clifford algebra $C$ can be identified with generators of $\text{End}(S)$ by \[e_{2j-1}\longleftrightarrow a_j^\dagger-a_j,\qquad e_{2j}\longleftrightarrow -\sqrt{-1}(a_j^\dagger+a_j).\] In this case the spin representation is not irreducible. We denote by $C^{\pm}$ the subspaces of $C$ generated by even and odd elements respectively; $C^+$ is a subalgebra. For $e\in \mathfrak{spin}(2\ell+2)\subset C^{+}$, the action $L(e)$ preserves $S^+=\bigwedge^{even}W$ and $S^-=\bigwedge^{odd}W$ and induces the two so-called half-spin representations. We consider the representation on $S^+$, whose dimension is $2^\ell$; this gives an $(H_2)$-embedding of $Q^{2\ell}$ into $G(2^{\ell-1},2^{\ell-1})$. \par Now we choose the vector in $\mathfrak{m}_1^+$ given by $Z=(-\sqrt{-1},0,\dots,0)^t$; it can be identified with $\frac{1}{2}(e_1e_{2\ell+1}-\sqrt{-1}e_1e_{2\ell+2})$. Through the spin representation $\frac{1}{2}(e_1e_{2\ell+1}-\sqrt{-1}e_1e_{2\ell+2})=-(a_1^\dagger-a_1)a_{\ell+1}$. Choosing the standard basis of $S^+$ given by $w_{i_1}\wedge\dots\wedge w_{i_k}$ ($k$ even), this operation annihilates the terms containing no $w_{\ell+1}$ factor, while on the subspace generated by the terms containing $w_{\ell+1}$ it has $2^{\ell-1}$-dimensional image. Thus the image of the tangent vector given by $Z=(-\sqrt{-1},0,\dots,0)^t$ is a tangent vector of maximal rank in the tangent space of $G(2^{\ell-1},2^{\ell-1})$. \subsubsection{Odd dimensional hyperquadric into Grassmannian} Next we check that the embedding $Q^{2\ell-1} \rightarrow G(2^{\ell-1},2^{\ell-1})$ induced by the spin representation is also of diagonal type. In this case the representation is irreducible. Let $C=C^{\mathbb{C}}(2\ell+1)$ be the Clifford algebra associated to a complex vector space $E=\mathbb{C}^{2\ell+1}$ and the standard nondegenerate quadratic form $q$; we still denote the basis by $e_1,\dots,e_{2\ell+1}$, satisfying $q(e_i,e_j)=\delta_{ij}$. Let $E'$ be the subspace generated by $e_i\ (i \leq 2\ell-2), e_{2\ell}, e_{2\ell+1}$; there is a Clifford algebra $C'$ associated to $E'$ together with the standard nondegenerate quadratic form induced from $q$. Since $q(x',x')=-x'^2=-(x'e_{2\ell-1})^2$, we know $(C',q)$ is isomorphic to $(C^+,q)$ as a Clifford algebra, with the isomorphism induced by $\psi(x')=x'e_{2\ell-1}$ for $x'\in E'$. Now the action of $e_{2\ell-1}$ on the spin module is the identity. A procedure similar to the even dimensional hyperquadric case applies to $C'$, and we again obtain that the image of the tangent vector given by $Z=(-\sqrt{-1},0,\dots,0)$ is a tangent vector of maximal rank in the tangent space of $G(2^{\ell-1},2^{\ell-1})$. \subsubsection{Projective space into Grassmannian} We will check that the embedding of $\mathbb{P}^n\ (n\geq 3)$ into $G(p,q)$ through the irreducible representation $\Lambda_m$ is not of diagonal type, where $\Lambda_m$ is the space of skew symmetric tensors of degree $m$ ($1< m< n$), and $q=\binom{n}{m}, p=\binom{n}{m-1}$. However, the Lie bracket generating condition $(\dag)$ is still satisfied.
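The rank count carried out in the next paragraph can be confirmed numerically for small $n,m$ (a minimal sketch, not part of the argument; it assumes \texttt{numpy}, and $T$ is the rank-one transformation $w_{n+1}\mapsto w_1$, $w_j\mapsto 0$ fixed below):
\begin{verbatim}
# Sketch: rank of the action induced on Lambda^m W by the rank-one
# transformation T(w_{n+1}) = w_1, T(w_j) = 0 for j <= n.
import numpy as np
from itertools import combinations
from math import comb

n, m = 5, 3
basis = list(combinations(range(1, n + 2), m))  # basis of Lambda^m W
index = {S: k for k, S in enumerate(basis)}

M = np.zeros((len(basis), len(basis)))
for k, S in enumerate(basis):
    # T acts as a derivation: nonzero only if n+1 in S and 1 not in S
    if (n + 1) in S and 1 not in S:
        target = tuple(sorted((1,) + tuple(x for x in S if x != n + 1)))
        M[index[target], k] = (-1) ** S.index(n + 1)  # sign from reordering
print(np.linalg.matrix_rank(M), comb(n - 1, m - 1))  # both equal 6
print(min(comb(n, m - 1), comb(n, m)))               # 10 > 6: not maximal rank
\end{verbatim}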
The Lie algebra of the automorphism group of $\mathbb{P}^n$ is $\mathfrak{g}_1=\mathfrak{sl}(n+1,\mathbb{C}) \subset \text{End}(W)$ with $\dim_\mathbb{C}(W)=n+1$, and the holomorphic tangent space can be identified with \[\mathfrak{m}_1^+=\{\begin{bmatrix} 0 & A \\ 0 & 0\\ \end{bmatrix}:A\ \text{is an $n\times 1$ matrix}\}.\] We consider the tangent vector corresponding to $A=(1,0,\dots,0)^t$. Choosing a basis $\{w_1,\dots,w_{n+1}\}$ for $W$, this tangent vector corresponds to the linear transformation $T$ with $T(w_j)=0\ (1\leq j\leq n)$, $T(w_{n+1})=w_1$. Therefore the induced action of $T$ (still denoted by $T$) on $\bigwedge^mW$ satisfies $ T(w_{i_1}\wedge w_{i_2} \wedge \cdots \wedge w_{i_m})=0 $ if $n+1\notin \{i_1,\dots,i_m\}$ or $1\in \{i_1,\dots,i_m\}$; thus the rank of $T\in \text{End}(\bigwedge^mW)$ is $\binom{n}{m-1}-\binom{n-1}{m-2}=\binom{n-1}{m-1}$. One easily checks that \[\binom{n}{m-1}-\binom{n-1}{m-2} < \min\{\binom{n}{m-1}, \binom{n}{m}\},\] so the tangent vector on $G(p,q)$ is not of maximal rank, and thus the embedding is not of diagonal type. \par To see whether $[V,[\mathfrak{m}^-,V]]$ generates $\mathfrak{m}^+$ when $V=T_o(f(\mathbb{P}^n))$, where $f$ denotes the embedding above, we first give the following easy lemma. \begin{lemma}\label{block span} For any nonzero $w_{i_1}\wedge \cdots \wedge w_{i_{m-1}}\wedge w_{n+1}, w_{j_1}\wedge \cdots \wedge w_{j_{m}} \in \bigwedge^mW$ with $n+1\notin\{j_1,\dots,j_{m}\}$, there exists a nonzero $T\in \text{End}(W)$ with $T(w_j)=0$ for all $j\neq n+1$ and $T(w_{n+1})\in \text{span}\{w_1,\dots,w_n\}$, such that the induced action satisfies $T(w_{i_1}\wedge \cdots \wedge w_{i_{m-1}}\wedge w_{n+1})\neq 0$ and $T(W^{\wedge})=w_{j_1}\wedge \cdots \wedge w_{j_{m}}$ for some $W^{\wedge}\in \bigwedge^mW$. \end{lemma} \begin{proof} From the given condition we know there exists some $j_k$ such that $j_k\notin \{i_1,\dots,i_{m-1}\}$. Choose $T_k$ satisfying $T_{k}(w_{n+1})=w_{j_k}$ and $T_{k}(w_j)=0\ (j\neq n+1)$; one easily observes that $T_k$ is a required linear transformation. \end{proof} We interpret Lemma \ref{block span} in terms of matrices. Write the holomorphic tangent space of $M$ as \[T_o(M)\cong\mathfrak{m}^+=\{\begin{bmatrix} 0 & C \\ 0 & 0\\ \end{bmatrix}:C\ \text{is a $p\times q$ matrix}\}.\] The matrix is written in terms of the basis $\{w_{i_1}\wedge \cdots \wedge w_{i_m}, 1\leq i_1<\cdots < i_m\leq n+1\}$, where the first block of rows corresponds to the basis elements $w_{i_1}\wedge \cdots \wedge w_{i_m}$ with $i_m=n+1$. Then the image of the $T_k$ above corresponds to a matrix $C_k$ with only one $\binom{n-1}{m-1}\times \binom{n-1}{m-1}$ principal submatrix $C'_k$ of rank $\binom{n-1}{m-1}$, and by the lemma any position in $C$ can be covered by some such $C'_k$. Therefore Lemma \ref{block span} says that the $(H_2)$-embedding $\mathbb{P}^n \subset G(p,q)$ satisfies \textbf{Condition C}. \par Now we are ready to finish the proof of the Main Theorem. \begin{proof} It suffices to check the Lie bracket generating condition $(\dag)$ for all $(H_2)$-subspaces. When the ambient space is of Type IV, $E_6$ or $E_7$, this has already been done; it remains to consider the cases when $M$ is of Type I, II, III. The prototype for these problems is the diagonal curve in a tube type ambient space. Comparing our situation with the prototype, we observe that when $M=G(p,q)$ with $p\neq q$, the $(H_2)$-subspaces contain no Type II, III, IV factors, and the ``trouble'' comes from the non-diagonal type embedding of a projective space.
When $M=G(n,n), G^{II}(n,n)$ or $G^{III}(n,n)$, besides the non-diagonal type embedding of a projective space, the ``trouble'' also comes from Type II factors, for example $G^{II}(n,n)\subset G(n,n)$ with $n$ odd, and $G^{II}(r,r)\times G^{II}(n-r,n-r) \subset G^{II}(n,n)$ with $r$ odd or $n-r$ odd, which are non-diagonal embeddings (but satisfy \textbf{Condition C}). \par Based on this observation we want to check \textbf{Condition C} in general; we will consider the problem using a chain of maximal $(H_2)$-subspaces. There are three types of maximal embeddings when a reducible Hermitian symmetric space appears in the chain. The first type is $X_1 \times \cdots\times X_{i1}\times X_{i2} \times\cdots \times X_k \subset X_1 \times \cdots\times X_i\times \cdots \times X_k \subset M$, assuming all factors are irreducible, where $X_{i1}\times X_{i2}$ is a maximal $(H_2)$-subspace in $X_i$ and the other factors are preserved. The second type is $ X_1\times \cdots\times X_i \times \cdots \times X_k \subset X_1\times \cdots\times X_i\times X_i \times \cdots \times X_k \subset M$, assuming all factors here are irreducible, where $X_i\subset X_i\times X_i$ is the diagonal embedding and again the other factors are preserved. The third type is $X_1 \times \cdots\times X_{i'}\times\cdots \times X_k \subset X_1 \times \cdots\times X_i\times \cdots \times X_k \subset M$, assuming all factors are irreducible, where $X_{i'}$ is an irreducible maximal $(H_2)$-subspace in $X_i$ and the other factors are preserved. All $(H_2)$-subspaces can be obtained by a composition of these three types of embeddings. \par From the table in \cite[pp.27-28]{MR2127948} we find that all maximal $(H_2)$-subspaces in $M$ satisfy \textbf{Condition C}. We then argue by induction and suppose that $M'=X_1 \times \cdots\times X_i \times\cdots\times X_k\subset M$ is $(H_2)$ and satisfies \textbf{Condition C}. When $M'$ contains only Type I, II, III factors other than projective spaces, using the block form together with the previous discussion we easily observe that all three types of maximal $(H_2)$-subspaces of $M'$ inherit \textbf{Condition C}. When $M'$ contains a hyperquadric factor $X_i=Q^s$, then from the table in \cite[pp.27-28]{MR2127948} we know $M$ must be $G(n,n), G^{II}(n,n)$ or $G^{III}(n,n)$, and the only maximal $(H_2)$-subspaces in $M'$ changing $X_i$ are $X_1 \times \cdots\times Q^{s} \times \cdots \times X_k \subset X_1 \times \cdots\times Q^{s}\times Q^{s}\times \cdots \times X_k=M' \subset M$, assuming there is another factor $Q^s$ in $M'$, such that the $Q^s$ in the subspace is diagonally embedded in $Q^s\times Q^s$, or $X_1 \times \cdots\times Q^{s'} \times \cdots \times X_k \subset X_1 \times \cdots\times Q^{s}\times \cdots \times X_k=M' \subset M$ for some $s'<s$. Since the embedding of a hyperquadric as a maximal $(H_2)$-subspace in a tube type ambient space is always of diagonal type, this case is easily settled. If $X_i$ is preserved, then the case follows directly from the case when $M'$ contains only Type I, II, III factors.
When $M'$ contains a factor $X_i=\mathbb{P}^s$, the only maximal $(H_2)$-subspace in $M'$ changing $X_i$ (if $X_i$ is preserved, this again follows from the cases discussed above) is $X_1 \times \cdots\times \mathbb{P}^{s} \times \cdots \times X_k \subset X_1 \times \cdots\times \mathbb{P}^{s}\times \mathbb{P}^{s}\times \cdots \times X_k \subset M$, assuming there is another factor $\mathbb{P}^s$ in $M'$, with $\mathbb{P}^s$ diagonally embedded in $\mathbb{P}^s\times \mathbb{P}^s$, since there is no maximal $(H_2)$-subspace of a projective space. For this case we discuss more as follows. \par We need to give more explanation for the cases concerning the diagonal embedding of a projective space, for which the preservation of \textbf{Condition C} is less obvious. From the table in \cite[pp.27-28]{MR2127948}, if $M=G(p,q)$ with $p\neq q$, all factors must be some $G(p',q')$ with $\frac{p'}{q'}=\frac{p}{q}$ and projective spaces. In this case, note that the embedding $\mathbb{P}^s \subset G(p',q')$ satisfies $p':q'=m:s-m+1$ for some $1<m<s$, and the factor $G(p',q')$ in an $(H_2)$-embedding satisfies $\frac{p}{q}=\frac{p'}{q'}$, so $m$ is uniquely determined; we conclude that the diagonal embedding of a projective space must be of the form $\mathbb{P}^s \subset \mathbb{P}^s \times \mathbb{P}^s \subset G(p',q')\times G(p',q')$. Therefore in this case we observe that in a chain of $(H_2)$-subspaces, \textbf{Condition C} is always preserved. If $M=G(n,n), G^{II}(n,n)$ or $G^{III}(n,n)$, then in the chain of $(H_2)$-subspaces the embedding of a projective space into a Type II or Type III space is uniquely determined by the dimension of the projective space, and a $\mathbb{P}^s$ cannot be a maximal $(H_2)$-subspace of a Type II space and a Type III space at the same time, which implies that a chain of maximal $(H_2)$-subspaces $\mathbb{P}^s\subset \mathbb{P}^s\times\mathbb{P}^s \subset X_1\times X_2$ with $X_1,X_2$ irreducible and $X_1\neq X_2$ cannot appear. Hence \textbf{Condition C} is also preserved. \par Hence the (weaker) gap rigidity holds whenever there is a $P$-invariant Zariski open subset $\mathcal{O}_o=Gr(\dim(X),T_o(M))-\mathcal{Z}_o$, for some $P$-invariant hypersurface $\mathcal{Z}_o$, such that $[T_o(X)]\in \mathcal{O}_o$. \end{proof} \begin{remark} We did not find a conceptual proof of the Lie bracket generating condition $(\dag)$ independent of the classification, and we observe that the $(H_2)$-condition is sufficient but not necessary: for example, $\mathbb{P}^2\times \mathbb{P}^2 \subset G(3,3)$ via the standard embedding is an $(H_1)$-subspace satisfying the Lie bracket generating condition, but it is not $(H_2)$. \end{remark} \section*{Acknowledgement} The work was part of the author's PhD thesis. The author would like to thank his supervisor, Professor Ngaiming Mok, for his guidance. \bibliographystyle{alpha}
{ "timestamp": "2020-12-17T02:17:45", "yymm": "2012", "arxiv_id": "2012.08926", "language": "en", "url": "https://arxiv.org/abs/2012.08926" }
\section{Acknowledgements} This work was supported by the National Key Research and Development Program of China (Grants No.\ 2016YFA0302700 and 2017YFA0304100), the National Natural Science Foundation of China (Grants No.\ 11874343, 11821404, 11774335, 61725504, 61805227, 61805228, 61975195 and U19A2075), the Anhui Initiative in Quantum Information Technologies (Grants No.\ AHY060300 and AHY020100), the Key Research Program of Frontier Science, CAS (Grant No.\ QYZDYSSW-SLH003), the Science Foundation of the CAS (No.\ ZDRW-XH-2019-1), the Fundamental Research Funds for the Central Universities (Grants No.\ WK2030380017, WK2030380015 and WK2470000026), and the CAS Youth Innovation Promotion Association (No.\ 2020447).
{ "timestamp": "2020-12-17T02:17:48", "yymm": "2012", "arxiv_id": "2012.08927", "language": "en", "url": "https://arxiv.org/abs/2012.08927" }
\section{Introduction} \label{sec:introduction}

Software maintenance has historically been the Achilles' heel of the software life cycle \cite{abreu95b}. Maintenance tasks can be seen as incremental modifications to a software system that aim to add or adjust some functionality, correct design flaws or fix bugs. It has been found that feature addition, modification, bug fixing, and design improvement can cost as much as 80\% of the total software development cost \cite{Travassos:1999:DDO:320384.320389}.

Code smells (CS), also called ``bad smells'', are associated with symptoms of software maintainability problems \cite{Yamashita2013d}. They often correspond to the violation of fundamental software design principles and negatively impact future software quality. Those design weaknesses may slow down software evolution (e.g. due to code misunderstanding) or increase the risk of bugs or failures in the future. In this context, the detection of CS or anti-patterns (undesirable patterns, said to be recipes for disaster \cite{Brown1998}) is a topic of special interest, since it prevents code misunderstanding and mitigates potential maintenance difficulties. According to the authors of \cite{Singh2017}, there is a subtle difference between a CS and an anti-pattern: the former is a kind of warning for the presence of the latter. Nevertheless, in the remainder of this paper, we will not explore that slight difference and will only refer to the CS concept.

Code smells have been catalogued. The most widely used catalog was compiled by Martin Fowler \cite{Fowler1999} and describes 22 CS. Other researchers, such as van Emden and Moonen \cite{Emden2002}, have subsequently proposed more CS. In recent years, CS have been cataloged for other object-oriented programming languages, such as Matlab \cite{Gerlitz2015DetectionAH} and Python \cite{Chen2016}, as well as Android-specific Java CS \cite{Palomba2017Android, Kessentini2017Android}, which confirms the increasing recognition of their importance.

Manual CS detection requires code inspection and human judgment, and is therefore unfeasible for large software systems. Furthermore, CS detection is influenced (and hampered) by the subjectivity of their definition, as reported by Mantyla et al. \cite{Mantyla2004}, based on the results of experimental studies. They observed the highest inter-rater agreement between evaluators for simple CS, but when the subjects were asked to identify more complex CS, such as \textit{Feature Envy}, they had the lowest coefficient of concordance. The main reason reported for this fact was that participants had no clear idea of what the \textit{Feature Envy} CS was. In other words, they suggested that experience may mitigate the subjectivity issue, and indeed they observed that experienced developers reported more complex CS than novices did. However, they also concluded that the commonsense CS detection rules expressed in natural language can themselves cause misinterpretation.

Automated CS detection, mainly in object-oriented systems, involves the use of source code analysis techniques, often metrics-based \cite{Lanza2006}. Despite research efforts dedicated to this topic in recent years, the availability of automatic detection tools for practitioners is still scarce, especially when compared to the number of existing detection methods (see section \ref{subsec:Evaluationtechniques}). Many researchers have proposed CS detection techniques.
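To make the metrics-based flavour of such techniques concrete, the following minimal sketch expresses one detection rule in Java. The metric names (ATFD, WMC, TCC) follow the \textit{God Class} detection strategy popularized by Lanza and Marinescu \cite{Lanza2006}, but the threshold constants and metric values below are illustrative assumptions rather than normative values.

\begin{verbatim}
// A minimal sketch of a metric-based detection rule, assuming
// metric values have already been extracted by some source code
// analyzer. Metrics follow the God Class strategy described by
// Lanza and Marinescu; the thresholds here are illustrative.
public final class GodClassRule {
    static final int    FEW       = 5;         // assumed threshold
    static final int    VERY_HIGH = 47;        // assumed threshold
    static final double ONE_THIRD = 1.0 / 3.0;

    // atfd: Access To Foreign Data (foreign attributes used)
    // wmc:  Weighted Method Count (sum of method complexities)
    // tcc:  Tight Class Cohesion, in [0, 1]
    public static boolean isGodClass(int atfd, int wmc, double tcc) {
        return atfd > FEW && wmc >= VERY_HIGH && tcc < ONE_THIRD;
    }

    public static void main(String[] args) {
        // A class accessing 12 foreign attributes, with WMC 80 and
        // low cohesion, would be flagged (prints "true"):
        System.out.println(isGodClass(12, 80, 0.15));
    }
}
\end{verbatim}

Note that the verdict depends entirely on the chosen thresholds, a point we return to in section \ref{subsec:Thresholdsdefinition}.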
However, most studies only target a small range of the existing CS, namely \textit{God Class}, \textit{Long Method} and \textit{Feature Envy}. Moreover, only a few studies are related to the application of calibration techniques in CS detection (see sections \ref{subsec:Codesmellsdetection} and \ref{subsec:Evaluationtechniques}). Considering the diversity of existing techniques for CS detection, it is important to group the different approaches into categories for a better understanding of the type of technique used. Thus, we will classify the existing approaches into seven broad categories, according to the classification proposed by Kessentini et al. \cite{Kessentini2014}: manual approaches, symptom-based approaches, metric-based approaches, probabilistic approaches, visualization-based approaches, search-based approaches and cooperative-based approaches.

A factor that exacerbates the complexity of CS detection is that practitioners have to reason at different abstraction levels: some CS are found at the class level, others at the method level, and still others encompass both method and class levels simultaneously (e.g. \textit{Feature Envy}). This means that once a CS is detected, its extent and impact must be conveyed to the developer, to allow them to take appropriate action (e.g. a refactoring operation). For instance, the representation of a \textit{Long Method} (circumscribed to a single method) will be rather different from that of a \textit{Shotgun Surgery}, which can spread across a myriad of classes and methods. Therefore, besides the availability of appropriate CS detectors, we need suggestive and customized CS visualization features, to help practitioners understand their manifestation. Nevertheless, there are only a few primary studies aimed at CS visualization. We classify CS visualization techniques into two categories: (i) the detection is done through a non-visual approach, the visualization being performed to show the CS location in the code itself; (ii) the detection itself is performed through a visual approach. In this Systematic Literature Review (SLR) we cover both categories. Most of the proposed CS visualization techniques show CS inside the code itself. This approach works for some systems, but for large legacy systems it is too detailed for a global refactoring strategy. Thus, a more macro approach is required that presents CS in a more aggregated form without losing detail. There are few primary studies in that direction.

Summing up, this review aims to answer the following main questions: \begin{itemize} \item What are the main techniques for the detection of CS and their respective effectiveness reported in the literature? \item What are the visual approaches and techniques reported in the literature to represent CS and therefore support practitioners in identifying their manifestation? \end{itemize}

The rest of this paper is organized as follows. Section \ref{sec:Relatedwork} describes the differences between this SLR and related ones. The subsequent section outlines the adopted research methodology (section \ref{sec:ResearchMethodology}). Then, SLR results and corresponding analyses are presented (section \ref{sec:ResultandAnalysis}). The answers to the Research Questions (RQ) are discussed in section \ref{sec:discussion}, and the concluding remarks, as well as the scope for future research, are presented in section \ref{sec:conclusion}.

\section{Related work} \label{sec:Relatedwork}

We will present the related work in chronological order. Zhang et al.
\cite{Zhang2010} presented a systematic review on CS, where more than 300 papers published from 2000 to 2009 in leading journals from IEEE, ACM, Springer and other publishers were investigated. After applying the selection criteria, the 39 most relevant ones were analyzed in detail. Different research parameters were investigated and presented from different perspectives. The authors revealed that \textit{Duplicated Code} is the most widely studied CS. Their results suggest that only a few empirical studies have been conducted to examine the impact of CS, which was therefore a phenomenon far from being fully understood.

Rattan et al. \cite{Rattan2013} performed a vast literature review to study software clones (aka \textit{Duplicate Code}) in general and software clone detection in particular. The study was based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and workshops. An empirical evaluation of clone detection tools/techniques is presented. Clone management, its benefits and its cross-cutting nature are reported. A number of studies pertaining to nine different types of clones are reported, as well as thirteen intermediate representations and 24 match detection techniques. In conclusion, the authors call for an increased awareness of the potential benefits of software clone management and identify the need to develop semantic and model clone detection techniques.

Rasool and Arshad \cite{Rasool2015} presented a review on several detection techniques and tools for mining CS. They classify the selected CS detection techniques and tools based on their detection methods and analyze the results of the selected techniques. This study presented a critical analysis, where the limitations of the different tools are identified. The authors concluded, for example, that there is still no consensus on the common definitions of CS by the research community and that there is a lack of standard benchmark systems for evaluating existing techniques.

Al Dallal \cite{AlDallal2015} performed a SLR on the possibilities of performing refactoring in object-oriented systems. The primary focus was on the detection of CS, and it covered 45 primary studies. Various approaches for the detection of CS were brought into the limelight. The work revealed the open source systems potentially used by the researchers, and the author found that, among those systems, JHotDraw is the most used by researchers to validate their results. Similarly, the Java language was found to be the most reported language in refactoring studies.

Fernandes et al. \cite{Fernandes2016} presented the findings of a SLR on CS detection tools. They found in the literature mentions of 84 tools, but only 29 of them were available online for download. Altogether, these tools aim to detect 61 CS, relying on at least six different detection techniques. The review results show that Java, C, and C++ are the top-three most covered programming languages for CS detection. The authors also present a comparative study of four detection tools with respect to two CS: \textit{Large Class} and \textit{Long Method}. Their findings suggest that tools provide redundant detection results for the same CS. Finally, this SLR concluded that \textit{Duplicated Code}, \textit{Large Class}, and \textit{Long Method} are the top-three CS that tools aim to detect.

Singh and Kaur \cite{Singh2017} published a SLR on refactoring with respect to CS.
Although the title appears to focus on refactoring, different types of techniques for identifying CS and anti-patterns are discussed in depth. The authors claim that this work is an extension of the one published in \cite{AlDallal2015}. They found 1053 papers in the first round, which they refined to 325 papers based on the title of the paper. Then, based on the abstract, they trimmed down that number to 267. Finally, a set of 238 papers was selected after applying inclusion and exclusion criteria. This SLR includes primary studies from the early ages of digital libraries until September 2015. Some conclusions regarding detection approaches were that 28.15\% of researchers applied automated detection approaches to discover the CS, while empirical studies were used by a total of 26.89\% of researchers. The authors also pointed out that Apache Xerces, JFreeChart and ArgoUML are among the most targeted systems, which, for obvious reasons, are usually open source. They also reckon that \textit{God Class} and \textit{Feature Envy} are the most recurrently detected CS.

Gupta et al. \cite{Gupta2017} performed a SLR based on publications from 1999 to 2016, in which 60 papers, screened out of 854, were analyzed in depth. The objectives of this SLR were to provide an extensive overview of existing research in the field of CS, identify the detection techniques and find out which CS deserve more attention in detection approaches. This SLR identified that the \textit{Duplicate Code} CS receives most research attention and that very few papers report on the impact of CS. The authors concluded that most papers focused on detection techniques and tools, and they correlated the detection techniques with the CS being detected. They also identified four CS from Fowler's catalog whose detection is not reported in the literature: \textit{Primitive Obsession}, \textit{Inappropriate Intimacy}, \textit{Incomplete Library Class} and \textit{Comments}.

Alkharabsheh et al. \cite{Alkharabsheh2018} performed a systematic mapping study where they analyzed 18 years of research into Design Smell detection based on a comprehensive set of 395 articles published in different proceedings, journals, and book chapters. Some key findings include the fact that all automatic detection tools described in the literature report Design Smells as a binary decision (having the smell or not), the lack of validation processes involving human experts and benchmarks, and evidence that Design Smell detection positively influences quality attributes. The authors also found an important problem: the absence of an extensive Design Smell corpus shared among the several detection tools.

Santos et al. \cite{SANTOS2018} investigated how CS impact software development. They reached three main results: the CS concept does not support the evaluation of design quality in practical software development activities, i.e., there is still a lack of understanding of the effects of CS on software development; there is no strong evidence correlating CS with some important software development attributes, such as maintenance effort; and the studies point out that human agreement on CS detection is low. The authors suggest that, to improve analysis on the subject, the area needs to better outline: (i) the factors affecting human evaluation of CS; and (ii) a classification of types of CS, grouping them according to relevant characteristics.

Sabir et al.
\cite{Sabir2018} investigated the key techniques employed to identify smells in different software engineering paradigms, from object-oriented (OO) to service-oriented (SO). They performed a SLR based on publications from January 2000 to December 2017 and selected 78 papers. The authors concluded that: the most used CS in the literature are \textit{Feature Envy}, \textit{God Class}, \textit{Blob}, and \textit{Data Class}; smells like the yo-yo problem, unnamed coupling, intensive coupling, and interface bloat received considerably less attention in the literature; and mainly two techniques are used in the literature for smell detection, namely static source code analysis and dynamic source code analysis, the latter based on dynamic threshold adaptation (e.g., using a genetic algorithm) instead of fixed thresholds.

The SLR proposed by Azeem et al. \cite{AZEEM2019} investigated the usage of ML approaches in the field of CS between 2000 and 2017. From an initial set of 2456 papers, they found that 15 papers actually adopted ML approaches. They studied them from four different perspectives: (i) the CS considered, (ii) the setup of the ML approaches, (iii) the design of the evaluation strategies, and (iv) a meta-analysis of the performance achieved by the models proposed so far. The authors concluded that: the most used CS in the literature are \textit{God Class}, \textit{Long Method}, \textit{Functional Decomposition}, and \textit{Spaghetti Code}; Decision Trees and Support Vector Machines are the most commonly used ML algorithms for CS detection; and several open issues and challenges exist that the research community should focus on in the future. Finally, they argue that there is still room for the improvement of ML techniques in the context of CS detection.

Kaur \cite{Kaur2019} examined 74 primary studies covering the impact of CS on software quality attributes. The results indicate that the impact of CS on software quality is not uniform, as different CS have opposite effects on different software quality attributes. The author observed that most empirical studies reported an incoherent impact of CS on quality. This contradictory impact may be due to the size of the data set considered or the programming language in which the data sets are implemented. Thus, Kaur concludes that the actual impact of CS on software quality is still unclear and needs more attention.

The scope and coverage of the current SLR go beyond those of the aforementioned SLRs, mainly because it also covers the CS visualization aspects. The latter are important to show programmers the scope of the detected CS, so that they can decide whether or not to proceed with refactoring. A good visualization becomes even more important if one takes into account the subjectivity existing in the definition of CS, which leads to the detection of many false positives.

\section{Research Methodology} \label{sec:ResearchMethodology}

In contrast to a non-structured review process, a SLR reduces bias and follows a precise and rigorous sequence of methodological steps to review research literature \cite{brereton2007lessons,KitchenhametAl.2007}. SLRs rely on well-defined and evaluated review protocols to extract, analyze, and document results, following the stages conveyed in Figure \ref{fig:SLR_stages}. This section describes the methodology applied in the phases of planning, conducting and reporting the review.
\subsection{Planning the Review} \label{subsec:PlanningReview} \noindent {\bf Identify the need for a systematic review.} Search for evidence in the literature regarding the main techniques for CS detection and visualization, in terms of: (i) strategies to detect CS, (ii) effectiveness of CS detection techniques, and (iii) approaches and techniques for CS visualization. \noindent \textbf{The Research Questions.} We aim to answer the following questions, by conducting a methodological review of existing research: \noindent \textbf{RQ1}. \textit{Which techniques have been reported in the literature for the detection of CS?} The list of the main techniques reported in the literature for the detection of CS can provide a comprehensive view for both practitioners and researchers, supporting them in selecting the technique that best fits their daily activities, as well as highlighting which techniques deserve more analysis effort in future experimental studies. \noindent \textbf{RQ2}. \textit{What literature has reported on the effectiveness of techniques aiming at detecting CS?} The goal is to compare the techniques among themselves, using parameters such as accuracy, precision and recall, as well as their classification as automatic, semi-automatic or manual. \noindent \textbf{RQ3}. \textit{What are the approaches and resources used to visualize CS and therefore support the practitioners to identify CS occurrences?} The visualization of CS occurrences is a key issue for industry adoption, due to the variety of CS, the possible locations within the code (e.g. in methods, in classes, among classes), and the size of the code to be inspected for a correct identification of CS. These three research questions are related to each other. In fact, any detection algorithm, after being implemented, should be tested and evaluated to verify its effectiveness, which makes RQ1 and RQ2 closely related. RQ3 encompasses two possible situations: i) CS detection is done through visual techniques, and ii) visual approaches are only used for representing CS previously detected with other techniques; therefore, there is also a close relationship between RQ1 and RQ3. \noindent \textbf{Publications Time Frame.} We conducted a SLR in journals, conference papers and book chapters from January 2000 to June 2019. \subsection{Conducting the Review} \label{subsec:ConductingReview} This phase is responsible for executing the review protocol. \noindent \textbf{Identification of research.} Based on the research questions, keywords were extracted and used to search the primary study sources. The search string is presented below and follows the same strategy as \cite{chen2011systematic}: \begin{quote} \textit{(``code smell" OR ``bad smell") AND (visualization OR visual OR representation OR identification OR detection) AND (methodology OR approach OR technique OR tool)} \end{quote} \noindent \textbf{Selection of primary studies.} The following steps guided the selection of primary studies. \textit{Stage 1 - Search string results automatically obtained from the engines} - Submission of the search string to the following repositories: ACM Digital Library, IEEE Xplore, ISI Web of Science, Science Direct, Scopus and Springer Link. The justification for the selection of these libraries is their relevance as sources in software engineering \cite{zhang2011identifying}. The search was performed using the specific syntax of each database, considering only the title, keywords, and abstract.
The search was configured in each repository to select only papers published within the prescribed period. The automatic search was complemented by a backward snowballing manual search, following the guidelines of Wohlin \cite{Wohlin2014}. Duplicates were discarded. \textit{Stage 2 - Read titles \& abstracts to identify potentially relevant studies} - Identification of potentially relevant studies, based on the analysis of title and abstract, discarding studies that are clearly irrelevant to the search. If there was any doubt about whether a study should be included or not, it was included for consideration at a later stage. \textit{Stage 3 - Apply inclusion and exclusion criteria on reading the introduction, methods and conclusion} - The studies selected in previous stages were reviewed by reading the introduction, methodology section and conclusion. Afterwards, the inclusion and exclusion criteria were applied (see Table \ref{table:TableInclusionCriteria} and Table \ref{table:TableExclusionCriteria}). At this stage, in case of doubt preventing a conclusion, the study was read in its entirety. \begin{table}[htbp] \caption{Inclusion criteria} \label{table:TableInclusionCriteria} \centering \smallskip \begin{tabular}{|c|p{5cm}|} \hline \bf{ Criterion} & \bf{Description}\\ \hline IC1 & The publication venue should be a journal, conference proceedings or book.\\ \hline IC2 & The primary study should be written in English.\\ \hline IC3 & The primary work is an empirical study or reports ``lessons learned'' (experience report).\\ \hline IC4 & If several papers report the same study, the latest one will be included.\\ \hline IC5 & The primary work addresses at least one of the research questions.\\ \hline \end{tabular} \end{table} \begin{table}[htbp] \caption{Exclusion criteria} \label{table:TableExclusionCriteria} \centering \begin{tabular}{|c|p{5cm}|} \hline \bf{Criterion} & \bf{Description}\\ \hline EC1 & Studies not focused on code smells.\\ \hline EC2 & Short paper (less than 2000 words, excluding numbers) or unavailable in full text.\\ \hline EC3 & Secondary and tertiary studies, editorials/prefaces, readers' letters, panels, and poster-based short papers.\\ \hline EC4 & Works published outside the selected time frame.\\ \hline EC5 & Code smells detected in non-object-oriented programming languages.\\ \hline \end{tabular} \end{table} The reliability of the application of the inclusion and exclusion criteria was assessed by applying Fleiss' Kappa \cite{Fleiss2013}. Fleiss' Kappa is a statistical measure for assessing the reliability of agreement between a fixed number of raters when classifying items. We used the Kappa statistic \cite{McHugh2012} to measure the level of agreement between the researchers. The Kappa result is based on the number of answers for which both observers produced the same classification \cite{Landis1977}. Its maximum value is 1, when the researchers are in almost perfect agreement, and it tends to zero or below when there is no agreement between them (Kappa can range from $-1$ to $+1$). The higher the value of Kappa, the stronger the agreement. Table \ref{table:TableKappaResults} shows the interpretation of this coefficient according to Landis \& Koch \cite{Landis1977}.
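For reference, with two raters the statistic reduces to Cohen's formulation, where $p_o$ is the observed proportion of agreement between the raters and $p_e$ is the proportion of agreement expected by chance:
\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]
Fleiss' Kappa generalizes this idea from two raters to any fixed number of raters classifying the same set of items.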
\begin{table}[htbp] \caption{Interpretation of the Kappa results} \label{table:TableKappaResults} \centering \smallskip \begin{tabular}{|c|l|} \hline \textbf{Kappa values} & \textbf{Degree of agreement}\\ \hline \textless 0.00 & Poor\\ \hline 0.00 - 0.20 & Slight\\ \hline 0.21 - 0.40 & Fair (Weak)\\ \hline 0.41 - 0.60 & Moderate\\ \hline 0.61 - 0.80 & Substantial (Good)\\ \hline 0.81 - 1.00 & Almost perfect (Very Good)\\ \hline \end{tabular} \end{table} We asked two senior researchers to individually classify a sample of 31 publications, to analyze the degree of agreement in the selection process through Fleiss' Kappa \cite{Fleiss2013}. The selected sample was the set of the most recent publications (last two years) from Stage 2. The result showed a substantial level of agreement between the two researchers (Kappa = 0.653). The 102 studies resulting from this phase are listed in Appendix \ref{AppendixB}. \textit{Stage 4 - Obtain primary studies and make a critical assessment of them} - A list of primary studies was obtained and later subjected to critical examination using the 8 quality criteria set out in Table \ref{table:TableQualityCriteria}. Some of these quality criteria were adapted from those proposed by Dyba and Dings{\o}yr \cite{Dyba2008}. In the QC1 criterion we evaluated venue quality based on its presence in the CORE rankings portal\footnote{http://www.core.edu.au/}. In the QC4 criterion, the relevance of the study to the community was evaluated based on the citations present in Google Scholar\footnote{https://scholar.google.com/}, using the method of Belikov and Belikov \cite{Belikov2015}. The grading of each of the 8 criteria was done on a dichotomous scale (``YES''=1 or ``NO''=0). For each selected primary study, its quality score was computed by summing up the scores of the answers to all 8 questions. A given paper satisfies the Quality Assessment criteria if it reaches a rating of 4 or higher. Among the 102 papers resulting from Stage 3, 19 studies [11, 16, 19, 22, 32, 36, 57, 71, 73, 77, 82, 83, 85, 86, 87, 90, 91, 95, 102] (see Appendix \ref{AppendixB}) were excluded because they did not reach the minimum score of 4 (Table \ref{table:ResultsQualityCriteria}), while 83 passed the Quality Assessment criteria. All 83 selected studies are listed in Appendix \ref{AppendixA} and the details of the application of the quality assessment criteria are presented in Appendix \ref{AppendixC}.
\begin{table}[h] \caption{Quality criteria} \label{table:TableQualityCriteria} \centering \smallskip \begin{tabular}{|c|p{6cm}|} \hline \textbf{Criterion} & \textbf{Description}\\ \hline QC1 & Is the venue recognized in the CORE rankings portal?\\ \hline QC2 & Was the data collected in a way that addressed the research issue?\\ \hline QC3 & Is there a clear statement of findings?\\ \hline QC4 & Is the relevance for research or practice recognized by the community?\\ \hline QC5 & Is there an adequate description of the validation strategy?\\ \hline QC6 & Does the study contain the required elements to allow replication?\\ \hline QC7 & Are the evaluation strategies and metrics used explicitly reported?\\ \hline QC8 & Is a CS visualization technique clearly defined?\\ \hline \end{tabular} \end{table} \begin{table}[h] \caption{Number of studies by score obtained after application of the quality assessment criteria} \label{table:ResultsQualityCriteria} \centering \smallskip \begin{tabular}{|c|c|c|} \hline \textbf{Resulting score} & \textbf{Number of studies} & \textbf{\% studies }\\ \hline 1 & 3 & 2.9\%\\ \hline 2 & 4 & 3.9\%\\ \hline 3 & 12 & 11.8\%\\ \hline 4 & 15 & 14.7\%\\ \hline 5 & 30 & 29.4\%\\ \hline 6 & 32 & 31.4\%\\ \hline 7 & 6 & 5.9\%\\ \hline 8 & 0 & 0.0\%\\ \hline \end{tabular} \end{table} \noindent {\bf Data extraction.} All relevant information on each study was recorded on a spreadsheet. This information was helpful to summarize the data and map it to its source. The following data were extracted from the studies: (i) name and authors; (ii) year; (iii) type of article (journal, conference, book chapter); (iv) name of conference, journal or book; (v) number of Google Scholar citations at the time of writing this paper; (vi) answers to the research questions; (vii) answers to the quality criteria. \noindent {\bf Data Synthesis.} This synthesis is aimed at grouping findings from the studies in order to: identify the main concepts about CS detection and visualization, and conduct a comparative analysis of study characteristics, the type of method adopted, and the issues regarding the three research questions (\textit{RQ1}, \textit{RQ2} and \textit{RQ3}) in each study. Other information was synthesized when necessary. We used the meta-ethnography method \cite{Noblit1988} as a reference for the process of data synthesis. \begin{figure*}[!ht] \centering {\epsfig{file = IMAGES/SLR_stages.png, width = 14cm}} \caption{Stages of the study selection process} \label{fig:SLR_stages} \end{figure*} \noindent {\bf Conducting the Review.} We started the review with an automatic search followed by a manual search, to identify potentially relevant studies, and afterwards applied the inclusion/exclusion criteria. We had to adapt the search string in some engines without losing its primary meaning and scope. The manual search consisted of studies published in conference proceedings, journals and books that were included by the authors through backward snowballing of the primary studies. These studies were equally analyzed regarding their titles and abstracts. Figure \ref{fig:SLR_stages} conveys them as 17 studies. We tabulated everything on a spreadsheet so as to facilitate the subsequent phase of identifying potentially relevant studies. Figure \ref{fig:SLR_stages} presents the results obtained from each electronic database used in the search, which amounted to 1866 articles considering all databases.
\noindent {\bf Potentially Relevant Studies.} The results obtained from both the automatic and manual search were included on a single spreadsheet. Papers with identical title, author(s), year and abstract were discarded as redundant. At this stage, we registered a total of 1883 articles, namely 1866 from the automated search plus 17 from the separate manual search \textit{(Stage 1)}. We then read titles and abstracts to identify relevant studies, resulting in 161 papers \textit{(Stage 2)}. At \textit{Stage 3} we read the introduction, methodology and conclusion of each study and then applied the inclusion and exclusion criteria, resulting in 102 papers. In \textit{Stage 4}, after applying the quality criteria \textit{(QC)}, the remaining 83 papers were analyzed to answer the three research questions (RQ1, RQ2 and RQ3).

\section{Results and Analysis} \label{sec:ResultandAnalysis}

This section presents the results of this SLR to answer research questions RQ1, RQ2 and RQ3, based on the quality criteria and findings (F). In Figure \ref{fig:Findings-resume} we present a summary of the main findings. Figure \ref{fig:slrfinal} conveys the selected studies and the respective research questions they focus on. As can be seen in the same figure, 72 studies addressed issues related to RQ1, while 61 studies discussed RQ2 issues and, finally, 17 papers addressed RQ3 issues. All selected studies are listed in Appendix \ref{AppendixA} and referenced as \textit{S} followed by the paper number. \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/SLR_Final.png, width = 8cm}} \caption{Selected studies per research question (RQ)} \label{fig:slrfinal} \end{figure} \subsection{Overview of studies} \label{subsec:Overviewstudies} The study selection process (Figure \ref{fig:SLR_stages}) resulted in 83 studies selected for data extraction and analysis. Figure \ref{fig:Pub-year} depicts the temporal distribution of primary studies. Note that 78.3\% of the primary studies have been published after 2009 (the last 10 years) and that 2016 and 2018 were the years with the largest number of published studies. This indicates that, although the human factor in software engineering has been acknowledged and researched since the 1970s, research focusing on CS detection is much more recent, with the vast majority of the studies developed in the last decade. \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/Pub-year.png, width = 8cm}} \caption{Trend of publication years} \label{fig:Pub-year} \end{figure} In relation to the type of publication venue (Figure \ref{fig:pub-venue}), the majority of the studies were published in conference proceedings (76\%), followed by journals (23\%) and books (1\%). \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/pub-venue1.png, width = 8cm}} \caption{Type of publication venue} \label{fig:pub-venue} \end{figure} Table \ref{table:Tabletenmostcitedstudies} presents the top ten studies included in the review, according to Google Scholar citations in September 2019\footnote{Data obtained on 22/09/2019}. These studies are evidence of the relevance of the issues discussed in this SLR and of the influence they exert on the literature, as can be confirmed by their respective citation numbers. The same table shows the distribution of these most relevant studies according to the addressed research questions. In the following paragraphs, we briefly describe these studies, by decreasing order of impact.
\begin{table}[htpb] \caption{Top-ten cited papers, according to Google Scholar} \label{table:Tabletenmostcitedstudies} \centering \smallskip \begin{tabular}{|c|c|c|} \hline \textbf{Studies} & \textbf{Cited by} & \textbf{Research Question}\\ \hline S9 & 964 & RQ1 and RQ3 \\ \hline S26 & 577 & RQ1 and RQ2\\ \hline S3 & 562 & RQ1 and RQ2 \\ \hline S1 & 423 & RQ3 \\ \hline S10 & 245 & RQ1 and RQ2 \\ \hline S4 & 240 & RQ1 and RQ2 \\ \hline S7 & 184 & RQ3 \\ \hline S21 & 174 & RQ1 and RQ2 \\ \hline S45 & 157 & RQ1 and RQ2 \\ \hline S15 & 156 & RQ1 and RQ3 \\ \hline \end{tabular} \end{table} A brief review of each of the top cited papers follows: \textbf{[S9]} - RQ1 and RQ3 are addressed in this paper, which got the highest number of citations. It introduces a systematic way of detecting CS by defining detection strategies based on four steps: Step 1: Identify Symptoms; Step 2: Select Metrics; Step 3: Select Filters; Step 4: Compose the Detection Strategy. It describes how to explore quality metrics, set thresholds for these metrics, and create a set of rules to identify CS. Finally, visualization techniques are used to present the detection result, based on several metaphors. \textbf{[S26]} - The DECOR method for specifying and detecting code and design smells is introduced. This approach allows specifying smells at a high level of abstraction using a consistent vocabulary and a domain-specific language for automatically generating detection algorithms. Four design smells are identified by DECOR, namely \textit{Blob}, \textit{Functional Decomposition}, \textit{Spaghetti Code}, and \textit{Swiss Army Knife}, and the algorithms are evaluated in terms of precision and recall. This study addresses RQ1 and RQ2, and is one of the most used studies for validation/comparison of results in terms of precision and recall of detection algorithms. \textbf{[S3]} - Issues related to RQ1 and RQ2 are discussed. This paper proposes a mechanism called ``detection strategies'' that produces metric-based rules to detect CS, implemented in the IPLASMA tool. This method captures deviations from good design principles and consists of defining a list of rules based on metrics and their thresholds for detecting CS. \textbf{[S1]} - A visualization approach supported by the jCOSMO tool, a CS browser that performs fully automatic detection and visualizes smells in Java source code, is proposed. This study focuses its attention on two CS related to the Java programming language, i.e., \textit{instanceof} and \textit{typecast}. This paper discusses issues related to RQ3. \textbf{[S10]} - This paper addresses RQ1 and RQ2 and proposes the Java Anomaly Detector (JADET) tool for detecting object usage anomalies in programs. JADET uses concept analysis to infer properties that are nearly always satisfied, and it reports the failures as anomalies. This approach is based on identifying usage patterns. \textbf{[S4]} - This paper presents a metric-based heuristic detection technique capable of identifying instances of two CS, namely \textit{Lazy Class} and \textit{Temporary Field}. It proposes a template to describe CS systematically, which consists of three main parts: a CS name, a text-based description of its characteristics, and heuristics for its detection. An empirical study is also reported, to justify the choice of metrics and thresholds for detecting CS. This paper discusses issues related to RQ1 and RQ2.
\textbf{[S7]} - This paper addresses RQ3 and presents a visualization framework for quality analysis and understanding of large-scale software systems. Programs are represented using metrics. The authors claim that their semi-automatic approach is a good compromise between fully automatic analysis techniques, which can be efficient but lose track of context, and pure human analysis, which is slow and inaccurate. \textbf{[S21]} - This paper proposes an approach based on Bayesian Belief Networks (BBN) to specify and detect CS in programs. Uncertainty is managed by BBN that implement the detection rules of DECOR [S26]. The detection outputs are probabilities that a class is an occurrence of a defect type. This paper discusses issues related to RQ1 and RQ2. \textbf{[S45]} - This paper addresses RQ1 and RQ2, and proposes an approach called HIST (Historical Information for Smell deTection) to detect five different CS (\textit{Divergent Change}, \textit{Shotgun Surgery}, \textit{Parallel Inheritance}, \textit{Feature Envy}, and \textit{Blob}). HIST explores change history information mined from versioning systems to detect CS, by analyzing co-changes among source code artifacts over time. \textbf{[S15]} - This last paper in the top ten most cited ones addresses RQ1 and RQ3. It presents an Eclipse plug-in (JDeodorant) that automatically identifies Type-Checking CS in Java source code, and allows their elimination by applying appropriate refactorings. JDeodorant is one of the most used tools for validation/comparison of results in terms of precision and recall of detection algorithms.

\subsection{Approach for CS detection (\textbf{F1})} \label{subsec:ApproachCSdetection}

The first finding to be analyzed is the approach applied to detect CS, that is, the steps required to accomplish the detection process. For example, in the metric-based approach \cite{Lanza2006}, we need to know the set of source code metrics and corresponding thresholds for the target CS. As stated in section \ref{sec:introduction}, we classify the existing approaches for CS detection into seven broad categories, according to the classification presented by Kessentini et al. \cite{Kessentini2014}: metric-based approaches, search-based approaches, symptom-based approaches, visualization-based approaches, probabilistic approaches, cooperative-based approaches and manual approaches. Classifying studies in one of the seven categories is not an easy task because some studies use intermediate techniques for their final technique. For example, several studies classified as symptom-based approaches use symptoms to describe CS, although detection is performed through a metric-based approach. Table \ref{table:CSdetectionapproaches} shows the classification of the studies in the seven broad categories. The most used approaches are search-based, metric-based, and symptom-based, being used in 30.1\%, 24.1\% and 19.3\% of the studies, respectively. The least used approaches are the cooperative-based and the manual ones, each being used in only one of the selected studies.
\begin{table}[htpb] \caption{CS detection approaches used} \label{table:CSdetectionapproaches} \centering \smallskip \begin{tabular}{|p{2cm}|p{1.1cm}|c|p{2cm}|} \hline \textbf{Approaches} & \textbf{Nº of studies} & \textbf{\% Studies} & \textbf{Studies}\\ \hline Search-Based & 25 & 30.1\% & S5,S6,S14,S22, S24,S28,S33,S34, S36,S37,S42,S43, S45,S48,S51,S53, S55,S56,S71,S74, S75,S77,S78,S79, S83\\ \hline Metric-Based & 20 & 24.1\% & S3,S9,S29,S38, S44,S47,S49,S52, S58,S60,S63,S64, S66,S67,S68,S70, S72,S73,S81,S82\\ \hline Symptom-based & 16 & 19.3\% & S4,S8,S13,S15, S20,S23,S26,S30, S31,S32,S52,S57, S59,S61,S62,S69\\ \hline Visualization-based & 12 & 14.5\% & S1,S2,S7,S11, S12,S16,S17,S19, S21,S39,S46,S76\\ \hline Probabilistic & 10 & 12.0\% & S10,S18,S25,S27, S35,S40,S50,S54, S65,S80\\ \hline Cooperative-based & 1 & 1.2\% & S41\\ \hline Manual & 1 & 1.2\% & S52\\ \hline \end{tabular} \end{table} \subsubsection{Search-based approaches} Search-based approaches are inspired by contributions in the domain of Search-Based Software Engineering (SBSE). SBSE uses search-based approaches to solve optimization problems in software engineering. Most techniques in this category apply ML algorithms. The major benefit of ML-based approaches is that they do not require extensive expert knowledge and interpretation. However, the success of these techniques depends on the quality of the data sets available for training the ML algorithms. \subsubsection{Metric-based approaches} The metric-based approach is one of the most commonly used. The use of quality metrics to improve the quality of software systems is not a new idea, and metric-based CS detection techniques have been proposed for more than a decade. This approach consists of creating a rule, based on a set of metrics and respective thresholds, to detect each CS. The main problem with this approach is that there is no consensus on the definition of CS and, as such, there is no consensus on standard threshold values for CS detection. Finding the best-fit threshold values for each metric is complicated because it requires a significant calibration effort \cite{Kessentini2014}. Threshold values are one of the main causes of the disparity in the results of different techniques. \subsubsection{Symptom-based approaches} To describe code-smell symptoms, different symptoms/notions are involved, such as class roles and structures. Symptom descriptions are later translated into detection algorithms. Kessentini et al. \cite{Kessentini2014} identify two main limitations of this approach: \begin{itemize} \item there is no consensus in defining symptoms; \item for an exhaustive list of CS, the number of possible CS to be manually described, characterized with rules and mapped to detection algorithms can be very large; as a consequence, symptom-based approaches are considered time-consuming and error-prone. \end{itemize} Other authors \cite{Rasool2015} add more limitations, such as the analysis and interpretation effort required to select adequate threshold values when converting symptoms into detection rules. The precision of these techniques is low because of the different interpretations of the same symptoms. \subsubsection{Visualization-based approaches} Visualization-based techniques usually consist of a semi-automated process to support developers in the identification of CS. The data visually represented to this end is mainly enriched with metrics (metric-based approach) through specific visual metaphors.
This approach has the advantage of using visual metaphors, which reduces the complexity of dealing with a large amount of data. The disadvantages are those inherent to human intervention: (i) they require great human expertise, (ii) they are time-consuming, (iii) they demand significant human effort, and (iv) they are error-prone. Thus, these techniques have scalability problems for large systems. \subsubsection{Probabilistic approaches} Probabilistic approaches consist essentially of determining the probability of an event, for example, the probability of a class being a CS. Some techniques rely on the use of BBN, while others treat the CS detection process as a fuzzy-logic problem or use frequent pattern trees. \subsubsection{Cooperative-based approaches} Cooperative-based CS techniques are primarily aimed at improving accuracy and performance in CS detection. This is achieved by performing various activities cooperatively. The only study that uses a cooperative approach is Boussaa et al. [S41]. According to the authors, the main idea is to evolve two populations simultaneously, where the first one generates a set of detection rules (combinations of quality metrics) that maximizes the coverage of a base of CS examples, and the second one maximizes the number of generated ``artificial'' CS that are not covered by solutions (detection rules) of the first population [S41]. \subsubsection{Manual approaches} \label{subsubsec:Manualapproaches} Manual techniques are human-centric, tedious, time-consuming, and error-prone. These techniques require great human effort and are therefore not effective for detecting CS in large systems. According to the authors of \cite{Kessentini2014}, another important issue is that locating CS manually has been described as more of a human intuition than an exact science. The only study that uses a manual approach is [S52], where a catalog for the detection and handling of model smells for MATLAB/Simulink is presented. In this study, three types of techniques are used (manual, metric-based and symptom-based), according to the type of smell. The authors note that the detection of certain smells like \textit{Vague Name} or \textit{Non-optimal Signal Grouping} can only be performed by manual inspection, because of the expressiveness of natural language.

\subsection{Dataset availability (\textbf{F2})} \label{subsec:datasetavailability}

The second finding is whether the underlying dataset is available, a precondition for study replication. When we talk about the dataset, i.e. the oracle, we are considering the software systems where CS and anti-patterns were detected, the type and number of CS and anti-patterns detected, and other data needed for the method used; e.g. for a metric-based approach the dataset must include the metrics for each application. Only 12 studies provide a link to their dataset; however, for two of them, [S28] and [S32], the links are no longer active. Thus, only 12.0\% of the studies (10 out of 83: [S18, S27, S38, S51, S56, S59, S69, S70, S74, S82]) effectively provide the dataset. Another important characteristic of the dataset is the set of software systems on which CS detection is performed. The number of software systems used in each study varies widely, ranging from studies using only one system to studies using 74 Java software systems and 184 Android apps with source code hosted in open source repositories. Most studies (83.1\%) use open-source software.
Proprietary software is used in 3.6\% of the studies, and both types, open-source and proprietary, are used in another 3.6\%. It should be noted that 9.7\% of the studies do not make any reference to the software systems being analyzed. \begin{table}[htpb] \caption{Top ten open-source software projects used in the studies} \label{table:open-sourcesoftwareprojects} \centering \smallskip \begin{tabular}{|l|c|c|} \hline \textbf{Open-source software} & \textbf{Nº of Studies} & \textbf{\% Studies}\\ \hline Apache Xerces & 28 & 33.7\% \\ \hline GanttProject & 14 & 16.9\% \\ \hline ArgoUML & 11 & 13.3\% \\ \hline Apache Ant & 10 & 12.0\% \\ \hline JFreeChart & 8 & 9.6\% \\ \hline Log4J & 7 & 8.4\% \\ \hline Azureus & 7 & 8.4\% \\ \hline Eclipse & 7 & 8.4\% \\ \hline JUnit & 5 & 6.0\% \\ \hline JHotDraw & 5 & 6.0\% \\ \hline \end{tabular} \end{table} Table \ref{table:open-sourcesoftwareprojects} presents the most used open-source software in the studies, as well as the number of studies where they are used and the overall percentage. Apache Xerces is the most used (33.7\% of the studies), followed by GanttProject with 16.9\%, ArgoUML with 13.3\%, and Apache Ant, used in 12.0\% of the studies.

\subsection{Programming language (\textbf{F3})} \label{subsec:programminglanguage}

In our research we do not make any restriction regarding the object-oriented programming language that supports the detection of CS. Thus, CS detection is reported for 7 languages, in addition to techniques that are language-independent (3 studies) and one study [S66] that targets Android apps without specifying the language, as shown in Figure \ref{fig:Program-Languages}. \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/Program-Languages.png, width = 8.4cm}} \caption{Programming languages and number of studies that use them} \label{fig:Program-Languages} \end{figure} Java is used as the target language for CS detection in 77.1\% of the studies (64 out of 83), making it clearly the most used one. C\# is the second most used programming language, with 6 studies (7.2\%); the third most used is C/C++, with 5 studies (6.0\%). JavaScript and Python are each used in 2 studies (2.4\%). Finally, MatLab and Java Android are each used in only one study (1.2\%). In total, we found seven different programming languages used as targets for CS detection. In our analysis we found that 3.6\% of the studies (3 out of 83: [S20, S32, S47]) present a language-independent CS detection technique. When we related the language-independent studies with the approach used, we found that two of the three used a symptom-based approach and one a metric-based approach. These results are in line with what was expected, since a symptom-based approach is the most amenable to adaptation to different programming languages. \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/Num-Languages-study.png, width = 8.4cm}} \caption{Number of languages used in each study} \label{fig:Num-Languages-study} \end{figure} Multi-language support for CS detection is also very limited, as shown in Figure \ref{fig:Num-Languages-study}. In addition to the three language-independent studies, only one study (1.2\%) of the 83 analyzed, [S28], supports 3 programming languages. Five studies (6.0\%: [S3, S9, S12, S15, S57]) detect CS in 2 languages, and 69 studies (83.1\%) detect CS in only one programming language. Five studies (6.0\%) explain the detection technique but do not refer to any language.
When we analyze the 5 studies that do not indicate any programming language, we find that all use a visualization-based approach, i.e., 41.7\% (5 out of 12) of the studies that use visualization techniques do not indicate any programming language.

\subsection{Code smells detected (\textbf{F4})} \label{subsec:Codesmellsdetection}

Several authors use different names for the same CS, so to simplify the analysis we have grouped the different CS with the same meaning into one; for example, \textit{Blob}, \textit{Large Class} and \textit{God Class} were all grouped into \textit{God Class}. The description of the CS can be found in Appendix \ref{AppendixD}. In Table \ref{table:Code-smells-detected} we can see the CS that are detected in more than 3 studies, the number of studies in which they are detected and the respective percentage. As we have already mentioned in subsection \ref{subsec:programminglanguage}, in this systematic review we do not make any restriction regarding the object-oriented programming language used. Thus, considering all object-oriented programming languages, 68 different CS are detected, many more than the 22 described by Fowler \cite{Fowler1999}. \textit{God Class} is the most detected CS, being detected in 51.8\% of the studies, followed by \textit{Feature Envy} and \textit{Long Method} with 33.7\% and 26.5\%, respectively. \begin{table}[htpb] \caption{Code smells detected in more than 3 studies} \label{table:Code-smells-detected} \centering \smallskip \begin{tabular}{|p{4.0cm} |c|c|} \hline \textbf {Code smell} & \textbf{Nº of studies} & \textbf{\% Studies}\\ \hline God Class (Large Class or Blob) & 43 & 51.8\% \\ \hline Feature Envy & 28 & 33.7\% \\ \hline Long Method & 22 & 26.5\% \\ \hline Data Class & 18 & 21.7\% \\ \hline Functional Decomposition & 17 & 20.5\% \\ \hline Spaghetti Code & 17 & 20.5\% \\ \hline Long Parameter List & 12 & 14.5\% \\ \hline Swiss Army Knife & 11 & 13.3\% \\ \hline Refused Bequest & 10 & 12.0\% \\ \hline Shotgun Surgery & 10 & 12.0\% \\ \hline Code Clone/Duplicated Code & 9 & 10.8\% \\ \hline Lazy Class & 8 & 9.6\% \\ \hline Divergent Change & 7 & 8.4\% \\ \hline Dead Code & 4 & 4.8\% \\ \hline Switch Statement & 4 & 4.8\% \\ \hline \end{tabular} \end{table} All 68 detected CS are listed in Appendix \ref{AppendixE}, as well as the number of studies in which they are detected, their percentage, and the programming languages in which they are detected. When we analyzed the CS detected in each study, we found that the number is low, with an average of 3.6 CS per study and a mode of 1 CS. Only study [S57], with a symptom-based approach, detects all 22 CS described by Fowler \cite{Fowler1999}. Figure \ref{fig:CS-per-study} shows the number of CS detected by number of studies. We can see that the most common case is the detection of a single smell (24 studies), while 4 smells are detected in 13 studies and 3 smells in 11 studies. The detection of 12, 13, 15 and 22 smells is performed in only one study each. It should be noted that 5 studies do not indicate which CS they detected. It is also important to note that these 5 studies use a visualization-based approach. \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/CS-per-study.png, width = 8cm}} \caption{Number of code smells detected by number of studies } \label{fig:CS-per-study} \end{figure}

\subsection{Machine Learning techniques used (\textbf{F5})} \label{subsec:MachineLearningtechniques }

ML algorithms present many variants and different parameter configurations, making it difficult to compare them.
For example, the Support Vector Machines algorithm has variants such as SMO, LibSVM, etc. Decision Trees can use various algorithms such as C5.0, C4.5 (J48), CART, ID3, etc. As the algorithms are presented with different levels of detail in the studies, for a better understanding of the algorithm used we classify the ML algorithms into their main categories, creating 9 groups, as shown in Table \ref{table:ML-algorithms-used}. The table shows each ML algorithm, the number and percentage of studies using it, and the IDs of the studies that use it. \begin{table}[htpb] \caption{ML algorithms used in the studies} \label{table:ML-algorithms-used} \centering \smallskip \begin{tabular}{|p{2.5cm}|p{1cm}|p{1.1cm}|p{2.3cm}|} \hline \textbf{ML algorithm} & \textbf{Nº of Studies} & \textbf{\% Studies} & \textbf{Studies}\\ \hline Genetic Programming & 9 & 10.8\% & S31, S41, S43, S45, S58, S64, S66, S70, S79 \\ \hline Decision Tree & 8 & 9.6\% & S6, S28, S36, S40, S53, S56, S77, S83 \\ \hline Support Vector Machines (SVM) & 6 & 7.2\% & S33, S34, S36, S56, S71, S77 \\ \hline Association Rules & 6 & 7.2\% & S36, S42, S50, S54, S56, S77 \\ \hline Bayesian Algorithms & 5 & 6.0\% & S18, S27, S36, S56, S77 \\ \hline Random Forest & 3 & 3.6\% & S36, S56, S77 \\ \hline Neural Network & 2 & 2.4\% & S74, S75 \\ \hline Regression models & 1 & 1.2\% & S22 \\ \hline Artificial Immune Systems (AIS) & 1 & 1.2\% & S24 \\ \hline \end{tabular} \end{table} Of the 83 primary studies analyzed, 35\% (29 out of 83: [S6, S18, S22, S24, S27, S28, S31, S33, S34, S36, S40, S41, S42, S43, S45, S50, S53, S54, S56, S58, S64, S66, S70, S71, S74, S75, S77, S79, S83]) use ML techniques for CS detection. Except for 3 studies [S36, S56, S77] where multiple ML algorithms are used, the other 26 studies use only one algorithm for CS detection. The most widely used algorithms are genetic algorithms (9 out of 83: [S31, S41, S43, S45, S58, S64, S66, S70, S79]) and Decision Trees (8 out of 83: [S6, S28, S36, S40, S53, S56, S77, S83]), which are used in 10.8\% and 9.6\%, respectively, of the analyzed studies. A possible reason why genetic algorithms are the most used is that they can both generate the CS detection rules and find the best threshold values to be used in those rules. Regarding Decision Trees, their popularity is likely due to the easy interpretation of the resulting models, mainly in the C4.5/J48/C5.0 variants. The third most used ML algorithms, used in 7.2\% of the studies, are Support Vector Machines (SVM) (6 out of 83: [S33, S34, S36, S56, S71, S77]) and association rules (6 out of 83: [S36, S42, S50, S54, S56, S77]), with Apriori and JRip being the most used. Bayesian Algorithms are the fifth most used algorithm with 6.0\% (5 out of 83: [S18, S27, S36, S56, S77]). The other 4 ML algorithms that were also used are Random Forest (in 3 studies), Neural Network (in 2 studies), Regression models (in 1 study) and Artificial Immune Systems (AIS) (in 1 study).

\subsection{Evaluation of techniques (\textbf{F6})} \label{subsec:Evaluationtechniques}

The evaluation of the technique used is an important factor to assess its effectiveness and consequently to choose the best technique or tool. The main metrics used to evaluate the techniques are accuracy, precision, recall and F-measure.
These 4 metrics are calculated based on the true positive (TP), false positive (FP), false negative (FN), and true negative (TN) instances of detected CS, according to the following formulas: \begin{itemize} \item Accuracy = (TP + TN) / (TP + FP + FN + TN) \item Precision = TP / (TP + FP) \item Recall = TP / (TP + FN) \item F-measure = 2 * (Recall * Precision) / (Recall + Precision) \end{itemize} In the 83 articles analyzed, 86.7\% (72 studies) evaluated the technique used and 13.3\% (11 studies) did not. Table \ref{table:metrics-evaluate-techniques} shows the most used evaluation metrics. Precision is the most used, appearing in 46 studies (55.4\%), followed by recall in 44 studies (53.0\%) and F-measure in 17 studies (20.5\%). It should be noted that 28 studies (33.7\%) use other metrics for evaluation, such as the number of detected defects, Area Under ROC, Standard Error (SE), Mean Square Error (MSE), Root Mean Squared Prediction Error (RMSPE), Prediction Error (PE), etc. \begin{table}[htpb] \caption{Metrics used to evaluate the detection techniques} \label{table:metrics-evaluate-techniques} \centering \smallskip \begin{tabular}{|l|c|c|} \hline \textbf {Metric} & \textbf{Nº of studies} & \textbf{\% Studies}\\ \hline Precision & 46 & 55.4\% \\ \hline Recall & 44 & 53.0\% \\ \hline F-measure & 17 & 20.5\% \\ \hline Accuracy & 10 & 12.0\% \\ \hline Other & 28 & 33.7\% \\ \hline Without evaluation & 11 & 13.3\% \\ \hline \end{tabular} \end{table} In recent years the most used metrics have been precision and recall, but until 2010 few studies based their evaluations on these metrics, presenting only the CS detected. When we analyze the evaluations of the different techniques, we verify that the results depend on the applications used to create the oracle and on the CS detected, which is why several studies have chosen to present mean precision and recall values. Regarding the different approaches used, we can conclude that: \begin{enumerate}[noitemsep] \item in manual approaches and cooperative-based approaches, since we only have one study for each, we cannot draw conclusions; \item in the visualization-based approaches, most of the evaluations presented are qualitative, and almost half of the studies do not present an evaluation; \item in relation to the other 4 approaches (probabilistic, metric-based, symptom-based, and search-based), all present at least one study/technique with 100\% precision and recall results; \item it is difficult to make comparisons between the different techniques since, except for studies by the same author(s), they all use different oracles. \end{enumerate} The usual way to build an oracle is to choose a set of software systems (typically open source), choose the CS to be detected, and ask a group of MSc students (3, 4 or 5 students), supervised by experts (e.g. software engineers), to identify the occurrences of the smells in those systems. In case of doubt about a CS candidate, either the expert decides or the group reaches a consensus on whether the candidate is a smell or not. As can be seen, the creation of an oracle is not an easy task, because it requires a tremendous amount of manual work in the detection of CS, and it suffers from all the problems of a manual approach (see section \ref{subsubsec:Manualapproaches}), mainly subjectivity.
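Returning to the evaluation metrics defined at the beginning of this subsection, the following minimal Python sketch (our own illustration; the confusion-matrix counts are hypothetical) shows how the four metrics are computed for a single detection run:
\begin{verbatim}
def detection_metrics(tp, fp, fn, tn):
    """Standard evaluation metrics for a CS detection technique."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * recall * precision / (recall + precision)
                 if (recall + precision) else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f-measure": f_measure}

# Hypothetical God Class run: 40 smells found, 10 false alarms,
# 8 smells missed, 142 clean classes correctly left unflagged.
print(detection_metrics(tp=40, fp=10, fn=8, tn=142))
\end{verbatim}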
For a rigorous comparison of the evaluation of the different techniques, it is necessary to use common datasets and oracles (see section \ref{subsec:datasetavailability}), which does not happen today. \subsection{Detection tools (\textbf{F7})} \label{subsec:Detectiontools} Comparing the results of CS detection tools is important to understand the performance of the techniques associated with each tool and, consequently, to know which one is the best. It is also important to create tools that allow us to replicate studies. When we analyzed which studies created a detection tool, we found that 61.4\% (51 out of 83) of the studies developed a tool, as shown in Table \ref{table:developedtoolapproach}. \begin{table}[htpb] \caption{Number of studies that developed a tool and its approach} \label{table:developedtoolapproach} \centering \smallskip \begin{tabular}{|m{2cm}|m{0.9cm}|m{1cm}|m{1.3cm}|m{1.1cm}|} \hline \textbf{Approaches} & \textbf{Nº studies} & \textbf{Nº studies with tool} & \textbf{\% Studies in the approach} & \textbf{\% Studies in total}\\ \hline Symptom-based & 16 & 13 & 81.3\% & 15.7\%\\ \hline Metric-Based & 20 & 13 & 65.0\% & 15.7\%\\ \hline Visualization-based & 12 & 10 & 83.3\% & 12.0\%\\ \hline Search-Based & 25 & 9 & 36.0\% & 10.8\%\\ \hline Probabilistic & 10 & 6 & 60.0\% & 7.2\%\\ \hline Cooperative-based & 1 & 0 & 0.0\% & 0.0\%\\ \hline Manual & 1 & 0 & 0.0\% & 0.0\%\\ \hline \end{tabular} \end{table} The symptom-based and metric-based approaches are those with the most developed tools, each with 15.7\% (13 out of 83 studies), followed by the visualization-based approach with 12.0\% (10 out of 83 studies) (see Table \ref{table:developedtoolapproach}). At the other end, the probabilistic approach has only 7.2\% (6 out of 83 studies) with developed tools. When we analyze the percentage of studies that develop tools within each approach, we find that the visualization-based and symptom-based approaches are those with the highest proportion of developed tools, with 83.3\% (10 out of 12 studies) and 81.3\% (13 out of 16 studies), respectively (see Table \ref{table:developedtoolapproach}). On the other side is the search-based approach, where only 36.0\% (9 out of 25 studies) present developed tools. In this approach, less than half of the studies present a tool because they chose to use existing external tools instead of creating new ones. For example, some studies [S34, S36, S56, S77] use Weka\footnote{Weka is a collection of ML algorithms for data mining tasks (www.cs.waikato.ac.nz/ml/weka/)} to implement their techniques. As Rasool and Arshad mentioned in their study \cite{Rasool2015}, it is arduous to find common tools that performed experiments on common systems for extracting common smells. Different techniques perform experiments on different systems and present their results in different formats. When analyzing the results of different tools on the same software packages and CS, we verified a disparity of results \cite{Fernandes2016,Rasool2015}. \subsection{Thresholds definition (\textbf{F8})} \label{subsec:Thresholdsdefinition} Threshold values are a very important component of some detection techniques because they define whether or not a candidate is a CS. Their definition is difficult, and it is one of the reasons there is so much disparity in CS detection results (see section \ref{subsec:Detectiontools}).
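To illustrate the role of thresholds, the sketch below encodes a metric-based detection rule in the style of Marinescu's \textit{God Class} strategy. The metrics (WMC, ATFD, TCC) are standard, but the cut-off values here should be read as assumptions for illustration; as discussed above, the choice of such values is precisely where studies diverge:
\begin{verbatim}
# Illustrative thresholds; studies use different values,
# which is a major source of disparity in detection results.
WMC_VERY_HIGH = 47    # Weighted Methods per Class
ATFD_FEW = 5          # Access To Foreign Data
TCC_ONE_THIRD = 1 / 3 # Tight Class Cohesion

def is_god_class(wmc, atfd, tcc):
    """Flag a class that is complex, uses many foreign
    attributes, and has low cohesion (all at once)."""
    return (wmc >= WMC_VERY_HIGH and
            atfd > ATFD_FEW and
            tcc < TCC_ONE_THIRD)

print(is_god_class(wmc=63, atfd=9, tcc=0.12))  # -> True
\end{verbatim}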
Some studies use genetic algorithms to calibrate threshold values as a way of reducing subjectivity in CS detection, e.g. [S70]. \begin{table}[htpb] \caption{Number of studies that use thresholds in CS detection} \label{table:usethresholds} \centering \begin{tabular}{|m{2cm}|m{0.9cm}|m{1cm}|m{1.3cm}|m{1.1cm}|} \hline \textbf{Approaches} & \textbf{Nº studies} & \textbf{Nº use thresholds} & \textbf{\% Studies in the approach} & \textbf{\% Studies in total}\\ \hline Metric-Based & 20 & 15 & 75.0\% & 18.1\%\\ \hline Symptom-based & 16 & 11 & 68.8\% & 13.3\%\\ \hline Search-Based & 25 & 8 & 32.0\% & 9.6\%\\ \hline Probabilistic & 10 & 8 & 80.0\% & 9.6\%\\ \hline Visualization-based & 12 & 1 & 8.3\% & 1.2\%\\ \hline Cooperative-based & 1 & 1 & 100.0\% & 1.2\%\\ \hline Manual & 1 & 0 & 0.0\% & 0.0\%\\ \hline \end{tabular} \end{table} A total of 44 papers use thresholds in their detection technique, representing 53.0\% of all studies; the remaining 47.0\% (39 out of 83 studies) do not use thresholds. Setting aside the cooperative-based approach (which has only 1 study), the metric-based and symptom-based approaches are those that most use thresholds relative to the total number of studies, with 18.1\% (15 out of 83 studies) and 13.3\% (11 out of 83 studies), respectively (see Table \ref{table:usethresholds}). When we analyze the number of studies within each approach that use thresholds, we find that the three approaches that most use thresholds in their detection techniques are the probabilistic with 80.0\% (8 out of 10 studies), the metric-based with 75.0\% (15 out of 20 studies), and the symptom-based with 68.8\% (11 out of 16 studies). In the visualization-based approaches, only one study uses thresholds in its CS detection technique, as shown in Table \ref{table:usethresholds}. Analyzing the detection techniques, we verified that these results are in line with what was expected, since the probabilistic and metric-based approaches are those that most need thresholds: in probabilistic approaches, to define support, confidence and probabilistic decision values; in metric-based approaches, to define the threshold values for the different metrics that compose the rules. \subsection{Validation of techniques (\textbf{F9})} \label{subsec:Validationtechniquess} The validation of a technique is performed by comparing its results with those obtained through another technique with similar objectives. Obviously, both techniques must detect the same CS in the same software systems. The most usual forms of validation are: using techniques from various existing approaches, such as manual ones; using existing tools; and comparing the results with those of other published papers. When we analyze how many studies validate their technique (see Table \ref{table:Toolsvalidation}), we verify that 62.7\% (52 out of 83, [S3, S6, S8, S13, S15, S18, S20, S23, S24, S26, S27, S28, S30, S31, S33, S34, S38, S41, S42, S43, S44, S45, S47, S48, S49, S50, S51, S53, S54, S55, S56, S58, S59, S60, S61, S62, S64, S65, S66, S67, S68, S70, S71, S72, S73, S74, S75, S77, S78, S79, S81, S83]) perform validation. In contrast, 37.3\% (31 out of 83, [S1, S2, S4, S5, S7, S9, S10, S11, S12, S14, S16, S17, S19, S21, S22, S25, S29, S32, S35, S36, S37, S39, S40, S46, S52, S57, S63, S69, S76, S80, S82]) of the studies do not validate the technique.
\begin{table}[htpb] \caption{Tools / approach used by the studies for validation} \label{table:Toolsvalidation} \centering \smallskip \begin{tabular}{|m{2cm}|m{1.2cm}|c|m{2.3cm}|} \hline \textbf{Tool/approach} & \textbf{Nº of Studies} & \textbf{\% Studies} & \textbf{Studies}\\ \hline DECOR & 14 & 26.9\% & S18, S24, S27, S30, S31, S42, S43, S45, S48, S50, S51, S59, S62, S79 \\ \hline Manually & 14 & 26.9\% & S3, S6, S8, S13, S15, S23, S31, S38, S43, S44, S53, S54, S56, S66 \\ \hline JDeodorant & 10 & 19.2\% & S42, S44, S45, S50, S51, S54, S62, S72, S73, S75 \\ \hline iPlasma & 8 & 15.4\% & S26, S49, S53, S54, S56, S65, S68, S81 \\ \hline Machine Learning & 7 & 13.5\% & S24, S43, S58, S66, S67, S79, S83 \\ \hline Papers & 6 & 11.5\% & S41, S51, S60, S61, S74, S77 \\ \hline DETEX & 3 & 5.8\% & S33, S34, S71 \\ \hline Incode & 3 & 5.8\% & S53, S64, S44 \\ \hline inFusion & 3 & 5.8\% & S65, S53, S49 \\ \hline PMD & 3 & 5.8\% & S49, S56, S53 \\ \hline BDTEX & 2 & 3.8\% & S33, S59 \\ \hline CodePro AnalytiX & 2 & 3.8\% & S55, S78 \\ \hline Jtombstone & 2 & 3.8\% & S55, S78 \\ \hline Rule Marinescu & 2 & 3.8\% & S47, S65 \\ \hline AntiPattern Scanner & 1 & 1.9\% & S56 \\ \hline Bellon benchmark & 1 & 1.9\% & S15 \\ \hline Checkstyle & 1 & 1.9\% & S49 \\ \hline DCPP & 1 & 1.9\% & S50 \\ \hline DUM-Tool & 1 & 1.9\% & S78 \\ \hline Essere & 1 & 1.9\% & S68 \\ \hline Fluid Tool & 1 & 1.9\% & S56 \\ \hline HIST & 1 & 1.9\% & S42 \\ \hline JADET & 1 & 1.9\% & S20 \\ \hline Jmove & 1 & 1.9\% & S75 \\ \hline JSNOSE & 1 & 1.9\% & S70 \\ \hline Ndepend & 1 & 1.9\% & S73 \\ \hline NiCad & 1 & 1.9\% & S28 \\ \hline SonarQube & 1 & 1.9\% & S50 \\ \hline \end{tabular} \end{table} Considering the differences between techniques and all the subjectivity within a technique (see sections \ref{subsec:Thresholdsdefinition}, \ref{subsec:Detectiontools}, \ref{subsec:Evaluationtechniques}), we can conclude that it is not easy to perform validations with tools that implement other techniques, even if they have the same goals. Thus, it is not surprising that one of the most common methods of validating the results is manual validation, used in 26.9\% of the studies (14 of the 52 studies performing validation, [S3, S6, S8, S13, S15, S23, S31, S38, S43, S44, S53, S54, S56, S66]), as shown in Table \ref{table:Toolsvalidation}. Some authors, as in [S8], claim that validation was performed manually because only maintainers can assess the presence of design defects, depending on their design choices and on the context; in [S23], validation is performed by independent engineers who assess whether suspicious classes are smells, depending on the contexts of the systems. Tied with manual validation is the use of the DECOR tool \cite{Moha2010a}, also used in 26.9\% of the studies (14 out of 52, [S18, S24, S27, S30, S31, S42, S43, S45, S48, S50, S51, S59, S62, S79]); this tool follows a symptom-based approach. DECOR was proposed by Moha et al. \cite{Moha2010a} and uses a domain-specific language to describe CS. They used this language to describe well-known smells: \textit{Blob} (aka \textit{Large Class}), \textit{Functional Decomposition}, \textit{Spaghetti Code}, and \textit{Swiss Army Knife}. They also presented algorithms to parse the rules and automatically generate detection algorithms.
The next most used tools in validation are JDeodorant \cite{Fokaefs2007}, with 19.2\% (10 out of 52 studies, [S42, S44, S45, S50, S51, S54, S62, S72, S73, S75]), and iPlasma \cite{Marinescu2005}, with 15.4\% (8 out of 52 studies, [S26, S49, S53, S54, S56, S65, S68, S81]). JDeodorant\footnote{https://users.encs.concordia.ca/~nikolaos/jdeodorant/} is an Eclipse plug-in developed by Fokaefs et al. for the automatic detection of CS (\textit{God Class}, \textit{Type Check}, \textit{Feature Envy}, \textit{Long Method}) and for performing refactorings. iPlasma\footnote{http://loose.utt.ro/iplasma/} is a tool developed by Marinescu et al. that uses a metric-based approach to CS detection. Seven studies [S24, S43, S58, S66, S67, S79, S83] compare their results with the results obtained through ML techniques, namely Genetic Programming (GP), Bayesian Belief Networks (BBN), and Support Vector Machines (SVM). These ML techniques represent 13.5\% of the studies (7 out of 52) that perform validation. As we can see in Table \ref{table:Toolsvalidation}, where we present the 28 different ways of doing validation, there are still many other tools that are used to validate detection techniques. \subsection{Replication of the studies (\textbf{F10})} \label{subsec:Replicationstudies} The replication of a study is an important process in software engineering, and its importance is highlighted by several authors, such as Shull et al. \cite{Shull2008} and Barbara Kitchenham \cite{Kitchenham2008}. According to Shull et al. \cite{Shull2008}, replication helps to ``\textit{better understand software engineering phenomena and how to improve the practice of software development. One important benefit of replications is that they help mature software engineering knowledge by addressing both internal and external validity problems.}'' The same authors also mention that, in terms of external validity, replications help to generalize the results, demonstrating that they do not depend on the specific conditions of the original study. In terms of internal validity, replications also help researchers show the range of conditions under which experimental results hold. These authors further identify two types of replication: exact replications and conceptual replications. Another author to emphasize the importance of replication is Kitchenham \cite{Kitchenham2008}, claiming that ``replication is a basic component of the scientific method, so it hardly needs to be justified''. Given the importance of replication, it is important that studies provide the necessary information to enable it, especially in exact replications, where the procedures of an experiment are followed as closely as possible to determine whether the same results can be obtained \cite{Shull2008}. Thus, our goal is not to perform replications, but to verify whether each study has the conditions to be replicated. According to Carver \cite{carver2010towards,Carver2014}, a replication paper should provide, at a minimum, the following information about the original study: research questions, participants, design, artifacts, context variables, and a summary of results. This information is necessary to provide sufficient context to understand the replication. Thus, we consider that for a study to be replicable, it must make available the information identified by Carver. Oracles are extremely important for the replication of CS detection studies.
The building of oracles is one of the main sources of subjectivity in some detection techniques, since it is an essentially manual process, with all the inherent problems (already mentioned in previous sections). As we have seen in section \ref{subsec:datasetavailability}, only 10 studies make their dataset available by providing a link to it; however, 2 of those studies, [S28] and [S32], no longer have active links. Thus, only 12.0\% of the studies (10 out of 83, [S18, S27, S38, S51, S56, S59, S69, S70, S74, S82]) provide the dataset and are candidates for replication. Another important requirement for replication is the existence of an artifact; the studies [S70] and [S74] do not present one and therefore cannot be replicated. We conclude that only 9.6\% of the studies (8 out of 83, [S18, S27, S38, S51, S56, S59, S69, S82]) can be replicated, because they provide the information required by Carver \cite{carver2010towards}. It is noteworthy that [S51] makes available on the Internet a replication package composed of the oracles, the change history of the object systems, the identified smells, the object systems themselves, and an additional analysis (evaluating HIST on Cassandra releases). \subsection{Visualization techniques (\textbf{F11})} \label{subsec:Visualizationtechniques} CS visualization can be approached in two different ways: (i) CS detection is done through a non-visual approach, with visualization used only to show the detected CS in the code; (ii) CS detection itself is performed through a visual approach. Regarding the first approach, where visualization only shows CS previously detected by a non-visual approach, we found 5 studies [S9, S40, S57, S72, S81], corresponding to 6.0\% of the studies analyzed in this SLR. Thus, we can conclude that most studies are dedicated only to detecting CS and do not pay much attention to visualization. Most of the proposed CS visualizations show the CS inside the code itself. This works for some systems, but in the presence of large legacy systems it is too detailed for a global refactoring strategy; a more macro approach is required, which presents CS in a more aggregated form without losing detail. In relation to the second approach, where a visualization-based approach is used to detect CS, it represents 14.5\% of the studies (12 out of 83, [S1, S2, S7, S11, S12, S16, S17, S19, S21, S39, S46, S76]). One of the problems pointed out in the visualization-based approach is scalability to large systems, since this type of approach is semi-automatic, requiring human intervention. In this respect, we found only 3 studies [S7, S17, S16] with solutions dedicated to large systems. Most studies produce a visualization showing the system structure in packages, classes, and methods, but this is not enough: it is necessary to adapt the views according to the type of CS, and this is still not generalized. The focus must be the CS and not the software structure; for example, it is not necessary to show the parts of the software where there are no smells, since that only adds data to the views, complicating them without adding information. Combining the two types of approaches, we conclude that 20.5\% of the studies (17 out of 83) use some kind of visualization in their approach.
\begin{figure*}[!ht] \centering {\epsfig{file = IMAGES/Findings-resume1.png, width = 15.5cm}} \caption{Summary of main findings} \label{fig:Findings-resume} \end{figure*} As for code smells coverage, the \textit{Duplicated Code} CS (aka \textit{Code Clones}) is definitely the one to which visualization techniques have been applied most intensively. Recall from Section \ref{sec:Relatedwork} that the systematic review on CS by Zhang et al. \cite{Zhang2010} revealed that \textit{Duplicated Code} is the most widely studied CS. Also in that section, we referred to the systematic review by Fernandes et al. \cite{Fernandes2016}, which concluded that \textit{Duplicated Code} was among the top-three CS detected by tools, due to its importance. The application of visualization to the \textit{Duplicated Code} CS ranges from 2D techniques (e.g. dot plots/scatterplots, wheel views/chord diagrams, and other graph-based and polymetric-view-based ones) to more sophisticated techniques, such as those based on 3D metaphors and virtual reality. A comprehensive mapping study on the topic of \textit{Duplicated Code} visualization has just been published \cite{Hammad2020}. \section{Discussion} \label{sec:discussion} We now address our research questions, discussing what we found for each of them, mainly the benefits and limitations of the evidence behind these findings. The mind map in Figure \ref{fig:Findings-resume} provides a summary of the main findings. Finally, we discuss the validation and limitations of this systematic review. \subsection{Research Questions (\textbf{RQ})} \label{subsec:Researchquestions} This subsection discusses the answers to the three research questions and how the findings and selected documents address them. Figure \ref{fig:slrfinal} shows the selected studies and the respective research questions they focus on. Regarding how the findings (\textbf{F}) relate to the research questions, findings F1, F2, F3, F4 and F5 support the answer to RQ1; findings F5, F6, F7, F8, F9 and F10 support the answer to RQ2; and, finally, F11 supports the answer to RQ3 (see Figure \ref{fig:Findings-rq}). \begin{figure}[!ht] \centering {\epsfig{file = IMAGES/Findings-RQ1.png, width = 8.4cm}} \caption{Relations between findings and research questions} \label{fig:Findings-rq} \end{figure} \vspace{5pt} \textbf{RQ1}. \textit{Which techniques have been reported in the literature for the detection of CS?} To answer this research question, we classified the detection techniques into seven categories, following the classification presented in Kessentini et al. \cite{Kessentini2014} (see F1, subsection \ref{subsec:ApproachCSdetection}). The search-based approach is applied in 30.1\% of the studies. These approaches are inspired by contributions in the domain of Search-Based Software Engineering (SBSE), and most techniques in this category apply ML algorithms, with special incidence of genetic programming, decision trees, and association rules. The second most used approach is the metric-based one, applied in 24.1\% of the studies. The metric-based approach consists of using rules based on a set of metrics and respective thresholds to detect specific CS. The third most used approach, with 19.3\% of the studies, is the symptom-based approach. It consists of techniques that describe the symptoms of CS and later translate these symptoms into detection algorithms.
Regarding datasets (see F2, subsection \ref{subsec:datasetavailability}), more specifically the oracles used by the different techniques, we conclude that only 10 studies (12.0\%) provide them. Most studies, 83.1\%, use open-source software, the most used systems being Apache Xerces (33.7\% of the studies) and GanttProject (16.9\%), followed by ArgoUML (13.3\%). Java is used in 64 studies, i.e. 77.1\% of the studies use Java as the support language for the detection of CS. C\# and C++ are the other two most commonly used programming languages, being used in 7.2\% and 6.0\% of the studies, respectively (see F3, subsection \ref{subsec:programminglanguage}). Multi-language support for CS detection is also very limited, with the majority (83.1\%) of the analyzed studies supporting only one language. The maximum number of supported languages is three, reported exclusively in [S28]. Regarding the most detected CS, \textit{God Class} stands out with 51.8\% (see F4, subsection \ref{subsec:Codesmellsdetection}), followed by \textit{Feature Envy} and \textit{Long Method} with 33.7\% and 26.5\%, respectively. When we analyze the number of CS detected by each study, we find an average of 3.6 CS per study, but the most frequent case is a study detecting only 1 CS. \vspace{5pt} \textbf{RQ2}. \textit{What literature has reported on the effectiveness of techniques aiming at detecting CS?} Finding the most effective technique to detect CS is not a trivial task. We have realized that the identified approaches all have pros and cons, presenting factors that can bias the results. Although, in the evaluation of the techniques (F6, subsection \ref{subsec:Evaluationtechniques}), we verified that 4 approaches (probabilistic, metric-based, symptom-based, and search-based) present techniques with 100\% precision and recall results, the detection problem is very far from being solved. These results only apply to the detection of simpler CS, e.g. \textit{God Class}, so it is not surprising that 51.8\% of the studies use this CS, as we can see in Table \ref{table:Code-smells-detected}. For the more complex CS, the results are much worse and very few studies address them. We cannot forget that only one study [S57] detects the 22 CS described by Fowler \cite{Fowler1999}. The answer to RQ2 is that there is no single best technique, but several techniques whose effectiveness depends on several factors, such as: \begin{itemize} \item Code smell to detect - We found that no technique generalizes to all CS. When we analyze the studies that detect the greatest number of CS and make an evaluation ([S38, S53, S57, S63, S69] detect more than 10 CS), we find that precision and recall depend on the smell, with large differences between smells. \item Software systems - When the same technique detects the same CS in different software systems, there is a great discrepancy in the number of false positives and false negatives, and consequently in precision and recall. \item Threshold values - There is no consensus regarding the definition of threshold values; varying these values causes more, or fewer, CS to be detected, thus varying the number of false positives. Some authors try to define thresholds automatically, namely using genetic programming algorithms. \item Oracle - There is no widespread practice of oracle sharing; few oracles are publicly available.
Oracles are a key part of most CS detection processes, for example, in the training of ML algorithms. \end{itemize} Regarding the automation of CS detection processes, thus making them independent of thresholds, we found that 35\% of the studies used ML techniques. However, when we look at how many of these studies do not require thresholds, we find that only 18.1\% (15 out of 83, [S22, S33, S34, S36, S40, S43, S53, S56, S66, S71, S74, S75, S77, S79, S83]) are truly automatic. \vspace{5pt} \textbf{RQ3}. \textit{What are the approaches and resources used to visualize CS and therefore support the practitioners to identify CS occurrences?} Visualization and representation are of extreme importance for a correct identification of CS, considering the variety of CS, the possible locations in the code (within methods, within classes, between classes, etc.), and the dimension of the code. Unfortunately, most studies only detect CS and do not visually represent them. Among the 83 studies selected in this SLR that do not use visualization-based approaches, only five [S9, S40, S57, S72, S81] visually represent CS. In the 14.5\% of the studies (12 out of 83) that use visualization-based approaches to detect CS, several methods are used to show the structure of the programs, such as: (1) city metaphors [S7, S17]; (2) 3D visualization techniques [S16, S17]; (3) interactive ambient visualization [S19, S39, S46]; (4) multiple views adapted to CS [S12, S21]; (5) polymetric views [S2, S46]; (6) graph models [S1]; (7) multivariate visualization techniques, such as parallel coordinates and non-linear projection of multivariate data onto a 2D space [S76]; (8) in [S46] several views are displayed, such as node-link-based dependency graphs, grids and spiral egocentric graphs, and relationship matrices. With respect to large systems, only three studies present dedicated solutions. \subsection{SLR validation} \label{sec:SLRvalidation} To ensure the reliability of the SLR, we carried out validations in 3 stages: i) the first validation was carried out in the application of the inclusion and exclusion criteria, using Fleiss' Kappa; through this statistical measure, we validated the level of agreement between the researchers in the application of the inclusion and exclusion criteria; ii) the second validation was carried out through a focus group, when the quality criteria were applied in stage 4; iii) to validate the results of the SLR, we conducted 3 surveys, each divided into 2 parts, one on CS detection and another on CS visualization. Each question in the surveys consists of 3 parts: 1) the question about one of the findings, which is evaluated on a 6-point Likert scale (Strong disagreement, Disagreement, Weak disagreement, Weak agreement, Agreement, Strong agreement); 2) a slider between 0 and 4 that measures the degree of confidence in the answer; 3) an optional field for the justification of the answer or for comments. The three surveys were intended as follows: 1) Pre-test, with the aim of identifying unclear questions and collecting suggestions for improvement.
The subjects chosen for the pre-test were Portuguese researchers with the most relevant work in the area of software engineering, totaling 27; 2) the subjects of the second survey were the authors of the studies that are part of this SLR, totaling 193; 3) the third survey was directed at the software visualization community; we chose the authors of all papers selected for the SLR on software visualization by Merino et al. \cite{MERINO2018}, which were taken exclusively from the SOFTVIS and VISSOFT conferences, totaling 380; we also distributed this survey through a post on a Software Visualization blog\footnote{https://softvis.wordpress.com/}. The structure of the surveys, the collected responses, and descriptive statistics on the latter are available in a GitHub repository\footnote{https://github.com/dataset-cs-surveys/Dataset-CS-surveys.git}. Table \ref{table:SummarySurvey} presents a summary of the results of the responses from the authors of the studies in this SLR (2nd survey) and from the visualization community (3rd survey). As we can see, using the aforementioned scale, most participants agree with the SLR results. The grayed cells in this table represent, for each finding, the answer(s) that obtained the highest score. We can then observe that 10\% of the findings had \textit{Strong agreement} as their highest score, 80\% had \textit{Agreement}, and 20\% had \textit{Weak agreement} (the percentages sum to more than 100\% because one question tied between \textit{Agreement} and \textit{Weak agreement}). Regarding the question \textit{Please select the 3 most often detected code smells?}, the answers placed \textit{Long Method} as the most detected CS, followed by \textit{God Class} and \textit{Feature Envy}. In our SLR, based on actual data, we concluded that the most detected CS is \textit{God Class}, followed by \textit{Feature Envy} and \textit{Long Method}. This mismatch is small, since it only concerns the relative order of those 3 code smells, and shows that the community is well aware of which are the most often found ones. \begin{table*}[htpb] \caption{Summary of survey results} \label{table:SummarySurvey} \centering \smallskip \begin{tabular} {m{4.5cm}m{1.1cm}m{1.1cm}m{1.1cm}m{1.2cm}m{1.2cm}m{1.2cm}||m{2.6cm}|} \cline{8-8} & & & & & & & \vtop{\hbox{\strut Respond. confidence}\hbox{\strut degree (1-4)}} \\ \end{tabular} \begin{tabular} {|m{4.5cm}|>{\centering\arraybackslash}m{1.1cm}|>{\centering\arraybackslash}m{1.1cm}|>{\centering\arraybackslash}m{1.1cm}|>{\centering\arraybackslash}m{1.2cm}|>{\centering\arraybackslash}m{1.1cm}||>{\centering\arraybackslash}m{1.1cm}||>{\centering\arraybackslash}m{1.1cm}|>{\centering\arraybackslash}m{1.1cm}|} \cline{8-9} \hline \backslashbox{Question(finding)}{Answer} & \textbf{Strong agreement} & \textbf{Agree-ment} & \textbf{Weak agreement} & \textbf{Weak disagreement} & \textbf{Disagree-ment} & \textbf{\# of answers} & \textbf{Average} & \textbf{Std. deviation}\\ \hline The most frequently used CS detection techniques are based on rule-based approaches (F1) & 35.3\% & \cellcolor[HTML]{EFEFEF}47.1\% & 11.8\% & 5.9\% & 0.0\% & 34 & 3.2 & 0.8\\ \hline Very few CS detection studies provide their oracles (a tagged dataset for training detection algorithms) (F2) & 26.5\% & \cellcolor[HTML]{EFEFEF}58.8\% & 11.8\% & 2.9\% & 0.0\% & 34 & 3.1 & 0.7 \\ \hline In the detection of simpler CS (e.g. \textit{Long Method} or \textit{God Class}), the achieved precision and recall of detection techniques can be very high (up to 100\%) (F6) & 11.8\% & \cellcolor[HTML]{EFEFEF}44.1\% & 26.5\% & 0.0\% & 14.7\% & 34 & 3.2 & 0.5\\ \hline When the complexity of CS is greater (e.g.
\textit{Divergent Change} or \textit{Shotgun Surgery}), the precision and recall in detection are much lower than in simpler CS (F6) & 11.8\% & \cellcolor[HTML]{EFEFEF}47.1\% & 26.5\% & 8.8\% & 5.9\% & 34 & 3.1 & 0.7\\ \hline There are few oracles (a tagged dataset for training detection algorithms) shared and publicly available. The existence of shared and collaborative oracles could improve the state of the art in CS detection research (F2)& \cellcolor[HTML]{EFEFEF}60.0\% & 34.3\% & 2.9\% & 2.9\% & 0.0\% & 35 & 3.6 & 0.5 \\ \hline The vast majority of CS detection studies do not propose visualization features for their detection (F11) & 15.4\% & \cellcolor[HTML]{EFEFEF}66.7\% & 10.3\% & 5.1\% & 2.6\% & 39 & 3.0 & 1.0 \\ \hline The vast majority of existing CS visualization studies did not present evidence of its usage upon large software systems (F11) & 12.5\% & \cellcolor[HTML]{EFEFEF}43.8\% & 34.4\% & 6.3\% & 0.0\% & 32 & 2.9 & 0.9 \\ \hline Software visualization researchers have not adopted specific visualization related taxonomies (F11) & 9.4\% & 28.1\% & \cellcolor[HTML]{EFEFEF}46.9\% & 9.4\% & 6.3\% & 32 & 2.0 & 1.2 \\ \hline If visualization related taxonomies were used in the implementation of CS detection tools, that could enhance their effectiveness (F11) & 11.8\% & \cellcolor[HTML]{EFEFEF}38.2\% & \cellcolor[HTML]{EFEFEF}38.2\% & 5.9\% & 5.9\% & 34 & 2.8 & 1.1 \\ \hline The combined use of collaboration (among software developers) and visual resources may increase the effectiveness of CS detection (F11) & 23.5\% & \cellcolor[HTML]{EFEFEF}50.0\% & 26.5\% & 0.0\% & 0.0\% & 34 & 3.2 & 0.8 \\ \hline \end{tabular} \end{table*} \subsection{Validity threats} \label{sec:validitythreats} We now go through the types of validity threats and the corresponding mitigating actions considered in this study. \textit{Conclusion validity.} We defined a data extraction form to ensure consistent extraction of the relevant data for answering the research questions, thereby avoiding bias. The findings and implications are based on the extracted data. \textit{Internal validity.} To avoid bias during the selection of studies to be included in this review, we used a thorough selection process comprising multiple stages. To reduce the possibility of missing relevant studies, in addition to the automatic search, we also used snowballing as a complementary search. \textit{External validity.} We selected studies on code smell detection and visualization. The exclusion of studies on related subjects (e.g. refactoring and technical debt) may have caused some studies that also deal with code smell detection and visualization not to be included. However, we have found this situation to occur in breadth papers (covering a wide range of topics) rather than in depth papers (covering a specific topic). Since the latter are the more important ones for primary study selection, we are confident in the relevance of the selected sample. \textit{Construct validity.} The studies identified in the systematic review were accumulated from multiple literature databases covering relevant journals and proceedings. In the selection process, the first author made the first selection and the remaining authors verified and confirmed it. To avoid bias in the selection of publications, we specified and used a research protocol including the research questions and objectives of the study, inclusion and exclusion criteria, quality criteria, search strings, and the strategy for search and data extraction.
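As a side note to the agreement analysis described in subsection \ref{sec:SLRvalidation}, Fleiss' Kappa can be computed with standard tooling. The sketch below uses the \texttt{statsmodels} implementation on hypothetical include/exclude decisions (not our actual selection data):
\begin{verbatim}
import numpy as np
from statsmodels.stats.inter_rater import (aggregate_raters,
                                           fleiss_kappa)

# Hypothetical decisions (1 = include, 0 = exclude) by
# 3 researchers over 8 candidate studies.
decisions = np.array([[1, 1, 1], [0, 0, 0], [1, 1, 0],
                      [0, 0, 0], [1, 1, 1], [0, 1, 0],
                      [1, 1, 1], [0, 0, 0]])

counts, _ = aggregate_raters(decisions)  # subjects x categories
print(f"Fleiss' kappa = {fleiss_kappa(counts):.2f}")
\end{verbatim}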
\section{Conclusion} \label{sec:conclusion} \subsection{Conclusions on this SLR} \label{sec:SLRconclusions} This paper presents a Systematic Literature Review with a twofold goal: the first is to identify the main CS detection techniques, and their effectiveness, as discussed in the literature, and the second is to analyze to what extent visual techniques have been applied to support practitioners in daily activities related to CS. For this purpose, we specified 3 research questions (RQ1 through RQ3). We applied our search string in six repositories (ACM Digital Library, IEEE Xplore, ISI Web of Science, Science Direct, Scopus, Springer Link) and complemented it with a manual search (backward snowballing), having obtained 1883 papers in total. After removing the duplicates and applying the inclusion and exclusion criteria and the quality criteria, we obtained 83 studies to analyze. Most of the studies were published in conference proceedings (76\%), followed by journals (23\%) and books (1\%). The 83 studies were analyzed on the basis of 11 points (findings) related to the approach used for CS detection, dataset availability, programming languages supported, CS detected, machine learning techniques used, evaluation of techniques, tools created, thresholds, validation, replication, and use of visualization techniques. Regarding RQ1, we conclude that the most frequently used detection techniques are based on search-based approaches, which mainly apply ML algorithms, followed by metric-based approaches. Very few studies provide the oracles used, and most of them target open-source Java projects. The most commonly detected CS are \textit{God Class}, \textit{Feature Envy} and \textit{Long Method}, in this order. On average, each study detects 3 CS, but the most frequent case is detecting only 1 CS. As for RQ2, in the detection of simpler CS (e.g. \textit{God Class}) 4 approaches are used (probabilistic, metric-based, symptom-based, and search-based) and authors claim to achieve 100\% precision and recall results. However, when the complexity of the CS is greater, the results are much worse and very few studies address those CS. Thus, the detection problem is very far from being solved, since the detection results depend on the CS targeted, on the software systems in which they are detected, and on the threshold values and oracles used. Regarding RQ3, we found that most studies that detect CS do not put forward a corresponding visualization feature. Several visualization approaches have been proposed for representing the structure of programs, either in 2D (e.g. graph-based, polymetric views) or in 3D (e.g. city metaphors), where the objective of allowing the identification of potentially harmful design issues is claimed. However, we only found three studies that proposed visualization solutions dedicated to large systems. \subsection{Open issues} \label{sec:openissues} Detecting and visualizing CS are nontrivial endeavors. While producing this SLR, we obtained a comprehensive perspective on past and ongoing research in those fields, which allowed the identification of several open research issues. We briefly overview each of those issues, in the expectation that they may inspire new researchers in the field.
(1) Code smells' subjective definitions hamper a shared interpretation across the researchers' and practitioners' communities, thus hindering the advancement of the state-of-the-art and state-of-the-practice; to mitigate this problem, a formal definition of CS has been suggested (see \cite{Rasool2015}); a standardization effort, supported by an IT standards body, would certainly be a major initiative in this context; (2) Open-source CS detection tooling is poor, both in language coverage (Java is dominant) and in CS coverage (e.g. only a small percentage of Fowler's catalog is supported); (3) Primary studies reporting experiments on CS often do not make the corresponding scientific workflows and datasets available, thus not allowing their ``reproduction'', where the goal is showing the correctness or validity of the published results; (4) Replication of CS experiments, used to gain confidence in empirical findings, is also limited due to the effort of setting up the tooling required to run families of experiments, even when curated datasets on CS exist; (5) Thresholds for deciding on CS occurrence are often arbitrary/unsubstantiated and not generalizable; in mitigation, we foresee the potential for the application of multi-criteria approaches that take into account the scope and context of CS, as well as approaches that explore the power of the crowd, such as the one proposed in \cite{Reis2017}; (6) CS studies in mobile and web environments are still scarce; given the importance of those environments nowadays, we see ample room for CS research in those areas; (7) CS visualization techniques seem to have great potential, especially in large systems, to help developers decide whether they agree with a CS occurrence suggested by an existing oracle; a large research effort is required to enlarge CS visualization diversity, both in scope (single method, single class, multiple classes) and coverage, since the existing literature only tackles a small percentage of the cataloged CS. \begin{comment} As future work, we plan the perception of CS and its use has been developed over time, answering questions such as: is there currently a greater attention to the detection of CS? Developers are more aware of the problem of CS? Is there any evolution in reducing CS subjectivity? We also plan to develop a new approach to CS detection through the use of collective intelligence, which we call CrowdSmelling and presented in \cite{Reis2017}. Finally, we plan to perform the forward snowballing technique by checking references of the selected studies to extend the number of relevant studies related to the research questions, as well as continue to update this SLR. \end{comment} \begin{acknowledgements} This work was partially funded by the Portuguese Foundation for Science and Technology, under ISTAR's projects UIDB/04466/2020 and UIDP/04466/2020. \end{acknowledgements} \bibliographystyle{spbasic}
{ "timestamp": "2020-12-17T02:14:53", "yymm": "2012", "arxiv_id": "2012.08842", "language": "en", "url": "https://arxiv.org/abs/2012.08842" }
\section{Introduction} \label{sec:introduction} Various gravitational, astrophysical and cosmological observations strongly imply the existence of Dark Matter (DM) in the universe. In particular, observations of cosmological large-scale structure favour the so-called cold DM (CDM) scenario, where the DM was not relativistic in the era of structure formation. In the particle physics framework, the CDM scenario can be easily accounted for by extending the Standard Model (SM) with weakly interacting massive particles (WIMPs) -- for a review see e.g. \cite{Bertone:2004pz}. An interesting feature of the CDM scenario is that WIMPs with masses of about $\mathcal{O}(100)~$GeV, interacting primarily through weak interactions, give a relic abundance of DM in agreement with the observation made by the Planck satellite, \emph{i.e.} $\Omega_{\mathrm{DM}} h^2 = 0.1188 \pm 0.0010$ \cite{Ade:2015xua}. \\ Indirect detection experiments such as the Fermi Large Area Telescope (LAT) \cite{Atwood:2009ez}, AMS \cite{Aguilar:2013qda} or IceCube \cite{Aartsen:2012kia} provide one of the possible ways to detect WIMPs. Theoretically, WIMPs undergo either annihilation \cite{Griest:1990kh, Bergstrom:2000pn}, co-annihilations \cite{Baker:2015qna}, or decays \cite{Cata:2016dsg, Azri:2020bzl} into a set of SM stable final-state particles such as high-energy photons, positrons, neutrinos, or anti-protons. Recently, an excess in the gamma-ray spectrum was detected by the Fermi-LAT \cite{TheFermi-LAT:2015kwa}, called the Galactic Center Excess (GCE), which seems to be consistent with predictions from DM annihilation (see e.g. \cite{Goodenough:2009gk}). Several attempts have been made to address the GCE within particle physics models, in particular within supersymmetric models \cite{Caron:2015wda, Bertone:2015tza, Butter:2016tjc, Achterberg:2017emt}. An important finding is that the quality of the fits depends crucially on the theoretical precision of the determination of the gamma-ray spectra \cite{Caron:2015wda}. \\ Particle production from DM annihilation/decay processes is dominated by QCD jet fragmentation.\footnote{This is true for DM masses above a few GeV producing hadronic final states either directly through e.g. $\chi\chi \to q\bar{q}$ or indirectly via the decays of intermediate heavy resonances such as the $W/Z/H$ bosons or the top quark.} Final-state stable particles such as photons or positrons are then produced as a result of a complicated set of processes which includes QED and QCD radiation, hadronisation, and hadron decays. Unlike parton-level scattering amplitudes at e.g. the lowest order of perturbation theory, the problem of hadronisation cannot be solved from first principles. Jet universality tells us that hadronisation is a universal process that can be factorised off the short-distance process, e.g. DM annihilation. Phenomenological models such as fragmentation functions \cite{Metz:2016swz}, or explicit dynamical models such as the string~\cite{Artru:1974hr,Andersson:1983ia} or cluster~\cite{Webber:1983if,Winter:2003tt} models embedded in Monte Carlo (MC) event generators~\cite{Buckley:2011ms}, are the up-to-date solutions to the hadronisation problem. The essential point is that the fragmentation models' parameters are, to a very good approximation, independent of the short-distance process; therefore, they can be determined from fits to existing data such as $e^+ e^- \to \mathrm{hadrons}$ and used to make predictions for e.g. DM annihilation.
\\ The question of the intrinsic QCD uncertainties on the predicted particle spectra in DM annihilation is often neglected in the literature, apart from some comparisons between the predictions of different multi-purpose event generators such as \textsc{Herwig} and \textsc{Pythia}. For instance, a comprehensive analysis has shown that different MC event generators may agree excellently in both the peak and the bulk of the spectra, while their agreement is not as good in the tails \cite{Cembranos:2013cfa}. Another study was done by the authors of the PPPC4DMID \cite{Cirelli:2010xx}, where they highlighted the differences between the \textsc{Herwig} and \textsc{Pythia} event generators. The excellent level of agreement in the most populated regions of the particle spectra may be interpreted as due to the fact that the different MC generators tend to be tuned to roughly the same set of data, mostly coming from LEP measurements at the $Z$-boson pole \cite{Buckley:2009bj,Buckley:2010ar,Skands:2010ak,Platzer:2011bc,Karneyeu:2013aha,Skands:2014pea,Fischer:2014bja,Fischer:2016vfv,Reichelt:2017hts,Kile:2017ryy}. Therefore, the envelope spanned by the predictions of the different MC models cannot represent a true estimate of the uncertainty on the predicted spectra. \\ In this talk, we discuss a \emph{first} study of the QCD uncertainties on particle spectra from DM annihilation within a single MC model\footnote{In this work, we focus on the uncertainties within the \textsc{Pythia8} event generator, and the results shown are based on \cite{Amoroso:2018qga}.}. We take the default Monash 2013 tune~\cite{Skands:2014pea} of the \textsc{Pythia} version 8.2.35 event generator~\cite{Sjostrand:2014zea} as our baseline. We use a selection of constraining experimental measurements from $e^+e^-$ colliders, preserved in the \textsc{Rivet}~\cite{Buckley:2010ar} analysis package, combined with the \textsc{Professor}~\cite{Buckley:2009bj} parameter optimisation tool. Then, we define a small set of systematic parameter variations which, we argue, explore the uncertainty envelope for the estimate of the QCD fragmentation uncertainties on DM annihilation spectra. \begin{figure}[!t] \centering \begin{tabular}{c|cc} QED bremsstrahlung & \multicolumn{2}{c}{QCD fragmentation and hadron decays}\\ \includegraphics[scale=0.65]{Figures/alphaEM.pdf} & \includegraphics[scale=0.65] {Figures/alphaS.pdf} & \raisebox{-2.5mm}{\includegraphics[scale=0.65]{Figures/fz.pdf}}\\ Dominates at high $x_\gamma$ & \multicolumn{2}{c}{Photons from $\pi^0\to\gamma\gamma$ dominate bulk (and peak) of spectra} \end{tabular} \caption{Illustration of the main parameters that affect the photon spectra ($x_\gamma = E_\gamma/m_\chi$) from DM annihilation into jets. Here we show the electromagnetic coupling $\alpha_\mathrm{EM}$ (left), the strong coupling $\alpha_S$ (middle), and the nonperturbative fragmentation function $f(z)$ (right). \label{fig:MCparams}} \end{figure} \section{Physics Modeling and Measurements} \label{sec:physics-modeling} \subsection{Physics modeling} In this section, we briefly discuss the physics modeling of a generic DM annihilation and the origin of photons (a more detailed discussion can be found in \cite{Amoroso:2018qga}).
To simplify the discussion, we consider a generic DM annihilation process: \begin{eqnarray*} \chi \chi \to X_1 \cdots X_n \to \displaystyle\prod_{i=1}^{n} Y_{i1} \cdots Y_{im}~, \end{eqnarray*} where we factorised the whole process into a production part, $\chi \chi \to X_1 \cdots X_n$, and a decay part ($X_i \to Y_{i1} \cdots Y_{im}$, where each $Y_{ij}$ is a stable particle such as a photon, neutrino or proton), assuming the narrow-width approximation. We note three important processes which may occur after DM annihilation and are responsible for gamma-rays: \begin{itemize} \item \emph{QED bremsstrahlung:} This process occurs if $X$ (or the decay products $Y$) contains photons or electrically charged particles. Additional photons are produced via $X_i^\pm \to X_i^\pm\gamma$ branchings, with probabilities that are enhanced for both soft and collinear photons. Collinear photons dominate the spectrum in the region $x_\gamma = E_\gamma/m_\chi \to 1$, with the only requirement that the angle between the emitted photon and the parent particle is very small. QED processes may also lead to the production of charged fermion-antifermion pairs in photon splittings (which are generally subleading), with probabilities enhanced at very low values of $Q^2 = (p_f + p_{\bar{f}})^2$. The rates of QED processes are governed by the effective electromagnetic fine-structure constant, $\alpha_{\mathrm{EM}}$ (illustrated in Fig.~\ref{fig:MCparams}, left). \item \emph{QCD showers:} If $X$ (or the decay products $Y$) contains coloured particles, then these states will undergo QCD showers. The modeling of the QCD showers is similar to the QED one: we have enhancement of soft and collinear emissions -- in $q\to qg$ and $g\to gg$ -- and of $g\to q\bar{q}$ at low virtualities. The main parameter governing the QCD showering is the effective value of the strong coupling constant, $\alpha_S$ (see Fig.~\ref{fig:MCparams}, middle), evaluated at a scale proportional to the shower evolution variable ($p_\perp$ in \textsc{Pythia8}). A set of universal corrections in the soft limit implies that the strong coupling should be defined in the CMW~\cite{Catani:1990rr} rather than the conventional $\overline{\mathrm{MS}}$ scheme. Furthermore, obtaining good agreement between \textsc{Pythia8} and experimental measurements of $e^+ e^- \to 3~\mathrm{jets}$ \cite{Skands:2010ak,Skands:2014pea} requires increasing the value of $\alpha_S(M_Z)$ by about $10\%$. Perturbative uncertainty estimates can be performed by varying the renormalisation scale by a factor of $2$ in each direction with respect to the nominal scale choice. The framework of automated scale variations recently implemented in \textsc{Pythia}~\cite{Mrenna:2016sih} includes a compensating second-order term which re-establishes the agreement with the CMW scheme. Variations of the non-universal (non-singular) components of the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) splitting kernels can also be performed in this framework, as detailed in \cite{Mrenna:2016sih}. \item \emph{Hadronisation and hadron decays:} Any produced coloured particles must be confined inside colourless hadrons. This process --- \emph{hadronisation} --- takes place at a distance scale of order the proton size, $\sim 10^{-15}$~m, and in \textsc{Pythia} is modelled by the Lund string model; see~\cite{Andersson:1983ia} for details.
The vast majority of photons are produced from the decays of neutral pions, so the number and energy of these photons are strongly correlated with the predicted pion spectra. The description of this process is embedded in the \emph{fragmentation function}, $f(z)$, which gives the probability for a hadron to take a fraction $z \in [0,1]$ of the remaining energy at each step of the (iterative) string fragmentation process (see Fig.~\ref{fig:MCparams}, right). The fragmentation function $f(z)$ cannot be calculated from first principles, but its form can be constrained by requirements such as causality. The general form can be written as \begin{equation} f(z,m_{\perp h}) = N \frac{(1-z)^a}{z}\exp\left(\frac{-b m_{\perp h}^2}{z}\right)~, \label{eq:fz} \end{equation} where $N$ is a normalisation constant that guarantees the distribution is normalised to unit integral, and $m_{\perp h}= \sqrt{m_h^2 + p_{\perp h}^2}$ is the ``transverse mass'', with $m_h$ the mass of the produced hadron and $p_{\perp h}$ its momentum transverse to the string direction; $a$ and $b$ are tunable parameters, denoted respectively by \texttt{StringZ:aLund} and \texttt{StringZ:bLund}. We note that the $a$ and $b$ parameters are extremely highly correlated, which makes it meaningless to assign independent $\pm$ uncertainties to them. To address this issue, we implement an alternative parametrisation of $f(z)$ where $b$ is replaced by $\left<z_\rho\right>$, the average $z$ fraction taken by $\rho$ mesons: \begin{equation} \left<z_\rho\right> = \int_0^1 \mathrm{d}z \ z f(z,\left<m_{\perp\rho}\right>)~, \label{eq:zrho} \end{equation} which we solve (numerically) for $b$ at initialisation when the option \texttt{StringZ:deriveBLund = on} is selected in \textsc{Pythia} 8.235, using the following parameters: \begin{eqnarray} \left<m_{\perp\rho}\right>^2 & = & m^2_\rho + 2( \mbox{\texttt{StringPT:sigma}})^2~, \\ \left<z_\rho\right> & = &\mbox{\texttt{StringZ:avgZLund}}~. \end{eqnarray} \end{itemize} \subsection{Photon origins and available measurements} Here, we briefly discuss the origin of photons from DM annihilation (a very detailed discussion can be found in \cite{Amoroso:2018qga}). Most of the photons come from pion decays: about $88$-$95\%$, depending on the annihilation channel and on the DM mass. The contribution from $\eta$ decays is subleading, at about $4\%$. Finally, a very subleading contribution comes from bremsstrahlung photons, which dominate the high tail of the spectrum. Since the majority of photons ($\simeq 95\%$) come from pion decays, the QCD uncertainties on the photon spectra are strongly correlated with those on the pion spectra. We can distinguish between primary pions, directly produced in QCD fragmentation, and secondary pions, coming from the decays of heavier hadrons and $\tau$ leptons. In all the final states considered \cite{Amoroso:2018qga}, the number of secondary pions is larger than that of primary ones, the secondary pions accounting for about $70\%$-$85\%$ of the total. We note that the secondary pions mainly come from five sources: $\rho^\pm, \eta, \omega, D^{0,\pm}$ and $K_{S,L}$.
\\ Having discussed the origin of photons in DM annihilation, we conclude that, in addition to direct measurements of the photon spectrum, other measurements can be used to constrain it: \emph{(i)} the spectrum of neutral pions ($\pi^0$), since they are the most dominant source of photons in QCD jets, \emph{(ii)} the spectra of charged pions, since their number is related to that of $\pi^0$ by isospin symmetry, and \emph{(iii)} the $\eta$ spectrum, as $\eta$ mesons are the second-most important source of photons in QCD jets. Finally, it is important to ensure that these tunings do not produce large corrections to infrared- and collinear-safe observables such as the $C$-parameter. The tunings include the full range of these observables, including the back-to-back regions, which are extremely sensitive to non-perturbative QCD effects. These measurements provide important constraints on the \texttt{StringPT:sigma} parameter in particular. \\ \begin{figure}[!t] \includegraphics[width=0.495\linewidth]{Figures/Photon} \hfill \includegraphics[width=0.495\linewidth]{Figures/neutral_pions.pdf} \caption{Comparison between MC event generators and LEP and SLD measurements for the photon spectrum (left panel) and the $\pi^0$ spectrum (right panel). } \label{comparison-1} \end{figure} In Fig.~\ref{comparison-1}, we compare several multi-purpose MC event generators to measurements of the photon and $\pi^0$ scaled-momentum spectra. We consider three event generators in these comparisons: \textsc{Herwig} 7.1.3 \cite{Bellm:2015jjp}, using both the angular-ordered \cite{Gieseke:2003rz} and dipole \cite{Platzer:2009jq, Platzer:2011bc} shower algorithms and a cluster-based hadronisation model~\cite{Webber:1983if}; \textsc{Pythia} 8.235, with the default hadronisation model \cite{Sjostrand:2014zea}; and \textsc{Sherpa} 2.2.5 \cite{Gleisberg:2008ta}, with the CSS parton shower \cite{Schumann:2007mg} using both the Ahadic~\cite{Winter:2003tt} (cluster-based) and the \textsc{Pythia} 6.4 Lund~\cite{Sjostrand:2006za} hadronisation models. The curve corresponding to \textsc{Pythia} is shown with an uncertainty band (red) obtained using the results of our new tune, based on the recent \textsc{Monash} tune but refitting the three main hadronisation parameters (see below). We can see from Fig.~\ref{comparison-1} that the multi-purpose event generators agree fairly well except in a few regions, e.g. the tails towards hard, high-energy photons. \section{Tuning} \label{sec:tune} \subsection{Setup} We used \textsc{Pythia8} version 8.235 throughout this study, with the most recent Monash~\cite{Skands:2014pea} tune as the baseline for the parameter optimisation. We use \textsc{Professor} v2.2~\cite{Buckley:2009bj} to perform the tuning and \textsc{Rivet} v2.5.4~\cite{Buckley:2010ar} for the implementation of the measurements. \textsc{Professor} permits the simultaneous optimisation of several parameters by using analytical approximations of the dependence of the MC response on the model parameters (an idea first introduced in Ref.~\cite{Abreu:1996na}). To minimise the differences between the interpolated functions and the true MC response, we use a fourth-order polynomial. The values of the model parameters at the minimum are then obtained with a standard $\chi^2$ minimisation of the analytic approximation to the corresponding data, using \textsc{Minuit}~\cite{James:1975dr}.
In this work, we tuned the $a$ and $b$ parameters of the Lund fragmentation function ($a$ and $\left<z_\rho\right>$ in the new parametrisation) and the $\sigma$ parameter, which governs the transverse components (see e.g. \cite{Skands:2012ts}). The default values of the parameters and their allowed ranges in \textsc{Pythia8} are shown in Table \ref{tab:ranges}. \\ To protect against over-fitting effects, and as a baseline sanity limit for the achievable accuracy, we introduce an additional $5\%$ uncertainty on each bin and for each observable. This also substantially reduces the value of the goodness-of-fit measure, so that the resulting $\chi^2/\textrm{ndf}$ is consistent with unity (see Table \ref{tab:T2tune}). The $\chi^2/N_\textrm{DoF}$ is defined by: \begin{equation} \frac{\chi^2}{N_\textrm{DoF}} = \frac{1}{\sum_{\mathcal{O}} \omega_\mathcal{O} |b \in \mathcal{O}|}\sum_{\mathcal{O}} \omega_\mathcal{O} \sum_{b\in \mathcal{O}} \frac{(f_{(b)}(\textbf{p}) - \mathcal{R}_b)^2} {\Delta_b^2+ (0.05\, f_{(b)}(\textbf{p}))^2}~. \label{Gof-Ndf} \end{equation} Here $\omega_\mathcal{O}$ is the weight assigned to observable $\mathcal{O}$, $|b \in \mathcal{O}|$ is its number of bins, $f_{(b)}(\textbf{p})$ is the fourth-order interpolated polynomial used to model the MC response in bin $b$, $\mathcal{R}_b$ is the experimental value of observable $\mathcal{O}$ in that bin, and $\Delta_b$ is the corresponding experimental error. We use various experimental measurements from \textsc{Lep} and \textsc{Slc} at the $Z$-boson peak, produced by \textsc{Aleph}, \textsc{Delphi}, \textsc{L3}, \textsc{Opal} and \textsc{Sld}. \begin{table}[t!] \begin{center} \begin{tabular}{llcl} \hline parameter & \textsc{Pythia8} setting & Variation range & \textsc{Monash}\\ \hline $\sigma_{\perp}$~[GeV] & \verb|StringPT:Sigma| & 0.0 -- 1.0 & 0.335\\ $a$ & \verb|StringZ:aLund| & 0.0 -- 2.0 & 0.68 \\ $b$ & \verb|StringZ:bLund| & 0.2 -- 2.0 & 0.98 \\ $\left<z_\rho\right>$ & \verb|StringZ:avgZLund| & 0.3 -- 0.7 & (0.55) \\ \hline \end{tabular} \end{center} \caption{\label{tab:ranges} Parameter ranges used for the \textsc{Pythia} 8 tuning, and their corresponding values in the Monash tune. The parentheses around the Monash value of the $\left<z_\rho\right>$ parameter indicate that this is a derived quantity, not an independent parameter.} \end{table} \subsection{Results} In this section, we briefly discuss the results of the different retunings (for a more detailed discussion see \cite{Amoroso:2018qga}). In Table~\ref{tab:T2tune}, we show the results of the tunes with and without the additional $5\%$ flat uncertainty. The goodness-of-fit is improved by a factor of ${\sim}7$, bringing it close to unity for the second fit (with the $5\%$ uncertainty). Therefore, the additional $5\%$ uncertainty provides useful protection against over-fitting.
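For concreteness, the goodness-of-fit of Eq.~(\ref{Gof-Ndf}), including the additional flat $5\%$ theory uncertainty, can be evaluated with a few lines of Python; the numbers in the usage example below are invented for illustration. \begin{verbatim}
# Sketch of the chi^2/N_DoF of the equation above: a flat relative
# theory uncertainty is added in quadrature to the experimental errors.
import numpy as np

def chi2_per_ndf(observables, extra_rel_unc=0.05):
    # observables: list of dicts with keys 'w' (weight), 'f' (surrogate
    # predictions f_b(p)), 'R' (data values), 'Delta' (data errors)
    chi2, ndf = 0.0, 0.0
    for obs in observables:
        f, R, D = obs['f'], obs['R'], obs['Delta']
        chi2 += obs['w'] * np.sum((f - R)**2 / (D**2 + (extra_rel_unc * f)**2))
        ndf  += obs['w'] * len(f)
    return chi2 / ndf

# Toy usage with invented numbers:
obs = [{'w': 1.0, 'f': np.array([1.00, 2.10]),
        'R': np.array([1.10, 2.00]), 'Delta': np.array([0.10, 0.10])}]
print(chi2_per_ndf(obs))
\end{verbatim}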
\begin{table}[!t] \begin{center} \begin{tabular}{lcc} \hline Parameter & without $5\%$ & with $5\%$ \\ \hline \verb|StringPT:Sigma| & $0.3151\substack{+0.0010\\ -0.0010}$ & $0.3227\substack{+0.0028\\ -0.0028}$\\ \verb|StringZ:aLund| & $1.028\substack{+0.031\\ -0.031}$ & $0.976\substack{+0.054\\-0.052}$\\ \verb|StringZ:avgZLund| & $0.5534\substack{+0.0010\\ -0.0010}$ & $0.5496\substack{+0.0026\\ -0.0026}$ \\ \hline $\chi^2/N_\textrm{DoF}$ & 5169/963 & 778/963 \\ \hline \end{tabular} \end{center} \caption{\label{tab:T2tune} Results of the tunes before and after including a flat $5\%$ uncertainty in the theory prediction.} \end{table} \begin{figure}\centering \includegraphics[width=0.495\textwidth]{Figures/Fits_experiments.png} \includegraphics[width=0.495\textwidth]{Figures/Fits_experiments2.png} \caption{\label{fig:experiments} Results of tunes performed separately to all of the measurements from a given experiment: \textsc{Aleph} (blue), \textsc{Delphi} (magenta), \textsc{L3} (red), \textsc{Opal} (green), \textsc{Sld} (yellow) and the combined tune (gray). The contours corresponding to one-, two- and three-sigma deviations are also shown.} \end{figure} To expose possible tensions between the data of the different experiments, we perform independent tunes, each including all of the sensitive measurements from a single experiment. We performed five such tunes, corresponding to the individual measurements of \textsc{Aleph}, \textsc{Delphi}, \textsc{L3}, \textsc{Opal} and \textsc{Sld}, and display the results in Fig. \ref{fig:experiments}. The tunes to \textsc{Aleph}, \textsc{Delphi}, \textsc{Opal} and \textsc{Sld} agree on the obtained value of \verb|StringZ:avgZLund|, contrary to \textsc{L3}. Due to the correlations between the $a$ and the $b$ (or $\left<z_\rho\right>$) parameters, these discrepancies between the individual best-fit points are not a sign of disagreement between theoretical predictions and data, \emph{i.e.} the predictions at the best-fit points agree with each other and with the data. \section{QCD uncertainties} \label{sec:results} \subsection{Estimating the uncertainties} \label{tune:uncertainties} \begin{table}[!t] \begin{center} \begin{tabular}{lc} \hline Parameter & Value \\ \hline \verb|StringZ:aLund| & $0.5999\pm0.2$ \\ \verb|StringZ:avgZLund| & $0.5278^{+0.027}_{-0.023}$ \\ \verb|StringPT:sigma| & $0.3174^{+0.042}_{-0.037}$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:individual} Results of the fits to the $N(=15)$ individual measurements: central values and $68\%$ CL uncertainties on the fitted parameters.} \end{table} The QCD uncertainties can be split into two categories: perturbative, related to the parton showers, and non-perturbative, related to the hadronisation model parameters. The uncertainties from parton showering within \textsc{Pythia8} were estimated using the automated method developed in \cite{Mrenna:2016sih}. The uncertainty in this case is determined by varying the central renormalisation scale by a factor of $2$ in both directions, including the full NLO compensation terms. Furthermore, this framework allows for variations of the non-singular terms in the DGLAP splitting kernels. We note that these variations give, in most cases, very small uncertainties and are therefore neglected.
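As an indication of how such shower-scale variations are switched on in practice, the following fragment uses the \textsc{Pythia8} Python interface with the automated-variation settings of \cite{Mrenna:2016sih}. We believe the setting names below match the ``automated shower uncertainty variations'' documentation of \textsc{Pythia} 8.2, but they should be checked against the manual of the version in use; the $Z$-pole process setup is only a minimal example. \begin{verbatim}
# Sketch (check against the Pythia 8 manual): renormalisation-scale
# variations of the final-state shower by a factor of 2 each way.
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:idA = 11")     # e+e- at the Z pole,
pythia.readString("Beams:idB = -11")    # as in the tuning setup
pythia.readString("Beams:eCM = 91.2")
pythia.readString("PDF:lepton = off")
pythia.readString("WeakSingleBoson:ffbar2gmZ = on")
pythia.readString("UncertaintyBands:doVariations = on")
pythia.readString(
    "UncertaintyBands:List = {muRup fsr:muRfac=2.0, muRdown fsr:muRfac=0.5}")
pythia.init()
for _ in range(1000):
    if not pythia.next():
        continue
    # The per-event variation weights are then available through the
    # Info class; histogram each weighted spectrum to obtain the bands.
\end{verbatim}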
On the non-perturbative side, the \textsc{Professor} toolkit allows one to estimate uncertainties on the fitted parameters through the \texttt{eigentunes} method, which diagonalises the $\chi^2$ covariance matrix around the best-fit point and then uses variations along the principal directions (eigenvectors) in the space of the optimised parameters to construct a set of $2\cdot N_{\textrm{params}}$ variations. However, the resulting \texttt{eigentunes} are found to give small uncertainties which cannot be interpreted as a conservative estimate\footnote{We have checked the impact of the \texttt{eigentunes} on the gamma-ray spectra in different final states and for different DM masses, including those corresponding to the pMSSM best-fit points, and we have found that the resulting bands are negligibly small.}. Therefore, we devise a new method, which consists of performing a new set of tunes in which we use $N$ different measurements to obtain $N$ best-fit points. We then take the $68\%$ CL errors on the parameters as our estimate of the uncertainty (we exclude observables with little or no sensitivity to our parameters). The results of these fits, along with their $68\%$ CL errors, are shown in Table \ref{tab:individual}. To obtain a comprehensive estimate of the uncertainty bands from the $68\%$ CL errors on the model parameters, we consider all the possible variations; there are $N_{\mathrm{var}} = 3^3 - 1 = 26$ of them. Some variations, however, do not have a significant impact on the predicted spectra: we have checked that only ten variations (including the nominal tune) are meaningful. \subsection{Impact on Dark Matter Spectra and Fits} \begin{figure}[!t] \centering \includegraphics[width=0.48\linewidth]{Figures/WW_DM_spectra.pdf} \hfill \includegraphics[width=0.48\linewidth]{Figures/ttbar_DM_spectra.pdf} \caption{Photon energy distribution for dark matter annihilation into $W^+ W^-$ with $m_\chi=90.6$ GeV (\emph{left}) and into $t\bar{t}$ with $m_\chi = 177.6$ GeV (\emph{right}). In both cases, the result corresponding to the new tune is shown as a black line. Both the uncertainties from parton showering (gray bands) and from hadronisation (blue bands) are shown. Predictions from \textsc{Herwig7} are shown as a gray solid line.} \label{fig:DMimpact} \end{figure} In this subsection, we show the impact of the QCD fragmentation-function and parton-shower uncertainties on the photon spectra of two representative DM annihilation channels: $W^+W^-$ and $t\bar{t}$\footnote{For comparison, we show the predictions of \textsc{Herwig7}.}. We do not perform a full analysis to determine the best fit to the GCE using PASS8 data in the pMSSM \cite{Achterberg:2017emt}; we only show qualitatively the size of the uncertainties. As the best-fit point will certainly be affected by these uncertainties, we postpone such an analysis to a future study. In the analysis of \cite{Achterberg:2017emt}, the best fit was found for two neutralino masses, i.e. $m_\chi=90.6$ GeV and $m_\chi=177.6$ GeV, corresponding to the $W^+W^-$ and $t\bar{t}$ DM annihilation channels, respectively. These results are shown in Fig. \ref{fig:DMimpact} for $m_\chi=90.6$ GeV in the $W^+W^-$ channel (left panel) and for $m_\chi=177.6$ GeV in the $t\bar{t}$ channel (right panel), with the new tune (black line) and the \textsc{Herwig} prediction (gray solid line). The bands show the \textsc{Pythia} parton-shower (gray bands) and hadronisation (blue bands) uncertainties.
We can see that the predictions from \textsc{Pythia} and \textsc{Herwig} agree very well, except for $E_\gamma \leqslant 2~$GeV, where the differences can reach about $21\%$ at $E_\gamma \sim 0.4$ GeV. The uncertainties can be important for both channels, particularly in the peak region, which corresponds to the energies where the photon excess is observed in the galactic-centre region. Indeed, combining them in quadrature (assuming the different types of uncertainty are uncorrelated), they range from a few percent where the GCE lies to about 70\% in the highest-energy bins. Hadronisation uncertainties are the dominant ones around the peak of the photon spectrum, while parton-showering uncertainties can change the position of the peak of the energy spectra and become the main source of uncertainty when moving away towards the edges of the spectra. \begin{figure}[!t] \centering \includegraphics[width=0.32\linewidth]{Figures/mDM_10GeV.pdf} \hfill \includegraphics[width=0.32\linewidth]{Figures/mDM_100GeV.pdf} \hfill \includegraphics[width=0.32\linewidth]{Figures/mDM_1000GeV.pdf} \caption{Photon spectra obtained using our tune, normalized to the results of \cite{Cirelli:2010xx}, for $m_\chi = 10$ GeV (\emph{left panel}), $m_\chi = 100$ GeV (\emph{center panel}) and $m_\chi = 1000$ GeV (\emph{right panel}). The spectra are shown for DM annihilation into $q\bar{q}$ (red), $W^\pm W^\mp$ (green) and $t\bar{t}$ (blue). The dashed bands show the QCD uncertainties on the parameters of the Lund fragmentation function.} \label{fig:comparison} \end{figure} \section{Public Data on Zenodo} \label{sec:code} The impact of the QCD uncertainties on the particle spectra from DM annihilation has been quantified in the form of tables, which can be found on Zenodo \cite{Amoroso:2019Zenodo}. We have produced tables for five stable final states: gamma-rays, positrons, electron anti-neutrinos, muon anti-neutrinos and tau anti-neutrinos (work on the spectra of anti-protons is in progress \cite{Caron:2021xx}). The calculations were done for various DM annihilation channels, $\chi\chi \to e^+ e^-, \mu^+ \mu^-, \tau^+ \tau^-, q\bar{q}(q=u,d,s), c\bar{c}, b\bar{b}, t\bar{t}, W^+ W^-, ZZ, gg, ~\mathrm{and}~hh$, and we covered DM masses from $5$~GeV to $100$~TeV. For each final state and annihilation channel, there are twelve tables, which are provided in \texttt{zip} format. The notation of the different tables is given below: \begin{itemize} \item The table corresponding to the central prediction for the spectra is denoted by 'AtProduction-Hadronization1-\$TYPE.dat', where \$TYPE = Nuel, Numu, Nuta, Ga, or Positrons refers to the three flavours of anti-neutrinos, gamma-rays, and positrons, respectively. \item There are nine tables corresponding to the different variations of the light-quark fragmentation function parameters. These tables are denoted by 'AtProduction-Hadronization\$h-\$TYPE.dat', with \$h = 2, \ldots, 10. \item The particle spectra corresponding to the variations of the shower evolution scale ($\mu_R$) are denoted by 'AtProduction-Shower-Var\$s-\$TYPE.dat', where \$s = 1, 2 corresponds to $\mu_R/2$ and $2 \mu_R$, respectively. \end{itemize} We stress that the uncertainties from the parton shower and hadronisation were taken to be uncorrelated (more details on the generation of the spectra can be found in \cite{Amoroso:2018qga}).
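As an indication of how the tables might be consumed, the sketch below builds an uncertainty envelope over all twelve gamma-ray variations. The column layout assumed in the comments is hypothetical; the actual format is documented together with the tables \cite{Amoroso:2019Zenodo}. \begin{verbatim}
# Hypothetical loader for the gamma-ray tables; the column layout
# (here: column 2 = dN/dx for the chosen channel) is an assumption.
import numpy as np

variants = ([f"AtProduction-Hadronization{h}-Ga.dat" for h in range(1, 11)]
            + [f"AtProduction-Shower-Var{s}-Ga.dat" for s in (1, 2)])

# All files are assumed to share the same (mDM, x) grid.
spectra = np.stack([np.loadtxt(f)[:, 2] for f in variants])
central = spectra[0]                               # Hadronization1 = nominal
lo, hi = spectra.min(axis=0), spectra.max(axis=0)  # uncertainty envelope
\end{verbatim}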
Finally, we compared our predictions to the results of the PPPC4DMID. Fig. \ref{fig:comparison} shows this comparison for the photon spectra at three DM masses, $m_\chi=10$, $100$ and $1000$ GeV, and for three final states, i.e. $q\bar{q}\ (q=u,d,s)$, $W^\pm W^\mp$ and $t\bar{t}$. We can see that the differences between our results and the predictions of the Cookbook can be very important, particularly at the edges of the distributions (small $x_\gamma$ and large $x_\gamma$). As these differences cannot be accounted for by the QCD uncertainties (shown as dashed bands in Fig. \ref{fig:comparison}), we urge the community to use the updated predictions from this study. \\ \section{Conclusions} \label{sec:conclusions} In this talk, we discussed the QCD uncertainties on particle spectra from DM annihilation, which we studied for the \emph{first} time in \cite{Amoroso:2018qga}. We demonstrated that the relative differences between the predictions of different multi-purpose MC event generators (\textsc{Herwig} 7.1.3, \textsc{Pythia} 8.235 and \textsc{Sherpa} 2.2.5) cannot be used to define a conservative estimate of the QCD uncertainties, particularly in the bulk of the spectra. We then followed a complementary approach, using the same modelling paradigm (\textsc{Pythia8}) to define parametric variations, taking the default \textsc{Monash} tune as our baseline and performing several retunings using data from LEP. Next, we showed quantitatively the impact of the QCD uncertainties on the spectra of gamma-rays from DM annihilation for two benchmark points in the pMSSM. Full data tables, which can be used to update those in the PPPC4DMID, are now public on Zenodo and can be found at \url{http://doi.org/10.5281/zenodo.3764809}.
{ "timestamp": "2020-12-17T02:16:59", "yymm": "2012", "arxiv_id": "2012.08901", "language": "en", "url": "https://arxiv.org/abs/2012.08901" }
\section{Introduction} \IEEEPARstart{B}{y introducing} intelligent reflecting surfaces (IRSs) into wireless systems, IRS-aided wireless systems have recently attracted significant interest due to their potential to further improve the system capacity and spectral efficiency \cite{Wu2019Beamforming, Zenzo2019Smart}. Specifically, an IRS exploits a large number of reflecting elements to proactively steer the incident radio-frequency wave towards destination terminals \cite{Basar2019Wireless}, which is a promising solution to build a programmable wireless environment for 6G systems \cite{Hu2019Reconfigurable}. Thereby, fine-grained three-dimensional reflecting beamforming can be achieved without the need for any transmit radio frequency (RF) chain \cite{Han2019Large}. \subsection{Related Work} IRS-aided wireless systems refer to the scenario in which a large number of software-controlled reflecting elements with adjustable phase shifts reflect the incident signal. As such, the phase shifts of all reflecting elements can be tuned adaptively according to the state of the network, e.g., the channel conditions and the incident angle of the signal from the base station (BS). It is commonly believed that the propagation environment can be improved without incurring additional noise at the reflecting elements. Currently, many researchers in the communication field are actively involved in the study of IRS-aided communications \cite{Zenzo2019Reconfigurable, Guo2019Weighted,Wu2019Intelligentjournal}. For example, \cite{Zenzo2019Reconfigurable} summarized the main communication applications and competitive advantages of the IRS technology. In the spirit of these works, a vast corpus of literature focused on optimizing active-passive beamforming for unilateral spectral-efficiency maximization subject to a power constraint. For instance, \cite{Guo2019Weighted} proposed a fractional-programming-based alternating optimization approach to maximize the weighted SE (spectral efficiency) in IRS-aided multiple-input single-output (MISO) downlink communication systems. In particular, three assumptions on the feasible set of reflection coefficients were considered at the IRS: ideal reflection coefficients constrained only by peak power, continuous phase shifters, and discrete phase shifters. Meanwhile, in MISO wireless systems, the problem of minimizing the total transmit power at the access point was considered for energy-efficient active-passive beamforming \cite{Wu2019Intelligentjournal}. Notably, the aforementioned studies on IRS-aided communications were based on the premise of ignoring the power consumption at the IRS. In contrast, in \cite{Huang2019Reconfigurable}, an energy efficiency (EE) maximization problem was investigated by developing a realistic IRS power consumption model, where the IRS power consumption depends on the type and the resolution of the meta-elements. \subsection{Motivation and Contributions} The above resource allocation works address the joint transmit beamforming and phase shift optimization problem in IRS-aided communication systems. These works assume that IRS operators are all selfless and will always participate in the cooperative transmission despite their own energy consumption/maintenance costs \cite{Huang2019Reconfigurable} and profits. However, this assumption becomes unrealistic in practice, due to the advances in intelligent communication and the shrinking of resources. In other words, if an IRS operator cannot benefit from the participation, it will not join in the cooperative communication.
Moreover, a common assumption in the existing studies on IRS-aided communications is that all the reflecting elements are used to reflect the incident signal, i.e., the reflecting coefficient of every meta-element is adjusted simultaneously each time. However, with the use of a large number of high-resolution reflecting elements, especially with continuous phase shifters, triggering all the reflecting elements every time may result in significant power consumption \cite{Huang2019Reconfigurable}. Moreover, the hardware support for the IRS implementation is the use of a large number of tunable metasurfaces. Specifically, the tunability feature can be realized by introducing mixed-signal integrated circuits (ICs) or diodes/varactors, which can vary both the resistance and the reactance, offering complete local control over the complex surface impedance \cite{Liu2019Intelligent}. According to the IRS power consumption model presented in \cite{Huang2019Reconfigurable} and the hardware support, activating the entire IRS not only incurs increased power consumption, but also entails increased latency for adjusting the phase shifts and accelerates equipment depreciation. Therefore, realizing reflection resource management is significantly important for IRS-aided communications. In this paper, for IRS-aided multiuser MISO systems, we consider the resource allocation problem in which an IRS operator serves the BS and prices the active modules. The problem is formulated as a Stackelberg game, in which the IRS operator decides the price for the active modules. The contributions of this paper are summarized as follows: \begin{itemize} \item{For the first time, a modular architecture of the IRS is proposed, which divides all the reflecting elements into multiple modules whose states (active and idle) are controlled by multiple parallel switches belonging to a common controller. We assume that each module contains multiple reflecting elements, i.e., the size of each module is larger than the incident signal wavelength, since the unit meta-element size is subwavelength \cite{Liu2019Intelligent}. As mentioned in \cite{Zenzo2019Smart}, the IRS is programmatically controlled by the controller, and hence, from an operational standpoint, independent module activation can be implemented easily. Therefore, the proposed architecture of the IRS allows the realization of reflection resource management, since each module is independently activated by its switch.} \item{Based on the proposed modular architecture of the IRS, this paper proposes a new price-based resource allocation scheme for both the BS and the IRS. Furthermore, a Stackelberg game is formulated to maximize the individual revenues of the BS and the IRS under the proposed price-based resource allocation. Since the entire game is a non-convex mixed-integer problem, which is hard to solve even in a centralized way, the problem is relaxed by introducing the mixed row-block $\ell_{1,2}\text{-norm}$ \cite{Mehanna2013Joint}, which yields a suitable convex relaxation. To solve this problem, we apply a Stackelberg game-based alternating direction method of multipliers (ADMM) to identify the price and the active module subsets, and subsequently both the transmit power allocation and the corresponding passive beamforming.} \end{itemize} \section{System Model and Problem Formulation} \subsection{System Model} Consider the downlink communication between a BS equipped with $M$ antennas and $K$ single-antenna mobile users.
The communication takes place via an IRS with $S$ modules, each module consisting of $N$ reflecting elements; thus, the total number of reflecting elements of the IRS is $SN.$ Define ${\cal K}:=\{1, 2, \ldots, K\},$ ${\cal S}:=\{1, 2, \ldots, S\},$ and ${\cal I}=\{1, 2, \ldots, (SN)\}$ as the index sets of the users, the reflection modules, and the reflecting elements, respectively. Let ${\mathbf H}_{0,s}\in{\mathbb C}^{N\times M}$ be the channel matrix from the BS to the $s\text{th}$ module of the IRS, and ${\mathbf g}_{s,k}\in{\mathbb C}^{N\times 1}$ be the channel vector from the $s\text{th}$ module of the IRS to user $k.$ The direct channel from the BS to user $k$ is denoted as ${\mathbf h}_{d,k}\in{\mathbb C}^{M\times 1}.$ Denote by $\phi_i, \forall i\in{\cal I},$ the reflection coefficient of the $i\text{th}$ reflecting element of the IRS. Let ${\pmb\Phi}=\text{diag}\{{\pmb\Phi}_1, {\pmb\Phi}_2, \ldots, {\pmb\Phi}_S\}\in{\mathbb C}^{(SN)\times(SN)},$ where ${\pmb\Phi}_s=\text{diag}[\phi_{(s-1)N+1}, \phi_{(s-1)N+2}, \ldots, \phi_{sN}]\in{\mathbb C}^{N\times N}.$ Define ${\pmb\phi}=[({\pmb\phi}_1)^T,({\pmb\phi}_2)^T,\ldots, ({\pmb\phi}_S)^T ]^T\in{\mathbb C}^{(SN)\times 1}, $ where ${\pmb\phi}_s=[({\phi}_{(s-1)N+1})^{\dag}, ({\phi}_{(s-1)N+2})^{\dag}, \ldots, ({\phi}_{sN})^{\dag}]^T\in{\mathbb C}^{N\times 1}.$ We assume that all the modules of the IRS can potentially join the cooperative communication; then, the channel matrices from the BS to the IRS and from the IRS to user $k$ are, respectively, \begin{equation}\label{eq:1} \begin{aligned} {\mathbf H}&=[({\mathbf H}_{0,1})^T, ({\mathbf H}_{0,2})^T, \ldots, ( {\mathbf H}_{0,S})^T ]^T\in{\mathbb C}^{(SN)\times M}\\ {\mathbf g}_k&=[ ({\mathbf g}_{1,k})^T, ({\mathbf g}_{2,k})^T, \ldots, ({\mathbf g}_{S,k})^T ]^T\in{\mathbb C}^{(SN)\times 1}, \forall k\in{\cal K}. \end{aligned} \end{equation} The SINR of user $k$, denoted by $\gamma_k$, can be computed as \begin{equation}\label{eq:2} \gamma_k=\frac{|({\mathbf h}_{d,k}^{\dag}+{\mathbf g}_{k}^{\dag}{\pmb\Phi}(||\pmb\Phi||_{0,F}){\mathbf H}){\mathbf w}_k |^2} {\sum_{j\neq k}^K |({\mathbf h}_{d,k}^{\dag}+{\mathbf g}_k^{\dag}{\pmb\Phi}(||{\pmb\Phi}||_{0,F}){\mathbf H}){\mathbf w}_j |^2+\sigma^2}, \end{equation} where ${\mathbf w}_k\in{\mathbb C}^{M\times 1}$ is the transmit beamforming vector for user $k,$ $\sigma^2$ is the background noise power at user $k$, and the $\ell_{0,F}\text{-norm}$ is the number of nonzero blocks of the matrix $\pmb\Phi,$ i.e., $||{\pmb\Phi}||_{0,F}\triangleq \left|\left\{s: ||{\pmb\Phi}_s||_F\neq 0 \right\}\right|.$ Moreover, $\pmb\Phi(||\pmb\Phi||_{0,F})$ is the corresponding phase-shift matrix of the active modules. \subsection{Stackelberg Game Formulation} In this paper, we assume that the BS and the IRS belong to different operators, and that the IRS is selfish. In the case of an unfavorable propagation condition on the direct signal path, in order to improve the sum rate of the system, the BS needs to pay for the IRS's forwarding service. The IRS's objective is then to maximize its utility, denoted by $V$ and calculated as follows: \begin{equation}\label{eq:7} V={ r}||\pmb\Phi||_{0,F}, \end{equation} where $r$ is the price paid to the IRS for providing the $||\pmb\Phi||_{0,F}$ active reflection modules. Notably, in IRS-aided communication, the authority to adjust the passive beamforming, $\pmb\Phi(||\pmb\Phi||_{0,F})$, rests with the BS. The objective of the IRS is to solve the following problem: \begin{equation}\label{s:1} \text{Leader-Problem:}~~\max_{r}~V,~~~\text{s.t.~~} r>0.
\end{equation} In response to the action of the IRS, the BS chooses the best $||\pmb\Phi||_{0,F}$ active modules and decides the phase shifts $\pmb\Phi(||\pmb\Phi||_{0,F})$ of the activated modules and its own transmit beamforming ${\mathbf W}.$ The BS's utility is designed as the sum data rate of all users minus the cost of the forwarding service, which is formulated as \begin{equation}\label{eq:3} \begin{aligned} U=\sum\nolimits_{k=1}^K \log_2\left( 1+\gamma_k\right)-{r}||{\pmb\Phi}||_{0,F}. \end{aligned} \end{equation} The problem of obtaining the optimal strategy for the BS can be formulated as follows: \begin{subequations}\label{s:2} \begin{align} \text{Follower-Problem:}~~&\max_{{\mathbf w}_k, {\pmb\Phi}}~~~U\\ \text{s.t.~}& \sum\nolimits_{k=1}^K ||{\mathbf w}_k||_2^2\leq p^{\max}\\ &|\phi_i|\leq 1, \forall i=1,2, \ldots, (SN). \end{align} \end{subequations} Optimization problems (\ref{s:1}) and (\ref{s:2}) are nonconvex and generally intractable, as their solution usually requires a combinatorial search. A common alternative is to consider the mixed $\ell_{1,2}\text{-norm}$, defined as $||\pmb\Phi||_{1,F}=\sum_{s=1}^S||{\pmb\Phi}_{s}||_F.$ Note that the $\ell_{1,F}\text{-norm}$ behaves as a convex surrogate of the $\ell_{0,F}\text{-norm}$ on $\pmb\Phi$: it encourages each $||{\pmb\Phi}_s||_F$ to be zero, therefore inducing group sparsity \cite{CandEnhancing2007}. For our purpose, we will use the convex $\ell_{1,F}\text{-norm}$ as a group-sparsity-inducing replacement of the nonconvex $\ell_{0,F}\text{-norm}$ in (\ref{s:1}) and (\ref{s:2}). With this transformation, the objectives of the leader and the follower can be relaxed to \begin{subequations}\label{s:3} \begin{align} \text{F-Problem:}~&\max_{{\mathbf w}_k, \pmb\Phi}\sum\nolimits_{k=1}^K\log_2(1+\gamma_k)-r\delta||\pmb\Phi||_{1,F}\\ \text{s.t.}~~&(\ref{s:2}\text{b}) ~\text{and}~(\ref{s:2}\text{c}), \end{align} \end{subequations} where $\delta$ is a positive real tuning parameter that controls the sparsity of the solution, and thus the number of activated modules, and \begin{equation}\label{s:4} \text{L-Problem:}~\max_{r}~r\delta||\pmb\Phi||_{1,F}, ~\text{s.t.~} r>0. \end{equation} For the proposed Stackelberg game, the Stackelberg game equilibrium (SE) is defined as follows. \begin{definition} Define $ {\mathbf W}=[{\mathbf w}_1, {\mathbf w}_2, \ldots,{\mathbf w}_K]\in{\mathbb C}^{M\times K}.$ Let ${ r}^{*}$ be a solution of problem (\ref{s:4}) and $({\mathbf W}^{*}, {\pmb\Phi}^{*})$ be a solution of problem (\ref{s:3}). Then, the point $({ r}^{*}, {\mathbf W}^{*}, {\pmb\Phi}^{*})$ is the Stackelberg equilibrium of the proposed Stackelberg game if, for any $({ r}, {\mathbf W}, {\pmb\Phi})$, the following conditions are satisfied: \begin{equation}\label{eq:10} \begin{aligned} U({ r}^{*}, {\mathbf W}^{*}, {\pmb\Phi}^{*})&\geq U({ r}^{*}, {\mathbf W}, {\pmb\Phi})\\ V({ r}^{*}, {\mathbf W}^{*}, {\pmb\Phi}^{*})&\geq V({ r}, {\mathbf W}^{*}, {\pmb\Phi}^{*}). \end{aligned} \end{equation} \end{definition} \section{Game Analysis} In the proposed game, there is only one player on each side (the BS and the IRS), so the best responses of the BS and the IRS can be readily obtained by solving the {F-Problem} and the {L-Problem}, respectively.
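Before describing the two stages in detail, a toy numerical evaluation of the SINR in (\ref{eq:2}) and the BS utility in (\ref{eq:3}) may be helpful. All dimensions, channel draws and prices below are illustrative only, and all modules are taken to be active. \begin{verbatim}
# Toy check of the SINR and BS utility with random channels.
import numpy as np

rng = np.random.default_rng(0)
M, K, S, N = 4, 4, 8, 8
SN = S * N
cplx = lambda *sh: (rng.standard_normal(sh)
                    + 1j * rng.standard_normal(sh)) / np.sqrt(2)
H, G, Hd = cplx(SN, M), cplx(K, SN), cplx(K, M)  # rows of G are g_k^dag
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, SN)) # unit-modulus reflections
W = cplx(M, K)                                   # columns are w_k
sigma2, r, n_active = 1.0, 0.1, S

h = Hd + (G * phi[None, :]) @ H    # combined channels h_k^dag
P = np.abs(h @ W) ** 2             # P[k, j] = |h_k^dag w_j|^2
gamma = np.diag(P) / (P.sum(axis=1) - np.diag(P) + sigma2)
U = np.log2(1.0 + gamma).sum() - r * n_active    # BS utility
\end{verbatim}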
For the proposed game, the SE can be obtained as follows: for a given ${ r},$ the {F-Problem} (\ref{s:3}) is solved first. Then, with the obtained best-response functions $({\mathbf W}^{*}, {\pmb\Phi}^{*})$ of the BS, we solve the {L-Problem} (\ref{s:4}) for the optimal price ${ r}^{*}.$ \subsection{Strategy Analysis for the BS} Denoting the price for serving the BS by ${ r},$ the optimization problem (\ref{s:3}) can be solved by treating the parameter $r$ as a constant. To tackle the logarithm in the objective function of (\ref{s:3}), we apply the Lagrangian dual transform. The objective function of (\ref{s:3}) can then be equivalently written as \begin{equation}\label{eq:12} \begin{aligned} \max_{{\mathbf W}, {\pmb\Phi}}~\sum\nolimits_{k=1}^K\log_2\left(1+\alpha_k\right)&-\sum\nolimits_{k=1}^K\alpha_k +\sum\nolimits_{k=1}^K\frac{(1+\alpha_k)\gamma_k}{1+\gamma_k}\\ &-{ r}\delta\sum\nolimits_{s=1}^S||{\pmb\Phi}_s||_F. \end{aligned} \end{equation} In (\ref{eq:12}), when ${\mathbf W}$ and ${\pmb\Phi}$ are held fixed, the optimal $\alpha_k$ is $\alpha_k^{*}=\gamma_k, \forall k\in{\cal K}.$ Then, for a given price ${ r}$ and fixed $\{\alpha_k\}_{k\in{\cal K}},$ optimizing ${\mathbf W}$ and ${\pmb\Phi}$ reduces to \begin{equation}\label{eq:14} \begin{aligned} \max_{{\mathbf W}, {\pmb\Phi}} \sum\nolimits_{k=1}^K\frac{\tilde{\alpha}_k\gamma_k}{1+\gamma_k}-{ r}\delta\sum\nolimits_{s=1}^S||{\pmb\Phi}_s||_F, \end{aligned} \end{equation} where $\tilde{\alpha}_k=1+\alpha_k.$ \subsubsection{Transmit Beamforming} In the following, we investigate how to find a better beamforming matrix ${\mathbf W}$ for (\ref{eq:14}), given fixed ${\pmb\Phi}$. Denote the combined channel for user $k$ by \begin{equation}\label{eq:15} {\mathbf h}_k^{\dag}={\mathbf h}_{d,k}^{\dag}+{\mathbf g}_k^{\dag}{\pmb\Phi}{\mathbf H}, \forall k\in{\cal K.} \end{equation} Then, the SINR $\gamma_k$ in (\ref{eq:2}) is given by \begin{equation}\label{eq:16} \gamma_k=\frac{|{\mathbf h}_k^{\dag}{\mathbf w}_k|^2} {\sum\nolimits_{j\neq k}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2}. \end{equation} Using $\gamma_k$ in (\ref{eq:16}), the objective function of (\ref{eq:14}) can be written as a function of $\{{\mathbf w}_k\}_{k=1}^K:$ \begin{equation}\label{eq:17} \begin{aligned} \sum_{k=1}^K\frac{\tilde{\alpha}_k\gamma_k}{1+\gamma_k}-&{ r}\delta\sum_{s=1}^S||{\pmb\Phi}_s||_F =\\&\sum_{k=1}^K\frac{\tilde{\alpha}_k|{\mathbf h}_k^{\dag}{\mathbf w}_k|^2} {\sum_{j=1}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2}-{ r}\delta\sum_{s=1}^S||{\pmb\Phi}_s||_F. \end{aligned} \end{equation} Thus, for given ${ r},$ $\{{\alpha}_k\}_{k\in{\cal K}},$ and ${\pmb\Phi}$, optimizing $\{{\mathbf w}_k\}_{k=1}^K$ becomes \begin{equation}\label{eq:18} \begin{aligned} \max_{\{{\mathbf w}_k\}_{k=1}^K}& \sum_{k=1}^K\frac{\tilde{\alpha}_k|{\mathbf h}_k^{\dag}{\mathbf w}_k|^2} {\sum_{j=1}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2}\\ \text{s.t.~}&\sum_{k=1}^K||{\mathbf w}_k||_2^2\leq p^{\max}. \end{aligned} \end{equation} Using the quadratic transform, the objective function of (\ref{eq:18}) is reformulated as \begin{equation}\label{eq:19} \begin{aligned} \sum_{k=1}^K&\frac{\tilde{\alpha}_k|{\mathbf h}_k^{\dag}{\mathbf w}_k|^2} {\sum_{j=1}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2} =\sum\nolimits_{k=1}^K2\sqrt{\tilde{\alpha}_k}\text{Re}\left\{ \beta_k^{\ddag}{\mathbf h}_k^{\dag}{\mathbf w}_k \right\}\\ &-\sum\nolimits_{k=1}^K|\beta_k|^2\left(\sum\nolimits_{j=1}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2\right), \end{aligned} \end{equation} where $(\cdot)^{\ddag}$ denotes the conjugate and $\beta_k\in{\mathbb C}$ is an auxiliary variable.
Then, solving problem (\ref{eq:18}) over $\{{\mathbf w}_k\}_{k=1}^K$ is equivalent to solving the following problem over $\{{\mathbf w}_k\}_{k=1}^K$ and ${\pmb\beta}=[\beta_1, \ldots, \beta_K]^T\in{\mathbb C}^{K\times 1}:$ \begin{equation}\label{eq:20} \begin{aligned} \max_{\{{\mathbf w}_k\}_{k=1}^K, {\pmb\beta}}~&~\sum\nolimits_{k=1}^K2\sqrt{\tilde{\alpha}_k}\text{Re}\left\{ \beta_k^{\ddag}{\mathbf h}_k^{\dag}{\mathbf w}_k \right\}\\ &-\sum\nolimits_{k=1}^K|\beta_k|^2\left(\sum\nolimits_{j=1}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2 \right)\\ \text{s.t.~}&\sum\nolimits_{k=1}^K||{\mathbf w}_k||_2^2\leq p^{\max}. \end{aligned} \end{equation} The optimal $\beta_k$ for given $\{{\mathbf w}_k\}_{k=1}^K$ is \begin{equation}\label{eq:21} \beta_k^{*}=\frac{\sqrt{\tilde{\alpha}_k}{\mathbf h}_k^{\dag}{\mathbf w}_k} {\sum_{j=1}^K|{\mathbf h}_k^{\dag}{\mathbf w}_j|^2+\sigma^2}. \end{equation} Then, fixing ${\pmb\beta},$ the optimal ${\mathbf w}_k$ is \begin{equation}\label{eq:22} {\mathbf w}_k^{*}=\sqrt{\tilde{\alpha}_k}\beta_k(\lambda_0{\mathbf I}_M+\sum\nolimits_{j=1}^K|\beta_j|^2{\mathbf h}_j{\mathbf h}_j^{\dag})^{-1}{\mathbf h}_k, \end{equation} where $\lambda_0$ is the dual variable introduced for the power constraint, which is optimally determined by \begin{equation}\label{eq:23} \lambda_0^{*}=\max\{0, p^{\max}-\sum\nolimits_{k=1}^K||{\mathbf w}_k||_2^2 \}. \end{equation} \subsubsection{Optimizing the Reflection Response Matrix ${\pmb\Phi}$} We now optimize ${\pmb\Phi}$ in (\ref{eq:14}) for a fixed price ${ r},$ $\{\alpha_k\}_{k\in{\cal K}},$ and $\{{\mathbf w}_k\}_{k=1}^K$. Using $\gamma_k$ defined in (\ref{eq:2}), the objective function of (\ref{eq:14}) is expressed as a function of ${\pmb\Phi}$: \begin{equation}\label{eq:24} \sum_{k=1}^K\frac{\tilde{\alpha}_k|({\mathbf h}_{d,k}^{\dag}+{\mathbf g}_k^{\dag}{\pmb\Phi}{\mathbf H}){\mathbf w}_k|^2} {\sum_{j=1}^K|({\mathbf h}_{d,k}^{\dag}+{\mathbf g}_k^{\dag}{\pmb\Phi}{\mathbf H}){\mathbf w}_j|^2+\sigma^2}-{ r}\delta\sum_{s=1}^S||{\pmb\Phi}_s||_F. \end{equation} Define ${\mathbf a}_{j,k}=\text{diag}\{{\mathbf g}_k^{\dag}\}{\mathbf H}{\mathbf w}_j$ and $b_{j,k}={\mathbf h}_{d,k}^{\dag}{\mathbf w}_j, \forall k,j=1, 2, \ldots, K.$ Combining these with the definition of ${\pmb\phi}$, (\ref{eq:24}) can be rewritten as \begin{equation}\label{eq:25} \sum_{k=1}^K\frac{\tilde{\alpha}_k|b_{k,k}+{\pmb\phi}^{\dag}{\mathbf a}_{k,k}|^2} {\sum_{j=1}^K|b_{j,k}+{\pmb\phi}^{\dag}{\mathbf a}_{j,k}|^2+\sigma^2}-{ r}\delta\sum_{s=1}^S||{\pmb\Phi}_s||_F. \end{equation} Noting that $\sum_{s=1}^S||{\pmb\Phi}_s||_F=\sum_{s=1}^S||{\pmb\phi}_s||_2,$ the optimization of ${\pmb\phi}$ can be represented as follows: \begin{equation}\label{eq:26} \begin{aligned} \max_{{\pmb\phi}}~&\sum_{k=1}^K\frac{\tilde{\alpha}_k|b_{k,k}+{\pmb\phi}^{\dag}{\mathbf a}_{k,k}|^2} {\sum_{j=1}^K|b_{j,k}+{\pmb\phi}^{\dag}{\mathbf a}_{j,k}|^2+\sigma^2}-{ r}\delta\sum_{s=1}^S||{\pmb\phi}_s||_2\\ \text{s.t.~} &{\pmb\phi}^{\dag}{\mathbf e}_i{\mathbf e}_i^{\dag}{\pmb\phi}\leq 1, \forall i=1, 2, \ldots, (SN).
\end{aligned} \end{equation} Based on the quadratic transform, the new objective function of (\ref{eq:26}) is \begin{equation}\label{eq:27} \begin{aligned} &\sum\nolimits_{k=1}^K2\sqrt{\tilde{\alpha}_k}\text{Re}\left\{ \epsilon_k^{\ddag}{\pmb\phi}^{\dag}{\mathbf a}_{k,k} +\epsilon_k^{\ddag}b_{k,k}\right\}-\sum\nolimits_{k=1}^K|\epsilon_k|^2\\ &\times\left( \sum\nolimits_{j=1}^K|b_{j,k}+{\pmb\phi}^{\dag}{\mathbf a}_{j,k}|^2+\sigma^2 \right)-{ r}\delta\sum\nolimits_{s=1}^S||{\pmb\phi}_s||_2, \end{aligned} \end{equation} where ${\pmb\epsilon}=[\epsilon_1, \ldots, \epsilon_K]^T\in{\mathbb C}^{K\times 1}$ is the auxiliary variable vector. As before, we optimize ${\pmb\phi}$ and $\pmb\epsilon$ alternately \cite{Guo2019Weighted}. The optimal $\epsilon_k$ for a given $\pmb\phi$ is easily obtained as \begin{equation}\label{eq:28} \epsilon_k^{*}=\frac{\sqrt{\tilde{\alpha}_k}(b_{k,k}+{\pmb\phi}^{\dag}{\mathbf a}_{k,k})} {\sum_{j=1}^K|b_{j,k}+{\pmb\phi}^{\dag}{\mathbf a}_{j,k}|^2+\sigma^2}. \end{equation} The remaining problem is then to optimize $\pmb\phi$ for a given $\pmb\epsilon.$ We introduce a new variable ${\pmb\theta}={\pmb\phi}\in{\mathbb C}^{(SN)\times 1}$, where ${\pmb\theta}_s\in{\mathbb C}^{N\times 1}$ represents the $s$th block of the vector ${\pmb\theta}.$ Thus, for fixed ${\pmb\epsilon},$ the optimization problem of ${\pmb\phi}$ is given as follows: \begin{equation}\label{eq:29} \begin{aligned} \max_{{\pmb\phi}, {\pmb\theta}}~&\sum_{k=1}^K2\sqrt{\tilde{\alpha}_k}\text{Re}\left\{ \epsilon_k^{\ddag}{\pmb\phi}^{\dag}{\mathbf a}_{k,k}+\epsilon_k^{\ddag}b_{k,k}\right\}-\sum_{k=1}^K|\epsilon_k|^2\\ &\left( \sum_{j=1}^K|b_{j,k}+{\pmb\phi}^{\dag}{\mathbf a}_{j,k}|^2+\sigma^2 \right)-{ r}\delta\sum_{s=1}^S||{\pmb\theta}_s||_2\\ \text{s.t.}~& {\pmb\phi}^{\dag}{\mathbf e}_i{\mathbf e}_i^{\dag}{\pmb\phi}\leq 1, \forall i=1, 2, \ldots, (SN)\\ &{\pmb\theta}={\pmb\phi}. \end{aligned} \end{equation} Utilizing the method of augmented Lagrangian minimization, (\ref{eq:29}) can be handled by solving \begin{equation}\label{eq:30} \begin{aligned} \min_{{\pmb\Lambda}}\max_{{\pmb\phi}, {\pmb\theta}}~&~ L_c({\pmb\phi}, {\pmb\theta}, {\pmb\Lambda})\\ \text{s.t.~}&~{\pmb\phi}^{\dag}{\mathbf e}_i{\mathbf e}_i^{\dag}{\pmb\phi}\leq 1, \forall i=1, 2, \ldots, (SN), \end{aligned} \end{equation} where $c>0$ is the penalty factor and ${\pmb\Lambda}\in{\mathbb C}^{(SN)\times 1}$ is the Lagrangian multiplier vector for the constraint ${\pmb\theta}={\pmb\phi}.$ The partial augmented Lagrangian function is defined as \begin{equation}\label{eq:31} \begin{aligned} L_c&({\pmb\phi},{\pmb\theta},{\pmb\Lambda})=\sum_{k=1}^K2\sqrt{\tilde{\alpha}_k}\text{Re}\left\{ \epsilon_k^{\ddag}{\pmb\phi}^{\dag}{\mathbf a}_{k,k}+\epsilon_k^{\ddag}b_{k,k} \right\}\\ &-\sum_{k=1}^K|\epsilon_k|^2\left( \sum_{j=1}^K|b_{j,k}+{\pmb\phi}^{\dag}{\mathbf a}_{j,k}|^2+\sigma^2 \right)\\ &-{ r}\delta\sum_{s=1}^S||{\pmb\theta}_s||_2- \text{Re}\left\{\text{Tr}\left[{\pmb\Lambda}^{\dag}({\pmb\theta}-{\pmb\phi}) \right]\right\} -\frac{c}{2}||{\pmb\theta}-{\pmb\phi}||_2^2. \end{aligned} \end{equation} \begin{itemize} \item{\textit{Updating} ${\pmb\phi}$: By dual theory and the KKT conditions, the optimal solution is given by (\ref{eq:32}).
{\small \begin{figure*} \begin{equation}\label{eq:32} \begin{aligned} {\pmb\phi}^{*}=\left(2\sum_{k=1}^K|\epsilon_k|^2 \sum_{j=1}^K {\mathbf a}_{j,k}{\mathbf a}_{j,k}^{\dag}+2\sum_{i=1}^{SN}\mu_i{\mathbf e}_i{\mathbf e}_i^{\dag}+c{\mathbf I}_{SN} \right)^{-1} \left(2\sum_{k=1}^K\sqrt{\tilde{\alpha}_k}\epsilon_k^{\ddag}{\mathbf a}_{k,k}+{\pmb\Lambda}+c{\pmb\theta}-2\sum_{k=1}^K|\epsilon_k|^2\sum_{j=1}^Kb_{j,k}{\mathbf a}_{j,k}\right),\\ \hline \end{aligned} \end{equation} \end{figure*}} The Lagrangian multiplier $\mu_i$ is updated by \begin{equation}\label{eq:33} \mu_i^{*}=\max\left\{0, 1-{\pmb\phi}^{\dag}{\mathbf e}_i{\mathbf e}_i^{\dag}{\pmb\phi}\right\}. \end{equation} } \item{\textit{Updating } ${\pmb\theta}$: The problem of ${\pmb\theta}$ is an unconstrained group least absolute shrinkage and selection operator (group Lasso) problem \cite{Yuan2006Model}, i.e., \setcounter{equation}{31} \begin{equation}\label{eq:34} \max_{\pmb\theta}~-{r}\delta\sum_{s=1}^S||{\pmb\theta}_s||_2-\text{Re}\left\{\text{Tr} \left[{\pmb\Lambda}^{\dag}({\pmb\theta}-{\pmb\phi})\right] \right\}-\frac{c}{2}||{\pmb\theta}-{\pmb\phi}||_2^2. \end{equation} Let ${\pmb\Lambda}_s\in{\mathbb C}^{N\times 1}$ denote the $s\text{th}$ row block of the vector ${\pmb\Lambda}, s=1, 2, \ldots, S.$ Then, (\ref{eq:34}) can be divided into $S$ independent problems of ${\pmb\theta}_s$ for $s=1, 2, \ldots, S$: \begin{equation}\label{eq:35} \max_{{\pmb\theta}_s}~-{ r}\delta||{\pmb\theta}_s||_2-\text{Re}\left\{\text{Tr}\left[{\pmb\Lambda}_s^{\dag} ({\pmb\theta}_s-{\pmb\phi}_s) \right] \right\}-\frac{c}{2}||{\pmb\theta}_s-{\pmb\phi}_s||_2^2. \end{equation} Defining ${\mathbf x}_s=c{\pmb\phi}_s-{\pmb\Lambda}_s,$ the first-order optimality condition of (\ref{eq:35}) reads ${\mathbf x}_s-c{\pmb\theta}_s\in{ r}\delta\,\partial ||{\pmb\theta}_s||_2$, and thus we easily obtain \begin{equation}\label{eq:36} {\pmb\theta}_s=\left\{ \begin{array}{ccc} &{\mathbf 0}, & \text{~if~} ||{\mathbf x}_s||_2\leq { r}\delta\\ &\frac{(||{\mathbf x}_s||_2-{ r}\delta){\mathbf x}_s} {c||{\mathbf x}_s||_2}, &\text{otherwise}. \end{array}\right. \end{equation} The update of the Lagrangian vector $\pmb\Lambda_s$ is given by \begin{equation}\label{eq:37} {\pmb\Lambda}_s={\pmb\Lambda}_s+c({\pmb\theta}_s-{\pmb\phi}_s), \forall s=1, 2, \ldots, S. \end{equation} } \end{itemize} \subsection{Game Analysis for the IRS Pricing} Substituting (\ref{eq:36}) into the { L-Problem} in (\ref{s:4}), the optimization problem at the IRS side can be formulated as \begin{equation}\label{eq:38} \max_{{ r}>0}~ \sum_{s=1}^S\kappa_s\frac{-\delta^2{ r}^2+\delta||{\mathbf x}_s||_2{ r}} {c}, \end{equation} where $\kappa_s$ is an indicator function, i.e., \begin{equation}\label{eq:39} \kappa_s=\left\{\begin{array}{ccc} &0, &\text{~if~} ||{\mathbf x}_s||_2\leq { r}\delta\\ &1, &\text{~otherwise}. \end{array}\right. \end{equation} The optimal solution of (\ref{eq:38}) is \begin{equation}\label{eq:40} { r}^{*}=\frac{\sum_{s=1}^S\kappa_s||{\mathbf x}_s||_2} {2\delta\sum_{s=1}^S\kappa_s}. \end{equation} The entire framework, including the identification of the price and the active module subsets as well as the transmit beamforming and the phase shifts, is summarized in Algorithm 1.
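The two closed-form updates at the heart of the scheme, the block soft-thresholding of (\ref{eq:36}) and the price of (\ref{eq:40}), are straightforward to prototype; the following sketch is illustrative only. \begin{verbatim}
# Block soft-thresholding (group-Lasso proximal step) and price update.
import numpy as np

def update_theta(phi_s, Lambda_s, r, delta, c):
    x = c * phi_s - Lambda_s
    nx = np.linalg.norm(x)
    if nx <= r * delta:
        return np.zeros_like(phi_s)            # module s switched off
    return (nx - r * delta) / (c * nx) * x     # shrunk towards zero

def optimal_price(x_norms, r_current, delta):
    # kappa_s depends on the current price, so the closed form is
    # evaluated at the current operating point.
    kappa = x_norms > r_current * delta
    if not kappa.any():
        return 0.0                             # no active module
    return x_norms[kappa].sum() / (2.0 * delta * kappa.sum())
\end{verbatim}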
\begin{algorithm}[!htp] \caption{Algorithm Summary} \label{alg:Framwork} \hspace*{0.02in} {\bf Initial:} The IRS initializes the price ${ r}(1),$ and the outer iteration number is set to $\tau=1$ \\ \hspace*{0.02in} {\bf Part I:} the alternating optimization for solving (\ref{eq:14}) \\ (1.1) Initialize ${\mathbf W}(1)$ and ${\pmb\Phi}(1)$ to feasible values, and set the iteration number $t=1.$\\ \hspace*{0.02in} {\bf Repeat}\\ (1.2) Update the nominal SINR $\alpha_k(t), \forall k\in{\cal K}$;\\ (1.3) Update $\beta_k(t), \forall k\in{\cal K},$ by (\ref{eq:21});\\ (1.4) Update the transmit beamforming ${\mathbf W}(t)$ by (\ref{eq:22}); update $\lambda_0(t)$ by (\ref{eq:23});\\ (1.5) Update $\epsilon_k(t), \forall k\in{\cal K},$ by (\ref{eq:28});\\ (1.6) Update ${\pmb\phi}(t)$ by (\ref{eq:32}); update $\mu_i(t), \forall i=1, 2, \ldots, (SN),$ by (\ref{eq:33});\\ (1.7) Update ${\pmb\theta}_s(t)$ by (\ref{eq:36}) in parallel for $s=1, 2,\ldots, S;$\\ (1.8) Update $\pmb\Lambda_s(t)$ by (\ref{eq:37}) in parallel for $s=1, 2,\ldots, S;$\\ (1.9) Update $t=t+1;$\\ (1.10) {\bf Until} the value of the objective function (\ref{eq:12}) converges.\\ \hspace*{0.02in} {\bf Part II:} Update the price ${ r}$ by solving problem (\ref{eq:38}) in the outer loop \\ (2.1) Solve problem (\ref{eq:38}) for given $\{{\pmb\theta}_s(t)\}_{s=1}^S, \{{\pmb\Lambda}_s(t)\}_{s=1}^S;$ update ${ r}(\tau)$ by (\ref{eq:40});\\ (2.2) {\bf Until} the utility of the IRS converges. \end{algorithm} \section{Simulation Results} In this section, numerical results are presented to evaluate the performance of the proposed resource allocation strategy based on the approach of active module pricing. For simplicity, we set the balance parameter $\delta$ to $0.1.$ To keep the complexity of the simulations tractable, we focus on the scenario where $K\in\{4, 6\}$ users are randomly deployed within a circular cell of radius $10\text{~m}$ centered at $(200,0)\text{~m}$; the BS and the IRS are deployed at $(0,0)\text{~m}$ and $(50, 50)\text{~m},$ respectively, and the number of reflecting elements of each module is set to $N=8.$ We assume that the BS is equipped with $4$ ($6$) antennas for $K=4$ ($K=6$). Following \cite{Zheng2020Intelligent}, we set the path loss exponent of the direct link to $3.5$, and the path loss at the reference distance of $1\text{~m}$ is set to $30\text{~dB}$ for each individual link. For the IRS-aided link, the path loss exponents from the BS to the IRS and from the IRS to the users are both set to $2$. For simplicity, we assume the Rayleigh fading model to account for small-scale fading. \begin{figure}[!t] \centering \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1.1\linewidth]{fig1-1} \caption{Impact of $p^{\max}$ on $U$.} \label{fig:1} \end{minipage} \centering \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1.1\linewidth]{fig2-1} \caption{Impact of $p^{\max}$ on $V$.} \label{fig:2} \end{minipage} \end{figure} The performance of the Stackelberg game-based ADMM scheme is evaluated against two benchmark schemes, i.e., the random pricing scheme and the direct-link-only scheme. In the random pricing scheme, the IRS randomly determines its strategies, without considering the existence of the BS. The direct-link-only scheme means no IRS aid, i.e., no module is activated at the IRS. Figures \ref{fig:1} and \ref{fig:2} show the effect of the maximum transmit power $p^{\max}$ on the utilities of the BS and the IRS, respectively, when the number of modules is $8.$
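The channel model of this setup is easy to reproduce; the sketch below draws one realization with the stated path-loss exponents and reference loss (the user position, and hence the distances, are an illustrative choice). \begin{verbatim}
# One draw of the simulated channels: Rayleigh fading with distance-
# dependent path loss (30 dB at the 1 m reference distance).
import numpy as np

rng = np.random.default_rng(1)

def pathloss(d, exponent, pl_ref_db=30.0):
    return 10.0 ** (-(pl_ref_db + 10.0 * exponent * np.log10(d)) / 10.0)

def rayleigh(*shape):
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

d_bu, d_bi, d_iu = 200.0, np.hypot(50, 50), np.hypot(150, 50)
h_d = np.sqrt(pathloss(d_bu, 3.5)) * rayleigh(4)      # BS-user, exp. 3.5
H   = np.sqrt(pathloss(d_bi, 2.0)) * rayleigh(64, 4)  # BS-IRS, exp. 2
g   = np.sqrt(pathloss(d_iu, 2.0)) * rayleigh(64)     # IRS-user, exp. 2
\end{verbatim}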
Figures~\ref{fig:3} and~\ref{fig:4} depict the sum rate of all users and the service price versus the maximum transmit power at the BS, respectively. For both the BS and the IRS, the Stackelberg game-based ADMM scheme achieves the highest utility compared with the random pricing and direct-link-only schemes, which indicates that the proposed pricing-based Stackelberg game scheme performs best in resource allocation for IRS-aided communications. From the results, we observe that the utility value of the BS increases as $p^{\max}$ grows from $-5\text{~dBm}$ to $5\text{~dBm}.$ Meanwhile, the utility value of the IRS achieved by the Stackelberg game-based ADMM scheme first decreases slowly until the maximum transmit power reaches $0\text{~dBm}$, and then decreases rapidly as $p^{\max}$ increases further. This is because the cost of power consumption is not considered in the utility of the BS; thereby, the BS tends to select a small number of active modules when the transmit power is sufficient. Meanwhile, for $p^{\max}>0\text{~dBm}$, the IRS needs to incentivize the BS to select reflection resources through a lower price, which can be observed from Fig. \ref{fig:4}. \begin{figure}[!t] \centering \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1.1\linewidth]{fig3-1} \caption{Sum rate vs. $p^{\max}.$} \label{fig:3} \end{minipage} \centering \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=1.1\linewidth]{fig4-1} \caption{ Prices vs. $p^{\max}.$ } \label{fig:4} \end{minipage} \end{figure} \section{Conclusion} The adoption of an IRS for downlink multi-user communication from a multi-antenna BS was investigated in this paper. Specifically, we developed a Stackelberg game approach to analyze the interaction between the BS and the IRS operator, considering that the IRS operator may be selfish or have its own objective. Different from the existing studies on IRS that merely focus on tuning the reflection coefficients of all the reflecting elements, we considered reflection resource allocation, which can be realized via active module selection under the proposed modular IRS architecture, in which each module is independently activated by its own switch. A Stackelberg game-based ADMM was proposed to solve for both the transmit beamforming at the BS and the passive beamforming of the activated modules. Numerical examples were presented to verify the proposed studies, and showed that the proposed scheme is effective in improving the utilities of both the BS and the IRS.
\section*{Acknowledgement} {This work was supported in part by the National Science Foundation of China under Grant number 61671131, also supported by the National Research Foundation (NRF), Singapore, under Singapore Energy Market Authority (EMA), Energy Resilience, NRF2017EWT-EP003-041, Singapore NRF2015-NRF-ISF001-2277, Singapore NRF National Satellite of Excellence, Design Science and Technology for Secure Critical Infrastructure NSoE DeST-SCI2019-0007, A*STAR-NTU-SUTD Joint Research Grant on Artificial Intelligence for the Future of Manufacturing RGANS1906, Wallenberg AI, Autonomous Systems and Software Program and Nanyang Technological University (WASP/NTU) under grant M4082187 (4080), Singapore Ministry of Education (MOE) Tier 1 (RG16/20), and NTU-WeBank JRI (NWJ-2020-004), Alibaba Group through Alibaba Innovative Research (AIR) Program, Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University (NTU) Startup Grant, Alibaba-NTU Singapore Joint Research Institute (JRI), Singapore Ministry of Education Academic Research Fund Tier 1 RG128/18, Tier 1 RG115/19, Tier 1 RT07/19, Tier 1 RT01/19, and Tier 2 MOE2019-T2-1-176, NTU-WASP Joint Project, Singapore National Research Foundation (NRF) under its Strategic Capability Research Centres Funding Initiative: Strategic Centre for Research in Privacy-Preserving Technologies \& Systems (SCRIPTS), Energy Research Institute @NTU (ERIAN), Singapore NRF National Satellite of Excellence, Design Science and Technology for Secure Critical Infrastructure NSoE DeST-SCI2019-0012, AI Singapore (AISG) 100 Experiments (100E) programme, NTU Project for Large Vertical Take-Off \& Landing (VTOL) Research Platform.}
{ "timestamp": "2020-12-17T02:19:57", "yymm": "2012", "arxiv_id": "2012.08989", "language": "en", "url": "https://arxiv.org/abs/2012.08989" }
\section{Introduction and main results} \subsection{Introduction} Recall that a symplectic manifold is a smooth even-dimensional manifold $M$, equipped with a closed non-degenerate differential 2-form $\omega$. Non-degeneracy of $\omega$ means that its top wedge power, $\omega^n$, is a volume form. Symplectic manifolds serve as models of phase spaces of classical mechanics. The group of all automorphisms of a symplectic manifold, i.e. maps $M \to M$ which preserve $\omega$, contains a subgroup of all physically possible mechanical motions. This is the group of Hamiltonian diffeomorphisms, denoted $Ham(M,\omega)$, which plays a central role in symplectic topology and Hamiltonian dynamics. In 1990 H. Hofer introduced a remarkable bi-invariant Finsler metric on $Ham$ (see~\cite{H}). Studying the coarse geometry of the metric space $Ham$ is an important problem of modern symplectic topology, and is still far from well-understood. This paper takes a step in this direction. Let us briefly outline our main results. Denote by $Powers_k \subset Ham$ the set of elements admitting a root of degree $k$. It has been shown in~\cite{PS14} that for surfaces of genus $\geq 4$ the complement $Ham \setminus Powers_k$ contains arbitrarily large balls, with respect to the metric introduced by Hofer. We extend this result to surfaces of genus 2 and 3. Further, according to~\cite{CZ}, any asymptotic cone (in the sense of Gromov, see~\cite{G}) of a group with a bi-invariant metric has a natural group structure. It has been shown in~\cite{10} that for a surface $M$ of genus $\geq 4$ any such cone of $Ham(M,\omega)$ contains a free group with 2 generators. We extend this result to surfaces of genus 2 and 3. The proofs of the above-mentioned results for surfaces of genus $\geq 4$ involve Floer homology of non-contractible closed orbits on the surface, and proceed by considering special elements of $Ham$, the so-called eggbeater maps (see~\cite{FO} and~\cite{PS14}), which originate in chaotic dynamics. An algebraic analysis of non-contractible closed orbits of these maps plays an important role in the proof. This is exactly the place where we had to modify the original arguments, in order to extend the results to the cases of genera 2 and 3. Our main innovations are two algebraic results about homomorphisms from a free group into a surface group (see Lemma~\ref{lemma:2} and Claim~\ref{cl:incompressible} below). \subsection{Preliminaries} Recall that Hofer's metric on $Ham(M,\omega)$ is defined as \begin{myequation} d_H(f,g) = \inf_H \int_0^1 \left( \max_M H_t - \min_M H_t \right) dt , \end{myequation} for $f,g \in Ham(M,\omega)$, where the infimum is taken over all smooth $H: S^1 \times M \to \mathbb{R}$ that generate $f^{-1}g$ and where $H_t = H(t,\cdot)$. This is a bi-invariant metric on $Ham(M)$, and the fact that it is a genuine metric, as opposed to a pseudo-metric, is non-trivial (see~\cite{H},\cite{LM}). Hofer's norm of a diffeomorphism is its Hofer distance to the identity, and is denoted \[ ||\cdot||_H = d_H(id,\cdot) . \]\\ The focus of this paper is the metric space $(Ham(M), d_H)$. The question of whether $Ham(M)$ has infinite diameter with respect to Hofer's metric for every symplectic manifold is an important open problem in this field; it is conjectured (see the discussion in Section 14.2 of~\cite{MS}) that the answer is positive for every closed symplectic manifold $M$. The conjecture has been partially confirmed (see~\cite{Sc},\cite{P},\cite{MS}).
\\ Rather than asking about the diameter of the group $Ham(M)$, one can ask about the supremum of distances $d_H(f,X)$ for some subset $X \subset Ham(M)$. A natural set $X$ to ask this question for is $Aut(M,\omega)$, the set of autonomous (i.e. "time-independent") Hamiltonian diffeomorphisms, and another interesting family of sets are the sets $Powers_k = \{\psi^k | \psi \in Ham(M,\omega)\}$ of Hamiltonian diffeomorphisms which admit $k^{th}$ roots, for $k \geq 2$ an integer. The quantities queried are the following: \begin{myequation} aut(M,\omega) = \sup_{\phi \in Ham(M, \omega)} d_H(\phi, Aut(M, \omega)) ,\\ powers_k(M,\omega) = \sup_{\phi \in Ham(M, \omega)} d_H(\phi, Powers_k(M,\omega)) . \end{myequation} Note that for a symplectic manifold $M$, showing that $aut(M) = \infty$ or that for any $k \geq 2$, $powers_k(M) = \infty$ would answer the Hamiltonian diameter question for $M$. L. Polterovich and E. Shelukhin conjectured in~\cite{PS14} that $aut(M) = \infty$ for all closed symplectic manifolds, and made a first step in that direction: they showed that symplectic surfaces $M$ of genus $\geq 4$ have $powers_k(M) = \infty$ for all $k \geq 2$. One of the results in this paper states that this is also true for symplectic surfaces of genera 2 and 3 (see Theorem~\ref{thm:2} below). \\ Our second result concerns the coarse structure of the metric space $(Ham(M), d_H)$. To state it we need the notions of the asymptotic cone of a metric space, which is an important notion in coarse geometry (see~\cite{G}), and of ultrafilters and ultralimits. A \textit{filter} on a partially ordered set $(P,\leq)$ is a non-empty proper subset $F \subset P$ that is upward closed and downward directed; i.e. if $x \in F, y \in P, x \leq y$ then $y \in F$, and also $\forall x,y \in F \ \exists z \in F$ such that $z \leq x,y$. A \textit{non-principal ultrafilter} on $(P,\leq)$ is a filter $F$ on $(P,\leq)$ such that there is no filter $F^\prime$ on $P$ with $F \subsetneq F^\prime$, and such that $F$ is not of the form $\{x \in P | y \leq x\}$ for any $y \in P$. Given a metric space $(X,d)$, an ultrafilter $\mathcal{U}$ on the power set of the natural numbers $2^\mathbb{N}$ (equipped with the inclusion order) and a sequence of points $(x_n)$ in $X$, a point $x \in X$ is the $\mathcal{U}$-\textit{ultralimit} of $(x_n)$, denoted $\lim_\mathcal{U} x_n$, if for any $\epsilon > 0$, $\{n | d(x_n,x) \leq \epsilon\} \in \mathcal{U}$. The ultralimit does not necessarily exist, but it does exist, for instance, whenever $(x_n)$ is a bounded sequence of real numbers. Let $(X,d)$ be a metric space, fix $\mathcal{U}$ a non-principal ultrafilter on $2^\mathbb{N}$, and fix some basepoint $x_0 \in X$. The \textit{asymptotic cone} of $(X,d)$ is a metric space $Cone_\mathcal{U}(X,d)$ whose underlying set is \begin{myequation} \left\{ \left( x_k \right)_{k \in \mathbb{N}} \in X^\mathbb{N} \middle| \exists C > 0 \ s.t.\ \forall k: \frac{d(x_k,x_0)}{k} < C \right\} \bigg/ \thicksim , \end{myequation} where $(x_k) \sim (y_k)$ if $\lim_\mathcal{U} \frac{d(x_k,y_k)}{k} = 0$, and whose metric is \begin{myequation} d_\mathcal{U} ([(x_k)], [(y_k)]) = \lim_\mathcal{U} \frac{d(x_k,y_k)}{k} . \end{myequation} Assume additionally that $X$ is a group, and that $d$ is a bi-invariant metric. Then $Cone_\mathcal{U}(X,d)$ is also a group, with multiplication \begin{myequation} [(x_k)] \cdot [(y_k)] = [(x_k \cdot y_k)] . \end{myequation} Since $d$ is bi-invariant, this multiplication is well defined, and $d_\mathcal{U}$ is also bi-invariant.
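To spell out why bi-invariance is needed here: if $(x_k) \thicksim (x_k^\prime)$ and $(y_k) \thicksim (y_k^\prime)$, then by the triangle inequality together with right- and left-invariance of $d$, \begin{myequation} d(x_k y_k, x_k^\prime y_k^\prime) \leq d(x_k y_k, x_k^\prime y_k) + d(x_k^\prime y_k, x_k^\prime y_k^\prime) = d(x_k, x_k^\prime) + d(y_k, y_k^\prime) , \end{myequation} so dividing by $k$ and passing to the $\mathcal{U}$-ultralimit shows $[(x_k y_k)] = [(x_k^\prime y_k^\prime)]$, i.e. the product does not depend on the chosen representatives.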
Elements of the asymptotic cone represent directions (or rather, velocities) in which one can go to infinity in the base space $X$. For example, every asymptotic cone of a bounded metric space is a single point, while the asymptotic cone of the hyperbolic plane is an $\mathbb{R}$-tree with uncountably many branches at each point. The asymptotic cone is an invariant of the coarse structure, or the large-scale properties, of a metric space, in the sense that quasi-isometric spaces have bi-Lipschitz homeomorphic asymptotic cones (see~\cite{R} for more on coarse structure, asymptotic cones, and quasi-isometry). \\ The focus of this paper is the geometry of $Ham(M)$ for $(M,\omega)$ a symplectic manifold, therefore we consider $Cone_\mathcal{U}(Ham(M), d_H)$ for a non-principal ultrafilter $\mathcal{U}$ on $2^\mathbb{N}$. In~\cite{10}, D. Alvarez-Gavela et al. show that given a symplectic surface $M$ of genus $\geq 4$, there exists a monomorphism $F_2 \to Cone_\mathcal{U}(Ham(M))$, where $F_2$ is the free group on two generators, and therefore $Cone_\mathcal{U}(Ham(M))$ has a subgroup isomorphic to $F_2$. The second result in this paper states that this is also true for symplectic surfaces of genera 2,3 (see Theorem~\ref{thm:3} below). We turn now to our main results. \subsection{Results} \label{sec:results} The following results are generalizations of previous theorems appearing in~\cite{PS14},~\cite{10}: specifically, Theorem~\ref{thm:2} is a generalization of Theorem 1.3 in~\cite{PS14}, and Theorem~\ref{thm:3} is a generalization of Theorem 1.1 in~\cite{10}. The original theorems are the same as stated here, except for their assumptions on the surface $\Sigma$: while the original theorems hold for all closed symplectic surfaces $\Sigma$ of genus $\geq 4$, the theorems presented here hold for closed symplectic surfaces of genera 2 and 3. \begin{theorem} \label{thm:2} Let $\Sigma$ be a closed oriented surface of genus 2 or 3, equipped with an area form $\sigma$, and $k \geq 2$ an integer. Then $powers_k(\Sigma, \sigma) = \infty$. \end{theorem} \begin{theorem} \label{thm:3} Let $\Sigma$ be a closed oriented surface of genus 2 or 3, equipped with an area form $\sigma$. Then for any non-principal ultrafilter $\mathcal{U}$ on $2^\mathbb{N}$, there exists a monomorphism $F_2 \hookrightarrow Cone_\mathcal{U}(Ham(\Sigma), d_H)$. \end{theorem} We remark that since $aut(M) \geq powers_k(M)$ (indeed $Aut(M,\omega) \subset Powers_k(M,\omega)$, an autonomous diffeomorphism having its time-$\frac{1}{k}$ map as a $k^{th}$ root), Theorem~\ref{thm:2} implies that $aut(\Sigma, \sigma) = \infty$ for any closed oriented surface $\Sigma$ of genus 2 or 3 with an area form $\sigma$. We remark further that the above results survive stabilization by a closed aspherical symplectic manifold. That is, if $(M,\omega)$ is a symplectic manifold with $\pi_2(M) = 0$ and $\Sigma$ is as above, then the results also hold for the symplectic manifold $(\Sigma \times M, \sigma \oplus \omega)$. This is shown in the same way as in~\cite{PS14},\cite{10}. In the proofs of our results we closely follow~\cite{PS14} and~\cite{10}. Specifically, we use the same construction, the so-called eggbeater maps (see~\cite{PS14} and~\cite{FO}). We will outline the proofs of the original theorems, and give an in-depth explanation of the changed parts in Section~\ref{sec:3}. An alternative proof of the theorems in genus 3 is presented in Section~\ref{sec:incompressibility}. \subsection{Outline of the proofs for genera 2,3} \label{sec:outline} In this subsection we present a short outline of the proofs of the theorems. The full details of the proofs are given in Section~\ref{sec:3}.
\begin{figure}[!ht] \centering \begin{minipage}{.5\textwidth} \begin{minipage}{\textwidth} \scalebox{0.8}{ \begin{tikzpicture} \def\a{70}; \def\b{55}; \def\c{83}; \def\d{66.5}; \draw (-1,0) ++(\a:3) arc (\a:360-\a:3); \draw (-1,0) ++(\b:3) arc (\b:-\b:3); \draw (-1,0) ++(\c:2.5) arc (\c:360-\c:2.5); \draw (-1,0) ++(\d:2.5) arc (\d:-\d:2.5); \draw (1,0) ++(180-\a:3) arc (540-\a:180+\a:3); \draw (1,0) ++(180-\b:3) arc (180-\b:180+\b:3); \draw (1,0) ++(180-\c:2.5) arc (540-\c:180+\c:2.5); \draw (1,0) ++(180-\d:2.5) arc (180-\d:180+\d:2.5); \draw (-1.5,3) node[anchor=south] {\Large $C_V$}; \draw (1.5,3) node[anchor=south] {\Large $C_H$}; \end{tikzpicture}} \caption{The manifold $C$ used in the proof.} \label{fig:C} \end{minipage} \\[1.5\baselineskip] \begin{minipage}{\textwidth} \scalebox{0.9}{ \begin{tikzpicture} \usetikzlibrary{decorations.markings} \begin{scope} [decoration={markings,mark=at position 0.55 with {\arrow[scale=2]{>}}}] \draw (-3,1) -- (3,1) -- (3,-1) decorate {(3,1) -- (3,-1)}; \draw (-3,1) -- (-3,-1) -- (3,-1) decorate {(-3,1) -- (-3,-1)}; \end{scope} \draw (-2,1) -- (2,0) -- (-2,-1); \draw[dashed] (-2,1) -- (-2,-1); \draw[->] (-2,0.6) -- (-0.9,0.6); \draw[->] (-2,-0.6) -- (-0.9,-0.6); \draw[->] (-2,0.2) -- (0.8,0.2); \draw[->] (-2,-0.2) -- (0.8,-0.2); \end{tikzpicture}} \caption{The profile of the map $f: C_* \to C_*$.} \label{fig:profile} \end{minipage} \end{minipage}% \hspace{0.1cm} \begin{minipage}{.48\textwidth} \centering \scalebox{0.9}{ \begin{tikzpicture} \draw (-2.5,-1) -- (-1,-1) -- (-1,-4.5); \draw (2.5,-1) -- (1,-1) -- (1,-4.5); \draw (2.5,1) -- (1,1) -- (1,2.5); \draw (-2.5,1) -- (-1,1) -- (-1,2.5); \draw[dotted] (-3,1) -- (-2.5,1); \draw[dotted] (1,-4.5) -- (1,-4.8); \draw[dotted] (1,2.8) -- (1,2.5); \draw[dotted] (3,1) -- (2.5,1); \draw[dotted] (3,-1) -- (2.5,-1); \draw[dotted] (-1,2.8) -- (-1,2.5); \draw[dotted] (-1,-4.5) -- (-1,-4.8); \draw[dotted] (-3,-1) -- (-2.5,-1); \draw (3,0) node[anchor=south] {$C_H$}; \draw (0,-4.8) node[anchor=west] {$C_V$}; \draw[dashed] (-2,1) -- (-2,-1); \draw (-2,1) -- (-1,0.75) -- (0,-3.5) -- (1,0.25) -- (2,0) -- (1,-0.25) -- (0,-4.5) -- (-1,-0.75) -- (-2,-1); \draw[->] (-2,0.6) -- (-1.6,0.6); \draw[->] (-2,-0.6) -- (-1.6,-0.6); \draw[->] (-2,0.2) -- (-1.1,0.2); \draw[->] (-2,-0.2) -- (-1.1,-0.2); \draw (-2,0) node[shape=circle,draw,outer sep=2pt,anchor=east] {1}; \draw[->] (-0.6,2) -- (-0.6,1.6); \draw[->] (0.6,2) -- (0.6,1.6); \draw[->] (-0.2,2) -- (-0.2,1.1); \draw[->] (0.2,2) -- (0.2,1.1); \draw (0,2) node[shape=circle,draw,outer sep=2pt,anchor=south] {2}; \end{tikzpicture}} \caption{The neighborhood of an intersection of $C_V$ and $C_H$, with the profile of $f_V \circ f_H$ pictured. The map $f_V \circ f_H$ is a shear along $C_H$ (1) and then a shear along $C_V$ (2).} \label{fig:profile2} \end{minipage} \end{figure} Denote by $C = C_V \bigcup C_H$ the union of two identical annuli (see Figure~\ref{fig:C}), and by $C_*$ a single such annulus. Let $f: C_* \to C_*$ be a piecewise-linear shear map along the axis of $C_*$, whose profile consists of two straight lines (see Figure~\ref{fig:profile}). Applying the map $f$ to $C_V,C_H$, one gets two shear maps $f_V,f_H: C \to C$ with support in $C_V,C_H$ respectively. Composing the maps $f_V,f_H$ and their inverses several times in some specific order yields a map $C \to C$ called an eggbeater map. Figure~\ref{fig:profile2} depicts the profile of such an eggbeater map $f_V \circ f_H$ in the neighborhood of one of the intersections of $C_V$ and $C_H$.
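Before turning to the analysis, we include a minimal numerical sketch of this local picture; it is purely illustrative and is not used anywhere in the proofs. It models the eggbeater as a linked twist map on a flat torus: the two annuli are realized as two transverse strips, each carrying a piecewise-linear shear with the tent profile $s \mapsto \max(0, 1-|s|)$. The circumference, the shear strength and the sample points below are arbitrary illustrative choices, and this toy model crosses in one square rather than two. \begin{verbatim}
# A toy model of eggbeater dynamics: two transverse shears ("linked
# twists") on the torus (R/LZ)^2.  Illustrative only; all parameters
# are arbitrary, and the strips cross in one square instead of two.

L = 5.0  # circumference of the annuli (the text requires L > 4)

def tent(s):
    """Piecewise-linear shear profile supported on [-1, 1] (mod L)."""
    s = (s + L / 2.0) % L - L / 2.0  # representative in [-L/2, L/2)
    return max(0.0, 1.0 - abs(s))

def shear_vertical(x, y, k):
    """Shear supported on the vertical strip |x| <= 1 (mod L)."""
    return x, (y + k * tent(x)) % L

def shear_horizontal(x, y, k):
    """Shear supported on the horizontal strip |y| <= 1 (mod L)."""
    return (x + k * tent(y)) % L, y

def eggbeater_step(x, y, k):
    """One application of f_H followed by f_V, i.e. f_V o f_H as in
    Figure 3."""
    return shear_vertical(*shear_horizontal(x, y, k), k)

def torus_dist(p, q):
    """Distance on the flat torus (R/LZ)^2."""
    dx = min((p[0] - q[0]) % L, (q[0] - p[0]) % L)
    dy = min((p[1] - q[1]) % L, (q[1] - p[1]) % L)
    return (dx * dx + dy * dy) ** 0.5

if __name__ == "__main__":
    k = 2.0
    p, q = (0.3, 0.1), (0.3, 0.1 + 1e-7)  # two nearby points
    for n in range(1, 21):
        p = eggbeater_step(*p, k)
        q = eggbeater_step(*q, k)
        # nearby orbits typically separate quickly:
        print(n, torus_dist(p, q))
\end{verbatim}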
The proofs of both results, Theorem~\ref{thm:2} and Theorem~\ref{thm:3}, are based on counting the fixed points of the eggbeater map in $C$ whose orbits belong to some suitably selected free homotopy classes $\alpha_k \in \pi_0(\mathscr{L}C)$ (where $\mathscr{L}X$ is the free loop space of a space $X$), and studying the Floer homology of these orbits. Using this tool, it can be shown that if there are not too many such fixed points and some condition on the actions of their orbits holds, then the theorems themselves hold. This is all done in~\cite{PS14} and~\cite{10}. However, in order to get results on a closed oriented surface $\Sigma_g$ of genus $g$, this construction must be embedded in $\Sigma_g$, using an embedding denoted $i: C \hookrightarrow \Sigma_g$, and a similar analysis must be carried out there. In this step, one might encounter a new problem: the mapping $\tau_i: \pi_0(\mathscr{L}C) \to \pi_0(\mathscr{L}\Sigma_g)$ induced by $i$ might fail to be injective. In this case, the previous analysis done on $C$ no longer suffices, and one must re-count fixed points in $\Sigma_g$, since the number of fixed points of the eggbeater map on $\Sigma_g$ in a class $\alpha \in \pi_0(\mathscr{L}\Sigma_g)$ is the number of fixed points of the eggbeater map on $C$ in all the classes $\tau_i^{-1}(\alpha)$, which may be too many for these methods to work. In genus $g \geq 4$ this problem did not arise, since for $g \geq 4$ one can find an embedding $i$ such that $\tau_i$ is injective. In fact, one can also find an embedding $i_3$ into $\Sigma_3$ with $\tau_{i_3}$ injective; this approach is shown in Section~\ref{sec:incompressibility}. It is likely that there also exists an embedding $i_2$ into $\Sigma_2$ with an injective $\tau_{i_2}$; however, this is not proven in this paper. \\ The results for genera 2 and 3 are proven in Section~\ref{sec:3} by placing bounds on the non-injectivity of $\tau_i$, using a carefully chosen $i$ and an algebraic lemma. More precisely, note that injectivity of $\tau_i$ can be translated to a claim on images of conjugacy classes of $\pi_1(C)$ under the homomorphism $i_*: \pi_1(C) \to \pi_1(\Sigma_2)$ induced by $i$. One may choose $i$ such that the induced homomorphism $i_*: F_3 = \langle a,b,c \rangle \to \langle g_1,g_2,g_3,g_4 | [g_1,g_2][g_3,g_4] \rangle$ is the following: \begin{myequation} a \mapsto g_1 g_3 ,\\ b \mapsto g_2 g_1^{-1} g_2^{-1} g_3 ,\\ c \mapsto g_3 . \end{myequation} The main novel ingredient of the results of this paper is the following lemma: \begin{lemma} \label{lemma:2} Let $p \in \mathbb{Z}_{\geq 1}$. For all $1 \leq j \leq p$, let $k_j,l_j \in \mathbb{Z}, u_j,v_j \in \{0,1\}, 0 \neq m_j,n_j \in \mathbb{Z}$. Consider the homomorphism $i_*: F_3 \to \pi_1(\Sigma_2)$ given above. Let $\delta = \Pi_j a^{k_j}c^{-u_j}b^{l_j}c^{v_j} \in F_3$ and $\beta = \Pi_j a^{m_j}b^{n_j} \in F_3$. If $i_* \delta, i_* \beta$ are conjugate (in $\pi_1(\Sigma_2)$), then so are $\delta, \beta$ (in $F_3$). \end{lemma} This lemma gives bounds on the non-injectivity of $\tau_i$: different conjugacy classes having a specific form cannot have the same image under $\tau_i$. One may show that all orbits of the eggbeater map have free homotopy classes of this specific form (see Claim~\ref{claim:3}). This allows one to count fixed points of the eggbeater map in $\Sigma_g$ whose orbits are in specific free homotopy classes $\alpha_k$ by counting the corresponding fixed points in $C$ whose orbits are in $\tau_i^{-1}(\alpha_k)$.
This calculation was carried out in~\cite{PS14} and~\cite{10}, and shows that there are not too many such fixed points, and that the condition on their actions mentioned above holds. Therefore, using the Floer homology tool, one can prove Theorem~\ref{thm:2} and Theorem~\ref{thm:3}. \\ A different approach is shown in Section~\ref{sec:incompressibility}: instead of bounding the non-injectivity of $\tau_i$, a different embedding $i_3$ into a surface of genus 3 is defined. The embedding $i_3$ has a restriction $i_3 \restriction_C$ that can be shown to induce an injective $\tau_{i_3 \restriction_C}$ by an argument using the intersection number of free homotopy classes (see Claim~\ref{cl:incompressible}). With this injectivity in hand, the above discussion yields the desired result on the number of fixed points of the eggbeater map. \\ Section~\ref{sec:2} contains the proof of Lemma~\ref{lemma:2} and a related result. The proofs of the theorems for genera 2 and 3, using the lemma, can be found in Section~\ref{sec:3}. Proofs of the theorems for genus 3, using injectivity of the induced map $\tau_{i_3}$ of the suitably-defined embedding $i_3$, are found in Section~\ref{sec:incompressibility}. \subsection{Acknowledgements} This paper is part of my M.Sc. thesis, carried out under the supervision of Prof. Leonid Polterovich and Prof. Yaron Ostrover, whom I would like to sincerely thank for their contributions and knowledge. I would like to thank Matthias Meiwes and Leonid Potyagailo for many fruitful discussions and ideas. I would also like to thank Ofir Karin, Leonid Vishnevsky and Asaf Cohen for helpful comments and remarks. This work was partially supported by ISF grants 1274/14 and 667/18. \section{Proof of the lemma}\label{sec:2} Recall the notation of Chapter IV of~\cite{LS}. Let $G,H$ be finitely presentable groups, $A < G, B < H$ be isomorphic subgroups, and $\psi: A \xrightarrow{\sim} B$ an isomorphism. The \textit{free product of $G,H$ with respect to $\psi$} (or free product of $G,H$ with amalgamation), denoted $\langle G * H, A = B, \psi \rangle$, is defined as follows. If $G = \langle S_1 | R_1 \rangle, H = \langle S_2 | R_2 \rangle$ are finite presentations with $S_1 \cap S_2 = \emptyset$, then the free product with amalgamation is defined to be \[ \langle G * H, A = B, \psi \rangle = \langle S_1 \cup S_2 | R_1, R_2, \{a \psi(a)^{-1} | a \in A\} \rangle . \] The groups $G$ and $H$ are called the \textit{factors} of $\langle G * H, A = B, \psi \rangle$. Free products with amalgamation occur naturally in topology: let $X$ be a topological space with an open cover $\{Y,Z\}$ such that $Y \cap Z$ is connected, and denote $G = \pi_1(Y), H = \pi_1(Z)$; $A = \pi_1(Y \cap Z)$, considered as a subgroup of $G$; and $B = \pi_1(Y \cap Z)$, considered as a subgroup of $H$. Denote also by $\psi: A \xrightarrow{\sim} B$ the natural isomorphism. Then by the van Kampen theorem: \[ \pi_1(X) = \langle G * H, A = B, \psi \rangle . \] Free products with amalgamation have a certain uniqueness property of conjugacy classes, which will be stated soon. First, we must recall the definition of cyclically reduced elements. \begin{definition} A sequence $c_1,...,c_n$ (with $n \geq 0$) of elements of $\langle G*H, A=B, \psi \rangle$ is called reduced if: \begin{enumerate} \item Each $c_i$ is in one of the factors $G$ or $H$. \item Successive $c_i,c_{i+1}$ come from different factors. \item If $n > 1$, no $c_i$ is in $A$ or $B$. \item If $n = 1$, $c_1 \neq 1$.
\end{enumerate} A sequence $c_1,...,c_n$ of elements of $\langle G * H, A = B, \psi \rangle$ is called cyclically reduced if all its cyclic permutations (i.e. $c_2,...,c_n,c_1$, etc.) are reduced. An element $u \in \langle G * H, A = B, \psi \rangle$ is called cyclically reduced if there exists a cyclically reduced sequence $c_1,...,c_n$ such that $u = c_1 \cdot ... \cdot c_n$ (in this case the sequence $(c_i)_{i=1}^n$ is said to represent $u$). \end{definition} We remark that every element of $\langle G * H, A = B, \psi \rangle$ is conjugate to a (not necessarily unique) cyclically reduced element. Recall the following theorem (Theorem 2.8 in Chapter IV of~\cite{LS}): \begin{theorem*}[Conjugacy Theorem for Free Products with Amalgamation] Let $P = \langle G * H, A = B, \psi \rangle$ be a free product with amalgamation. Let $u \in P$ be a cyclically reduced element, and let $c_1, ..., c_n$ be any cyclically reduced sequence with $u = c_1 \cdot ... \cdot c_n$ where $n \geq 2$. Then every cyclically reduced conjugate of $u$ can be obtained by cyclically permuting $c_1 \cdots c_n$ and then conjugating by an element of the amalgamated part~$A$: if $v \in P$ is cyclically reduced and conjugate to $u$, then $\exists 1 \leq k \leq n$ and $\exists a \in A$ such that $v = a \cdot c_k \cdot ... \cdot c_n \cdot c_1 \cdot ... \cdot c_{k-1} \cdot a^{-1}$. \end{theorem*} Denote the conjugacy relation in a group by $\sim$. For any group $G$, denote the conjugacy class of an element $x \in G$ by $[x]_G$. The Conjugacy Theorem implies one can define a length on elements of $P = \langle G*H , A=B, \psi \rangle$ by \begin{myequation} len: P \to \mathbb{Z}_{\geq 0} ,\\ u \mapsto n , \end{myequation} where $n$ is the length of a cyclically reduced sequence $c_1,...,c_n$ such that $u \sim \Pi_j c_j$. This is well defined by the Conjugacy Theorem, and is obviously conjugation-invariant. Note that $len(u) = 0 \iff u = 1$, and $len(u) = 1$ if and only if $u$ is conjugate to a non-trivial element of $G \cup H \subset P$. \\ Let us denote the following groups: \begin{myequation} H_1 = \langle g_1, g_2 \rangle \simeq F_2 , \ H_2 = \langle g_3, g_4 \rangle \simeq F_2 ,\\ A = \langle [g_1,g_2] \rangle < H_1 , \ B = \langle [g_3,g_4] \rangle < H_2 , \end{myequation} and denote by $\phi: A \to B$ the isomorphism $[g_1,g_2] \mapsto [g_3,g_4]^{-1}$. Let $F_3 = \langle a,b,c \rangle$ be the free group on three generators, and let $\pi_1(\Sigma_2) = \langle g_1,g_2,g_3,g_4 | [g_1,g_2][g_3,g_4] \rangle = \langle H_1 * H_2, A = B, \phi \rangle$ be the fundamental group of a closed oriented surface of genus 2. Consider the homomorphism $\varphi: F_3 \to \pi_1(\Sigma_2)$ defined by \begin{myequation} a \mapsto g_1 ,\\ b \mapsto g_2 g_1 g_2^{-1} ,\\ c \mapsto g_3 . \end{myequation} The next claim restricts the extent to which $\varphi$ can merge conjugacy classes. \begin{claim}\label{cl:1} Let $r,s \in F_3$, and assume that $\varphi(r) \sim \varphi(s)$ in $\pi_1(\Sigma_2)$ and $r \not\sim s$ in $F_3$. Then exactly one of the following holds: \begin{itemize} \item $\exists 0 \neq j \in \mathbb{Z}$ such that $\varphi(r),\varphi(s)$ are conjugate to $g_1^j$ (in $\pi_1(\Sigma_2)$). \item $\exists 0 \neq j \in \mathbb{Z}$ such that $\varphi(r),\varphi(s)$ are conjugate to $g_3^j$ (in $\pi_1(\Sigma_2)$).
\end{itemize} In other words, the only conjugacy classes in $F_3$ merged by the homomorphism $\varphi$ are the classes $[c^j]_{F_3}$, which merge with $[(ab^{-1}c)^j]_{F_3}$, and $[a^j]_{F_3}$, which merge with $[b^j]_{F_3}$ ($0 \neq j \in \mathbb{Z}$). These correspond to the following conjugacy classes in $\pi_1(\Sigma_2)$: $[g_3^j]_{\pi_1(\Sigma_2)} = [g_4 g_3^j g_4^{-1}]_{\pi_1(\Sigma_2)}$ and $[g_1^j]_{\pi_1(\Sigma_2)} = [g_2 g_1^j g_2^{-1}]_{\pi_1(\Sigma_2)}$. (Indeed, $\varphi(ab^{-1}c) = g_1 g_2 g_1^{-1} g_2^{-1} g_3 = [g_1,g_2] g_3 = [g_3,g_4]^{-1} g_3 = g_4 g_3 g_4^{-1}$, so $\varphi((ab^{-1}c)^j) = g_4 g_3^j g_4^{-1} \sim g_3^j = \varphi(c^j)$; similarly $\varphi(b^j) = g_2 g_1^j g_2^{-1} \sim g_1^j = \varphi(a^j)$.) \end{claim} \begin{proof} Note that $\varphi(a)$ and $\varphi(b)$ are both elements of the same factor $H_1$, while $\varphi(c) \in H_2$. Partition $r$ and $s$ into sequences according to the partition $\{a,b,c\} = \{a,b\} \cup \{c\}$, i.e. concatenate consecutive symbols from $\langle a,b \rangle$, then perform the following steps, until no steps can be performed: \begin{itemize} \item Concatenate elements of the sequence of the form $(ab^{-1})^j$ (for some $j \in \mathbb{Z}$) to the previous and next elements in the sequence; do this for all occurrences of $(ab^{-1})^j$. E.g., if the sequence is $w_1, h_1, ab^{-1}, h_2, (ab^{-1})^{-2}, h_3$ (with $w_1 \in \langle a,b \rangle$ and all $h_i \in \langle c \rangle$), the resulting sequence after this step will be $w_1, h_1 ab^{-1} h_2 (ab^{-1})^{-2} h_3$. \item If the first and last elements of the sequence are from the same factor (in $\{a,b\},\{c\}$), concatenate them cyclically: e.g., if the sequence is $ab, c, b$, the resulting sequence after this step will be $bab, c$. \end{itemize} Doing this results in sequences $(r_i)_{i=1}^n,(s_i)_{i=1}^m$ such that $r \sim \Pi_i r_i, s \sim \Pi_i s_i$ (in $F_3$) and the sequences $(\varphi(r_i)), (\varphi(s_i))$ are cyclically reduced in $\pi_1(\Sigma_2)$ (since the only way to generate an element of $A$ or $B$ from images of $a,b,c$ is $\varphi(ab^{-1})^j = [g_1,g_2]^j \in A$; this can be seen from the definition of $\varphi$). For example, if $r$ were the element $abcba^{-1}cac^{-1}b$, we would first partition according to $\{a,b\},\{c\}$ to get $ab, c, ba^{-1}, c, a, c^{-1}, b$, then concatenate powers of $ab^{-1}$ to get $ab, cba^{-1}c, a, c^{-1}, b$, and finally concatenate the first and last elements to get the sequence $(r_i) = bab, cba^{-1}c, a, c^{-1}$. Note that the resulting sequence is not uniquely defined, but any sequence which is the result of these steps will do for our purposes. \\ Now $(\varphi(r_i)), (\varphi(s_i))$ are cyclically reduced sequences of $\pi_1(\Sigma_2)$, and $\Pi_{i=1}^m \varphi(s_i) \sim \varphi(s) \sim \varphi(r) \sim \Pi_{i=1}^n \varphi(r_i)$. Assume $n \geq 2$; we will derive a contradiction. By the Conjugacy Theorem for Free Products with Amalgamation, $\exists \alpha \in A, 1 \leq k \leq n$ such that \begin{equation} \label{eq:1} \Pi_i \varphi(s_i) = \alpha \cdot \varphi(r_k) \varphi(r_{k+1}) ... \varphi(r_n) \varphi(r_1) ... \varphi(r_{k-2}) \varphi(r_{k-1}) \cdot \alpha^{-1} . \end{equation} Since $A \subset \Ima \varphi$, $\alpha = \varphi(\sigma)$ for some $\sigma \in F_3$. Therefore, pulling back Equation~\ref{eq:1} through $\varphi$, we get the following equation in $F_3$: \begin{myequation} \Pi_i s_i = \sigma \cdot r_k r_{k+1} ... r_{k-2} r_{k-1} \sigma^{-1} . \end{myequation} Note that this can be done since $\varphi$ is a monomorphism. Now, it can be seen that $s \sim \Pi_i s_i = \sigma r_k r_{k+1} ... r_{k-2} r_{k-1} \sigma^{-1} \sim r$ in $F_3$, contradicting our assumption. Therefore, $n \leq 1$. By symmetry of the above argument with respect to $r$ and $s$, $m \leq 1$ as well.
If $m = 0$ or $n = 0$, we get that either $r=1$ or $s=1$; but then $\varphi(r) \sim \varphi(s)$ forces $\varphi(r) = \varphi(s) = 1$, hence $r = s = 1$ (as $\varphi$ is a monomorphism), contradicting $r \not\sim s$. So $m = n = 1$. \\ Since $n = m = 1$, our sequences from above are $(r_i) = (r), (s_i) = (s)$, so by construction $\varphi(r), \varphi(s)$ are in one of the factors $H_1,H_2$. We shall consider the case $\varphi(r),\varphi(s) \in H_1$ and conclude that $\exists j \in \mathbb{Z} : \varphi(r),\varphi(s) \sim g_1^j$ in $\pi_1(\Sigma_2)$. The other case, in which $\varphi(r),\varphi(s) \in H_2$ and one concludes that $\exists j \in \mathbb{Z} : \varphi(r),\varphi(s) \sim g_3^j$ in $\pi_1(\Sigma_2)$, is analogous. Since $\varphi(r),\varphi(s) \in H_1$, note that $r,s \in \langle a,b \rangle$. Recall that it was assumed that $r \not\sim s$ and $\varphi(r) \sim \varphi(s)$; we want to show that $\exists j \in \mathbb{Z}: \varphi(r), \varphi(s) \sim g_1^j$. Denote $G = \langle a,b \rangle$, and $\psi = \varphi \restriction_G : G \to H_1$. Note that the conjugacy classes of $G$ are: \begin{enumerate} \item $[1]_G$; \item $[\Pi_i a^{k_i}b^{l_i}]_G$ for some $0 \neq k_i,l_i \in \mathbb{Z}$; \item $[a^k]_G$ for some $0 \neq k \in \mathbb{Z}$; \item $[b^l]_G$ for some $0 \neq l \in \mathbb{Z}$, \end{enumerate} where some of the conjugacy classes listed in case 2 above are not distinct (e.g. $[aba^2b^2]_G = [a^2b^2ab]_G$); this will not matter to our argument. Define $\tilde{\psi}$, a function from the set of conjugacy classes of $G$ to the set of conjugacy classes of $H_1$, by \begin{myequation} \tilde{\psi}([x]_G) = [\psi(x)]_{H_1} . \end{myequation} Our assumptions can be rewritten as $[r]_G \neq [s]_G, \tilde{\psi}([r]_G) = \tilde{\psi}([s]_G)$, and we want to show $\tilde{\psi}([r]_G) = [g_1^j]_{H_1}$ for some $0 \neq j \in \mathbb{Z}$. Calculate $\tilde{\psi}$ for all the conjugacy classes of $G$ as listed above: \begin{enumerate} \item $\tilde{\psi}([1]_G) = [1]_{H_1}$; \item $\tilde{\psi}([\Pi_i a^{k_i}b^{l_i}]_G) = [\Pi_i g_1^{k_i} g_2 g_1^{l_i} g_2^{-1}]_{H_1}$ ($\forall i: 0 \neq k_i,l_i \in \mathbb{Z}$); \item $\tilde{\psi}([a^k]_G) = [g_1^k]_{H_1}$ ($0 \neq k \in \mathbb{Z}$); \item $\tilde{\psi}([b^l]_G) = [g_2 g_1^l g_2^{-1}]_{H_1} = [g_1^l]_{H_1}$ ($0 \neq l \in \mathbb{Z}$). \end{enumerate} Therefore: \begin{enumerate} \item If $\tilde{\psi}([r]_G) = \tilde{\psi}([s]_G) = [1]_{H_1}$, then $\psi(r) = \psi(s) = 1$ and then $r = s = 1$ (since $\psi$ is a monomorphism), in contradiction. \item If $\tilde{\psi}([r]_G) = \tilde{\psi}([s]_G) = [\Pi_i g_1^{k_i} g_2 g_1^{l_i} g_2^{-1}]_{H_1}$, then $[r]_G = [s]_G = [\Pi_i a^{k_i}b^{l_i}]_G$, in contradiction. \end{enumerate} The only cases which remain are $\tilde{\psi}([r]_G) = \tilde{\psi}([s]_G) = [g_1^j]_{H_1}$ for some $0 \neq j \in \mathbb{Z}$, as desired. \end{proof} \begin{remark} In exactly the same way, one can prove the following: Let $n \geq 2$, and define \begin{myequation} \varphi: F_{2n-1} = \langle a_1,...,a_{2n-1} \rangle \to \pi_1(\Sigma_n) = \langle g_1,...,g_{2n} | [g_1,g_2][g_3,g_4]...[g_{2n-1},g_{2n}] \rangle ,\\ a_{2i-1} \mapsto g_{2i-1} \ (\forall 1 \leq i \leq n) ,\\ a_{2i} \mapsto g_{2i} g_{2i-1} g_{2i}^{-1} \ (\forall 1 \leq i < n) . \end{myequation} Let $r,s \in F_{2n-1}$, and assume that $\varphi(r) \sim \varphi(s)$ in $\pi_1(\Sigma_n)$ and $r \not\sim s$ in $F_{2n-1}$. Then $\exists 1 \leq i \leq n, 0 \neq j \in \mathbb{Z}$ such that $\varphi(r),\varphi(s) \sim g_{2i-1}^j$ in $\pi_1(\Sigma_n)$. \end{remark} The lemma is a corollary of Claim~\ref{cl:1}.
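Before proving the lemma, we digress to note that both Claim~\ref{cl:1} and Lemma~\ref{lemma:2} reduce conjugacy questions in $\pi_1(\Sigma_2)$ to conjugacy questions in free groups, where conjugacy is completely algorithmic: two elements of a free group are conjugate if and only if their cyclic reductions agree up to a cyclic rotation. The following sketch makes this concrete; it is included for illustration only, with words encoded as tuples of letters whose case indicates inversion. \begin{verbatim}
# Conjugacy testing in a free group via cyclic reduction.  Illustrative
# only; a word is a tuple of generators, upper case denoting an inverse
# (e.g. ("a", "B") encodes the element a b^{-1}).

def inv(x):
    """Inverse of a single generator: swap the case of its letter."""
    return x.swapcase()

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for x in w:
        if out and out[-1] == inv(x):
            out.pop()
        else:
            out.append(x)
    return tuple(out)

def cyclic_reduce(w):
    """Cyclically reduce: also cancel inverse pairs across the ends."""
    w = list(reduce_word(w))
    while len(w) >= 2 and w[0] == inv(w[-1]):
        w = w[1:-1]
    return tuple(w)

def are_conjugate(u, v):
    """In a free group, u ~ v iff the cyclic reductions of u and v
    are cyclic rotations of one another."""
    u, v = cyclic_reduce(u), cyclic_reduce(v)
    if len(u) != len(v):
        return False
    if not u:
        return True
    doubled = v + v
    return any(doubled[i:i + len(u)] == u for i in range(len(v)))

# ab ~ ba (a cyclic rotation), but ab is not conjugate to a b^{-1}:
assert are_conjugate(("a", "b"), ("b", "a"))
assert not are_conjugate(("a", "b"), ("a", "B"))
\end{verbatim} We now return to the proof of Lemma~\ref{lemma:2}.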
\begin{proof}[Proof of Lemma~\ref{lemma:2}] Define \begin{myequation} A : F_3 \to F_3 ,\\ a \mapsto ac ,\\ b \mapsto b^{-1}c ,\\ c \mapsto c . \end{myequation} This is an automorphism of $F_3$ (its inverse is given by $a \mapsto ac^{-1}, b \mapsto cb^{-1}, c \mapsto c$). Note that $i_* = \varphi \circ A$ (with $\varphi$ as defined above); indeed, $\varphi(A(a)) = \varphi(ac) = g_1 g_3$, $\varphi(A(b)) = \varphi(b^{-1}c) = g_2 g_1^{-1} g_2^{-1} g_3$ and $\varphi(A(c)) = g_3$. We wish to apply Claim~\ref{cl:1} with $r = A(\beta), s = A(\delta)$: \begin{myequation} A(\beta) = (ac)^{m_1} (b^{-1}c)^{n_1} ... (ac)^{m_p} (b^{-1}c)^{n_p} ,\\ A(\delta) = (ac)^{k_1} c^{-u_1} (b^{-1}c)^{l_1} c^{v_1} ... (ac)^{k_p} c^{-u_p} (b^{-1}c)^{l_p} c^{v_p} . \end{myequation} Assume, by contradiction, that $\delta \not\sim \beta$ in $F_3$. Then $A(\beta) \not\sim A(\delta)$ in $F_3$, since $A$ is an automorphism of $F_3$, and then by Claim~\ref{cl:1}, $\exists 0 \neq j \in \mathbb{Z}$ such that one of the following holds: \begin{itemize} \item $[A(\beta)]_{F_3} = [c^j]_{F_3}$; \item $[A(\beta)]_{F_3} = [(ab^{-1}c)^j]_{F_3}$; \item $[A(\beta)]_{F_3} = [b^j]_{F_3}$; \item $[A(\beta)]_{F_3} = [a^j]_{F_3}$. \end{itemize} To see that, for example, $[A(\beta)]_{F_3} = [c^j]_{F_3}$ leads to a contradiction, consider the projection \begin{myequation} p_{a,b} : \langle a,b,c \rangle \to \langle a,b \rangle ,\\ a \mapsto a, b \mapsto b, c \mapsto 1 . \end{myequation} If $[A(\beta)]_{F_3} = [c^j]_{F_3}$, then $[(a)^{m_1} (b^{-1})^{n_1} ... (a)^{m_p} (b^{-1})^{n_p}]_{\langle a,b \rangle} = [p_{a,b}(A(\beta))]_{\langle a,b \rangle} = [p_{a,b}(c^j)]_{\langle a,b \rangle} = [1]_{\langle a,b \rangle}$. This implies that at least one of the $m_i,n_i$ is $0$, since otherwise no cancellation can occur in $(a)^{m_1} (b^{-1})^{n_1} ... (a)^{m_p} (b^{-1})^{n_p}$. This contradicts the assumption $\forall i: 0 \neq m_i,n_i$. The other three cases are dealt with similarly: the case $[A(\beta)]_{F_3} = [(ab^{-1}c)^j]_{F_3}$ using the projection \begin{myequation} p_{ab^{-1}c,c} : \langle a,b,c \rangle \to \langle a,b \rangle ,\\ ab^{-1}c \mapsto a, c \mapsto b, a \mapsto 1 . \end{myequation} (note this is well defined since $\{ab^{-1}c, a, c\}$ is a basis of $F_3$), and the cases $[A(\beta)]_{F_3} = [b^j]_{F_3}$, $[A(\beta)]_{F_3} = [a^j]_{F_3}$ using $p_{a,b}$ defined above. Since all cases lead to a contradiction, we conclude that $\delta \sim \beta$ in $F_3$, as desired. \end{proof} Recall $\pi_1(\Sigma_2) = \langle H_1*H_2 , A=B , \phi \rangle$ and recall the definition of $len: \pi_1(\Sigma_2) \to \mathbb{Z}_{\geq 0}$. We will also make use of the following claim: \begin{claim} \label{cl:length} Let $r \in \mathbb{N}$ and $k_i \in \mathbb{Z}$ for $i \in \{1,...,2r\}$, with $|k_i| \geq 2$ for all $i$, and consider $\pi_1(\Sigma_2) = \langle g_1,g_2,g_3,g_4 | [g_1,g_2][g_3,g_4] \rangle$. Then \[ len \left( \Pi_{i=1}^r (g_1 g_3)^{k_{2i-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2i}} \right) > 1 . \] \end{claim} \begin{proof} Denote $\epsilon_i = sign(k_i) = \frac{k_i}{|k_i|}$. Consider \begin{myequation} p_{13}: \pi_1(\Sigma_2) \to F_2 = \langle h_1, h_3 \rangle ,\\ g_1 \mapsto h_1 ,\\ g_3 \mapsto h_3 ,\\ g_2, g_4 \mapsto 1 . \end{myequation} Denote $w = \Pi_{i=1}^r (g_1 g_3)^{k_{2i-1}} (g_2 g_1^{-1} g_2^{-1} g_3)^{k_{2i}}$. Assume by contradiction $len(w) \leq 1$. Note that $p_{13}$ preserves factors, i.e. $p_{13}(H_1) = \langle h_1 \rangle, p_{13}(H_2) = \langle h_3 \rangle$. If $len(w) = 0$, then $w = 1$, so $p_{13}(w) = 1$, and then $len(p_{13}(w)) = 0$. Else, $len(w) = 1$, so $w \sim c_1$ with $c_1 \in H_1 < \pi_1(\Sigma_2)$ or $c_1 \in H_2 < \pi_1(\Sigma_2)$. WLOG assume $c_1 \in H_1$.
Then $p_{13}(w) \sim p_{13}(c_1) \in p_{13}(H_1) = \langle h_1 \rangle$, and then $len(p_{13}(w)) \leq 1$. One reaches the same conclusion in both cases: $len(p_{13}(w)) \leq 1$ (note that this is $len$ in $F_2$, which is a free product of $\mathbb{Z} = \langle h_1 \rangle$ with $\mathbb{Z} = \langle h_3 \rangle$ and so is trivially a free product with amalgamation). As a remark, this procedure can be applied for any factor-preserving homomorphism $\psi$ to get $len(\psi(g)) \leq len(g)$. Consider $p_{13}(w)$: \[ p_{13}(w) = \Pi_{i=1}^r (h_1 h_3)^{k_{2i-1}} (h_1^{-1} h_3)^{k_{2i}} . \] This element is made of blocks: $(h_1 h_3)^{k_j}$ or $(h_1^{-1} h_3)^{k_j}$. Each of these blocks is reduced, so the only cancellations in the form of $p_{13}(w)$ given above can happen between two blocks. These are the options for cancellations between two blocks: \begin{itemize} \item If the first block is $(h_1 h_3)^{k_i}$ and the second is $(h_1^{-1} h_3)^{k_{i+1}}$, one of the following holds: \begin{myequation} \epsilon_i = \epsilon_{i+1} = 1: (h_1 h_3)^{|k_i|} (h_1^{-1} h_3)^{|k_{i+1}|} \rightsquigarrow \text{no cancellations} ; \\ \epsilon_i = -\epsilon_{i+1} = 1: (h_1 h_3)^{|k_i|} (h_1^{-1} h_3)^{-|k_{i+1}|} \rightsquigarrow (h_1 h_3)^{|k_i|-1} h_1 h_1 (h_1^{-1} h_3)^{-|k_{i+1}|+1} \ (*) ; \\ - \epsilon_i = \epsilon_{i+1} = 1: (h_1 h_3)^{-|k_i|} (h_1^{-1} h_3)^{|k_{i+1}|} \rightsquigarrow \text{no cancellations} ; \\ \epsilon_i = \epsilon_{i+1} = -1: (h_1 h_3)^{-|k_i|} (h_1^{-1} h_3)^{-|k_{i+1}|} \rightsquigarrow \text{no cancellations}. \end{myequation} Every cancellation of type $(*)$ results in a reduced block starting and ending with $h_1$. Before it comes either $h_3$ or $h_1$ (since $(h_1^{-1} h_3)^{k_{i-1}}$ ends with one of these symbols), and after it comes either $h_1$ or $h_3^{-1}$ (for similar reasons). Thus after a cancellation of this type, neither of the two ends of the resulting reduced block can be cancelled any further. \item If the first block is $(h_1^{-1} h_3)^{k_i}$ and the second is $(h_1 h_3)^{k_{i+1}}$, one of the following holds: \begin{myequation} \epsilon_i = \epsilon_{i+1} = 1: (h_1^{-1} h_3)^{|k_i|} (h_1 h_3)^{|k_{i+1}|} \rightsquigarrow \text{no cancellations} ; \\ \epsilon_i = -\epsilon_{i+1} = 1: (h_1^{-1} h_3)^{|k_i|} (h_1 h_3)^{-|k_{i+1}|} \rightsquigarrow (h_1^{-1} h_3)^{|k_i|-1} h_1^{-1} h_1^{-1} (h_1 h_3)^{-|k_{i+1}|+1} \ (**) ; \\ - \epsilon_i = \epsilon_{i+1} = 1: (h_1^{-1} h_3)^{-|k_i|} (h_1 h_3)^{|k_{i+1}|} \rightsquigarrow \text{no cancellations} ; \\ \epsilon_i = \epsilon_{i+1} = -1: (h_1^{-1} h_3)^{-|k_i|} (h_1 h_3)^{-|k_{i+1}|} \rightsquigarrow \text{no cancellations}. \end{myequation} Every cancellation of type $(**)$ results in a reduced block starting and ending with $h_1^{-1}$. Before it comes either $h_3$ or $h_1^{-1}$ and after it comes either $h_1^{-1}$ or $h_3^{-1}$ (for reasons similar to above). Thus after a cancellation of this type, neither of the two ends of the resulting reduced block can be cancelled any further. \end{itemize} Therefore, after at most $2r-1$ cancellations one reaches a reduced word. If it is not cyclically reduced, conjugate by the first block and perform one more reduction to get a cyclically reduced word $v$, conjugate to $p_{13}(w)$. Since every reduced block $(*)$ or $(**)$ above has length at least 2, and these are inserted without cancellations into our word $v$, we have $len(p_{13}(w)) = len(v) \geq 2 > 1$. This is a contradiction, so the initial assumption $len(w) \leq 1$ is false.
\end{proof} \section{Proof of the theorems}\label{sec:3} \subsection{Additional definitions} In the proofs of the results, we shall use the following definitions. \begin{definition} A continuous map between two manifolds $f: M \to N$ induces a map $\tau_f: \pi_0(\mathscr{L}M) \to \pi_0(\mathscr{L}N)$ by $\tau_f([\gamma]) = [f \circ \gamma]$, for $\gamma: S^1 \to M$. \\ For any manifold $M$ and point $x \in M$, define the map \begin{myequation} \eta_{M,x} : \pi_1(M,x) \to \pi_0(\mathscr{L}M) , \\ [\tilde{\gamma}]_{\pi_1(M,x)} \mapsto [\tilde{\gamma}]_{\pi_0(\mathscr{L}M)} , \end{myequation} where $\tilde{\gamma}: S^1 \to M$ is a loop. \end{definition} Note that the following diagram commutes for any continuous map $f: M \to N$: \begin{center}\begin{tikzcd} \pi_1(M,x) \arrow[r, "f_*"] \arrow[d, "\eta_{M,x}"] & \pi_1(N,f(x)) \arrow[d, "\eta_{N,f(x)}"] \\ \pi_0(\mathscr{L}M) \arrow[r, "\tau_f"] & \pi_0(\mathscr{L}N) \end{tikzcd}\end{center} Note additionally that for $\alpha, \beta \in \pi_1(M,x)$, $\eta_{M,x}(\alpha) = \eta_{M,x}(\beta)$ if and only if $\alpha$ and $\beta$ are conjugate in $\pi_1(M,x)$. \begin{definition} A word $w \in F_2 = \langle V,H \rangle$ is called \textit{balanced} if it is of the form $w = V^{N_1} H^{M_1} ... V^{N_r} H^{M_r}$ for some $r \in \mathbb{N}$, $N_j,M_j \in \mathbb{Z} \setminus \{0\}$. \end{definition} Note that any balanced word is cyclically reduced. \begin{definition} Let $\varphi: M \to M$ be a diffeomorphism. A fixed point $x$ of $\varphi$ is called non-degenerate if $d\varphi_x$ does not have 1 as an eigenvalue. \end{definition} The eggbeater construction uses dynamics on a space $C$, which is then embedded into a surface of genus 2 or 3. Thus we shall use the following definitions of pushing forward the dynamics along the embedding. Let $X,Y$ be compact topological spaces, and $i: X \hookrightarrow Y$ a continuous embedding. Let $f: X \to \mathbb{R}$ be a continuous map on $X$, and assume the following condition holds: \begin{equation} \label{eq:condition} \text{For any path-component $U$ of $Y \setminus i(X)$, $f \restriction_{i^{-1}(\partial U)}$ is constant.} \end{equation} For $y \in Y$, let $Y_y$ be the path-component of $Y$ that contains $y$, and denote $D_i = \bigcup_{y \in \Ima(i)} Y_y \subseteq Y$. For all $y \in D_i$, denote by $\gamma_{i,y}: [0,1] \to Y_y$ a continuous path with $\gamma_{i,y}(0) = y$, $\gamma_{i,y}(1) \in \Ima(i)$, and such that if $\gamma_{i,y}(t) \in \Ima(i)$ for some $t \in [0,1]$, then $\gamma_{i,y} \restriction_{[t,1]}$ is constant. Note that if $y \in \Ima(i)$, then $\gamma_{i,y} \equiv y$. Denote the following, not necessarily continuous, map: \begin{myequation} b_i: D_i \to \Ima(i) ,\\ y \mapsto \gamma_{i,y}(1) . \end{myequation} Define the following map, the \textit{pushforward} of $f$ through $i$: \begin{myequation} i_* f: D_i \to \mathbb{R} ,\\ y \mapsto f \circ i^{-1} \circ b_i(y) . \end{myequation} By Condition~\ref{eq:condition}, this is a continuous map $D_i \to \mathbb{R}$ which is locally constant on $D_i \setminus \Ima(i)$, and does not depend on the choice of the paths $\gamma_{i,y}$. Note also that if $f$ is smooth and constant in a neighborhood of $i^{-1}(\partial\, i(X))$ then $i_* f$ is also smooth. Assume additionally that $X,Y$ are symplectic manifolds, $i$ is a symplectic embedding, and $f: S^1 \times X \to \mathbb{R}$ is a Hamiltonian function (with Condition~\ref{eq:condition} imposed on $f(t,\cdot)$ for every $t$). Then $f$ induces the time-one-map of its flow; let it be denoted $F: X \to X$.
The \textit{pushforward} of $F$ through $i$ is denoted $i_* F: Y \to Y$, and is the time-one-map of the flow induced by $i_* f$. \subsection{Review of the original proofs (genus $\geq 4$)} \label{sec:3.1} \subsubsection{Outline of the original proofs and dependencies of claims} First we shall outline the proofs of the original theorems, 1.3 from~\cite{PS14} and 1.1 from~\cite{10}. For concreteness, their statements are given here. \begin{theorem}[Theorem 1.3 from~\cite{PS14}] \label{thm:orig2} Let $\Sigma_4$ be a closed oriented surface of genus $\geq 4$, equipped with an area form $\sigma_4$, and $k \geq 2$ an integer. Then $powers_k(\Sigma_4,\sigma_4) = \infty$. \end{theorem} \begin{theorem}[Theorem 1.1 from~\cite{10}] \label{thm:orig3} Let $\Sigma_4$ be a closed oriented surface of genus $\geq 4$, equipped with an area form $\sigma_4$. Then for any non-principal ultrafilter $\mathcal{U}$ on $2^\mathbb{N}$, there exists a monomorphism $F_2 \hookrightarrow Cone_\mathcal{U}(Ham(\Sigma_4),d_H)$. \end{theorem} Both proofs involve several intermediate results. For clarity, Figure~\ref{fig:orig_depends} depicts the dependencies between these results. The proofs are based on a specific construction of a manifold $C$, an embedding $i: C \hookrightarrow \Sigma_4$, and some dynamics on $C$, which induce dynamics on $\Sigma_4$. These constructions are all sketched in Subsection~\ref{sec:outline}, and given in detail further in this subsection. Propositions~\ref{prop:main},~\ref{prop:main2} (which are Propositions 5.11 of~\cite{10} and 5.1 of~\cite{PS14} respectively) are claims on the dynamics on $\Sigma_4$, and Claims~\ref{cl:2},~\ref{cl:2.2} are the respective claims on the dynamics on $C$. The deductions marked $**$ use ``hard'' Floer homology and persistence modules, and do not depend on the genus of $\Sigma_4$; therefore they carry over as is to the case where the surface is of genus 2 or 3. The deductions marked $*$ are a consequence of the fact that $i$ is constructed to be \textit{incompressible}, i.e. it induces injections $\pi_1(C) \hookrightarrow \pi_1(\Sigma_4)$ and $\pi_0(\mathscr{L}C) \hookrightarrow \pi_0(\mathscr{L}\Sigma_4)$. This will not hold in the case where $\Sigma$ is of genus 2 or 3 (see Subsection~\ref{sec:3.2} below), and this is exactly where Lemma~\ref{lemma:2} comes into play. Claims~\ref{cl:2},~\ref{cl:2.2} are proved directly by careful analysis of the dynamics on $C$. This subsection details the construction $i: C \hookrightarrow \Sigma_4$, the dynamics on $C$ and $\Sigma_4$, states all the above claims and propositions, and outlines their proofs. For full details, see~\cite{PS14} and~\cite{10}; we describe their work bottom-up, with respect to the directions of Figure~\ref{fig:orig_depends}. \\ \begin{figure} \centering \begin{tikzcd} Claim~\ref{cl:2} \arrow[d,rightsquigarrow,"*"] & Claim~\ref{cl:2.2} \arrow[d,rightsquigarrow,"*"] & \bigg] in \ C \\ Proposition~\ref{prop:main} \arrow[d,rightsquigarrow,"**"] & Proposition~\ref{prop:main2} \arrow[dd,rightsquigarrow,"**"] & \bigg] in \ \Sigma_4 \\ Theorem~\ref{thm:4} \arrow[d,rightsquigarrow] \\ Theorem~\ref{thm:orig3} & Theorem~\ref{thm:orig2} \end{tikzcd} \caption{Dependencies between original intermediate results.} \label{fig:orig_depends} \end{figure} The manifold $C$ mentioned above is the union of two annuli $[-1,1] \times \mathbb{R}/L\mathbb{Z}$ for some $L>4$, which intersect in two squares (see Figure~\ref{fig:qs} below).
Special dynamics, called eggbeater dynamics, are defined on $C$ (see the definition later in this subsection). In order to get results for a symplectic surface $M$, we need an embedding $i: C \hookrightarrow M$, which will induce eggbeater dynamics on $M$. Since $M$ is symplectic, it is orientable, and this leaves little choice for its homeomorphism type: the genus of $M$ determines it. Topologically, one can always think of $i$ as first embedding $C$ into the sphere, and then adding some handles. The results here use Floer homology, whose basic objects of interest are free homotopy classes of loops in the manifold. Thus we want the embedding $i$ to have the property that homotopy classes of different orbits of the dynamics on $C$ are pushed by $i$ to different homotopy classes in $M$, so that we will be able to distinguish between different orbits with Floer homology tools. Since the only choice in the embedding is how many handles to add and in which components of $M \setminus i(C)$ to attach them, we want every such component to have at least one end of a handle; otherwise, it will be contractible. There are four such components (consider Figure~\ref{fig:qs}), so for this method to work we need $M$ to be of genus at least 2. The original construction, found in~\cite{PS14} and presented later in this subsection, uses a surface of genus at least 4, and adds (at least) one handle in every such component. This produces results for surfaces of genus $\geq 4$, and has the added benefit that it makes $i$ incompressible, so that different free homotopy classes of loops in $C$ do not merge under $i$. However, this is not efficient if we want to minimize the genus. A slightly more efficient construction is found in Subsection~\ref{sec:3.2}, and produces results for surfaces of genus 2 or 3. \\ Both of the original proofs of Theorems~\ref{thm:orig2},~\ref{thm:orig3} use a construction of a sequence of homomorphisms $\Phi_k: F_2 \to Ham(\Sigma_4)$ (indexed by $k \in \mathbb{N}$). Every such Hamiltonian diffeomorphism $\Phi_k(w)$ is generated by a specific Hamiltonian denoted $H_{k,w}$, and the Hamiltonian isotopy generated by $H_{k,w}$ is denoted $\phi_{k,w}(t)$. These are all specified exactly later in this subsection. Note that these homomorphisms depend on the surface $\Sigma_4$: the proofs of the generalized theorems (in Subsection~\ref{sec:3.2}) will use similar but different homomorphisms. This sequence $\Phi_k$ induces a homomorphism \begin{myequation} F_2 \to Cone_\mathcal{U}(Ham(\Sigma_4), d_H) , \\ w \mapsto [(\Phi_k(w))_{k=1}^\infty] . \end{myequation} To show this is a monomorphism, and so prove Theorem~\ref{thm:orig3}, for any $1 \neq w \in F_2$ we need to show that $\lim_\mathcal{U} \frac{d_H(\Phi_k(w), id_{\Sigma_4})}{k} = \lim_\mathcal{U} \frac{|| \Phi_k(w) ||_H}{k} > 0$. This is done in Theorem~\ref{thm:4}: \begin{theorem}[Theorem 2.1 from~\cite{10}] \label{thm:4} Let $1 \neq w \in F_2 = \langle H,V \rangle$. Then there exist constants $C = C(w) > 0, k_0 = k_0(w) \in \mathbb{N}$ such that for any $k > k_0$: \[ || \Phi_k(w) ||_H \geq C \cdot k. \] \end{theorem} \begin{corollary} Theorem~\ref{thm:4} implies Theorem~\ref{thm:orig3}.
\end{corollary} In addition to the above sequence $\Phi_k$, a collection of free homotopy classes $\alpha_{k,w} \in \pi_0(\mathscr{L}C)$, indexed by $k \in \mathbb{N}, w \in F_2$, and another collection of free homotopy classes $\alpha_k^\prime \in \pi_0(\mathscr{L}C)$, indexed by $k \in K$, are specified, where $K \subset \mathbb{N}$ is some unbounded subset to be specified later in this subsection. To prove Theorems~\ref{thm:orig2},~\ref{thm:4}, the following propositions are used: \begin{prop}[Proposition 5.11 from~\cite{10}] \label{prop:main} Let $w = \Pi_{j=1}^r V^{N_j} H^{M_j} \in F_2$ be a balanced word. For large enough $k \in \mathbb{N}$, there are $2^{2r}$ non-degenerate fixed points of $\Phi_k(w)$ whose orbits have free homotopy class $\tau_i(\alpha_{k,w})$ (i.e. $[t \mapsto \phi_{k,w}(t)(z_0)]_{\pi_0(\mathscr{L}\Sigma_4)} = \tau_i(\alpha_{k,w})$ for $z_0$ a non-degenerate fixed point of $\Phi_k(w)$), and these fixed points are indexed by $\vec{\epsilon} = (\epsilon_0,...,\epsilon_{2r-1}) \in \{\pm 1\}^{2r}$, with the fixed point associated to sign vector $\vec{\epsilon}$ denoted $z(\vec{\epsilon})$. The action and Conley-Zehnder index of the point $z(\vec\epsilon)$ are: \begin{equation} \label{eq:2} \mathcal{A}(z(\vec{\epsilon})) = Lk \sum_{j=1}^r \left( \epsilon_{2j-2} N_j \left(1-\frac{1}{2|N_j|}\right)^2 + \epsilon_{2j-1} M_j \left(1-\frac{1}{2|M_j|}\right)^2 \right) + O(1) , \end{equation} \begin{equation} \label{eq:3} \mu_{CZ}(z(\vec\epsilon)) = 1 + \frac12 \sum_{j=1}^r \left( \epsilon_{2j-2} sign(N_j) + \epsilon_{2j-1} sign(M_j) \right) , \end{equation} where the action of a fixed point is understood to be that of its orbit under $\phi_{k,w}$, the action and Conley-Zehnder index are with respect to the Hamiltonian $H_{k,w}$, and where the $O$ notation in Equation~\ref{eq:2} is as $k \to \infty$. \end{prop} \begin{prop}[Proposition 5.1 from~\cite{PS14}] \label{prop:main2} Let $w = (VH)^r \in F_2$ for some $r \in \mathbb{N}$. For large enough $k \in K$, there are $2^{2r}$ non-degenerate fixed points of $\Phi_k(w)$ whose orbits have free homotopy class $\tau_i(\alpha_k^\prime)$, and fixed points in different orbits have action gaps that grow linearly with $k$: for such fixed points in different orbits $y, z$: \[ | \mathcal{A}(y) - \mathcal{A}(z) | \geq c \cdot k + O(1) , \] as $k \to \infty$, for some global constant $c > 0$. \end{prop} Note that both propositions depend on the definition of the embedding $i$, defined later in this subsection. Given Proposition~\ref{prop:main2}, Theorem~\ref{thm:orig2} is proven in Section 5.1 of~\cite{PS14}, and given Proposition~\ref{prop:main}, Theorem~\ref{thm:4} is proven in Section 5.4 of~\cite{10} (the case where $w$ is not conjugate to a power of $V$ or $H$ requires Proposition~\ref{prop:main}; the case where $w$ is conjugate to a power of $V$ or $H$ does not use it). The remainder of this subsection is devoted to outlining the construction of $\Phi_k, \phi_{k,w}, \alpha_{k,w}, \alpha_k^\prime$ and the proofs of Propositions~\ref{prop:main},~\ref{prop:main2}, as given in~\cite{PS14},~\cite{10}. \\ \subsubsection{The geometric construction}\label{sec:geometric} Let $\Sigma_4$ be a surface of genus $\geq 4$. Consider the cylinder $C_*=[-1,1]\times\mathbb{R}/L\mathbb{Z}$, for $L > 4$, with coordinates $x,y$ and the standard symplectic form $dx\wedge dy$. Consider the squares \[ S_0^\prime = [-1,1]\times[-1,1]/L\mathbb{Z}, S_1^\prime = [-1,1]\times[L/2-1,L/2+1]/L\mathbb{Z} \] in $C_*$.
They give four squares $S_{V,0},S_{V,1}\subset C_V$ and $S_{H,0},S_{H,1}\subset C_H$. Define the symplectomorphism $VH_{0,1}:S_{V,0}\bigsqcup S_{V,1} \to S_{H,0} \bigsqcup S_{H,1}$ given by $VH\bigsqcup VH^\prime$, where \begin{myequation} VH:S_{V,0}\to S_{H,0} : (x,[y]) \mapsto (-y,[x]) ,\\ VH^\prime : S_{V,1}\to S_{H,1} : (x,[y]) \mapsto (y-L/2, [-x+L/2]) . \end{myequation} Define \[ C = C_V \bigcup_{VH_{0,1}} C_H . \] This is a symplectic manifold with symplectic form $\omega_0 = dx \wedge dy$ on every copy of the cylinder $C_*$. Denote by $c_V: C_* \hookrightarrow C, \ c_H: C_* \hookrightarrow C$ the two injections induced by the above union. Denote by $S_0,S_1 \subset C$ the squares obtained by identifying $S_{V,0}$ with $S_{H,0}$ and $S_{V,1}$ with $S_{H,1}$, respectively. In fact, $S_0 \cup S_1 = C_V \cap C_H$. Fix two points $s_0 \in S_0, s_1 \in S_1$. Define 4 paths: two paths $q_1,q_3$ from $s_0$ to $s_1$, and two paths $q_2,q_4$ from $s_1$ to $s_0$ (see Figure~\ref{fig:qs}); $q_1, q_2$ are paths on $C_V$, and $q_3,q_4$ are paths on $C_H$. \begin{figure} \centering \begin{tikzpicture} \begin{scope}[decoration={markings,mark=at position 0.5 with {\arrow[scale=2]{>}}}] \draw[dotted] (-1,0) circle (3); \draw[dotted] (-1,0) circle (2.5); \draw (-1.5,3) node[anchor=south] {\Large $C_V$}; \draw[dashed] (1,0) circle (3); \draw[dashed] (1,0) circle (2.5); \draw (1.5,3) node[anchor=south] {\Large $C_H$}; \filldraw[black] (0,-2.55) circle (2pt); \draw (0,-2.8) node[anchor=north] {\Large $s_0$}; \filldraw[black] (0,2.55) circle (2pt); \draw (0,2.8) node[anchor=south] {\Large $s_1$}; \draw[postaction={decorate}] (-1,0) ++(288:2.75) arc (288:72:2.75); \draw[postaction={decorate}] (-1,0) ++(65:2.75) arc (65:-65:2.75); \draw[postaction={decorate}] (1,0) ++(468:2.75) arc (468:252:2.75); \draw[postaction={decorate}] (1,0) ++(245:2.75) arc (245:115:2.75); \draw (-4,0) node[anchor=east] {\Large $q_1$}; \draw (2,0) node[anchor=west] {\Large $q_2$}; \draw (-2,0) node[anchor=east] {\Large $q_3$}; \draw (4,0) node[anchor=west] {\Large $q_4$}; \end{scope} \end{tikzpicture} \caption{The paths $q_1, q_2, q_3, q_4$ in $C$.} \label{fig:qs} \end{figure} Note that $\pi_1(C,s_0) \simeq F_3$, the free group on 3 generators. The 3 generators $a,b,c$ of $\pi_1(C,s_0)$ are taken to be: \begin{myequation} a = [q_1 \# q_2]_{\pi_1(C,s_0)} , \\ b = [q_3 \# q_4]_{\pi_1(C,s_0)} , \\ c = [q_3 \# q_2]_{\pi_1(C,s_0)} , \end{myequation} where the $\#$ sign is used for path concatenation: $q\#q^\prime$ is the concatenation of the path $q$ and then $q^\prime$, defined whenever $q(1) = q^\prime(0)$. Consider the function $u_0: [-1,1] \to \mathbb{R}$, $u_0(s) = 1 - |s|$. Take an even, non-negative, sufficiently $C^0$-close smoothing $u$ of $u_0$ such that $u$ is supported away from $\{\pm 1\}$, and both $u-u_0$ and $\int_{-1}^t (u(s)-u_0(s)) ds$ are supported in a sufficiently small neighborhood of $\{\pm 1, 0\}$. For $k \in \mathbb{N}$, define \begin{myequation} f = f_k: C_* \to C_* , \\ f(x,[y]) = (x,[y + k L \cdot u(x)]) . \end{myequation} This mapping is a Hamiltonian diffeomorphism on $C_*$ with Hamiltonian \[ h_k(x,[y]) = -\frac{k}{2} + k \int_{-1}^x u(s) ds . \] Denote $f_{k,V} = (c_V)_* f_k$, $f_{k,H} = (c_H)_* f_k$. Note that these are two Hamiltonian diffeomorphisms on $C$, one supported on $C_V$ and the other on $C_H$, both supported away from $\partial C$.
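For orientation, note how the one remaining basic concatenation of the paths $q_i$ is expressed in the generators just chosen: \begin{myequation} [q_1 \# q_4] = [q_1 \# q_2] [q_3 \# q_2]^{-1} [q_3 \# q_4] = a c^{-1} b . \end{myequation} Words alternating powers of $a$ and $b$ with occurrences of $c^{\pm 1}$, such as this one, are exactly the shape of the classes $\delta$ appearing in Lemma~\ref{lemma:2}; compare Claim~\ref{claim:3}, which shows that the orbits of the eggbeater maps have free homotopy classes of this form.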
Define a homomorphism \begin{myequation} \Psi_k: F_2 = \langle V,H \rangle \to Ham_c(C,\omega_0) ,\\ V \mapsto f_{k,V}, H \mapsto f_{k,H} , \end{myequation} where $Ham_c(M,\omega)$ denotes the group of Hamiltonian diffeomorphisms of a symplectic manifold $(M,\omega)$ generated by compactly supported Hamiltonians. Note that the image of a word $w = V^{N_1} H^{M_1} ... V^{N_r} H^{M_r} \in F_2$ is $f_{k,H}^{M_r} \circ f_{k,V}^{N_r} \circ ... \circ f_{k,H}^{M_1} \circ f_{k,V}^{N_1}$. These images are called \textit{eggbeater maps}. Pushing forward the flow generated by $h_k$ to $C$, one gets flows on $C_V$ and $C_H$ whose time-one-maps are $f_{k,V}$ and $f_{k,H}$, and concatenating these in the order induced by $w$ (as in the construction of $\Psi_k(w)$) one gets a flow $\psi_{k,w}(t): C \to C$, whose time-one-map is $\Psi_k(w)$. Denote by $G_{k,w}: S^1 \times C \to \mathbb{R}$ the Hamiltonian that generates this flow. \\ Consider a symplectic embedding $i: C \hookrightarrow \Sigma_4$ such that $i_*: \pi_1(C) \to \pi_1(\Sigma_4)$ and $\tau_i: \pi_0(\mathscr{L}C) \to \pi_0(\mathscr{L}\Sigma_4)$ are both injective, and such that each component of $\partial C$ separates $\Sigma_4$. For example, one can embed $C$ into $\mathbb{R}^2$, add at least one handle in every connected component of $\mathbb{R}^2 \setminus C$, and compactify by adding the point at infinity; see the construction in~\cite{PS14} and~\cite{10}. It is at this point that Lemma~\ref{lemma:2} comes into play when the propositions are generalized to surfaces of genus 2 and 3 (see Subsection~\ref{sec:3.2}). Note that $i$, together with the Hamiltonians generating $f_{k,V}$ and $f_{k,H}$, satisfies Condition~\ref{eq:condition}. \\ The homomorphisms $\Phi_k: F_2 \to Ham(\Sigma_4)$ will be defined by $\Phi_k(w) = i_* \Psi_k(w)$. The mapping $\Phi_k(w)$ is indeed a diffeomorphism, because of the assumptions on $u$, and is Hamiltonian, since it is generated by the Hamiltonian $H_{k,w} = i_* G_{k,w}$. \\ Given $k \in \mathbb{N}, w = V^{N_1} H^{M_1} ... V^{N_r} H^{M_r} \in F_2$, set the free homotopy classes $\alpha_{k,w}$ to be \[ \alpha_{k,w} = \eta_{C,s_0}(\beta_{k,w}) \in \pi_0(\mathscr{L}C) , \] where \[ \beta_{k,w} = \Pi_{j=1}^r a^{k \cdot sign(N_j)} b^{k \cdot sign(M_j)} \in \pi_1(C,s_0) \] and $sign: \mathbb{Z} \to \{0, \pm1\}$ is \begin{myequation} n \mapsto \left\{\begin{array}{ll} \frac{n}{|n|} & n \neq 0 \\ 0 & n = 0 \end{array}\right. . \end{myequation} Additionally, choose $\nu_j,\mu_j \in (0,1)$ for $j=1,...,r$ such that $\frac{\nu_j}{L},\frac{\mu_j}{L} \in \mathbb{Q}$ for all $j$, and the values $\sum_{j=0}^{r-1} (\epsilon_{2j+1}(1-\mu_{j+1})^2 - \epsilon_{2j}(1-\nu_{j+1})^2)$ are all distinct, for all sign vectors $\vec\epsilon = (\epsilon_0,...,\epsilon_{2r-1}) \in \{\pm 1\}^{2r}$. For these choices, the set $K \subset \mathbb{N}$ from Proposition~\ref{prop:main2} is taken to be $K = \{k \in \mathbb{N} | \forall j: \frac{\mu_j k}{L}, \frac{\nu_j k}{L} \in \mathbb{Z} \}$. For $k \in K$ set \[ \alpha_k^\prime = \eta_{C,s_0}(\beta_k^\prime) \in \pi_0(\mathscr{L}C) , \] where \[ \beta_k^\prime = \Pi_{j=1}^r a^{\frac{\nu_j k}{L}} b^{\frac{\mu_j k}{L}} \in \pi_1(C,s_0) . \] From this point on, it is assumed that any index $k$ of $\alpha_k^\prime$ or $\beta_k^\prime$ is in the subset $K$, since otherwise $\alpha_k^\prime, \beta_k^\prime$ are not well-defined. This will also be stated explicitly. \\ The outline of the proofs of Propositions~\ref{prop:main},~\ref{prop:main2} is as follows.
Since any non-degenerate fixed point of $\Phi_k(w)$ must lie in $\Ima i$, and in addition $i: C \to \Sigma_4$, $i_*: \pi_1(C,s_0) \to \pi_1(\Sigma_4,i(s_0))$ and $\tau_i: \pi_0(\mathscr{L}C) \to \pi_0(\mathscr{L}\Sigma_4)$ are all injective, it is enough to prove versions of Propositions~\ref{prop:main},~\ref{prop:main2} on $C$, i.e. to prove the following claims: \begin{claim} \label{cl:2} Let $w = \Pi_{j=1}^r V^{N_j} H^{M_j} \in F_2$ be balanced. For large enough $k \in \mathbb{N}$, there are $2^{2r}$ non-degenerate fixed points of $\Psi_k(w)$ whose orbits have free homotopy class $\alpha_{k,w}$, and these fixed points have actions (with respect to $G_{k,w}$) and Conley-Zehnder indices as in Equations~\ref{eq:2},\ref{eq:3}. \end{claim} \begin{claim} \label{cl:2.2} Let $w = (VH)^r \in F_2$ for some $r \in \mathbb{N}$. For large enough $k \in K$, there are $2^{2r}$ fixed points of $\Psi_k(w)$ whose orbits have free homotopy class $\alpha_k^\prime$, and such fixed points in different orbits have action gaps (with respect to $G_{k,w}$) that grow linearly with $k$ as $k \to \infty$. \end{claim} This is proven in two steps. First, one proves these claims with respect to a piecewise-linear version $\psi_{k,w}^\prime(t)$ of the isotopy $\psi_{k,w}(t)$. This is done in Subsection 5.1 of~\cite{10} and in Section 5 of~\cite{PS14}, by analysis of the dynamics on $C$. Then, one shows that the non-degenerate fixed points of $\Psi_k(w)$ are exactly those of $\psi_{k,w}^\prime(1)$, for $k$ large enough. This is done in Subsection 5.3 of~\cite{10}. \subsection{Proof of the new theorems (genera 2,3)} \label{sec:3.2} The new proofs are very similar to the original ones, with the following changes. The interesting dynamics, which we want to keep, are the eggbeater dynamics, but we will need to change our construction of $i, C, \psi_{k,w}$ a bit to accommodate the fact that the genus of the surface is now 2 or 3. More concretely, let $\Sigma$ be a surface of genus 2 or 3 with area form $\sigma$. The following data will be defined: \begin{itemize} \item A symplectic surface $(D,\omega)$ that contains $C$ from the original construction (i.e. $e_2: C \hookrightarrow D$ is a symplectic embedding that induces an injection $\tau_{e_2}: \pi_0(\mathscr{L}C) \hookrightarrow \pi_0(\mathscr{L}D)$), \item a symplectic embedding $i_2: D \hookrightarrow \Sigma$, and \item homomorphisms $\Xi_k: F_2 \to Ham_c(D, \omega)$ indexed by $k \in \mathbb{N}$, with $\Xi_k(w)$ the time-one-map of the flow $\xi_{k,w}(t): D \to D$, such that for all $k \in \mathbb{N}, w \in F_2$: $\xi_{k,w}(t) \restriction_C = \psi_{k,w}(t)$. \end{itemize} The Hamiltonian generating $\xi_{k,w}(t)$ will be denoted $F_{k,w}: S^1 \times D \to \mathbb{R}$. These data will specify the dynamics on $D$. In order to work on $\Sigma$, the dynamics will be pushed forward by $i_2$. This will result in a Hamiltonian $(i_2)_* F_{k,w}: S^1 \times \Sigma \to \mathbb{R}$, and the flow and time-one-map it generates, denoted $(i_2)_* \xi_{k,w}(t): \Sigma \to \Sigma$ and $(i_2)_* \Xi_k(w): \Sigma \to \Sigma$.
The reader may wish to consult the following diagram, which indicates the different surfaces mentioned so far and the embeddings between them, together with the Hamiltonians on them, the flows these Hamiltonians generate, and their time-one-maps: \begin{center}\begin{tikzcd} (C, G_{k,w}, \psi_{k,w}, \Psi_k(w)) \arrow[hookrightarrow]{d}{e_2} \arrow[hookrightarrow]{r}{i} & (\Sigma_4, H_{k,w} = i_* G_{k,w}, \phi_{k,w} = i_* \psi_{k,w}, \Phi_k(w) = i_* \Psi_k(w)) \\ (D, F_{k,w}, \xi_{k,w}, \Xi_k(w)) \arrow[hookrightarrow]{r}{i_2} & (\Sigma, (i_2)_* F_{k,w}, (i_2)_* \xi_{k,w}, (i_2)_* \Xi_k(w)) \\ \end{tikzcd}\end{center} All these data will be explicitly defined in Subsection~\ref{sec:3.2.1}. \\ With these constructions in hand, we will prove new versions of Propositions~\ref{prop:main},~\ref{prop:main2}, and these will imply the generalized Theorems~\ref{thm:2},~\ref{thm:3}. These dependencies are depicted in Figure~\ref{fig:new_depends}. The deductions marked $**$ can be taken as is from the proofs for surfaces of genus $\geq 4$ in~\cite{PS14},~\cite{10}. The deductions marked $*$ will now require justification, since $i_2$ will no longer be incompressible. Since Claims~\ref{cl:2},~\ref{cl:2.2} were already established (see the proofs in~\cite{PS14},~\cite{10} or the sketch in Subsection~\ref{sec:3.1}), it remains to show that they imply Propositions~\ref{prop:new},~\ref{prop:new2}. This is done in Subsection~\ref{sec:3.2.2}, and uses Lemma~\ref{lemma:2}. \begin{figure} \centering \begin{tikzcd} Claim~\ref{cl:2} \arrow[d,rightsquigarrow,"*"] & Claim~\ref{cl:2.2} \arrow[d,rightsquigarrow,"*"] & \bigg] in \ C \\ Proposition~\ref{prop:new} \arrow[d,rightsquigarrow,"**"] & Proposition~\ref{prop:new2} \arrow[d,rightsquigarrow,"**"] & \bigg] in \ \Sigma \\ Theorem~\ref{thm:3} & Theorem~\ref{thm:2} \end{tikzcd} \caption{Dependencies between generalized intermediate results.} \label{fig:new_depends} \end{figure} Propositions~\ref{prop:new},~\ref{prop:new2}, which by the above discussion imply the main theorems of this paper, are stated here. \begin{prop} \label{prop:new} Let $w = \Pi_{j=1}^r V^{N_j} H^{M_j} \in F_2 = \langle H,V \rangle$ be balanced. For large enough $k \in \mathbb{N}$, there are $2^{2r}$ non-degenerate fixed points of $(i_2)_* \Xi_k(w)$ in $\Sigma$ whose orbits have free homotopy class $\tau_{i_2}(\alpha_{k,w})$, and such fixed points have actions and Conley-Zehnder indices as in Equations~\ref{eq:2},\ref{eq:3}. \end{prop} \begin{prop} \label{prop:new2} Let $w = (VH)^r \in F_2$ for some $r \in \mathbb{N}$. For large enough $k \in K$, there are $2^{2r}$ fixed points of $(i_2)_* \Xi_k(w)$ in $\Sigma$ whose orbits have free homotopy class $\tau_{i_2}(\alpha_k^\prime)$, and such fixed points in different orbits have action gaps that grow linearly with $k$: for such fixed points in different orbits $y, z$: \[ | \mathcal{A}(y) - \mathcal{A}(z) | \geq c \cdot k + O(1) , \] as $k \to \infty$, for some global constant $c > 0$. The set $K \subset \mathbb{N}$ is as defined in Subsection~\ref{sec:3.1}. \end{prop} Note that since $\pi_0(\mathscr{L}C) \subset \pi_0(\mathscr{L}D)$, the expressions $\tau_{i_2}(\alpha_{k,w})$ and $\tau_{i_2}(\alpha_k^\prime)$ are well-defined. \subsubsection{The construction} \label{sec:3.2.1} In a sense, the proof for $\Sigma$ of genus 2 is harder than for genus 3. In the following, we will present the proof for $\Sigma$ of genus 2, commenting on the differences in the genus 3 case as they arise.
Consider $C$, defined in Subsection~\ref{sec:geometric}, and two additional copies of the cylinder $C_*$, $C_1$ and $C_2$. Our manifold $D$ will be $D = C \bigsqcup C_1 \bigsqcup C_2$, equipped with the standard symplectic form $\omega_0 = dx \wedge dy$ on every component. Denote the symplectic inclusions $C_* \hookrightarrow C_1,C_2$ by $c_1,c_2$ respectively. Note that the additional annuli, $C_1$ and $C_2$, are needed to make sure that $i_2$ and $F_{k,w}$ satisfy Condition~\ref{eq:condition}; that is, to enable $F_{k,w} \circ i_2^{-1}: \Ima(i_2) \to \mathbb{R}$ to be extended by a locally constant function to a function on all of $\Sigma$. The symplectic embedding $i_2:(D,\omega_0) \hookrightarrow (\Sigma, \sigma)$ is built in stages. First define $i_2$ on $C$: embed $C$ symplectically into $\mathbb{R}^2$, embed the plane $\mathbb{R}^2$ into $S^2$, and then add 2 handles as shown in Figure~\ref{fig:handles}; that is, $C$ separates the sphere into 4 connected components, each having two neighbors; connect any two non-neighboring components with a handle. In the case of genus 3, add a handle inside the `outside' component to obtain a surface of genus 3 (we will refer to this handle as the extra handle). This defines $i_2 \restriction_C$. Define $i_2$ on $C_1 \bigsqcup C_2$ by embedding each of them symplectically into one of the non-extra handles (with $C_1$ and $C_2$ on different handles), such that the images $i_2(C_1), i_2(C_2)$ are not contractible. The orientation of the embeddings $i_2\restriction_{C_1}, i_2\restriction_{C_2}$, and which goes on which handle, will be specified below. The embedding $i_2$ can be seen in Figure~\ref{fig:handles}. \begin{figure}[!ht] \centering \begin{minipage}{.5\textwidth} \centering \scalebox{0.75}{ \begin{tikzpicture} \draw[gray,ultra thin] (-1,0) circle (3); \draw[gray,ultra thin] (-1,0) circle (2.5); \draw (-1.5,3) node[anchor=south] {\Large $i_2(C_V)$}; \draw[gray,ultra thin] (1,0) circle (3); \draw[gray,ultra thin] (1,0) circle (2.5); \draw (1.5,3) node[anchor=south] {\Large $i_2(C_H)$}; \filldraw[fill=white,draw=black,thick] (0,-2) ++(35:3.7) arc (35:145:3.7) -- ++(-35:0.6) arc (145:35:3.1) -- cycle; \filldraw[fill=white,draw=black,thick] (-2.75,0) circle (0.3); \filldraw[fill=white,draw=black,thick] (2.75,0) circle (0.3); \filldraw[fill=white,draw=black,thick] (-1.5,-3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (0,-1) circle (0.3); \filldraw[fill=white,draw=black,thick] (0,-5) circle (0.3); \draw[gray, ultra thin] (-1.5,-3) ++(-20:2.5) ++(-20:0.3) arc (-20:-200:0.3); \draw[gray, dashed] (-1.5,-3) ++(-20:2.5) ++(160:0.3) arc (160:-20:0.3); \draw[gray, ultra thin] (-1.5,-3) ++(-22:2.5) ++(-22:0.3) arc (-22:-202:0.3); \draw[gray, dashed] (-1.5,-3) ++(-22:2.5) ++(158:0.3) arc (158:-22:0.3); \draw[gray, ultra thin] (0,-2) ++(90:3.4) ++(90:0.3) arc (90:270:0.3); \draw[gray, dashed] (0,-2) ++(90:3.4) ++(270:0.3) arc (270:450:0.3); \draw[gray, ultra thin] (0,-2) ++(92:3.4) ++(92:0.3) arc (92:272:0.3); \draw[gray, dashed] (0,-2) ++(92:3.4) ++(272:0.3) arc (272:452:0.3); \draw (0,1) node[anchor=north] {\Large $i_2(C_1)$}; \draw (1.2,-4) node[anchor=west] {\Large $i_2(C_2)$}; \filldraw[black] (0,-2.55) circle (2pt); \draw (0,-2.8) node[anchor=north] {\Large $i_2(s_0)$}; \end{tikzpicture} } \caption{Adding handles on $S^2$ after embedding $C$; this whole figure is in $S^2$.} \label{fig:handles} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \pgfdeclarelayer{bg}
\pgfsetlayers{bg,main} \hspace{-2.5cm} \scalebox{0.9}{ \begin{tikzpicture} \filldraw[fill=white,draw=black,thick] (0,-2) ++(35:3.7) arc (35:145:3.7) -- ++(-35:0.6) arc (145:35:3.1) -- cycle; \filldraw[fill=white,draw=black,thick] (-2.75,0) circle (0.3); \filldraw[fill=white,draw=black,thick] (2.75,0) circle (0.3); \filldraw[fill=white,draw=black,thick] (-1.5,-3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (0,-1) circle (0.3); \filldraw[fill=white,draw=black,thick] (0,-5) circle (0.3); \node (origin) at (0,-2.55) {}; \node (A) at ($(0,-2) + (145:3.4)$) {}; \node (B) at ($(0,-2) + (35:3.4)$) {}; \node (C) at ($(-1.5,-3) + (54:2.6)$) {}; \node (D) at ($(-1.5,-3) + (-54:2.6)$) {}; \filldraw[black] (origin) circle (2pt); \draw (0,-2.8) node[anchor=north] {\Large $i_2(s_0)$}; \begin{scope} [gray,dashed,decoration={markings,mark=at position 0.55 with {\arrow[scale=2]{>}}}] \draw (origin) to[out=140,in=235] (A) arc (145:35:3.5) to[out=-55,in=0] (0,0.5) to[out=180,in=122,distance=50] (origin) decorate {(A) to[out=55,in=130] (B)}; \draw (origin) to[out=100,in=140,distance=30] (C) arc (54:-54:2.6) to[out=215,in=215,distance=60] (origin) decorate {(D) arc (-54:54:2.2)}; \end{scope} \draw (-4.3,0) node {\Large $g_1$}; \draw (0,2) node {\Large $g_2$}; \draw (1.5,-1) node {\Large $g_3$}; \draw (1.5,-3.5) node {\Large $g_4$}; \begin{pgfonlayer}{bg} \begin{scope} [gray,decoration={markings,mark=at position 0.4 with {\arrow[scale=2]{>}}}] \draw[postaction={decorate}] (origin) to[out=160,in=123,distance=200] (origin); \draw[postaction={decorate}] (origin) to[out=120,in=45,distance=130] (origin); \end{scope} \end{pgfonlayer} \end{tikzpicture} } \caption{The generators of $\pi_1(\Sigma, i_2(s_0))$; dashed lines are only for readability.} \label{fig:gens} \end{minipage} \end{figure} Recall $s_0 \in C$, and $a,b,c$, the generators of $\pi_1(C, s_0)$. Fix some $s^\prime_k \in C_k$ for $k=1,2$. The generators of $\pi_1(C_k,s^\prime_k) = \mathbb{Z}$ are denoted $d_k$. These are positively oriented; that is, $d_k = [t\mapsto(0,Lt) \in C_k]$ as elements in $\pi_1(C_k,s_k^\prime)$. Denote the generators of $\pi_1(\Sigma, i_2(s_0))$ by $g_1,\ldots,g_4$, such that $\pi_1 (\Sigma, i_2(s_0)) = \left\langle g_1,g_2,g_3,g_4\middle|[g_1,g_2] [g_3,g_4] \right\rangle$, and the loops $g_1$ and $g_3$ go around the two handles (see Figure~\ref{fig:gens}). In the case where $\Sigma$ is of genus 3, denote $\pi_1(\Sigma, i_2(s_0)) = \langle g_1, \ldots, g_6 | [g_1,g_2] [g_3,g_4] [g_5,g_6] \rangle$, such that the loops $g_1$ and $g_3$ still go around the two non-extra handles (the same as in Figure~\ref{fig:gens}), and such that $g_5$ goes around the extra handle. The loops $g_{k+1}$ go along the handle that the loops $g_k$ go around (for $k = 1,3,5$), oriented such that $[g_1,g_2][g_3,g_4][g_5,g_6] = 1$ in $\pi_1(\Sigma, i_2(s_0))$. Choose the image of $C_1$ to be contained in the handle that the loop $g_1 \in \pi_1(\Sigma, i_2(s_0))$ goes around (in Figures~\ref{fig:handles} and~\ref{fig:gens} this is the upper handle), and so $i_2(C_2)$ is contained in the handle that the loop $g_3$ goes around.
Since the image of $C_1$ is located on a handle, is non-contractible, and $i_2$ is an embedding, one can choose how to orient the image of $C_1$: \[ \tau_{i_2} \circ \eta_{C_1,s_1^\prime} (d_1) = \eta_{\Sigma, i_2(s_0)} (g_1) \text{ or } \tau_{i_2} \circ \eta_{C_1,s_1^\prime} (d_1) = \eta_{\Sigma, i_2(s_0)} (g_1^{-1}) , \] and similarly one can choose the orientation of $i_2(C_2)$: \[ \tau_{i_2} \circ \eta_{C_2,s_2^\prime} (d_2) = \eta_{\Sigma, i_2(s_0)} (g_3) \text{ or } \tau_{i_2} \circ \eta_{C_2,s_2^\prime} (d_2) = \eta_{\Sigma, i_2(s_0)} (g_3^{-1}) . \] Choose the orientation of the embeddings $i_2\restriction_{C_1}, i_2\restriction_{C_2}$ to be such that the elements $d_1,d_2$ map to $\eta_{\Sigma, i_2(s_0)} (g_1)$ and $\eta_{\Sigma, i_2(s_0)} (g_3^{-1})$, respectively, under $\tau_{i_2} \circ \eta_{C_k, s_k^\prime}$ (for $k = 1,2$). This concludes the definition of the embedding $i_2$. \\ By construction of $i_2: D \hookrightarrow \Sigma$, the pushforward $(i_2)_*: \pi_1(D, s_0) \to \pi_1(\Sigma, i_2(s_0))$ acts on the generators $a,b,c$ of $\pi_1(D,s_0)$ as follows: \begin{myequation} a \mapsto g_1 g_3 , \\ b \mapsto g_2 g_1^{-1} g_2^{-1} g_3 , \\ c \mapsto g_3 . \end{myequation} In order to define the Hamiltonian $F_{k,w}$ on $D$, a procedure similar to the one that defines $G_{k,w}$ is followed. Recall $h = h_k: C_* \to \mathbb{R}$, the Hamiltonian on $C_*$: \[ (x,[y]) \mapsto -\frac{k}{2} + k \int_{-1}^x u(s) ds , \] defined in Subsection~\ref{sec:3.1}. Note that the pairs $e_2 \circ c_V, h_k$ and $e_2 \circ c_H, h_k$ satisfy Condition~\ref{eq:condition}. Define two autonomous Hamiltonians on $D$: \begin{myequation} g_{k,V}: D \to \mathbb{R} , \\ g_{k,V} = (e_2 \circ c_V)_* h_k \bigsqcup - (c_1)_* h_k \bigsqcup (c_2)_* h_k ; \\ g_{k,H}: D \to \mathbb{R} , \\ g_{k,H} = (e_2 \circ c_H)_* h_k \bigsqcup (c_1)_* h_k \bigsqcup (c_2)_* h_k . \\ \end{myequation} Given $w \in F_2 = \langle V,H \rangle$, define $\xi_{k,w}(t)$, the flow on $D$, as the concatenation of the flows induced by $g_{k,V}, g_{k,H}$, in the order induced by $w$. The Hamiltonian which induces this flow is denoted $F_{k,w}: S^1 \times D \to \mathbb{R}$, and its time-one-map is $\Xi_k(w): D \to D$. Another equivalent way to define $\Xi_k(w)$ is to denote the time-one-maps of $g_{k,V}$, $g_{k,H}$ by $g_{k,V}^\prime, g_{k,H}^\prime$ respectively, and define the homomorphism \begin{myequation} \Xi_k: F_2 \to Ham_c(D,\omega_0) , \\ V \mapsto g_{k,V}^\prime, H \mapsto g_{k,H}^\prime . \end{myequation} Next, note that $F_{k,w}$ and $i_2$ satisfy Condition~\ref{eq:condition} for all $k,w$. Therefore we can define the pushforwards $(i_2)_* F_{k,w}: S^1 \times \Sigma \to \mathbb{R}, (i_2)_* \xi_{k,w}: \mathbb{R} \times \Sigma \to \Sigma, (i_2)_* \Xi_k(w): \Sigma \to \Sigma$. \subsubsection{Proof of Propositions~\ref{prop:new},~\ref{prop:new2}} \label{sec:3.2.2} First we state a few helper claims, and use them to prove Propositions~\ref{prop:new},~\ref{prop:new2}. Then, we will show the proofs of the claims. For the rest of this subsection, fix some $2 \leq k \in \mathbb{N}, w = V^{N_1} H^{M_1} ... V^{N_r} H^{M_r} \in F_2$ balanced. Recall that $\alpha_{k,w} = \eta_{C,s_0}(\beta_{k,w})$, $\alpha_k^\prime = \eta_{C,s_0}(\beta_k^\prime)$, and also note that since the connected component of $s_0$ in $D$ is $e_2(C)$, we shall abuse notation and denote $\pi_1(C,s_0) = \pi_1(D,s_0), \eta_{C,s_0} = \eta_{D,s_0}$, etc. The first claim characterizes all egg-beater orbits in $C$ in terms of their free homotopy classes.
\begin{claim} \label{claim:3} Let $\tilde{\gamma} : S^1 \to C$ be a closed $\psi_{k,w}$-orbit. Then there exist $k_1,...,k_r, l_1,...,l_r \in \mathbb{Z}$, $u_1,...,u_r, v_1,...,v_r \in \{0,1\}$ such that $[\tilde{\gamma}]_{\pi_0(\mathscr{L}C)} = \eta_{C,s_0}(\Pi_{m=1}^r a^{k_m}c^{-u_m}b^{l_m}c^{v_m})$. \end{claim} The second claim provides a partial injectivity property of $\tau_{i_2}$. \begin{claim} \label{claim:4} Let $\gamma = \eta_{D,s_0}(\Pi_{m=1}^r a^{k_m}c^{-u_m}b^{l_m}c^{v_m}) \in \pi_0(\mathscr{L}D)$ for some $k_1,...,k_r,l_1,...,l_r \in \mathbb{Z}$, $u_1,...,u_r,v_1,...,v_r \in \{0,1\}$, and assume $\tau_{i_2}(\gamma) = \tau_{i_2}(\alpha_{k,w})$. Then $\gamma = \alpha_{k,w}$. Alternatively, if $k \in K$, assume $\tau_{i_2}(\gamma) = \tau_{i_2}(\alpha_k^\prime)$. Then $\gamma = \alpha_k^\prime$. \end{claim} The third claim will allow us to restrict our calculations to $\xi_{k,w}$-orbits in $D$ which are in fact in $e_2(C)$. \begin{claim} \label{claim:5} Let $k \in \mathbb{N}$ be large enough, and let $z \in \Sigma$ be a non-degenerate fixed point of $(i_2)_* \Xi_k(w)$ whose $(i_2)_* \xi_{k,w}$-orbit is in the class $\tau_{i_2}(\alpha_{k,w})$, i.e. $[(i_2)_* \xi_{k,w}(t) (z)]_{\pi_0(\mathscr{L}\Sigma)} = \tau_{i_2}(\alpha_{k,w})$. Alternatively, let $k \in K$ be large enough, and let $z \in \Sigma$ be a non-degenerate fixed point of $(i_2)_* \Xi_k(w)$ whose $(i_2)_* \xi_{k,w}$-orbit is in the class $\tau_{i_2}(\alpha_k^\prime)$. In both cases, $z \in i_2(C)$. \\ \end{claim} \begin{proof}[Proof of Propositions~\ref{prop:new},~\ref{prop:new2}] Recall that Claims~\ref{cl:2},~\ref{cl:2.2}, proven in~\cite{10},~\cite{PS14}, are analogous to these propositions, but set in $C$. We want to show why they imply that these propositions hold in $\Sigma$. The proofs for both propositions are very similar, so we present them simultaneously. Let $\tilde{\gamma}: S^1 \to \Sigma$ be a closed orbit of $(i_2)_* \xi_{k,w}(t)$ in the class $\tau_{i_2}(\alpha_{k,w})$ (or $\tau_{i_2}(\alpha_k^\prime)$, in the case where $k \in K$) such that $z = \tilde{\gamma}(0)$ is a non-degenerate fixed point of $(i_2)_* \Xi_k(w)$. By Claim~\ref{claim:5}, if $k$ is large enough then $z \in i_2(C)$, therefore $y = i_2^{-1}(z) \in C$ is uniquely defined. Denote by $\tilde{\gamma_y} : S^1 \to C$ the $\xi_{k,w}(t)$-orbit of $y$. By Claim~\ref{claim:3}, there exist $k_1,...,k_r, l_1,...,l_r \in \mathbb{Z} , u_1,...,u_r, v_1,...,v_r \in \{0,1\}$ such that \[ [\tilde{\gamma_y}]_{\pi_0(\mathscr{L}C)} = \eta_{C,s_0}(\Pi_{m=1}^r a^{k_m}c^{-u_m}b^{l_m}c^{v_m}) . \] Since $\tau_{i_2}([\tilde{\gamma_y}]_{\pi_0(\mathscr{L}C)}) = [\tilde{\gamma}]_{\pi_0(\mathscr{L}\Sigma)} = \tau_{i_2}(\alpha_{k,w})$ (or $\tau_{i_2}(\alpha_k^\prime)$), one has $[\tilde{\gamma_y}]_{\pi_0(\mathscr{L}C)} = \alpha_{k,w}$ (or $[\tilde{\gamma_y}]_{\pi_0(\mathscr{L}C)} = \alpha_k^\prime$) by Claim~\ref{claim:4}. Thus non-degenerate fixed points of $(i_2)_* \Xi_k(w)$ in $\Sigma$ whose orbits are in the class $\tau_{i_2}(\alpha_{k,w})$ correspond in a 1-1 manner to non-degenerate fixed points of $\Xi_k(w)$ in $D$ whose orbits are in the class $\alpha_{k,w}$ (and there is a similar correspondence for the classes $\tau_{i_2}(\alpha_k^\prime),\alpha_k^\prime$). Note that since Claim~\ref{claim:4} uses Lemma~\ref{lemma:2}, this is the point where the algebraic Lemma~\ref{lemma:2} enters the proof. \\ Therefore it is enough to restrict the analysis to the dynamics on $D$, and further, to the dynamics on $C$, since $z \in i_2(C)$.
By Claim~\ref{cl:2}, for large enough $k$ there are $2^{2r}$ non-degenerate fixed points of $\Xi_k(w)$ in the class $\alpha_{k,w}$, indexed by $\vec{\epsilon} = (\epsilon_0,...,\epsilon_{2r-1}) \in \{\pm 1\}^{2r}$, such that their actions and Conley-Zehnder indices are given by Equations~\ref{eq:2},\ref{eq:3}. Since $i_2$ preserves actions and Conley-Zehnder indices by construction, this proves Proposition~\ref{prop:new}. Similarly, setting $w = (VH)^r$, by Claim~\ref{cl:2.2} for large enough $k \in K$ there are $2^{2r}$ non-degenerate fixed points of $\Xi_k(w)$ in the class $\alpha_k^\prime$, such that fixed points in different orbits have action gaps that grow linearly with $k$. The injection $i_2$ preserves actions, so this proves Proposition~\ref{prop:new2}. \end{proof} We finish this section with the proofs of the claims. \begin{proof}[Proof of Claim~\ref{claim:3}] Denote $z = \tilde{\gamma}(0)$. Recall that $\psi_{k,w}^\prime$ is the piecewise-linear version of $\psi_{k,w}$ defined in Subsection~\ref{sec:3.1}, and that it is the concatenation of some autonomous Hamiltonian isotopies $f_{0,V}^{N_1t}, f_{0,H}^{M_1t}, ...$; while $\psi_{k,w}$ is their composition. Thus $\tilde{\gamma}$ and $t \mapsto \psi_{k,w}^\prime(t) (z)$ are freely homotopic loops in $C$. Therefore, it is enough to show there exist some $k_1,...,k_r, l_1,...,l_r \in \mathbb{Z}$, $u_1,...,u_r, v_1,...,v_r \in \{0,1\}$ such that $[t \mapsto \psi_{k,w}^\prime(t) (z)]_{\pi_0(\mathscr{L}C)} = \eta_{C,s_0}(\Pi_{m=1}^r a^{k_m} c^{-u_m} b^{l_m} c^{v_m})$, since homotopic loops have the same $\eta_{C,s_0}$-images. This is shown in the proof of Lemma 4.2 in~\cite{10}; in their notation, $\psi_{k,w}^\prime(t)$ is denoted $\phi^t$, and $k_\mu, l_\mu, u_\mu, v_\mu$ are denoted $n_\mu, m_\mu, \epsilon_\mu, \nu_\mu$ respectively, for $1 \leq \mu \leq r$ (note we do not need the whole statement of Lemma 4.2). \end{proof} \begin{proof}[Proof of Claim~\ref{claim:4}] The two assumptions are similar, and the proofs for the two cases are the same. For concreteness, assume $\tau_{i_2}(\gamma) = \tau_{i_2}(\alpha_{k,w})$; the other case is analogous. Let $\delta \in \pi_1(C,s_0)$ be any element such that $\gamma = \eta_{C,s_0}(\delta)$. By assumption, $\tau_{i_2}(\gamma) = \tau_{i_2}(\alpha_{k,w})$. Commutativity (i.e. $\tau_{i_2} \circ \eta_{D,s_0} = \eta_{\Sigma,i_2(s_0)} \circ (i_2)_*$) implies $\eta_{\Sigma,i_2(s_0)}((i_2)_* \delta) = \eta_{\Sigma,i_2(s_0)}((i_2)_* \beta_{k,w})$, so one sees that $(i_2)_* \delta$ and $(i_2)_* \beta_{k,w}$ are conjugate in $\pi_1(\Sigma, i_2(s_0))$. In the case where the genus of $\Sigma$ is 2, recall that $\pi_1(D,s_0) = F_3$, and that $(i_2)_* : F_3 \to \pi_1(\Sigma, i_2(s_0))$ is exactly the same homomorphism as in the requirements of Lemma~\ref{lemma:2}. In the case where the genus of $\Sigma$ is 3, denote \begin{myequation} p_{1234}: \pi_1(\Sigma, i_2(s_0)) \to \langle g_1,g_2,g_3,g_4 | [g_1,g_2][g_3,g_4] \rangle , \\ g_1 \mapsto g_1, g_2 \mapsto g_2, g_3 \mapsto g_3, g_4 \mapsto g_4 , \\ g_5,g_6 \mapsto 1 , \end{myequation} and note that $p_{1234} \circ (i_2)_*$ is exactly the homomorphism in the requirements of Lemma~\ref{lemma:2}. Since $(i_2)_* \delta, (i_2)_* \beta_{k,w}$ are conjugate in $\pi_1(\Sigma, i_2(s_0))$, so are $(p_{1234} \circ (i_2)_*) (\delta), (p_{1234} \circ (i_2)_*) (\beta_{k,w})$ in $\langle g_1,g_2,g_3,g_4 | [g_1,g_2][g_3,g_4] \rangle$.
In both cases, Lemma~\ref{lemma:2} gives that $\delta$ and $\beta_{k,w}$ are conjugate in $\pi_1(D,s_0)$, so $\gamma = \eta_{D,s_0}(\delta) = \eta_{D,s_0}(\beta_{k,w}) = \alpha_{k,w}$ by definition of $\eta_{D,s_0}$. \end{proof} \begin{proof}[Proof of Claim~\ref{claim:5}] Under both of the assumptions (that the orbit of $z$ is in the class $\tau_{i_2}(\alpha_{k,w})$ or $\tau_{i_2}(\alpha_k^\prime)$), the orbit of $z$ is in the class $\eta_{\Sigma, i_2(s_0)}(\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1} g_3)^{k_{2m}})$ for some sequence $k_1,...,k_{2r} \in \mathbb{Z}$, since $(i_2)_*(a) = g_1 g_3, (i_2)_*(b) = g_2 g_1^{-1} g_2^{-1} g_3$. Note that for large enough $k$, the elements $k_m$ in this sequence all satisfy $|k_m| \geq 2$. As noted in Section~\ref{sec:2}, $\pi_1(\Sigma)$ is a free product of $H_1 = \langle g_1,g_2 \rangle, H_2 = \langle g_3,g_4 \rangle$ with amalgamation of subgroups $A = \langle [g_1,g_2] \rangle, B = \langle [g_3,g_4] \rangle$ by the isomorphism $\phi: A \to B: [g_1,g_2] \mapsto [g_3,g_4]^{-1}$. In the case where the genus of $\Sigma$ is 3, $\pi_1(\Sigma)$ is a free product of $H_1 = \langle g_1,g_2 \rangle, H_2 = \langle g_3,g_4,g_5,g_6 \rangle$ with amalgamation of subgroups $A = \langle [g_1,g_2] \rangle, B = \langle [g_3,g_4][g_5,g_6] \rangle$ by the isomorphism $\phi : A \to B : [g_1,g_2] \mapsto ([g_3,g_4][g_5,g_6])^{-1}$. Note that $z \in i_2(D)$, since otherwise $z$ would be a degenerate fixed point. Assume by contradiction $z \in i_2(C_1 \cup C_2)$. Denote the orbit of $z$ under $(i_2)_* \xi_{k,w}(t)$ by $\tilde{\gamma}: S^1 \to \Sigma$. If $z \in i_2(C_1)$, then $\tilde{\gamma}(t) \in i_2(C_1)$ for all $t$, since $\tilde{\gamma}(t) \in i_2(D)$ for all $t$, and $C_1$ is a connected component of $D$. This means $\tilde{\gamma}$ is (freely) homotopic to some loop $\tilde{\delta}: S^1 \to \Sigma$ based in $i_2(s_0)$ with $[\tilde{\delta}]_{\pi_1(\Sigma,i_2(s_0))} = g_1^n$ for some $n \in \mathbb{Z}$, since by construction $\tau_{i_2} \circ \eta_{C_1, s_1^\prime} (d_1) = \eta_{\Sigma,i_2(s_0)} (g_1)$ (recall that $d_1$ is the homotopy class of a loop going around $i_2(C_1)$ once). Denote the homotopy between $\tilde{\gamma},\tilde{\delta}$ by $F : S^1 \times [0,1] \to \Sigma$ with $F(\cdot,0) = \tilde{\gamma}, F(\cdot, 1) = \tilde{\delta}$. Then $\rho = F(0, \cdot)$ is a loop in $\Sigma$ based in $i_2(s_0)$, and $[\tilde{\gamma}]_{\pi_1(\Sigma,i_2(s_0))} = [\rho \# \tilde{\delta} \# \overline{\rho}]_{\pi_1(\Sigma, i_2(s_0))}$. The following holds: \begin{myequation} \Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}} \sim \\ \sim [\tilde{\gamma}]_{\pi_1(\Sigma, i_2(s_0))} = [\rho \# \tilde{\delta} \# \overline{\rho}]_{\pi_1(\Sigma, i_2(s_0))} = \chi g_1^n \chi^{-1} , \end{myequation} where $\chi = [\rho]_{\pi_1(\Sigma, i_2(s_0))}$. Thus $\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}} \sim g_1^n$. Note that $(g_1^n)$, a sequence with a single element, is a cyclically reduced sequence of $\langle H_1*H_2 , A=B, \phi \rangle$, so $g_1^n$ is a cyclically reduced element, with $len(g_1^n) = 1$. Recall that $w$ is given in cyclically reduced form, thus $M_m, N_m \neq 0$ for all $1 \leq m \leq r$. Since $|k_m| \geq 2$ for all $m$, by Claim~\ref{cl:length} it can be seen that $len \left(\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}} \right) > 1$, in contradiction.
If the genus of $\Sigma$ is 3, then note that $p_{1234}$ (defined in the proof of Claim~\ref{claim:4}) preserves factors (in the sense of the proof of Claim~\ref{cl:length}), which means that $len(\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}}) = len(p_{1234}(\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}})) > 1$, by Claim~\ref{cl:length}. This is again a contradiction. Therefore $z \not\in i_2(C_1)$. Likewise, assume $z \in i_2(C_2)$. Similarly to the previous case, there is a conjugacy between $\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}}$ and $g_3^n$ in $\pi_1(\Sigma, i_2(s_0))$ for some $n \in \mathbb{Z}$. As before, $len(\Pi_{m=1}^r (g_1 g_3)^{k_{2m-1}} (g_2 g_1^{-1} g_2^{-1}g_3)^{k_{2m}}) > 1$ and $len(g_3^n) = 1$, but $len$ is conjugation-invariant, so this is a contradiction. All cases lead to a contradiction; therefore $z \in i_2(C)$. \end{proof} \section{Incompressibility in genus 3} \label{sec:incompressibility} This section describes an embedding $i_3$ of an egg-beater-like surface $E = C \bigsqcup C_1 \bigsqcup C_2 \bigsqcup C_3$ into a closed orientable surface $\Sigma_3$ of genus 3 such that $i_3 \restriction_C$ is incompressible, i.e. $(i_3 \restriction_C)_*: \pi_1(C) \to \pi_1(\Sigma_3)$ and $\tau_{i_3 \restriction_C}: \pi_0(\mathscr{L}C) \to \pi_0(\mathscr{L}\Sigma_3)$ are both injective. This incompressibility will imply Theorems~\ref{thm:2},~\ref{thm:3} by following the original proofs in~\cite{PS14} and~\cite{10}. Notation from previous sections is used in this section. \subsection{The egg-beater surface and map, and the embedding} Recall the egg-beater surface $D = C_V \bigcup_{VH_{0,1}} C_H \bigsqcup C_1 \bigsqcup C_2$ defined in Section~\ref{sec:3}, and the cylinder $C_* = [-1,1] \times \mathbb{R} / L\mathbb{Z}$. Let $C_3$ be another copy of $C_*$, and define $E = D \bigsqcup C_3$, with the symplectic form $dx \wedge dy$ on every component. Denote by $c_3: C_* \to E, e_3: D \to E$ the symplectic inclusions. The embedding $i_3: E \hookrightarrow \Sigma_3$ is defined in two parts. First, embed $C$ symplectically into $S^2$. Note that $C$ separates the sphere into 4 connected components. Choose any one of these components, and connect it with handles to the other 3 components, with one handle each. The result is a genus 3 surface $\Sigma_3$, and this defines $i_3 \restriction_C$. Define $i_3$ on $C_1 \bigsqcup C_2 \bigsqcup C_3$ by symplectically embedding each of them on a different handle. The embedding $i_3$ can be seen in Figure~\ref{fig:handles_gen_3}, and the generators of $\pi_1(\Sigma_3,i_3(s_0)) = \langle g_1,...,g_6 | [g_1,g_2][g_3,g_4][g_5,g_6] \rangle$ can be seen in Figure~\ref{fig:gens_gen_3}. The orientations of $i_3 \restriction_{C_j}$ for $j=1,2,3$ are chosen such that \begin{myequation} \tau_{e_3 \circ c_1} [t \mapsto (0,Lt)]_{\pi_0(\mathscr{L}C_*)} = \eta_{\Sigma_3,i_3(s_0)}(g_1) ,\\ \tau_{e_3 \circ c_2} [t \mapsto (0,Lt)]_{\pi_0(\mathscr{L}C_*)} = \eta_{\Sigma_3,i_3(s_0)}(g_3) ,\\ \tau_{c_3} [t \mapsto (0,Lt)]_{\pi_0(\mathscr{L}C_*)} = \eta_{\Sigma_3,i_3(s_0)}(g_5) . \end{myequation} The various cylinders, egg-beater surfaces and closed surfaces mentioned in this paper and the embeddings between them are summarized in Figure~\ref{fig:embeddings}.
\begin{figure}[!ht] \centering \adjustbox{scale=1.2,center}{ \begin{tikzcd} & C \arrow[r,"i"] \arrow[d,"e_2"] & \Sigma_4 \\ C_* \arrow[ru,"c_V" near start,"c_H" near end] \arrow[r,"c_1" near start,"c_2" near end]\arrow[rd,"c_3"] & D \arrow[r,"i_2"] \arrow[d,"e_3"] & \Sigma \\ & E \arrow[r,"i_3"] & \Sigma_3 \end{tikzcd} } \caption{Embeddings between the surfaces.} \label{fig:embeddings} \end{figure} \begin{figure}[!ht] \centering \hspace{-0.05\textwidth}% \begin{minipage}{.5\textwidth} \centering \scalebox{0.8}{ \begin{tikzpicture} \draw[gray,ultra thin] (-1,0) circle (3); \draw[gray,ultra thin] (-1,0) circle (2.5); \draw (-2, -3.7) node[anchor=south] {\Large $i_3(C_V)$}; \draw[gray,ultra thin] (1,0) circle (3); \draw[gray,ultra thin] (1,0) circle (2.5); \draw (2, -3.7) node[anchor=south] {\Large $i_3(C_H)$}; \draw[gray, dashed] (1.3,-0.2) ++ (80:3.7) arc (80:-80:3.7) -- ++(-4,0) arc (260:100:3.7) -- ++(4,0); \draw (-4,2.8) node[anchor=south] {\Large $\tilde{\alpha}$}; \filldraw[fill=white,draw=black,thick] (-1.5, 3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (0, 5) circle (0.3); \filldraw[fill=white,draw=black,thick] (0, 1) circle (0.3); \draw[gray, ultra thin] (-1.5, 3) ++(-20:2.5) ++(-20:0.3) arc (-20:-200:0.3); \draw[gray, dashed] (-1.5, 3) ++(-20:2.5) ++(160:0.3) arc (160:-20:0.3); \draw[gray, ultra thin] (-1.5, 3) ++(-22:2.5) ++(-22:0.3) arc (-22:-202:0.3); \draw[gray, dashed] (-1.5, 3) ++(-22:2.5) ++(158:0.3) arc (158:-22:0.3); \draw (1.2, 2) node[anchor=west] {\Large $i_3(C_2)$}; \filldraw[fill=white,draw=black,thick] (-4.2, 3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (-2.7, 5) circle (0.3); \filldraw[fill=white,draw=black,thick] (-2.7, 1) circle (0.3); \draw[gray, ultra thin] (-4.2, 3) ++(-20:2.5) ++(-20:0.3) arc (-20:-200:0.3); \draw[gray, dashed] (-4.2, 3) ++(-20:2.5) ++(160:0.3) arc (160:-20:0.3); \draw[gray, ultra thin] (-4.2, 3) ++(-22:2.5) ++(-22:0.3) arc (-22:-202:0.3); \draw[gray, dashed] (-4.2, 3) ++(-22:2.5) ++(158:0.3) arc (158:-22:0.3); \draw (-1.5, 2) node[anchor=west] {\Large $i_3(C_1)$}; \filldraw[fill=white,draw=black,thick] (1.2, 3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (2.7, 5) circle (0.3); \filldraw[fill=white,draw=black,thick] (2.7, 1) circle (0.3); \draw[gray, ultra thin] (1.2, 3) ++(-20:2.5) ++(-20:0.3) arc (-20:-200:0.3); \draw[gray, dashed] (1.2, 3) ++(-20:2.5) ++(160:0.3) arc (160:-20:0.3); \draw[gray, ultra thin] (1.2, 3) ++(-22:2.5) ++(-22:0.3) arc (-22:-202:0.3); \draw[gray, dashed] (1.2, 3) ++(-22:2.5) ++(158:0.3) arc (158:-22:0.3); \draw (3.9, 2) node[anchor=west] {\Large $i_3(C_3)$}; \filldraw[black] (0,-2.55) circle (2pt); \draw (0,-2.8) node[anchor=north] {\Large $i_3(s_0)$}; \end{tikzpicture}} \caption{The embedding $i_3$ and a loop $\tilde{\alpha}$; this figure is embedded in $S^2$.} \label{fig:handles_gen_3} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \pgfdeclarelayer{bg} \pgfsetlayers{bg,main} \scalebox{0.8}{ \begin{tikzpicture} \filldraw[fill=white,draw=black,thick] (-1.5, 3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (0, 5) circle (0.3); \filldraw[fill=white,draw=black,thick] (0, 1) circle (0.3); \filldraw[fill=white,draw=black,thick] (-4.2, 3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (-2.7, 5) 
circle (0.3); \filldraw[fill=white,draw=black,thick] (-2.7, 1) circle (0.3); \filldraw[fill=white,draw=black,thick] (1.2, 3) ++(54:2.8) arc (54:-54:2.8) -- ++(126:0.6) arc (-54:54:2.2) -- cycle; \filldraw[fill=white,draw=black,thick] (2.7, 5) circle (0.3); \filldraw[fill=white,draw=black,thick] (2.7, 1) circle (0.3); \filldraw[black] (0,-2.55) circle (2pt); \draw (0,-2.8) node[anchor=north] {\Large $i_3(s_0)$}; \node (origin) at (0,-2.55) {}; \begin{scope} [gray,dashed,decoration={markings,mark=at position 0.7 with {\arrow[scale=2]{>}}}] \draw (origin) to[out=130,in=270] (-1,3) to[out=90,in=145,distance=100] (-2.7,5) arc (54:-54:2.5) to[out=226,in=160] (origin) decorate{(-2.7,5) arc (54:-54:2.5) to[out=226,in=160] (origin)}; \draw (-1.9,-0.7) node[anchor=east] {\Large $g_2$}; \draw (origin) to[out=55,in=270] (1.5,3) to[out=90,in=145,distance=100] (0,5) arc (54:-54:2.5) to[out=226,in=90] (origin) decorate{(0,5) arc (54:-54:2.5) to[out=226,in=90] (origin)}; \draw (0,0) node {\Large $g_4$}; \draw (origin) to[out=0,in=270] (5,3) to[out=90,in=145,distance=100] (2.7,5) arc (54:-54:2.5) to[out=226,in=45] (2,-1) to[out=225,in=30] (origin) decorate{(2.7,5) arc (54:-54:2.5) to[out=226,in=45] (2,-1) to[out=225,in=30] (origin)}; \draw (2.4,-1) node {\Large $g_6$}; \end{scope} \begin{pgfonlayer}{bg} \begin{scope} [gray,decoration={markings,mark=at position 0.4 with {\arrow[scale=2]{>}}}] \draw[postaction={decorate}] (origin) to[out=180,in=290] (-4,0) to[out=110,in=90,distance=50] (-2,1) to[out=270,in=140] (origin); \draw (-3.5,0) node {\Large $g_1$}; \draw[postaction={decorate}] (origin) to[out=120,in=250] (-0.5,1) to[out=70,in=90,distance=50] (1,1) to[out=270,in=70] (origin); \draw (-0.5,1.8) node {\Large $g_3$}; \draw[postaction={decorate}] (origin) to[out=40,in=290] (2,1) to[out=110,in=90,distance=50] (3.5,1) to[out=270,in=0] (origin); \draw (2.5,2) node {\Large $g_5$}; \end{scope} \end{pgfonlayer} \end{tikzpicture}} \caption{The generators of $\pi_1(\Sigma_3,i_3(s_0))$; dashed lines are only for visibility.} \label{fig:gens_gen_3} \end{minipage} \end{figure} As can be seen in Figure~\ref{fig:gens_gen_3}, the induced homomorphism $(i_3)_*: F_3 \to \pi_1(\Sigma_3,i_3(s_0))$ acts on the generators $a,b,c$ of $F_3$ as follows: \begin{myequation} a \mapsto g_1 g_3 ,\\ b \mapsto g_3 g_5 ,\\ c \mapsto g_3 , \end{myequation} where $a,b,c \in \pi_1(C,s_0)$ are as defined in Subsection~\ref{sec:3.1}. The following claim is key in proving Theorems~\ref{thm:2},~\ref{thm:3} in genus 3: \begin{claim} \label{cl:incompressible} The map $i_3 \restriction_C$ is incompressible, i.e. $(i_3 \restriction_C)_*$ and $\tau_{i_3 \restriction_C}$ are injective. \end{claim} Define the egg-beater maps on $E$ as follows: \begin{myequation} \Theta_k: F_2 \to Ham_c(E) ,\\ V \mapsto (e_3 \circ e_2 \circ c_V)_* f_k \bigsqcup -(e_3 \circ c_1)_* f_k \bigsqcup -(e_3 \circ c_2)_* f_k \bigsqcup 0 \restriction_{C_3} ,\\ H \mapsto (e_3 \circ e_2 \circ c_H)_* f_k \bigsqcup 0 \restriction_{C_1} \bigsqcup -(e_3 \circ c_2)_* f_k \bigsqcup -(c_3)_* f_k . \end{myequation} These are Hamiltonian diffeomorphisms: $\Theta_k(V)$ can be seen to be generated by the Hamiltonian \[ (e_3 \circ e_2 \circ c_V)_* h_k \bigsqcup -(e_3 \circ c_1)_* h_k \bigsqcup -(e_3 \circ c_2)_* h_k \bigsqcup 0 \restriction_{C_3} , \] and $\Theta_k(H)$ by the Hamiltonian \[ (e_3 \circ e_2 \circ c_H)_* h_k \bigsqcup 0 \restriction_{C_1} \bigsqcup -(e_3 \circ c_2)_* h_k \bigsqcup -(c_3)_* h_k . \] The egg-beater maps on $\Sigma_3$ are $(i_3)_* \Theta_k(w)$, for $w \in F_2$.
\subsection{Incompressibility of the embedding} The incompressibility of $i_3 \restriction_C$ is shown using a concept similar to the intersection number between curves. The intersection number of free homotopy classes of a surface is the minimum number of intersections between any two loops representing the classes; see Subsection 1.2.3 of~\cite{FM} for a precise definition. In this subsection, the following related definition is used. \begin{definition} Given a surface $M$, a basepoint $p \in M$, and a curve $\tilde{\alpha}: S^1 \to M$, define \begin{myequation} int_{M,p,\tilde{\alpha}}: \pi_1(M,p) \to \mathbb{Z} ,\\ \gamma \mapsto \min_{\tilde{\gamma}} |\tilde{\gamma} \cap \tilde{\alpha}| , \end{myequation} where the minimum runs over all loops $\tilde{\gamma}: S^1 \to M$ such that $\tilde{\gamma}$ is a representative of $\gamma$ with basepoint $p$, and where $|\tilde{\gamma} \cap \tilde{\alpha}|$ denotes the number of intersections of $\tilde{\gamma},\tilde{\alpha}$. \end{definition} A loop $\tilde{\gamma}: S^1 \to M$ based at $p \in M$ is said to be in \textit{minimal position} with respect to $\tilde{\alpha}$ if $|\tilde{\gamma} \cap \tilde{\alpha}| = int_{M,p,\tilde{\alpha}}([\tilde{\gamma}]_{\pi_1(M,p)})$. A loop $\tilde{\gamma}$ in a surface $M$, based at $p$, is said to form a \textit{free bigon} with another loop $\tilde{\alpha}$ (which is not necessarily based at $p$) if there is an embedded closed disk $D \subset M$ such that $\partial D$ is the union of an arc of $\tilde{\alpha}$ and the image under $\tilde{\gamma}$ of an arc of $S^1$ that does not contain 0 (here $S^1 \simeq \mathbb{R} / \mathbb{Z}$). The following claim gives a criterion for being in minimal position with respect to $\tilde{\alpha}$: \begin{claim} \label{cl:bigon} Let $M$ be a closed orientable surface of genus $\geq 2$, $p \in M$, $\tilde{\alpha}: S^1 \to M$ be a loop which does not pass through $p$, and $\tilde{\gamma}: S^1 \to M$ be a loop based at $p \in M$. Then $\tilde{\gamma}$ is in minimal position with respect to $\tilde{\alpha}$ if and only if $\tilde{\gamma}$ does not form a free bigon with $\tilde{\alpha}$. \end{claim} This claim is a variant of the bigon criterion, presented as Proposition~1.7 of~\cite{FM}, which checks whether two free homotopy classes are in minimal position. The proof of Claim~\ref{cl:bigon} is analogous to the proof of Proposition~1.7 of~\cite{FM}, with some extra book-keeping in order to track whether $\tilde{\gamma}$ and $\tilde{\alpha}$ pass through $p$. Using Claim~\ref{cl:bigon}, one can prove Claim~\ref{cl:incompressible}. \begin{proof}[Proof of Claim~\ref{cl:incompressible}] Consider the automorphism $I: F_3 \to F_3$ given by \begin{myequation} a \mapsto ac ,\\ b \mapsto cb ,\\ c \mapsto c , \end{myequation} and denote by $\varphi: F_3 \to \pi_1(\Sigma_3,i_3(s_0))$ the following homomorphism: \begin{myequation} a \mapsto g_1 ,\\ b \mapsto g_5 ,\\ c \mapsto g_3 . \end{myequation} Note that $(i_3 \restriction_C)_* = \varphi \circ I$: indeed, on generators, $\varphi(I(a)) = \varphi(ac) = g_1 g_3$, $\varphi(I(b)) = \varphi(cb) = g_3 g_5$, and $\varphi(I(c)) = \varphi(c) = g_3$, matching the action of $(i_3)_*$ computed above. Since $I$ is an isomorphism, it preserves conjugacy classes, and so $i_3 \restriction_C$ is incompressible if and only if $\varphi$ preserves conjugacy classes, i.e. if and only if for all $x,y \in F_3$ which are not conjugate, $\varphi(x),\varphi(y) \in \pi_1(\Sigma_3,i_3(s_0))$ are also not conjugate. Recall the definitions and the statement of the Conjugacy Theorem for Free Products with Amalgamation, given in Section~\ref{sec:2}. The conjugacy relation, in any group, will again be denoted by $\sim$.
Assume by contradiction that there exist $x,y \in F_3$ such that $x \not\sim y$ and $\varphi(x) \sim \varphi(y)$. In order for the concept of cyclically reduced elements of $F_3$ to be defined, consider $F_3 = \langle a,b,c \rangle$ as the free product $\left\langle R \right\rangle * \left\langle S \right\rangle$, for some proper partition $R \bigsqcup S = \{a,b,c\}$. This partition of $\{a,b,c\}$ gives a well-defined concept of cyclically reduced sequences of elements of $F_3$, and also a well-defined length function $len = len_{R,S} : F_3 \to \mathbb{Z}_{\geq 0}$, defined in Section~\ref{sec:2}. Without loss of generality, assume $x,y$ are cyclically reduced; this means that $\varphi(x), \varphi(y)$ are also cyclically reduced. We will reach a contradiction. \\ First, consider the case where for all partitions $R,S$, $len_{R,S}(x), len_{R,S}(y) < 2$. If $len_{R,S}(x) = 0$ or $len_{R,S}(y) = 0$, then $x = 1$ or $y = 1$ respectively, and therefore $\varphi(x) = 1$ or $\varphi(y) = 1$ respectively. This implies that $\varphi(x) = \varphi(y) = 1$, and by injectivity of $\varphi$, $x = y = 1$, in contradiction. Therefore, we get that for all partitions $R,S$, $len_{R,S}(x) = len_{R,S}(y) = 1$. This implies that there exist $r, s \in \{a,b,c\}$ such that $x \sim r^n, y \sim s^m$ for some $m,n \in \mathbb{Z}$. By passing to the abelianization $\mathbb{Z}^6$ of $\pi_1(\Sigma_3)$, one sees that in fact $r^n = s^m$, so $x \sim y$, in contradiction. Therefore there exists a proper partition $R \bigsqcup S = \{a,b,c\}$ such that $len_{R,S}(x) \geq 2$ or $len_{R,S}(y) \geq 2$. By symmetry between $x$ and $y$ and between $a,b,c$, one may assume that $R = \{a\}, S = \{b,c\}$, and $len_{R,S}(x) \geq 2$. Denote the groups $G = \langle g_1,g_2 \rangle \simeq F_2$, $H = \langle g_3,...,g_6 \rangle \simeq F_4$, subgroups $A = \langle [g_1,g_2] \rangle < G$, $B = \langle [g_3,g_4][g_5,g_6] \rangle < H$, and the isomorphism $\psi: A \xrightarrow{\sim} B : [g_1,g_2] \mapsto ([g_3,g_4][g_5,g_6])^{-1}$. Note that $\pi_1(\Sigma_3) = \langle G * H, A = B, \psi \rangle$ and that $\varphi$ preserves factors, in the sense of Section~\ref{sec:2}. Let $(x_1,...,x_n)$ be a cyclically reduced sequence representing $x$ (i.e. $x = \Pi_{i=1}^n x_i$). Since $\varphi(x) \sim \varphi(y)$, by the Conjugacy Theorem for Free Products with Amalgamation, there exist $\mu \in A, 1 \leq k \leq n$ such that $\varphi(y) = \mu \cdot \Pi_{i=k}^{k-1} \varphi(x_i) \cdot \mu^{-1}$ (where $\Pi_{i=k}^{k-1} \varphi(x_i)$ is shorthand for $\varphi(x_k) \cdot ... \cdot \varphi(x_n) \cdot \varphi(x_1) \cdot ... \cdot \varphi(x_{k-1})$), and since $A = \langle [g_1,g_2] \rangle$, $\varphi(y) = [g_1,g_2]^l \cdot \Pi_{i=k}^{k-1} \varphi(x_i) \cdot [g_1,g_2]^{-l}$ for some $l \in \mathbb{Z}$. If $l = 0$, then $\varphi(y) = \Pi_{i=k}^{k-1} \varphi(x_i)$. Since $\varphi$ is injective, this equation can be pulled back to $F_3$ to get $y = \Pi_{i=k}^{k-1} x_i \sim x$, in contradiction. Thus $l \neq 0$. Consider the loop $\tilde{\alpha}: S^1 \to \Sigma_3$, depicted in Figure~\ref{fig:handles_gen_3}. The image of $\tilde{\alpha}$ does not intersect $i_3(C)$, and its free homotopy class is $[g_1 g_3 g_5]_{\pi_0(\mathscr{L}\Sigma_3)}$.
Since $\Ima(\tilde{\alpha}) \cap i_3(C) = \emptyset$, for any element $\gamma \in \pi_1(C,s_0)$, $(i_3)_*(\gamma)$ has a representative loop in $\Sigma_3$ that does not intersect $\tilde{\alpha}$ (as such a representative can be chosen to be contained in $i_3(C)$), and thus for any element $\gamma \in \pi_1(C,s_0)$, $\varphi(\gamma)$ also has a representative that does not intersect $\tilde{\alpha}$, since $\Ima \varphi = \Ima \ (i_3)_*$. Therefore, $int_{\Sigma_3,i_3(s_0),\tilde{\alpha}}(\varphi(\gamma)) = 0$ for all $\gamma \in \pi_1(C,s_0)$. We will show that $int_{\Sigma_3,i_3(s_0),\tilde{\alpha}}(\varphi(y)) > 0$, in contradiction. \\ Construct the following representative $\tilde{\upsilon}: S^1 \to \Sigma_3$ of $\varphi(y)$. Note that representatives of the element $[g_1,g_2] \in \pi_1(\Sigma_3)$ circle the leftmost handle (i.e. the one that contains $i_3(C_1)$). Take a representative $\tilde{\rho}: S^1 \to \Sigma_3$ of $[g_1,g_2]$, based in $i_3(s_0)$, that is in minimal position with respect to $\tilde{\alpha}$. Note that since $\tilde{\rho}$ circles the leftmost handle, $|\tilde{\rho} \cap \tilde{\alpha}| > 0$. Take a representative $\tilde{\tau}: S^1 \to i_3(C)$ of $\Pi_{i=k}^{k-1} \varphi(x_i)$ that is contained in $i_3(C)$, and denote $\tilde{\upsilon} = (\tilde{\rho})^l \# \tilde{\tau} \# (\tilde{\rho})^{-l}$ (where $\#$ denotes path concatenation). By reparametrization, it can be assumed that $\tilde{\upsilon} \restriction_{[0,\frac13]}, \tilde{\upsilon} \restriction_{[\frac13,\frac23]}, \tilde{\upsilon} \restriction_{[\frac23,1]}$ are parametrizations of $(\tilde{\rho})^l, \tilde{\tau}, (\tilde{\rho})^{-l}$ respectively (i.e. $\tilde{\upsilon}$ passes from $(\tilde{\rho})^l$ to $\tilde{\tau}$ at time $\frac13$, etcetera). If $\tilde{\upsilon}$ forms a free bigon with $\tilde{\alpha}$, the bigon's boundary must be made up of an arc of $\tilde{\alpha}$, which intersects $\tilde{\rho}$ at its endpoints (since $\tilde{\alpha}$ and $\tilde{\tau}$ are disjoint), and an arc $\tilde{\upsilon} \restriction_{[t_1,t_2]}$, where $[t_1,t_2] \subset S^1$ is some interval not containing 0. If $t_1 < \frac13$ and $t_2 > \frac23$, it can be seen that $\tilde{\tau}$ is contractible. This implies that $\Pi_{i=k}^{k-1} \varphi(x_i) = 1$ and therefore $x=1$, in contradiction to $n \geq 2$. Thus $t_1 \geq \frac23$ or $t_2 \leq \frac13$. But this implies that $\tilde{\rho}$ forms a free bigon with $\tilde{\alpha}$, in contradiction to the fact that $\tilde{\rho}$ is in minimal position with respect to $\tilde{\alpha}$. Thus $\tilde{\upsilon}$ cannot form a free bigon with $\tilde{\alpha}$. By Claim~\ref{cl:bigon}, $\tilde{\upsilon}$ is in minimal position with respect to $\tilde{\alpha}$, and therefore $int_{\Sigma_3,i_3(s_0),\tilde{\alpha}}(\varphi(y)) = |\tilde{\upsilon} \cap \tilde{\alpha}| > 0$. This is a contradiction, and completes the proof. \end{proof} \bibliographystyle{plain}
{ "timestamp": "2020-12-17T02:18:04", "yymm": "2012", "arxiv_id": "2012.08930", "language": "en", "url": "https://arxiv.org/abs/2012.08930" }
\section{Introduction} Sentence semantic matching is a fundamental \textit{Natural Language Processing~(NLP)} task that tries to infer the most suitable label for a given sentence pair. For example, Natural Language Inference~(NLI) aims to classify the input sentence pair into one of three relations~(i.e., \textit{Entailment, Contradiction, Neutral})~\cite{Kim2018SemanticSM}. Paraphrase Identification~(PI) aims at identifying whether the input sentence pair expresses the same meaning~\cite{dolan2005automatically}. Figure~\ref{f:example} gives some examples with different semantic relations from different datasets. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{example2.pdf} \caption{Some examples from SNLI and SciTail datasets.} \label{f:example} \end{figure} As a fundamental technology, sentence semantic matching has been applied successfully in many NLP fields, e.g., information retrieval~\cite{Clark2016CombiningRS}, question answering~\cite{liu2018finding}, and dialog systems~\cite{serban2016building}. Currently, most work leverages the advancement of representation learning techniques~\cite{devlin2018bert,vaswani2017attention} to tackle this task. They focus on input sentences and design different architectures to explore sentence semantics comprehensively and precisely. Among all these methods, BERT~\cite{devlin2018bert} plays an important role. It adopts multi-layer transformers to make full use of a large corpus~(i.e., BooksCorpus and English Wikipedia) for the powerful pre-trained model. Meanwhile, two self-supervised learning tasks~(i.e., Masked LM and Next Sentence Prediction) are designed to better analyze sentence semantics and capture as much information as possible. Based on BERT, plenty of work has made a big step in sentence semantic modeling~\cite{liu2019multi,Radford2018ImprovingLU}. In fact, since relations are the prediction targets of the sentence semantic matching task, most methods do not pay enough attention to relation learning. They just leverage annotated labels to represent relations, which are formulated as one-hot vectors. However, these independent and meaningless one-hot vectors cannot reveal the rich semantic information and guidance of relations~\cite{zhang-etal-2018-multi}, which causes an information loss. \citeauthor{gururangan2018annotation}~(\citeyear{gururangan2018annotation}) observed that different relations among sentence pairs imply specific semantic expressions. Taking Figure~\ref{f:example} as an example, most sentence pairs with the ``\textit{contradiction}'' relation contain negation words~(e.g., \textit{nobody, never}). The ``\textit{entailment}'' relation often leads to exact numbers being replaced with approximate terms~(\textit{person, some}). The ``\textit{neutral}'' relation introduces some correct but irrelevant information~(e.g., \textit{absorb carbon dioxide}). Moreover, the expressions of sentence pairs with different relations are very different. Therefore, comparison and contrastive learning among different relations~(e.g., pairwise relation learning) can help models learn more about the semantic information implied in relations, which in turn strengthens their sentence analysis ability. Relations should be treated as more than just meaningless one-hot vectors. One of the solutions for better relation utilization is the embedding method inspired by Word2Vec.
Some researchers try to jointly encode the input sentences and labels in the same embedding space for better relation utilization during sentence semantic modeling~\cite{du2019explicit,wang-etal-2018-joint-embedding}. Despite the progress they have achieved, label embedding methods require more data and parameters to achieve better utilization of relation information. They still cannot fully explore the potential of relations due to the small number of relation categories or the lack of explicit label embedding initialization~\cite{wang-etal-2018-joint-embedding}. To this end, in this paper, we propose a novel \emph{Relation of Relation Learning Network (R$^2$-Net)}~approach to make full use of relation information in a simple but effective way. Concretely, we first utilize pre-trained BERT to model the semantic meanings of the input words and sentences from a global perspective. Then, we develop a CNN-based encoder to obtain partial information~(\textit{keywords and phrases}) of sentences from a local perspective. Next, inspired by the self-supervised learning methods in BERT's training process, we propose a \textbf{R}elation of \textbf{R}elation~(R$^2$) classification task to enhance the ability of \emph{R$^2$-Net}~to learn the implicit common features corresponding to different relations. Moreover, a triplet loss is used to constrain the model, so that intra-class and inter-class relations are analyzed better. Along this line, input sentence pairs with the same relations will be represented closer together, and those with different relations further apart. Relation information is thus properly integrated into the sentence pair modeling process, which helps to tackle the above challenges and improve model performance. Extensive evaluations on two sentence semantic matching tasks (i.e., NLI and PI) demonstrate the effectiveness of our proposed \emph{R$^2$-Net}~and its advantages over state-of-the-art sentence semantic matching baselines. \section{Related Work} In this section, we mainly introduce the related work from two aspects: 1) \textit{Sentence Semantic Matching}, and 2) \textit{Label Embedding for Text Classification}. \subsection{Sentence Semantic Matching} With the development of various neural network technologies such as CNN~\cite{kim2014convolutional} and GRU~\cite{Chung2014EmpiricalEO}, and the growing importance of the attention mechanism~\cite{vaswani2017attention,Parikh2016ADA}, plenty of methods have been developed for sentence semantic matching on large datasets like SNLI~\cite{bowman2015large}, SciTail~(Khot et al.~\citeyear{khot2018scitail}), and Quora~(Iyer et al.~\citeyear{iyer2017first}). Traditionally, researchers try to fully use neural network technologies to model the semantic meanings of sentences in an end-to-end fashion. Among them, CNNs focus on local context extraction with different kernels, and RNNs are mainly utilized to capture sequential information and semantic dependency. For example, \citeauthor{mou2015natural}~(\citeyear{mou2015natural}) employed a tree-based CNN to capture the local context information in sentences. \citeauthor{zhang2018ImageEnhance}~(\citeyear{zhang2018ImageEnhance}) combined CNN and GRU into a hybrid architecture, which utilizes the advantages of both networks. They used CNN to generate phrase-level semantic meanings and GRU to model the word sequence and dependency between sentences.
Recently, attention-based methods have shown very promising results on many NLP tasks, such as machine translation~(Bahdanau et al.~\citeyear{Bahdanau2014NeuralMT}), reading comprehension~\cite{zheng2019human}, and NLI~\cite{bowman2016fast}. Attention helps to extract the most important parts of sentences, capture semantic relations, and align the elements of two sentences properly~\cite{Cho2015DescribingMC,zhang2017context}. It has become an essential component for improving model performance and sentence understanding. Early attempts focus on designing different attention methods that are suitable for specific tasks, like inner-attention~\cite{Liu2016LearningNL}, co-attention~\cite{Kim2018SemanticSM}, and multi-head attention~\cite{shen2017disan}. To fully explore the potential of the attention mechanism, \citeauthor{zhang2019drr}~(\citeyear{zhang2019drr}) proposed a dynamic attention mechanism, which imitates human reading behaviors to select the most important word at each reading step. This method has achieved impressive performance. Another direction is pre-training methods. \citeauthor{devlin2018bert}~(\citeyear{devlin2018bert}) used a very large corpus and multi-layer transformers to obtain a powerful pre-trained BERT model. This method leverages multi-head self-attention to encode sentences and achieves remarkable performance on various NLP tasks. With its powerful representation ability, the pre-trained BERT model has accelerated NLP research. However, most of these methods only focus on the input sentences and treat the labels as meaningless one-hot vectors, which ignores the potential of label information~\cite{zhang-etal-2018-multi}. There still remains plenty of space for further improvement on sentence semantic matching. \subsection{Label Embedding for Text Classification} As an extremely important part of training data, labels contain much implicit information that needs to be explored. In computer vision, researchers have proposed label embedding methods to make full use of label information. However, research on explicit label utilization in NLP is still a relatively new domain. One possible reason is that there are not that many labels in NLP tasks. Thus, label information utilization has only been considered for tasks with a relatively large number of labels or in multi-task learning. For example, \citeauthor{zhang-etal-2018-multi}~(\citeyear{zhang-etal-2018-multi}) proposed a multi-task label embedding method to better extract implicit correlations and common features among related tasks. \citeauthor{du2019explicit}~(\citeyear{du2019explicit}) designed an explicit interaction model to analyze the fine-grained interaction between word representations and label embeddings. They have achieved impressive performance on text classification tasks. In addition, \citeauthor{wang-etal-2018-joint-embedding}~(\citeyear{wang-etal-2018-joint-embedding}) and \citeauthor{pappers2019gile}~(\citeyear{pappers2019gile}) transformed the text classification task into a label-word joint embedding problem. They leveraged the semantic vectors of labels to guide models to select the important and relevant parts of input sentences for better performance. The above work demonstrates the superiority of explicit label utilization and inspires us to make better use of label information.
\begin{figure*} \centering \includegraphics[width=0.86\textwidth]{new_model.pdf} \caption{Architecture of \emph{Relation of Relation Learning Network (R$^2$-Net)}.} \label{f:model} \end{figure*} \section{Problem Statements} \label{s:problem} In this section, we will introduce the definition of the sentence semantic matching task and our proposed relation of relation classification task. \subsection{Sentence Semantic Matching} The sentence semantic matching task can be formulated as a supervised classification. Given two input sentences $\bm{s}^a = \{\bm{x}_1^a, \bm{x}_2^a, ..., \bm{x}_{l_a}^a \}$ and $\bm{s}^b = \{\bm{x}_1^b, \bm{x}_2^b, ..., \bm{x}_{l_b}^b \}$, where $\bm{x}_i^a$ and $\bm{x}_j^b$ are feature tokens of the respective sentences, the goal of this task is to train a classifier $\xi$ which is capable of computing the conditional probability $P(y|\bm{s}^a, \bm{s}^b)$ and predicting the relation for the input sentence pair based on this probability. \begin{equation} \label{eq:task} \begin{split} &P(y|\bm{s}^a, \bm{s}^b)= \xi(\bm{s}^a, \bm{s}^b), \\ &y^{*} =argmax_{y\in\mathcal{Y}}P(y|\bm{s}^a, \bm{s}^b), \end{split} \end{equation} where the true label $y\in\mathcal{Y}$ indicates the semantic relation between the input sentence pair. $\mathcal{Y}=\{entailment, contradiction, neutral\}$ for the NLI task and $\mathcal{Y}=\{Yes, No\}$ for the PI task. \subsection{Relation of Relation Classification} \citeauthor{gururangan2018annotation}~(\citeyear{gururangan2018annotation}) observed that relations can be helpful to reveal some implicit features or patterns for semantic understanding and matching. In order to properly and fully utilize relation information, we propose a \textbf{R}elation of \textbf{R}elation~(R$^2$) classification task to guide models to understand sentence relations more precisely. Given two input sentence pairs $(\bm{s}^a_1, \bm{s}^b_1)$ and $(\bm{s}^a_2, \bm{s}^b_2)$, the goal is to learn a classifying function $\mathcal{F}$ with the ability to identify whether these two input pairs have the same semantic relation: \begin{equation} \label{eq:r2-task} \begin{split} \mathcal{F}((\bm{s}^a_1, \bm{s}^b_1), (\bm{s}^a_2, \bm{s}^b_2)) = \begin{cases} 1, & \mbox{if } y_1 = y_2, \\ 0, & \mbox{if } y_1 \not= y_2, \end{cases} \end{split} \end{equation} where $y_1$ and $y_2$ stand for the semantic relations of the two input sentence pairs, respectively. In order to make full use of relation information and do better sentence semantic matching, the following important questions should be considered: \begin{itemize}\setlength{\itemsep}{0pt} \item Since relations are the prediction targets, how to make full use of relation information to improve model performance properly without leaking it? \item How to integrate the R$^2$ task into the matching task effectively for relation usage and performance improvement? \end{itemize} To this end, we propose \emph{R$^2$-Net}~to properly and fully utilize relation information, and tackle the above issues.
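To make the R$^2$ task concrete, the following minimal sketch (an illustration only; the toy corpus and function names are hypothetical and not part of the original implementation) shows how R$^2$ training instances can be assembled from a labeled dataset, with the binary label given by Equation~\ref{eq:r2-task}:
\begin{verbatim}
import random

# Toy labeled corpus: (sentence_a, sentence_b, relation).
toy_data = [
    ("a man is sleeping", "nobody is sleeping", "contradiction"),
    ("two kids are playing", "some children play", "entailment"),
    ("a dog runs", "a dog runs on the grass", "neutral"),
    ("a woman sings", "a person is singing", "entailment"),
]

def make_r2_instances(data, n_instances, seed=0):
    """Sample pairs of sentence pairs; the R^2 label is 1 iff the two
    sentence pairs carry the same relation (the R^2 task above)."""
    rng = random.Random(seed)
    instances = []
    for _ in range(n_instances):
        (sa1, sb1, y1), (sa2, sb2, y2) = rng.sample(data, 2)
        instances.append(((sa1, sb1), (sa2, sb2), int(y1 == y2)))
    return instances
\end{verbatim}
Note that such instances expose only the same/different signal to the model rather than the relation labels themselves, which is one way to exploit relation information without leaking the prediction target. Next, we will introduce the technical details of \emph{R$^2$-Net}. \section{Relation of Relation Learning Network (R$^2$-Net)} \label{s:model} The overall architecture of \emph{R$^2$-Net}~is shown in Figure~\ref{f:model}(A). To better describe how \emph{R$^2$-Net}~tackles the above tasks and integrates the R$^2$ task to enhance the model's ability on sentence semantic matching, similar to Section~\ref{s:problem}, we also elaborate the technical details from two aspects: 1) Sentence Semantic Matching Part; 2) Relation of Relation Learning Part.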
\subsection{Sentence Semantic Matching Part} \label{s:encode_unit} This part focuses on identifying the most suitable label for a given input sentence pair. Specifically, for an input sentence pair, we first utilize the powerful BERT to generate the sentence semantic representation from a global perspective. Meanwhile, we develop a CNN-based encoder to capture keyword and phrase information from a local perspective. Thus, the input sentence pair can be encoded in a comprehensive manner. Based on the comprehensive representation, we leverage a multi-layer perceptron to predict the corresponding label. \subsubsection{Global Encoding.} By making full use of a large corpus and multi-layer transformers, BERT~\cite{devlin2018bert} has achieved much progress in many NLP tasks. Thus, we select BERT to generate sentence semantic representations for the input. Moreover, inspired by ELMo~\cite{Peters2018DeepCW}, we also use the weighted sum of all the hidden states of words from different transformer layers as the final contextual representations of input words in sentences. Specifically, we first split the input sentence pair $(\bm{s}^a, \bm{s}^b)$ into BPE tokens~(Sennrich et al.~\citeyear{sennrich2015neural}). Then, we concatenate the two sentences into the required format, in which ``[\textit{SEP}]'' is adopted to separate and terminate the two sentences and ``[\textit{CLS}]'' is added at the beginning of the whole sequence. Then, we use multi-layer transformer blocks to obtain the representations of words and sentences in the input. Moreover, as illustrated in Figure~\ref{f:model}(B), suppose there are $L$ layers in BERT. The contextual word representations of the input sentence pair are then a per-layer weighted sum of the transformer block outputs, with weights $\alpha_1, \alpha_2,...,\alpha_L$. \begin{equation} \label{eq:global-encoding} \begin{split} \bm{h}_0^l, \bm{H}^l &= TransformerBlock(\bm{s}^a, \bm{s}^b), \\ \bm{H} &= \sum_{l=1}^L\alpha_l\bm{H}^l, \quad \bm{v}_g = \bm{h}_0^L, \\ \end{split} \end{equation} where $\bm{h}_0^l$ denotes the representation of the first token ``[\textit{CLS}]'' at the $l^{th}$ layer, and $\bm{v}_g$ denotes the global semantic representation of the input. $\bm{H}^l$ represents the sequence features of the whole input. $\alpha_l$ is the weight of the $l^{th}$ layer in BERT and will be learned during model training. \subsubsection{Local Encoding.} The semantic relation within the sentence pair is not only connected with the important words, but also affected by local information~(e.g., phrases and local structure). Though BERT leverages multi-layer transformers to identify the words that are important to the sentence pair, it still has some weaknesses in modeling local information. To alleviate these shortcomings, we develop a CNN-based local encoder to extract the local information from the input. Figure~\ref{f:model}(C) illustrates the structure of this local encoder. The input of this encoder is the output features $\bm{H}$ from global encoding. We use convolution operations with different composite kernels~(e.g., bigram and trigram) to process these features. Each operation with a different kernel is capable of modeling patterns of a different size~(e.g., \textit{new couple, tall person}). Thus, we can obtain robust and abstract local features of the input sentence pair. Next, we leverage average pooling and max pooling to enhance these local features and concatenate them before sending them to a non-linear transformation.
Suppose we have $K$ different kernel sizes; this process can be formulated as follows: \begin{equation} \label{eq:local-encoding} \begin{split} \bm{H}^k &= CNN_k(\bm{H}), \quad k=1,2,...,K,\\ \bm{h}^k_{max} &= max(\bm{H}^k), \bm{h}^k_{avg} = avg(\bm{H}^k), \\ \bm{h}_{concat} &= [\bm{h}^1_{max};\bm{h}^1_{avg};...;\bm{h}^K_{max};\bm{h}^K_{avg}], \\ \bm{v}_l &= ReLu(\bm{W}\bm{h}_{concat} + \bm{b}), \\ \end{split} \end{equation} where $CNN_k$ denotes the convolution operation with the $k^{th}$ kernel. $[\cdot;\cdot]$ is the concatenation operation. $\bm{v}_l$ represents the local semantic representation of the input. $\{\bm{W}, \bm{b}\}$ are trainable parameters. $ReLu(\cdot)$ is the activation function. After obtaining the global representation $\bm{v}_g$ and the local representation $\bm{v}_l$, we investigate different fusion methods to integrate them, including simple concatenation, weighted concatenation, and weighted sum. We find that simple concatenation is flexible and achieves comparable performance without adding more training parameters. Thus, we employ the concatenation $\bm{v} = [\bm{v}_g;\bm{v}_l]$ as the final semantic representation of the input sentence pair. \begin{table*} \centering \caption{Performance (accuracy) of models on different NLI datasets.} \begin{footnotesize} \begin{tabular}{lccc} \hline \textbf{Model} & \textbf{Full test} & \textbf{Hard test} & \textbf{SICK test} \\ \hline (1) CENN~\cite{zhang2017context} & 82.1\% & 60.4\% & 81.8\% \\ (2) CAFE~\cite{Tay2017ACA} & 85.9\% & 66.1\% & 86.1\% \\ (3) Gumbel TreeLSTM~\cite{choi2018learning} & 86.0\% & 66.7\% & 85.8\%\\ (4) Distance-based SAN~\cite{im2017distance} & 86.3\% & 67.4\% & 86.7\% \\ (5) DRCN~\cite{Kim2018SemanticSM} & 86.5\%& 68.3\% & 87.4\% \\ \hline (6) DRr-Net~\cite{zhang2019drr} & 87.5\% & 71.2\% & 87.8\% \\ (7) Dynamic Self-Attention~\cite{yoon2018dynamic} & 87.4\% & 71.5\% & 87.7\% \\ (8) Bert-base~\cite{devlin2018bert} & 90.3\% & 80.8\% & 88.5\% \\ \hline (9) \emph{R$^2$-Net} & \textbf{91.1\%} & \textbf{81.0\%} & \textbf{89.2\%} \\ \hline \end{tabular} \end{footnotesize} \label{t:snli-result} \end{table*} \subsubsection{Label Prediction.} This component is adopted to predict the label of the input sentence pair, which is an essential part of traditional sentence semantic matching methods. To be specific, the input of this component is the semantic representation $\bm{v}$. We leverage a two-layer MLP to make the final classification, which can be formulated as follows: \begin{equation} \label{eq:label-prediction} \begin{split} P(y|(\bm{s}^a, \bm{s}^b)) = MLP_1(\bm{v}). \\ \end{split} \end{equation} \subsection{Relation of Relation Learning Part} \label{s:predict_unit} This part aims at properly and fully using the relation information of input sentence pairs to enhance the model performance on sentence semantic matching. In order to achieve this goal, we employ two critical modules that analyze the \textit{pairwise relation} and the \textit{triplet-based relation} simultaneously. Next, we describe each module in detail. \subsubsection{Relation of Relation Classification. } \label{s:r2_unit} Inspired by the self-supervised learning tasks in BERT, we intend \emph{R$^2$-Net}~to make full use of the relation information among input sentence pairs in a similar way. Therefore, we introduce the R$^2$ classification task into sentence semantic matching.
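Before detailing these modules, we give a minimal PyTorch sketch of the encoding pipeline from Equations~\eqref{eq:global-encoding} and~\eqref{eq:local-encoding}. It is our own condensed illustration under simplifying assumptions; the hidden sizes, channel counts, and all module names are ours, not the original implementation:

\begin{verbatim}
import torch
import torch.nn as nn

class LocalEncoder(nn.Module):
    """CNN-based local encoder: one Conv1d per kernel size,
    max + avg pooling, concatenation, non-linear projection."""
    def __init__(self, hidden=768, channels=256, kernels=(1, 2, 3), out=300):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, channels, k, padding=k // 2) for k in kernels)
        self.proj = nn.Linear(2 * channels * len(kernels), out)

    def forward(self, H):          # H: (batch, seq_len, hidden) from BERT
        H = H.transpose(1, 2)      # Conv1d expects (batch, channels, length)
        feats = []
        for conv in self.convs:
            Hk = conv(H)           # n-gram features for one kernel size
            feats += [Hk.max(dim=2).values, Hk.mean(dim=2)]
        return torch.relu(self.proj(torch.cat(feats, dim=1)))   # v_l

# Per-layer weighted sum of the BERT hidden states;
# alpha holds one learnable weight per transformer layer.
alpha = nn.Parameter(torch.ones(12) / 12)
def weighted_sum(all_hidden):      # list of L tensors (batch, seq, hidden)
    return sum(alpha[l] * h for l, h in enumerate(all_hidden))
\end{verbatim}

The final pair representation is then the concatenation $\bm{v} = [\bm{v}_g; \bm{v}_l]$ of the ``[\textit{CLS}]'' vector and the local encoder output, exactly as described above.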
Instead of just identifying the most suitable relation of the input sentence pairs, we aim to obtain more knowledge about the input by analyzing the \textit{pairwise relation} between the semantic representations ($\bm{v}_1$ for pair $(\bm{s}^a_1, \bm{s}^b_1)$, and $\bm{v}_2$ for pair $(\bm{s}^a_2, \bm{s}^b_2)$). Since a learnable nonlinear transformation between the representations and the loss substantially improves model performance~\cite{Chen2020ASF}, we first transform $\bm{v}_1$ and $\bm{v}_2$ with a nonlinear transformation. Then, we leverage heuristic matching~\cite{Chen-Qian2017ACL} to model the similarity and difference between $\bm{v}_1$ and $\bm{v}_2$. Next, we send the result $\bm{u}$ to an MLP with one hidden layer for the final classification. This process is formulated as follows: \begin{equation} \label{eq:r2-prediction} \begin{split} &\bar{\bm{v}}_1 = ReLu(\bm{W}_r\bm{v}_1 + \bm{b}_r), \\ &\bar{\bm{v}}_2 = ReLu(\bm{W}_r\bm{v}_2 + \bm{b}_r), \\ &\bm{u} = [\bar{\bm{v}}_1; \bar{\bm{v}}_2; (\bar{\bm{v}}_1 \odot \bar{\bm{v}}_2); (\bar{\bm{v}}_1 - \bar{\bm{v}}_2)], \\ &P(\hat{y}|(\bm{s}^a_1, \bm{s}^b_1), (\bm{s}^a_2, \bm{s}^b_2)) = MLP_2(\bm{u}), \\ \end{split} \end{equation} where the concatenation retains all the information~\cite{zhang2017context}, the element-wise product is a certain measure of the ``similarity'' of two sentences~\cite{mou2016natural}, and their difference can capture the degree of distributional inclusion in each dimension~\cite{weeds2014learning}. $\hat{y}\in\{1, 0\}$ indicates whether the two input sentence pairs have the same relation. \subsubsection{Triplet Distance Calculation. } Apart from leveraging the R$^2$ classification task to learn pairwise relation information, we also intend to learn intra-class and inter-class information from the \textit{triplet-based relation}. Thus, we introduce a triplet loss~(Schroff et al.~\citeyear{Schroff2015FaceNetAU}) into \emph{R$^2$-Net}. As a fundamental similarity function, the triplet loss is widely applied in the information retrieval area~\cite{Liu2010LearningTR}; it reduces the distance between input pairs with the same relation and increases the distance between those with different relations. Therefore, we first calculate the corresponding distances in this module. To be specific, the inputs of this component are three semantic representations: $\bm{v}_a$ for the anchor pair $(\bm{s}^a_a, \bm{s}^b_a)$, $\bm{v}_p$ for the positive pair $(\bm{s}^a_p, \bm{s}^b_p)$, and $\bm{v}_n$ for the negative pair $(\bm{s}^a_n, \bm{s}^b_n)$. In order to obtain better results, we first transform them into a common space with a fully connected layer~\cite{Chen2020ASF}. Then, we calculate the distance between the anchor and positive pairs, and the distance between the anchor and negative pairs, respectively. This process is formulated as follows. \begin{equation} \label{eq:distance-prediction} \begin{split} &\bar{\bm{v}}_a = ReLu(\bm{W}_d\bm{v}_a + \bm{b}_d), \\ &\bar{\bm{v}}_p = ReLu(\bm{W}_d\bm{v}_p + \bm{b}_d), \\ &\bar{\bm{v}}_n = ReLu(\bm{W}_d\bm{v}_n + \bm{b}_d), \\ & d_{ap} = Dist(\bar{\bm{v}}_a, \bar{\bm{v}}_p), \quad d_{an} = Dist(\bar{\bm{v}}_a, \bar{\bm{v}}_n), \\ \end{split} \end{equation} where $\{\bm{W}_d, \bm{b}_d\}$ are trainable parameters and $Dist(\cdot)$ is the distance calculation function. \section{Experiments} \label{s:experiment} In this section, the details of the model implementation are presented first. Then, the five benchmark datasets on which the model is evaluated are introduced.
Finally, a detailed analysis of the model and the experimental results is provided. \begin{table} \centering \caption{Experimental Results~(accuracy) on SciTail dataset.} \begin{footnotesize} \begin{tabular}{lc} \hline \textbf{Model} & \textbf{SciTail test}\\ \hline (1) CAFE~\shortcite{Tay2017ACA} & 83.3\% \\ (2) ConSeqNet~\shortcite{Wang2018ImprovingNL} & 85.2\% \\ (3) BiLSTM Max-Out~\shortcite{Mihaylov2018CanAS} & 85.4\% \\ (4) HBMP~(Talman et al.~\citeyear{talman2018natural}) & 86.0\% \\ (5) DRr-Net~\shortcite{zhang2019drr} & 87.4\% \\ \hline (6) Transformer LM~\shortcite{Radford2018ImprovingLU} & 88.3\% \\ (7) Bert-base~\shortcite{devlin2018bert} & 92.0\% \\ \hline (8) \emph{R$^2$-Net} & \textbf{92.9\%} \\ \hline \end{tabular} \end{footnotesize} \label{t:scitail-result} \end{table} \subsection{Training Details} \subsubsection{Loss Function.} As mentioned in Section~\ref{s:problem}, both the sentence semantic matching task and the R$^2$ task can be treated as classification tasks. Thus, we employ the \textit{Cross-Entropy} loss for each input as follows: \begin{equation} \label{eq:loss_classify} \begin{split} L_{s} &= -\bm{y}_i \mathrm{log} P(y_i | (\bm{s}^a_i, \bm{s}^b_i)), \\ L_{R^2} & = -\hat{\bm{y}}_{i} \mathrm{log} P(\hat{y}|((\bm{s}^a_1, \bm{s}^b_1), (\bm{s}^a_2, \bm{s}^b_2))_i), \\ \end{split} \end{equation} where $\bm{y}_i$ is the one-hot vector of the true label of the $i^{th}$ instance and $\hat{\bm{y}}_{i}$ is the one-hot vector of the true relation-of-relation label of the $i^{th}$ instance pair. Moreover, in order to learn more from relations and achieve better performance, we also introduce the triplet loss to force \emph{R$^2$-Net}~to better analyze the intra-class and inter-class information among sentence pairs with the same or different relations: \begin{equation} \label{eq:loss_dist} \begin{split} L_{d} = max((d_{ap}-d_{an}+\alpha)_i, 0), \\ \end{split} \end{equation} where $\alpha$ is the margin and $(\cdot)_i$ denotes the $i^{th}$ triplet. Since these three loss functions require different numbers of inputs, we modify the input of \emph{R$^2$-Net}~to consist of three sentence pairs~(i.e., an anchor pair, a positive pair, and a negative pair), as shown in Figure~\ref{f:model}(A). Then, we calculate $L_{s}^1,L_{s}^2,L_{s}^3$ as the label prediction loss of each input pair, randomly sample two groups from the input to calculate $L_{R^2}^1, L_{R^2}^2$ as the R$^2$ task loss, and use the three input pairs to calculate $L_{d}$ as the triplet loss. Finally, we treat the weighted sum of these losses with a hyper-parameter $\beta$ as the loss function of the entire model: \begin{equation} \label{eq:loss} \begin{split} L = \frac{1}{N}\sum_{i=1}^{N}\left(\beta\frac{L_{s}^1+L_{s}^2+L_{s}^3}{3} + (1-\beta)\left(\frac{L_{R^2}^1+L_{R^2}^2}{2}+ L_{d}\right)\right). \end{split} \end{equation}
\begin{table*} \centering \caption{Experimental Results~(accuracy) on Quora and MSRP datasets.} \begin{footnotesize} \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{Quora test} & \textbf{MSRP test} \\ \hline (1) CENN~\cite{zhang2017context} & 80.7\% & 76.4\%\\ (2) L.D.C~\cite{Wang2016SentenceSL} & 85.6\% & 78.4\%\\ (3) REL-TK~(Filice et al.~\citeyear{filice2015structural}) & - & 79.1\% \\ (4) BiMPM~\cite{Wang2017BilateralMM} & 88.2\% & -\\ (5) pt-DecAtt~(char)~\cite{Tomar2017NeuralPI} & 88.4\% & -\\ (6) DIIN~\cite{gong2017natural} & 89.1\% & -\\ \hline (7) DRr-Net~\cite{zhang2019drr} & 89.8\% & 82.9\% \\ (8) DRCN~\cite{Kim2018SemanticSM} & 90.2\% & 82.5\% \\ (9) BERT-base~\cite{devlin2018bert} & 91.0\% & 84.2\%\\ \hline (10) \emph{R$^2$-Net} & \textbf{91.6\%} & \textbf{84.3\%} \\ \hline \end{tabular} \end{footnotesize} \label{t:pi-result} \end{table*} \subsubsection{Model Implementation. } We tuned the hyper-parameters on the validation sets for the best performance and used early stopping to select the best model. Since \emph{R$^2$-Net}~has different hyper-parameter settings on different datasets, we list the common hyper-parameters as follows. We apply BERT-base with 12 layers, a hidden size of 768, and 12 attention heads. The kernel sizes of the CNN in the local encoding unit are $d_k=1, 2, 3$. The hidden state size of the MLPs in \emph{R$^2$-Net}~is $d_m=300$. The distance used in the distance calculation component is the \textit{Euclidean distance}. The margin in the triplet loss is $\alpha=0.2$. For the pre-trained BERT, we set the learning rate to $10^{-5}$ and use AdamW to fine-tune the parameters. For the remaining parameters, we set the initial learning rate to $10^{-3}$ and decrease it during model training. An Adam optimizer with $\beta_1 =0.9$ and $\beta_2 = 0.999$ is adopted to optimize these parameters. The entire model is implemented with PyTorch and Transformers\footnote{https://github.com/huggingface/transformers}, and is trained on two Nvidia Tesla V100-SXM2-32GB GPUs. \subsection{Data Description} \label{s:data} In this section, we give a brief introduction of the datasets on which we evaluate all models. They are as follows: \begin{itemize} \item \textbf{SNLI:} The SNLI dataset~\cite{bowman2015large} contains $570,152$ human-annotated sentence pairs. The premise sentences are drawn from the captions of the Flickr30k corpus~\cite{young2014image}, and the hypothesis sentences are manually composed. Besides the original test set, we also select the challenging hard subset~\cite{gururangan2018annotation} to evaluate the models. \item \textbf{SICK:} The SICK dataset~\cite{marelli2014semeval} contains $10,000$ English sentence pairs, generated from the 8K ImageFlickr dataset~(Hodosh et al.~\citeyear{hodosh2013framing}) and the STS MSR-video description dataset\footnote{https://www.cs.york.ac.uk/semeval-2012/}. Each sentence pair is generated from randomly selected subsets of the above sources and manually labeled with the same label set as SNLI. \item \textbf{SciTail}: The SciTail dataset~(Khot et al.~\citeyear{khot2018scitail}) is created from multiple-choice science exams and web sentences. It has $27,026$ examples with $10,101$ \textit{Entailment} examples and $16,925$ \textit{Neutral} examples. \item \textbf{Quora}: The Quora dataset~(Iyer et al.~\citeyear{iyer2017first}) contains over $400,000$ potential question duplicate pairs, which are drawn from the Quora website\footnote{https://www.quora.com/}.
This dataset has balanced positive and negative labels, each indicating whether a pair is a true duplicate. \item \textbf{MSRP}: The MSRP dataset~\cite{dolan2005automatically} consists of $5,801$ sentence pairs with binary labels. The sentences are distilled from a database of $13,127,938$ sentence pairs, extracted from $9,516,684$ sentences in $32,408$ news clusters from the web. \end{itemize} \begin{figure*} \centering \includegraphics[width=0.84\textwidth]{representation_case1.pdf} \caption{Visualization of representation $\bm{v}$ from \emph{R$^2$-Net}~and BERT-base models.} \label{f:case} \end{figure*} \subsection{Experimental Results} In this section, we give a detailed analysis of the models and the experimental results. Note that we use \textit{accuracy} on the different test sets to evaluate model performance. \subsubsection{Performance on NLI task.} We compare our proposed \emph{R$^2$-Net}~to several published state-of-the-art baselines on different NLI datasets. All results are summarized in Table~\ref{t:snli-result} and Table~\ref{t:scitail-result}. Several observations are presented as follows. \begin{itemize} \item It is clear that \emph{R$^2$-Net}~achieves highly competitive performance on all datasets: SNLI, SICK, and SciTail. Specifically, \emph{R$^2$-Net}~fully uses BERT and the CNN-based encoder to obtain a comprehensive understanding of sentence semantics from the global and local perspectives. This is one of the reasons why \emph{R$^2$-Net}~outperforms the BERT-free baselines by a large margin. Another important reason is that \emph{R$^2$-Net}~employs the R$^2$ task and the triplet loss to make full use of relation information. Along this line, \emph{R$^2$-Net}~is capable of obtaining intra-class and inter-class knowledge among sentence pairs with the same or different relations. Thus, it achieves better performance than all baselines, including the BERT-base model. \item \emph{R$^2$-Net}~shows more stable performance on the challenging NLI hard test, in which pairs with obvious identical words are removed~\cite{gururangan2018annotation}. Despite the removal of these obvious indicators, implicit patterns for the relations among sentence pairs remain. By considering the R$^2$ task and the triplet loss, \emph{R$^2$-Net}~is able to fully use relation information and capture this implicit information, which leads to better performance. \item The BERT-base model~\cite{devlin2018bert} outperforms the other BERT-free baselines by a large margin. The main reasons are twofold. First, BERT takes advantage of multi-layer transformers to learn sentence patterns and sentence semantics on a large corpus. Second, BERT adopts two self-supervised learning tasks~(i.e., MLM and NSP) to better analyze the important words within a sentence and the semantic connection between sentences. However, BERT still focuses on the input sequence itself, underestimating the rich semantic information that relations imply. Therefore, its performance is not as good as that of \emph{R$^2$-Net}. \item Among the BERT-free baselines, DRr-Net~\cite{zhang2019drr} and Dynamic Self-Attention~\cite{yoon2018dynamic} achieve impressive performance. First, their performance demonstrates that multi-layer structures and CNNs are well suited to model sentence semantics from the global and local perspectives. Moreover, both develop a dynamic attention mechanism to improve the self-attention mechanism.
However, their encoding capability for extracting features and generating semantic representations is still not comparable to that of BERT. This observation suggests that using the powerful BERT as the basic encoder is a better choice. \end{itemize} \begin{table} \centering \caption{Ablation performance (accuracy) of \emph{R$^2$-Net}.} \begin{footnotesize} \begin{tabular}{lcc} \hline \textbf{Model} & \textbf{SNLI test} & \textbf{SciTail test} \\ \hline (1)~Bert-base & 90.3\% & 92.0\% \\ \hline (2)~\emph{R$^2$-Net}~(w/o local encoder) & 90.7\% & 92.6\% \\ (3)~\emph{R$^2$-Net}~(w/o R$^2$ task learning) & 90.5\% & 92.3\% \\ (4)~\emph{R$^2$-Net}~(w/o triplet loss) & 90.9\% & 92.6\% \\ \hline (5)~\emph{R$^2$-Net}~ & \textbf{91.1}\% & \textbf{92.9}\% \\ \hline \end{tabular} \end{footnotesize} \label{t:ablation-result} \end{table} \subsubsection{Performance on PI task.} Apart from the NLI task, we also select the PI task to evaluate model performance. The PI task concerns whether two sentences express the same meaning and has broad applications in question answering communities\footnote{https://www.quora.com/}~\footnote{https://www.zhihu.com/}. Table~\ref{t:pi-result} reports the performance of the models on the different datasets. We list our observations as follows: \begin{itemize} \item \emph{R$^2$-Net}~still achieves highly competitive performance compared to the other baselines. The results demonstrate that the R$^2$ task and the triplet loss are effective in helping our proposed model learn more about relations and improve performance, even when the number of relation labels is small. \item Almost all models perform better on the Quora dataset than on the MSRP dataset. One possible reason is that the Quora dataset contains more data than the MSRP dataset~(over 400k sentence pairs vs. 5,801 sentence pairs). In addition to the data size, inter-sentence interaction is probably another reason. Lan et al.~(\citeyear{lan2018neural}) observe that the Quora dataset contains many sentence pairs with less complicated interactions~(many identical words in the sentence pairs). Meanwhile, \emph{R$^2$-Net}~also achieves a larger improvement on the Quora dataset, indicating that more data or a better label utilization method is needed for further performance improvement on the MSRP dataset. \end{itemize} \subsubsection{Ablation Performance.} The overall experiments have demonstrated the superiority of \emph{R$^2$-Net}. However, it is still unclear which part plays the more important role in the performance improvement. Therefore, we perform an ablation study to verify the effectiveness of each part, including the \textit{CNN-based local encoder}, the \textit{R$^2$ task classification}, and the \textit{triplet loss}. The results are illustrated in Table~\ref{t:ablation-result}. Note that we select BERT-base as the baseline to compare the importance of each part. According to the results, we observe varying degrees of performance decline. Among all parts, the R$^2$ task has the biggest impact, and the triplet loss has a relatively small impact on the model performance. These observations indicate that the R$^2$ task is the most important component for relation information utilization. \subsubsection{Case Study. } To provide intuitive examples explaining why our model achieves better performance than the baselines, we sample 700 sentence pairs from the SNLI dataset and send them to the \emph{R$^2$-Net}~and BERT-base models to generate the semantic representations $\bm{v}$. Then, we leverage t-SNE~\cite{maaten2008visualizing} to visualize these representations with the same parameter settings.
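The visualization step itself is standard; the following sketch is ours, with a random placeholder standing in for the exported representations, and shows the kind of script used to produce such a plot:

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder: in practice, V holds the 700 exported representations v
# and y the corresponding gold relation labels.
V = np.random.randn(700, 768)
y = np.random.choice(["entailment", "contradiction", "neutral"], 700)

V2 = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(V)
for label in np.unique(y):
    mask = (y == label)
    plt.scatter(V2[mask, 0], V2[mask, 1], s=8, label=label)
plt.legend()
plt.show()
\end{verbatim}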
Figure~\ref{f:case}(A)-(B) report the results for the \emph{R$^2$-Net}~and BERT-base models, respectively. By comparing the two figures, we can observe that the representations generated by \emph{R$^2$-Net}~have smaller intra-class distances. Moreover, the representations show more obvious distinctions between different classes. These observations not only explain why our proposed \emph{R$^2$-Net}~achieves impressive performance, but also demonstrate that the proper usage of relation information is able to guide models to analyze sentence semantics more comprehensively and precisely, which benefits sentence semantic matching. \section{Conclusion} In this paper, we presented a simple but effective method named \emph{R$^2$-Net}~for sentence semantic matching. This method not only uses the powerful BERT and a CNN to encode sentences from global and local perspectives, but also makes full use of relation information for performance enhancement. Specifically, we design an R$^2$ classification task that helps \emph{R$^2$-Net}~learn implicit common knowledge through the pairwise relation learning process. Moreover, a triplet loss is employed to guide \emph{R$^2$-Net}~toward better triplet-based relation learning and better analysis of intra-class and inter-class information. Extensive experiments on NLI and PI tasks demonstrate the superiority of \emph{R$^2$-Net}. In the future, we plan to combine the advantages of label embedding methods for better sentence semantic comprehension. \section{Acknowledgments} This research was partially supported by grants from the National Natural Science Foundation of China (Grant No. 62006066, 61725203, 61972125, 61732008, and U19A2079), the Major Program of the National Natural Science Foundation of China (91846201), and the Fundamental Research Funds for the Central Universities, HFUT. The authors would like to thank Cui Gongrongxiu for constructive criticism of an earlier draft. \small \bibliographystyle{aaai21}
{ "timestamp": "2020-12-17T02:17:34", "yymm": "2012", "arxiv_id": "2012.08920", "language": "en", "url": "https://arxiv.org/abs/2012.08920" }
\section{Introduction} Within the past decades, high-order discontinuous Galerkin (DG) methods~\cite{reed1973,cockburn2000,warburton1999} have become a popular tool for discretizing systems of partial differential equations (PDE) in the context of fluid flows. DG methods combine advantages of classical finite volume methods (FVM) and finite element methods (FEM), such as conservativity, their local character due to the weak coupling of neighboring cells via cell boundaries, and the applicability to structured and unstructured grids. Furthermore, these properties render DG methods highly attractive for high-performance computing applications~\cite{altmann2013a,dryja2016}. High-order extended discontinuous Galerkin (XDG) methods are a sub-class of DG methods in which the geometry or domain of interest does not conform to the computational grid. The approximation space is enriched with an additional set of degrees of freedom (DOF) in cells which contain a non-smooth or discontinuous solution of the PDE, as is present, for example, at a water bubble in air in the context of multi-phase flows. In extended methods, an interface usually separates the computational domain into two disjoint regions. The idea of extended methods goes back to the XFEM approach by Mo\"{e}s~\cite{moes1999}, who investigated crack propagation in solid mechanics. Adaptations to incompressible multi-phase flows were published, e.g., by Gro{\ss} and Reusken~\cite{gross2007} and Fries~\cite{fries2009}. Bastian and Engwer~\cite{bastian2009} were the first to bring the extended idea to DG methods, focusing on two- and three-dimensional, elliptic, scalar model problems on complex-shaped domains. This and further related works share the idea of using piece-wise planar, triangular sub-cells for the integration of the weak forms in cut-cells. Heimann~\cite{heimann2013} as well as Kummer~\cite{kummer2016} published extensions to incompressible flows based on a (symmetric) interior penalty discretization~\cite{arnold1982}. Kummer additionally employed a cell-agglomeration technique in order to enhance the conditioning of the system matrices, also applying the Hierarchical-Moment-Fitting (HMF) quadrature scheme as proposed by Müller~\cite{muller2013a}. Immersed boundary methods (IBM), originally proposed by Peskin~\cite{peskin1972} in the context of blood flows, summarized in detail by Mittal and Iaccarino~\cite{mittal2005}, and later adapted to DG methods by Fidkowski and Darmofal~\cite{fidkowski2007}, can be seen as a sub-class of extended methods in which the geometry representation and the discretization are separated to a large extent. A DG IBM can be considered as an XDG method where one region of interest describes the fluid phase and the other one represents the geometry, which is void. Further works were published by Qin and Krivodonova~\cite{qin2013}, who presented a DG IBM using explicit time-integration schemes for smooth solutions of the Euler equations on a Cartesian grid. Müller et al.~\cite{muller2017} extended the work by Qin and Krivodonova~\cite{qin2013} to the compressible Navier-Stokes equations by adapting the HMF quadrature scheme~\cite{muller2013a} for the application in compressible flows in combination with a cell-agglomeration strategy. Recently, Geisenhofer et al.~\cite{geisenhofer2019} combined the DG IBM with a shock-capturing strategy based on artificial viscosity for the application in supersonic compressible flows.
Additionally, a local time-stepping scheme was applied to reduce the computational cost, since non-agglomerated cut-cells drastically restrict the maximum admissible time-step size. Discontinuous flow phenomena such as shock waves can arise in supersonic flows. In general, DG methods are susceptible to stability issues which are caused by oscillating polynomial solutions in the vicinity of discontinuities or under-resolved regions of the computational domain~\cite{persson2006,barter2010,klockner2011}. Remedies are classical limiting approaches such as ENO/WENO schemes~\cite{shu1988,shu1989} or, e.g., a posteriori limiting approaches~\cite{giani2014,dumbser2014}. Another class are artificial viscosity methods, which date back to the early work by von Neumann and Richtmyer~\cite{vonneumann1950}. An additional `artificial', second-order, diffusive term smooths discontinuities over a layer of width~$O (h/P)$, which can be adequately resolved by the numerical scheme. Persson and Peraire~\cite{persson2006} proposed a sensor based on the modal decay of the local solution to detect troubled cells and a general scaling for determining the amount of artificial viscosity. Further improvements and applications were published by Barter and Darmofal~\cite{barter2010}, Klöckner et al.~\cite{klockner2011}, and Lv et al.~\cite{lv2016}. However, a clear disadvantage of shock-capturing strategies is that low-order perturbations such as acoustic waves are usually damped out entirely. This remains an issue to tackle in the context of high-order methods~\cite{wang2013}. By contrast, classical shock-fitting approaches, first published by Emmons~\cite{emmons1944,emmons1948} in the 1940s, have in common that one boundary of the computational domain is fitted to the shock front so that only the post-shock region has to be computed. The conditions at the shock are prescribed via the Rankine-Hugoniot conditions. Moretti and Abbett~\cite{moretti1966} were the first to solve the supersonic blunt-body problem with a practical technique in the 1960s. It relies on a mapping of the physical, time-dependent domain onto a rectangular reference domain in the context of a finite difference method (FDM). Additionally, they applied a time-marching technique to compute the steady-state solution. We recommend the work by Salas~\cite{salas2010} for a historical and detailed introduction. Variants of the original approach which are based on Fourier and Chebyshev approximations were developed, e.g., by Zang et al.~\cite{zang1983} and Hussaini et al.~\cite{hussaini1985a}. Combinations were made by Wu and Zhu~\cite{wu1996} and Kopriva~\cite{kopriva1991} in the context of multi-domain methods, where the domain of interest is split into several sub-domains according to a priori knowledge about the flow. Then, the discontinuities are fitted to each sub-domain separately. Lately, Romick and Aslam~\cite{romick2017} applied shock-fitting to detonation problems~\cite{henrick2008,short2008}. A computational boundary was fitted to shock fronts and material interfaces, while solving the two-dimensional reactive Euler equations. Recently, another DG shock-fitting variant was developed by Corrigan et al.~\cite{corrigan2019}, who proposed a moving DG method with interface condition enforcement (MDG-ICE). Their approach relies on a weak formulation which enforces the interface condition separately from the conservation law. The discrete grid geometry is added as an additional variable to the solver.
Their approach detects a priori unknown discontinuities via the interface condition enforcement and satisfies the conservation law by moving the computational grid in every time-step. They showed results with high-order accuracy for steady and unsteady test problems such as a shock tube and a Mach-3 bow shock. The approach of Zahr and Persson~\cite{zahr2018} is similar but conceptually different in the sense that they state an optimization problem based on the variation of the discrete solution from its element-wise average. Additionally, they incorporate a grid distortion metric to penalize oscillating solutions and distorted grids. Zahr and Persson showed high-order convergence rates for the inviscid Burgers' equation as well as for a transonic flow through a nozzle and a supersonic flow around a cylinder. To the best of our knowledge, no efforts have yet been undertaken toward a full high-order XDG method in the context of shock-fitting. We aim at solving compressible flows with high Mach numbers, where a sharp interface description of the shock front is employed to separate the pre- and post-shock states. In contrast to the previously mentioned works in the context of DG methods, a simple Cartesian background grid is employed, reducing the complexity of the grid handling to a minimum, since the shock front is inherently incorporated into the underlying discretization. This work is structured as follows: We state a non-dimensional, conservative form of the Euler equations for inviscid compressible flow in Section~\ref{sec:governing_equations}. In Section~\ref{sec:discretization}, we briefly introduce the employed XDG method in a one-dimensional setting. In Section~\ref{sec:sub-cell_accurate_correction}, we present an implicit pseudo-time-stepping procedure which corrects the shock interface position inside a cut background cell. To achieve this, we apply suitable indicators to determine the shifting direction of the shock interface. The new interface position is obtained by a simple bisection algorithm. We close this work with a summary and an outlook on future work in Section~\ref{sec:conclusion}. \section{Governing Equations} \label{sec:governing_equations} This work deals with supersonic inviscid compressible flow. This kind of flow can be mathematically described by the Euler equations, which we state in a non-dimensional, conservative form in one spatial dimension as \begin{align} \label{eq:eulerEquations} \frac{\partial \vec{U}}{\partial t} + \frac{\partial \vec{F}(\vec U)}{\partial x} = 0\,, \end{align} where $\vec{U}$ denotes the vector of conserved quantities \begin{align} \label{eq:vectorConservedQuantities} \vec{U} = \left( \begin{array}{c} \rho\\ \rho u \\ \rho E \end{array} \right)\,, \end{align} and $\vec{F}$ is the convective flux given by \begin{align} \label{eq:convectiveFluxes} \vec{F} = \frac{1}{\gamma \mathrm{M}_\infty^2}\left( \begin{array}{c} \rho u \\ \rho u^2 + p\\ u (\rho E + p) \end{array} \right) . \end{align} In Equations~\eqref{eq:eulerEquations} to~\eqref{eq:convectiveFluxes}, $t$ is the time, $x$ is the spatial coordinate, $\rho$ is the fluid density, $u$ is the velocity, $\rho E$ is the total energy, $p$ is the pressure, $\gamma$ is the heat capacity ratio, which is $\gamma = 1.4$ for standard air, and $\mathrm{M}_\infty$ is the dimensionless reference Mach number, which we set to $\mathrm{M}_\infty = 1 / \sqrt{\gamma}$.
Consequently, the non-dimensional Euler equations match their dimensional counterpart~\cite{feistauer2003}, which enables the direct use of the initial conditions of any test case without further modifications. The total energy is the sum of the internal energy~$\rho e$ and the kinetic energy, i.e., $\rho E = \rho e + 1/2\, \rho u^2 $. The Euler equations have to be closed by a suitable equation of state for the pressure~$p(\rho, e)$. In this work, we consider a calorically perfect gas, which can be modeled by the ideal gas law \begin{align} \label{eq:ideal_gas_law} p(\rho, e) = (\gamma - 1) \rho e \,. \end{align} The local Mach number is given by the relation~$\mathrm{M} = |u| / a$, where $a = \sqrt{\gamma p / \rho}$ denotes the local speed of sound. \section{Discretization} \label{sec:discretization} In this section, we briefly introduce the spatial discretization of the employed XDG method in one spatial dimension and apply it to the Euler equations~\eqref{eq:eulerEquations} to~\eqref{eq:convectiveFluxes}. In particular, our approach makes use of a sharp interface description by defining interfaces by means of the zero iso-contour of a level-set function. Furthermore, different level-set functions can be applied for the representation of the geometry, e.g., the surface of solid bodies, and of the shock front. The interested reader is referred to the works by Kummer et al.~\cite{kummer2020} and Smuda and Kummer~\cite{smuda2020a} for details about the employed XDG method and its features. The presented methodology is implemented in the open-source software package~\emph{BoSSS}\footnote{\url{https://github.com/FDYdarmstadt/BoSSS}, accessed on 11/24/2020}, which features a variety of applications in the context of boundary-fitted and unfitted DG discretizations for incompressible multi-phase flows~\cite{kummer2016,smuda2020a}, compressible flows~\cite{muller2017,geisenhofer2019}, and particulate flows~\cite{krause2017}. In order to incorporate the sharp interface description into the discretization approach, we introduce a simply connected computational domain $\Omega_h = (x_{\text{left}}, x_{\text{right}}) \subset \mathbb{R}$, which we partition into two disjoint regions, or sub-domains, $\mathfrak{A}$ and~$\mathfrak{B}$. These are separated by the interface~$\mathfrak{I} = \{ x_{\mathfrak{I}} \} $, which corresponds to a single point in the one-dimensional setting. Thus, we obtain the partitioning \begin{equation} \Omega_h = \mathfrak{A} \, \dot \cup \, \mathfrak{I} \, \dot \cup \, \mathfrak{B} = (x_{\text{left}}, x_{\mathfrak{I}} ) \, \dot \cup \, \{ x_{\mathfrak{I}} \} \, \dot \cup \, ( x_{\mathfrak{I}} , x_{\text{right}} )\,. \label{eq:partioning} \end{equation} In the following, we briefly introduce several definitions: \begin{definition}[Basic notations] \begin{itemize} \item We discretize the computational domain $\Omega_h$ into a discrete set of non-overlapping cells~$\mathfrak{K}_h = \{ K_1, \dots, K_J \}$ with $K_j = (x_j, x_{j+1})$ and ~$x_{\text{left}} = x_1 \leq x_2 \leq \ldots \leq x_{J+1} = x_{\text{right}}$. The characteristic length scale is given by~$h = \max_{1 \leq j \leq J} ( x_{j+1} - x_{j} )$. \item The set of all cell boundaries, or cell edges, is given by~$ \Gamma \coloneqq \cup_j \partial K_j \cup \mathfrak{I} = \{ x_1, \ldots, x_{J+1}, x_{\mathfrak{I}} \} $.
It can be split into $\Gamma = \Gamma_\mathrm{int} \cup \Gamma_\mathrm{D} \cup \Gamma_\mathrm{N}$, where $\Gamma_\mathrm{int} = \Gamma \setminus \partial \Omega$ denotes the set of all inner edges, and $\Gamma_\mathrm{D}$ and $\Gamma_\mathrm{N}$ denote the sets of boundary edges with Dirichlet and Neumann boundary conditions, respectively. \item We introduce a field of normal vectors~$n_\Gamma$ on~$\Gamma$, which is simply set to $n_\Gamma = -1$ at $x_{\text{left}}$ and to $n_\Gamma = 1$ everywhere else. This is mainly introduced as a general notation for the two- and three-dimensional setting. \end{itemize} \end{definition} \begin{definition}[Cut-cells and cut-cell grid] \label{def:cut-cell_grid} On the background grid $\mathfrak{K}_h = \{ K_1,\allowbreak \dots, K_J \}$, we define a cut-cell~$K_{j, \mathfrak{s}}$ of some sub-domain~$\mathfrak{s} \in \{ \mathfrak{A}, \mathfrak{B} \}$ as \begin{align} \label{eq:cut_cells} K_{j, \mathfrak{s}} \coloneqq K_j \cap \mathfrak{s}\,. \end{align} The set of all cut-cells builds the cut-cell grid \begin{align} \mathfrak{K}_h^\mathrm{X} = \{ K_{1, \mathfrak{A}}, K_{1, \mathfrak{B}}, \dots, K_{J, \mathfrak{A}}, K_{J, \mathfrak{B}}\}\,. \end{align} A background cell~$K_j$ which is cut by the interface~$\mathfrak{I}$, i.e., $\oint_{K_j \cap \mathfrak{I}} 1 \diff{S} \neq 0$, is divided into two cut-cells~$K_{j, \mathfrak{A}}$ and $K_{j, \mathfrak{B}}$. The cell~$K_j$ from the background grid is recovered if it is entirely contained in one sub-domain, i.e., $K_{j, \mathfrak{A}} = K_j$ with $K_{j, \mathfrak{B}} = \emptyset$, or $K_{j, \mathfrak{B}} = K_j$ with $K_{j, \mathfrak{A}} = \emptyset$. \end{definition} In general, the presented XDG method can be seen as a DG method applied on a cut-cell grid. This motivates the definition of the corresponding polynomial space in the subsequent definition. \begin{definition}[Extended discontinuous Galerkin (XDG) space] We define the broken polynomial, cut-cell XDG space \begin{align} \label{eq:xdg_space} \begin{split} \mathbb{P}_P^\mathrm{X} (\mathfrak{K}_h) \coloneqq \{ f \in L^2 (\Omega); \, \forall K \in \mathfrak{K}_h: f\vert_{K \cap \mathfrak{s}} \textrm{ is polynomial and } \mathrm{deg} (f\vert_{K \cap \mathfrak{s}}) \leq P \}\\ = \mathbb{P}_P (\mathfrak{K}_h^\mathrm{X}) \end{split} \end{align} with a total polynomial degree~$P$. \end{definition} \begin{definition}[Jump operator] Last, we define a jump operator~$\jump{\psi}$ which acts on the set of edges~$\Gamma$ as \begin{align} \jump{\psi} \coloneqq \begin{cases} \psi^- - \psi^+\,, &\qquad x \in \Gamma_\mathrm{int}\, ,\\ \psi^-\,, &\qquad x \in \partial \Omega\, ,\\ \end{cases} \end{align} where $\psi^-$ and $\psi^+$ denote the limits of $\psi$ at $x$ when approaching from $-n_{\Gamma}$ and $+n_{\Gamma}$, respectively. \end{definition} Obviously, for an arbitrarily placed interface, the size of some cut-cells may be arbitrarily small. Especially when one considers higher spatial dimensions, small and ill-shaped cut-cells can significantly limit the global time-step size in the context of explicit time-stepping schemes and increase the condition numbers of the cell-local mass matrices~\cite{muller2017}. A potential remedy is the application of cell-agglomeration techniques, which remove undesired cut-cells from the computational grid. We refer to the works by Kummer et al.~\cite{kummer2020} and Müller et al.~\cite{muller2017} for further details and implementation issues.
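In one spatial dimension, Definition~\ref{def:cut-cell_grid} can be made concrete with a few lines of code. The following Python sketch is our illustration (function and variable names are ours); it splits a background grid at a given interface position and computes the volume fractions used for the well-placedness check discussed next:

\begin{verbatim}
import numpy as np

def cut_cell_fractions(cell_edges, x_interface):
    """Split a 1-D background grid at x_interface.
    Returns |K_{j,A}|, |K_{j,B}|, and the smaller volume fraction
    min(|K_{j,A}|, |K_{j,B}|) / |K_j| per background cell."""
    left, right = cell_edges[:-1], cell_edges[1:]
    lenA = np.clip(x_interface - left, 0.0, right - left)  # |K_{j,A}|
    lenB = (right - left) - lenA                           # |K_{j,B}|
    frac = np.minimum(lenA, lenB) / (right - left)         # 0 in uncut cells
    return lenA, lenB, frac

edges = np.linspace(0.0, 1.0, 11)   # 10 background cells, h = 0.1
lenA, lenB, frac = cut_cell_fractions(edges, 0.57)
print(frac.round(2))   # nonzero only for the cut cell (0.5, 0.6)
\end{verbatim}

For an interface at $x_\mathfrak{I} = 0.57$, the cut cell $(0.5, 0.6)$ is split into sub-cells with volume fractions $0.7$ and $0.3$, i.e., it just satisfies the threshold introduced below.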
For this work, however, \emph{we assume that the interface is well-placed}: This means that the volume fraction of a cut-cell is above a certain threshold, i.e., $ \normVector{K_{j, \mathfrak{s}}} / \normVector{K_j} \geq \delta_\mathrm{agg}$ with the threshold set to $\delta_\mathrm{agg} = 0.3$. The convective fluxes of the Euler equations~\eqref{eq:eulerEquations} to \eqref{eq:convectiveFluxes} are discretized by means of an approximate Riemann solver based on Godunov's method, following Section 4.9 of the textbook~\cite{toro2009} by Toro. Note that classical choices such as the HLLC flux~\cite{toro1994} may fail due to inappropriate wave speed estimates in cases where high-speed flows impinge on solid stationary walls~\cite{toro2009}. In addition to such configurations, we could confirm this effect for pseudo-two-dimensional computations of a stationary normal shock wave which is located on a cell boundary or an interface. In such cases, round-off errors determine the sign of several terms during the flux evaluation and, thus, prohibit a robust and reliable application of the HLLC flux. \section{Sub-Cell Accurate Correction of the Shock Interface Position} \label{sec:sub-cell_accurate_correction} In this section, we present a novel algorithm for the correction of the shock interface position~$x_\mathfrak{I}$ in a cut background cell, i.e., $x_\mathfrak{I} \in K_j$ with~$\oint_{K_j \cap \mathfrak{I}} 1 \diff{S} \neq 0$, see also Definition~\ref{def:cut-cell_grid}. To this end, we assume that a cell-accurate guess for~$x_\mathfrak{I}$ is already known. In Section~\ref{sec:indicators}, we introduce three indicators, which are based on a zeroth-order projection of the local solution and on the normal shock relations, respectively, in order to correct the shock interface position~$x_\mathfrak{I}$. In Section~\ref{sec:implicit_pseudo_ts}, a novel implicit pseudo-time-stepping procedure is derived. This correction procedure is applied until the shock interface position~$x_\mathfrak{I}$ has converged to the exact shock position~$x_\mathrm{s}$, i.e., $\lim_{l \rightarrow \infty} x_{\mathfrak{I}}^l = x_\mathrm{s}$. After the termination of this procedure, the entire flow field can be advanced in time as in a standard DG computation. In particular, we show the results of a one-dimensional proof of concept for a stationary normal shock wave in Section~\ref{sec:implicit_pseudo_ts_proof}. \subsection{Indicators} \label{sec:indicators} Subsequently, we present three different indicators in order to determine the shifting direction of the shock interface~$\mathfrak{I}$ towards the exact shock position~$x_\mathrm{s}$ inside a cut background cell~$K_j$. \subsubsection{\textit{P0}-Indicator} \label{sec:p0_indicator} First, we present an indicator which is based on a zeroth-order projection of the local solution in both cut-cells~$K_{j, \mathfrak{s}} \in \mathfrak{K}_h^\mathrm{X}$ with~$K_{j, \mathfrak{s}} \neq \emptyset$ and~$\mathfrak{s} \in \{\mathfrak{A}, \mathfrak{B}\}$. We denote the local solution by \begin{align} \label{eq:cell_local_sol} \psi_{j, \mathfrak{s}} (x, t) \coloneqq \psi\vert_{K_{j, \mathfrak{s}}} \in \mathbb{P}_P^\mathrm{X} (\{ K_{j, \mathfrak{s}} \})\,. \end{align} If the shock interface position~$x_\mathfrak{I}$ does not coincide with the exact shock position~$x_\mathrm{s}$, i.e., $x_\mathfrak{I} \neq x_\mathrm{s}$, the polynomial solution oscillates so that it is hardly possible to extract reasonable physical information from it.
Thus, we remove the high-order modes by solely extracting the zeroth-order mode \begin{align} \label{eq:zeroth_order_mode} \psi_{j, \mathfrak{s}}^{P0} = \Pi^{P0} (\psi_{j, \mathfrak{s}})\,, \end{align} using the projection operator \begin{equation} \label{eq:p0_proj_op} \begin{aligned} \Pi^{P0} : \mathbb{P}_P^\mathrm{X} (\{ K_{j, \mathfrak{s}} \}) &\rightarrow \mathbb{P}_{P=0}^\mathrm{X} (\{ K_{j, \mathfrak{s}} \})\,, \\ \psi &\mapsto \psi^{P0}\,, \end{aligned} \end{equation} with the essential property~$\langle \psi - \psi^{P0}, \vartheta \rangle = 0$, $\forall \vartheta \in \mathbb{P}_{P=0}^\mathrm{X} (\{ K_{j, \mathfrak{s}} \})$. The $L^2$ scalar product is denoted by~$\langle \cdot \,, \cdot \rangle$. Exemplarily, we show the numerical solution for a stationary normal shock wave with a Mach number of~$\mathrm{M}_\mathrm{s} = 1.5$ located at~$x_\mathrm{s} = 0.55$ in a cut background cell~$K_j = [0.5, 0.6]$ with the sub-domains~$\mathfrak{A}$ and~$\mathfrak{B}$ in Figure~\ref{fig:p0_indicator}. \begin{figure}[tbp] \centering \begin{tikzpicture} \begin{axis}[ xmin=0.45, xmax=0.65, ymin=0.8, ymax=2.2, xlabel=$x$, ylabel=Density~$\rho$, xmajorgrids=true, ymajorgrids=false, no markers, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, width=0.27\textwidth, height=0.27\textwidth ] \draw [fill=lightgray,draw=none] (0.45,0.8) rectangle (0.5,2.2); \draw [fill=lightgray,draw=none] (0.6,0.8) rectangle (0.65,2.2); \pgfplotstableread{XDG_SSW_Initial_Condition.txt}\datafile \addplot+[color=black, dashed] table[skip first n=1] {\datafile}; \addlegendentry{Shock} \pgfplotstableread{1_rho.txt}\datafile \addplot+[solid] table[skip first n=1] {\datafile}; \addlegendentry{$P=2$} \pgfplotstableread{1_rho_p0.txt}\datafile \addplot+[dashdotted] table[skip first n=1] {\datafile}; \addlegendentry{$P0$-projection} \draw[color=red] (axis cs:0.57,0.8) -- (axis cs:0.57,2.2); \node[color=red, anchor=south west] at (axis cs:0.57,0.8) {\small$x_\mathfrak{I}$}; \node[color=black, anchor=south west] at (axis cs:0.451,0.77) {\small$\rho_\mathrm{pre}$}; \node[color=black, anchor=south west] at (axis cs:0.6,1.65) {\small$\rho_\mathrm{post}$}; \node[color=black, anchor=south west] at (axis cs:0.505,1.97) {\small$\mathfrak{A}$}; \node[color=black, anchor=south west] at (axis cs:0.594,1.97) {\small$\mathfrak{B}$}; \end{axis} \end{tikzpicture} \caption{$P0$-projection of the local solution in a cut background cell~$K_j = [0.5,0.6]$. A stationary shock wave with a Mach number of $\mathrm{M}_\mathrm{s} = 1.5$ is located at $x_\mathrm{s} = 0.55$. The solution oscillates, since the shock interface position~$x_\mathfrak{I}$ does not coincide with the exact shock position~$x_\mathrm{s}$, i.e., $x_\mathfrak{I} \neq x_\mathrm{s}$. The $P0$-projection considers only the zeroth-order modes in sub-domain~$\mathfrak{A}$ and $\mathfrak{B}$, respectively. The exact density values~$\rho_\mathrm{pre}$ and $\rho_\mathrm{post}$ can be extracted from the near-band~$\mathfrak{K}_h^\mathrm{near}$ (light gray) for later usage. } \label{fig:p0_indicator} \end{figure} The high-order solution ($P=2$) oscillates, since the shock interface position and the exact shock position do not coincide, i.e., $x_\mathfrak{I} \neq x_\mathrm{s}$. Additionally, we show the corresponding $P0$-projections in both sub-domains~$\mathfrak{A}$ and~$\mathfrak{B}$. 
In general, the projections in~$\mathfrak{A}$ and~$\mathfrak{B}$ are different, i.e., $\psi_{j, \mathfrak{A}}^{P0} \neq \psi_{j, \mathfrak{B}}^{P0}$, since we have a separate set of DOF for each sub-domain. We construct an indicator based on the $P0$-projection~\eqref{eq:p0_proj_op}. The near-band~$\mathfrak{K}_h^\mathrm{near}$ is defined as the set of cells which consists of the neighboring cells of all cut-cells. Obviously, in the one-dimensional case, $\mathfrak{K}_h^\mathrm{near}$ consists of only two cells. In Figure~\ref{fig:p0_indicator}, the exact density values~$\rho_\mathrm{pre}$ and~$\rho_\mathrm{post}$ of the pre- and post-shock region can be extracted from~$\mathfrak{K}_h^\mathrm{near}$. The width of the near-band may be extended in other test cases to extract~$\rho_\mathrm{pre}$ and~$\rho_\mathrm{post}$. Subsequently, we define the local indicator~$\mathcal{I}^{P0}_j (\rho)$ acting on the density~$\rho$ in a cut background cell~$K_j \in \mathfrak{K}_h$ with~$K_{j, \mathfrak{s}} \neq \emptyset$ as \begin{align} \label{eq:p0_indicator} \mathcal{I}^{P0}_j (\rho) = \mathcal{I}^{P0} (\rho) \vert_{K_j} \coloneqq \underbrace{\frac{\rho_\mathrm{pre} + \rho_\mathrm{post}}{2}}_{\substack{\textrm{exact solution in} \\ \textrm{near-band}~\mathfrak{K}_h^\mathrm{near}}} - \underbrace{\frac{\rho_{j, \mathfrak{A}}^\mathrm{P0} + \rho_{j, \mathfrak{B}}^\mathrm{P0}}{2}}_{\substack{P0\textrm{-projections} \\ \textrm{in cut-cells}~K_{j, \mathfrak{s}}}} \,, \qquad \mathfrak{s} \in \{ \mathfrak{A}, \mathfrak{B} \} \,. \end{align} In Equation~\eqref{eq:p0_indicator}, we compare the average value of the exact solution, which we extract from the near-band~$\mathfrak{K}_h^\mathrm{near}$, to the average value of the $P0$-projections in the cut-cells~$K_{j, \mathfrak{A}}$ and~$K_{j, \mathfrak{B}}$. Based on the sign of the indicator~$\mathcal{I}^{P0}_j (\rho)$, the shifting direction of the shock interface~$\mathfrak{I}$ is determined by \begin{subequations} \label{eq:shifting_direction} \begin{align} &\textrm{if } \mathcal{I}^{P0}_j (\rho) > 0 \qquad \Rightarrow \qquad \textrm{shift }\mathfrak{I} \textrm{ to the right}\,,\\ &\textrm{if } \mathcal{I}^{P0}_j (\rho) < 0 \qquad \Rightarrow \qquad \textrm{shift }\mathfrak{I} \textrm{ to the left}\,. \end{align} \end{subequations} In other words, Equation~\eqref{eq:shifting_direction} tells us the following: A value of~$\mathcal{I}^{P0}_j (\rho) < 0 $ indicates that the average value of the $P0$-projections is larger than the exact average value across the shock. Thus, the shock interface~$\mathfrak{I}$ is located too far to the right relative to the exact shock position, i.e., $x_\mathfrak{I} > x_\mathrm{s}$. We obtain the new shock interface position~$x_\mathfrak{I}^{l+1}$ by a bisection algorithm which takes the last two shock interface positions~$x_\mathfrak{I}^{l}$ and~$x_\mathfrak{I}^{l-1}$ into account. During the first iterations, the boundaries~$\partial K_j$ of the cut background cell are considered. We evaluate Equation~\eqref{eq:p0_indicator} for the scenario shown in Figure~\ref{fig:p0_indicator}, which gives an initial value of~$\mathcal{I}^{P0}_j (\rho) = -0.389$. Bisection yields the new shock interface position~$x_\mathfrak{I}^{l+1} = [x(\partial K_j^\mathrm{left}) + x_\mathfrak{I}^l]/2 = [0.5 + 0.57] / 2 = 0.535$.
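The indicator and the bisection update are simple enough to be summarized in a short sketch. The following Python code is our condensed illustration; the quadrature-based averaging stands in for the exact modal projection of the XDG solver:

\begin{verbatim}
import numpy as np

def p0_average(psi, a, b, n=64):
    """P0 mode of psi on (a, b), i.e., its average value there."""
    x = np.linspace(a, b, n)
    return np.trapz(psi(x), x) / (b - a)

def p0_indicator(psi, cell, x_I, rho_pre, rho_post):
    """P0 indicator: exact near-band average minus cut-cell P0 average."""
    a, b = cell
    return 0.5 * (rho_pre + rho_post) \
         - 0.5 * (p0_average(psi, a, x_I) + p0_average(psi, x_I, b))

def bisection_step(I, x_lo, x_hi, x_I):
    """Shift the interface according to the sign of the indicator."""
    if I > 0:                               # shift to the right
        return 0.5 * (x_I + x_hi), x_I, x_hi
    return 0.5 * (x_lo + x_I), x_lo, x_I    # shift to the left

# Worked example from the text: I = -0.389 < 0, interface at 0.57 in
# K_j = (0.5, 0.6)  ->  new position (0.5 + 0.57)/2 = 0.535.
print(bisection_step(-0.389, 0.5, 0.6, 0.57)[0])   # 0.535
\end{verbatim}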
\subsubsection{Indicators Based on the Normal Shock Relations} \label{sec:normal_shock_rel_indicators} We state the one-dimensional normal shock relations~\cite{anderson1990} \begin{subequations} \label{eq:normalShockRelations} \begin{align} \rho_\mathrm{pre} u_{\mathrm{pre}} &= \rho_\mathrm{post} u_{\mathrm{post}} && \textrm{(continuity eq.)}\label{eq:normalShockWaveRelations_rpt_conti}\,,\\ p_\mathrm{pre} + \rho_\mathrm{pre} u_{\mathrm{pre}}^2 &= p_\mathrm{post} + \rho_\mathrm{post} u_{\mathrm{post}}^2 && \textrm{(momentum eq.)}\label{eq:normalShockWaveRelations_rpt_mom}\,, \end{align} \end{subequations} for the definition of two additional indicators. Equation~\eqref{eq:normalShockRelations} has to be fulfilled up to the accuracy of the numerical method if the shock is resolved sharply. Consequently, a deviation from this state motivates the definition of the indicators~$\mathcal{I}^{\rho} (\vec{U})$ and~$\mathcal{I}^{m} (\vec{U})$, which are based on Equations~\eqref{eq:normalShockWaveRelations_rpt_conti} and~\eqref{eq:normalShockWaveRelations_rpt_mom}, respectively. We define the local indicator~$\mathcal{I}^{\rho}_j (\vec{U})$ in a cut background cell~$K_j \in \mathfrak{K}_h$ with $K_{j, \mathfrak{s}} \neq \emptyset$ as \begin{align} \label{eq:rho_indicator} \begin{split} \mathcal{I}^{\rho}_j (\vec{U}) = \mathcal{I}^{\rho} (\vec{U}) \vert_{K_j} &\coloneqq \frac{1}{\normVector{\partial K_j}} \oint_\mathfrak{I} \rho_{j, \mathfrak{A}} u_{j, \mathfrak{A}} - \rho_{j,\mathfrak{B}} u_{j, \mathfrak{B}} \diff{S}\\ &= \frac{1}{\normVector{\partial K_j}} \oint_\mathfrak{I} \jump{\rho_j u_{j}} \diff{S} \,, \end{split} \end{align} and the indicator~$\mathcal{I}^{m}_j (\vec{U})$ as \begin{equation} \label{eq:indicator_mom} \begin{split} \mathcal{I}^{m}_j (\vec{U}) = \mathcal{I}^{m} (\vec{U}) \vert_{K_j} &\coloneqq \frac{1}{\normVector{\partial K_j}} \oint_\mathfrak{I} p_{j, \mathfrak{A}} + \rho_{j,\mathfrak{A}} u_{j,\mathfrak{A}}^2 - \left(p_{j,\mathfrak{B}} + \rho_{j,\mathfrak{B}} u_{j,\mathfrak{B}}^2\right) \diff{S}\\ &= \frac{1}{\normVector{\partial K_j}} \oint_\mathfrak{I} \left[\!\!\left[ p_j + \rho_j u_{j}^2 \right]\!\!\right] \diff{S} \,. \end{split} \end{equation} The new shock interface position~$x_\mathfrak{I}^{l+1}$ is computed by applying the bisection algorithm as presented for the indicator~$\mathcal{I}^{P0}_j (\rho)$ in Section~\ref{sec:p0_indicator}. \subsection{Implicit Pseudo-Time-Stepping} \label{sec:implicit_pseudo_ts} In the following, we present a novel implicit pseudo-time-stepping procedure in order to correct the shock interface position~$x_\mathfrak{I}$ inside a cut background cell~$K_j$ by means of the indicators presented in Section~\ref{sec:indicators}. For that, we assume that a cell-accurate guess of~$x_\mathfrak{I}$ is already known. In Section~\ref{sec:implicit_pseudo_ts_setting}, we explain the setting of the test case. We show the results of a one-dimensional proof of concept in Section~\ref{sec:implicit_pseudo_ts_proof}. \subsubsection{Setting} \label{sec:implicit_pseudo_ts_setting} To illustrate the procedure, we consider a stationary shock wave located at~$x_\mathrm{s} = 0.55$ with a Mach number of $\mathrm{M_s} = 1.5$ by fixing the frame of reference to it. We choose a computational domain $\Omega_h = (0, 1)$ discretized by $10$ equally-sized cells with~$h=0.1$. We prescribe the exact pre- and post-shock states at the left and right boundary, respectively.
The pre-shock conditions are given by \begin{align} \label{eq:pre_shock_cond} \left( \rho_\mathrm{pre},\,u_{\mathrm{pre}},\,p_\mathrm{pre}\right)^\intercal = \left(1,\,\sqrt{\gamma \frac{p_\mathrm{pre}}{\rho_\mathrm{pre}}}\mathrm{M_s},\,1 \right)^\intercal \end{align} and the post-shock conditions by \begin{subequations} \label{eq:post_shock_cond} \begin{align} \rho_\mathrm{post} &= \frac{(\gamma + 1) \mathrm{M_s}^2}{2 + (\gamma -1) \mathrm{M_s}^2} \rho_\mathrm{pre}\,,\\ u_{\mathrm{post}} &= \frac{2 + (\gamma - 1) \mathrm{M_s}^2}{(\gamma + 1) \mathrm{M_s}^2} u_{\mathrm{pre}}\,,\\ p_\mathrm{post} &= \left[ 1 + \frac{2 \gamma}{\gamma + 1} (\mathrm{M_s}^2 - 1) \right] p_\mathrm{pre}\,. \end{align} \end{subequations} In Equations~\eqref{eq:pre_shock_cond} and~\eqref{eq:post_shock_cond}, we set the heat capacity ratio to~$\gamma = 1.4$. As a starting guess, we mimic a sufficiently smooth initial condition, which can be obtained by means of a smoothed Heaviside function \begin{align} \label{eq:smoothed_Heaviside} H (x) = 0.5 \left[ \tanh \left( \frac{x - x_\mathrm{s}}{\tilde{C} h / \max(1, P)} \right) + 1.0 \right] \,, \end{align} where~$x$ denotes an arbitrary point in the domain, $x_\mathrm{s}$ is the position of the shock front, and $\tilde{C} = 1.0$ is a user-defined factor to control the strength of the smoothing. The smoothed initial condition of a physical quantity~$\psi_0 (x) = \psi (x, t_0)$ is then computed as \begin{align} \label{eq:smoothed_initial_cond} \psi_0 (x)= \psi_\mathrm{pre} (x) - H(x) \left[ \psi_\mathrm{pre} (x) - \psi_\mathrm{post} (x) \right]\,. \end{align} We apply the smoothing~\eqref{eq:smoothed_initial_cond} to all components of the state vector~$\vec{U}$, see Equation~\eqref{eq:vectorConservedQuantities}. Note that we do not expect any limitation in using a DG computation with a suitable shock-capturing strategy~\cite{geisenhofer2019,persson2006,barter2010} as an input for the XDG computation. In order to verify the robustness of the sub-cell correction procedure, we assume the shock interface~$\mathfrak{I}_0$ (in this section, we drop the index~$\mathrm{s}$ marking the \emph{shock} interface in order to avoid confusion) to be initially located at~$x_\mathfrak{I}^{l=0} = x_\mathfrak{I}^0 = x_\mathfrak{I} (t_0) = 0.57$. This position is sufficiently far away from the exact shock position~$x_\mathrm{s} = 0.55$. The shock interface is represented by the zero iso-contour of a level-set function, which is given by \begin{align} \label{eq:level_set} \varphi_\mathrm{s} (x) = x - x_\mathfrak{I}^0 = x - 0.57\,. \end{align} We show the smoothed initial condition centered around the shock interface~$\mathfrak{I}_0$ in Figure~\ref{fig:initial_cond}.
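Under the same assumptions, the initial state can be generated with a few lines of Python (our sketch; the values follow Equations~\eqref{eq:pre_shock_cond} to~\eqref{eq:smoothed_initial_cond}):

\begin{verbatim}
import numpy as np

gamma, Ms = 1.4, 1.5
rho_pre, p_pre = 1.0, 1.0
u_pre = np.sqrt(gamma * p_pre / rho_pre) * Ms

# Post-shock state from the normal shock relations:
rho_post = (gamma + 1) * Ms**2 / (2 + (gamma - 1) * Ms**2) * rho_pre
u_post = (2 + (gamma - 1) * Ms**2) / ((gamma + 1) * Ms**2) * u_pre
p_post = (1 + 2 * gamma / (gamma + 1) * (Ms**2 - 1)) * p_pre

def smoothed_ic(x, psi_pre, psi_post, x_s=0.55, h=0.1, P=2, C=1.0):
    """Smoothed Heaviside blending of the pre- and post-shock values."""
    H = 0.5 * (np.tanh((x - x_s) / (C * h / max(1, P))) + 1.0)
    return psi_pre - H * (psi_pre - psi_post)

x = np.linspace(0.0, 1.0, 101)
rho0 = smoothed_ic(x, rho_pre, rho_post)
print(round(rho_post, 4))   # 1.8621 for Ms = 1.5
\end{verbatim}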
\begin{figure}[tbp] \centering \begin{tikzpicture} \begin{axis}[ xmin=0.45, xmax=0.65, ymin=0.8, ymax=2.2, xlabel=$x$, ylabel=Density~$\rho$, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, no markers, xmajorgrids=false, ymajorgrids=false, legend style={cells={align=left}}, width=0.27\textwidth, height=0.27\textwidth ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, dashed] (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=red] (axis cs:0.57,0.8) -- (axis cs:0.57,2.2); \node[color=red, align=left, anchor=south west] at (axis cs:0.57,0.8) {\small$\mathfrak{I}_0$}; \pgfplotstableread{XDG_SSW_Initial_Condition.txt}\datafile \addplot+[] table[skip first n=1] {\datafile}; \addlegendentry{Shock}; \pgfplotstableread{XDG_SSW_OneLs_p2_xCells10_yCells3_agg0.3_ts=100_dtFixed0.1_shockPos=0.55_smooth=1.txt}\datafile \addplot+[] table[skip first n=1] {\datafile}; \addlegendentry{Smoothed initial\\condition, $P=2$}; \end{axis} \end{tikzpicture} \caption {Initial configuration for determining the sub-cell accurate position of a stationary shock wave in an XDG method. The smoothed initial condition mimics the solution of a standard DG method with shock capturing, see Equations~\eqref{eq:smoothed_Heaviside} and~\eqref{eq:smoothed_initial_cond}. The shock is located at~$x_\mathrm{s} = 0.55$. The initial position of the shock interface~$\mathfrak{I}_0$ is assumed to be at~$x_\mathfrak{I}^0 = x_\mathfrak{I} (t_0) = 0.57$.} \label{fig:initial_cond} \end{figure} Note that the shock interface is fixed in space during each correction step. The procedure terminates if the position~$x_\mathfrak{I}^l$ of the shock interface~$\mathfrak{I}_l$ coincides with the exact shock position~$x_\mathrm{s}$, i.e., $x_\mathfrak{I}^l = x_\mathrm{s}$. In order to obtain the steady-state solution during each correction step~$t_l$ of the shock interface position, which we call a \emph{pseudo-time-step}, we use a standard implicit Euler scheme with a finite time-step size of~$\Delta t_l = 0.1$. This mathematically corresponds to an under-relaxation-like method, and thus the nonlinear system can be solved using a standard Newton procedure. Due to the low number of DOF, the Jacobian matrix at a given point can be evaluated in a brute-force fashion using a small perturbation on the order of $10^{-7}$ of each DOF. The software framework \emph{BoSSS} provides some acceleration of this process, which exploits the locality of the DG discretization: Since any perturbation only affects the residual in the respective cell and all its neighboring cells, one can perturb multiple DOF at once, given that they are sufficiently far apart. This makes it possible to construct several columns of the Jacobian matrix at once. Further advanced preconditioning techniques and implicit methods~\cite{dolejsi2004,fidkowski2005,shahbazi2009} are beyond the scope of this work, since we focus on the novel methodology for determining the sub-cell accurate position of the shock front. We start the correction procedure from an initial numerical solution~$\vec{U}_0 (x) = \vec{U} (x, t_0)$ and an initial cell-accurate approximation of the shock interface position~$x_\mathfrak{I}^0$.
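In condensed form, the outer correction loop reads as follows. This Python sketch is purely illustrative: \texttt{steady\_state} and \texttt{indicator} are hypothetical placeholders for the implicit XDG solve and for one of the indicators $\mathcal{I}^{P0}$, $\mathcal{I}^{\rho}$, or $\mathcal{I}^{m}$:

\begin{verbatim}
def correct_interface(U0, x_I0, cell, steady_state, indicator,
                      tol=1e-8, max_iter=50):
    """Bisection on the interface position; one implicit steady-state
    solve per pseudo-time-step t_l (interface fixed during each solve)."""
    x_lo, x_hi = cell              # bracket: boundaries of the cut cell K_j
    x_I, U = x_I0, U0
    for l in range(max_iter):
        U = steady_state(U, x_I)   # implicit Euler pseudo-time-stepping
        I = indicator(U, x_I)
        if abs(I) < tol:
            return x_I, U          # interface has converged to x_s
        if I > 0:                  # shift to the right
            x_lo, x_I = x_I, 0.5 * (x_I + x_hi)
        else:                      # shift to the left
            x_hi, x_I = x_I, 0.5 * (x_lo + x_I)
    return x_I, U
\end{verbatim}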
Next, we set up the iterative pseudo-time-stepping procedure~$t_l \rightarrow t_{l+1}$ by discretizing the Euler equations~\eqref{eq:eulerEquations} to~\eqref{eq:convectiveFluxes}, together with the shock interface~$\mathfrak{I}_l$, in space with the presented XDG method, see Section~\ref{sec:discretization}. To obtain the steady-state solution~$\vec{U} (x, t_l)$ of each pseudo-time-step~$t_l$, we apply the implicit Euler scheme to advance the solution in time, $t_l^n \rightarrow t^{n+1}_l$. Note that the index~$l$ corresponds to a pseudo-time-step where the shock interface~$\mathfrak{I}_l$ is fixed in space, while the index~$n$ denotes the time-step of the implicit Euler scheme. \subsubsection{Proof of Concept} \label{sec:implicit_pseudo_ts_proof} In Figure~\ref{fig:proof_of_concept}, we show the results of a one-dimensional proof of concept of the sub-cell accurate correction procedure in a pseudo-two-dimensional computation. \begin{figure}[tbp] \centering \begin{tikzpicture} \begin{groupplot}[ group style={ group name=mygroupplot, group size=3 by 4, vertical sep = 10mm, horizontal sep = 19mm, }, height=0.2\textwidth, width=0.2\textwidth, no markers ] \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0, ymax=1, ylabel={Initial value}, xmajorgrids=false, ymajorgrids=false, yticklabels={,,}, ytick style={draw=none}, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, title style={at={(0.5,1.0)}, anchor=north, yshift=5mm}, title ={\small Cut background cell~$K_j$}, ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, dashed] (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=red, thick] (axis cs:0.57,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.57,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=red, align=left, anchor=south west] at (axis cs:0.57,0) {\small $\mathfrak{I}_0$ at\\\small $0.57$}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_0$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0.75, ymax=2.15, ylabel=Density~$\rho$, xmajorgrids=false, ymajorgrids=false, no markers, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, legend columns=-1, column sep=0.1cm, legend to name={CommonLegend}, title style={at={(0.5,1.0)}, anchor=north, yshift=5mm}, title ={\small Numerical solution}, ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_0$}; \pgfplotstableread{XDG_SSW_Initial_Condition.txt}\datafile \addplot+[color=lightgray, dashed] table[skip first n=1] {\datafile}; \pgfplotstableread{1_rho.txt}\datafile \addplot+[solid] table[skip first n=1] {\datafile}; \pgfplotstableread{1_rho_p0.txt}\datafile \addplot+[dashdotted] table[skip first n=1] {\datafile}; \legend{Shock,$P=2$, $P0$-proj.} \nextgroupplot[ xmin=-0.5, xmax=2.5, ymin=-1.0, ymax=5.0, ylabel=Indicator value, xmajorgrids=true, ymajorgrids=true, xtick={0, 1, 2}, xticklabels={$\mathcal{I}^{P0}$,
$\mathcal{I}^{\rho}$, $\mathcal{I}^{m}$}, title style={at={(0.5,1.0)}, anchor=north, yshift=5mm}, title ={\small Indicators}, ] \pgfplotstableread{1_indicators_p0.txt}\datafile \addplot+[ ybar, fill=gray, ] table[] {\datafile}; \pgfplotstableread{1_indicators.txt}\datafile \addplot+[ ybar, fill=black!10!, solid ] table[] {\datafile}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small $t_0$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0, ymax=1, ylabel={Time-step 1}, xmajorgrids=false, ymajorgrids=false, yticklabels={,,}, ytick style={draw=none}, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, dashed] (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=red, thick] (axis cs:0.535,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.535,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=red, align=left, anchor=south west] at (axis cs:0.535,0) {\small$\mathfrak{I}_1$ at\\\small$0.535$}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_1$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0.75, ymax=2.15, ylabel=Density~$\rho$, xmajorgrids=false, ymajorgrids=false, no markers, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, legend columns=-1, column sep=0.1cm, legend to name={CommonLegend} ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_1$}; \pgfplotstableread{XDG_SSW_Initial_Condition.txt}\datafile \addplot+[color=lightgray, dashed] table[skip first n=1] {\datafile}; \pgfplotstableread{2_rho.txt}\datafile \addplot+[solid] table[skip first n=1] {\datafile}; \pgfplotstableread{2_rho_p0.txt}\datafile \addplot+[dashdotted] table[skip first n=1] {\datafile}; \nextgroupplot[ xmin=-0.5, xmax=2.5, ymin=-1.0, ymax=5.0, ylabel=Indicator value, xmajorgrids=true, ymajorgrids=true, xtick={0, 1, 2}, xticklabels={$\mathcal{I}^{P0}$, $\mathcal{I}^{\rho}$, $\mathcal{I}^{m}$}, ] \pgfplotstableread{2_indicators_p0.txt}\datafile \addplot+[ ybar, fill=gray, ] table[] {\datafile}; \pgfplotstableread{2_indicators.txt}\datafile \addplot+[ ybar, fill=black!10!, solid ] table[] {\datafile}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_1$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0, ymax=1, ylabel={Time-step 2}, xmajorgrids=false, ymajorgrids=false, yticklabels={,,}, ytick style={draw=none}, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, dashed] (axis 
cs:0.55,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=red, thick] (axis cs:0.5525,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5525,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=red, align=left, anchor=south west] at (axis cs:0.5525,0) {\small$\mathfrak{I}_2$ at\\\small$0.5525$}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_2$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0.75, ymax=2.15, ylabel=Density~$\rho$, xmajorgrids=false, ymajorgrids=false, no markers, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, legend columns=-1, column sep=0.1cm, legend to name={CommonLegend} ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_2$}; \pgfplotstableread{XDG_SSW_Initial_Condition.txt}\datafile \addplot+[color=lightgray, dashed] table[skip first n=1] {\datafile}; \pgfplotstableread{3_rho.txt}\datafile \addplot+[solid] table[skip first n=1] {\datafile}; \pgfplotstableread{3_rho_p0.txt}\datafile \addplot+[dashdotted] table[skip first n=1] {\datafile}; \nextgroupplot[ xmin=-0.5, xmax=2.5, ymin=-1.0, ymax=5.0, ylabel=Indicator value, xmajorgrids=true, ymajorgrids=true, xtick={0, 1, 2}, xticklabels={$\mathcal{I}^{P0}$, $\mathcal{I}^{\rho}$, $\mathcal{I}^{m}$}, ] \pgfplotstableread{3_indicators_p0.txt}\datafile \addplot+[ ybar, fill=gray, ] table[] {\datafile}; \pgfplotstableread{3_indicators.txt}\datafile \addplot+[ ybar, fill=black!10!, solid ] table[] {\datafile}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_2$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0, ymax=1, xlabel=$x$, ylabel={Final time-step}, xmajorgrids=false, ymajorgrids=false, yticklabels={,,}, ytick style={draw=none}, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, dashed] (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=red, thick] (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.55,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=red, align=left, anchor=south west] at (axis cs:0.55,0) {\small$\mathfrak{I}_\mathrm{last}$ at\\\small$0.55$}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_\mathrm{last}$}; \nextgroupplot[ xmin=0.45, xmax=0.65, ymin=0.75, ymax=2.15, xlabel=$x$, ylabel=Density~$\rho$, xmajorgrids=false, ymajorgrids=false, no markers, xtick={0.5, 0.55, 0.6}, xticklabels={$0.5$, $x_\mathrm{s}$, $0.6$}, legend columns=-1, legend style={/tikz/every even column/.append style={column sep=3mm}}, legend to name={CommonLegend} ] \draw[color=lightgray, thick] (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.5,\pgfkeysvalueof{/pgfplots/ymax}); \draw[color=lightgray, thick] (axis 
cs:0.6,\pgfkeysvalueof{/pgfplots/ymin}) -- (axis cs:0.6,\pgfkeysvalueof{/pgfplots/ymax}); \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_\mathrm{last}$}; \pgfplotstableread{XDG_SSW_Initial_Condition.txt}\datafile \addplot+[color=lightgray, dashed] table[skip first n=1] {\datafile}; \pgfplotstableread{5_rho.txt}\datafile \addplot+[solid] table[skip first n=1] {\datafile}; \pgfplotstableread{5_rho_p0.txt}\datafile \addplot+[dashdotted] table[skip first n=1] {\datafile}; \legend{Shock, {$P=2$}, {$P0$-proj.}} \nextgroupplot[ xmin=-0.5, xmax=2.5, ymin=-1.0, ymax=5.0, xlabel style={align=center}, xlabel={Indicator variant\\$\mathcal{I}^{P0}$: $+/- \Rightarrow$\\$\mathfrak{I} \textrm{: to right/left}$}, ylabel=Indicator value, xmajorgrids=true, ymajorgrids=true, xtick={0, 1, 2}, xticklabels={$\mathcal{I}^{P0}$, $\mathcal{I}^{\rho}$, $\mathcal{I}^{m}$}, ] \pgfplotstableread{5_indicators_p0.txt}\datafile \addplot+[ ybar, fill=gray, ] table[] {\datafile}; \pgfplotstableread{5_indicators.txt}\datafile \addplot+[ ybar, fill=black!10!, solid ] table[] {\datafile}; \node[color=black,fill=black!10!,anchor=north east] at (axis cs:\pgfkeysvalueof{/pgfplots/xmax},\pgfkeysvalueof{/pgfplots/ymax}) {\small$t_\mathrm{last}$}; \end{groupplot} \path ([yshift=-10mm]mygroupplot c1r4.south east) -- node[below]{\ref{CommonLegend}} ([yshift=-10mm]mygroupplot c3r4.south west); \end{tikzpicture} \caption{ Illustration of the implicit pseudo-time-stepping procedure. The shock interface~$\mathfrak{I}_l$ converges to the exact shock position~$x_\mathrm{s}=0.55$ by using the indicator~$\mathcal{I}^{P0} (\rho)$. Three different indicators are presented: $\mathcal{I}^{P0} (\rho)$ is based on a $P0$-projection of a high-order solution ($P=2$) of the density field~$\rho$, whereas $\mathcal{I}^{\rho} (\vec{U})$ and $\mathcal{I}^{m} (\vec{U})$ are based on the normal shock relations~\eqref{eq:normalShockWaveRelations_rpt_conti} and~\eqref{eq:normalShockWaveRelations_rpt_mom}, respectively. The shock interface position inside the cut background cell~$K_j = (0.5, 0.6)$ is iteratively adapted by applying a bisection procedure depending on the sign of~$\mathcal{I}^{P0} (\rho)$. In this test case, only the indicator~$\mathcal{I}^{P0} (\rho)$ delivers the correct shifting direction. The pseudo-time-steps~$t_l = \{t_0,t_1,t_2,t_\mathrm{last}\}$ are shown with gray labels. } \label{fig:proof_of_concept} \end{figure} The reader is referred to Section~\ref{sec:implicit_pseudo_ts_setting} for a description of the test case. The left column depicts the position of the shock interface~$\mathfrak{I}_l$ in the cut background cell~$K_j = (0.5, 0.6)$, the middle column shows the unlimited solution with a polynomial degree of~$P=2$ as well as the corresponding $P0$-projections in the cut-cells~$K_{j, \mathfrak{A}}$ and~$K_{j, \mathfrak{B}}$, and the right column depicts the values of the indicators~$\mathcal{I}^{P0} (\rho)$, $\mathcal{I}^{\rho} (\vec{U})$, and $\mathcal{I}^{m} (\vec{U})$. The results are plotted at the pseudo-time-steps~$t_l = \{t_0,t_1,t_2\}$ and for the converged solution at~$t_\mathrm{last}$, where the shock interface coincides with the exact shock position, i.e., $x_{\mathfrak{I}}^\mathrm{last} = x_\mathrm{s} = 0.55$. During the correction, only the indicator~$\mathcal{I}^{P0} (\rho)$ is capable of delivering the correct shifting direction of the shock interface in every pseudo-time-step.
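The interface update itself is a plain bisection on the cut background cell. The following minimal sketch (our own, reconstructed from the interface positions shown in Figure~\ref{fig:proof_of_concept}; the exact implementation is not reproduced here) recovers the sequence $0.57 \rightarrow 0.535 \rightarrow 0.5525$ discussed next.
\begin{verbatim}
def bisect_interface(x_lo, x_hi, x_iface, sign_I):
    """One correction step for the shock-interface position inside the
    cut background cell: sign_I is the sign of I^P0(rho); '+' shifts
    the interface to the right, '-' to the left (cf. figure caption)."""
    if sign_I > 0:
        x_lo = x_iface          # shock lies to the right of the interface
    else:
        x_hi = x_iface          # shock lies to the left of the interface
    return x_lo, x_hi, 0.5 * (x_lo + x_hi)

# Reproduces the proof of concept: K_j = (0.5, 0.6), x_I^0 = 0.57.
state = (0.5, 0.6, 0.57)
state = bisect_interface(*state, sign_I=-1)   # -> x_I^1 = 0.535
state = bisect_interface(*state, sign_I=+1)   # -> x_I^2 = 0.5525
\end{verbatim}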
All indicators give the correct shifting direction at the pseudo-time-step~$t_1$. By contrast, at the pseudo-time-steps~$t_0$ and~$t_2$, the indicators~$\mathcal{I}^{\rho} (\vec{U})$ and~$\mathcal{I}^{m} (\vec{U})$, which are based on the normal shock relations~\eqref{eq:normalShockRelations}, fail to return the correct shifting direction, see again the right column of Figure~\ref{fig:proof_of_concept}. The new interface positions~$x_\mathfrak{I}^{l+1}$ are determined by the bisection algorithm as discussed for the indicator~$\mathcal{I}^{P0} (\rho)$ in Section~\ref{sec:p0_indicator}. \section{Conclusion} \label{sec:conclusion} In this work, we have derived a novel reconstruction procedure for determining the sub-cell accurate position of a pseudo-two-dimensional shock front in the context of shock-fitting by employing an XDG method. The overall goal is the simulation of compressible flows with high-order accuracy, for which the presented methodology provides a fundamental basis. Many DG shock-capturing strategies lack an (arbitrary) high-order convergence rate. By contrast, we employ an XDG method in which a shock front is described by the zero iso-contour of a level-set function in order to circumvent this limitation. To this end, it is essential to obtain a sharp reconstruction of the shock front. This is a non-trivial task, since the position and shape of shock waves are in general not known a priori. We have introduced the methodology to determine the exact position of a stationary normal shock wave in a pseudo-two-dimensional setting. Furthermore, our approach features a cell-agglomeration strategy~\cite{muller2017,kummer2020} for the treatment of small and ill-shaped cut-cells. In Section~\ref{sec:sub-cell_accurate_correction}, we have presented a novel \emph{sub-cell accurate} correction procedure in order to find the \emph{exact} shock position in a cut background cell. We have developed an implicit pseudo-time-stepping procedure which determines the exact shock position iteratively. In each pseudo-time-step, we use a standard implicit Euler scheme to drive the solution to a steady state, and the position of the shock front in the cut background cell is judged by several local indicators. In this test case, the indicator based on a zeroth-order projection of the local solution has always determined the correct shifting direction of the shock interface. The new shock interface positions have been computed by using a bisection algorithm. Our ongoing and future work focuses on the extension to truly two-di\-men\-sion\-al, steady applications. For that, several marker points may be seeded on the reconstructed shock interface. An adapted implicit pseudo-time-stepping procedure could be applied to the marker points until they all converge to the exact position on the shock front. Then, the shock level-set function could be reconstructed. For unsteady flow scenarios, the complexity may increase drastically, since at this point it is not foreseeable whether additional stabilization mechanisms are required for the tracking of moving shock fronts. \section*{Acknowledgments} This work is supported by the \emph{Excellence Initiative} of the German Federal and State Governments and the Graduate School CE within the Center of Computational Engineering at the Technical University of Darmstadt. We thank L. Beck and J. Guti\'{e}rrez for their valuable comments on this work. \bibliographystyle{plain}
{ "timestamp": "2020-12-18T02:18:12", "yymm": "2012", "arxiv_id": "2012.08860", "language": "en", "url": "https://arxiv.org/abs/2012.08860" }
\section{\label{sec:level1}INTRODUCTION} The understanding of physical scales and energy scales is crucial in particle physics, condensed-matter research, and cosmology. For instance, conformal field theory (CFT) is an active research area; it leads to the AdS/CFT correspondence, through which symmetry restoration at high temperatures can be studied with black-hole solutions \cite{Chai2020}. Effects of beyond-standard-model physics may emerge above the TeV energy scale \cite{Lykken,Astrid}: based on the measured Higgs and top-quark masses, perturbative analysis suggests that the electroweak Higgs vacuum might indeed not be stable; new degrees of freedom may appear at or below scales of $\mathcal{O}(10^{10})$ GeV, which would change the renormalization-group evolution and render the potential stable. Renormalization is one of the key concepts in the development of quantum field theory \cite{Peskin}. Wilson's work \cite{Wilson1971,Wilson1974} helps us understand renormalization in terms of relevant and irrelevant operators, and it illustrated the deep connection between renormalization and the scaling of the effective Lagrangian. In 1983, Polchinski advanced Wilson's idea: he showed that the connection found by Wilson can serve as the basis of a proof of the renormalizability of perturbative field theory \cite{Polchinski}. Sean Carroll commented that "Indeed, there can be many possible 'true' theories - many 'ultraviolet completions' that would give you exactly the same low-energy effective theory!" \cite{Sean2013}. Considering that the perturbative EFT is the scaling-corresponding theory (i.e., the scaled EFT) of the underlying bare field theory, we may ask whether there is an extension of the scaling mechanism that yields a space of underlying field theories. We propose such a mechanism in this work. Since the equations of motion (EoMs) of the bare theory and of the usual rescaled Lagrangian, $\mathcal{L}_{R}$, share the same (rescaled) form, we call two models scaling-corresponding if there is such a correspondence between their EoMs. We introduce the field-map, which maps the quantum field $\phi$ of the scaled EFT model to the quantum field $\Phi$ of the underlying bare model. For example, the field-map of scalar fields is defined by $\Phi(\phi)$. The system described by the partition function of the underlying bare theory, $\tilde{\mathcal{Z}}=\int \text{D$\Phi $} \, e^{i \tilde{S}\left(\Phi ,\tilde{g}\right)}$, transforms as $\tilde{\mathcal{Z}}\to \int \text{D$\Phi(\phi) $} \, \text{D$\phi $}\, e^{i \tilde{S}\left(\Phi(\phi),\tilde{g}\right)}$, where $\tilde{S}$ and $\tilde{g}$ denote the related action and coupling(s), respectively. The mechanism requires that the transformed system be equivalent to the system of the scaled EFT model when we impose the field-map and certain constraints. We cover the construction of the mechanism and how it relates to Wilson's and Polchinski's approach to EFT in section 2. In section 3, we develop a generic method to identify and solve the constraints, and we illustrate the mechanism with the real $\phi^4$ scalar field model. We find that if there is a self-interaction of order $N$ in the EFT model, the solution (of a radical equation of order $N-1$) consists of multiple models (up to $N-1$ models) for the underlying bare theory. In our illustrative example - the $\phi^4$ model - we use the two-loop renormalization from \cite{lezioni99} and obtain an estimated cutoff of the EFT at the scale of $\mathcal{O}(10^6)$ GeV.
In the last section, we generalise the mechanism further by introducing the coupling-map $\tilde{g}(\Phi)$ and apply it to cosmology, specifically the inflation paradigm. Given the limited observational constraints on inflationary cosmology \cite{Wang2014}, a model-selection "guideline" can be useful. We demonstrate how the mechanism leads to a power-law model of slow-roll inflation from the $\phi^4$ EFT. We briefly verify the consistency of the generated inflation model against the observational constraints - the spectral index ($n_{s}$), the number of e-folds, and the tensor-to-scalar ratio ($r$) \cite{Leach2003, planck2018, Wang2014}. \section{\label{sec:level1}The extension of scaling-correspondence mechanism} In this section, we first briefly review the work of Wilson and Polchinski on the scaling of theories and renormalization, as well as the analysis tool used by Polchinski. We then introduce the mechanism as an extension of the scaling transformation and explain how it may generate related underlying bare theories with some interesting properties. Finally, we study the cutoff scale of the EFT model obtained from the mechanism. In the standard renormalization of perturbative quantum field theory, to remove the infinities of loop calculations, we add counter-terms to the renormalized terms, i.e. $\mathcal{L}_{R} + \mathcal{L}_{ct} $ \cite{Peskin}. Wilson illustrated that renormalization is closely related to the scaling of the theory via the scaling transformation \cite{Wilson1971, Wilson1974}; he used irrelevant and relevant operators to analyse how the scaling and the renormalization of an EFT are connected. In 1983, Polchinski further developed Wilson's idea in his paper "Renormalization and Effective Lagrangians" \cite{Polchinski} and proved that the connection found by Wilson can be treated as the basis of a proof of the renormalizability of perturbative field theory. We refer the reader to his work for the details of the proof \cite{Polchinski}. Polchinski used FIG. \ref{fig:conver} to show that a perturbative EFT indeed lies in a "convergent" zone of the theory space, i.e. the $\lambda_4 -\lambda_6$ plane, when the scaling transformation is performed - scaling the bare theory from the scale $\Lambda_0$ down to a much lower scale $\Lambda_R$. By convergent zone we mean that the trajectories of theories with different bare cutoffs converge as the scale is lowered. In this example (a four-dimensional scalar field theory), $\lambda_4$ is the coupling of the relevant operator (dimensionless), and $\lambda_6$ is the coupling of the irrelevant operator (of dimension [$mass^{-2}$]); points $C_1$ and $C_2$ are the initial points of bare theories with different cutoff scales (the cutoff of point 2 is higher than that of point 1), while points $D_1$ and $D_2$ lie in the convergent zone after the theories are rescaled. In the convergent zone, the relevant coupling $\lambda_4$ is the renormalized coupling, $\lambda^{R}_4$; in contrast, the irrelevant coupling is suppressed by $\mathcal{O}[\Lambda_{R}^2 / \Lambda_{0}^2]$. \begin{figure}[htbp] \includegraphics[width=180pt]{polchinskiFig2.png} \caption{\label{fig:conver} In the $\lambda_4 -\lambda_6$ plane of the theory space of the example illustrated by Polchinski \cite{Polchinski}, the bare theories with different cutoffs at $C_1$ and $C_2$ move toward the convergence of trajectories at $D_1$ and $D_2$ as we move to lower scales.} \end{figure} The EoMs associated with the theories at points $C_1$ and $C_2$ are the EoMs of the bare theories, i.e.
unscaled theories, while the EoMs associated with the theories at points $D_1$ and $D_2$ are the EoMs of the renormalized theories, i.e. scaled theories. Both scaled and unscaled theories have EoMs of the same form (with coefficients that differ only because of the scaling). We call two models \textit{scaling-corresponding} if their EoMs are of the same form up to scaled coefficients. Therefore, the bare theory, $\mathcal{L}_{B}$, and the perturbative renormalized EFT, $\mathcal{L}_{R}$, are scaling-corresponding. In this scaling transformation, we have a specific \textit{field-map} (a map from the quantum field of one system to the quantum field of another system): the usual field rescaling, $\Phi = Z^{\frac{1}{2}}\; \phi$. By introducing a general field-map, $\Phi(\phi)$, we extend the scaling transformation by postulating a mechanism that generates an underlying bare theory (or a set of underlying bare theories) $ \tilde{\mathcal{L}} $ of the quantum field $\Phi$ such that $ \tilde{\mathcal{L}} $ and the perturbative renormalized EFT of the quantum field $\phi$, $\mathcal{L}_{R}$, are scaling-corresponding. The usual field rescaling of the renormalization procedure is the trivial case of the mechanism, e.g. $\Phi(\phi) = Z^{\frac{1}{2}}\; \phi$ with the other couplings scaled accordingly. We illustrate the mechanism with the scalar field theory for simplicity. The system described by the partition function of the underlying bare theory is denoted as \begin{equation} \tilde{\mathcal{Z}}=\int \text{D$\Phi $} \, e^{i \tilde{S}\left(\Phi ,\tilde{g}\right)} \label{eq:Ztau}, \end{equation} where $\tilde{S}$ and $\tilde{g}$ are the related action and coupling(s), respectively. The field-map maps the scalar field $\phi$ to $\Phi$, i.e. $\Phi(\phi)$. The system is then described by\footnote{We can add the terms $J \Phi$ and $J \phi$ to the Lagrangians of the actions of Eq. (\ref{eq:Ztau}) and Eq. (\ref{eq:Ztau2}), respectively, to define the generating functional $\tilde{Z}[J]$.} \begin{equation} \tilde{\mathcal{Z}}= \int \text{D$\Phi(\phi) $} \, \text{D$\phi $}\, e^{i \tilde{S}\left(\Phi(\phi), \tilde{g}\right)} \delta(\Omega) \label{eq:Ztau2}, \end{equation} where $\Omega$ is the constraint to be identified; the field configurations are determined by both the field-map and $\phi$. We require that, given specific constraints and a field-map, the system (\ref{eq:Ztau2}) be equivalent to the system of a perturbative renormalized EFT, which lies in the "\textit{finite-dimensional submanifold in the space of possible Lagrangians}", as referred to by Polchinski \cite{Polchinski} (e.g. a theory lying in the convergent zone in FIG \ref{fig:conver}). By equivalent, we mean that the systems reproduce the same EFT at low energies. In order to obtain the constraints and the field-map, we first obtain the EoM from a general form\footnote{We may postulate the potential as a sum of couplings multiplying all combinations of field operators, e.g. $(\Phi_i)^{r} (\Phi_j)^{s}$.} of $\tilde{\mathcal{L}}$, expand the field-map around $\phi=v$ as \begin{equation} \Phi(\phi) = \Phi_0 + \Phi_{(1)} (\phi - v ) + \Phi_{(2)} \frac{(\phi - v)^2}{2} + ..., \label{eq:expansion} \end{equation} where $\Phi_{(r)} \equiv \Phi^{(r)}(v)$, and then match this EoM to the EoM of the Lagrangian $\mathcal{L}_{R} + \mathcal{L}_{ct}$ to obtain the constraints.
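As an illustration of this matching step, the following SymPy sketch (our own illustrative script, not taken from the original derivation; the symbol names are our assumptions) performs the term-by-term comparison for the scalar example of the next section at first order in the field-map. Substituting $c_1 = -Z_1/\Phi_{(1)}$ into its output reproduces the constraints of Eq.~\eqref{eq:cons}.
\begin{verbatim}
import sympy as sp

phi, v = sp.symbols('phi v')
b0, m20, a0, l0 = sp.symbols('beta_0 m2_0 alpha_0 lambda_0')
c1, P0, P1 = sp.symbols('c_1 Phi_0 Phi_1')   # Phi_0, Phi_1: field-map Taylor coefficients
Z1, Zm, Zl, mb2, lb = sp.symbols('Z_1 Z_m2 Z_lam mb2 lam_b')

# First-order field-map, Eq. (expansion): Phi(phi) ~ Phi_0 + Phi_1*(phi - v),
# so Box(Phi) = Phi_1*Box(phi) and the bare EoM reads
#   c_1*Phi_1*Box(phi) = beta_0 + m2_0*Phi + alpha_0*Phi**2/2 + lambda_0*Phi**3/6.
Phi = P0 + P1 * (phi - v)
bare = sp.expand(b0 + m20 * Phi + a0 * Phi**2 / 2 + l0 * Phi**3 / 6)

# EoM of the renormalized EFT: Z_1*Box(phi) = Z_m2*mb2*phi + Z_lam*lam_b*phi**3/6.
eft = Zm * mb2 * phi + Zl * lb * phi**3 / 6

# Rescale the bare EoM onto the EFT one and match powers of phi term by term.
scaled = sp.expand(bare * Z1 / (c1 * P1))
eqs = [sp.Eq(scaled.coeff(phi, n), eft.coeff(phi, n)) for n in range(4)]
sol = sp.solve(eqs, [b0, m20, a0, l0], dict=True)[0]
sol = {k: sp.simplify(e.subs(c1, -Z1 / P1)) for k, e in sol.items()}
print(sol)   # reproduces the structure of the constraints below
\end{verbatim}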
Since the symmetry group of the EFT is related to the symmetry of the EoM, the constraints constructed from the couplings ($\tilde{g}_i$), $\Phi_0$, and $\Phi_{(r)}$ for each term of the EoM are associated with the structure of the symmetry group of the EFT. The procedure is straightforward; we cover the generic method to solve the constraints and the field-map in detail, with examples, in the next section. Considering the first-order expansion of the field-map, $\Phi(\phi) \simeq \Phi_0 + \Phi_{(1)} (\phi - v )$, it corresponds to a specific type of field redefinition, $\phi \to Z_\phi \phi + \phi_0$. The kinetic term of the Lagrangian of the scalar field is trivially equivalent to the kinetic term of a rescaled field (by the factor $Z_\phi $), and the new potential is that of the shifted and rescaled field. From \cite{1804}, the functional integral with the field redefinition transforms to \begin{equation} \mathcal{Z'}[J]= \int D\phi' \; e^{i \int \mathcal{L'}[\phi']+J\phi'}, \end{equation} where $\mathcal{L'}$ and $\phi'$ are the transformed Lagrangian and scalar field, respectively, and $\mathcal{L'}[\phi']=\mathcal{L}[\phi]$. Note that the Jacobian of the functional measure, $| \frac{\delta F}{\delta \phi'} |$, is unity in dimensional regularization, where $F$ is the inverse map of the field redefinition. The Green's functions are transformed, but the S-matrix is not affected, because the choice of the interpolating field variable does not matter \cite{1804}. Therefore, the field-mapped theory in the first-order approximation reproduces the same S-matrix elements. In the context of EFT, we study the higher-order terms more carefully, because when the field-map is expanded to second order, the kinetic term transforms into terms that include interactions of the new field and its derivative, e.g. $\phi^2 \partial{\phi}^2$. From \cite{Peskin}, when one integrates out the high-momentum modes, the functional integral can be expressed as \begin{equation} \mathcal{Z}= \int [D\phi]_{b \Lambda} \; e^{i \int \mathcal{L}_{\text{eff}}}, \end{equation} where $\mathcal{L}_{\text{eff}} = \mathcal{L} + (\text{sum of connected diagrams})$. The sum of the connected diagrams includes the generated high-dimensional terms, such as interaction terms of the field and its derivative arising from the Taylor expansion in momenta, and the Lagrangian is rescaled. Therefore, when integrating out the high modes, the field-mapped theory mixes the high-dimensional terms generated by the field-map expansion with the connected diagrams. To be a consistent EFT at low energies, the high-dimensional operators of the EFT must be expressible in the following form \cite{1804}, \begin{equation} \mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{dim} \leq 4} + \frac{1}{\Lambda} \mathcal{C}^{(5)} \mathcal{O}^{(5)} + \frac{1}{\Lambda^2} \mathcal{C}^{(6)} \mathcal{O}^{(6)} + \text{...},\label{eq:EFTCond} \end{equation} where $ \mathcal{C}^{(r)}$ and $\mathcal{O}^{(r)}$ are the coefficients and the operators of the high-dimensional terms, respectively. Eq. (\ref{eq:EFTCond}) is the condition for the field-mapped theory to be equivalent to the original perturbative theory in the low-energy regime. In the next section, after solving the field-maps, we check this condition. The previous two paragraphs explain the mechanism in the context of S-matrix elements, the functional integral, and the effect of integrating out high-momentum modes; the purpose is to show how the mechanism provides a consistent low-energy EFT.
Note that the EoM correspondence and Eq. (\ref{eq:EFTCond}) make the mechanism non-arbitrary, because they impose specific constraints. Given a generated Lagrangian satisfying the condition of Eq. (\ref{eq:EFTCond}), one can use known EFT tools (e.g. removing redundant operators \cite{Brivio2019} and the renormalization group equations (RGEs) of SMEFT \cite{Jenkins2013}) to study the field-mapped theory\footnote{We expand the field-mapped Lagrangian such that the kinetic term is in canonical form.}. The high-dimensional operators of an EFT may lead to non-trivial structure in the correlation functions \cite{Brivio2019}. The dimension-six operators of SMEFT comprise 59 independent operators in total, and the RGE is fairly complicated \cite{Jenkins2013}. We leave such a study of non-trivial quantum effects for future work. Consider a more complicated scenario, such as an application to the Pati-Salam extension of the standard model \cite{Molinaro2018, PhysRevD.100.075009,PhysRevD.102.095025}, in which the symmetry group is \begin{equation} \text{SU(4)}\otimes \text{SU(2)}_L\otimes \text{SU(2)}_R \label{eq:PSSym}. \end{equation} The EoMs are more complicated, but the procedure is the same in principle: postulating a general form of $\tilde{\mathcal{L}}$ with all possible terms and prefactor coefficients as the couplings, expanding the field-maps (including scalar, gauge, and fermion fields), matching the EoMs, and then identifying the constraints. The solution of the mechanism, consisting of the constraints and the field-map, is generally not unique, because multiple solutions may exist. Therefore, multiple bare theories may reproduce the same EFT model, and they usually have a similar structure, as we illustrate in the next section with the $\phi^4$ model. A reasonable cutoff scale of the EFT model is \textit{the energy scale of the validity of the solution}, since the solution usually relies on the regime of the field $\Phi$ when we expand it via Eq. (\ref{eq:expansion}) up to a certain order. For instance, in FIG. \ref{fig:VTau}, the valid regime of a bare theory with the inverted double-well potential is around the meta-stable point $A$, while the solution is invalid at point $B$\footnote{In the next section, we show how to justify the valid region by the suppression factor of the field-map expansion.}; the field difference between points $A$ and $B$ indicates the energy-scale threshold of the mechanism below which the EFT model is reproduced. In the Higgs mechanism \cite{Peskin}, the ground state of the potential causes the field to have a nonzero vacuum expectation value; as a result, below the electroweak scale, the Higgs mechanism breaks the symmetry and the field theory is described by the spontaneously broken theory. Similarly, we speculate that the ground state of the potential of $\tilde{\mathcal{L}}$ in the example of FIG. \ref{fig:VTau} causes the field $\Phi$ to have a nonzero value: below the energy-scale threshold, the mechanism of this work takes place such that the EFT emerges; above the threshold, the system is described by the related bare theory, $\tilde{\mathcal{L}}$. However, the details of the transition are not covered in this work. \begin{figure}[htbp] \includegraphics[width=230pt]{VTauFIG.png} \caption{\label{fig:VTau} Potential $\tilde{V}(\Phi)$ of an example of a bare theory with the inverted double-well potential.
In this example, the regime of validity of the mechanism is around the field value of point $A$, where the EFT model is reproduced, and the mechanism becomes invalid at point $B$.} \end{figure} \section{\label{sec:level1}Illustration for scalar field} In this section, we explain in detail how the mechanism works by applying it to the real scalar $\phi^4$ theory, and we demonstrate the generic method to obtain the constraints of the mechanism. We assume the generic form of the bare theory of a real scalar field in Euclidean spacetime\footnote{For simplicity, we include terms only up to $\Phi ^4$.} \begin{equation} \tilde{\mathcal{L}}=\frac{1}{2} c_1 \partial _{\mu } \Phi \partial ^{\mu } \Phi +\beta _0 \Phi+\frac{1}{2} m^2{}_0 \Phi ^2+\frac{\alpha _0 \Phi ^3}{6}+\frac{\lambda _0 \Phi ^4}{4!} \label{eq:LTau0}, \end{equation} where $c_1$, $\beta _0$, $m^2{}_0$, $\alpha _0$, and $\lambda _0$ denote the coefficients. The related renormalized EFT is \begin{equation} \mathcal{L}_{\text{ren}}=\frac{1}{2} Z_1 \partial _{\mu } \phi \partial ^{\mu } \phi +\frac{1}{2} Z_{m^2} m_b^2 \phi ^2+\frac{Z_{\lambda } \lambda _b \phi ^4}{4!} \label{eq:Lren}, \end{equation} where the $Z$-terms are renormalized at two loops \cite{lezioni99}. After Taylor expanding the field-map $\Phi(\phi)$ to first order and matching the EoMs of $\tilde{\mathcal{L}}$ and $\mathcal{L}_{\text{ren}}$ term by term, we obtain the constraints\footnote{We do not match the $\phi^4$ term of the EoM because it is associated with an irrelevant operator of $\mathcal{L}_{\text{ren}}$.} \small \begin{equation} \begin{split} m^2{}_0 &=-\frac{2 \Phi _{\text{(1)}}^2 m_b^2 Z_{m^2}+\lambda _b Z_{\lambda } \left(\Phi _0-v \Phi _{\text{(1)}}\right){}^2}{2 \Phi _{\text{(1)}}^3} \\ \beta _0&=\frac{\left(\Phi _0-v \Phi _{\text{(1)}}\right) \left(6 \Phi _{\text{(1)}}^2 m_b^2 Z_{m^2}+\lambda _b Z_{\lambda } \left(\Phi _0-v \Phi _{\text{(1)}}\right){}^2\right)}{6 \Phi _{\text{(1)}}^3} \\ \alpha _0&=\frac{\lambda _b Z_{\lambda } \left(\Phi _0-v \Phi _{\text{(1)}}\right)}{\Phi _{\text{(1)}}^3}, \;\; \lambda _0=-\frac{\lambda _b Z_{\lambda }}{\Phi _{\text{(1)}}^3}, \; \; c_1=-\frac{Z_1}{\Phi _{\text{(1)}}}, \label{eq:cons} \end{split} \end{equation} and we can infer the \textit{field-map equation} \begin{equation} 2 m^2{}_0 \Phi '(\phi )^3=\lambda _b Z_{\lambda } \left(\Phi (\phi )-v \Phi '(\phi )\right)^2-2 m_b^2 Z_{m^2} \Phi '(\phi )^2 \label{eq:radEqn} \end{equation} \normalsize from the first constraint\footnote{One could also choose another constraint, but we prefer the first one because it is related to a physical parameter - the mass-squared coefficient.}. A similar method for relating a general potential to an effective field theory (around the vacuum expectation value) can be found in the appendix of Ref. \cite{Liu2018}, which considers a general parameterization of the Higgs potential that reproduces the standard model effective field theory. Interestingly, the field-map equation (\ref{eq:radEqn}) is a radical equation of order $N-1$, where $N$ is the highest power of $\phi$ in the EFT, i.e. $N-1=3$ in the case of the $\phi^4$ scalar field. Therefore, if there is a self-interaction of order $N$ in the EFT model, the solution consists of multiple models (up to $N-1$ models) for the underlying bare theory. The bare theories correspond to the same symmetry group of the EFT model, i.e. they reproduce the same symmetry of the EFT model.
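The cubic structure of Eq. (\ref{eq:radEqn}) in $\Phi'$ can be made explicit with a few lines of computer algebra. The following sketch (again our own, with assumed symbol names) treats $\Phi$ and $\Phi'$ as plain symbols and asks SymPy for the roots; each of the three branches defines a first-order ODE $\Phi' = f(\Phi)$, i.e. one candidate bare model, which can then be expanded in $\epsilon$ as in Eqs. (\ref{eq:fieldmapEqns}).
\begin{verbatim}
import sympy as sp

# The field-map equation (radEqn) is an ordinary cubic in dPhi.
Phi, dPhi, v = sp.symbols('Phi dPhi v')
m20, mb2, Zm2, Zlam, lam_b = sp.symbols('m2_0 mb2 Z_m2 Z_lam lam_b')

radEqn = sp.Eq(2 * m20 * dPhi**3,
               lam_b * Zlam * (Phi - v * dPhi)**2 - 2 * mb2 * Zm2 * dPhi**2)

# Cardano's formula yields up to N - 1 = 3 branches for the phi^4 EFT;
# each root is a candidate ODE dPhi/dphi = f(Phi) for one bare model.
branches = sp.solve(radEqn, dPhi)
print(len(branches))   # -> 3
\end{verbatim}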
We \textit{naively speculate} that the feature of having multiple bare models for a self-interacting EFT is rather general, because the procedure that leads to the radical equation above is quite generic; however, we cannot make this claim without a rigorous proof. To get a normalized kinetic term for $\tilde{\mathcal{L}}$, we set $c_1=1$. The local minimum, which we define as the vacuum expectation value (VEV) of $\tilde{\mathcal{L}}$, is found to be \begin{equation} \langle \Phi \rangle = v Z_1+\Phi _0 \label{eq:VEV}. \end{equation} FIG. \ref{fig:VTau3} illustrates the potentials of $\tilde{\mathcal{L}}$ for different values of $\Phi_0$. Note that we assume a small finite $\epsilon$ parameter; Ref. \cite{Chai2020.2} treats $\epsilon$ as a small finite parameter in a similar manner. \begin{figure}[htbp] \includegraphics[width=250pt]{VTauFIG3.png} \caption{\label{fig:VTau3} Potentials $\tilde{V}$ of the bare models with the inverted double-well potential generated by the mechanism for the $\phi^4$ EFT (the mass of the scalar field of the EFT is set to 1). We illustrate the potentials for four different values of the parameter $\Phi_0$.} \end{figure} By redefining $\Phi \to \Phi$ + VEV and dropping a constant, the $Z_2$ symmetry is restored. After substituting the leading-order terms of the counter-terms from \cite{lezioni99}, we find that the potential is an inverted double well whose parameters are \begin{equation} \begin{split} \tilde{V}&=-\frac{12 m^2 \Phi ^2}{\epsilon }-\frac{216 \left(16 \pi ^2\right)^5 \Phi ^4 \epsilon ^2}{\lambda ^4}, \\ \tilde{\lambda }&=\frac{5184 \left(16 \pi ^2\right)^5 \epsilon ^2}{\lambda ^4}, \; \; \tilde{m}=\frac{2 \sqrt{6} m}{\sqrt{\epsilon }}, \label{eq:VTauNPara} \end{split} \end{equation} where $m$ and $\lambda$ are the mass and the quartic coupling of the scalar field of the EFT, respectively. The cubic roots of the field-map radical equation (\ref{eq:radEqn}), after an $\epsilon$ expansion, lead to \small \begin{align} \Phi '(\phi )&=\frac{\lambda ^2}{3072 \pi ^4 \xi \epsilon }-\frac{\lambda ^2 v^2}{256 \left(\pi ^2 \mu^2 \xi \right)}+\frac{24 \pi ^2 v \epsilon \Phi (\phi )}{\mu^2}+O\left(\epsilon ^{3/2}\right) \\ \Phi '(\phi )&=\pm \frac{2 \sqrt{3} \pi \sqrt{\epsilon } \left| \Phi (\phi )\right| }{\mu}-\frac{12 \epsilon \left(\pi ^2 v \Phi (\phi )\right)}{\mu^2}+O\left(\epsilon ^{3/2}\right) \label{eq:fieldmapEqns}, \end{align} where $m\equiv i \mu$ and $m^2{}_0\equiv-\xi \, \tilde{m}^2$; the first solution of the field-map at leading order and the corresponding $\Phi_0$ are \small \begin{equation} \begin{split} \Phi (\phi )&=\frac{\lambda ^2 \mu^2 e^{-\frac{24 \pi ^2 v \epsilon (v-\phi )}{\mu^2}}}{73728 \pi ^6 v \epsilon ^2} + C, \\ \Phi _0&=\frac{\lambda ^2 \left(\mu^2 (\xi -1)+12 \pi ^2 v^2 \epsilon \right)}{73728 \pi ^6 \xi v \epsilon ^2} \label{eq:fieldmapSols1} \end{split} \end{equation} respectively, where $C$ is a constant; the second and third solutions of the field-map and $\Phi_0$ are \begin{equation} \begin{split} \Phi (\phi )&=-\frac{\lambda ^2 \mu^2 e^{\frac{2 \pi \sqrt{\epsilon } (v-\phi ) \left(\pm\sqrt{3} \mu+6 \pi v \sqrt{\epsilon }\right)}{\mu^2}}}{6144 \pi ^5 \epsilon ^{3/2} \left(\pm\sqrt{3} \mu+6 \pi v \sqrt{\epsilon }\right)}, \\ \Phi _0&=-\frac{\lambda ^2 \mu^2}{6144 \pi ^5 \epsilon ^{3/2} \left(\pm\sqrt{3} \mu+6 \pi v \sqrt{\epsilon }\right)} \label{eq:fieldmapSols2} \end{split} \end{equation} \normalsize respectively.
The suppression factor of the field-map solutions is $\mu/\!\sqrt{\epsilon}$, which consistently justifies the validity of the first-order expansion approximation at low energies. Checking the first solution against the condition of EFT correspondence, Eq. (\ref{eq:EFTCond}), we find that the coefficients of the high-dimensional operators $\phi^5$, $\phi^6$, $\phi^7$, and $\phi^8$ are of order $\mathcal{O}(\epsilon)$, $\mathcal{O}(\epsilon^2)$, $\mathcal{O}(\epsilon^3)$, and $\mathcal{O}(\epsilon^4)$, respectively\footnote{We expand the field-map up to second order in this work.}. For the second and third solutions, the coefficients of the high-dimensional operators $\phi^5$, $\phi^6$, $\phi^7$, and $\phi^8$ are of order $\mathcal{O}(\epsilon^{1/2})$, $\mathcal{O}(\epsilon)$, $\mathcal{O}(\epsilon^{3/2})$, and $\mathcal{O}(\epsilon^2)$, respectively. Therefore, the solutions are valid because of the suppression factor $\epsilon$. We naturally define the energy-scale threshold (cutoff) of the EFT by the mass parameter $\tilde{m}$ of Eq. (\ref{eq:VTauNPara}). A mechanism involving a suppression factor is not uncommon; for instance, in \cite{Chen2017}, a Boltzmann suppression is found, such that observing particles with mass much larger than the Hubble scale of inflation is unlikely. We estimate the value of $\epsilon$ to be $\sim3\times10^{-8}$, and the cutoff scale of the $\phi^4$ EFT to be of order $\mathcal{O}(10^6)$ GeV. For this estimate, we use the renormalisation group equation for the running of $\lambda$ \cite{Peskin} and the coupling-matching technique\footnote{Matching the coupling values of the low-energy and high-energy models at the cutoff.} \cite{Molinaro2018}, as well as the measured values of the Higgs boson's mass and vacuum expectation value \cite{Khan2014}. The smallness of $\epsilon$ is consistent with our assumption. The seesaw mechanism of the Pati-Salam model \cite{Molinaro2018} places the Pati-Salam breaking scale at $\sim2000$ TeV, and the cutoff energy of an inverse seesaw model \cite{Nomura2019} addressing the tiny neutrino masses is around the TeV scale. Interestingly, the estimated orders of magnitude of the scales of those beyond-standard-model scenarios are fairly in line with the cutoff scale calculated from our mechanism. We can compare the VEVs via Eq. (\ref{eq:VEV}). The ratio of the VEV of the first solution (Eq. (\ref{eq:fieldmapSols1})) to that of the second (or third) solution (Eq. (\ref{eq:fieldmapSols2})) is $\sim\mathcal{O}(10^{-2}) / \! \sqrt{\epsilon}$.\footnote{We assume the parameter $\xi$ is of order unity.} Hence, the VEVs of the solutions can have a \textit{hierarchy structure}; with the $\epsilon$ estimate above, the ratio of the VEVs is $\sim\mathcal{O}(10^2)$. Finally, we briefly discuss the irrelevant operators from the perspective of our mechanism. As mentioned in Polchinski's work \cite{Polchinski}, the effect of the irrelevant operators is present but suppressed by $\mathcal{O}[\Lambda_{R}^2 / \Lambda_{0}^2]$. To include the irrelevant operators, we can add the higher-dimensional operators with prefactors to the Lagrangians of Eqs. (\ref{eq:LTau0}) and (\ref{eq:Lren}) and then follow the generic method of this section to obtain the constraints and, trivially, an order-$(N-1)$ radical equation for the field-map.
If we can probe the related parameters of the high-energy effects of the generated bare model with observational data from the early universe, we may, in principle, examine the low-energy correction(s) from those irrelevant operators and validate the suppression factor $\mathcal{O}[\Lambda_{R}^2 / \Lambda_{0}^2]$. \section{\label{sec:level1}Generating Inflaton model} In this section, we generalise the mechanism further by introducing the \textit{coupling-maps} $\lambda_{i}(\Phi)$ and generate an inflaton model. We demonstrate how the mechanism leads to the power-law inflaton model of slow-roll inflation from the $\phi^4$ EFT. As consistency checks, we compute the spectral index ($n_{s}$), the number of e-folds, and the tensor-to-scalar ratio ($r$), and compare them against the observational constraints in Refs. \cite{Leach2003, planck2018}. We propose the following general form of the underlying bare theory: \small \begin{equation} \frac{1}{2} c_1 \partial _{\mu } \Phi \partial ^{\mu } \Phi + \lambda _1(\Phi ) \Phi +\frac{1}{2} \lambda _2(\Phi ) \Phi ^2 +\frac{1}{3} \lambda _3(\Phi )\Phi ^3 +\frac{1}{4} \lambda _4(\Phi )\Phi ^4 \label{eq:Linf0}. \end{equation} \normalsize In Ref. \cite{Chen2017}, the authors introduce functions as a general parameterisation of the inflaton-standard-model couplings; analogously, we call the functions of the self-couplings coupling-maps. Following the same approach as in the last section, we match the EoMs, expand the coupling-maps to first order around $\Phi_0$, and obtain the constraints. We arrive at the following coupling-map differential equations: \begin{widetext} \footnotesize \begin{equation} \begin{split} \lambda _1(\Phi )&=\frac{\left(\Phi -v \Phi '\right) \left(2 \left(6 m_b^2 Z_{m^2} \left(\Phi '\right)^2+\lambda _b Z_{\lambda } \left(\Phi -v \Phi '\right)^2\right)+3 \left(\Phi '\right)^3 \lambda _4'(\Phi ) \left(\Phi -v \Phi '\right)^3\right)}{12 \left(\Phi '\right)^3} \\ \lambda _2(\Phi )&=-\frac{2 m_b^2 Z_{m^2} \left(\Phi '\right)^2+\lambda _b Z_{\lambda } \left(\Phi -v \Phi '\right)^2}{2 \left(\Phi '\right)^3}-\lambda _1'(\Phi )-\lambda _4'(\Phi ) \left(\Phi -v \Phi '\right)^3 \\ \lambda _3(\Phi )&=\frac{\left(\Phi -v \Phi '\right) \left(\lambda _b Z_{\lambda }+3 \left(\Phi '\right)^3 \lambda _4'(\Phi ) \left(\Phi -v \Phi '\right)\right)}{2 \left(\Phi '\right)^3}-\frac{1}{2} \lambda _2'(\Phi ), \;\;\; \lambda _4(\Phi )=-\frac{\lambda _b Z_{\lambda }}{6 \left(\Phi '\right)^3}-\frac{1}{3} \lambda _3'(\Phi )+\lambda _4'(\Phi ) \left(v \Phi '-\Phi \right) \label{eq:couplingEqns}, \end{split} \end{equation} \normalsize \end{widetext} where $\Phi' = d\Phi / d\phi$. The physical interpretation is that the inflaton model built from the coupling-maps becomes approximately the bare model of the $\phi^4$ EFT at $\Phi\sim\Phi_0$. We set the boundary conditions for the coupling-maps by matching to the model $\tilde{\mathcal{L}}$ of the previous section at $\Phi=\Phi_0$, which implies: \small \begin{equation} \begin{split} \lambda _1\left(\Phi _0\right)&=-\frac{\left(v Z_1+\Phi _0\right) \left(6 m_b^2 Z_{m^2} Z_1^2+Z_{\lambda } \lambda _b \left(v Z_1+\Phi _0\right){}^2\right)}{6 Z_1^3}, \\ \lambda _2\left(\Phi _0\right)&=\frac{2 m_b^2 Z_{m^2} Z_1^2+Z_{\lambda } \lambda _b \left(v Z_1+\Phi _0\right){}^2}{2 Z_1^3}, \\ \lambda _3\left(\Phi _0\right)&=-\frac{Z_{\lambda } \lambda _b \left(v Z_1+\Phi _0\right)}{2 Z_1^3}, \; \; \lambda _4\left(\Phi _0\right)=\frac{Z_{\lambda } \lambda _b}{6 Z_1^3}. \end{split} \end{equation} \normalsize
In order to solve Eq. (\ref{eq:couplingEqns}), we need to specify the field-map from Eq. (\ref{eq:fieldmapSols1}) or Eq. (\ref{eq:fieldmapSols2}). We choose the first solution, as it lies at a higher energy scale and thus connects to the inflation scale, and we solve the system numerically\footnote{We assume the coupling parameter $\lambda=1$ and $\xi \sim \mathcal{O}(1)$ for simplicity.}. The numerical result is shown in FIG. \ref{fig:VInf}. \begin{figure}[htbp] \includegraphics[width=250pt]{infFig.png} \caption{\label{fig:VInf} Inflaton potential $V_{\text{inf}}$ generated by the mechanism and solved numerically. In the high-energy regime, the potential behaves as a power law. We extrapolate the inflaton potential in power-law form as $V_{\text{inf}} \propto \Phi ^{3.02}$.} \end{figure} Although it is difficult to solve the system of Eq. (\ref{eq:couplingEqns}) analytically, the inflaton potential can, interestingly, be approximated by a power-law potential at high energies, \begin{equation} V_{\text{inf}} \propto \Phi ^{\alpha} \label{eq:Vinf}, \end{equation} as FIG. \ref{fig:VInf} shows; we estimate $V_{\text{inf}} \propto \Phi ^{3.02}$. Note that the power-law potential corresponds to chaotic inflation \cite{Baumann} (one of the large-field inflaton models). Since Eq. (\ref{eq:couplingEqns}) is a differential system parameterised by the $Z$-terms of the EFT, we may expect the parameters of the $\phi^4$ EFT model to be connected to the parameters of this inflation model; however, without an analytical solution, such a relationship is difficult to establish. Using the extrapolated power-law potential of Eq. (\ref{eq:Vinf}), we compute the slow-roll parameters ($\epsilon_{\text{v}}$ and $\eta_{\text{v}}$) with the formulas from \cite{Wang2014} and find that inflation ends at $\Phi\sim\mathcal{O}(10^{18})$ GeV, about the Planck mass scale. Given the $n_s$ formula from Ref. \cite{Pavluchenko2004}, \begin{equation} n_s - 1 =2 \eta_{\text{v}}-6 \epsilon_{\text{v}} + \frac{1}{3} (44-18 c) \epsilon_{\text{v}}^2+(4 c-14) \eta_{\text{v}} \epsilon_{\text{v}}+\frac{2 \eta_{\text{v}}^2}{3} \label{eq:ns}, \end{equation} where $c\simeq0.08145$, and $n_s = 0.97$ from \cite{planck2018}\footnote{We use the upper bound of $n_s$.}, we calculate the number of e-folds, the tensor-to-scalar ratio ($r$), and the energy scale of inflation to be $\simeq83$, $0.14$, and $5\times10^{15}$ GeV, respectively. Compared with the typically expected number of e-folds ($>60$) \cite{Wang2014} and the inflation energy-scale bound \begin{equation} 3 < \frac{E_{\text{inf}}}{10^{15}\;\text{GeV}} < 29 \end{equation} from Ref. \cite{Leach2003}, the e-folds and the energy scale of the generated inflaton model are fairly reasonable. However, there is tension for $r$, as the latest observational constraint \cite{planck2018} is $r< 0.10$. The power-factor estimate of Eq. (\ref{eq:Vinf}), $\alpha\simeq3.0$, is within the observational bound $\alpha\lesssim(3.5-4.5)$ from Ref. \cite{Pavluchenko2004}. Nevertheless, the aim of this section is to demonstrate that the mechanism can be applied with the coupling-map generalization; a more intensive study is needed to obtain a more accurate inflation model.
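These numbers can be cross-checked with a few lines of code. The sketch below is our own; it assumes the standard leading-order slow-roll expressions for $V \propto \Phi^\alpha$, namely $\epsilon_{\text{v}} = \alpha^2 M_p^2/(2\Phi^2)$, $\eta_{\text{v}} = \alpha(\alpha-1) M_p^2/\Phi^2$, $N = (\Phi_*^2-\Phi_\text{end}^2)/(2\alpha M_p^2)$, and $r \approx 16\epsilon_{\text{v}}$, rather than the exact formulas of Ref. \cite{Wang2014}. It reproduces $N \simeq 83$ and $r \simeq 0.14$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

alpha, c = 3.02, 0.08145     # power-law exponent; constant from Eq. (ns)
M_p = 1.0                    # work in units of the reduced Planck mass

def slow_roll(Phi):
    """Leading-order slow-roll parameters for V ~ Phi**alpha."""
    eps = alpha**2 * M_p**2 / (2.0 * Phi**2)
    eta = alpha * (alpha - 1.0) * M_p**2 / Phi**2
    return eps, eta

def n_s(Phi):
    """Second-order spectral index, Eq. (ns)."""
    eps, eta = slow_roll(Phi)
    return (1.0 + 2.0 * eta - 6.0 * eps + (44.0 - 18.0 * c) * eps**2 / 3.0
            + (4.0 * c - 14.0) * eta * eps + 2.0 * eta**2 / 3.0)

# Horizon-exit field value from n_s = 0.97, then e-folds and r ~ 16 eps.
Phi_star = brentq(lambda p: n_s(p) - 0.97, 5.0, 100.0)
Phi_end = alpha * M_p / np.sqrt(2.0)       # end of inflation: eps_v = 1
N_efolds = (Phi_star**2 - Phi_end**2) / (2.0 * alpha * M_p**2)
r = 16.0 * slow_roll(Phi_star)[0]
print(f"Phi* = {Phi_star:.1f} M_p, N = {N_efolds:.0f}, r = {r:.2f}")
\end{verbatim}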
\section{Discussion} Understanding the scale of a physical system plays a crucial role in different fields of physics. Inspired by Wilson's and Polchinski's work, we propose a mechanism that extends the scaling transformation of quantum field models, and we describe it using the example of the real scalar $\phi^4$ effective field theory. We found the interesting and generic feature that multiple bare theories (up to $N-1$) with a similar structure of the potential are associated with a self-interaction of order $N$ in the EFT. We further generalised the mechanism by introducing coupling functions and applied it to the inflation paradigm; the generated inflaton is found to have a power-law potential. We expect that future work can improve the understanding of the transition from high-energy models to the low-energy EFT, e.g. via lattice methods for field theory. One may also use the mechanism to study a fermionic system with the gauge interactions of the standard model's symmetry group and explore whether the hierarchy structure hinted at in Section 3 appears; can we formulate a unification scheme for the families of fermions? We must keep in mind that the tensor-to-scalar ratio deduced from the generated inflaton in Section 4 is in tension with recent observations. It is worthwhile to pursue a more systematic analysis to improve the model and to make novel predictions that can be checked against observations, such as cosmic non-Gaussianities. \begin{acknowledgments} JCHL would like to thank Professor Wang Yi of HKUST for the valuable comments and advice. \end{acknowledgments}
{ "timestamp": "2021-03-10T02:15:06", "yymm": "2012", "arxiv_id": "2012.09029", "language": "en", "url": "https://arxiv.org/abs/2012.09029" }
\section{Introduction} The numerous examples of the application of hybrid perovskites in solar cells,\cite{Park2019} light-emitting diodes,\cite{Xu2019} field-effect transistors,\cite{Wu2019} lasers\cite{Stylianakis2019} and other (opto)electronic devices demonstrate the vast potential of these materials due to their advantageous electrical and optical properties.\cite{Fu2020,Chouhan2020} In the development of solar cells, hybrid perovskites are of particular interest as an inexpensive material\cite{Song2017} with remarkable absorption properties\cite{Fujiwara2018} and performance.\cite{AbdMutalib2018} Hybrid perovskites can be used as a highly efficient single-junction photovoltaic technology or in combination with silicon as a tandem solar cell.\cite{Li2020,AlAshouri2020} Several studies suggest that hybrid perovskites are mixed ionic--electronic semiconductors showing phenomena such as hysteresis or giant dielectric effects.\cite{Liu2019,JuarezPerez2014} Although the efficiency of perovskite solar cells has advanced remarkably,\cite{Nrel} fundamental questions concerning the interplay of ions and electronic charge carriers are still under debate.\cite{Mosconi2016} In addition to hysteresis, mobile ions are held responsible for band bending, the accumulation of charge carriers near interfaces and modified charge-carrier injection.\cite{LopezVaro2018,Gottesman2016,Ebadi2019,Li2020_2} Interestingly, basic device properties such as the open-circuit voltage were found to be influenced by the properties of ionic defects, which in turn depend on the details of perovskite processing and device architecture.\cite{Reichert2020,reichert2020probing} Consequently, the mutual impact of ions on electronic charge carriers and vice versa limits the elucidation of the electronic properties of perovskite solar cells. For example, charge-carrier transport in perovskite solar cells was investigated by Herz and co-workers.\cite{Herz2017} The charge-carrier mobility was found to depend strongly on the device architecture, the perovskite material and its exact stoichiometric composition. Recombination dynamics of charge carriers in hybrid perovskites have often been investigated using photoluminescence spectroscopy, with the dominant recombination mechanisms depending on the photo-excitation densities.\cite{Stranks2014,Johnston2015} Interestingly, the low phonon energy in hybrid perovskites was identified as a critical factor for nonradiative recombination.\cite{Kirchartz2018} However, it is well known that the photoluminescence of perovskite materials is influenced by various factors -- including the measurement conditions and environment -- thus complicating the extraction of fundamental information regarding the recombination processes in perovskite materials.\cite{Goetz2020} \begin{figure}[t] \includegraphics*[scale=0.8]{figures/results_1.pdf} \caption{a) Current density--voltage characteristics with $J_\mathrm{sc}(V_\mathrm{oc})$ pairs and b) chemical capacitance versus voltage for different light intensities ranging from 0.0001~sun to 1~sun.
Different regimes have been colour-coded as follows: $V<0.71~\mathrm{V}$: dominant $R_\mathrm{p}$; $0.71~\mathrm{V}<V<0.85~\mathrm{V}$: ionic influence on quasi-Fermi-level splitting (A); $0.85~\mathrm{V}<V<0.98~\mathrm{V}$: surface recombination (B); $V>0.98~\mathrm{V}$: dominant $R_\mathrm{s}$.} \label{fig:results_1} \end{figure} Small-perturbation techniques such as impedance spectroscopy (IS), intensity-modulated photocurrent spectroscopy (IMPS) and intensity-modulated photovoltage spectroscopy (IMVS) are powerful methods for the investigation of charge-carrier dynamics in semiconducting materials. In contrast to IS, in which the sample is excited by an AC voltage modulation, a light-intensity modulation $P=P_0+P_\mathrm{ac}\sin(\omega t)$ is employed for IMPS and IMVS. The photocurrent $\Delta I$ of the solar cell is tracked during IS and IMPS measurements, while for IMVS, the photovoltage $\Delta V$ is recorded. For different DC intensities, the frequency $\omega$ of the AC component is also varied. The measured photocurrent and photovoltage have the same frequency, but are usually phase ($\phi$) shifted and have a different amplitude. The transfer functions for IMPS and IMVS are as follows: \begin{align} \label{eq:TF_IMPS_IMVS} Z(IMPS)= \frac{\Delta I}{P}\mathrm{e}^{\mathrm{i}\phi}, \qquad& Z(IMVS)= \frac{\Delta V}{P}\mathrm{e}^{\mathrm{i}\phi}. \end{align} With IS, the frequency dependence of the capacitance $C(\omega)$ can be calculated from the impedance $Z=\Delta V/ \Delta I$ by selecting a suitable equivalence model, the simplest one being \begin{equation} \label{eq:C_Z} C=\frac{\mathrm{Im}\left(\frac{1}{\underline{Z}}\right)}{\omega} \end{equation} (a numerical sketch of this evaluation is given below). IMPS and IMVS were previously employed to study recombination and transport processes of charge carriers in various types of solar cells.\cite{Ravishankar2019,Basham2014,Halme2011,Heiber2018} In addition to their time-domain counterparts, such as transient photovoltage/photocurrent decay (TPV/TPC), IMPS and IMVS are gaining significance for the investigation of hybrid perovskite solar cells. For example, Pockett et al.\ investigated a series of planar perovskite solar cells with IMPS, IMVS and TPV and showed that all methods yield consistent values for the diode ideality factor.\cite{Pockett2015} Another important work on the understanding and interpretation of intensity-modulated spectroscopy on perovskite solar cells is the study by Bernhardsgrütter et al.\cite{Bernhardsgrtter2019} The authors employed simulations to show that, in addition to electronic transport and recombination, the transport of ions is visible in the spectra at low modulation frequencies. Additionally, various simulation parameters such as the illumination intensity, charge-carrier mobilities, Shockley-Read-Hall lifetimes, ion densities and surface recombination velocities were varied, and their influence on the resulting IMPS and IMVS spectra was examined.
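As a practical note, Eq.~\eqref{eq:C_Z} is straightforward to apply to measured impedance data. The following minimal Python sketch is our own illustration; the circuit values are placeholders, not fitted device parameters.
\begin{verbatim}
import numpy as np

def capacitance_from_impedance(omega, Z):
    """C(omega) = Im(1/Z) / omega, Eq. (C_Z), for complex impedance data."""
    return np.imag(1.0 / Z) / omega

# Synthetic example: series resistance R_s plus a parallel R-C element
# (placeholder values, only to exercise the formula).
omega = np.logspace(0, 6, 300)                # angular frequency / rad s^-1
R_s, R_p, C_0 = 10.0, 1.0e3, 1.0e-7           # ohm, ohm, farad
Z = R_s + R_p / (1.0 + 1j * omega * R_p * C_0)
C = capacitance_from_impedance(omega, Z)      # ~ C_0 at low frequencies
\end{verbatim}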
IMPS and IMVS have previously been employed to study recombination and transport processes of charge carriers in various types of solar cells.\cite{Ravishankar2019,Basham2014,Halme2011,Heiber2018} In addition to their time-domain counterparts, such as transient photovoltage/photocurrent decay (TPV/TPC), IMPS and IMVS are gaining significance for the investigation of hybrid perovskite solar cells. For example, Pockett et al.\ investigated a series of planar perovskite solar cells with IMPS, IMVS and TPV and showed that consistent values for the diode ideality factor can be obtained with all methods.\cite{Pockett2015} Another important work on the understanding and interpretation of intensity-modulated spectroscopy on perovskite solar cells is the study by Bernhardsgrütter et al.\cite{Bernhardsgrtter2019} The authors employed simulations to show that, in addition to electronic transport and recombination, the transport of ions is visible in the spectra at low modulation frequencies. Additionally, various simulation parameters such as illumination intensity, charge-carrier mobilities, Shockley-Read-Hall lifetimes, ion densities and surface recombination velocities were varied and their influence on the resulting IMPS and IMVS spectra was examined. Ravishankar et al.\ developed an equivalent circuit model to find a basic explanation for the response of intensity-modulated spectroscopic measurements on different perovskite materials and contact layers, as well as for the interaction of electronic and ionic charge carriers.\cite{Ravishankar2019,Ravishankar2019_2} Importantly, the study by Chen et al.\ on planar perovskite solar cells indicates that IMVS provides information about a thermally activated recombination process of free charge carriers with homogeneously accumulated ones at the hole-transport layer interface.\cite{Chen2018} This result is confirmed by the study of Guill\'{e}n et al.\ with IMVS on perovskite solar cells fabricated in a mesoporous n-i-p architecture.\cite{Guilln2014} In the present study, previous indications that surface recombination is dominant in perovskite solar cells are confirmed and validated by measuring current density--voltage (J--V) characteristics, IS, IMPS and IMVS on planar MAPbI\textsubscript{3} perovskite solar cells (detailed processing conditions can be found in Sec.~\ref{sec:methods}). We utilise the results of these measurements to significantly expand this explanation by providing a comprehensive discussion of the solar cell ideality factor. Our results indicate that injection barriers -- presumably caused by the accumulation of ions at the interfaces -- limit the open-circuit voltage ($V_\mathrm{oc}$) of the devices. This interpretation is supported by a comparison of IS, IMPS and IMVS time constants with ionic migration rates from our previous work\cite{Reichert2020,reichert2020probing} and those reported in literature. \section{Results and Discussion}\label{sec:results} \begin{figure}[t] \includegraphics*[scale=0.75]{figures/results_2.pdf} \caption{a) Intensity-modulated photovoltage spectroscopy (IMVS), b) intensity-modulated photocurrent spectroscopy (IMPS), c) derivative $-\omega\mathrm{d}C/\mathrm{d}\omega$ of the IS measurements (Fig.~\ref{fig:Cf}) for different light intensities ranging from 0.0001 sun to 1 sun.} \label{fig:results_2} \end{figure} To acquire a complete data set via our multi-technique approach, a series of J--V (Fig.~\ref{fig:results_1}a), capacitance--voltage (CV) (Fig.~\ref{fig:results_1}b and \ref{fig:CV}), IMVS (Fig.~\ref{fig:results_2}a), IMPS (Fig.~\ref{fig:results_2}b) and IS (Fig.~\ref{fig:Cf}) measurements with light intensities varying over five orders of magnitude was performed. CV measurements were done by a fast sweeping method, where a pre-bias of 1.2~V is applied to the solar cell for one minute.\cite{Fischer2018} Detailed measurement conditions can be found in Sec.~\ref{sec:methods}. The $J_\mathrm{sc}(V_\mathrm{oc})$ values extracted from the J--V curves measured under illumination fit well with the measurements performed in the dark, which motivates the calculation of the ideality factor as will be discussed later. To obtain the chemical capacitance $C_\mu$ (Fig.~\ref{fig:results_1}b), the geometrical capacitance $C_\mathrm{geo}$ has to be subtracted from the CV measurements (Fig.~\ref{fig:CV}). A small shift in the peak position and height of the injection capacitance can be observed, which is linked to the change in the charge-carrier density due to illumination. The chemical capacitance enables the calculation of the charge-carrier density within the active layer, as described below. In the case of IMVS, two peaks can be observed, which were labelled $\rho$ and $\delta$.
The illumination dependence of the $\rho$ peak extends over the entire frequency range, whereas $\delta$ shows only a small dependence on light intensity. An approximately linear increase in amplitude with increasing illumination was measured for both peaks. Three peaks are visible for IMPS, which were labelled $hf$, $\beta$ and $\gamma$. The high-frequency $hf$ response shows a negligible illumination dependence, in contrast to the $\beta$ and $\gamma$ peaks. For evaluating the IS data, the derivative $-\omega\mathrm{d}C/\mathrm{d}\omega$ was calculated as shown in Fig.~\ref{fig:results_2}c. Peaks in Fig.~\ref{fig:results_2}c are correlated to ionic migration rates, \begin{equation} \label{eq:e_t} e_\mathrm{t}=\omega_\mathrm{max}. \end{equation} Because of the agreement of the peak positions of IS with those observed via IMPS, these three peaks were also labelled $hf$, $\beta$ and $\gamma$. Notably, the responses $\beta$ and $\gamma$ show a strong dependence on illumination in both position and amplitude, in contrast to the $hf$ peak. \begin{figure}[t] \includegraphics*[width=\columnwidth]{figures/Masterarrhenius.pdf} \caption{Comparison of ionic migration rates determined by IS, IMPS and IMVS with values from literature and our recent work (Reichert 2020\cite{reichert2020probing}).} \label{fig:Masterarrhenius} \end{figure}
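In practice, the migration rates of Eq.~(\ref{eq:e_t}) follow from the peak positions of the derivative spectrum. A minimal Python/NumPy sketch for the single-peak case (the frequency axis \texttt{w} and capacitance spectrum \texttt{C} are assumed inputs, e.g.\ obtained via Eq.~(\ref{eq:C_Z})):
\begin{verbatim}
import numpy as np

def migration_rate(w, C):
    # Derivative -w * dC/dw evaluated on the measured grid.
    spectrum = -w * np.gradient(C, w)
    # The peak position yields the ionic migration rate e_t = w_max.
    return w[np.argmax(spectrum)]
\end{verbatim}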
To identify the peaks observed in the IS, IMPS and IMVS measurements (Figs.~\ref{fig:results_2}a, \ref{fig:results_2}b and \ref{fig:results_2}c), we compared the ionic migration rates calculated from their peak positions with those from our previous work\cite{Reichert2020,reichert2020probing} and other reports found in literature using an Arrhenius diagram (Fig.~\ref{fig:Masterarrhenius}). As shown in Fig.~\ref{fig:Masterarrhenius}, the determined rates of $\beta$, $\gamma$ and $\delta$ from IS, IMPS and IMVS agree well with ionic migration rates from literature, including our results on a batch of identically fabricated solar cells with variation of the precursor stoichiometry.\cite{Reichert2020,reichert2020probing} We therefore conclude that the low-frequency responses $\beta$, $\gamma$ and $\delta$ can be attributed to the same ionic defects as observed in the previous studies. This result is in agreement with the simulations of Bernhardsgrütter et al.\cite{Bernhardsgrtter2019} and other studies.\cite{TurrenCruz2018,Domanski2017,Chen2018,Prochowicz2017,Roose2017} The movement of mobile ions during IMPS and IMVS measurements can be explained by the variation of the internal electrical field caused by changing the photoinduced free charge-carrier density. This change leads to a redistribution of mobile ions and is therefore analogous to the explanation of ionic movement caused by the modulation of the Fermi level in IS. Additionally, the dependence of the peak height in IS (Fig.~\ref{fig:results_2}c), which is related to the ionic-defect density, supports this finding. The observable increase of the peak height in Fig.~\ref{fig:results_2}c can be explained by a change in the thickness of the Debye layers formed by ion accumulation at the interfaces.\cite{reichert2020probing,Almora2015} The migration and redistribution itself can have an impact on the electronic charge carriers and vice versa, making the overall charge-carrier dynamics in perovskite solar cells rather complex.\cite{Bernhardsgrtter2019} Deviations between the migration rates obtained by IS and those from our previous works\cite{Reichert2020, reichert2020probing} (i.e.\ for $\gamma$) can likely be explained by small variations in the processing conditions. Differences between IS and IMPS/IMVS can result from the different measurement methods, as they can probe different parts of the same ionic-defect distribution.\cite{Reichert2020} It has been previously shown that the high-frequency response $hf$ in the IMPS spectra is dominated by the $RC$ time constant.\cite{Bernhardsgrtter2019, Pockett2015,Ravishankar2019_2,Prochowicz2017} This finding is supported by the negligible intensity dependence of the peak position in IMPS and IS. Consequently, no information concerning the charge-carrier transport in the perovskite layer can be obtained. We point out that the calculation of the charge-carrier mobility from the active layer thickness $L$ and the peak frequency $\omega_{hf}$ of the $hf$ response,\cite{Nojima2019} \begin{equation} \label{eq:mobility} \mu=\frac{\omega_{hf}L^2}{2V_\mathrm{oc}}, \end{equation} would result in a value of $10^{-3}~\mathrm{cm^2/(Vs)}$, which is at least two orders of magnitude lower than values reported in literature.\cite{Long2014,Herz2017} We therefore estimated the geometrical capacitance from the capacitive response at $\omega \approx 2\cdot 10^5~\mathrm{Hz}$ for further calculations, i.e.\ before the $hf$ response dominates the spectra. We have also chosen this frequency for the CV measurement in order to exclude the influence of mobile ions on the calculation of the charge-carrier density, as discussed later. Since IMVS is measured at $V_\mathrm{oc}$, where no net current is flowing, the series resistance is not a critical parameter and the interpretation of the high-frequency response in IMVS remains valid. The IMVS $\rho$ response, with an intensity dependence over the entire frequency range, cannot be compared to the ionic migration rates from Fig.~\ref{fig:Masterarrhenius}. We therefore propose that the $\rho$ response is the result of recombination of charge carriers. To support this concept, a detailed analysis of the solar cells' ideality factor is provided in the following, to determine the dominant recombination mechanism and the impact of mobile ions on the electronic landscape. First, the calculation of the ideality factor from the measured $J_\mathrm{sc}(V_\mathrm{oc})$ values is considered.
Starting from the well-established Shockley diode equation to describe the current--voltage characteristics of perovskite solar cells,\cite{Shockley1949} \begin{equation} J(V)=J_0 \left( \mathrm{exp}\left(\frac{eV}{n_\mathrm{id}k_\mathrm{B}T} \right)-1 \right)-J_\mathrm{sc}, \label{eq:ideal_shockley_dark} \end{equation} where $J_0$ is the saturation current density, $e$ the elementary charge, and $k_\mathrm{B}T$ the thermal energy, an equation for determining the ideality factor can be found by setting $J(V_\mathrm{oc})=0$ and assuming $J_0 \ll J_\mathrm{sc}$:\cite{Wolf1963,Tvingstedt2016} \begin{equation} \mathrm{ln}(J_\mathrm{sc})=\frac{eV_\mathrm{oc}}{n_\mathrm{id}k_\mathrm{B}T}+\mathrm{ln}(J_0). \label{eq:nid} \end{equation} Then $n_\mathrm{id}$ can be obtained by a fit of the $J_\mathrm{sc}(V_\mathrm{oc})$ values in Fig.~\ref{fig:results_1}a. \begin{figure}[t] \includegraphics*[scale=0.8]{figures/discussion.pdf} \caption{a) $n_\mathrm{id}$ calculated from the real part of the IMVS measurements according to Fig.~\ref{fig:Re_IMVS}, from the $J_\mathrm{sc}(V_\mathrm{oc})$ values and $n_\mathrm{R}$ using Eqn.~\ref{eq:n_R}. J--V and partly $n_\mathrm{R}$ include the ionic response. Only the ideality factor found in regime B, applied also to regime A, allows a suitable $V_\mathrm{oc}$ reconstruction in b) across both regimes. b) Measured and reconstructed $V_\mathrm{oc}$ versus the generation rate $G$. The dashed line represents a linear curve as a guide to the eye.} \label{fig:discussion} \end{figure} A single fit over the entire voltage range results in a large fitting error. Therefore, two fits were made, yielding values of 1.65 and 0.74 for the low and high illumination intensities (i.e.\ low and high generation rates $G$), respectively, as shown in Fig.~\ref{fig:discussion}a. We labelled the low-intensity regime with A and the high-intensity regime with B. Assuming a symmetric quasi-Fermi level splitting, the ideality factor is linked to the recombination order $\lambda+1$ as follows:\cite{Wolff2019} \begin{equation} n_\mathrm{id}\approx \frac{2}{\lambda+1}, \label{eq:n_id} \end{equation} leading to values ranging from 2/3 (Auger recombination) via 1 (direct band-to-band recombination) to 2 (Shockley-Read-Hall recombination, SRH).\cite{Nelson2003,Wrfel2005} This would suggest that SRH recombination is dominant at low light intensities, while Auger recombination is dominant at high intensities. However, Auger recombination was found to be unlikely for such low charge-carrier densities. Therefore, an alternative explanation for these low ideality factors -- surface recombination -- will be discussed later. Next, the calculation of the ideality factor from intensity-modulated spectroscopy is considered. The charge-carrier density can be determined using the following equation:\cite{Heiber2018} \begin{equation} n=\frac{1}{eL}\int_{V_\mathrm{sat}}^{V}C_\mu(V')\mathrm{d}V'+n_\mathrm{sat}, \label{eq:n} \end{equation} where $V_\mathrm{sat}$ and $n_\mathrm{sat}$ refer to the saturation voltage and saturation charge-carrier density, respectively. $V_\mathrm{sat}$ is typically chosen to be in the reverse-bias regime. $C_\mu$ is shown in Fig.~\ref{fig:results_1}b. $n_\mathrm{sat}$ can be estimated by:\cite{Proctor2013} \begin{equation} n_\mathrm{sat}(V_\mathrm{sat})=\frac{1}{eL}C_\mu (V_\mathrm{sat})(V_\mathrm{oc}-V_\mathrm{sat}), \label{eq:n_sat} \end{equation} and \begin{equation} V'=V-J(V)AR_\mathrm{s}, \label{eq:V_sat} \end{equation} which accounts for the voltage drop caused by the series resistance.
Here $A$ is the area of the active layer and $R_\mathrm{s}$ is the series resistance. The $n(V_\mathrm{oc})$ dependence yields a curve which can be described by:\cite{Foertig2012} \begin{equation} n(V_\mathrm{oc})=n_0 \mathrm{exp}\left(\frac{eV_\mathrm{oc}}{n_\mathrm{n}k_\mathrm{B}T} \right), \label{eq:n_Voc} \end{equation} where $n_0$ refers to the dark charge-carrier density at 0~V. Based on Eqn.~\ref{eq:n_Voc}, the ideality factor of the charge-carrier density is found to be $n_\mathrm{n}=6.0$ in regime A and $n_\mathrm{n}=7.8$ in regime B, as shown in Fig.~\ref{fig:n_tau}a. The charge-carrier lifetime $\tau$ can be obtained from the $\rho$ response peaks of the IMVS measurement:\cite{Guilln2014} \begin{equation} \tau=\frac{1}{\omega_\mathrm{max}}. \label{eq:tau} \end{equation} The dependence of $\tau$ on $V_\mathrm{oc}$ follows:\cite{Foertig2012} \begin{equation} \tau_\mathrm{n}(V_\mathrm{oc})=\tau_0 \mathrm{exp}\left(-\frac{eV_\mathrm{oc}}{n_\tau k_\mathrm{B}T} \right), \label{eq:tau_Voc} \end{equation} where $\tau_0$ is the dark charge-carrier lifetime at 0~V. Fitting with Eqn.~(\ref{eq:tau_Voc}) yields ideality factors of the charge-carrier lifetime of $n_\tau=1.18$ in regime A and $n_\tau=0.82$ in regime B. These two ideality factors can be combined to calculate the ideality factor of recombination $n_\mathrm{R}$:\cite{Foertig2012,Set2015} \begin{equation} \frac{1}{n_\mathrm{R}}=\frac{1}{n_\mathrm{n}}+\frac{1}{n_\mathrm{\tau}}. \label{eq:n_R} \end{equation} For regime A, $n_\mathrm{R}$ equals 0.99, whereas for regime B $n_\mathrm{R}$ is 0.74. The latter value is consistent with the diode ideality factor calculated from the $J_\mathrm{sc}(V_\mathrm{oc})$ values at high illumination intensities. For regime A, $n_\mathrm{R}$ is too low in comparison to the result from the $J_\mathrm{sc}(V_\mathrm{oc})$ values. Calado et al.\cite{Calado2019} found that different voltage preconditioning of the perovskite solar cell can lead to deviations in the apparent $n_\mathrm{id}$, i.e.\ values of 1 or 2 can be obtained although in both cases surface recombination is dominant. This effect was attributed to the redistribution of mobile ions. We note that our CV measurements were performed using a fast sweeping method,\cite{Fischer2018} where a pre-bias of 1.2~V is applied to the solar cell for one minute. The different preconditioning between CV and J--V/IMVS might serve as an explanation for the different apparent $n_\mathrm{id}$, if the impact of the redistribution of the mobile ions on the electronic landscape is taken into account, as will be discussed later. The ideality factor can also be determined by considering the real part of the IMVS measurement (Figs.~\ref{fig:Re_IMVS} and \ref{fig:discussion}a). Based on Eqn.~(\ref{eq:nid}), an equivalent equation using the change of $V_\mathrm{oc}$ upon modulation of the generation rate $G\approx J_\mathrm{sc}/eL$ can be found: \begin{equation} n_\mathrm{id}\approx \frac{e}{k_\mathrm{B}T}\frac{\Delta V_\mathrm{oc}}{\Delta \mathrm{ln}(G)}. \label{eq:n_id_IMVS} \end{equation} Here, $\Delta V_\mathrm{oc}$ is given by $\mathrm{Re}(\mathrm{IMVS})(f_\mathrm{min})$ at the lowest measured frequency $f_\mathrm{min}$. This allows the calculation of $n_\mathrm{id}$ as shown in Fig.~\ref{fig:discussion}a using Fig.~\ref{fig:Re_IMVS}.
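The fit-based definitions in Eqns.~(\ref{eq:nid}), (\ref{eq:n_Voc}) and (\ref{eq:tau_Voc}) reduce to linear regressions on a logarithmic scale, which can then be combined via Eqn.~(\ref{eq:n_R}). A minimal Python/NumPy sketch, assuming hypothetical arrays of measured $J_\mathrm{sc}(V_\mathrm{oc})$ pairs, charge-carrier densities and lifetimes:
\begin{verbatim}
import numpy as np

kT_over_e = 0.0259  # thermal voltage at room temperature (V)

def n_id_from_jsc_voc(V_oc, J_sc):
    # Eq. (ln J_sc = e V_oc / (n_id kT) + ln J_0):
    # the slope of ln(J_sc) vs V_oc equals e / (n_id kT).
    slope, _ = np.polyfit(V_oc, np.log(J_sc), 1)
    return 1.0 / (slope * kT_over_e)

def n_n_from_density(V_oc, n):
    # Eq. (n = n_0 exp(e V_oc / (n_n kT))).
    slope, _ = np.polyfit(V_oc, np.log(n), 1)
    return 1.0 / (slope * kT_over_e)

def n_tau_from_lifetime(V_oc, tau):
    # Eq. (tau = tau_0 exp(-e V_oc / (n_tau kT))); note the minus sign.
    slope, _ = np.polyfit(V_oc, np.log(tau), 1)
    return -1.0 / (slope * kT_over_e)

def n_R(n_n, n_tau):
    # Eq. (1/n_R = 1/n_n + 1/n_tau).
    return 1.0 / (1.0 / n_n + 1.0 / n_tau)
\end{verbatim}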
Values of $G<10^{19}~\mathrm{cm^{-3}s^{-1}}$ were excluded since they would lead to unreasonably high $n_\mathrm{id}$ caused by the apparent recombination response $\rho$ at low frequencies in the IMVS spectra of Fig.~\ref{fig:Re_IMVS}. The values for $n_\mathrm{id}$ using Eqn.~(\ref{eq:n_id_IMVS}) indicate a shift from $n_\mathrm{id}=0.73$ for high illumination intensities to $n_\mathrm{id}=1.64$ for low illumination intensities and are in good agreement with the values from $J_\mathrm{sc}(V_\mathrm{oc})$ (see Fig.~\ref{fig:discussion}a). According to the study of Tress et al.,\cite{Tress2018} values of $n_\mathrm{id}<1$ for high illumination intensity can be explained by considering the process of surface recombination. Charge carriers can accumulate at the interfaces to the transport layers if potential barriers hinder their extraction. As a result, surface recombination can be the dominant mode of charge-carrier loss. This finding is confirmed by several studies,\cite{Guilln2014,Wheeler2015,Courtier2020} and especially by the study of Chen et al.,\cite{Chen2018} where a linear dependence of the recombination time constant on illumination intensity was found, supporting this conclusion. Experimental results and simulations by Sundqvist et al.\cite{Sundqvist2016} have shown that $n_\mathrm{id}$ is reduced to 2/3 if high injection barriers are present in the solar cell, and an analytical explanation was provided. We therefore consider regime B in Figs.~\ref{fig:results_1}a and~\ref{fig:discussion} to be dominated by surface recombination. This means that the quasi-Fermi level splitting is hindered by the work function of the contacts, which in turn reduces the $V_\mathrm{oc}$. In this case, $V_\mathrm{oc}$ should saturate for high illumination intensities.\cite{Tress2018} Such a saturation is shown in Fig.~\ref{fig:discussion}b. $V_\mathrm{oc}$ was taken from the J--V measurements. Additionally, we reconstructed $V_\mathrm{oc}$ according to:\cite{Foertig2012} \begin{equation} V_\mathrm{oc}=n_\mathrm{R}\frac{k_\mathrm{B}T}{e}\mathrm{ln}\left(\frac{R(n)}{R_0} \right), \label{eq:V_oc_IMS} \end{equation} with the recombination rates given by \begin{align} \label{eq:R_R_0} R=\frac{n(V)}{\tau_\mathrm{n}(V)},\qquad & R_0=\frac{n_0}{\tau_\mathrm{n_0}}, \end{align} where $R_0$ is the dark recombination rate at 0~V, and with the recombination ideality factor $n_\mathrm{R}$ calculated from Eqn.~(\ref{eq:n_R}). Both methods agree with each other and show an apparent saturation of $V_\mathrm{oc}$, indicating a limitation by injection barriers.
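This reconstruction is a one-line computation once the fitted quantities are available; a minimal sketch (Python/NumPy, all inputs hypothetical) of Eqns.~(\ref{eq:V_oc_IMS}) and (\ref{eq:R_R_0}):
\begin{verbatim}
import numpy as np

kT_over_e = 0.0259  # thermal voltage (V)

def reconstruct_voc(n, tau, n_0, tau_0, n_R):
    # Eq. (V_oc = n_R kT/e ln(R/R_0)) with R = n/tau, R_0 = n_0/tau_0.
    return n_R * kT_over_e * np.log((n / tau) / (n_0 / tau_0))
\end{verbatim}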
According to the recent study of Li et al.,\cite{Li2020_2} an injection barrier can be introduced by mobile ions. It was shown that an accumulation of anions at the electron-transport layer leads to a barrier, which hinders the electron extraction and injection. We note that the agreement between measured and reconstructed $V_\mathrm{oc}$ is only possible when the constant value of $n_\mathrm{R}=0.74$, determined only for high illumination intensities, is used (regime B). We propose that surface recombination is the dominant recombination mechanism independent of the illumination intensity. The apparent $n_\mathrm{id}$ and $n_\mathrm{R}$ larger than 1 calculated by Eqns.~(\ref{eq:n_id_IMVS}) and (\ref{eq:n_R}), respectively, must therefore be the result of an ion-related mechanism: Calado et al.\cite{Calado2019} found that a redistribution of mobile ions can lead to a change in the electron and hole population overlap at the interface regions. This means that the redistribution of mobile ions can cause a change from symmetric to asymmetric quasi-Fermi level splitting, which results in a shift of the apparent ideality factor depending on the illumination intensity. We therefore attribute regime A in Figs.~\ref{fig:results_1}a and \ref{fig:discussion} to be the result of ion redistribution, i.e.\ the interaction of mobile ions at the interfaces with the free charge carriers. The limitation of $V_\mathrm{oc}$ by injection barriers should be observable as low apparent built-in potentials calculated from CV measurements. However, the built-in potentials calculated according to the Mott--Schottky evaluation (Fig.~\ref{fig:CV}), \begin{equation} \label{eq:MS} 1/C^2=\frac{2(V_\mathrm{bi}-V)}{e\epsilon_0\epsilon_\mathrm{R}N_\mathrm{eff}}, \end{equation} show much higher values than $V_\mathrm{oc}$ and an apparent shift with illumination intensity. We assume that the built-in potential determined by the Mott--Schottky evaluation does not represent the real conditions within the solar cells, as the CV measurement is affected by mobile ions. Therefore, a correction of the built-in potential by a redistribution of the mobile ions with changing light intensity might be necessary, suppressing the shift of $V_\mathrm{bi}$, as discussed in our recent studies\cite{Reichert2020,reichert2020probing} and in detail by Ravishankar et al.\cite{Ravishankar2019,Ravishankar2019_2}
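For reference, the Mott--Schottky evaluation of Eq.~(\ref{eq:MS}) amounts to a linear fit of $1/C^2$ versus $V$. A minimal sketch (Python/NumPy; the relative permittivity and all input arrays are hypothetical):
\begin{verbatim}
import numpy as np

e = 1.602e-19      # elementary charge (C)
eps0 = 8.854e-14   # vacuum permittivity (F/cm)
eps_r = 25.0       # assumed relative permittivity (hypothetical)

def mott_schottky(V, C_per_area):
    # Eq. (1/C^2 = 2 (V_bi - V) / (e eps0 eps_r N_eff)),
    # with C_per_area in F/cm^2; the slope of 1/C^2 vs V is negative.
    y = 1.0 / C_per_area**2
    slope, intercept = np.polyfit(V, y, 1)
    V_bi = -intercept / slope                  # x-axis intercept
    N_eff = -2.0 / (e * eps0 * eps_r * slope)  # in cm^-3
    return V_bi, N_eff
\end{verbatim}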
\section{Summary} We have experimentally shown that the slow, approximately light-independent responses in both IMVS and IMPS correspond to ionic migration rates, consistent with IS. Additionally, IMVS shows a strong response over several orders of magnitude in light intensity, which can be related to charge-carrier recombination. We verified that surface recombination is the dominant loss mechanism and discussed the ideality factor of the solar cell in detail to support this finding. We showed that the apparent ideality factor can be influenced by the redistribution of mobile ions. This study provides a deeper understanding of the ionic and electronic charge-carrier dynamics in perovskite solar cells and of how to improve their electrical parameters by identifying the dominant recombination mechanism of photogenerated charge carriers. \section{Methods}\label{sec:methods} \textbf{Processing:} The MAPbI\textsubscript{3} active layer was processed by the lead acetate trihydrate approach\cite{Zhang2015,An2019,Fassl2018} using a precursor solution with a stoichiometry of 1:3.00 MAI:PbAc\textsubscript{2} and an additive of hypophosphoric acid at a ratio of 1.7~$\mu$l/100~mg MAI. The perovskite solution was spin cast at 2000~rpm for 60~s in a dry-air filled glovebox (relative humidity $<0.5~\%$) on top of a modified poly(3,4-ethylene-dioxythiophene):poly(styrenesulfonate) (m-PEDOT:PSS) hole-transport layer.\cite{Zuo2016} The hole-transport layer was spin cast on the cleaned substrates (pre-patterned indium tin oxide (ITO) coated glass substrates (PsiOTech Ltd., $15~\Omega/\mathrm{sqr}$) ultrasonically cleaned with $2~\%$ Hellmanex detergent, deionized water, acetone, and isopropanol) at 4000~rpm for 30~s and annealed at $150~^\circ\mathrm{C}$ for 15~min.\cite{Zuo2016} The electron-transport layer ([6,6]-phenyl-C60-butyric acid methyl ester (PC\textsubscript{60}BM)) was dissolved in chlorobenzene (20~mg/ml) and dynamically spin cast at 2000~rpm for 30~s. After further annealing for 10~min at $100~^\circ\mathrm{C}$, a bathocuproine (BCP) hole-blocking layer (0.5~mg/ml dissolved in isopropanol) was spin cast on top of the PC\textsubscript{60}BM. Silver was thermally evaporated as the counter electrode at a rate of 0.1--1~\AA/s to a thickness of 80~nm. \textbf{Characterisation:} All measurements were performed using a Zurich Instruments MFLI lock-in amplifier with MF-IA, MF-MD and MF-5FM options and an Omicron A350 diode laser with fast analog modulation. The diode laser for AC and DC illumination was calibrated by measuring phase and amplitude over the entire frequency range using a photodetector (Newport 818-BB-21). The light intensity for 1~sun was determined using an AM~1.5 solar simulator with $100~\mathrm{mW/cm}^2$ irradiation (Wavelabs). To ensure a linear response during IMPS and IMVS measurements, the AC illumination-modulation amplitude was chosen to be $10~\%$ of the DC intensity. To achieve steady-state conditions, the samples were pre-illuminated for 30~s before each measurement. CV measurements were performed applying an AC frequency of 80~kHz with an amplitude of $V_\mathrm{ac}=20~\mathrm{mV}$. For CV profiling, the solar cells were pre-biased at 1.2~V for 60~s and rapidly swept at 30~V/s in the reverse direction.\cite{Fischer2018} \begin{acknowledgments} Y.V.\ and C.D.\ thank the DFG for generous support within the framework of the SPP 2196 project (PERFECT PVs). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC Grant Agreement no.\ 714067, ENERGYMAPS). \end{acknowledgments}
{ "timestamp": "2020-12-17T02:18:54", "yymm": "2012", "arxiv_id": "2012.08953", "language": "en", "url": "https://arxiv.org/abs/2012.08953" }
\section{Introduction} This work considers the notion of stability conditions for POD basis interpolation on Grassmann manifolds. This interpolation method is used to adapt reduced-order models (ROMs) to parameter changes in various scientific fields, among others design, optimization, control, uncertainty quantification and data-driven systems. Here we introduce three stability conditions that are essential to the interpolation method. Interestingly, these conditions do not appear to be specific to problems in hyperelasticity; they may serve as an illustration of general stability conditions applicable to a variety of problems in other scientific fields as well. ROMs aim to decrease the computational burden of large-scale systems and solve parametrized problems by generating models with lower complexity, but accurate enough to represent the high-fidelity counterpart simulations. One popular method is the Proper Orthogonal Decomposition (POD)~\cite{holmes2012turbulence,Henri2003,Mosquera2019b}, also known as Karhunen--Lo\`eve Decomposition (KLD)~\cite{Kar1946,loeveprobability}, Singular Value Decomposition (SVD)~\cite{golub1996} or Principal Component Analysis (PCA)~\cite{Jolliffe2002,Abdi2010}. We emphasize that all these POD techniques are referred to as \emph{a posteriori}, as they require some knowledge (at least partial) of the solution of the problem. Parametric Model Order Reduction (pMOR) is used to generate a ROM that approximates a full-order system with high accuracy over a range of parameters. When solving a parametric problem using the POD, the method starts with a sampling stage during which the full-order system is solved for some rather small set of \emph{training} points. The state variable field \emph{`snapshots'} are then compressed using the POD to generate a ROM basis that is expected to reproduce the most characteristic dynamics of its high-fidelity counterpart. Nevertheless, since the POD bases are generated for a set of training points, they are optimal only for these parameter values. Thus, a main drawback of POD is its sensitivity to parameter changes and the lack of robustness over the entire parameter space. Consequently, any ROM basis generated by the approach outlined above cannot be expected to give a good approximation away from the training points. In pMOR, the question we have to address is how to compute a good approximation of the POD basis related to a \emph{new parameter} value. Multiple methods have been proposed for adapting POD bases to address parameter variation, as thoroughly documented in related review articles~\cite{Benner2015,Zimmermann2019,Cueto2014}. For nonlinear systems, even though a Galerkin projection reduces the number of unknowns, the computational burden of obtaining the solution could still be high due to the prohibitive computational costs involved in the evaluation of the nonlinear terms. Hence, the nonlinear Galerkin projection in principle leads to a ROM, but its evaluation could be more expensive than the evaluation of the original problem.
To this end, to make the resulting ROMs computationally efficient, POD is typically used together with a sparse sampling method, also called \emph{hyper reduction}, such as the missing point estimation (MPE)~\cite{Astrid_2008}, the empirical interpolation method (EIM)~\cite{Radermacher2016}, the discrete empirical interpolation method (DEIM)~\cite{Chaturantabut2010}, the Gappy POD method~\cite{Everson1995}, and the Gauss-Newton with approximated tensors (GNAT) method~\cite{Carlberg2013}. Parametric model order reduction using \emph{POD basis interpolation} was initially developed in the field of computational fluid dynamics, where it was proposed for parametrized systems that are linear in state~\cite{Mosquera2019b,Farhat2008,Amsallem2009,Mosquera2018}. A similar approach has scarcely been applied in hyperelasticity; an exception is~\cite{Niroomandi2012}, which proposes real-time simulations of hyperelastic structures using POD basis interpolation in combination with an asymptotic numerical method. Here, pMOR is applied to hyperelastic structures by adapting pre-computed POD bases. When addressing the question of POD basis interpolation, the main point is that \emph{interpolation cannot be done in a linear space}. Indeed, any mode $p$ POD performed on some matrix $\bS\in \VecMat{n}{N_t}$ gives rise to a truncated matrix $\bS_{p}\in \VecMat{n}{p}$ (where $n=3N_s$ and $N_s$, $N_t$ respectively correspond to the number of spatial points and time points). Now, despite appearances, computation cannot be done in the linear space $\VecMat{n}{p}$ of matrices, as the matrix $\bS_{p}$ encodes a $p$-dimensional vector subspace. The goal is thus to make interpolation on the set of $p$-dimensional subspaces of $\mathbb{R}^{n}$, which is exactly the Grassmann manifold $\mathcal{G}(p,n)$. Such Grassmann manifold interpolation is well documented~\cite{Amsallem2009,Mosquera2018,Mosquera2019b,Edelman1998g,Absil2004}, with most contributions coming from the fluid mechanics community, and computation can be done explicitly. Thus, we might have been satisfied with a simple application of the existing and now well-known formulas, using the \emph{logarithm map} to linearize, and then the \emph{exponential map} to return to the manifold. Such maps are issued from the Riemannian structure of the Grassmann manifold $\mathcal{G}(p,n)$ and its associated geodesics~\cite{Gallot1990}. A first condition appears, as the logarithm map is only defined on some subset $\mathrm{U}\subset \mathcal{G}(p,n)$, explicitly defined in terms of non-singular matrices. So linearization can only be done once we have checked that all training points are contained in $\mathrm{U}$. In fact, such a condition is usually satisfied, as square matrices are generically non-singular. A second condition concerns the use of the exponential map, which is defined on the whole vector space $\mathbb{R}^{d}$ (with $d=p(n-p)$ the dimension of $\mathcal{G}(p,n)$). Nevertheless, it is only \emph{injective} inside a subset $\mathrm{V}\subset \mathbb{R}^{d}$ deduced from the \emph{cut--locus}~\cite{Gallot1990} of the Riemannian manifold $\mathcal{G}(p,n)$. Considering all geodesics with the same starting point, the cut--locus is in fact the set of points where such geodesics are no longer minimal, and thus the exponential map is no longer injective. Without any control of this injectivity condition, mapping the interpolated curve back \emph{via} the exponential map can lead to a disconnected curve on the manifold, which should be avoided.
An explicit determination of such a cut--locus was already mentioned in~\cite{Wong1967}, without any proof, and a result by Kozlov~\cite[Theorem 12.5]{Kozlov2000} makes it possible to clearly understand this cut--locus using the singular values of the matrix representation of a velocity vector. We thus provide an explicit way to compute this cut--locus, with a clear proof. As this result is not a classical one, and in order to be self-contained, we had to develop the necessary mathematics to obtain the cut--locus of the Grassmann manifold $\mathcal{G}(p,n)$, as well as the open subset $\mathrm{V}$. In fact, from this cut--locus and its associated subset $\mathrm{V}$, it was possible to improve the already known exponential injectivity condition, obtained from the \emph{injectivity radius} of Grassmann manifolds~\cite{Kozlov2000} and used in~\cite{Mosquera2019b} to control computations. In most of our cases, indeed, the injectivity condition issued from the cut--locus is better than the one obtained from the injectivity radius. A third stability condition considered here is related to the intrinsic non-inclusion defect of interpolated subspaces of different dimensions. Numerical results showed indeed that the accuracy of interpolation may not improve when increasing the number of POD modes. A consequence is that it is not possible to control or predict the interpolation behavior. At first glance, this fact seems inconsistent with the expected improvement of the solution by increasing the number of modes. We show that the non-connectivity of the solutions is inherited from the construction of the interpolation formulae using the logarithm and exponential maps. To prove this fact, our basic tool is the computation of the principal angles between two POD bases of different modes $p$. This enables us to compute the geometric distance between subspaces of different dimensions~\cite{Ye2016}. To this end, a new stability condition will be tied to the geometric distance, which measures the non-inclusion defect between these subspaces. To the best of the authors' knowledge, this finding has never been reported in the variety of ROM problems involving POD basis interpolation on Grassmann manifolds. From all this, we finally get three kinds of stability conditions, each clearly established: $(1)$ a first one about the domain of definition of the logarithm map, $(2)$ a second one on the loss of injectivity of the exponential map, \emph{via} the cut--locus of Grassmann manifolds, and $(3)$ a third one about increasing the mode, controlled via a well-defined geometric distance between subspaces of different dimensions. Considering the \emph{mechanical part}, the overall procedure comprises an off-line and an on-line stage. The off-line stage consists of the potentially costly procedure of solving the FEM problems associated with different values of the physical or modeling parameter (training points). The on-line stage consists of the POD basis interpolation on Grassmann manifolds to determine a ROM basis for an unseen target parameter. Then, a non-intrusive approach is introduced for the obtained spatial POD basis. Note that this approach deviates from POD methods that rely on a Galerkin/Petrov--Galerkin projection of the governing equations. Instead, the ROM-FEM models are implemented by inserting the interpolated spatial POD basis using linear constraint equations in Abaqus. It is evident that, by constraining the degrees of freedom, the reduced model still embeds the high dimension.
We remark that we followed this approach using a commercial code only for evaluating the stability and accuracy of the adaptation of POD bases via interpolation on Grassmann manifolds. This is because it is not our objective to implement a method of nonlinear model reduction for the effective evaluation of the nonlinear terms, although this is quite a challenging task to realize inside a commercial FEM code. In our applications, we employed benchmark hyperelastic structures to illustrate the stability loss of POD basis interpolation even for low-complexity models. We expect that the stability issues discussed herein will also be inherent and critical for more demanding problems in hyperelasticity, such as, among others, the computation of soft tissues, blood vessels, human skin inflation and the human mitral valve. For the pMOR, two hyperelastic structures modeled with isotropic and anisotropic constitutive laws are studied. Specifically, for the anisotropic model, a subclass of transversely isotropic materials is considered. In this subclass, the strain energy function is assumed to depend only on two invariant measures of finite deformation~\cite{Holzapfel2007,bonet1998simple,almeida1998finite,itskov2001generalized}. In the numerical examples, the parameters are introduced in two ways, considering a) the model anisotropy, defined by the fiber orientation angle, and b) the material coefficients of the hyperelastic constitutive equations. \textbf{Organization of the article} \\ The present paper is organized as follows. In~\autoref{sec:Problem_Formulation} and~\autoref{sec:POD} we recall the theoretical background needed to understand how to interpolate POD bases using the corresponding points on a Grassmann manifold. Then~\autoref{sec:CF_Interpolation_GM} presents an explicit algorithm for interpolation on Grassmann manifolds, and we also define three stability conditions: one from the logarithm map, a second one from the exponential map, and a third one from increasing POD modes. The mechanical part starts with~\autoref{sec:App_HyperElast}, which covers the framework of hyperelasticity theory in continuum mechanics for an incompressible transversely isotropic material. In~\autoref{sec:Numerical_Investigations}, the interpolation performance for two hyperelastic structures is shown, and further important computational aspects are discussed. Finally,~\autoref{sec:Conclusions} highlights the main results and some important outcomes. Appendix~\ref{sec:Grassmann_Manifolds} is devoted to the mathematical proofs needed to have well-defined stability conditions, for instance an explicit determination of the cut--locus of Grassmann manifolds. \section{Problem Formulation} \label{sec:Problem_Formulation} We consider some mechanical problem governed by a specific parameter $\lambda \in [\lambda_{min},\lambda_{max}]\subset \mathbb{R}$, which comes from hyperelasticity in our situation (see \autoref{sec:App_HyperElast}). For each parameter $\lambda$, the solution is given by a space-time smooth field \begin{equation*} (t,\XX)\in [0;T]\times \Omega_{0}\mapsto u^{\lambda}(\XX,t)\in \mathbb{R}^3 \end{equation*} where $\Omega_0$ is a closed convex subset of $\mathbb{R}^3$ and $T>0$. To avoid costly computations for all values $\lambda\in [\lambda_{min},\lambda_{max}]$, we would like to interpolate between a finite number of FEM solutions $u_i:=u^{\lambda_i}$, associated with $N$ training points $\lambda_1,\dotsc,\lambda_N$.
In fact, it is at the level of the POD performed on the snapshot matrices $\bS(\lambda_i)$ (defined in the next section) associated with the solutions $u_i$ that this interpolation will be considered. But one of the essential points of this POD is that it associates to each snapshot matrix $\bS(\lambda_i)$ a certain point $\bom_i$ of a Grassmann manifold $\mathcal{G}$, and it is therefore necessary at this stage to interpolate between points $\bom_1,\dotsc,\bom_N$ on $\mathcal{G}$. It is now proposed to detail the link between a POD reduction and the construction of a point on a Grassmann manifold. \section{Proper Orthogonal Decomposition and Grassmann manifolds}\label{sec:POD} The POD method can be applied to curves defined in Hilbert spaces of infinite dimension. The initial idea is to determine a subspace of a given dimension $p$ (which is the fixed number of modes of the POD) reflecting this curve ``as well as possible'', as is very well explained in~\cite{Henri2003,Mosquera2018}. In most cases, however, we do not consider the entire curve, but only a finite number of points of a Hilbert space $\mathcal{H}_{\text{spatial}}=\mathbb{R}^{3N_s}$ of finite dimension ($N_s$ being the number of space points, each carrying three degrees of freedom). More precisely, any FEM solution $u$ of our problem under consideration produces a \emph{snapshot matrix} \begin{equation*} \bS_{jk},\quad 1\leq j\leq 3N_s,\quad 1\leq k\leq N_t \end{equation*} with $N_t$ the number of time steps. Such a matrix encodes in fact $N_t$ vectors $\uu_{k}:=u(\cdot,t_k)\in \mathcal{H}_{\text{spatial}}$, and we write \begin{equation*} \bS:=[\uu_1,\dotsc,\uu_{N_t}]. \end{equation*} Take now $\langle\cdot,\cdot\rangle$ to be the standard inner product of the Hilbert space $\mathcal{H}_{\text{spatial}}$. To any $p$-dimensional vector subspace $\mathcal{V}_p$ of $\mathcal{H}_{\text{spatial}}$, there is an associated orthogonal projection \begin{equation*} \boldsymbol{\pi}_{p} \: : \: \mathcal{H}_{\text{spatial}}\longrightarrow \mathcal{V}_{p} \end{equation*} and the POD method addresses the problem of minimizing the distance function \begin{equation*} \mathcal{J}(\mathcal{V}_p):=\sum_{k=1}^{N_t} \| \uu_{k} -\boldsymbol{\pi}_{p}(\uu_{k})\|^2,\quad \|\cdot\|:=\sqrt{\langle \cdot,\cdot\rangle} \end{equation*} over all $p$-dimensional subspaces $\mathcal{V}_p$. It then appears that the set of all such subspaces defines a \emph{smooth compact Riemannian manifold}~\cite{Boothby1986,Gallot1990} \begin{equation*} \mathcal{G}(p,n):=\left\{\mathcal{V}_p\subset \mathcal{H}_{\text{spatial}},\quad \dim(\mathcal{V}_p)=p\right\},\quad n:=3N_s, \end{equation*} so that any $p$-dimensional vector subspace $\mathcal{V}_p$ can be considered as some \emph{point} $\bom\in \mathcal{G}(p,n)$, and the question is finally to minimize $\mathcal{J}(\bom)$ over all $\bom \in \mathcal{G}(p,n)$. In practice, let us consider an orthonormal basis $\phi_{1},\dotsc,\phi_{p}$ of $\mathcal{V}_p$ so that the matrix form of $\boldsymbol{\pi}_{p}$ is given by \begin{equation*} \boldsymbol{\Phi}_{p}\boldsymbol{\Phi}_{p}^{T},\quad \boldsymbol{\Phi}_p:=[\phi_{1},\dots,\phi_{p}]\in \VecMat{n}{p} \end{equation*} where $\VecMat{n}{p}$ is the vector space of $n\times p$ matrices, and the (right) superscript ${(\cdot)}^T$ denotes the transposition operation.
By direct computation, the distance function $\mathcal{J}$ is then rewritten as \begin{equation*} \mathcal{J}(\bom)=\|\bS-\boldsymbol{\Phi}_{p}\boldsymbol{\Phi}_{p}^{T}\bS\|_{\text{F}}^{2} \end{equation*} where $\|\mathbf{A}\|_{\text{F}}:=\sqrt{\tr(\mathbf{A}\mathbf{A}^T)}$ is the Frobenius norm. Now, it is classically known that the minimization of $\mathcal{J}$ is governed by the Eckart--Young theorem~\cite{Eckart193,Golub1987,golub1996,Stewart1973} and can be achieved via a \emph{singular value decomposition} of $\bS$. Indeed, take this SVD to be \begin{equation*} \bS=\bU \bSig \bV^{T},\quad \bU:=[\phi_{1},\dotsc,\phi_{N_t}] \end{equation*} with singular values $\sigma_1\geq \sigma_{2}\geq \dotsc \geq \sigma_{N_t}$. Then one minimizer of $\mathcal{J}$ is given by \begin{equation*} \bom_{0}:=\text{span}(\phi_{1},\dotsc,\phi_{p}), \end{equation*} which is unique whenever $\sigma_{p}>\sigma_{p+1}$~\cite{Henri2003}. Let us also define the \emph{reduced model} $\bS_p$ of our snapshot matrix by \begin{equation*} \bS_p:=\boldsymbol{\Phi}_{p}\boldsymbol{\Phi}_{p}^{T}\bS,\quad \boldsymbol{\Phi}_{p}:=[\phi_{1},\dotsc,\phi_{p}]. \end{equation*} For each snapshot matrix $\bS(\lambda_{i})$ associated with the training point $\lambda_{i}$ ($i=1,\dots,N$), we thus obtain a point \begin{equation*} \bom_{i}:=\text{span}(\phi_{1}^{(i)},\dotsc,\phi_{p}^{(i)})\in \mathcal{G}(p,n), \end{equation*} once a fixed mode $p$ is chosen for the POD.
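In matrix terms, this construction reduces to a truncated SVD of the snapshot matrix. A minimal Python/NumPy sketch (the snapshot matrix is a hypothetical input, and the function name is ours):
\begin{verbatim}
import numpy as np

def pod_basis(S, p):
    # POD of mode p of a snapshot matrix S (n x N_t), with n = 3*N_s.
    # Thin SVD: S = U Sigma V^T, singular values in decreasing order.
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    if p < len(sigma) and np.isclose(sigma[p - 1], sigma[p]):
        print("warning: sigma_p = sigma_{p+1}, minimizer not unique")
    Phi_p = U[:, :p]               # orthonormal basis of omega_0
    S_p = Phi_p @ (Phi_p.T @ S)    # reduced model S_p = Phi Phi^T S
    return Phi_p, S_p
\end{verbatim}
The columns of \texttt{Phi\_p} span the point $\bom_0\in\mathcal{G}(p,n)$ discussed above.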
For a new target parameter $\widetilde{\lambda}$, interpolation has to be done on the Grassmann manifold $\mathcal{G}(p,n)$, which is now detailed. \section{ROM Adaptation Based on Interpolation in Grassmann Manifolds} \label{sec:CF_Interpolation_GM} Computation on a manifold, such as Lagrange interpolation, can only be done using \emph{local coordinates}. Such local coordinates are obtained via bijective maps, which are defined, in general, on subsets $\mathrm{U}$ of the manifold (the \emph{local charts}). In the case of a Riemannian manifold, one can use the \emph{normal coordinates} directly deduced from the geodesics of the manifold. In our case, local charts will be given by \emph{logarithm maps}, so we obtain smooth diffeomorphisms \begin{equation*} \Log \: : \: \mathrm{U}\longrightarrow \mathrm{V}:=\Log(\mathrm{U})\subset \mathbb{R}^{d} \end{equation*} where $d$ is the dimension of the manifold, and the reverse operation is given by the exponential map. Nevertheless, such an operation has to be well-defined, which is achieved when the exponential map is injective. Such an injectivity condition was already addressed in the work of Mosquera et al.~\cite{Mosquera2019b}, using the injectivity radius of Grassmann manifolds (see~\eqref{eq:Disk_Inj}). Other injectivity conditions are presented here, less restrictive than the one issued from the injectivity radius (see Remark~\ref{rem:Inj_Radius_wrt_Cut_locus}). Another issue is that of increasing the number $p$ of modes. Indeed, one should expect the interpolation to sharpen as $p$ increases, which can be controlled by using the \emph{geometric distance} computed for subspaces of different dimensions, as defined in~\cite{Ye2016}. Let us now present in~\autoref{subsec:Assumption_Local_Chart} the necessary assumptions for a well-defined interpolation, while~\autoref{subsec:Interpol_Lagrangian} produces the algorithm to compute an interpolation, taking into account all stability conditions. Finally,~\autoref{subec:Geometric_Distances} focuses on the explicit formulae to compare two subspaces of different dimensions. \subsection{Interpolation from logarithm and exponential map: necessary assumptions}\label{subsec:Assumption_Local_Chart} Let us return to the $N$ points $\{\bom_{i}\}^{N}_{i=1}$ in the Grassmann manifold $\mathcal{G}(p,n)$, all obtained from the ROMs of the snapshot matrices (as detailed in \autoref{sec:POD}). The goal here is to obtain a \emph{well-defined} interpolation of a spatial POD basis associated with a new target point $\tilde{\lambda}$. This is detailed in \autoref{subsec:Interpol_Lagrangian}, and we just focus here on the main ideas issued from the seminal work of Amsallem~\cite{Amsallem2009}: \begin{enumerate} \item Choose a base point $\bom_0$ in the family $\bom_1,\dots,\bom_N$, together with its associated logarithm map $\Log_{\bom_0}$ (from Definition~\ref{def:Log_map_Grass}). \item Compute the velocity vectors $v_i:=\Log_{\bom_0}(\bom_i)$, all lying in a tangent plane, which is a vector space $\mathbb{R}^{d}$ (with $d=p(n-p)$ the dimension of $\mathcal{G}(p,n)$). \item Compute a new velocity vector $\widetilde{v}$ associated with the target point $\widetilde{\lambda}$. \item Obtain an interpolated point $\widetilde{\bom}:=\Exp_{\bom_0}(\widetilde{v})\in \mathcal{G}(p,n)$ using the exponential map (from~\eqref{eq:Def_Exp_Map}) to return to the Grassmann manifold $\mathcal{G}(p,n)$. \end{enumerate} As depicted in Figure~\ref{fig:Interpolation_Manifold}, it is nevertheless important not to forget that the logarithm map $\Log_{\bom_0}$ is only defined on some open set $\mathrm{U}_{\bom_0}$, taken from~\eqref{eq:Def_Open_Set_Log} and recalled below. So a first necessary condition is that \begin{itemize} \item (C1): All points $\bom_1,\dotsc,\bom_N$ lie in $\mathrm{U}_{\bom_0}$. \end{itemize} To check such a condition, recall first that each point $\bom\in \mathcal{G}(p,n)$ corresponds to an orthonormal basis stored in an $n\times p$ matrix \begin{equation*} \bY=[\by_1,\cdots,\by_p]\in \VecMat{n}{p},\quad \bY^T\bY=\bI_{p}. \end{equation*} Taking now matrices $\bY_i$ corresponding to $\bom_i$ ($i=0,\dots,N$), such a condition translates into \begin{itemize} \item (C1)-matrix form: For all $i=1,\dots,N$, the matrix $\bY_0^{T}\bY_i$ is non-singular. \end{itemize} From this and Theorems~\ref{thm:Exp_Diff_Cut_Locus}--\ref{thm:Cut_Locus_Grass} we deduce that the velocity vectors $v_i=\Log_{\bom_0}(\bom_i)$ all lie in the open set $\mathrm{V}_{\bom_{0}}=\Log_{\bom_0}\left(\mathrm{U}_{\bom_0}\right)$. Once the new velocity vector $\widetilde{v}\in \mathbb{R}^d$ is computed, according to Theorem~\ref{thm:Exp_Diff_Cut_Locus}, a second necessary condition is \begin{itemize} \item (C2): $\widetilde{v}$ is inside the open set $\mathrm{V}_{\bom_{0}}$. \end{itemize} Such a condition seems more intricate than the previous one, but in fact it is simply related to the singular values of a matrix. Indeed, in the case $2p\leq n$ (which will be our case), a velocity vector $\widetilde{v}$ is represented by a matrix $\widetilde{\bZ}\in \VecMat{n}{p}$ such that $\widetilde{\bZ}^T\bY_0=0$ (see~\eqref{eq:Def_Hor_Lift}). From Lemma~\ref{lem:rho_v_Grass} and Theorem~\ref{}, condition (C2) can simply be written as \begin{itemize} \item (C2)-matrix form: Taking $\widetilde{\theta}_1$ to be the maximum singular value of $\widetilde{\bZ}$, we have $\widetilde{\theta}_1<\pi/2$.
\end{itemize} The first condition (C1) is usually trivially satisfied, and the second one (C2) can be evaluated on a range of new parameters $\widetilde{\lambda}$, so as to obtain an interval $[\widetilde{\lambda}_a,\widetilde{\lambda}_b]$ of well-defined interpolation. This was done on both benchmarks (see Figures~\ref{fig:Stability_condition_1_pbm_1} and~\ref{fig:Stability_condition_1_pbm_2}). \begin{rem}\label{rem:Inj_Radius_wrt_Cut_locus} In the case of the compact manifold $\mathcal{G}(p,n)$, the exponential map is defined on the whole vector space $\mathbb{R}^{d}$, so it is always possible to compute a new point $\Exp_{\bom_0}(\widetilde{v})$ on the Grassmann manifold; the resulting interpolation, however, may not be well-defined. In the previous work of Mosquera et al.~\cite{Mosquera2018,Mosquera2019b}, an injectivity condition on the exponential map was defined using the \emph{injectivity radius} of $\mathcal{G}(p,n)$, given by~\eqref{eq:Disk_Inj}, which translates into \begin{equation*} \|\widetilde{v}\|=\left(\tr\left(\widetilde{\bZ}^{T}\widetilde{\bZ}\right)\right)^{1/2}=\left(\sum_{i=1}^{p}\widetilde{\theta}_{i}^2\right)^{1/2}<\frac{\pi}{2} \end{equation*} where $\widetilde{\theta}_{i}$ are the singular values of $\widetilde{\bZ}$, leading to a more restrictive condition than (C2) (see Lemma~\ref{lem:Cut_Better_than_Radius}). \end{rem} \begin{rem}[Violation of stability condition (C2) from an application point of view] Let us consider the case of the northern hemisphere of the $2D$ unit sphere, with $\bom_{0}=N$ being the North Pole. The tangent plane is simply given by $\mathbb{R}^{2}$, and to any velocity vector $v\in \mathbb{R}^2$ corresponds a point on the northern hemisphere, via the exponential map. Here, the exponential map is non-injective for all $v\in \mathbb{R}^2$ with length greater than $\pi/2$. If the interpolated curve inside $\mathbb{R}^2$ lies outside the disk of radius $\pi/2$ (see Figure~\ref{fig:Interpolation_Manifold}), then the corresponding interpolated curve on the northern hemisphere is disconnected.
\end{rem}
\begin{figure}[H]
\begin{center}
\footnotesize
\begin{tikzpicture}[scale=0.7]
\node [label={[xshift=0.8cm, yshift=-0.3cm]$N=\bom_0$}] (0) at (-7, 0) {$\bullet$};
\node [label=below:{$\bom_1$}] (1) at (-8, 1) {$\bullet$};
\node [label={[xshift=-0.4cm, yshift=-0.5cm]$\bom_2$}] (2) at (-6.5, -1.25) {$\bullet$};
\node [label=below:{$\bom_3$}] (3) at (-5.5, -2) {$\bullet$};
\node [label={[xshift=-0.4cm, yshift=-0.5cm]$\bom_4$}] (4) at (-4, 0.5) {$\bullet$};
\node [label=below:{$\bom_5$}] (5) at (-6, 3.25) {$\bullet$};
\node [label=below:{$\bom_6$}] (6) at (-7, 2.5) {$\bullet$};
\node [label=below:{$0$}] (7) at (4, 0) {$\bullet$};
\node [label=below:{$v_1$}] (8) at (2, 0.75) {$\bullet$};
\node [label=below:{$v_2$}] (9) at (4.75, -1) {$\bullet$};
\node [label=below:{$v_3$}] (10) at (5.75, -1.5) {$\bullet$};
\node [label={[xshift=-0.4cm, yshift=-0.5cm]$v_4$}] (11) at (7.25, 0.75) {$\bullet$};
\node [label=below:{$v_5$}] (12) at (4, 3.5) {$\bullet$};
\node [label=below:{$v_6$}] (13) at (3, 2.5) {$\bullet$};
\node (14) at (-6.75, 0.5) {};
\node (15) at (-6.75, 0.5) {};
\node (16) at (-5.3, 3) {};
\node (17) at (2.4, 3) {};
\node (18) at (1.25, -3) {};
\node (19) at (-5.5, -3) {};
\node (20) at (-1.75, 4) {$\Log_{\bom_0}$};
\node (21) at (-1.95, -3.95) {$\Exp_{\bom_0}$};
\node [red,label={[xshift=0.3cm, yshift=-0.2cm]$\red{\widetilde{\bom}}$}] (23) at (-9.76, 1.26) {$\bullet$};
\node [red,label={[xshift=0.3cm, yshift=-0.2cm]$\red{\widetilde{v}}$}] (24) at (8.1, -0.84) {$\bullet$};
\node (25) at (6.5, 4.14) {\red{$\Log_{\bom_{0}}\circ \Exp_{\bom_{0}}(\widetilde{v})\neq \widetilde{v}$}};
\draw [->,>=latex,bend left=15] (16.center) to (17.center);
\draw [->,>=latex,bend left=15] (18.center) to (19.center);
\draw[->,>=latex] (-7,0)--(-9.7,-2.53) node[yshift=0.5cm,midway] {$\pi/2$};
\draw (-7,0) circle(3.7);
\draw (4,0) circle(3.7);
\draw [line width=1pt] plot [smooth] coordinates {(-8,1) (-7,0) (-6.5, -1.25) (-5.5, -2) (-3.88, -1.34) (-4, 0.5) (-6, 3.25) (-7, 2.5)};
\draw [red,line width=1pt] plot [smooth] coordinates {(-8,1) (-7.26, 0.72) (-7,0) (-7.02, -0.8) (-6.5, -1.25) (-5.5, -2) (-4.07,-2.26)};
\draw [red,line width=1pt] plot [smooth] coordinates {(-3.31,-0.28) (-4, 0.5) (-5.1,1.86) (-6, 3.25) (-6.74,3.3) (-7, 2.5)};
\draw [red,line width=1pt] plot [smooth] coordinates {(-9.93,2.26) (-9.76, 1.26) (-10.69, 0.28)};
\draw[->,>=latex] (4,0)--(0.99,-2.16) node[yshift=-0.7cm,midway] {$\theta_1=\pi/2$};
\draw (-1,4.5)--(-1,-4.5)--(9,-4.5)--(9,4.5)--cycle;
\draw (-7,-4.3) node {North hemisphere};
\draw (7,-4) node {Tangent plane $\mathbb{R}^2$};
\draw [line width=1pt] plot [smooth] coordinates {(2, 0.75) (4, 0) (4.75, -1) (5.75, -1.5) (7.16,-0.92) (7.25, 0.75) (5.78, 2.82) (4, 3.5) (3, 2.5)};
\draw [red,line width=1pt] plot [smooth] coordinates {(2, 0.75) (2.82, 0.24) (4, 0) (4.75, -1) (5.75, -1.5) (7.37,-1.52)};
\draw [red,dashed,line width=1pt] plot [smooth] coordinates {(7.37,-1.52) (8.1, -0.84) (7.7,-0.06)};
\draw [red,line width=1pt] plot [smooth] coordinates {(7.7,-0.06) (7.25, 0.75) (5.56, 2.6) (4, 3.5) (3.2, 3.26) (3, 2.5)};
\draw [line width=1pt](-10,-6)--(-9,-6) node[right] {Real curve};
\draw [red,line width=1pt](-10,-6.8)--(-9,-6.8) node[right] {Interpolated curve (disconnected)};
\draw [line width=1pt](-1,-6)--(0,-6) node[right] {Image of real curve from $\Log_{\bom_{0}}$ map};
\draw [red,line width=1pt](-1,-6.8)--(0,-6.8) node[right] {Interpolated curve};
\draw [red,line width=1pt,dashed](-1,-7.6)--(0,-7.6) node[right] {Loss of injectivity};
\draw [red,->,>=latex,bend left=15]
(25.south) to (24.north); \end{tikzpicture} \end{center} \caption{Loss of injectivity of the exponential map} \label{fig:Interpolation_Manifold} \end{figure} \subsection{Interpolation algorithm from Lagrange polynomials}\label{subsec:Interpol_Lagrangian} Let us now present the algorithm used to obtain an interpolated point $\widetilde{\bom}$ corresponding to a target parameter $\widetilde{\lambda}$. Such an algorithm is directly issued from the seminal work of Amsallem et al.~\cite{Amsallem2009}, but it is modified so as to obtain a well-defined interpolation, taking into account conditions (C1) and (C2) from~\autoref{subsec:Assumption_Local_Chart}. As detailed in~\autoref{sec:POD}, the POD of mode $p$ performed on the snapshot matrices $\bS_i$ (corresponding to the parameters $\lambda_{i}$ for $i=1,\dots,N$) defines points $\bom_1,\dots,\bom_{N}$ on the Grassmann manifold $\mathcal{G}(p,n)$, and thus matrices in $\VecMat{n}{p}$ with orthonormal column vectors. \begin{algo}[Interpolation on a Grassmann manifold $\mathcal{G}(p,n)$]\label{alg:Interpol_Lagrang}\mbox{}\\ \vspace*{-0.5cm} \begin{description} \item[\textbf{Input}]: \begin{itemize} \item Integers $p,n$ such that $2p\leq n$. \item Matrices $\bY_1,\dots,\bY_N$ in $\VecMat{n}{p}$ such that $\bY_{i}^{T}\bY_i=\bI_{p}$, respectively corresponding to parameters $\lambda_{1},\dotsc,\lambda_{N}$. \item A target parameter $\widetilde{\lambda}$. \end{itemize} \item[\textbf{Output}]: A new matrix $\widetilde{\bY}$ defining a new point $\widetilde{\bom}\in \mathcal{G}(p,n)$, corresponding to the target parameter $\widetilde{\lambda}$. \item[\textbf{Computations}]: \begin{enumerate} \item Choose a matrix $\bY_{0}\in \{\bY_1,\dots,\bY_N\}$ such that \begin{equation*} \text{(C1) stability}:\quad \bY_{0}^{T}\bY_i \text{ is non-singular for all } i. \end{equation*} \item For each $i=1,\dots,N$, compute a thin SVD and an $n\times p$ matrix $\bZ_i$: \begin{align*} \bY_{i}\left(\bY_{0}^{T}\bY_{i}\right)^{-1}-\bY_{0}&=\bU_{i}\bSig_{i} \bV_{i}^{T} \\ \bZ_{i}:=\bU_{i}\arctan\left(\bSig_{i}\right) \bV_{i}^{T}, \end{align*} all issued from the logarithm map (Definition~\ref{def:Log_map_Grass}). \item Compute an interpolated matrix and its thin SVD \begin{equation*} \widetilde{\bZ}:=\sum_{i=1}^{N} \prod_{j \neq i} \frac{\tilde{\lambda}-\lambda_j}{\lambda_i-\lambda_j}\bZ_i=\widetilde{\bU}\widetilde{\bThe}\widetilde{\bV}^{T}. \end{equation*} \item (C2) stability: If $\widetilde{\theta}_1\geq\pi/2$, with $\widetilde{\theta}_1$ the largest singular value of $\widetilde{\bZ}$, then return an \emph{instability message}. \item Otherwise return the $n\times p$ matrix \begin{equation*} \widetilde{\bY}:=\bY_0\widetilde{\bV}\cos\widetilde{\bThe}+\widetilde{\bU}\sin\widetilde{\bThe}, \end{equation*} issued from the exponential map~\eqref{eq:Def_Exp_Map}. \end{enumerate} \end{description} \end{algo} \subsection{Instability problem due to increasing mode}\label{subec:Geometric_Distances} As one should expect, the accuracy of the interpolation Algorithm~\ref{alg:Interpol_Lagrang} should improve as the number $p$ of modes increases. In fact, when considering the snapshot matrices $\bS_1,\dots,\bS_N$ associated with the parameters $\lambda_{1},\dots,\lambda_{N}$, a POD of mode $p$ defines subspaces $\mathcal{V}_1,\dots,\mathcal{V}_N$ of dimension $p$ (see~\autoref{sec:POD}). By construction, for another mode $p'>p$, the corresponding subspaces $\mathcal{V}'_1,\dots,\mathcal{V}'_N$ are such that \begin{equation*} \mathcal{V}_i\subset \mathcal{V}_i'. \end{equation*}
\end{equation*} Now take a new parameter $\widetilde{\lambda}$ and suppose that Algorithm~\ref{alg:Interpol_Lagrang} returns matrices $\widetilde{\bY}$ and $\widetilde{\bY}'$ corresponding respectively to mode-$p$ and mode-$p'$ ($p'>p$) interpolation. A natural stability condition is then: \begin{itemize} \item[$\bullet$] (C3) The subspaces $\widetilde{\mathcal{V}}$ and $\widetilde{\mathcal{V}}'$ respectively associated to the matrices $\widetilde{\bY}$ and $\widetilde{\bY}'$ are such that $\widetilde{\mathcal{V}}\subset \widetilde{\mathcal{V}}'$. \end{itemize} More generally, let us consider two subspaces $\mathcal{V}$ and $\mathcal{V}'$ of different dimensions $p<p'$, represented by matrices $\bY\in \VecMat{n}{p}$ and $\bY'\in \VecMat{n}{p'}$ such that \begin{equation*} \bY^{T}\bY=\mathbf{I}_p,\quad (\bY')^{T}\bY'=\mathbf{I}_{p'}. \end{equation*} One way to measure the non-inclusion defect between the subspaces $\mathcal{V}$ and $\mathcal{V}'$ is to consider the \emph{geometric distance} $\delta(\mathcal{V},\mathcal{V}')$, issued from~\cite{Ye2016} and defined using \emph{principal angles} as follows: taking the singular values of $\bY^{T}\bY'\in \VecMat{p}{p'}$ to be $\sigma_{1}\geq \dots \geq \sigma_p\geq 0$, we have \begin{equation}\label{eq:Geom_Distance} \delta(\mathcal{V},\mathcal{V}')=\delta(\bY,\bY'):=\bigg(\sum_{i=1}^{\min(p,p')} \arccos^{2}(\sigma_{i}) \bigg)^{1/2}. \end{equation} We are finally able to check stability condition (C3) as follows: \begin{enumerate} \item Assume a set of POD modes $p \in \mathscr{P}_m$ and a threshold value $T_V$. \item For a given integer $p$ and a given target parameter $\widetilde{\lambda}$, compute the matrix $\widetilde{\bY}$ issued from Algorithm~\ref{alg:Interpol_Lagrang}. \item For $p'>p$, compute the matrix $\widetilde{\bY}'$ issued from the same Algorithm~\ref{alg:Interpol_Lagrang}. \item Since $\widetilde{\bY}^{T}\widetilde{\bY}=\mathbf{I}_p$ and $(\widetilde{\bY}')^{T}\widetilde{\bY}'=\mathbf{I}_{p'}$ by Lemma~\ref{lem:Geod_Inside_Stiefel_Compact}, we can compute the geometric distance $\delta(\widetilde{\bY},\widetilde{\bY}')$ by~\eqref{eq:Geom_Distance}. \item Calculate \begin{equation}\label{eq:Threshold_Stab_C3} \epsilon = (\delta_{\text{max}}(\widetilde{\bY},\widetilde{\bY}') - \delta_{\text{min}}(\widetilde{\bY},\widetilde{\bY}'))/(\delta_{\text{min}}(\widetilde{\bY},\widetilde{\bY}')),\quad p \in \mathscr{P}_m, \end{equation} where $\delta_{\text{max}}$ and $\delta_{\text{min}}$ denote the largest and smallest distances obtained over the mode set $\mathscr{P}_m$. \item If $\epsilon \geq T_V$, then return an instability message. \end{enumerate} Let us now describe the use of the (C3) stability condition from the application point of view. By computing the geometric distance $\delta(\widetilde{\bY},\widetilde{\bY}')$ we are able to explain the non-monotonic, oscillatory behavior of the error norm with increasing mode $p$. For the two situations under study, the first benchmark problem (see Figure~\ref{fig:Distance_subspaces_different_dimensions_pbl_1}) shows a clear oscillatory behavior, while the second one appears stable (see Figure~\ref{fig:Distance_subspaces_different_dimensions_pbl_2}): we thus compared the two $\epsilon$ values given by~\eqref{eq:Threshold_Stab_C3} for each benchmark problem and propose $T_V=100$ as a reference threshold. \section{Application to Hyperelasticity}\label{sec:App_HyperElast} \subsection{Kinematics of Continuum Mechanics Framework} Let $\Omega_{0} \subset \mathbb{R}^3$ and $\Omega \subset \mathbb{R}^3$ represent the reference and the current configurations of a body, parameterized in $\mathbf{X}$ and in $\mathbf{x}$, respectively.
The non-linear deformation map $\varphi: \Omega_{0} \rightarrow \Omega$ at time $t$ transforms the referential (material) position $\mathbf{X}$ into the related current (spatial) position $\mathbf{x}=\varphi(\mathbf{X},t)$. The deformation gradient $\mathbf{F}$ is defined by \begin{equation} \mathbf{F} := \nabla \varphi(\mathbf{X}) = \frac{\partial \varphi( \mathbf{X})}{\partial \mathbf{X}} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}} \end{equation} with the Jacobian $J(\mathbf{X}) = \det(\mathbf{F}) > 0$ (volume ratio). The right and left Cauchy-Green tensors are defined as $\mathbf{C} = \mathbf{F}^T \mathbf{F}$ and $\mathbf{B} = \mathbf{F} \mathbf{F}^T$, respectively. The three principal invariants of $\mathbf{C}$, which are identical to those of $\mathbf{B}$, are defined as \begin{equation} I_1 = \tr(\mathbf{C}), \quad I_2 = \frac{1}{2} [(\tr(\mathbf{C}))^2 - \tr \left(\mathbf{C}^2\right)], \quad I_3 = \det (\mathbf{C}). \label{eq:Invariants_1-3} \end{equation} \subsection{Incompressible Transverse Isotropic Material} A material with one family of fibers is considered, where the stress at a material point depends not only on the deformation gradient $\mathbf{F}$ but also on the fiber direction. The fibers are modeled by a \emph{flow}~\cite{Gallot1990} obtained from some unit vector field $\mathbf{a}_0$ on $\Omega_{0}$. The direction of a fiber at point $\mathbf{X} \in \Omega_{0}$ is thus given by the unit vector $\mathbf{a}_0 (\mathbf{X})$, with $| \mathbf{a}_0 |=1$. Note that the unit vector field $\mathbf{a}_0$ induces a unit vector field $\mathbf{a}$ on the current configuration $\Omega$ defined by \begin{equation*} \mathbf{F}(\mathbf{X})\mathbf{a}_0(\mathbf{X})=\alpha \mathbf{a}(\mathbf{x}) \end{equation*} where the length change of the fibers along the direction $\mathbf{a}_0$ is measured by the stretch $\alpha$, the ratio of fiber length between the current and the reference configurations. Consequently, since $| \mathbf{a} |=1$, we can express the square of the stretch $\alpha$ using the symmetries of the deformation gradient: \begin{equation*} \alpha^2 = \mathbf{a}_0\mathbf{F}^T \mathbf{F} \mathbf{a}_0 = \mathbf{a}_0\mathbf{C} \mathbf{a}_0. \end{equation*} \subsection{Linearization of the principle of internal virtual work in the spatial description} The linearization of the internal virtual work in the spatial description reads (see Section 8.4 in~\cite{Holzapfel2002}) \begin{equation} D_{\Delta \mathbf{u}} \delta W_{int}(\mathbf{u}, \delta \mathbf{u}) = \int_{\Omega}^{} (\text{grad} \delta \mathbf{u} : \mathbb{c} : \text{grad} \Delta \mathbf{u} + \text{grad} \delta \mathbf{u} : \text{grad} \Delta \mathbf{u} \: \boldsymbol{\sigma}) dv \label{eq:Linear_Int_work_spatial} \end{equation} or in index notation (with the Einstein convention on repeated indices), \begin{equation} D_{\Delta \mathbf{u}} \delta W_{int}(\mathbf{u}, \delta \mathbf{u}) = \int_{\Omega}^{} \frac{\partial \delta u_a}{\partial x_b} (\delta_{ac} \sigma_{bd} + \mathbb{c}_{abcd}) \frac{\partial \Delta u_c}{\partial x_d} dv \label{eq:Linear_Int_work_spatial_index} \end{equation} where the term $\delta_{ac} \sigma_{bd} + \mathbb{c}_{abcd}$ is the effective elasticity tensor in the spatial description. The term $\delta_{ac} \sigma_{bd}$ corresponds to the geometrical stress contribution to the linearization (initial stress contribution at every increment), whereas $\mathbb{c}_{abcd}$ represents the material contribution to the linearization.
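As a concrete illustration of the index expression above, the following minimal NumPy sketch (our own; randomly generated $\boldsymbol{\sigma}$ and $\mathbb{c}$ stand in for actual FEM quantities at one integration point) assembles the effective elasticity tensor $\delta_{ac}\sigma_{bd}+\mathbb{c}_{abcd}$ of~\eqref{eq:Linear_Int_work_spatial_index}:
\begin{verbatim}
import numpy as np

# Stand-ins for quantities at one integration point (illustrative only):
sigma = np.random.rand(3, 3)
sigma = 0.5 * (sigma + sigma.T)        # symmetric Cauchy stress
c = np.random.rand(3, 3, 3, 3)         # spatial elasticity tensor c_{abcd}

# Geometric (initial stress) part: delta_{ac} sigma_{bd}
geometric = np.einsum('ac,bd->abcd', np.eye(3), sigma)

# Effective elasticity tensor entering the linearized internal virtual work
effective = geometric + c
\end{verbatim}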
The elasticity tensor $\mathbb{c}_{abcd}$ in the spatial description is derived from the push-forward of the linearized second Piola-Kirchhoff stress tensor, which yields the linearized Kirchhoff stress tensor $\Delta \boldsymbol{\tau}$ from the relation \begin{equation} \Delta \boldsymbol{\tau}= J \mathbb{c} : \text{grad} \Delta \mathbf{u}. \label{eq:PF_Linear_S_index_2} \end{equation} Replacing the direction $\Delta \mathbf{u}$ of the directional derivative with the velocity vector $\mathbf{v}$, $\Delta \boldsymbol{\tau}$ and $\text{grad} \Delta \mathbf{u}$ result in the Lie time derivative $\mathcal{L}_{v}(\boldsymbol{\tau})$ of $\boldsymbol{\tau}$ and the spatial velocity gradient $\mathbf{l}= \mathbf{\dot{F}} \mathbf{F}^{-1}$, respectively. Again, using the minor symmetries of $\mathbb{c}$, the following relation can be written \begin{equation} \mathcal{L}_{v}(\boldsymbol{\tau}) = \text{Oldr}(\boldsymbol{\tau}) = \boldsymbol{\dot{\tau}}- \mathbf{l} \boldsymbol{\tau} - \boldsymbol{\tau} \mathbf{l}^T = J \mathbb{c} : \mathbf{d} \label{eq:Oldroyd_stress_rate} \end{equation} where $\text{Oldr}(\boldsymbol{\tau})$ denotes the objective Oldroyd stress rate (convected rate) of the contravariant Kirchhoff stress tensor $\boldsymbol{\tau}$ and $\mathbf{d}=\text{sym}(\mathbf{l})$ (the symmetric part of $\mathbf{l}$) the rate of deformation tensor. At this point, we recall that for structural elements (shells, membranes, beams, trusses) Abaqus/Standard uses the elasticity tensor related to the Green-Naghdi objective rate. The detailed constitutive model used here is given in~\cite{Holzapfel2007}. \section{Numerical Investigations} \label{sec:Numerical_Investigations} The objective of this section is to assess the stability and accuracy of POD basis interpolation on Grassmann manifolds using two examples of hyperelastic structures. \subsection{Abaqus implementation of POD-ROM approximations} \label{sec:multi_point_constraint_equations} To implement a ROM for FEM analysis, a non-intrusive approach is used to insert the interpolated spatial POD basis into a commercial code. Specifically, a ROM is constructed using the multi-point constraint equations in Abaqus~\cite{ABAQUS2014}. A linear multi-point constraint requires that a linear combination of nodal variables be equal to zero: \begin{equation} A_1 u^{P}_{i} + A_2 u^{Q}_{j} + \dots + A_N u^{R}_{k} = 0 \label{eq:Linear constraint equations} \end{equation} where $u^{P}_{i}$ is the nodal variable at node $P$ and degree of freedom $i$, and the $A_i$ $(i=1,\dots,N)$ are coefficients that define the relative motion of the nodes. In Abaqus/Standard the first nodal variable specified ($u^{P}_{i}$, corresponding to $A_1$) is eliminated to impose the constraint. In addition, the coefficient $A_1$ must not be set to zero. For the construction of a ROM, $p$ reference points are created corresponding to the total number of POD modes (arbitrarily positioned in space). These reference points are used to define the constraint equations for introducing the spatial POD modes and to assign the extra degrees of freedom corresponding to the unknown `time' variables.
Thus, the interpolated spatial basis $\boldsymbol{\tilde{\Phi}}_p:=[\tilde{\phi}_{1},\dots,\tilde{\phi}_{p}]\in \VecMat{n}{p}$ representing the subspace $\tilde{\bom}:=\text{span}(\tilde{\phi}_{1},\dotsc,\tilde{\phi}_{p})$ on $\mathcal{G}(p,n)$ is imposed through the linear constraint equations as follows: \begin{equation} u(x_l,t,\tilde{\lambda}) - \sum_{h=1}^{p} \tilde{\phi}_h (x_l) \psi_h(t) = 0 \label{eq:Linear constraint equations_POD} \end{equation} where $x_l$ $(l=1,\dots,N_s)$ denotes the nodal point positions, $\tilde{\phi}_h(x_l)$ represents the associated spatial POD mode $h$ at $x_l$, and $\psi_h(t)$ is the `time' variable assigned to the reference point $h$ that has to be determined. Note also that the system of equations defined in~\eqref{eq:Linear constraint equations_POD} has to be generated for each degree of freedom. \rem{In fact, this is not a standard POD-Galerkin approach, since we are not projecting the linearized system of equations onto the interpolated spatial POD basis. However, it serves to assess the stability and accuracy of the ROM FEM model constructed from the interpolated POD basis. We mention that generating an efficient ROM model is not the objective of this work.} \subsection{Inflation of a spherical balloon}\label{subsec:Spherical_Balloon} The first pMOR benchmark example concerns the inflation of a spherical balloon, considering the material anisotropy defined by the fiber orientation angle as a parameter. The sphere has an initial radius of $R=10$, thickness $h=0.5$ and is loaded by an internal hydrostatic pressure of $P=40$ (no units). The FEM analysis is performed on an octant $\mathrm{S}_0$ of the sphere using plane symmetry boundary conditions, as depicted in Figure~\ref{fig:Geo_BC}, where three radial points $A(R,0,0)$, $B(0,R,0)$ and $C(0,0,R)$ are defined on each axis, respectively. Three-node shell elements (S3R) are used for the mesh~\cite{ABAQUS2014}. A total of 514 elements with 288 nodes is generated. The hyperelastic constitutive behavior is implemented in Abaqus/Standard with a user-defined subroutine (UMAT)~\cite{ABAQUS2014}. \begin{figure}[htbp] \centering \includegraphics[width=.4\linewidth]{Balloon_structure_BC-eps-converted-to.pdf} \caption{Geometry of an octant $\mathrm{S_0}$ of a spherical balloon made of transversely isotropic hyperelastic material. Three radial points A, B and C are defined on axes 1, 2 and 3, respectively; plane symmetry boundary conditions are used.} \label{fig:Geo_BC} \end{figure} \begin{rem}\label{rem:Local_Basis_Octant} The fiber orientation has to be defined at each point $M\in \mathrm{S}_0$ using an orthonormal basis of the tangent plane $T_M\mathrm{S}_0$, which has to be specified. The choice made in Abaqus is to consider first an outward normal $\mathbf{n}(M)$ to this tangent plane and then a first vector $\mathbf{E}_{1}(M)$ as the orthogonal projection (normalized) of $\mathbf{e}_1:=(1,0,0)$ onto $T_M\mathrm{S}_0$. The second unit vector is the cross product $\mathbf{E}_{2}(M):=\mathbf{n}(M)\wedge \mathbf{E}_{1}(M)$. \end{rem} \subsubsection*{Explicit fiber orientations on the octant} Let us now give an explicit definition of the fiber orientations, parameterized by an angle $\theta$, using the local basis $\mathbf{E}_1(M),\mathbf{E}_2(M)$ of the tangent plane $T_M\mathrm{S}_0$ as explained in Remark~\ref{rem:Local_Basis_Octant}.
More specifically, take \begin{equation*} M=(\cos(u)\sin(v),\sin(u)\sin(v),\cos(v))\in \mathrm{S}_0,\quad (u,v)\in \left]0;\frac{\pi}{2}\right[\times \left]0;\frac{\pi}{2}\right[ \end{equation*} and then define \begin{align*} \mathbf{E}_1(M)&:=\frac{\mathbf{X}^h}{\|\mathbf{X}^h\| },\quad \mathbf{X}^h:=\begin{pmatrix} 1-\cos^2(u)\sin^2(v) \\ -\sin(u)\cos(u)\sin^2(v) \\ -\cos(u)\sin(v)\cos(v) \end{pmatrix}, \\ \mathbf{E}_2(M)&:=\mathbf{n}(M)\wedge \mathbf{E}_{1}(M). \end{align*} Note here that the vector $\mathbf{X}^h$ corresponds to the orthogonal projection of the vector $(1,0,0)$ onto the tangent plane $T_M\mathrm{S}_0$. Finally, the unit vector defining the fiber orientation is given by (see Figure~\ref{fig:Fibers_Sphere_Abaqus} for some examples) \begin{equation*} \mathbf{a}_0(\theta):=\cos(\theta)\mathbf{E}_1(M)+\sin(\theta)\mathbf{E}_{2}(M). \end{equation*} \begin{figure}[H] \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.4]{alpha_0_v4-eps-converted-to.pdf} \caption{Fiber orientation with $\theta=0$ degrees} \label{fig:Fiber_0} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.4]{alpha_45_v4-eps-converted-to.pdf} \caption{Fiber orientation with $\theta=45$ degrees} \label{fig:Fiber_45} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.4]{alpha_60_v4-eps-converted-to.pdf} \caption{Fiber orientation with $\theta=60$ degrees} \label{fig:Fiber_60} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.4]{alpha_90_v4-eps-converted-to.pdf} \caption{Fiber orientation with $\theta=90$ degrees} \label{fig:Fiber_90} \end{subfigure} \caption{Different fiber orientations on the sphere.} \label{fig:Fibers_Sphere_Abaqus} \end{figure} \subsubsection*{Model for strain energy function} For a homogeneous transversely isotropic non-linear material, let us consider a free energy function that depends only on two invariants ($I_1,I_4$) \begin{equation*} \Psi = \Psi\left(I_1(\mathbf{C}), I_4(\mathbf{C},\mathbf{a}_0)\right) \end{equation*} where $I_1=\tr(\mathbf{C})$, while \begin{equation} I_4 = \mathbf{a}_0\mathbf{C} \mathbf{a}_0, \label{eq:Invariants_4} \end{equation} is the invariant related to anisotropy. Since we assume incompressibility of the isotropic matrix material, i.e., $I_3=1$, the free energy is enhanced by an indeterminate Lagrange multiplier $p$, which is identified as a reaction pressure: \begin{equation*} \Psi = \Psi[I_1(\mathbf{C}), I_4(\mathbf{C},\mathbf{a}_0)] + p (I_3-1). \end{equation*} The specific model used here was developed for membranous or thin shell-like sheets, considering a plane stress state throughout the sheet~\cite{Holzapfel2007}. Following the method of Humphrey~\cite{Humphrey1990}, which is based on a derivation by Spencer~\cite{Spencer1972}, the strain energy function is defined as \begin{equation} \Psi(I_1,I_4) := c_0(\text{exp}(Q)-1),\quad Q := c_1(I_1-3)^2 + c_2(I_4-1)^2 \label{eq:Strain_energy_MV} \end{equation} where $c_i$, $i=0,1,2$, are material parameters defined as $c_0 = 86.1$, $c_1 = 0.0059$ and $c_2 = 0.031$ (dimensionless). \begin{rem} This model introduces an inherent constitutive coupling between the isotropic and anisotropic material response. In order to avoid non-physical behavior of soft biological tissues, the related strain-energy function must be polyconvex.
It can be shown that polyconvexity of a (continuous) strain-energy function implies that the corresponding acoustic tensor is elliptic for all deformations, which means, from the physical point of view, that only real wave speeds occur; the material is then said to be stable. There exists a vast literature on polyconvexity, a term introduced by Ball~\cite{Ball1976}. In~\eqref{eq:Strain_energy_MV}, the anisotropic term $c_2(I_4-1)^2$ is activated only when $I_4 \geq 1$ (the actual fiber stretches are greater than unity). Moreover, as discussed in~\cite{May1998}, the constitutive description based on~\eqref{eq:Strain_energy_MV} is limited to deformations in which the in-plane strains are positive (tensile) and is not able to incorporate the behavior of the structure in compression. Due to the membrane-like geometry of the structure, it is unlikely to support compressive strains without buckling. This limitation extends to the issue of bending stiffness, which is neglected in this model. \end{rem} \subsubsection*{Snapshot matrices and error norms} In what follows, the training points corresponding to the fiber orientation angle $\theta$ will be denoted by the parameter $\lambda$, for consistency with the previous sections. FEM simulations are performed in Abaqus/Standard for the points $\lambda_i \in \Lambda_s = \{0,45,50,60,85,90\}$. Note that the spherical balloon changes from a pumpkin shape (Figure~\ref{fig:Ballon_inflation_modes}(a)) to a rugby shape (Figure~\ref{fig:Ballon_inflation_modes}(d)) for $\lambda=0$ and $\lambda=90$, respectively. Observe in Figure~\ref{fig:Fibers_Sphere_Abaqus} that the fiber orientation on the sphere is far from trivial for $\theta \in ]0;90[$. The target point for interpolation is set to $\widetilde{\lambda}=75$. Thus, it is natural to restrict the training set to $\Lambda_t =\{50,60,85,90\}$ (see Figure~\ref{fig:Ballon_inflation_modes} for some FEM results). We note that the target point $\tilde{\lambda} = 75$ represents a worst-case scenario for assessing the interpolation accuracy, since it lies nearly at the maximum distance from the adjacent training points $\lambda = 60$ and $\lambda = 85$. Another reason for this choice is the remarkable shape transition of the inflated spherical balloon in this range of fiber angles, as can be seen from Figure~\ref{fig:Ballon_inflation_modes}(b) and Figure~\ref{fig:Ballon_inflation_modes}(c), respectively. Hence, this selection gives an upper bound on the interpolation error over the considered parametric range. For each parametric simulation, a sequence of uniform time snapshots is extracted from the model database. From the discretization of the space-time fields (displacement/rotation), the snapshot matrices $\bS(\lambda_i)$ of size $(n=1728) \times (N_t=1000)$ are formed. The eigenvalue spectrum of the matrices $\bS(\lambda_i)$ corresponding to the training points $\lambda_i \in \Lambda_t$ is shown on a log-log scale in Figure~\ref{fig:Sing_values_magnitude_infl_ballon}. The condition number of the matrices is of the order of $10^{10}$. Notice that the gap between the first and the second eigenvalue is of two orders of magnitude.
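As an illustration of how a POD basis is obtained from such a snapshot matrix, the following minimal NumPy sketch (our own; random data stands in for the actual Abaqus output) extracts the first $p$ modes by a thin SVD:
\begin{verbatim}
import numpy as np

def pod_basis(S, p):
    # Thin SVD of the snapshot matrix; the columns of U are the POD modes.
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U[:, :p]  # n x p matrix with orthonormal columns

# Random stand-in for a snapshot matrix S(lambda_i) of size 1728 x 1000:
S = np.random.rand(1728, 1000)
Y = pod_basis(S, p=5)
assert np.allclose(Y.T @ Y, np.eye(5))  # defines a point on G(p, n)
\end{verbatim}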
\begin{figure}[htbp] \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\linewidth]{atsis_Ballon_Utotal_theta_0.pdf} \caption{For $\theta = 0^{\circ}$} \label{fig:Inflation_0} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\linewidth]{atsis_Ballon_Utotal_theta_60.pdf} \caption{For $\theta = 60^{\circ}$} \label{fig:Inflation_60} \end{subfigure} \\ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\linewidth]{atsis_Ballon_Utotal_theta_75.pdf} \caption{For $\theta = 75^{\circ}$} \label{fig:Inflation_75} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=\linewidth]{atsis_Ballon_Utotal_theta_90.pdf} \caption{For $\theta = 90^{\circ}$} \label{fig:Inflation_90} \end{subfigure} \caption{Inflation modes of the benchmark anisotropic spherical balloon after reconstruction of the complete balloon using the plane symmetry conditions at the boundaries of the octant $\mathrm{S_0}$.} \label{fig:Ballon_inflation_modes} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.6\columnwidth]{Sing_values_magnitude_infl_ballon_total_dof-eps-converted-to.pdf} \caption{The eigenvalue spectrum of snapshot matrices $\bS_i$ corresponding to training points $\lambda_i=50,60,85,90$.} \label{fig:Sing_values_magnitude_infl_ballon} \end{figure} To quantify the accuracy of the interpolation, the relative $L_2$-error norm (in time) for a given target point $\tilde{\lambda}$ is evaluated with respect to the high-fidelity FEM solution. Using the interpolated and the HF-FEM snapshot matrices $\tilde{\bS}$ and $\bS^{\text{FEM}}$, respectively, the following error measure is defined at each time snapshot \begin{equation} e_{L_2}(\tilde{\bS})= \frac{\Vert \mathbf{\tilde{u}}_i - \mathbf{u}^{\text{FEM}}_i \Vert_{L_2}}{\Vert \mathbf{u}^{\text{FEM}}_i \Vert_{L_2}}, \quad i=1,\dots,N_t. \label{eq:L2_error_norm_HF} \end{equation} In addition, the relative Frobenius error norm represents a global error measure which accounts for the error over the full time interval \begin{equation} e_{F}(\tilde{\bS})= \Vert \mathbf{\tilde{S}} - \mathbf{S}^{\text{FEM}} \Vert_{F} / \Vert \mathbf{S}^{\text{FEM}} \Vert_{F}. \label{eq:Frobenious_error_norm} \end{equation} Using the linear constraint equations defined in~\eqref{eq:Linear constraint equations_POD}, $p$ reference points (one for each POD mode) are created to assign the spatial POD basis representing the interpolated subspace $\tilde{\bom} \in \mathcal{G}(p,n)$ and the unknown time variables. Thus, the total number of equations of the ROM-FEM model is $6 \times p$, while the total number of equations of the corresponding HF-FEM model is $288 \times 6 = 1728$. \subsubsection*{Stability conditions (C1) and (C2)} First, we need to check whether the interpolation is (C1) and (C2) stable. \ \textbf{Stability (C1).} All points $\bom_1,\dotsc,\bom_N \in \mathcal{G}(p,n)$ must lie in $\mathrm{U}_{\bom_0}$, given by~\eqref{eq:Def_Open_Set_Log}. We thus need to check that for all $i=1,\dots,N$, the matrix $\bY_0^{T}\bY_i$ is non-singular. The (C1) condition is satisfied for all $i=1,\dots,N$ and all POD modes $p = 1,2,5,10,20$ considered. \ \textbf{Stability (C2).} We need to know whether all velocity vectors $\widetilde{v}(\lambda)$ belong to the subset $\mathrm{V}_{\bom_0}$ given by~\eqref{eq:Def_Vm_Angle}, for the parametric range $\lambda \in [\lambda_{1},\lambda_{N} ]$.
Thus, we have to check that the first (maximum) singular value $\theta_1$ of a horizontal lift $\tilde{\mathbf{Z}}(\lambda)$ of the velocity vector $\widetilde{v}(\lambda)$ satisfies $\theta_{1} < \pi/2$ for all $\lambda \in [\lambda_{1},\lambda_{N} ]$. We proceed by uniformly sampling 401 points over the parametric range $[50;90]$. Figure~\ref{fig:Stability_condition_1_pbm_1} shows the maximum eigenvalue $\theta_1$ of the horizontal lift $\tilde{\mathbf{Z}}(\lambda)$ for all samples, using $\bom_0(\lambda=85)$ as a reference point on the Grassmann manifold. These curves provide all the information needed to assess the (C2) stability of the interpolation, by detecting the exact intervals of the loss of injectivity of the exponential mapping for the various POD modes $p=1,2,5,10,20$. Observe the loss of injectivity in a specific interval of the parameter range for modes $p=10,20$. A remarkable result is that the loss of injectivity occurs inside the parameter range and not at its boundaries, where the exponential map becomes injective again. Note also that, as the dimension $p$ increases, the curves shift closer to $\pi/2$ more rapidly. Figure~\ref{fig:Stability_condition_1_pbm_1} reveals that the interpolation is (C2) stable for the target point $\tilde{\lambda} = 75$ for all POD modes $p$. \begin{figure}[H] \centering \includegraphics[width=0.7\columnwidth]{Stability_condition_1_pbm_1-eps-converted-to.pdf} \caption{Stability (C2); computation of the maximum eigenvalue $\theta_1$ of the horizontal lift $\tilde{\mathbf{Z}}(\lambda)$ over the parametric range $\lambda \in [50;90]$. Observe the loss of injectivity in a specific interval in the parametric range for POD modes $p=10,20$. Reference point on the Grassmann manifold: $\bom_0(\lambda=85)$.} \label{fig:Stability_condition_1_pbm_1} \end{figure} \subsubsection*{Accuracy and Stability condition (C3)} Figure~\ref{fig:Standard_POD_vector_number_L2_error_norm} and Figure~\ref{fig:Standard_POD_vector_number_Frobenious_error_norm} show the relative $L_2$-error norm $e_{L_2}(\tilde{\bS})$ and the Frobenius error norm $e_{F}(\tilde{\bS})$ for the target point $\tilde{\lambda}$ of the ROM-FEM solution constructed from the interpolated $p$-dimensional spatial modes. Additionally, Table~\ref{table:Grassmannian_dim_pbl_1} shows the Grassmannian dimension for the different numbers of POD modes.
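For reference, a condensed NumPy sketch of Algorithm~\ref{alg:Interpol_Lagrang}, including the (C2) monitor $\theta_1<\pi/2$ used for the sweep above, might read as follows (a sketch under assumed inputs, with our own names and error handling; it is not the authors' implementation):
\begin{verbatim}
import numpy as np

def grassmann_interpolate(Ys, lams, lam_t, i0=0):
    # Ys: list of n x p matrices with orthonormal columns; lams: parameters.
    Y0 = Ys[i0]
    Zs = []
    for Y in Ys:
        M = Y @ np.linalg.inv(Y0.T @ Y) - Y0   # requires (C1): Y0^T Y invertible
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        Zs.append(U @ np.diag(np.arctan(s)) @ Vt)   # logarithm map
    w = [np.prod([(lam_t - lams[j]) / (lams[i] - lams[j])
                  for j in range(len(lams)) if j != i])
         for i in range(len(lams))]                 # Lagrange weights
    Zt = sum(wi * Zi for wi, Zi in zip(w, Zs))
    U, theta, Vt = np.linalg.svd(Zt, full_matrices=False)
    if theta[0] > np.pi / 2:                        # (C2) check
        raise RuntimeError("instability: largest principal angle > pi/2")
    return Y0 @ Vt.T @ np.diag(np.cos(theta)) + U @ np.diag(np.sin(theta))
\end{verbatim}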
\begin{figure}[H] \centering \includegraphics[width=0.6\columnwidth]{Standard_POD_vector_number_L2_error_norm-eps-converted-to.pdf} \caption{Relative $L_2$-error norm $e_{L_2}(\tilde{\bS})$ against the number of POD vectors for the POD ROM-FEM; target point: $ \tilde{\bom}(\lambda=75)$.} \label{fig:Standard_POD_vector_number_L2_error_norm} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.65\columnwidth]{Standard_POD_vector_number_Frobenious_norm_pbl_1-eps-converted-to.pdf} \caption{Relative Frobenius error norm against the number of POD vectors for the POD ROM-FEM; target point: $ \tilde{\bom}(\lambda=75)$.} \label{fig:Standard_POD_vector_number_Frobenious_error_norm} \end{figure} \begin{table}[ht] \caption{Dimension of the Grassmann manifold $\mathcal{G}(p,n)$} \centering \begin{tabular}{ c c c c c c} \hline \textbf{Number of modes} & \textbf{$p=1$} & \textbf{$p=2$} & \textbf{$p=5$} & \textbf{$p=10$} & \textbf{$p=20$} \\ \hline \textbf{Dimension: $p(n-p)$ } & 1727 & 3452 & 8615 & 17180 & 34160 \\ \hline \end{tabular} \label{table:Grassmannian_dim_pbl_1} \end{table} \textbf{Stability (C3).} We need to check whether the interpolated subspaces $\widetilde{\mathcal{V}}$ and $\widetilde{\mathcal{V}}'$, respectively associated to the matrices $\widetilde{\bY}$ and $\widetilde{\bY}'$ corresponding to mode-$p$ and mode-$p'$ ($p'>p$) interpolation, are such that $\widetilde{\mathcal{V}}\subset \widetilde{\mathcal{V}}'$. Before examining whether the interpolation is (C3) stable, observe from Figure~\ref{fig:Standard_POD_vector_number_L2_error_norm} and Figure~\ref{fig:Standard_POD_vector_number_Frobenious_error_norm}, showing the relative error norms \eqref{eq:L2_error_norm_HF} and \eqref{eq:Frobenious_error_norm} respectively, that the error is minimal for $p=2$ POD modes and increases as additional modes are introduced, which at first glance contradicts the `expected' improvement of the solution with an increasing number of modes. In this case, the non-monotonic error decrease and the apparently random oscillations follow from the non-inclusion defect between the subspaces $\mathcal{V}$ and $\mathcal{V}'$ obtained using different POD modes. To establish this fact, we compute the non-inclusion defect through the geometric distance $\delta(\mathcal{V},\mathcal{V}')$ using the principal angles defined in \eqref{eq:Geom_Distance}. We assume a set of POD modes $p \in \mathscr{P}_m = \{1,2,5,10,20\}$ and a threshold value $T_V =100$. Figure~\ref{fig:Distance_subspaces_different_dimensions_pbl_1} lists the distances between the obtained POD bases of various dimensions $p \in \mathscr{P}_m$ in a symmetric table form. Observe that i) $\delta(\widetilde{\bY},\widetilde{\bY}') \neq 0$ for all $p \neq p'$ and ii) $\delta(\widetilde{\bY},\widetilde{\bY}')$ increases rapidly for $p>2$. Thus, this table explains why the relative error norms (Figure~\ref{fig:Standard_POD_vector_number_L2_error_norm} and Figure~\ref{fig:Standard_POD_vector_number_Frobenious_error_norm}) have a minimum at $p=2$ modes. Since the relative error $\epsilon$ given by~\eqref{eq:Threshold_Stab_C3} is here $\epsilon=554.03 > T_V $, we conclude that the interpolation is not (C3) stable. The results clearly demonstrate the non-inclusion defect between the different subspaces, which in turn gives rise to the oscillatory behavior of the error norms described above.
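The (C3) diagnostic is straightforward to implement. A minimal sketch is given below (our own names; interpreting $\delta_{\max}$ and $\delta_{\min}$ in~\eqref{eq:Threshold_Stab_C3} as the extremes over all pairwise distances is our assumption):
\begin{verbatim}
import numpy as np

def geometric_distance(Y, Yp):
    # delta(Y, Y') from principal angles; Y is n x p, Yp is n x p', orthonormal.
    s = np.linalg.svd(Y.T @ Yp, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)            # guard against round-off above 1
    return np.sqrt(np.sum(np.arccos(s)**2))

def c3_epsilon(bases):
    # Relative spread of pairwise distances over interpolated bases of
    # different dimensions p in P_m; compare the result against T_V.
    d = [geometric_distance(Y, Yp)
         for i, Y in enumerate(bases) for Yp in bases[i + 1:]]
    return (max(d) - min(d)) / min(d)
\end{verbatim}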
\begin{figure}[H] \centering \includegraphics[width=0.6\columnwidth]{Distance_subspaces_different_dimensions_pbl_1-eps-converted-to.pdf} \caption{Stability (C3); Geometric distance $\delta(\bY,\bY')$ between interpolated subspaces of different dimensions.} \label{fig:Distance_subspaces_different_dimensions_pbl_1} \end{figure} Moreover, the interpolation accuracy is assessed using the relative displacement error $e_{\mathbf{u}} =\Vert \mathbf{\tilde{u}}(t)-\mathbf{u}^{FEM}(t) \Vert_{L_2} / \Vert \mathbf{u}^{FEM}(t) \Vert_{L_2}$ at the nodal points, computed for $p=1,2,5$ and $10$ POD modes. Figure~\ref{fig:Snapshot_L2_norm_balloon_modes_state_12} and Figure~\ref{fig:Snapshot_L2_norm_balloon_modes_state_1} present the local error at the increment state $t=0.002$ and at the final increment state $t=1$, displayed at the position vector $\mathbf{x}^{FEM}(t)$ of the high-fidelity FEM model, respectively. In general, different patterns of the spatial error distribution can be observed with respect to the number of POD modes. In the majority of cases, the maximum error is located at the boundary points of the octant $\mathrm{S}_0$ of the initially spherical balloon, where plane symmetries are imposed, and at points of maximum displacement. Again, observe that the error does not decrease when using more POD modes, as Figure~\ref{fig:Snapshot_L2_norm_balloon_modes_state_1} shows. \begin{figure}[H] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_1_state_2_rel_displacements.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_2_state_2_rel_displacements.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_5_state_2_rel_displacements.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_10_state_2_rel_displacements.png} \end{minipage} \caption{Relative displacement error $e_{\mathbf{u}} =\Vert \mathbf{\tilde{u}}(t)-\mathbf{u}^{FEM}(t) \Vert_{L_2} / \Vert \mathbf{u}^{FEM}(t) \Vert_{L_2}$ at the nodal points at state $t=0.002$ for POD modes $p= \{1,2,5,10\}$ displayed at the position vector $\mathbf{x}^{FEM}(t)$ of the high-fidelity FEM model; target point: $ \tilde{\bom}(\lambda=75)$.} \label{fig:Snapshot_L2_norm_balloon_modes_state_12} \end{figure} \begin{figure}[H] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_1_state_1000_rel_displacements.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_2_state_1000_rel_displacements.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_5_state_1000_rel_displacements.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{Snapshot_L2_norm_balloon_modes_10_state_1000_rel_displacements.png} \end{minipage} \caption{Relative displacement error $e_{\mathbf{u}} =\Vert \mathbf{\tilde{u}}(t)-\mathbf{u}^{FEM}(t) \Vert_{L_2} / \Vert \mathbf{u}^{FEM}(t) \Vert_{L_2}$ at the nodal points at state $t=1$ for POD modes $p= \{1,2,5,10\}$ displayed at the position vector $\mathbf{x}^{FEM}(t)$ of the high-fidelity FEM model; target point: $ \tilde{\bom}(\lambda=75)$.} \label{fig:Snapshot_L2_norm_balloon_modes_state_1} \end{figure}
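For completeness, the error measures \eqref{eq:L2_error_norm_HF} and \eqref{eq:Frobenious_error_norm} reduce to a few lines of NumPy (a sketch, with snapshot matrices assumed to store one time step per column):
\begin{verbatim}
import numpy as np

def relative_errors(S_rom, S_fem):
    # Per-snapshot relative L2 errors and global relative Frobenius error.
    e_l2 = (np.linalg.norm(S_rom - S_fem, axis=0)
            / np.linalg.norm(S_fem, axis=0))    # one value per time step
    e_f = np.linalg.norm(S_rom - S_fem) / np.linalg.norm(S_fem)
    return e_l2, e_f
\end{verbatim}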
Finally, Figure~\ref{fig:Total_displacement_radial_points_ROM_FEM_balloon} shows the time-displacement histories of the radial points A, B and C on the initially spherical balloon for the POD ROM-FEM model, compared against its high-fidelity counterpart solution using POD mode $p=1$. It can be observed that the interpolated ROM-FEM solution delivers good accuracy, sufficient to predict the anisotropic balloon inflation at the target parameter. \begin{figure}[H] \centering \includegraphics[width=0.6\columnwidth]{Total_displacement_radial_points_ROM_FEM_balloon-eps-converted-to.pdf} \caption{POD ROM-FEM model using Lagrange interpolation; comparison of the displacement of radial points A, B and C against the high-fidelity FEM solution; training points: $\bom_0(\lambda=85)$ (reference point); $\bom_1(\lambda=50)$; $\bom_2(\lambda=60)$; $\bom_3(\lambda=90)$; target point: $ \tilde{\bom}(\lambda=75)$; POD modes $p=1$.} \label{fig:Total_displacement_radial_points_ROM_FEM_balloon} \end{figure} \subsection{Hyperelastic structure with multiple components}\label{subsec:Benchmark_structural} In what follows, pMOR is investigated for a hyperelastic structure, considering the material stiffness as a parameter. The model consists of two basic components: a plane shell section connected (non-symmetrically) to six truss elements (see Figure~\ref{fig:Simple_MV_model}). The plane section has dimensions $20 \times 20$ (mm), a constant thickness of 0.5 mm, and is meshed with rectangular shell elements (S4). The hyperelastic model defined in~\eqref{eq:Strain_energy_MV} (UMAT) is assigned to the plane section, in which the fiber orientations are aligned with the x-axis. The following parameters are used: $c_0 = 0.0520$ (kPa), $c_1 = 4.63$ and $c_2 = 22.6$. The truss elements are of type T3D2 with a cross-section area of 1 mm$^2$. For these elements, an isotropic incompressible hyperelastic material model is implemented in the Abaqus/Standard user subroutine UHYPER~\cite{ABAQUS2014}. The material model is derived from the following strain-energy function \begin{equation} U = \alpha_1 (\text{exp}[\alpha_2 (I_1-3)] - 1) \label{eq:Strain_energy_CT} \end{equation} where $\alpha_1$ and $\alpha_2$ are material parameters; $\alpha_1 = 0.0565$ kPa, while $\alpha_2$ is used for the parametric analysis. At the boundary of the plane section ($x=0$) and at the foundations of the truss elements, all degrees of freedom are set to zero. A constant hydrostatic pressure of 120 mmHg (0.016 MPa) is applied at the bottom side of the plane section. \begin{figure}[htbp] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{atsis_Simple_MV_geo_bc-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{atsis_Simple_MV_Total_displacement.pdf} \end{minipage} \caption{Geometry, boundary conditions and total displacement of the structural multi-component model subjected to hydrostatic pressure, comprising an anisotropic hyperelastic plane shell section which is non-symmetrically supported by a set of hyperelastic truss elements.} \label{fig:Simple_MV_model} \end{figure} \subsubsection*{Snapshot matrices for pMOR} The FEM simulations are performed using the Abaqus/Standard (implicit) software. For the exponential parameter $\alpha_2$, the following set of training points is chosen, where for consistency with the previous sections we change the notation to $\lambda \in \{5, 10, 15, 20, 25, 30 \}$.
Figure~\ref{fig:2PKS_stretch} shows the second Piola-Kirchhoff stress-stretch curves for the corresponding parameter values, which reveal a wide spectrum of stress values. For each parametric simulation, a sequence of snapshots uniformly distributed over time, using an increment of $\Delta t =0.001$, is extracted for all nodes of the plane structure from the model database. The space-time snapshot matrices $\bS(\lambda_i) \in \mathbb{R}^{n \times N_t} $ of size $(n=726) \times (N_t=1000)$ are associated with the nodal displacement and rotational fields. The training points $\lambda_i \in \Lambda_t = \{ 15,20,25,30 \}$ are chosen for estimating the target point $\tilde{\lambda}=17.5$. After construction of the set of low-dimensional POD bases for the training points $\lambda_i$, a POD basis for the target point $\tilde{\lambda}$ is interpolated on a Grassmann manifold using Lagrange interpolation. Then, the interpolated POD spatial basis is introduced into the Abaqus software using the linear constraint equations (Section~\ref{sec:multi_point_constraint_equations}) to construct a ROM for the FEM analysis associated with the target parameter point. For each ROM FEM model with $p$ POD modes, the same number of reference points is created to assign the interpolated spatial POD modes and the unknown `time' variables that need to be determined. The eigenvalue spectrum of the snapshot matrices $\bS_{i}$ corresponding to the training points $\lambda_i \in \Lambda_t$ is shown on a log-log scale in Figure~\ref{fig:Singular_value_magnitude}. It is evident that the gaps between the first three eigenvalues are each of about one order of magnitude. In our experiments we perform the interpolation using $p=1,2,5,10,20$ POD modes, since they capture the most important characteristics of the system. \begin{figure}[H] \centering \includegraphics[width=0.6\columnwidth]{2PKS_vs_stretch_c1_parameter-eps-converted-to.pdf} \caption{Second Piola-Kirchhoff stress vs stretch for the examined parameter range.} \label{fig:2PKS_stretch} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.6\columnwidth]{Singular_values_SM_total_DOFS-eps-converted-to.pdf} \caption{The eigenvalue spectrum of snapshot matrices $\bS_i$ corresponding to training points $\lambda_i \in \Lambda_t = \{ 15,20,25,30 \}$.} \label{fig:Singular_value_magnitude} \end{figure} \subsubsection*{Stability conditions (C1) and (C2)} First, we need to check whether the interpolation is well-defined by evaluating the (C1) and (C2) stability conditions. \textbf{Stability (C1).} This condition requires that all points $\bom_1,\dotsc,\bom_N \in \mathcal{G}(p,n)$ lie in $\mathrm{U}_{\bom_0}$ given by~\eqref{eq:Def_Open_Set_Log}. Thus, we need to check whether the matrix $\bY_0^{T}\bY_i$ is non-singular for all $i=1,\dots,N$. Since this condition is satisfied for all $i=1,\dots,N$ and all POD modes $p = 1,2,5,10,20$ considered in this example, the interpolation is (C1) stable. \ \textbf{Stability (C2).} We need to know whether all velocity vectors $\widetilde{v}(\lambda)$ belong to the subset $\mathrm{V}_{\bom_0}$ given by~\eqref{eq:Def_Vm_Angle}, for the parametric range $\lambda \in [\lambda_{1},\lambda_{N} ]$. Thus, we have to check that the first (maximum) singular value $\theta_1$ of a horizontal lift $\tilde{\mathbf{Z}}(\lambda)$ of the velocity vector $\widetilde{v}(\lambda)$ satisfies $\theta_{1} < \pi/2$ for all $\lambda \in [\lambda_{1},\lambda_{N} ]$. We proceed by uniformly sampling 151 points over the parametric range $[15;30]$.
Figure~\ref{fig:Stability_condition_1_pbm_2} shows the maximum eigenvalue $\theta_1$ of the horizontal lift $\tilde{\mathbf{Z}}(\lambda)$ for all samples, using $\bom_0(\lambda=15)$ as a reference point on the Grassmann manifold. From these curves we are able to assess the (C2) stability of the interpolation by detecting the exact intervals of the loss of injectivity of the exponential mapping over the parametric range for different numbers of POD modes $p$. It is clear that for $p \leq 10$ the interpolation is stable over the entire parametric range. Observe the loss of injectivity in a specific interval of the parameter $\lambda$ for $p=20$ modes. Again, as in the previous example, note that with increasing dimension $p$ the curves progressively shift closer to $\pi/2$. Figure~\ref{fig:Stability_condition_1_pbm_2} reveals that the interpolation is (C2) stable for the target point $\tilde{\lambda} = 17.5$ for all POD modes $p$. \begin{figure}[H] \centering \includegraphics[width=0.7\columnwidth]{Stability_condition_1_pbm_2-eps-converted-to.pdf} \caption{Stability (C2); computation of the maximum eigenvalue $\theta_1$ of the horizontal lift $\tilde{\mathbf{Z}}(\lambda)$ over the parametric range $[15;30]$. Observe the loss of injectivity in a specific interval of parameters for POD modes $p=20$. Reference point on the Grassmann manifold: $\bom_0(\lambda=15)$.} \label{fig:Stability_condition_1_pbm_2} \end{figure} \subsubsection*{Interpolation accuracy and Stability condition (C3)} The accuracy of the interpolation is assessed by comparing the relative $L_2$-error norm $e_{L_2}(\tilde{\bS})$ and the relative Frobenius error norm $e_{F}(\tilde{\bS})$ between the ROM FEM model and its high-fidelity counterpart solution, against the number of POD modes $p$, as shown in Figure~\ref{fig:Standard_POD_L2_error_norm} and Figure~\ref{fig:Standard_POD_vector_number_Frobenious_error_norm_pbl_2}, respectively. Additionally, Table~\ref{table:Grassmannian_dim_pbl_2} lists the Grassmannian dimension for the corresponding numbers of POD modes $p$. \textbf{Stability (C3).} We need to check whether the interpolated subspaces $\widetilde{\mathcal{V}}$ and $\widetilde{\mathcal{V}}'$, respectively associated to the matrices $\widetilde{\bY}$ and $\widetilde{\bY}'$ corresponding to mode-$p$ and mode-$p'$ ($p'>p$) interpolation, are such that $\widetilde{\mathcal{V}}\subset \widetilde{\mathcal{V}}'$. Before performing this stability test, observe the monotonic decrease of the relative error norms \eqref{eq:L2_error_norm_HF} and \eqref{eq:Frobenious_error_norm} as the mode number $p$ increases, as depicted in Figure~\ref{fig:Standard_POD_L2_error_norm} and Figure~\ref{fig:Standard_POD_vector_number_Frobenious_error_norm_pbl_2}, respectively. We are now ready to see how the geometric distance $\delta(\mathcal{V},\mathcal{V}')$, using the principal angles defined in \eqref{eq:Geom_Distance}, relates to the behavior of the error norms. Again, we assume a set of POD modes $p \in \mathscr{P}_m = \{1,2,5,10,20\}$ and a threshold value $T_V =100$. To this end, we compute the distances $\delta(\widetilde{\bY},\widetilde{\bY}')$ of the interpolated POD bases on Grassmann manifolds $\mathcal{G}(p,n)$ of various dimensions $p \in \mathscr{P}_m$, plotted in a symmetric table form, as Figure~\ref{fig:Distance_subspaces_different_dimensions_pbl_2} shows. Again, the results quantify the non-inclusion defect between subspaces of various dimensions $p$.
What is remarkable to observe in this case is that the geometric distance $\delta(\widetilde{\bY},\widetilde{\bY}') \approx 0$ for all $p \neq p'$. Moreover, the relative error $\epsilon$ given by~\eqref{eq:Threshold_Stab_C3} is here $\epsilon=73.60 < T_V$, which is sufficiently small to ensure a (C3) stable interpolation. Finally, Figure~\ref{fig:Time_displacement_histories_ROM_FEM_POD_20} shows a comparison of the predicted time histories of selected nodal total displacements for the ROM FEM model using $p = 20$ POD modes against the high-fidelity FEM solution. It is evident that all nodal time histories are nearly identical. \begin{figure}[htbp] \centering \includegraphics[width=0.7\columnwidth]{Standard_POD_L2_error_vs_modes_total_DOFS-eps-converted-to.pdf} \caption{POD ROM-FEM; relative $L_2$-error norm $e_{L_2}(\tilde{\bS})$ against the number of POD vectors; target point: $ \tilde{\bom}(\lambda=17.5)$.} \label{fig:Standard_POD_L2_error_norm} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.65\columnwidth]{Standard_POD_vector_number_Frobenius_norm_pbl_2-eps-converted-to.pdf} \caption{Relative Frobenius error norm against the number of POD vectors for the POD ROM-FEM; target point: $ \tilde{\bom}(\lambda=17.5)$.} \label{fig:Standard_POD_vector_number_Frobenious_error_norm_pbl_2} \end{figure} \begin{table}[ht] \caption{Dimension of the Grassmann manifold $\mathcal{G}(p,n)$} \centering \begin{tabular}{ c c c c c c} \hline \textbf{Number of modes} & \textbf{$p=1$} & \textbf{$p=2$} & \textbf{$p=5$} & \textbf{$p=10$} & \textbf{$p=20$} \\ \hline \textbf{Dimension: $p(n-p)$ } & 725 & 1448 & 3605 & 7160 & 14120 \\ \hline \end{tabular} \label{table:Grassmannian_dim_pbl_2} \end{table} \begin{figure}[H] \centering \includegraphics[width=0.6\columnwidth]{Distance_subspaces_different_dimensions_pbl_2-eps-converted-to.pdf} \caption{Stability (C3); Geometric distance $\delta(\bY,\bY')$ between interpolated subspaces of different dimensions $p$.} \label{fig:Distance_subspaces_different_dimensions_pbl_2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.7\columnwidth]{Time_displacement_histories_ROM_FEM_POD_20_total_DOFS-eps-converted-to.pdf} \caption{POD ROM-FEM; comparison of selected nodal time-displacement histories against the high-fidelity FEM solution; training points: $\bom_0(\lambda=15)$; $\bom_1(\lambda=20)$; $\bom_2(\lambda=25)$; $\bom_3(\lambda=30)$; target point: $ \tilde{\bom}(\lambda=17.5)$; POD modes $p=20$.} \label{fig:Time_displacement_histories_ROM_FEM_POD_20} \end{figure} \section{Conclusions} \label{sec:Conclusions} Effective mathematical definitions of stability conditions for POD basis interpolation on Grassmann manifolds for pMOR in hyperelasticity have been given. Special attention has been paid to the definition of local maps on Grassmann manifolds through the logarithm and exponential maps. In this context, the notion of cut-locus is introduced, since it optimally captures the loss of injectivity of the exponential map. The formula for the Grassmannian cut-locus, used to establish a stable interpolation, is mathematically proved. Another intrinsic stability condition is defined by computing the geometric distance between interpolated POD bases of different modes. This enables us to explain both the intrinsic oscillations of the error norm with increasing mode number and, conversely, solutions with monotonic behavior. The pMOR benchmark examples revealed important aspects of stability.
\section{Acknowledgements} This work has been funded by DGA (``direction générale pour l'armement'', French Ministry of Defense) under the RAPID contract ``Innvivotech Tissus Mous'', in partnership with BIOMODEX.
{ "timestamp": "2020-12-17T02:15:01", "yymm": "2012", "arxiv_id": "2012.08851", "language": "en", "url": "https://arxiv.org/abs/2012.08851" }
\section{Introduction} Liouville theorems give conditions under which harmonic functions are constant. For example, the classic Liouville theorem states that all bounded harmonic functions defined on an entire Euclidean space are constant. In this note, we are interested in the case when a harmonic function being square integrable forces the constancy of the function. We call this the $L^2$-Liouville property. On the other hand, essential self-adjointness deals with the uniqueness of self-adjoint extensions of a symmetric operator. Although seemingly unrelated to the $L^2$-Liouville property, for a strictly positive symmetric operator there is a characterization of essential self-adjointness via the triviality of functions in the kernel of the adjoint of the operator which connects these properties. Abstractly, this connection is certainly known to experts, see, e.g., \cite{GM}. The aim of this note is to first summarize this connection for symmetric operators on general $L^2$ spaces and then illustrate the statement for Laplacians on both graphs and manifolds. In particular, while essential self-adjointness always implies the $L^2$-Liouville property, additional assumptions, more precisely, positivity of the bottom of the spectrum and the space having infinite measure, are needed for the converse implication. We will use the graph and manifold case to show that these additional assumptions are necessary. It is well known that the Laplacian on a complete Riemannian manifold is essentially self-adjoint \cite{C, S} and that such manifolds satisfy the $L^2$-Liouville property \cite{S,Y}. Analogous results have been more recently established for Laplacians on weighted graphs via the use of intrinsic metrics \cite{HK, HKMW}. For an incomplete Riemannian manifold such as $M= N \setminus K$ where $N$ is complete and $K$ is a compact submanifold, the Laplacian is essentially self-adjoint if and only if the codimension of $K$ is greater than or equal to 4, see \cite{CdV, M}. We mention \cite{BP} for further results concerning essential self-adjointness of the Laplacian on an incomplete manifold and \cite{I} for some results in the graph case. In contrast, the $L^2$-Liouville property holds on $\mathbb R^n \setminus \{p\}$ for any dimension \cite{ABR}, while it fails on $\mathbb H^n \setminus \{p\}$ for $n=2, 3$ as the Green's function on $\mathbb H^n \setminus \{p\}$ is in $L^2$ for $n=2, 3$ as we will discuss below. Furthermore, we will give some analogues for the incomplete case for graphs by considering a graph with a vertex removed and with paths of finite length attached at neighboring vertices of the removed vertex. We organize the paper as follows. In Section \ref{Preliminary results} we use general theory to point out the connection between essential self-adjointness and the $L^2$-Liouville property. In Section \ref{s:examples} we first recall the setting of Laplacians on weighted graphs and Riemannian manifolds and then show by examples that the conditions in the general results obtained in Section \ref{Preliminary results} are necessary. In Section \ref{Liouville property manifolds} we study the $L^2$-Liouville property on a manifold with a point removed. More precisely, we discuss how the dimension being less than 4 ensures that the Green's function is square integrable on a neighborhood of the pole and positivity of the bottom of the spectrum implies that the Green's function is square integrable outside of this neighborhood. 
It should be mentioned that some results in this section can be obtained by combining our general theory with known results concerning essential self-adjointness of the Laplacian on manifolds. In Section \ref{s:Greens_graphs} we extend the ideas of the previous section on manifolds to weighted graphs. Finally, let us note that, in both the manifold and graph settings, there are known connections between Liouville theorems for bounded functions and recurrence, see \cite{G, W}, and between the $L^1$-Liouville property and stochastic completeness, see \cite{G, HK}. Hence, Liouville theorems and stochastic properties are connected. In this note we add another piece to this picture in pointing out the connection between the $L^2$-Liouville property and essential self-adjointness. \section{Preliminary results}\label{Preliminary results} We start with some standard Hilbert space notions. For more details, see, for example, Sections VIII.1 and VIII.2 in \cite{RS1} and Section X.3 in \cite{RS2}. We let $\ms{H}$ denote a Hilbert space. We let $$T:D(T)\subseteq \ms{H} \longrightarrow \ms{H}$$ denote a symmetric operator with dense domain $D(T)$, adjoint $T^*$ and closure $\ov{T}=T^{**}$. We say that $T$ is \emph{essentially self-adjoint} if $\ov{T}$ is self-adjoint, equivalently, if $T$ has a unique self-adjoint extension. We say that $T$ is \emph{strictly positive} if there exists a constant $C>0$ such that $$\as{Tf, f} \geq C \|f\|^2$$ for all $f \in D(T)$. We now highlight the other main property of interest which concerns the constancy of certain functions. In the case when $\ms{H}=L^2(X,m)$ for a measure space $(X,m)$ we say that $T$ satisfies the $L^2$-\emph{Liouville property} if every function $f \in \ker T^*$ is constant. In particular, we note that any function in the kernel of a self-adjoint extension of $T$ will be constant in this case. For the examples that we have in mind, namely Laplacians on graphs and manifolds without boundary, it turns out that the $L^2$-Liouville property is equivalent to the fact that every function which is harmonic in a pointwise or distributional sense and is in the corresponding $L^2$ space is constant. We now highlight the connection between essential self-adjointness and the $L^2$-Liouville property introduced above. \begin{theorem}\label{Main_theorem} Let $(X,m)$ be a measure space and let $T:D(T) \subseteq L^2(X,m) \longrightarrow L^2(X,m)$ be a symmetric operator such that every function $f \in \ker \overline{T}$ is constant. \begin{itemize} \item[(1)] If $T$ is essentially self-adjoint, then $T$ satisfies the $L^2$-Liouville property. \item[(2)] If $T$ satisfies the $L^2$-Liouville property, $T$ is strictly positive and $m(X)=\infty$, then $T$ is essentially self-adjoint. \end{itemize} \end{theorem} \begin{proof} (1) Since the adjoint is a closed operator, essential self-adjointness implies $T^* = \overline{T^*}= {\overline{T}}^*=\overline{T}$. Therefore, if $f \in \ker T^*$, then $f \in \ker \overline{T}$ and thus is constant by the assumption on $T$. Hence, $T$ satisfies the $L^2$-Liouville property. (2) We note that by Theorem X.26 in \cite{RS2} and the strict positivity assumption, $T$ is essentially self-adjoint if and only if $\ker T^* = \{0\}$. Let $f \in \ker T^*$. Then, as $T$ has the $L^2$-Liouville property, it follows that $f$ is constant. Since $f \in L^2(X,m)$ and $m(X)=\infty$, it now follows that $f$ is trivial. This completes the proof. 
\end{proof} We now comment on the assumption that every $f \in \ker \overline{T}$ is constant found in the result above. First we note that this is only used in the proof of statement (1). However, the strict positivity of $T$ assumed in (2) automatically implies that any function in $\ker \overline{T}$ is actually trivial. Furthermore, in the case of Laplacians on graphs and manifolds that are the main focus of this paper, $f \in \ker \overline{T}$ implies that $f$ has zero energy as we will discuss below. Thus, $f \in \ker \overline{T}$ is constant in the case that the underlying space is connected. \section{Laplacians on graphs and manifolds}\label{s:examples} We will now illustrate the general results from the previous section with Laplacians on weighted graphs and manifolds. In particular, we will use these examples to illustrate the necessity of the additional assumptions needed for the $L^2$-Liouville property to imply essential self-adjointness. \subsection{Laplacians on weighted graphs}\label{ss:Lap_weighted_graphs} We first introduce the setting of infinite weighted graphs and the associated Laplacians as in \cite{KL}. Let $X$ be a countably infinite set of vertices with $b:X\times X \longrightarrow [0,\infty)$ satisfying $b(x,x)=0$ for all $x \in X$, $b(x,y)=b(y,x)$ for all $x, y \in X$ and $\sum_y b(x,y)<\infty$ for all $x \in X$. When $b(x,y)>0$ we think of $x$ and $y$ as neighbors connected by an edge with weight $b(x,y)$ and write $x \sim y$. We note that we allow vertices to have infinitely many neighbors. Let $m:X \longrightarrow (0,\infty)$ and extend $m$ to all subsets of $X$ by additivity. We call $b$ a graph over the measure space $(X,m)$. In this case, our Hilbert space is $$\ms{H} = \ell^2(X,m) = \{ f: X \longrightarrow \mathbb{R} \ | \ \sum_{x \in X}f^2(x)m(x)<\infty\}$$ with inner product $\langle f, g \rangle =\sum_{x \in X} f(x)g(x)m(x)$. We denote the corresponding norm by $\|f\|^2=\langle f, f \rangle.$ Let $C(X)=\{ f: X \longrightarrow \mathbb{R} \}$ and let $$\ms{F} = \{f \in C(X) \ | \ \sum_{y \in X} b(x,y) |f(y)| <\infty \textup{ for all } x \in X \}.$$ For a function $f \in \ms{F}$, we define the formal Laplacian $\ms{L}$ as $$\ms{L}f(x) = \frac{1}{m(x)} \sum_{y \in X} b(x,y) (f(x) - f(y))$$ for $x \in X$. We say that a function $f \in \ms{F}$ is \emph{harmonic} if $$\ms{L}f=0.$$ We note that this definition does not involve any Hilbert space and that harmonicity does not depend on the choice of the measure $m$. The measure starts to play a role as soon as we require the function to additionally be in $\ell^2(X,m)$. We now point out a case in which all harmonic functions are trivial. This was already observed in \cite{HK}, see also \cite{KL, Sch2, Woj} for similar reasoning. \begin{example}[Infinite measure of paths]\label{Ex:measure} We call a sequence of vertices $(x_n)$ an infinite path if $x_n \sim x_{n+1}$ for all $n \in \mathbb{N}$. If every infinite path has infinite measure, i.e., $$\sum_n m(x_n)=\infty \quad \textup{ for all infinite paths } (x_n), $$ then any harmonic function in $\ell^2(X,m)$ is trivial. This follows, as, by a direct calculation and induction, any harmonic function which is not constant must strictly increase over some infinite path. Since this path will have infinite measure, it follows that such a function cannot be in $\ell^2(X,m)$. On the other hand, it is clear that any non-trivial constant function cannot be in $\ell^2(X,m)$ if $X$ has infinite measure. 
Thus, we see that in the case of infinite measure of infinite paths, all harmonic functions which are in $\ell^2(X,m)$ are trivial. In particular, the only case of interest for Liouville properties involving $\ell^2(X,m)$ is the case when the measure decays so that some path has finite measure. \end{example} We let $C_c(X)$ denote the finitely supported functions in $C(X)$ and let $L_c$ denote the restriction of $\ms{L}$ to $C_c(X)$, that is, $D(L_c)=C_c(X)$ and $L_c f = \ms{L}f$ for all $f \in C_c(X)$. In order for $L_c$ to be symmetric, we assume throughout that $$\ms{L}(C_c(X)) \subseteq \ell^2(X,m).$$ We then say that $L_c$ is essentially self-adjoint if $L_c$ has a unique self-adjoint extension. When $\ms{L}(C_c(X)) \subseteq \ell^2(X,m)$, one can calculate that $$D(L_c^*)=\{ f\in \ell^2(X,m) \cap \ms{F} \ | \ \ms{L}f \in \ell^2(X,m) \}$$ and that $L_c^*$ is a restriction of $\ms{L}$ to $D(L_c^*)$, see the proof of Theorem~6 in \cite{KL}. We note that $f \in \ker(L_c^*)$ if and only if $f \in \ell^2(X,m) \cap \ms{F}$ and $\ms{L}f=0$. In particular, $L_c$ satisfies the $\ell^2$-Liouville property if and only if all harmonic functions which are in $\ell^2(X,m)$ are constant. Example~\ref{Ex:measure} shows that, under the assumption that all infinite paths have infinite measure, $L_c$ satisfies the $\ell^2$-Liouville property. Furthermore, under this assumption, $L_c$ is essentially self-adjoint by Theorem~6~in~\cite{KL}. Having introduced the two main properties of interest for the Laplacian in the graph setting, we now turn to the form perspective. This will be used to show that functions which are in the kernel of $\overline{L}_c$ are constant when the graph is connected. Given the symmetric operator $L_c$ we can define the associated form $Q_c$ via $$Q_c(f,g)=\as{L_c f, g}$$ for all $f, g \in D(L_c)$. A direct calculation gives that the form $Q_c$ is a restriction of the energy form $\ms{Q}$ which is given by $$\ms{Q}(f,g) = \frac{1}{2}\sum_{x, y \in X} b(x,y) (f(x)-f(y))(g(x)-g(y))$$ for $f, g \in \ms{D}$ where $\ms{D}$ is the space of functions of finite energy defined as $$\ms{D}= \{ f\in C(X) \ | \ \sum_{x, y \in X} b(x,y) (f(x)-f(y))^2 <\infty\}.$$ A direct calculation gives the following version of a Green's formula $$\langle L_c f, g \rangle = Q_c(f,g) = \ms{Q}(f,g) = \langle f, L_c g \rangle$$ for all $f, g \in D(L_c)$. In particular, we note that $L_c$ is positive and the densely defined form $Q_c$ is closable. We denote the closure of $Q_c$ by $Q$ and note that $Q$ is a restriction of $\ms{Q}$. We now assume that $b$ is connected, that is, for $x, y \in X$, there exists a sequence of vertices $(x_i)_{i=0}^n$ such that $x_0=x$, $x_n=y$ and $b(x_i,x_{i+1})>0$ for all $i=0, 1, \ldots, n-1$. If $b$ is connected and $f \in \ker{\overline{L}_c}\subseteq D(Q)$, then it follows that $$Q(f) = \frac{1}{2}\sum_{x, y \in X}b(x,y)(f(x)-f(y))^2 =0$$ so that $f$ is constant. In particular, Theorem~\ref{Main_theorem} applies and gives the following connection in the setting of weighted graphs. \begin{corollary}\label{c:graphs_esa_liouv} Let $b$ be a connected graph over $(X,m)$ with $\mathcal{L}(C_c(X)) \subseteq \ell^2(X,m)$ and let $L_c$ be the restriction of the formal Laplacian to $C_c(X)$. \begin{itemize} \item[(1)] If $L_c$ is essentially self-adjoint, then $L_c$ satisfies the $\ell^2$-Liouville property, i.e., all harmonic functions in $\ell^2(X,m)$ are constant.
\item[(2)] If $L_c$ satisfies the $\ell^2$-Liouville property, $L_c$ is strictly positive and $m(X)=\infty$, then $L_c$ is essentially self-adjoint. \end{itemize} \end{corollary} \begin{remark} We note, in particular, the following consequence of (1) from Corollary~\ref{c:graphs_esa_liouv}. A function $f \in \ms{F}$ is called $\lambda$-harmonic for $\lambda \in \mathbb{R}$ if $$\ms{L}f=\lambda f.$$ Clearly, when $\lambda=0$, this just means that $f$ is harmonic. However, we note that while the measure plays no role in the definition of a harmonic function, it does so for $\lambda$-harmonic functions for $\lambda \neq 0$. Now, by abstract theory, it can be shown that $L_c$ is essentially self-adjoint if and only if every $\lambda$-harmonic function for $\lambda<0$ which is in $\ell^2(X,m)$ is trivial, see, for example, \cite{Woj, KL} or Theorem~6.2 in \cite{HKLW}. Hence, by taking the contrapositive of (1), we see that the existence of a non-constant harmonic function in $\ell^2(X,m)$ implies the existence of a non-trivial $\lambda$-harmonic function in $\ell^2(X,m)$ for $\lambda<0$. \end{remark} We will now discuss why the additional assumptions of strict positivity and infinite measure are needed for the implication in (2) above. In order to discuss strict positivity, we introduce the bottom of the spectrum of the Friedrichs extension of $L_c$. We denote the self-adjoint operator associated to the closed form $Q$ by $L$ and refer to $L$ as the Laplacian associated to $b$ over $(X,m)$. We note that $\overline{L}_c \subseteq L$ and these operators coincide precisely when $L_c$ is essentially self-adjoint. We note from the spectral theorem that $\lambda_0(L)$, the bottom of the spectrum of $L$, can be given as $$\lambda_0(L) = \inf_{f \in D(L), f \neq 0} \frac{\langle Lf,f\rangle}{\langle f, f \rangle}= \inf_{\varphi \in C_c(X), \varphi \neq 0} \frac{\langle L_c \varphi,\varphi\rangle}{\langle \varphi, \varphi \rangle}.$$ In particular, the strict positivity of $L_c$ is equivalent to the positivity of the bottom of the spectrum of $L$, i.e., $\lambda_0(L)>0$. We further note that $L$, which comes from $Q$, is not the only natural self-adjoint extension of $L_c$. Namely, one can also consider the Neumann Laplacian $L_N$ which arises from the restriction of the energy form $\ms{Q}$ to $$H^1=\{ f \in \ell^2(X,m) \ | \ \ms{Q}(f)<\infty \}.$$ We let $H_0^1 = D(Q)$ which is the closure of $C_c(X)$ with respect to the form norm $\|f\|_{\ms{Q}}^2 = \|f\|^2 + \ms{Q}(f)$, i.e., $$H_0^1 = \ov{C_c(X)}^{\| \cdot \|_{\ms{Q}}}.$$ If $H_0^1 \neq H^1$, which is referred to as the failure of form uniqueness, then there are two distinct self-adjoint extensions of $L_c$, namely the Laplacian $L$ and the Neumann Laplacian $L_N$, and thus $L_c$ is not essentially self-adjoint. We now introduce a number of analytic and geometric quantities which will be needed for the examples below. For more details, see \cite{HKMW}. Specifically, whenever $\varrho$ is a strongly intrinsic path metric and $\ov{X}$ is the metric completion of $X$ with respect to $\varrho$, then we let $\partial X = \ov{X} \setminus X$ denote the Cauchy boundary of $X$.
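As a brief aside, the variational characterization of $\lambda_0(L)$ above lends itself to numerical experiments. The following is a minimal sketch (our own illustration, not part of the formal development; it assumes NumPy is available, and the function name and the choices of $b$ and $m$ are ours): it approximates $\lambda_0(L)$ for the half-line graph with unit edge weights and counting measure by computing the smallest eigenvalue of the Dirichlet restriction of the Laplacian to $K_n = \{0, \ldots, n-1\}$.

\begin{verbatim}
import numpy as np

# Sketch: estimate lambda_0(L) for a weighted half-line graph via the
# smallest eigenvalue of the Dirichlet restriction L_n to K_n.
# L_n is self-adjoint on l^2(K_n, m); conjugating by m^(1/2) yields a
# symmetric matrix with the same spectrum, suitable for eigvalsh.
def lambda0_dirichlet(n, b, m):
    A = np.zeros((n, n))
    for x in range(n):
        # full weighted degree in X; edges leaving K_n encode the
        # Dirichlet condition
        deg = sum(b(x, y) for y in (x - 1, x + 1) if y >= 0)
        A[x, x] = deg / m(x)
        for y in (x - 1, x + 1):
            if 0 <= y < n:
                A[x, y] = -b(x, y) / np.sqrt(m(x) * m(y))
    return np.linalg.eigvalsh(A)[0]

b = lambda x, y: 1.0  # unit edge weights on the half-line
m = lambda x: 1.0     # counting measure
for n in (10, 100, 1000):
    print(n, lambda0_dirichlet(n, b, m))
# The estimates decrease towards 0: this graph has lambda_0(L) = 0,
# so the associated operator L_c is not strictly positive.
\end{verbatim}

Since the infimum defining $\lambda_0(L)$ is taken over functions supported in the increasing union of the sets $K_n$, the computed values decrease to $\lambda_0(L)$ as $n \to \infty$.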
For a set $K \subseteq X$, we let $$\mbox{Cap}(K)= \inf \{ \|f\|_{\ms{Q}} \ | \ f \in H^1, f \geq 1 \mbox{ on } K\}$$ denote the capacity of $K$ and let $$\mbox{Cap}(\partial X) = \inf \{ \mbox{Cap}(K \cap X) \ | \ \partial X \subseteq K \mbox{ with } K \subseteq \ov{X} \mbox{ open} \}.$$ It is known that if $\mbox{Cap}(\partial X) < \infty$, then $\mbox{Cap}(\partial X)>0$ if and only if $H^1_0 \neq H^1$, see Theorem 3 in \cite{HKMW}. We note that in the case that every infinite path has infinite measure as discussed in Example~\ref{Ex:measure} above, it is not hard to see that if $\partial X \neq \emptyset$, then $\mbox{Cap}(\partial X)=\infty.$ We now show the necessity of assuming both infinite measure and strict positivity in order for the $\ell^2$-Liouville property to imply essential self-adjointness. \begin{example}[Necessity of additional assumptions in (2) of Corollary~\ref{c:graphs_esa_liouv}]\label{e:graphs_add} \ (i) Necessity of $m(X)=\infty$. Consider the graph with $X =\mathbb{N}=\{1, 2, 3, \ldots\}$ and $b(x,y)>0$ if and only if $|x-y|=1$. It is easy to see that if $\ms{L}f = 0$, then $f$ is constant. In particular, $L_c$ has the $\ell^2$-Liouville property regardless of the choice of the measure $m$. On the other hand, if $$m(X)<\infty \qquad \textup{ and } \qquad \sum_{n=1}^\infty \frac{1}{b(n,n+1)}<\infty$$ it turns out that $L_c$ is not essentially self-adjoint. This can be seen as follows: we consider $\lambda$-harmonic functions for $\lambda<0$, i.e., functions satisfying $\ms{L}f=\lambda f$ for $\lambda<0$. As mentioned above, the essential self-adjointness of $L_c$ is equivalent to the triviality of such functions in $\ell^2(X,m)$. Now, such a function $f$ is in $\ell^2(X,m)$ if $m(X)<\infty$ and $f$ is bounded. If $f$ is non-trivial and $m(X)<\infty$, then $f$ is bounded if and only if $$\sum_{n=1}^\infty \frac{1}{b(n,n+1)}<\infty$$ as can be seen by applying Lemma 5.4 from \cite{KLW}. Therefore, in the case that $m(X)<\infty$ and $\sum_n 1/b(n,n+1)<\infty$, we have a non-trivial $\lambda$-harmonic function for $\lambda<0$ which is in $\ell^2(X,m)$ and thus $L_c$ is not essentially self-adjoint by Theorem~6.2 in \cite{HKLW}. This shows the necessity of $m(X)=\infty$ in (2) of Corollary~\ref{c:graphs_esa_liouv}. We now make some further remarks which will be used for the second example directly below. In particular, we want to establish that the Cauchy boundary of $X$ has positive and finite capacity. As $m(X)<\infty$ and the graph is stochastically incomplete by Theorem~5 in \cite{KLW}, it follows that $H^1\neq H^1_0$ as stochastic incompleteness, transience and $H^1 \neq H^1_0$ are all equivalent in the case of finite measure by results of \cite{Sch}, see also \cite{GHKLW}. Furthermore, the capacity of the Cauchy boundary is finite since the measure of the graph is finite, so Theorem 3 in \cite{HKMW} gives that $$0<\mbox{Cap}(\partial X)<\infty.$$ This will be used in the next example directly below. We further note that $\lambda_0(L)>0$ in this case by Theorem~3 in \cite{KLW} so that $L_c$ is strictly positive. To summarize: $L_c$ is a strictly positive operator which satisfies the $\ell^2$-Liouville property but is not essentially self-adjoint. \\ (ii) Necessity of strict positivity. We build on the example above. Specifically, we let $X=X_1 \cup X_2$ where $X_1 =\mathbb{N}=\{1, 2, 3, \dots\}$ and $b(x,y)>0$ for $x, y \in X_1$ if and only if $|x-y|=1$ with $m(X_1)<\infty$ and $\sum_n 1/b(n,n+1)<\infty$ so that this is a graph as in (i) above.
We let $X_2= -\mathbb{N}=\{-1, -2, -3, \ldots\}$ with $b(x,y)=1$ for $x, y \in X_2$ if and only if $|x-y|=1$ and zero otherwise and $m$ satisfying $m(X_2)=\infty$. Finally, we let $b(-1,1)=b(1,-1)>0$ so that the resulting graph is connected and add no other edges. We now show that $L_c$ has the $\ell^2$-Liouville property. We first note that if $f \in \ms{F}$ satisfies $\ms{L}f=0$ and $f(x) \neq f(y)$ for some pair of vertices $x, y \in X$ with $x \sim y$, then $f(x) \neq f(y)$ for all $x, y \in X$ with $x \sim y$. Hence, if $f$ is harmonic and non-constant, then $f$ takes different values at every pair of neighboring vertices. Now, suppose that $f$ is harmonic, $f(-1) < f(-2)$ and let $C = f(-2)-f(-1)>0$. It follows by induction using $\ms{L}f=0$ that $f(-n) = f(-n+1)+C$ so that $$f(-n) = f(-2)+C \cdot (n-2) $$ for all $n \geq 2$. In particular, $f$ will not be bounded above on $X_2$ and thus $f \not \in \ell^2(X,m)$ since $m(X_2)=\infty$. A similar argument works if $f(-1) > f(-2)$. Therefore, all harmonic functions which are in $\ell^2(X,m)$ are constant and, furthermore, trivial as $m(X)=\infty$. In particular, $L_c$ has the $\ell^2$-Liouville property. We now show that $L_c$ is not strictly positive. By letting $1_n = 1_{K_n}$ denote the characteristic function of the set $K_n=\{-1, -2, \ldots, -n\}$ we see that $$\lambda_0(L) \leq \frac{\langle L_c 1_n,1_n\rangle}{\langle 1_n, 1_n \rangle}= \frac{2}{m(K_n)} \to 0$$ as $n \to \infty$. Therefore, $L_c$ is not strictly positive. Finally, by applying Theorem~3 from \cite{HKMW}, we see that $L_c$ is not essentially self-adjoint. More specifically, as the usual combinatorial graph metric is equivalent to a strongly intrinsic path metric on $X_2$, the Cauchy boundary of $X$ is equal to the Cauchy boundary of $X_1$, that is, $\partial X = \partial X_1$. In particular, since the Cauchy boundary of $X_1$ has finite and positive capacity as discussed in (i) above, it follows that $H_0^1 \neq H^1$ and thus $L_c$ is not essentially self-adjoint. We further note that $m(X)=\infty$ in this case. To summarize: $L_c$ satisfies the $\ell^2$-Liouville property and the space has infinite measure but $L_c$ is not essentially self-adjoint. \end{example} \subsection{Laplacians on manifolds} We now introduce Laplacians on Riemannian manifolds. For further background see, e.g., \cite{G19}. We let $(M,g)$ be a smooth connected Riemannian manifold. We consider the measure space $(M,\mu)$ with the Riemannian measure $d\mu$ associated to $g$. Our Hilbert space is $\mathcal H = L^2(M, \mu)= \{u \in L^0 \mid \|u\|<\infty\}$ where $L^0$ denotes the $\mu$-measurable functions on $M$ (modulo equality almost everywhere) and $\|u\|$ is induced by the inner product \[ \langle u,v \rangle = \int_M uv \, d\mu \] for $u,v \in L^2(M, \mu)$. The Laplacian is defined as a distribution via $\Delta u = -\mbox{div} \nabla u$. Here, $\nabla$ is defined by \[g(\nabla u, \xi) (x)= \xi u (x) \] for all $x \in M$ and all $\xi \in \Gamma(TM)$ where $ \Gamma(TM)$ is the space of all smooth sections of the tangent bundle $TM$ and $ \mbox{div}:\Gamma(TM)\longrightarrow C^\infty(M)$ is defined by \[\langle \mbox{div} \xi, \psi \rangle = \int_M g(\xi, -\nabla \psi)\, d\mu\] for all $\psi \in C_c^\infty(M)$ where $C_c^\infty(M)$ is the space of smooth functions with compact support on $M$. If $M$ has a boundary, then we always assume that $M$ is orientable and impose Neumann boundary conditions for $\mbox{div}$. A distribution $u$ is called \emph{harmonic} if $\Delta u =0$ in the sense of distributions. We note that by the hypoellipticity of $\Delta$, any harmonic function is smooth.
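To make the notion of harmonicity concrete, here is a minimal symbolic sketch (our own illustration; it assumes SymPy is available): it verifies that $u = 1/r$ is harmonic on $\mathbb{R}^3 \setminus \{0\}$, a Euclidean prototype of the singular harmonic functions which reappear in Section~\ref{Liouville property manifolds}. Note that harmonicity, $\Delta u = 0$, is unaffected by the sign convention $\Delta = -\mbox{div}\, \nabla$ used above.

\begin{verbatim}
import sympy as sp

# Sketch: symbolic check that u = 1/r is harmonic on R^3 \ {0}.
x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
u = 1 / r
# Sum of the pure second derivatives; simplifies to 0 away from the origin.
print(sp.simplify(sum(sp.diff(u, v, 2) for v in (x, y, z))))  # prints 0
\end{verbatim}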
We denote by $T$ the operator acting as $\Delta$ on the domain $C_c^\infty(M)$. We note that if $M$ has a boundary, then $T$ satisfies Neumann boundary conditions. By Green's formula, \[ \langle u,\Delta v \rangle = \int_M g(\nabla u, \nabla v)\, d\mu = \langle \Delta u, v \rangle\] for $u,v \in D(T)$. Therefore, $T$ is symmetric. On any Riemannian manifold, there are two natural self-adjoint extensions of $T$, namely, $\Delta_D = \nabla_D^* \nabla_D$, the Dirichlet Laplacian, and $\Delta_N =\nabla_N^*\nabla_N$, the Neumann Laplacian. Here, the operators $\nabla_D$ and $\nabla_N$ have domains \begin{align*} D(\nabla_D) &=H^1_0= \mbox{the closure of $C_c^\infty$ in $H^1$} \\ D(\nabla_N)&=H^1=\{u \in L^2 \mid \nabla u \in \vec L^2\} \end{align*} where $H^1$ is a Hilbert space with $\langle u,v \rangle_{1} = \langle u,v \rangle + \langle \nabla u, \nabla v \rangle$. Furthermore, $T$ is strictly positive if and only if the bottom of the spectrum of $\Delta_D$ is positive, that is, \[\lambda_0 (\Delta_D) = \inf_{u \in H^1_0(M),\ u \neq 0} { \|\nabla u\|^2 \over \|u\|^2 } > 0.\] We consider the energy form $a$ with domain $H^1_0$ and acting as \[a(f) = \int_M g(\nabla f, \nabla f)\, d\mu\] for $f \in H^1_0$. Let $f$ be in the kernel of $\overline{T}$ and let $(f_n)$ be a sequence in $C_c^\infty(M)$ such that $f_n \to f$ and $T f_n \to \overline T f$ in $\mathcal H$. Then, by the Green's formula above and by the lower semicontinuity of $a$, which follows as $a$ is a closed form by definition, we obtain \[ 0=\langle \overline T f, f \rangle= \lim_{n\to\infty} \langle T f_n, f_n \rangle = \liminf_{n\to\infty} a(f_n) \ge a(f). \] Hence, $\nabla f=0$, and by the weak Poincar\'e inequality, see \cite{CL}, $f$ is constant. In particular, all functions in the kernel of $\overline{T}$ are constant and thus Theorem~\ref{Main_theorem} applies and gives the following connection in the setting of Riemannian manifolds. \begin{corollary}\label{manifold_corollary} Let $M$ be a connected Riemannian manifold. Let $T$ be the Laplacian with domain $C^\infty_c(M)$. \begin{itemize} \item[(1)] If $T$ is essentially self-adjoint, then $T$ satisfies the $L^2$-Liouville property. In particular, all harmonic functions in $L^2(M,\mu)$ are constant if $M$ has no boundary. \item[(2)] If $T$ satisfies the $L^2$-Liouville property, $T$ is strictly positive and $\mu(M)=\infty$, then $T$ is essentially self-adjoint. \end{itemize} \end{corollary} We note that Gaffney showed that if $M$ is complete, then $H^1_0 = H^1$ \cite{G1, G2}. We remark that the condition $H^1_0 = H^1$ does not imply the essential self-adjointness of $T$. For example, if $M = N \setminus K$ where $N$ is a complete Riemannian manifold with dimension $n$ and $K$ is a compact submanifold with dimension $k$, then $H^1_0= H^1$ if and only if $n-k \ge 2$ and $T$ is essentially self-adjoint if and only if $n-k \ge 4$, see \cite{CdV, M}. Clearly, if $H^1_0 \neq H^1$, then $\nabla_D \neq \nabla_N$ so that $T$ is not essentially self-adjoint. We let $\overline{M}$ denote the metric completion of $M$ and call $\partial M = \overline{M} \setminus M$ the Cauchy boundary of $M$. We recall that the capacity of $\partial M$ is given by \[\mbox{Cap}(\partial M) = \inf \{ \|u \|^2_1 \mid u \in H^1, u=1 \mbox{ on a neighborhood of $\partial M$ in $M$} \}. \] It is known that if $0<\mbox{Cap}(\partial M)<\infty$, then $H^1_0 \neq H^1$, see \cite{M}. We now show the necessity of the additional assumptions in statement (2) of Theorem~\ref{Main_theorem} in this continuous model.
\begin{example}[Necessity of additional assumptions in (2) of Theorem~\ref{Main_theorem}] \ (i) Necessity of strict positivity. To show the necessity of $T$ being strictly positive in order for the $L^2$-Liouville property to imply essential self-adjointness, we consider $T$ on the half-line $(0,\infty) \subset \mathbb R$. We observe that $T$ is not strictly positive. Indeed, let \[ u_n (x) = \begin{cases} 0, & x \in (0,1),\\ {1 \over n^3}(x-1), &x \in [1,n^2+1], \\ {1 \over n^3}((2n^2+1)-x), &x \in [n^2+1,2n^2+1],\\ 0, & x > 2n^2+1. \end{cases} \] Then $u_n \in H^1_0 ((0,\infty))$ and \[ \lambda_0(\Delta_D) \le { \|\nabla u_n\|^2 \over \| u_n\|^2 } \to 0 \] as $n \to \infty$. Thus, $T$ is not strictly positive. The Cauchy boundary of $(0,\infty)$ is $\{0\}$. If $\mbox{Cap}(0) =0$, then there exists a sequence $(f_n)$ in $H^1$ and sequences $(x_n)$ and $(y_n)$ in $(0,\infty)$ such that $0< x_n < y_n$, $y_n\to0$ as $n\to\infty$ with \[ \begin{cases} f_n|_{(0,x_n)}=1 \\ \Delta f_n|_{(x_n,y_n)} =0 \\ f_n|_{(y_n,\infty)}=0 \end{cases} \] and \[ 0 = \lim_{n\to\infty} \int_{x_n}^{y_n} |f'_n(x)|^2\, dx.\] Since $f_n(x) = {1 \over x_n-y_n} (x-y_n)$ for $x \in (x_n,y_n)$, we get \[ \int_{x_n}^{y_n} |f'_n(x)|^2\, dx = {1 \over y_n-x_n} \] which clearly does not tend to 0. This contradiction implies $\mbox{Cap}(0) >0$ so that $H^1_0 \neq H^1$ and thus $T$ on $(0,\infty)$ is not essentially self-adjoint. However, since any harmonic function $f$ on $(0,\infty)$, i.e., any function satisfying $f''=0$, is affine, in order for $f$ to be in $L^2$, $f$ must be equal to 0. This gives the $L^2$-Liouville property. \\ (ii) Necessity of $\mu(M)=\infty$. To show the necessity of $\mu(M)=\infty$, we consider the interval $(0,1] \subset \mathbb R$. This is a manifold with boundary $\{1\}$ so that we impose the Neumann boundary condition at $\{1\}$. Then $T$ satisfies the assumptions of Theorem \ref{Main_theorem} and is strictly positive. In fact, for $u \in D(T)=C_c^\infty((0,1])$ we have \begin{align*} \|u\|^2_{L^2}&=\int^1_0|u(x)|^2\, dx \\ &= \int^1_0 \left( \int^x_0 |u'(s)|\, ds \right)^2 dx \le \int^1_0 dx \int^1_0 |u'(s)|^2\, ds =\|u'\|^2_{L^2}. \end{align*} Moreover, $T$ is not essentially self-adjoint by the same argument as in (i). However, since any harmonic function $f$ on $(0,1]$ is affine, in order for $f$ to be in $D(T^*)$, $f$ must be a constant function by the Neumann boundary condition at $\{1\}$. This implies that $T$ satisfies the $L^2$-Liouville property. \end{example} \section{The $L^2$-Liouville property of the Laplacian on $M \setminus \{p\}$} \label{Liouville property manifolds} As mentioned in the introduction, the Laplacian on a complete manifold satisfies the $L^2$-Liouville property and is essentially self-adjoint. In this section, we consider the case of a manifold with a point removed, which is thus not complete, and show that these properties may fail. More specifically, we show that positivity of the bottom of the spectrum and the dimension of the manifold are the keys to the failure of the $L^2$-Liouville property. In particular, we let $M$ be a manifold of dimension 2 or 3, remove a point $p$, and suppose that $\lambda_0=\lambda_0(\Delta_D)>0$ where $\lambda_0(\Delta_D)$ denotes the bottom of the spectrum of the Dirichlet Laplacian $\Delta_D$.
Then, the fact that the dimension is 2 or 3 will imply that the Green's function with pole at $p$ is in $L^2$ on a neighborhood of $p$ and $\lambda_0>0$ will imply that the Green's function is in $L^2$ outside of a compact set which includes $p$, implying the failure of the $L^2$-Liouville property on $M \setminus \{p\}$. We note that the Laplacian on $M \setminus \{p\}$ is essentially self-adjoint if and only if $\mbox{dim } M \ge 4$, see \cite{CdV,M}. We start by recalling some basic facts and proving an estimate for the Green's function in terms of capacity. The capacity cap$(\Omega)$ of a precompact open set $\Omega \subseteq M$ is defined as \[\textrm{cap}(\Omega) = \lim_{n\to\infty} \inf_{\psi \in \textup{Lip} (K_n, \Omega)}\|\nabla \psi\|^2_{L^2(K_n)} \] where $(K_n)$ is an exhaustion of $M$ with $\overline{\Omega} \subseteq K_n$ and $\textup{Lip}(K_n, \Omega) \subseteq H^1_0(M)$ is the set of locally Lipschitz functions $\psi$ on $M$ with compact support in $K_n$ such that $0 \le \psi \le 1$ and $\psi|_{\overline \Omega}=1$ \cite{G}. We note that this notion is distinct from the capacity of the Cauchy boundary introduced in the previous section. We recall that $\inf_{\psi \in \textup{Lip} (K_n, \Omega)}\|\nabla \psi\|^2_{L^2(K_n)}$ is attained by the equilibrium potential $u_n$ of $\Omega$ in $K_n$ which is the unique solution to \[ \begin{cases} \Delta u_n (x)=0, & x \in K_n \setminus \overline{\Omega}\\ u_n(x) =0, & x \in M \setminus K_n\\ u_n(x)=1, & x \in \overline\Omega. \end{cases} \] \begin{lemma}\label{lemma5} Let $M$ be a connected Riemannian manifold without boundary. Let $p \in M$ and let $\Omega \subset M$ be a precompact open set with $p \in \Omega$. Let $\lambda_0 = \lambda_0(\Delta_D)$ denote the bottom of the spectrum of the Laplacian $\Delta_D$. If $\lambda_0>0 $, then $M$ admits a positive Green's function $G$ and \[ \|G(p, \cdot)\|_{L^2(M \setminus \Omega,\mu)} \le {C \over \sqrt{ \lambda_0}}\sqrt{ \textnormal{cap}(\Omega)} \] where $C = \sup_{y \in \partial \Omega} G(p,y)$. \end{lemma} \begin{proof} Let $\Omega \subset M$ be a precompact open set with $p \in \Omega$ and let $(K_n)$ be an exhaustion of $M$ such that $\Omega \subset K_1$. Since $\lambda_0>0$, $M$ is transient and thus Proposition 10.1 of \cite{G} implies that $M$ admits a positive Green's function $G$ and \[\mbox{cap}(\Omega)>0.\] For $p \in \Omega$, set $g_n(\cdot)=G_n(p,\cdot)$ and $g(\cdot)= G(p,\cdot)$, where $G_n$ is the Green's function of $K_n$ with Dirichlet boundary conditions extended by 0 to $M \setminus K_n$. Let \[C= \sup_{y \in \partial \Omega} g(y).\] For $n>1$, $g_n \le g$ by the maximum principle. Let $u_n \in H^1_0(M)$ be the equilibrium potential of $\Omega$ in $K_n$. Since $g_n \le Cu_n$ on $\partial ( K_n \setminus \Omega )$, it follows that $g_n \le Cu_n$ on $ K_n \setminus \Omega$ for every $n>1$ by the maximum principle. Hence \begin{align*} \|g_n\|_{L^2(K_n \setminus \Omega)} &\le C\|u_n\|_{L^2(K_n \setminus \Omega)} \\ &\le {C\over \sqrt{\lambda_0}} \|\nabla u_n\|_{L^2(K_n \setminus \Omega)} \to {C\over \sqrt{\lambda_0}} \sqrt{\mbox{cap}(\Omega)} \end{align*} as $n\to\infty$. Since $0 \le g_n\le g_{n+1}$ and $g_n \to g$ as $n \to \infty$ on $M \setminus \Omega$, we apply Beppo Levi's monotone convergence theorem to get \[ \|g\|_{L^2(M \setminus \Omega)} \le {C \over \sqrt{\lambda_0}} \sqrt{\mbox{cap}(\Omega)} \] which completes the proof.
\end{proof} \begin{remark} That $\lambda_0>0$ implies the Green's function is in $L^2$ outside of a compact set can also be obtained via decay estimates on the Green's function on a complete manifold, see Corollary~22.4 in \cite{Li}. Our approach is more elementary as we only use the capacity of a set and the associated equilibrium potential. Furthermore, we do not assume completeness of the starting manifold. \end{remark} The result above gives that the Green's function is in $L^2$ outside of a compact set provided that the bottom of the spectrum is strictly positive. We now combine this with the fact that the Green's function is in $L^2$ on this compact set if the dimension is small to show the failure of the $L^2$-Liouville property for manifolds with positive bottom of the spectrum and small dimension. \begin{theorem}\label{t:NoLiouv_manifolds} Let $M$ be a connected Riemannian manifold without boundary. Let $\lambda_0 = \lambda_0(\Delta_D)$ denote the bottom of the spectrum of the Laplacian $\Delta_D$. If the dimension of $M$ is 2 or 3 and $\lambda_0>0$, then $G(p,\cdot)$, the positive Green's function with pole at $p \in M$, is in $L^2 (M \setminus \{p\}, \mu)$. In particular, the $L^2$-Liouville property fails on $M \setminus \{p\}$ for any $p \in M$. \end{theorem} \begin{proof} Let $p \in M$ and let $r>0$ be such that $B_r(p)$ is a precompact open set. If $\lambda_0>0$, then the positive Green's function $G$ with pole at $p$ exists and satisfies $G(p,\cdot) \in L^2(M \setminus B_r(p))$ by Lemma \ref{lemma5}. This together with the standard fact that $G(p,\cdot) \in L^2(B_r(p) \setminus \{p\})$ if the dimension of $M$ is 2 or 3, see \cite{G}, allows us to conclude that $G(p,\cdot) \in L^2(M \setminus \{p\})$. \end{proof} In contrast, in the rest of the section we will show that if the dimension is 4 or greater, then the $L^2$-Liouville property always holds on a complete manifold with a point removed. \begin{theorem}\label{922theorem7} Let $M$ be a connected complete Riemannian manifold without boundary. If the dimension of $M$ is at least 4, then the $L^2$-Liouville property holds on $M \setminus \{p\}$ for any $p \in M$. \end{theorem} We prove the theorem through a series of lemmas. For a fixed point $p \in M$, we write $r(x)=d(p,x)$ for the distance from $x \in M$ to $p$. For the following two lemmas, we assume that we are on an $n$-dimensional connected complete Riemannian manifold without boundary. \begin{lemma} \label{922lemma8} For a harmonic function $u$ on $B_1(p) \setminus \{p\}$, if \[u(x) = \begin{cases} o \left( {1\over r(x)^{n-2}} \right), & n\ge 3 \\ o \left( - \log (r(x)) \right), & n=2 \end{cases} \] as $r(x) \to 0$, then $u$ can be extended to $B_1(p)$ on which $u$ is harmonic. \end{lemma} \begin{proof} This is standard. One can apply the same argument as in the proof of Theorem 1.28 in \cite{HL}. \end{proof} \begin{lemma}\label{922lemma9} For a harmonic function $u$ on $B_1(p) \setminus \{p\}$, if $u \in L^2(B_1(p) \setminus \{p\})$, then \[u(x) = o \left( {1 \over r(x)^{n/2}} \right) \textup{ as } r(x) \to 0.\] \end{lemma} \begin{proof} Note that the Ricci curvature is uniformly bounded on $B_1(p)$. Thus, there exists $r_0>0$ such that for any $r < r_0$, the mean value inequality holds for $u$ \cite{LS}.
Therefore, for any $x \in \partial B_r(p)$, \[ u^2(x) \le {C \over \textrm{vol} \left( B_{r/2}(x) \right)} \int_{B_{r/2}(x)} u^2 d\mu \le {C \over r^n} \int_{B_{2r}(p) \setminus \{p\}} u^2 d\mu = o \left( {1 \over r^n} \right) \textup{ as } r \to 0 \] where the last assertion follows from the assumption that $u \in L^2(B_1(p) \setminus \{p\})$. This proves the lemma. \end{proof} We now combine the above lemmas to prove our second theorem of this section. \begin{proof}[Proof of Theorem \ref{922theorem7}] Let $u$ be a harmonic function on $M \setminus \{p\}$ such that $u \in L^2(M \setminus \{p\})$. By Lemma \ref{922lemma9}, \[ u(x) = o \left( {1 \over r(x)^{n/2}} \right) \textup{ as } r(x) \to 0.\] For $n \ge 4$, we have $n/2 \le n-2$, so this yields \[ u(x) = o \left( {1 \over r(x)^{n-2}} \right) \textup{ as } r(x) \to 0.\] Hence, $u$ can be extended to a harmonic function on $B_1(p)$ by Lemma \ref{922lemma8} and thus to an $L^2$ harmonic function on $M$. By the fact that the manifold is complete, $u$ is constant \cite{Y}. This proves the theorem. \end{proof} \begin{example} The Laplacian on the hyperbolic space $\mathbb{H}^n$ has $\lambda_0>0$ and does not satisfy the $L^2$-Liouville property on $\mathbb{H}^n \setminus \{p\}$ for $n=2,3$ by Theorem \ref{t:NoLiouv_manifolds}. Thus, we see that the dimension assumption in Theorem~\ref{922theorem7} is necessary. In contrast, the Laplacian on $\mathbb R^n \setminus \{0\}$ satisfies the $L^2$-Liouville property for any $n \ge 1$, see \cite{ABR}. \end{example} \begin{remark} By combining Corollary \ref{manifold_corollary} and the fact that for a complete manifold $M$ with no boundary the Laplacian on $M \setminus \{p\}$ is essentially self-adjoint if and only if dim $M \ge 4$, see \cite{M}, we can prove both: \begin{itemize} \item the failure of the $L^2$-Liouville property under the assumptions of Theorem \ref{t:NoLiouv_manifolds} if the manifold additionally has infinite measure, \item the result found in Theorem \ref{922theorem7}. \end{itemize} \end{remark} \section{The $\ell^2$-Liouville property of the Laplacian on $X \setminus \{o\}$}\label{s:Greens_graphs} In the previous section, we considered the case of a manifold with a point removed. In particular, we showed that the dimension is key to the failure of the $L^2$-Liouville property when the manifold is made metrically incomplete via the removal of a point. The main tool to show this was an estimate on the Green's function in terms of the capacity in the case of positive bottom of the spectrum. In this section, we follow a similar development for graphs. However, we note that neither dimension nor what to do following the removal of a point are well-established ideas for graphs. In particular, as we do not assume local finiteness, we note that the removal of a vertex can result in a graph with infinitely many connected components. The idea we follow here is that, at the neighbors of the removed vertex, we attach infinite paths which have finite length, thus mimicking the manifold setting where the removal of a point results in an incomplete manifold. We start with the definition of a capacity for a set. We note that this is distinct from the capacity introduced in Section~\ref{s:examples} in the context of the Cauchy boundary of a graph. Let $b$ be a graph over $(X,m)$. For a finite set $\Omega \subseteq X$, we let $$\textup{cap}(\Omega) = \inf_{\varphi \in C_c(X), \varphi _{\vert \Omega} =1} \ms{Q}(\varphi)$$ where $\ms{Q}$ is the energy form of $b$.
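This capacity is readily computable in simple cases. The following minimal sketch (our own illustration with NumPy; the edge weights are an arbitrary transient choice) approximates $\textup{cap}(\{0\})$ on a half-line graph by solving the discrete Dirichlet problem for the equilibrium potentials appearing in the exhaustion technique of the next lemma; note that the measure $m$ plays no role here, as the capacity only involves the energy form.

\begin{verbatim}
import numpy as np

# Sketch: approximate cap({0}) for the half-line graph with edge weights
# b(k, k+1) via equilibrium potentials on the exhaustion K_n = {0,...,n-1}
# with a Dirichlet condition at vertex n.
def capacity_estimate(n, b):
    # Solve (L phi)(x) = 0 for x in {1,...,n-1}, phi(0) = 1, phi(n) = 0.
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i, x in enumerate(range(1, n)):
        A[i, i] = b(x - 1, x) + b(x, x + 1)
        if x > 1:
            A[i, i - 1] = -b(x - 1, x)
        else:
            rhs[i] = b(0, 1)  # contribution of the boundary value phi(0) = 1
        if x < n - 1:
            A[i, i + 1] = -b(x, x + 1)
    phi = np.concatenate(([1.0], np.linalg.solve(A, rhs), [0.0]))
    # Energy: sum over edges of b * (increment)^2
    return sum(b(k, k + 1) * (phi[k + 1] - phi[k]) ** 2 for k in range(n))

b = lambda x, y: 2.0 ** min(x, y)  # b(k, k+1) = 2^k gives a transient path
for n in (5, 10, 20):
    print(n, capacity_estimate(n, b))
# The energies decrease monotonically to cap({0}) = 1/2, in line with
# the exhaustion lemma below.
\end{verbatim}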
We first show that for a finite set $\Omega \subseteq X$, the capacity can be achieved via an exhaustion technique. More specifically, we call a sequence of finite subsets $(K_n)$ of $X$ an exhaustion sequence of $X$ if $b$ restricted to $K_n$ gives a connected graph, $K_n \subseteq K_{n+1}$ for all $n \in \mathbb{N}$ and $X = \bigcup K_n$. With these notions, we state the following result, which should be known to experts. However, we briefly sketch the proof for the convenience of the reader. \begin{lemma}\label{l:cap_exhaustion} Let $b$ be a connected graph over $(X,m)$. If $\Omega \subseteq X$ is finite and $(K_n)$ is an exhaustion sequence of $X$ with $\Omega \subseteq K_1$, then $$\textup{cap}(\Omega) = \lim_{n \to \infty} \ms{Q}(\varphi_n)$$ where $\varphi_n \in C_c(X)$ satisfies $$\begin{cases} \ms{L} \varphi_n(x) =0, & x \in K_n \setminus \Omega \\ \varphi_n(x)=1, & x \in \Omega \\ \varphi_n(x) = 0, & x \in X \setminus K_n \\ \end{cases}$$ for $n \in \mathbb{N}$. \end{lemma} \begin{proof} For every $n \in \mathbb{N}$, we let $$C(K_n, \Omega) = \{ \varphi \in C(K_n) \ | \ \varphi_{\vert \Omega}=1 \}$$ and extend any function in $C(K_n, \Omega)$ by zero so that the function is defined on $X$. As $C(K_n, \Omega)$ is a finite dimensional space, the energy form $\ms{Q}$ has a unique minimizer on $C(K_n, \Omega)$ which we denote by $\varphi_n$ and which, via the Green's formula, will satisfy the system of equations given in the statement. Clearly, we have $\ms{Q}(\varphi_{n+1}) \leq \ms{Q}(\varphi_n)$ for all $n \in \mathbb{N}$. A maximum principle for harmonic functions such as Lemma 1.39 in \cite{G18} gives uniqueness of $\varphi_n$ and will also imply $$ 0 \leq \varphi_n \leq 1 \qquad \textup{ and } \qquad \varphi_n \leq \varphi_{n+1}$$ for all $n \in \mathbb{N}$. We note, in particular, that this can be used to establish the independence of the limiting energy on the exhaustion sequence. Furthermore, for every $n \in \mathbb{N}$, we clearly have $\textup{cap}(\Omega) \leq \ms{Q}(\varphi_n)$ so that $$\textup{cap}(\Omega) \leq \liminf_{n \to \infty} \ms{Q}(\varphi_n).$$ On the other hand, for every $\varepsilon>0$, there exists $\varphi_\varepsilon \in C_c(X)$ such that ${\varphi_\varepsilon}_{\vert \Omega}=1$ and $$\ms{Q}(\varphi_\varepsilon) \leq \textup{cap}(\Omega) +\varepsilon.$$ Now, since $\varphi_\varepsilon \in C_c(X)$, there exists some $N$ such that $\varphi_\varepsilon$ is supported on $K_N$ and thus $$\ms{Q}(\varphi_\varepsilon) \geq \ms{Q}(\varphi_N) \geq \ms{Q}(\varphi_n)$$ for all $n \geq N$. Therefore, for every $\varepsilon>0$, we have $$ \limsup_{n \to \infty}\ms{Q}(\varphi_n) \leq \textup{cap}(\Omega)+\varepsilon.$$ Combining inequalities and letting $\varepsilon \to 0$ gives the result. \end{proof} The function $\varphi_n$ constructed in Lemma~\ref{l:cap_exhaustion} above is called the equilibrium potential of $\Omega$ in $K_n$. These functions will be used below to estimate the Green's function which will be introduced next. We let $e^{-tL}$ denote the heat semigroup associated to the Laplacian $L$ acting on $\ell^2(X,m)$. We then let $p: X \times X \times [0,\infty) \longrightarrow \mathbb{R}$ denote the heat kernel which is defined via $$e^{-t L}f(x)=\sum_{y \in X}p_t(x,y) f(y)m(y)$$ for all $x \in X$, $t \geq 0$ and $f \in \ell^2(X,m)$. Finally, we let $G:X\times X \longrightarrow [0,\infty]$ denote the Green's function defined via $$G(x,y) = \int_0^\infty p_t(x,y)\, dt$$ for $x, y \in X$.
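For a quick illustration (again our own sketch with NumPy, under the assumption of a finite truncation with Dirichlet conditions as in the convergence statement below), note that on a finite set $K_n$ the positive definiteness of the restricted Laplacian $L_n$ gives $\int_0^\infty e^{-tL_n}\, dt = L_n^{-1}$, so the column $g_n = G_n(o,\cdot)$ of the Green's function can be computed by solving the single linear system $L_n g_n = 1_o/m(o)$.

\begin{verbatim}
import numpy as np

# Sketch: Green's function on a finite truncation K_n of a half-line
# graph with Dirichlet boundary. Since int_0^infty e^{-t L_n} dt = L_n^{-1},
# the column g_n = G_n(o, .) solves L_n g_n = 1_o / m(o).
def green_column(n, b, m, o=0):
    L = np.zeros((n, n))
    for x in range(n):
        L[x, x] = sum(b(x, y) for y in (x - 1, x + 1) if y >= 0) / m(x)
        for y in (x - 1, x + 1):
            if 0 <= y < n:
                L[x, y] -= b(x, y) / m(x)
    rhs = np.zeros(n)
    rhs[o] = 1.0 / m(o)
    return np.linalg.solve(L, rhs)

b = lambda x, y: 2.0 ** min(x, y)  # rapidly growing weights: transient
m = lambda x: 1.0
g = green_column(30, b, m)
print(g[:4])           # g decays quickly away from the pole o = 0
print((g ** 2).sum())  # and is square-summable on the truncation
\end{verbatim}

In the notation introduced below, $g_n \to g$ pointwise as $n \to \infty$, so such truncations approximate the Green's function of the infinite graph.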
If $b$ over $(X,m)$ is connected and $G(x,y) < \infty$ for one pair of vertices $x, y \in X$, then we have $G(x,y) < \infty$ for all pairs $x,y \in X$. In this case, we say that the graph is transient; otherwise, we say that the graph is recurrent. We note that the Green's function on a graph, and thus transience and recurrence, are often defined for random walks with a discrete time parameter. However, this is equivalent to using continuous time, see \cite{Sch}. By standard theory, transience is equivalent to the fact that $\textup{cap}(\{x\})>0$ for some (equivalently, all) $x \in X$. Furthermore, whenever the bottom of the spectrum is positive, i.e., $$\lambda_0(L) = \inf_{\varphi \in C_c(X), \varphi \neq 0} \frac{\ms{Q}(\varphi)}{\| \varphi \|^2}>0,$$ the graph is transient, see \cite{Sch, Soa, W, Woe} for further details. We assume that $b$ is transient, fix $o \in X$ and let $g \in C(X)$ be defined by $g(x)=G(o, x)$ for $x \in X$. We then calculate that $$\ms{L}g=\oh{1}_{o}$$ where $\oh{1}_{o} = 1_{o}/m(o)$ and $1_{o}$ is the characteristic function of the set $\{o\}$. In particular, $g$ is a function which is harmonic everywhere except for a single vertex. As in the previous section, we now discuss under which conditions $g$ is additionally in the Hilbert space $\ell^2(X,m)$. We then give a procedure for creating a harmonic function using $g$ on a graph over the vertex set $X \setminus \{o\}$. In the following, we will need the standard fact that for any exhaustion sequence $(K_n)$, if we denote the Laplacian resulting from restricting $\ms{L}$ to $\ell^2(K_n, m)$ by $L_n$ and the resulting heat kernel by $p_t^n(x,y)$ for all vertices $x, y \in X$ and $t \geq 0$, then $$p_t^n(x,y) \to p_t(x,y)$$ as $n \to \infty$, see \cite{KL}. In particular, this implies $$g_n(x) \to g(x)$$ for all $x \in X$ as $n \to \infty$ where $g_n(x) = \int_0^\infty p_t^n(o, x)\, dt$. We note that for any $x \not \in K_n$, we have $p_t^n(o,x) =0$ and thus $g_n(x)=0$. For a set $\Omega \subseteq X$, we let $$\partial \Omega = \{ x \in \Omega \ | \ \exists y \sim x, y \not \in \Omega\}$$ denote the boundary of the set. For a graph with positive bottom of the spectrum, we now show that the Green's function is in $\ell^2(X,m)$. We note that, in contrast to the manifold case above, the Green's function does not have a singularity at $o$. \begin{lemma}\label{l:greens_l^2} Let $b$ be a connected graph over $(X,m)$. Let $o \in X$ and let $\Omega \subseteq X$ be finite with $o \in \Omega$. Let $g(x) = G(o, x)$ for $x \in X$ and $C= \sup_{x \in \partial \Omega} g(x)$ where $G$ denotes the Green's function. Let $\lambda_0 = \lambda_0(L)$ denote the bottom of the spectrum of the Laplacian $L$. If $\lambda_0 >0$, then $$\|g\|_{\ell^2(X \setminus \Omega, m)} \leq \frac{C}{\sqrt{\lambda_0}} \sqrt{\textup{cap}(\Omega)}.$$ In particular, $g \in \ell^2(X,m)$. \end{lemma} \begin{proof} Let $(K_n)$ be an exhaustion sequence of $X$ such that $\Omega \subseteq K_1$. Let $\varphi_n$ denote the equilibrium potential of $\Omega$ in $K_n$ discussed in Lemma~\ref{l:cap_exhaustion} and let $g_n(x)=G_n(o,x)$ denote the Green's function on $K_n$. As $g_n \leq g$, we note that $g_n \leq C\varphi_n$ on $\partial \Omega$ and since $\ms{L} \varphi_n = \ms{L} g_n = 0$ on $K_n \setminus \Omega$ and $\varphi_n = g_n =0$ outside of $K_n$, it follows that $$g_n \leq C\varphi_n$$ by the maximum principle for harmonic functions, see Lemma 1.39 in \cite{G18}.
Therefore, using the definition of $\lambda_0$ and Lemma~\ref{l:cap_exhaustion}, we obtain \begin{align*} \|g_n\|^2_{\ell^2(K_n \setminus \Omega)} &\leq C^2 \|\varphi_n\|^2_{\ell^2(K_n \setminus \Omega)} \leq \frac{C^2}{\lambda_0}\ms{Q}(\varphi_n) \to \frac{C^2}{\lambda_0} \textup{cap}(\Omega) \end{align*} as $n \to \infty$. As $0 \le g_n\le g_{n+1}$ and $g_n \to g$ as $n \to \infty$ on $X$, we apply Beppo Levi's monotone convergence theorem to get $$\|g\|_{\ell^2(X \setminus \Omega)} \leq \frac{C}{\sqrt{\lambda_0}} \sqrt{\textup{cap}(\Omega)}.$$ The final statement follows since $\Omega$ is finite and $g$ is finite everywhere. \end{proof} Our aim is now to mimic Theorem~\ref{t:NoLiouv_manifolds} from the manifold case and to apply Lemma~\ref{l:greens_l^2} to create examples where the $\ell^2$-Liouville property fails. As $g$ satisfies $\ms{L}g=\oh{1}_{o}$, the basic idea is to create a new graph by removing the vertex $o$ where $g$ fails to be harmonic. To every vertex that was adjacent to $o$ in the original graph and for which the value of $g$ at that vertex is different from the value of $g$ at $o$, we then attach a path to infinity along which we extend $g$ harmonically. We note that, in general, this process will not result in a connected graph. In the construction outlined above, we will attach paths to infinity to certain neighbors of the removed vertex. We first analyze under which condition the extension of a non-constant harmonic function to such a path will be in $\ell^2(X,m)$. We will apply this to the function based on the Green's function in what follows. \begin{lemma}\label{l:harmonic_extension} Let $X = \mathbb{N}_0=\{0, 1, 2, \ldots\}$ and let $b$ be a graph over $(X,m)$ with $b(j,k)>0$ if and only if $|j-k|=1$. If $v \in C(X)$ satisfies $$\ms{L}v(r)=0$$ for $r \geq 1$, then $v$ is uniquely determined by the choice of $v(1)$ and $v(0)$. More specifically, if $C=b(0,1)(v(1)-v(0))$, then $$v(r+1) = v(1)+ C \sum_{k=1}^r \frac{1}{b(k,k+1)}$$ for $r \geq 1$. In particular, if $C \neq 0$, then $v$ is non-constant and if $$\sum_{r=1}^\infty \left( \sum_{k=1}^{r} \frac{1}{b(k,k+1)} \right)^2 m(r+1) < \infty,$$ then $v \in \ell^2(X,m)$. \end{lemma} \begin{proof} Let $v \in C(X)$ satisfy $\ms{L}v(r)=0$ for $r \geq 1$. Using induction, we see that if $C=b(0,1)(v(1)-v(0))$, then we obtain $$v(r+1)-v(r) = \frac{C}{b(r,r+1)}$$ for all $r \geq 1$ so that $$v(r+1) = v(1)+ C \sum_{k=1}^r \frac{1}{b(k,k+1)}.$$ From this formula, it is clear that if $C \neq 0$, then $v$ is non-constant and the summability assumption implies that $v \in \ell^2(X,m)$ by straightforward estimates. \end{proof} \begin{remark}[A comment on the condition in Lemma~\ref{l:harmonic_extension}] We comment on the summability condition $\sum_{r=1}^\infty \left( \sum_{k=1}^{r} 1/b(k,k+1) \right)^2 m(r+1) < \infty$ appearing above in Lemma~\ref{l:harmonic_extension}. As already noted, we wish to use this condition in order to extend the Green's function to the path in such a way that the Green's function remains in $\ell^2(X,m)$. As Lemma~\ref{l:harmonic_extension} shows, the condition on $b$ and $m$ allows the extension of a non-constant positive harmonic function to this path. Let us now contrast this with other conditions found for graphs with $X=\mathbb{N}$ and $b(j,k)>0$ if and only if $|j-k|=1$. We note that, by a direct calculation, any function which is harmonic at all vertices on such a graph is always constant.
In particular, the Laplacian on all such graphs satisfies the $\ell^2$-Liouville property. For this reason, in Lemma~\ref{l:harmonic_extension} we assume that the function is not harmonic at the first vertex. We note that if $m(\mathbb{N})=\infty$, then the Laplacian on such a graph is essentially self-adjoint \cite{KL}, so from Corollary~\ref{c:graphs_esa_liouv} one would not expect to have non-constant square integrable harmonic functions in this case. For $b$, the condition $\sum_k 1/b(k,k+1)<\infty$ is known to be equivalent to the transience of such graphs, see \cite{Woe}. Furthermore, in the case of $m(\mathbb{N})<\infty$, transience, stochastic incompleteness and the failure of form uniqueness are known to be equivalent, see \cite{GHKLW, Sch}. As the failure of form uniqueness implies the failure of essential self-adjointness, in the case of $m(\mathbb{N})<\infty$ and $\sum_k 1/b(k,k+1)<\infty$, we have the failure of essential self-adjointness. Obviously, these two conditions imply the summability assumption in Lemma~\ref{l:harmonic_extension}, which intertwines $b$ and $m$. In fact, it turns out that this condition is equivalent to the failure of essential self-adjointness for the Laplacian on such graphs, see the forthcoming work \cite{IMW}. Finally, let us note that the condition above actually forces the path to have finite length in any intrinsic path metric. Thus, as in the manifold case, after the removal of the vertex and the addition of such a path, we end up with a space that is not metrically complete. We discuss these notions further below, see the discussion leading up to and the statement of Corollary~\ref{c:incomplete}. \end{remark} We now make precise a way to construct a graph following the removal of a vertex. We let $b$ be a transient graph over $(X,m)$, let $o \in X$ and let $g(x) = G(o,x)$ where $G$ is the Green's function of the graph. We then decompose the set of neighbors of $o$ into $$N_{o}=\{x \in X \ | \ x \sim o, g(x) \neq g(o) \} \quad \textup{and} \quad N_{c} = \{x \in X \ | \ x \sim o, g(x) = g(o) \}.$$ We will ultimately attach paths to infinity to vertices in $N_o$ and a single additional vertex to vertices in $N_c$. Then, the Green's function can be extended by Lemma~\ref{l:harmonic_extension} to the paths which are attached to vertices in $N_o$ and by a constant to the new vertices which are neighbors of vertices in $N_c$. We note that $\ms{L}g(o) = 1/m(o)$ implies that $N_o$ is non-empty. On the other hand, $N_o$ in general will not contain all of the neighbors of $o$. For example, if $x \sim o$ is a vertex such that the set of vertices all of whose non-repeating paths to $o$ pass through $x$ is finite, then $x \in N_c$. Now, for every $x \in N_o$, we let $\mathbb{N}_x=\{x_1, x_2, x_3, \ldots\}$ denote a copy of the natural numbers and for every $x \in N_c$, we let $x_c$ denote a single new vertex.
We then let $X_e = \bigcup_{x \in N_o} \mathbb{N}_x$ and $X_c = \bigcup_{x \in N_c} \{x_c\}$ and define a new vertex set via $$X_o = (X\setminus \{o\}) \cup X_e \cup X_c.$$ We then let $b_o: X_o \times X_o \longrightarrow [0, \infty)$ be an edge weight defined so as to be symmetric and satisfy: \begin{itemize} \item $b_o(x,y) = b(x,y)$ for $x,y \in X \setminus \{o\}$, \item for every $x \in N_o$ we let $b_o(x,x_1)>0$ for $x_1 \in \mathbb{N}_x$, \item for $x \in N_o$ and $x_j, x_k \in \mathbb{N}_x$ we let $b_o(x_j,x_k) > 0$ if and only if $|j-k|=1$, \item for every $x \in N_c$ we let $b_o(x,x_c)>0$ for $x_c \in X_c$, \item $b_o(x,y) = 0$ for all other pairs of vertices $x, y \in X_o$. \end{itemize} In other words, we remove the vertex $o$ and to every neighbor of $o$ in $N_o$ we now attach an infinite path and to every neighbor of $o$ in $N_c$ we attach a single vertex and no other connections. The measure $m$ defined on $X$ can be extended to a measure $m_o$ on $X_o$ in an arbitrary manner; however, we will specify some additional requirements for both $m_o$ and $b_o$ below. \begin{remark}[A comment on connectedness] We recall that a path is a sequence of vertices $(x_n)$ with $b(x_n, x_{n+1}) > 0$ for all $n \in \mathbb{N}$ and that we call a graph connected if for any two vertices there exists a path that starts at one of the vertices and ends at the other. We note that the graph $b_o$ over $(X_o,m_o)$ will not, in general, be connected. This deserves some comment as we wish to analyze the $\ell^2$-Liouville property and the essential self-adjointness of the Laplacian associated to $b_o$ and the connections between these properties have only been established for connected graphs. We note that a graph that is not connected will, in general, not satisfy the $\ell^2$-Liouville property as a harmonic function can take on different constant values on the connected components, i.e., the maximal connected subsets of the vertex set. In particular, if the removal of a vertex results in a graph with at least two connected components of finite measure, then the Laplacian on the resulting graph will never satisfy the $\ell^2$-Liouville property. On the other hand, the Laplacian on such a graph might still be essentially self-adjoint. However, for our results below, we attach paths to infinity and show that there is at least one infinite connected component where both of these properties fail under some additional assumptions on the edge weights and measure. In particular, the Laplacian can be decomposed into a direct sum of operators on connected components and if the operator on one of these components is not essentially self-adjoint, then the Laplacian on the entire graph is not essentially self-adjoint. \end{remark} Given this new graph $b_o$ over $(X_o,m_o)$, Lemma~\ref{l:harmonic_extension} shows that if $b$ is transient, then there is a unique way of using the values of $g$, the Green's function of $b$ over $(X,m)$, to define a function $g_o$ on the new vertex set $X_o$ so that $g_o$ is harmonic. More specifically, we let $g_o$ be a function on $X_o$ defined by letting $g_o(x) = g(x)$ for $x \in X \setminus \{o\}$ and $g_o(x_c)=g(o)$ for every $x_c \sim x \in N_c$. For every $x \in N_o$ and $x_1 \in \mathbb{N}_x$ we first let $$g_o(x_1)=g(x) + \frac{1}{b_o(x,x_1)} \sum_{y \in X \setminus \{o\}} b(x,y)(g(x)-g(y)).$$ We note that since $x \in N_o$ we have $g(x) \neq g(o)$ and since $\ms{L}g(x)=0$ we obtain $\sum_{y \in X \setminus \{o\}} b(x,y)(g(x)-g(y)) \neq 0$. In particular, we see that $g_o(x_1) \neq g(x)$.
We then use Lemma~\ref{l:harmonic_extension} to define $g_o(x_r)$ for $r > 1$ so that $g_o$ is harmonic on $\mathbb{N}_x$. More specifically, if $C= b_o(x_1,x)(g_o(x_1)-g(x))$, then Lemma~\ref{l:harmonic_extension} gives $$g_o(x_{r+1}) = g_o(x_1) + C\sum_{n=1}^r \frac{1}{b_o(x_n, x_{n+1})}$$ for $r \geq 1$. As $C \neq 0$, we see that $g_o$ is not constant on $\mathbb{N}_x$. In this way, it follows by a direct calculation that $g_o$ is harmonic on $X_o$. Furthermore, if $g \in \ell^2(X,m)$, then Lemma~\ref{l:harmonic_extension} shows how to choose the weights $b_o$ and measure $m_o$ in such a way as to ensure that the extension $g_o \in \ell^2(X_o,m_o)$. More specifically, we obtain the following statement which can be thought of as a counterpart to Theorem~\ref{t:NoLiouv_manifolds} from the manifold setting. \begin{theorem}\label{t:graph_no_Liouv} Let $b$ be a connected graph over $(X,m)$ such that $\ms{L}(C_c(X)) \subseteq \ell^2(X,m)$ and $\lambda_0>0$. Let $o \in X$ be such that the number of neighbors of $o$ is finite. Let $b_o$ over $(X_o,m_o)$ be defined as above and so that for every $x \in N_o$ and for $x_n \in \mathbb{N}_x$ we have $$\sum_{r=1}^\infty \left( \sum_{n=1}^{r}\frac{1}{b_o(x_n,x_{n+1})}\right)^2 m_o(x_{r+1})<\infty.$$ Then, there exists a harmonic function in $\ell^2(X_o,m_o)$ which is non-constant on an infinite connected component of $b_o$. In particular, the Laplacian $L_{o,c}$ associated to $b_o$ over $(X_o,m_o)$ does not satisfy the $\ell^2$-Liouville property and is not essentially self-adjoint. \end{theorem} \begin{proof} By Lemma~\ref{l:greens_l^2}, as we assume that $\lambda_0>0$, we have $g \in \ell^2(X,m)$. By our summability assumption and Lemma~\ref{l:harmonic_extension}, we obtain that $g_o \in \ell^2(\mathbb{N}_x, m_o)$ for every $x \in N_o$. As both $N_o$ and $N_c$ are finite sets by assumption, we obtain that $g_o \in \ell^2(X_o, m_o)$. Therefore, $g_o$ is a non-constant harmonic function in $\ell^2(X_o,m_o)$ so that the $\ell^2$-Liouville property fails for $b_o$ over $(X_o,m_o)$. Furthermore, as $N_o \neq \emptyset$, it follows from the construction that there exists at least one infinite connected component of $b_o$ over $(X_o,m_o)$ for which $g_o$ is non-constant. In particular, for this infinite connected component, it follows that Corollary~\ref{c:graphs_esa_liouv} applies to the restriction of $L_{o,c}$ to this component, and thus this restriction is not essentially self-adjoint. From this it follows that $L_{o,c}$ is not essentially self-adjoint. \end{proof} We note that some of the results on manifolds from the previous section invoke completeness at various points. We will now make precise an analogue of geodesic completeness for graphs. In particular, we wish to discuss why, in Theorem~\ref{t:graph_no_Liouv}, the graph resulting from the removal of a vertex and attachment of paths as specified cannot be geodesically complete. A natural way to define a metric on a connected graph is to assign lengths to edges and then take the length of the shortest path that connects two vertices. We let $\sigma:X \times X \longrightarrow [0,\infty)$ be a symmetric function such that $$\sigma(x,y) > 0 \qquad \textup{ if and only if} \qquad x \sim y.$$ We think of $\sigma(x,y)$ as the length of the edge connecting $x$ and $y$ and thus call $\sigma$ a length function. For vertices $x, y \in X$ we let $\Pi_{x,y}$ denote the set of all paths from $x$ to $y$, that is, $(x_k)_{k=0}^n \in \Pi_{x,y}$ if $(x_k)$ is a path with $x_0=x$ and $x_n =y$.
We then let the length of a path $(x_k)$ be defined by $l_\sigma((x_k))= \sum_{k=0}^{n-1} \sigma(x_k, x_{k+1})$ and define a metric via $$d_{\sigma}(x,y)= \inf_{(x_k) \in \Pi_{x,y}} l_\sigma((x_k)).$$ We call any metric arising in this way a path metric. We call a path $(x_n)$ a geodesic if $d_{\sigma}(x_j,x_k) = \sum_{i=j}^{k-1} \sigma(x_i, x_{i+1})$ for all indices $j<k$ for which the path is defined. We call the graph geodesically complete if every infinite geodesic has infinite length, that is, $l_{\sigma}((x_n))=\infty$ if $(x_n)$ is a geodesic with infinitely many vertices. A graph is called locally finite if every vertex has only finitely many neighbors. Theorem A.1 in \cite{HKMW} states that, for path metrics on locally finite graphs, geodesic completeness, metric completeness and the finiteness of all balls defined with respect to the metric are equivalent. Thus, for locally finite graphs, we simply call such graphs complete with respect to the path metric. However, to derive essential self-adjointness from completeness, we need to consider metrics that also take into account the values of the edge weights as well as the vertex measure. These are the so-called intrinsic metrics which we introduce next. Specifically, we call a metric $\varrho$ on $X$ intrinsic for $b$ over $(X,m)$ if $$\sum_{y \in X} b(x,y) \varrho^2(x,y) \leq m(x)$$ for all $x \in X$. A natural way to construct an intrinsic metric using lengths is to let $\textup{Deg}(x) = \frac{1}{m(x)} \sum_{y \in X} b(x,y)$ denote the weighted degree of a vertex $x \in X$ and let $$\sigma(x,y) = \min \left \{\frac{1}{\sqrt{\textup{Deg}(x)}},\frac{1}{\sqrt{\textup{Deg}(y)}} \right\}$$ denote the length of an edge. If we let $\varrho_\sigma = d_\sigma$ denote the path metric defined via $\sigma$ as above, then, as $\varrho_\sigma(x,y) \leq \sigma(x,y)$ for all $x,y$ with $b(x,y)>0$, we get that $\varrho_\sigma$ is intrinsic. Thus, intrinsic metrics always exist in the graph setting. However, in contrast to the case for manifolds, it is not true that there exists a unique maximal intrinsic metric. For more background on the notion and use of intrinsic metrics in our setting, see \cite{FLW, K, Woj2}. One of the main results of \cite{HKMW} states that, for locally finite graphs and path metrics, completeness with respect to an intrinsic path metric implies essential self-adjointness. Thus, we directly derive the following. \begin{corollary}\label{c:incomplete} Let $b$ be a connected locally finite graph over $(X,m)$ with $\lambda_0>0$. Let $b_o$ over $(X_o,m_o)$ be defined so that for every $x \in N_o$ and $x_n \in \mathbb{N}_x$ we have $$\sum_{r=1}^\infty \left( \sum_{n=1}^{r}\frac{1}{b_o(x_n,x_{n+1})}\right)^2 m_o(x_{r+1})<\infty.$$ Then, $b_o$ over $(X_o,m_o)$ is not complete with respect to any intrinsic path metric. In particular, $$l_\sigma((x_n))<\infty$$ for all paths attached at $x \in N_o$ where $\sigma$ is a length function which defines an intrinsic path metric for $b_o$ over $(X_o,m_o)$. \end{corollary} \begin{proof} The first part of the statement follows from Theorem~\ref{t:graph_no_Liouv} above and Theorem~2 in \cite{HKMW}, which gives that the failure of essential self-adjointness implies incompleteness with respect to intrinsic path metrics. In particular, we note that the local finiteness assumption implies that $o$ has only finitely many neighbors and also that $\mathcal{L}(C_c(X)) \subseteq \ell^2(X,m)$ as follows by a direct calculation.
For the statement on the length of paths, suppose that $l_\sigma((x_n))=\infty$ for some attached path satisfying the summability assumption. Then we could start with a complete graph with $\lambda_0>0$ and carry out the procedure to construct $b_o$ over $(X_o,m_o)$ using only such paths. As we would attach only paths to infinity which have infinite length, the resulting graph would not have any infinite geodesic of finite length and would thus be complete, giving a contradiction to what we have just shown. Thus, we obtain that $l_\sigma((x_n))<\infty$ for all paths satisfying the summability assumption and for all lengths $\sigma$ which define intrinsic path metrics on $b_o$ over $(X_o,m_o)$. This completes the proof. \end{proof} We finish the paper with two remarks. \begin{remark} The assumption that $\lambda_0>0$ can be removed in the corollary above. In particular, we note that both the fact that $\sigma$ defines an intrinsic path metric for $b_o$ over $(\mathbb{N}_x,m_o)$ and the summability assumption on the edge weights and measure on the path $\mathbb{N}_x$ are independent of the original graph $b$ over $(X,m)$. Thus, we can use the result above to show that all such paths must have finite length with respect to an intrinsic metric regardless of the starting graph. Attaching such a path to any graph will then give metric incompleteness if $\sigma$ is extended to the graph in such a way as to make an intrinsic path metric. We note that we have assumed transience of $b$ over $(X,m)$ for the construction of $b_o$ over $(X_o,m_o)$; however, this is also not necessary for this result as attaching paths of finite length will always produce an incomplete graph. \end{remark} \begin{remark} We recall that Theorem~\ref{922theorem7} in the manifold setting gives that if a manifold is complete and the dimension of the manifold is greater than or equal to 4, then removing a point results in a manifold which satisfies the $L^2$-Liouville property. In fact, by \cite{M}, it follows in this case that the Laplacian on the manifold is essentially self-adjoint. An analogue of this for the graph case is that starting with a locally finite complete graph, by removing a point and attaching paths which have infinite length with respect to a length function which gives an intrinsic metric, the resulting graph is still complete. In particular, the Laplacian on such a graph is essentially self-adjoint and, if connected, satisfies the $\ell^2$-Liouville property. In terms of capacity, there is a notion of capacity for which points in Euclidean space have positive capacity in dimensions 2 and 3, while having capacity zero in dimensions 4 and above, see \cite{HKM}. The graph case also has a counterpart in the sense that when adding paths to infinity, we may introduce a boundary point at infinity when such paths have finite length and then consider the capacity of such a point. In this context, it would be interesting to see if there is a condition that would force either positive capacity or capacity zero of a boundary point at infinity for every intrinsic path metric. \end{remark} \section*{Acknowledgements} The authors are grateful to Marcel Schmidt for a careful reading of the manuscript and for helpful comments. R.K.W.~would also like to thank J{\'o}zef Dodziuk for his support and for many fruitful discussions. Furthermore, the authors would like to thank the referees for useful comments and Isaac Pesenson for the invitation to submit this article.
\begin{bibdiv} \begin{biblist} \bib{ABR}{book}{ author={Axler, Sheldon}, author={Bourdon, Paul}, author={Ramey, Wade}, title={Harmonic function theory}, series={Graduate Texts in Mathematics}, volume={137}, edition={2}, publisher={Springer-Verlag, New York}, date={2001}, pages={xii+259}, isbn={0-387-95218-7}, review={\MR{1805196}}, doi={10.1007/978-1-4757-8137-3}, } \bib{BP}{article}{ author={Boscain, Ugo}, author={Prandi, Dario}, title={Self-adjoint extensions and stochastic completeness of the Laplace-Beltrami operator on conic and anticonic surfaces}, journal={J. Differential Equations}, volume={260}, date={2016}, number={4}, pages={3234--3269}, issn={0022-0396}, review={\MR{3434398}}, doi={10.1016/j.jde.2015.10.011}, } \bib{C}{article}{ author={Chernoff, Paul R.}, title={Essential self-adjointness of powers of generators of hyperbolic equations}, journal={J. Functional Analysis}, volume={12}, date={1973}, pages={401--414}, review={\MR{0369890 (51 \#6119)}}, } \bib{CL}{article}{ author={Chen, Roger}, author={Li, Peter}, title={On Poincar\'{e} type inequalities}, journal={Trans. Amer. Math. Soc.}, volume={349}, date={1997}, number={4}, pages={1561--1585}, issn={0002-9947}, review={\MR{1401517}}, doi={10.1090/S0002-9947-97-01813-8}, } \bib{CdV}{article}{ author={Colin de Verdi\`ere, Yves}, title={Pseudo-laplaciens. I}, language={French, with English summary}, journal={Ann. Inst. Fourier (Grenoble)}, volume={32}, date={1982}, number={3}, pages={xiii, 275--286}, issn={0373-0956}, review={\MR{688031}}, } \bib{FLW}{article}{ author={Frank, Rupert L.}, author={Lenz, Daniel}, author={Wingert, Daniel}, title={Intrinsic metrics for non-local symmetric Dirichlet forms and applications to spectral theory}, journal={J. Funct. Anal.}, volume={266}, date={2014}, number={8}, pages={4765--4808}, issn={0022-1236}, review={\MR{3177322}}, doi={10.1016/j.jfa.2014.02.008}, } \bib{G1}{article}{ author={Gaffney, Matthew P.}, title={The harmonic operator for exterior differential forms}, journal={Proc. Nat. Acad. Sci. U. S. A.}, volume={37}, date={1951}, pages={48--50}, issn={0027-8424}, review={\MR{0048138 (13,987b)}}, } \bib{G2}{article}{ author={Gaffney, Matthew P.}, title={A special Stokes's theorem for complete Riemannian manifolds}, journal={Ann. of Math. (2)}, volume={60}, date={1954}, pages={140--145}, issn={0003-486X}, review={\MR{0062490 (15,986d)}}, } \bib{GHKLW}{article}{ author={Georgakopoulos, Agelos}, author={Haeseler, Sebastian}, author={Keller, Matthias}, author={Lenz, Daniel}, author={Wojciechowski, Rados{\l}aw K.}, title={Graphs of finite measure}, journal={J. Math. Pures Appl. (9)}, volume={103}, date={2015}, number={5}, pages={1093--1131}, issn={0021-7824}, review={\MR{3333051}}, doi={10.1016/j.matpur.2014.10.006}, } \bib{GM}{article}{ author={Gesztesy, Fritz}, author={Mitrea, Marius}, title={A description of all self-adjoint extensions of the Laplacian and Kre\u{\i}n-type resolvent formulas on non-smooth domains}, journal={J. Anal. Math.}, volume={113}, date={2011}, pages={53--172}, issn={0021-7670}, review={\MR{2788354}}, doi={10.1007/s11854-011-0002-2}, } \bib{G}{article}{ author={Grigor{\cprime}yan, Alexander}, title={Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds}, journal={Bull. Amer. Math. Soc. 
(N.S.)}, volume={36}, date={1999}, number={2}, pages={135--249}, issn={0273-0979}, review={\MR{1659871 (99k:58195)}}, doi={10.1090/S0273-0979-99-00776-4}, } \bib{G18}{book}{ author={Grigor{\cprime}yan, Alexander}, title={Introduction to analysis on graphs}, series={University Lecture Series}, volume={71}, publisher={American Mathematical Society, Providence, RI}, date={2018}, pages={viii+150}, isbn={978-1-4704-4397-9}, review={\MR{3822363}}, doi={10.1090/ulect/071}, } \bib{G19}{book}{ author={Grigor{\cprime}yan, Alexander}, title={Heat kernel and analysis on manifolds}, series={AMS/IP Studies in Advanced Mathematics}, volume={47}, publisher={American Mathematical Society, Providence, RI; International Press, Boston, MA}, date={2009}, pages={xviii+482}, isbn={978-0-8218-4935-4}, review={\MR{2569498 (2011e:58041)}}, } \bib{HKLW}{article}{ author={Haeseler, Sebastian}, author={Keller, Matthias}, author={Lenz, Daniel}, author={Wojciechowski, Rados{\l}aw}, title={Laplacians on infinite graphs: Dirichlet and Neumann boundary conditions}, journal={J. Spectr. Theory}, volume={2}, date={2012}, number={4}, pages={397--432}, issn={1664-039X}, review={\MR{2947294}}, } \bib{HL}{book}{ author={Han, Qing}, author={Lin, Fanghua}, title={Elliptic partial differential equations}, series={Courant Lecture Notes in Mathematics}, volume={1}, edition={2}, publisher={Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI}, date={2011}, pages={x+147}, isbn={978-0-8218-5313-9}, review={\MR{2777537}}, } \bib{HKM}{article}{ author={Hinz, Michael}, author={Kang, Seunghyun}, author={Masamune, Jun}, title={Probabilistic characterizations of essential self-adjointness and removability of singularities}, language={English, with English and Russian summaries}, journal={Mat. Fiz. Komp\cprime yut. Model.}, date={2017}, number={3(40)}, pages={148--162}, issn={2587-6325}, review={\MR{3706135}}, doi={10.15688/mpcm.jvolsu.2017.3.11}, } \bib{HK}{article}{ author={Hua, Bobo}, author={Keller, Matthias}, title={Harmonic functions of general graph Laplacians}, journal={Calc. Var. Partial Differential Equations}, volume={51}, date={2014}, number={1-2}, pages={343--362}, issn={0944-2669}, review={\MR{3247392}}, doi={10.1007/s00526-013-0677-6}, } \bib{HKMW}{article}{ author={Huang, Xueping}, author={Keller, Matthias}, author={Masamune, Jun}, author={Wojciechowski, Rados{\l}aw K.}, title={A note on self-adjoint extensions of the Laplacian on weighted graphs}, journal={J. Funct. Anal.}, volume={265}, date={2013}, number={8}, pages={1556--1578}, issn={0022-1236}, review={\MR{3079229}}, doi={10.1016/j.jfa.2013.06.004}, } \bib{I}{article}{ author={Inoue, Atsushi}, title={Essential self-adjointness of Schr{\"o}dinger operators on the weighted integers}, journal={forthcoming}, } \bib{IMW}{article}{ author={Inoue, Atsushi}, author={Masamune, Jun}, author={Wojciechowski, Rados{\l}aw K.}, title={Essential self-adjointness of the Laplacian on weighted graphs: stability and characterizations}, journal={forthcoming}, } \bib{K}{article}{ author={Keller, Matthias}, title={Intrinsic metrics on graphs: a survey}, conference={ title={Mathematical technology of networks}, }, book={ series={Springer Proc. Math. Stat.}, volume={128}, publisher={Springer, Cham}, }, date={2015}, pages={81--119}, review={\MR{3375157}}, } \bib{KL}{article}{ author={Keller, Matthias}, author={Lenz, Daniel}, title={Dirichlet forms and stochastic completeness of graphs and subgraphs}, journal={J. Reine Angew. 
Math.}, volume={666}, date={2012}, pages={189--223}, issn={0075-4102}, review={\MR{2920886}}, doi={10.1515/CRELLE.2011.122}, } \bib{KLW}{article}{ author={Keller, Matthias}, author={Lenz, Daniel}, author={Wojciechowski, Rados{\l}aw K.}, title={Volume growth, spectrum and stochastic completeness of infinite graphs}, journal={Math. Z.}, volume={274}, date={2013}, number={3-4}, pages={905--932}, issn={0025-5874}, review={\MR{3078252}}, doi={10.1007/s00209-012-1101-1}, } \bib{Li}{book}{ author={Li, Peter}, title={Geometric analysis}, series={Cambridge Studies in Advanced Mathematics}, volume={134}, publisher={Cambridge University Press, Cambridge}, date={2012}, pages={x+406}, isbn={978-1-107-02064-1}, review={\MR{2962229}}, doi={10.1017/CBO9781139105798}, } \bib{LS}{article}{ author={Li, Peter}, author={Schoen, Richard}, title={$L^p$ and mean value properties of subharmonic functions on Riemannian manifolds}, journal={Acta Math.}, volume={153}, date={1984}, number={3-4}, pages={279--301}, issn={0001-5962}, review={\MR{766266}}, doi={10.1007/BF02392380}, } \bib{M}{article}{ author={Masamune, Jun}, title={Essential self-adjointness of Laplacians on Riemannian manifolds with fractal boundary}, journal={Comm. Partial Differential Equations}, volume={24}, date={1999}, number={3-4}, pages={749--757}, issn={0360-5302}, review={\MR{1683058}}, doi={10.1080/03605309908821442}, } \bib{RS1}{book}{ author={Reed, Michael}, author={Simon, Barry}, title={Methods of modern mathematical physics. I}, edition={2}, note={Functional analysis}, publisher={Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York}, date={1980}, pages={xv+400}, isbn={0-12-585050-6}, review={\MR{751959}}, } \bib{RS2}{book}{ author={Reed, Michael}, author={Simon, Barry}, title={Methods of modern mathematical physics. II. Fourier analysis, self-adjointness}, publisher={Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London}, date={1975}, pages={xv+361}, review={\MR{0493420}}, } \bib{Sch}{article}{ author={Schmidt, Marcel}, title={Global properties of Dirichlet forms on discrete spaces}, journal={Dissertationes Math.}, volume={522}, date={2017}, pages={43}, issn={0012-3862}, review={\MR{3649359}}, doi={10.4064/dm738-7-2016}, } \bib{Sch2}{article}{ author={Schmidt, Marcel}, title={On the existence and uniqueness of self-adjoint realizations of discrete (magnetic) Schr{\"o}dinger operators}, conference={ title={Analysis and geometry on graphs and manifolds}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={461}, publisher={Cambridge Univ. Press, Cambridge}, }, date={2020}, } \bib{Soa}{book}{ author={Soardi, Paolo M.}, title={Potential theory on infinite networks}, series={Lecture Notes in Mathematics}, volume={1590}, publisher={Springer-Verlag}, place={Berlin}, date={1994}, pages={viii+187}, isbn={3-540-58448-X}, review={\MR{1324344 (96i:31005)}}, } \bib{S}{article}{ author={Strichartz, Robert S.}, title={Analysis of the Laplacian on the complete Riemannian manifold}, journal={J. Funct. 
Anal.}, volume={52}, date={1983}, number={1}, pages={48--79}, issn={0022-1236}, review={\MR{705991}}, doi={10.1016/0022-1236(83)90090-3}, } \bib{W}{book}{ author={Woess, Wolfgang}, title={Random walks on infinite graphs and groups}, series={Cambridge Tracts in Mathematics}, volume={138}, publisher={Cambridge University Press}, place={Cambridge}, date={2000}, pages={xii+334}, isbn={0-521-55292-3}, review={\MR{1743100 (2001k:60006)}}, doi={10.1017/CBO9780511470967}, } \bib{Woe}{book}{ author={Woess, Wolfgang}, title={Denumerable Markov chains}, series={EMS Textbooks in Mathematics}, note={Generating functions, boundary theory, random walks on trees}, publisher={European Mathematical Society (EMS), Z\"urich}, date={2009}, pages={xviii+351}, isbn={978-3-03719-071-5}, review={\MR{2548569 (2011f:60142)}}, doi={10.4171/071}, } \bib{Woj}{book}{ author={Wojciechowski, Radoslaw Krzysztof}, title={Stochastic completeness of graphs}, note={Thesis (Ph.D.)--City University of New York}, publisher={ProQuest LLC, Ann Arbor, MI}, date={2008}, pages={87}, isbn={978-0549-58579-4}, review={\MR{2711706}}, } \bib{Woj2}{article}{ author={Wojciechowski, Rados{\l}aw K.}, title={Stochastic completeness of graphs: bounded Laplacians, intrinsic metrics, volume growth and curvature}, journal={J. Fourier Anal. Appl.}, date={to appear}, eprint={arXiv:2010.02009 [math.MG]}, } \bib{Y}{article}{ author={Yau, Shing Tung}, title={Some function-theoretic properties of complete Riemannian manifold and their applications to geometry}, journal={Indiana Univ. Math. J.}, volume={25}, date={1976}, number={7}, pages={659--670}, issn={0022-2518}, review={\MR{0417452}}, } \end{biblist} \end{bibdiv} \end{document}
{ "timestamp": "2021-02-18T02:26:02", "yymm": "2012", "arxiv_id": "2012.08936", "language": "en", "url": "https://arxiv.org/abs/2012.08936" }
\section{Introduction} Intuitively, different image understanding tasks offer complementary information for scene understanding and reasoning~\cite{kim2018visual,van2019disentangled,aditya2019integrating,zador2019critique, wu2017visual, ye2020seeing,chen2019touchdown,sistu2019neurall}. Therefore, networks that can perform multiple visual tasks on the same image are of very high interest~\cite{caruana1997multitask,kevis2019attentive,Kanakis2020ReparameterizingCF,Strezoski2019ManyTL,Vandenhende2020MTINetMT}. A key aspect -- effectively serving the ultimate goal of scene understanding and reasoning -- is often not part of their design. This paper is about this utility question: can we determine {\it where in the image it is necessary (or even meaningful) to perform a task?} For example, the task of recognizing human body parts is meaningful only in the presence of humans. Similarly, any attempt to estimate the normals of the sky is absurd. One may argue that we cannot know beforehand whether some task needs to be performed, without recognizing the image content. The content of the image may then reveal the task necessity. This begs the question: can we know what task needs to be performed where, while bypassing the content-task pairing altogether? When the answer to {\it where} is known -- either by learning or not -- we aim to design an algorithm that executes the given multi-task instructions in an efficient manner. For example, some applications of Augmented Reality may require human poses and the normals of the interacting surfaces. We show that such flexibility to locally activate some tasks allows us to design more compact multi-tasking networks. The task-specific annotations of images are often sparse, either by definition or due to missing annotations. Take, for example, facial landmarks or image saliency. Sometimes, the annotations may be missing simply because they would be futile. Even the well-curated PASCAL-MT~\cite{pascal-context, kevis2019attentive} dataset has label sparsities of $30.4 \%$, $7.5 \%$, $41.9 \%$, and $60.0 \%$ for semantic segmentation, human body parts, surface normals, and saliency, respectively. Such label sparsity only tends to get worse if the image annotations are crowd-sourced. In fact, it is simply impractical to expect pixel-wise dense annotations for large datasets, even at locations where the annotations are well defined. Merging datasets by cross-label intersection exhibits the same sparsity behaviour. Under such circumstances, it may be unnecessary to waste computational resources during learning on image pixels without labels. This calls for an efficient learning paradigm for multi-tasking from sparse labels. In this work, we show that efficient learning from sparse multi-task labels and executing spatially chosen multi-task instructions go hand-in-hand. The key idea of this paper is rather simple. We design a convolutional neural network that performs multiple, pixel-wise tasks. We feed every image, along with a composition of spatially distributed task requests -- which we call the Task Palette -- to execute pixel-specific tasks. We call this process CompositeTasking. The proposed network uses a single encoder-decoder architecture to perform all the tasks in one forward pass. The simplicity of such an architecture allows us to perform multiple tasks in an efficient and compact manner. An overview of our network is presented in Fig.~\ref{fig:overview}.
\begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures/hl_diagram.pdf} \captionsetup{font=small} \caption{\textbf{CompositeTasking.} Given an RGB image and a Task Palette as inputs, our CompositeTasking performs locally request-specific tasks to compute the output.} \label{fig:overview} \end{figure} The proposed method for CompositeTasking learns by task-specific batch normalization. Each task is performed by predicting layer-wise (only on the decoder side) affine batch normalization (BN) parameters, using a small task-conditioned network. This design choice dedicates the encoder to a compact visual representation shared among tasks. On the output side, each task is represented in an image format -- thereby performing conditional image-to-image translation. The image format fixes an arbitrary embedding for every task. We aim to map images to such embeddings, conditioned upon the spatially distributed task requests. The task-specific losses are then computed by mapping the predicted embedding to the task-appropriate label representations. For example, the predicted 3-channel values are mapped to class probabilities for the segmentation task to compute the cross-entropy, whereas pixel normals are directly regressed by minimizing the angular distance between the prediction and the normal's label. This design choice enforces tasks to share network parameters even on the decoder side. Surprisingly, such a simple design already offers very competitive results. During inference, only a small part of the embedding network performs computation. This allows our CompositeTasking network to use an efficient single-encoder, single-decoder architecture for all tasks. Furthermore, our training strategy enables users to request any task at any pixel. In fact, we also propose to learn the Task Palette, in case it is missing. The inferred palette follows some hand-crafted rules for task requests. It is then fed back to our network to execute the spatially distributed, rule-based tasks. Learning pixel-specific tasking has several benefits, which may be obvious when a parallel to image segmentation is drawn. In this work, we demonstrate the benefits with regard to a couple of chosen applications, namely, learning the Task Palette, task editing, and rule transfer. In the following, we summarize the key contributions of our work. \begin{itemize} \item We introduce the new problem of CompositeTasking, which we demonstrate to be useful for image understanding. \item A novel method for CompositeTasking is also proposed. It is significantly superior in terms of computational efficiency, and competitive in terms of performance for image understanding tasks. \item Applications of the proposed CompositeTasking network, namely predicting with an estimated Task Palette, task editing, and rule transfer, are also demonstrated in this paper. \end{itemize} \section{Related Work} \label{sec:related_work} \noindent\textbf{Multi-task learning (MTL).} MTL is concerned with learning multiple tasks simultaneously, while exerting shared influence on model parameters. The potential benefits are manifold, and include speed-up of training or inference, higher accuracy, better representations, as well as higher parameter or sample efficiency. A comprehensive survey on architectures, optimization and other aspects of MTL can be found in~\cite{Crawshaw2020MultiTaskLW}.
On the one hand, many MTL methods in the literature perform multiple tasks by a single forward pass, using shared trunk~\cite{Bragman2019StochasticFG,Lu2017FullyAdaptiveFS,Vandenhende2019BranchedMN,Liu2019EndToEndML,Doersch2017MultitaskSV}, cross talk~\cite{Misra2016CrossStitchNF}, or prediction distillation~\cite{Xu2018PADNetMG,Zhang2018JointTL,Zhang2019PatternAffinitivePA,Vandenhende2020MTINetMT} architectures. On the other hand, the following methods perform one task at a time by conditioning a shared encoder, using feature masking~\cite{Strezoski2019ManyTL}, task-specific projections~\cite{zhao2018modulation}, attention mechanisms~\cite{kevis2019attentive} or parametrized convolutions~\cite{Kanakis2020ReparameterizingCF}, while using one decoder head for each task. With CompositeTasking, we bridge the gap between the two paradigms: we perform multiple tasks on the input image within one forward pass, while executing one task at a time -- for each pixel. In stark contrast to conditioning a shared encoder, as done in~\cite{kevis2019attentive,Kanakis2020ReparameterizingCF,RA_series,Bilen2017UniversalRT,zhao2018modulation,Strezoski2019ManyTL}, we instead learn an unconditioned encoder, together with pixel-wise conditioning of a single unified decoder, to output the task composition. \noindent\textbf{Conditional normalization (CN).} Conditional normalization is the workhorse of many methods solving diverse problems in multi-domain learning~\cite{RA_series,Rebuffi2018EfficientPO}, image generation~\cite{Karras2019ASG,Brock2019LargeSG,park2019SPADE}, image editing~\cite{ntavelis2020sesame}, style transfer~\cite{Huang2017ArbitraryST}, and super-resolution~\cite{Wang2018RecoveringRT}. The operating principle is as simple as applying condition-dependent affine transformations on the normalized batch~\cite{ioffe2015BN}, local response~\cite{Krizhevsky2017ImageNetCW}, instance~\cite{Ulyanov2016InstanceNT}, layer~\cite{Ba2016LayerN}, or feature group~\cite{Wu2018GroupN}, allowing features to occupy different regions in the space, depending on the triggered condition. While many of the aforementioned tasks only require conditioning at the image level -- encoding for instance domain, class, or style -- we need to perform pixel-wise varying conditioning. Inspired by the success of dense conditioning for semantic image synthesis, we realise our pixel-wise task conditioning via spatially-adaptive normalization~\cite{Wang2018RecoveringRT,park2019SPADE}. To our knowledge, we present the first method that learns multiple tasks with a single conditional unified head for all tasks, and is moreover capable of performing multiple tasks -- for different regions -- in one forward pass. \noindent\textbf{Learning from partial labels.} Crowdsourcing platforms such as Amazon Mechanical Turk or reCAPTCHA have made image annotation affordable, and have brought valuable contributions to the computer vision community~\cite{Russakovsky2015ImageNetLS,Lin2014MicrosoftCC,Manen2017PathTrackFT,Zhang2018TheUE}. In order to make the best use of all annotators, an efficient approach is required for large-scale labelling, which may come at the price of only obtaining partial labels for each image. This trade-off is still favorable, since partially annotating more images typically outperforms dense labelling of fewer images, due to the increased variety of images seen during training~\cite{Durand2019LearningAD}.
Although the partial label problem has been addressed for diverse tasks such as segmentation~\cite{xu2015learning, Alonso2017CoralSegmentationTD}, depth densification~\cite{Qiu2019DeepLiDARDS}, or multi-label classification~\cite{Durand2019LearningAD,Huynh2020InteractiveMC}, it has only been tackled from the perspective of a single task at a time. In contrast, with our approach one can handle partial annotations of multiple tasks in the same image. Offering the ability to focus the learning of interesting tasks in interesting regions can significantly boost sample efficiency during training. \section{CompositeTasking Network} \label{sec:composite_netowrk} In this section, we first introduce the formal notation. The input image is denoted as $\mathcal{I} \in \mathbb{R}^{3 \times H \times W}$, where $H$ is the height and $W$ is the width of the image. The image is represented using $3$ color channels. Next, we introduce the Task Palette, which is denoted as $\mathcal{T} \in [1,...,K]^{H \times W}$. It is of the same spatial dimension as the input image $H \times W$, and it takes one of $K$ discrete values, where $K$ is the number of considered prediction tasks. The Task Palette specifies which task to predict at which pixel location. The model takes the image $\mathcal{I}$ and the Task Palette $\mathcal{T}$ as inputs and produces $\mathcal{O} = \mathsf{M} (\mathcal{I}, \mathcal{T})$, where $\mathcal{O} \in \mathbb{R}^{3 \times H \times W}$. The output has the same spatial dimension as the input image $H \times W$ and also has $3$ output channels. To construct the output $\mathcal{O}$, the model predicts task $t_{yx}$ at output location $o_{yx}$. The output $\mathcal{O}$ is called the Composite Task, since it is a spatial composition of the considered tasks $1,\ldots,K$. Pixel-wise, every task is represented as a $3$D vector $o_{yx} \in \mathbb{R}^3$. An overview of our architecture is presented in Figure~\ref{fig:UNET-likeesign}. A detailed network diagram can be found in the supplementary materials. \subsection{Network Overview} The proposed model is divided into two parts: the encoder and the decoder network. This is inspired by the U-net \cite{ronneberger2015u} architecture. The encoder only processes the input image $\mathcal{I}$. The decoder takes the features processed by the encoder, along with the Task Palette $\mathcal{T}$, to produce the output $\mathcal{O}$. The encoder's job is to learn to produce a very rich feature representation that is sufficient to predict all $K$ tasks. This representation is expected to capture enough information to be translated into different spatial compositions of tasks. The decoder's job is to take that feature representation, as well as a specific Task Palette $\mathcal{T}$, and translate it into the output $\mathcal{O}$. For the sake of simplicity, we choose $3$ channels for the output $\mathcal{O}$ to treat this problem as image-to-image translation. A higher number of channels can also be chosen, if needed. This choice is made merely for convenience, and is also supported empirically. Having the same number of output channels for each task, as well as the ability to predict different tasks at different spatial locations, allows us to have a truly multi-tasking network. This means that the exact same network can predict any considered task at any considered location with the exact same architecture. We believe that most visual tasks can also be embedded within a few channels.
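To make the shape conventions above concrete, the following is a minimal sketch of the CompositeTasking interface in PyTorch. The tensor names and the (commented) model call are illustrative assumptions, not part of a released implementation.
\begin{verbatim}
import torch

# Shapes follow the notation above: the image I is (B, 3, H, W), the
# Task Palette T is (B, H, W) with integer task ids in {1, ..., K},
# and the output O is (B, 3, H, W) -- one shared 3-channel format.
B, H, W, K = 2, 256, 256, 5
I = torch.randn(B, 3, H, W)             # input images
T = torch.randint(1, K + 1, (B, H, W))  # pixel-wise task requests

# model = CompositeTaskingNetwork(...)  # hypothetical constructor
# O = model(I, T)                       # O has shape (B, 3, H, W)
# At pixel (y, x), O[b, :, y, x] is the prediction for task T[b, y, x].
\end{verbatim}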
A similar practice of embedding outputs in a fixed format is in fact common in the domain of instance segmentation \cite{DBLP:conf/nips/NewellHD17,8014800,DBLP:journals/corr/abs-1708-02551,DBLP:journals/pami/LiangLWSYY18,DBLP:journals/corr/FathiWRWSGM17,DBLP:journals/corr/abs-1708-02550,DBLP:conf/cvpr/NevenBPG19} and in another related work~\cite{achille2019task2vec}. Our image-format-based output fixes an arbitrary embedding for every task. We aim to map images to such embeddings, conditioned upon the spatially distributed task requests. The task-specific losses are then computed by mapping the predicted embedding to the task-appropriate label representations. This allows our network to produce the same output format for all tasks, thereby making the addition of new tasks very simple. More importantly, additional tasks require no additional modules, network heads, etc. The architecture of the CompositeTasking network, as well as its parameter count, remains exactly the same. \begin{figure}[t] \centering \vspace{+2pt} \includegraphics[width=0.95\columnwidth]{figures/composit_net_smaller.pdf} \captionsetup{font=small} \caption{\textbf{Overview of the CompositeTasking network architecture.} We follow a U-Net~\cite{ronneberger2015u} like design for image-to-image translation. \textcolor{blue}{Blue} blocks are modules from the encoder backbone. \textcolor{green}{Green} blocks of the decoder are depicted in Figure \ref{fig:cond_module}. The \textcolor{gold}{yellow} block which produces the Task Palette embedding $\mathcal{E}(\mathcal{T})$ is depicted in Figure \ref{fig:e_embed_net}.\label{fig:UNET-likeesign} } \label{fig:composit_net} \end{figure} \subsection{Conditioning with the Task Palette} The spatially specific output conditioning is achieved by using the layer described in this section. This layer is inspired by \cite{park2019SPADE}, which was originally used to generate images with a desired semantic structure. We, however, perform BatchNorm normalization \cite{ioffe2015BN} only in the decoder, where task-specific affine transformation parameters are predicted separately for each pixel, conditioned upon the Task Palette value $t_{yx}$ at that spatial location. For notational convenience, we follow \cite{park2019SPADE}. Let $\mathbf{h}^i$ denote the activation of layer $i$ in the decoder, for a batch of $N$ samples. Let $H^i$ and $W^i$ be the height and width of the activation map and let $C^i$ be the number of channels in layer $i$. Then, the task-specific conditioning is achieved by using a layer which computes, \begin{equation} \label{eq:cond} h^{i+1}_{ncyx} = \gamma^{i}_{cyx}(t_{yx}) \frac{h^{i}_{ncyx} - \mu^{i}_{c}}{\sigma^{i}_{c}} + \beta^{i}_{cyx}(t_{yx}), \end{equation} where $h^{i+1}_{ncyx}$ is the output of our task-specific conditioning layer, and $\mu^{i}_{c}$ and $\sigma^{i}_{c}$ are the mean and standard deviation of the activations in channel $c$: \begin{align} \mu^{i}_{c} = \frac{1}{N H^i W^i} \sum_{n,y,x} h^{i}_{ncyx},\\ \sigma^{i}_{c} = \sqrt{\frac{1}{N H^i W^i} \sum_{n,y,x} \left((h^{i}_{ncyx})^2 - (\mu^{i}_{c})^2\right)}. \end{align} The affine parameters $\gamma^{i}_{cyx}(t_{yx})$ and $\beta^{i}_{cyx}(t_{yx})$ condition the normalized activation $h^{i}_{ncyx}$ based on the requested task $t_{yx}$. Unlike \cite{park2019SPADE}, which allows $\gamma^{i}_{cyx}$ and $\beta^{i}_{cyx}$ to depend on the surrounding semantics by using $3 \times 3$ convolutions on the conditioning input, we keep the operations for each task request independent.
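To illustrate, the following is a minimal PyTorch sketch of the conditioning operation of~\eqref{eq:cond}, under our reading of the text: BatchNorm statistics without learned affine parameters, with $\gamma$ and $\beta$ predicted from the palette embedding by $1 \times 1$ convolutions so that each pixel's task request stays independent. The module and argument names are our own.
\begin{verbatim}
import torch
import torch.nn as nn

class TaskConditionedNorm(nn.Module):
    # Normalize with per-channel batch statistics, then modulate with
    # pixel-wise gamma/beta predicted from the palette embedding E(T).
    def __init__(self, num_features, embed_dim, hidden=64):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # 1x1 convolutions keep every spatial location independent.
        self.shared = nn.Sequential(nn.Conv2d(embed_dim, hidden, 1),
                                    nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, 1)
        self.beta = nn.Conv2d(hidden, num_features, 1)

    def forward(self, h, e):
        # h: (N, C, H, W) activations; e: (N, embed_dim, H, W) palette
        # embedding, assumed resized to the resolution of h.
        normalized = self.bn(h)
        ctx = self.shared(e)
        return self.gamma(ctx) * normalized + self.beta(ctx)
\end{verbatim}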
During conditioning, \cite{park2019SPADE} aims to fit pixels meaningfully into their surroundings, whereas we are interested in making each request independent. This choice is motivated by the potential for flexible applications of our method, some of which are demonstrated later in this paper. In this process, we first transform the Task Palette into an embedding $\mathcal{E} = f(\mathcal{T})$, where $\mathcal{E} \in \mathbb{R}^{H \times W \times N_w}$. The parameters $\gamma^{i}_{cyx}(e_{yx}(t_{yx}))$ and $\beta^{i}_{cyx}(e_{yx}(t_{yx}))$ are then obtained using the embedding $\mathcal{E}$. Here, we present further details of our method using two blocks: (a) the task representation block; and (b) the task composition block. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures/e_embed_net.pdf} \captionsetup{font=small} \caption{\textbf{Task representation block.} Each task is processed independently to learn the task-specific embedding. We broadcast these embeddings according to the task request $\mathcal{T}$. The broadcast $\mathcal{E}$ is fed into the composition blocks from Figure~\ref{fig:cond_module}.} \label{fig:e_embed_net} \end{figure} \paragraph{Task representation block.} In order to embed the Task Palette $\mathcal{T}$ into $\mathcal{E} = f(\mathcal{T})$, we first learn the task-specific embeddings $ \{e_1,e_2,\ldots,e_K\}$ for each task. This is done by embedding each distinct value of the Task Palette as $e_k=f(z_k)$, where $z_k$ is the unique code of task $k$. The Palette's embedding $\mathcal{E}$ is then obtained by broadcasting the task-specific embeddings according to the task requests $\mathcal{T}$. All of the task composition blocks use the same embedding $\mathcal{E}$. In Figure~\ref{fig:e_embed_net}, we can see that each task is processed independently through a fully connected neural network, before broadcasting into $\mathcal{E}$. \paragraph{Task composition block.} A graphical representation of this block is depicted in Figure~\ref{fig:cond_module}. The task composition block receives the task embedding $\mathcal{E}$ and the features from the previous layer of the network. The conditioning operation of~\eqref{eq:cond} takes place within this block as follows: (i) features are processed using a standard convolution layer; (ii) the embedding $\mathcal{E}$ is processed independently for each task using two layers of $1 \times 1$ convolutions to obtain $\gamma^{i}_{cyx}(e_{yx}(t_{yx}))$ and $\beta^{i}_{cyx}(e_{yx}(t_{yx}))$; (iii) the operation of~\eqref{eq:cond} is then performed, followed by an activation function. The output of this block is the task-conditioned, transformed features. As shown in Figure~\ref{fig:composit_net}, we use the task composition blocks only in the decoder and skip connections of our network. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures/cond_module.pdf} \captionsetup{font=small} \caption{{\textbf{Task composition block}. This block receives task embedding and previous layer's features as inputs, and performs task-conditioned transformation of the features.}} \label{fig:cond_module} \end{figure} \subsection{Computing the Loss and Training} Every task $k$ has its own loss function of interest $\mathcal{L}_k$. The total loss is given by: \begin{equation} \label{eq:total_loss} \mathcal{L} = \sum_{k} \lambda_{k} \mathcal{L}_k (\mathcal{O}, \mathcal{Y}_k, \mathcal{T}), \end{equation} where $\mathcal{Y}_k$ are the labels for task $k$.
Usually the losses are computed per pixel location as $\mathcal{L}_k(o_{yx}, y_{yx})$, but this does not necessarily have to be the case. The hyperparameter $\lambda_k$ controls the balance/trade-off between the predictive performance of the $K$ tasks under consideration. The CompositeTasking network uses a standard training procedure. The images from the training set $\mathcal{I}$ are continuously passed to the network as inputs, along with the desired Task Palettes $\mathcal{T}$ to predict the output $\mathcal{O} = \mathsf{M} (\mathcal{I}, \mathcal{T})$. The predicted output $\mathcal{O}$ is given to the loss function \eqref{eq:total_loss}, along with the corresponding labels $\mathcal{Y}_1,...,\mathcal{Y}_K$ and Task Palette $\mathcal{T}$. The final loss is minimized using standard optimization algorithms~\cite{kingma2014adam}. \section{Tasks and Rules} \label{sec:tasks_and_rules} We consider the following dense prediction tasks. \noindent\textbf{Semantic segmentation.} This is the task of predicting which of the defined semantic classes the pixel belongs to. \noindent\textbf{Human body parts.} Similar to semantic segmentation, this is the task of predicting which of the defined human body parts the pixel belongs to. \noindent\textbf{Surface normals.} This is the task of predicting the $3$D orientation of the surface contained in the pixel. \noindent\textbf{Semantic edges.} This is the task of predicting edges between different objects on the input image. \noindent\textbf{Saliency.} This is the task of predicting which locations in the image are most conspicuous for human observers. Since the network output is constrained to 3 channels, we need to embed the tasks accordingly. Surface normals are $3$D vectors by nature, and fit in the output shape. In the case of edges and saliency, the output corresponds to a scalar probability of the positive class. One way to embed them is to predict them at all $3$ channels and calculate the mean at test-time. For the tasks of semantic segmentation and human parts, the pixel-wise outputs are usually represented as length-$C$ vectors representing the probability of each class. To this end, we transform the pixel-wise $3$D output $\mathbf{o}$. First, we assign a $3$D class anchor $\mathbf{a}_i$ to each class $i$, uniformly spread out in space (more details in the supplementary materials). Then, we compute a score vector $\mathbf{l} \in \mathbb{R}^{C}$ based on the distance to the class anchors, \begin{equation} \label{eq:seg_loss} l_i = \frac{1}{\| \mathbf{o} - \mathbf{a}_i \| + \epsilon}, \end{equation} where $\epsilon$ is a small constant. Hence, $\mathbf{l}$ has the highest value at the index of the closest class anchor. Applying a softmax operation $\hat{o}_i = \frac{e^{l_i}}{\sum_{j=1}^{C}e^{l_j}}$ transforms $\mathbf{l}$ to a probability measure that can be used with the common loss functions. One can argue that if we look at $\hat{o}$ as the predicted class probabilities, it is biased by the arbitrary definition of the class anchors. If class $i$ has the highest predicted probability, i.e., $i=\arg\!\max_j \hat{o}_j$, only classes $j$ whose anchors $\mathbf{a}_j$ lie close to the anchor $\mathbf{a}_i$ can have the second highest probability. Therefore, we cannot analyze class similarity by looking at these predicted probabilities. However, since our first priority for the segmentation tasks is predicting the correct class, this offers a simple and effective solution.
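For illustration, a minimal sketch of this anchor-based mapping from the $3$-channel output to class probabilities is given below; the anchor values are hypothetical placeholders, since the actual anchors are specified in the supplementary materials.
\begin{verbatim}
import torch

def anchors_to_probs(o, anchors, eps=1e-6):
    # o: (..., 3) predicted embedding; anchors: (C, 3) class anchors.
    # Scores l_i = 1 / (||o - a_i|| + eps), then a softmax over classes.
    dists = torch.linalg.norm(o.unsqueeze(-2) - anchors, dim=-1)
    scores = 1.0 / (dists + eps)
    return torch.softmax(scores, dim=-1)

# Example with C = 4 hypothetical anchors and one pixel prediction:
anchors = torch.tensor([[1., 0., 0.], [0., 1., 0.],
                        [0., 0., 1.], [1., 1., 1.]])
probs = anchors_to_probs(torch.tensor([0.9, 0.1, 0.0]), anchors)
\end{verbatim}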
In case one is interested in drawing more detailed conclusions from the prediction, alternative approaches can be taken. The rules for constructing Task Palettes $\mathcal{T}$ for our experiments are as follows. \noindent\textbf{Single task rule $\mathcal{S}$}: The Task Palette $\mathcal{T}$ has the same value $k$ at every location, $\forall x,y: t_{yx}=k$. Task $k$ can be changed every time the Task Palette is requested from this rule. \noindent\textbf{Random mosaic rule $\mathcal{R}_{1r}$}: The image is spatially divided into four rectangles by intersecting a vertical and horizontal line through a randomly chosen point $\mathbf{c}=(c_x, c_y)$. Each region gets a task assigned to it randomly. The assigned tasks, as well as the point $\mathbf{c}$, can be changed every time the Task Palette is requested from this rule. \noindent\textbf{Semantic rules $\mathcal{R}_2$ and $\mathcal{R}_3$}: These rules assign the tasks with respect to the image semantics. \noindent\textbf{Random rule $\mathcal{R}_{rnd}$}: This rule assigns a randomly chosen task to each pixel independently. More details on these rules can be found in the supplementary materials. Our rationale for choosing these rules is based on our desire to analyse the behaviour of our method. Rule $\mathcal{S}$ is used in the field for solving specific problems. Rule $\mathcal{R}_{1r}$ is one way of seeing what happens if one trains and tests the network by mixing tasks in the output randomly, without any specific rule or structure behind it. Rules $\mathcal{R}_2$ and $\mathcal{R}_3$ represent rules with some semantic meaning behind them. Finally, rule $\mathcal{R}_{rnd}$ is designed to test the proposed method's limits. \section{Implementation Details} \subsection{Data Set Description} The experiments are conducted on the PASCAL-MT data set from \cite{kevis2019attentive}. While constructing the data set, the authors distilled labels for some of the tasks, while others were taken from PASCAL~\cite{PASCAL} or PASCAL-Context~\cite{pascal-context}. The data set contains $4998$ training and $5105$ validation images. We predict the tasks mentioned in section \ref{sec:tasks_and_rules}. We evaluate the performance of semantic segmentation and human body parts with the mean intersection over union (mIoU). We evaluate the performance of saliency by computing the maximal mIoU over different thresholds. We evaluate the prediction of surface normals with the mean angular error (mErr). Finally, we evaluate the performance of edges with the optimal dataset F-measure (odsF) \cite{odsF}, using the implementation from \cite{seism}. These evaluation metrics are in concordance with recent multi-tasking work \cite{kevis2019attentive,Kanakis2020ReparameterizingCF}. \subsection{Experimental Setup} \label{sec:experimental_setup} In our experiments we use the following models: \noindent\textbf{CompositeTasking network (CTN)}. This is the network we proposed in section \ref{sec:composite_netowrk}. The network uses a ResNet34 \cite{he2016resnet} encoder. The decoder is built using the spatially varying conditioning blocks from Figure~\ref{fig:cond_module}, and it is much smaller than the encoder in terms of network parameters. The conditioning blocks use regular $1 \times 1$ convolutions in the skip connections and $3 \times 3$ elsewhere, which performed well empirically. \noindent\textbf{Single task networks baseline (STN)}. Here we have a different network for each task.
Each network has the same architecture as the CompositeTasking network, but instead of the spatially varying conditioning, it uses regular BatchNorm. This way each network has the same capacity. In the case of the single tasking rule $\mathcal{S}$ from section \ref{sec:tasks_and_rules}, each task will be supervised by a different network. With the other CompositeTasking rules $\mathcal{R}_{1r}$ and $\mathcal{R}_{2}$, the same applies, which means that the network for a specific task will only be supervised on the pixels that correspond to that task in the Task Palette $\mathcal{T}$. \noindent\textbf{Multi-head network baseline (MHN)}. Here we have a network with a shared encoder, and a different decoder for each task. This is a standard approach in multi-task learning~\cite{RA_series,Bilen2017UniversalRT,kevis2019attentive,Kanakis2020ReparameterizingCF}. The encoder is the same as in the CompositeTasking network, while the decoders have the same architecture, but use regular BatchNorm instead of the task-specific conditioning. Similarly to the above, this network is supervised by training each decoder only on the pixels of its corresponding task. Since CompositeTasking is a new concept, we evaluate the performance of our proposed CompositeTasking network (Figure \ref{fig:composit_net}) against standard baselines. The STN is a common pipeline for solving specific tasks in computer vision, while the MHN is a common pipeline for solving multiple tasks simultaneously in a multi-tasking fashion. For more implementation details and hyper-parameter values, see the supplementary materials. \section{Experiments} \label{sec:experiments} \subsection{General Behaviour} \label{sec:general_behaviour} To compare the performance of a method $m$ with the baseline models from Section~\ref{sec:experimental_setup}, we use the average per-task drop with respect to the single-tasking baseline $b$, $\Delta_m=\frac{1}{T}\sum_{i=1}^{T}(-1)^{l_i}\frac{M_{m,i}-M_{b,i}}{M_{b,i}}$, where $l_i = 1$ if a lower value is better for measure $M_i$ of task $i$, and $0$ otherwise~\cite{kevis2019attentive}. We first evaluate on the single-task rule $\mathcal{S}$. From Table~\ref{table:random_mosaics}, we can see that the CompositeTasking network is on par with the baselines, even though it is trained with randomly cropped label regions of rule $\mathcal{R}_{1r}$, and tested on the single-task setting $\mathcal{S}$. This is very interesting, since it is substantially more compact in terms of memory and computational complexity, as can be seen in Figure \ref{fig:net_complexity}. Also, this experiment tells us that a lot can be learned even if labels are presented only for arbitrary regions during training, as is the case with rule $\mathcal{R}_{1r}$. More interestingly, the part of the label that is presented during training is a random rectangular region without any semantic meaning behind the chosen region, and still the network performs competitively with the strongest multi-head baseline trained on complete labels. A few examples of the network's predictions are presented in Figures~\ref{fig:mosaic_prediction} and~\ref{fig:st_predictions}. More examples are presented in the supplementary materials. We can see that it poses no problem for the network to sharply switch from predicting one task to another with negligible boundary artifacts, while using the exact same architecture to predict different tasks.
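For concreteness, the average per-task drop $\Delta_m$ can be computed as in the sketch below; the example numbers are taken from Table~\ref{table:random_mosaics} and reproduce the reported $-4.93\%$ for our CTN trained on $\mathcal{R}_{1r}$.
\begin{verbatim}
def avg_per_task_drop(m_scores, b_scores, lower_is_better):
    # Relative change of method m vs. single-task baseline b, with the
    # sign flipped for metrics where lower is better; returned in %.
    terms = []
    for M_m, M_b, lower in zip(m_scores, b_scores, lower_is_better):
        sign = -1.0 if lower else 1.0
        terms.append(sign * (M_m - M_b) / M_b)
    return 100.0 * sum(terms) / len(terms)

# Metrics: Edge, SemSeg, Parts, Normals (mErr, lower is better), Sal.
delta = avg_per_task_drop(
    [68.60, 62.45, 52.59, 16.93, 67.81],  # CTN trained on R1r
    [69.50, 63.69, 58.76, 15.58, 69.38],  # STN single-task baseline
    [False, False, False, True, False])   # delta is about -4.93
\end{verbatim}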
For reference, the results from Table~\ref{table:random_mosaics} can be compared to SotA baselines in~\cite{kevis2019attentive} (Tables 2 and 3) and~\cite{Kanakis2020ReparameterizingCF} (Table 3). In Table \ref{table:r2_rule}, we see the performance evaluated using the semantic rule $\mathcal{R}_{2}$, where tasks are requested only in sparse, but meaningful and compact regions. When we supervise by using rule $\mathcal{R}_{2}$, again we see that the CompositeTasking network performs on par with the baselines that have far more parameters. This shows that it is not necessary to waste so many resources when dealing with very sparse labels. The performance of our model is almost the same as that of the much more demanding baseline, which has a separate network for each task. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Testing the models on the single task rule}. Our method matches the performance of baselines with much more capacity, when trained on randomly cropped label regions $\mathcal{R}_{1r}$.}} \label{table:random_mosaics} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c|c||c|c|c} \hline \rowcolor{mygray} Training rule & Method & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\uparrow$ \\ \hline \hline \multirow{2}{*}{$\mathcal{S}$} & STN & 69.50 & 63.69 & 58.76 & 15.58 & 69.38 & 0.0\% \\ \cline{2-8} & MHN & 68.10 & 60.77 & 54.21 & 16.44 & 67.21 & -4.60 \% \\ \hline \hline \multirow{3}{*}{$\mathcal{R}_{1r}$} & STN & 68.30 & 59.82 & 49.88 & 16.07 & 69.94 & -5.05\% \\ \cline{2-8} & MHN & 67.70 & 61.64 & 52.84 & 16.40 & 67.70 & -4.71\% \\ \cline{2-8} & CTN(Ours) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & -4.93\% \\ \hline \end{tabular} } \end{table} \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Testing the models on the semantic rule}. Our method matches the performance of baselines with much more capacity, when trained on the semantic rule $\mathcal{R}_{2}$.}} \label{table:r2_rule} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c|c||c|c|c} \hline \rowcolor{mygray} Training rule & Model & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\uparrow$ \\ \hline \hline \multirow{2}{*}{$\mathcal{S}$} & STN & 65.80 & 79.32 & 56.21 & 14.75 & 73.23 & 0.0\% \\ \cline{2-8} & MHN & 64.60 & 73.94 & 50.65 & 15.68 & 71.01 & -5.57\%\\ \hline \hline \multirow{3}{*}{$\mathcal{R}_{1r}$} & STN & 64.80 & 74.12 & 44.81 & 15.42 & 73.22 & -6.58\% \\ \cline{2-8} & MHN & 65.20 & 75.97 & 49.07 & 15.62 & 71.74 & -5.15\% \\ \cline{2-8} & CTN(Ours) & 62.30 & 76.73 & 48.87 & 16.45 & 71.40 & -7.13\%\\ \hline \hline \multirow{3}{*}{$\mathcal{R}_{2}$} & STN & 63.90 & 83.91 & 59.63 & 17.13 & 70.04 & -2.30\% \\ \cline{2-8} & MHN & 64.40 & 83.71 & 58.44 & 17.44 & 67.02 & -3.87 \% \\ \cline{2-8} & CTN(Ours) & 69.20 & 84.70 & 59.74 & 18.12 & 67.95 & -2.37\% \\ \hline \end{tabular} } \end{table} \begin{figure}[t] \centering \captionsetup{font=small} \vspace*{2pt} \includegraphics[width=0.95\columnwidth]{figures/network_complexity.pdf} \caption{\textbf{Model Complexity.} Memory and computational requirements of the compared methods for predicting $5$ tasks.
} \label{fig:net_complexity} \end{figure} \begin{figure}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} \multirow{1}{*}[15pt]{\rotatebox{90}{\textbf{Image}}} \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/1_img.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/2_img.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/3_img.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/4_img.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/5_img.png} \\ \multirow{1}{*}[15pt]{\rotatebox{90}{\textbf{Pred}}} \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/1_pred.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/2_pred.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/3_pred.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/4_pred.png} & \includegraphics[width=\sz\columnwidth]{images/mosaic_pred/5_pred.png} \end{tabular} \caption{\textbf{Random mosaic compositions.} CompositeTasking network predictions on requests with the $\mathcal{R}_{1r}$ rule. The network shows the ability to switch sharply to a different task at neighbouring pixels.} \label{fig:mosaic_prediction} \end{figure} \begin{figure}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{2pt} \newcommand{\sz}{0.1575} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{cccccc} \includegraphics[width=\sz\columnwidth]{images/st_pred/1_img.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/1_pred_edges.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/1_pred_seg.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/1_pred_parts.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/1_pred_normals.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/1_pred_saliency.png} \\ \includegraphics[width=\sz\columnwidth]{images/st_pred/2_img.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/2_pred_edges.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/2_pred_seg.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/2_pred_parts.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/2_pred_normals.png} & \includegraphics[width=\sz\columnwidth]{images/st_pred/2_pred_saliency.png} \end{tabular} \caption{\textbf{Single task predictions.} Even though our model is made for CompositeTasking, it can also make predictions on requests with the $\mathcal{S}$ rule. } \label{fig:st_predictions} \end{figure} \subsection{Learning What to Do Where} \label{sec:learning_where} Up until now we have considered cases in which it is known what tasks to predict where (the Task Palette $\mathcal{T}$ is known). This is definitely interesting in some use-cases, such as Augmented Reality applications where a user can specifically request what they wish the algorithm to do. It is even more interesting for the Task Palette to be predicted by the network itself, given the input image. One such example is when we have a semantic rule like $\mathcal{R}_{2}$, where for every image we can supervise what needs to be predicted where. We trained a network to predict the Task Palette from the input image ($75.04 \%$ mIoU), using supervision from $\mathcal{R}_{2}$. This can be used for automatic data labelling when we are interested in obtaining labels for multiple different tasks, but only in sparse regions of interest.
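One plausible way to set this up -- a sketch under our assumptions, not the exact architecture used in our experiments -- is to treat palette prediction as $K$-way semantic segmentation, trained with a pixel-wise cross-entropy on rule-generated palettes:
\begin{verbatim}
import torch
import torch.nn as nn

K = 5
palette_net = nn.Sequential(          # placeholder dense predictor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, K, 1))              # per-pixel task logits

criterion = nn.CrossEntropyLoss()
image = torch.randn(2, 3, 64, 64)
target = torch.randint(0, K, (2, 64, 64))   # rule-generated palette
loss = criterion(palette_net(image), target)
loss.backward()
# At inference, T = palette_net(image).argmax(dim=1) is fed to the
# CompositeTasking network together with the image.
\end{verbatim}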
\begin{figure}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} Image & Task Palette & Prediction & Predicted Palette & Prediction \\ \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/1_img.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/1_tmap_CT_gray.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/1_pred_CT.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/1_tmap_TM_gray.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/1_pred_TM.png}\\ \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/2_img.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/2_tmap_CT_gray.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/2_pred_CT.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/2_tmap_TM_gray.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/2_pred_TM.png}\\ \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/3_img.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/3_tmap_CT_gray.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/3_pred_CT.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/3_tmap_TM_gray.png} & \includegraphics[width=\sz\columnwidth]{images/predicting_task_map/3_pred_TM.png} \end{tabular} \caption{\textbf{Learning what to do where.} CompositeTasking network predictions with the learned Task Palette. A separate network is learned to predict the Task Palette with $75.04 \%$ mIoU. More examples can be found in the supplementary materials. } \label{fig:task_map_prediction} \end{figure} \subsection{Task Palette Editing} The world is striving towards automated processes, but things are not perfect just yet. Data labelling is a very unpleasant and time-consuming job if someone has to make dense annotations. Very often we are interested in labels of different tasks at different spatial locations. For example, we may want to predict the surface normals of cars so that we can perform realistic re-rendering, while predicting the semantic segmentation of the surrounding objects and edges everywhere else. That is a very clear rule that can be learned by the setup proposed in Section~\ref{sec:learning_where}. This will not be perfect, however, and from time to time mistakes will be made. In such scenarios there may be a human in the loop, making sure that all mistakes along the execution pipeline are corrected. Our framework makes it possible to take the predicted Task Palette, most of which is correct, and edit the mistakes in certain regions. In that setup, most of the work is done automatically, and the human in the loop puts her focus mostly on regions of high interest to the use-case. One such visual example is presented in Figure~\ref{fig:rule_editing}, while more can be found in the supplementary materials. This further highlights the flexibility of our proposed method.
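A minimal sketch of such an editing step follows; the task ids, region coordinates, and the commented model call are purely illustrative.
\begin{verbatim}
import torch

SEGMENTATION, NORMALS = 1, 3        # illustrative task ids
T = torch.randint(1, 6, (256, 256)) # stand-in for a predicted palette

# Human-in-the-loop correction: request surface normals inside a box
# (e.g., a car region that the palette predictor mislabeled).
y0, y1, x0, x1 = 100, 180, 60, 200
T[y0:y1, x0:x1] = NORMALS

# O = model(I.unsqueeze(0), T.unsqueeze(0))  # re-run on edited palette
\end{verbatim}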
\begin{figure}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} Image & Task Palette & Prediction & Edited Palette & Prediction \\ \includegraphics[width=\sz\columnwidth]{images/task_edit/img.jpg} & \includegraphics[width=\sz\columnwidth]{images/task_edit/task_map_gray.png} & \includegraphics[width=\sz\columnwidth]{images/task_edit/pred_task_map.png} & \includegraphics[width=\sz\columnwidth]{images/task_edit/task_edit_gray.png} & \includegraphics[width=\sz\columnwidth]{images/task_edit/pred_task_edit.png} \end{tabular} \caption{\textbf{Task Palette editing.} The original Task Palette, extracted from the label, did not look satisfactory. A correction was made manually. A prediction of the CompositeTasking network is shown before and after correction. } \label{fig:rule_editing} \end{figure} \subsection{Breaking the Rule} ``The only constant in life is change'', as Heraclitus once wisely said. In fact, prediction tasks too are constantly evolving and subject to change according to use case, context or available resources. One could think of an existing task rule, say $\mathcal{R}_{2}$, that needs to be adapted to cater for a changing use case demanding a new rule $\mathcal{R}_{3}$. The new rule can be similar to $\mathcal{R}_{2}$ in one way, but different in another. For instance, the same tasks are used as in $\mathcal{R}_{2}$, but now $\mathcal{R}_{3}$ requires different tasks to be performed in different regions. One practical example of this is predicting surface normals. Often, accurate normal labels are obtained from accurate 3D models. The 3D models, however, may cover only a part of the scene, and therefore of the image. This builds a rule of having normals only for the objects with 3D models. In fact, similar datasets exist. For example, datasets with 3D models of household objects like chairs and tables~\cite{lpt2013ikea}, and of the human body~\cite{Bogo2014FAUST}. Using a model trained on such datasets, one may be interested in predicting normals beyond the regions covered by 3D models. Here we show that our CompositeTasking network can indeed be trained by breaking the rule. We break the old rule by simply requesting the execution of the tasks of the new rule. This is then followed by fine-tuning our network on the new rule, if necessary. As shown in Table \ref{table:breaking_the_rule}, our model trained on $\mathcal{R}_{2}$ already performs well on the newly introduced rule $\mathcal{R}_{3}$, without any fine-tuning. The rule $\mathcal{R}_{3}$ is somewhat similar to $\mathcal{R}_{2}$, and the model shows the ability to extrapolate on their differences. After fine-tuning it on $\mathcal{R}_{3}$, the performance improves even more, as expected. One example of this is presented in Figure \ref{fig:breaking_the_rule}, while more can be found in the supplementary materials. Interestingly, in Table \ref{table:breaking_the_rule} we observe that performance on the old rule even improves when training on the new, similar rule. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Breaking the rule}. Our method can make successful predictions when changing from $\mathcal{R}_{2}$ to a somewhat similar rule $\mathcal{R}_{3}$ (described in the supplementary materials).
With fine-tuning on the new rule, the predictions get even better.}} \label{table:breaking_the_rule} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|c|c} \hline \rowcolor{mygray} Testing rule & Training rule & Edge$\uparrow$& Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\uparrow$ \\ \hline \multirow{3}{*}{$\mathcal{R}_{3}$} & $\mathcal{R}_{3}$ & 70.20 & 61.19 & 18.34 & 75.35 & 0.0\% \\ \cline{2-7} & $\mathcal{R}_{2}$ & 69.70 & 59.41 & 20.11 & 65.21 & -6.68\% \\ \cline{2-7} & Fine-tuned $\mathcal{R}_{2} \rightarrow \mathcal{R}_{3}$ & 69.70 & 60.91 & 18.68 & 75.00 & -0.87\% \\ \hline \hline \multirow{2}{*}{$\mathcal{R}_{2}$} & $\mathcal{R}_{2}$ & 69.20 & 59.74 & 18.12 & 67.95 & 0.0\% \\ \cline{2-7} & Fine-tuned $\mathcal{R}_{2} \rightarrow \mathcal{R}_{3}$ & 69.40 & 60.84 & 17.95 & 68.31 & +0.90\% \\ \hline \end{tabular} } \end{table} \begin{figure}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} Image & Pred $R_{2}$ & Pred $R_{3}$ & Pred $R_{2}$ & Pred $R_{3}$ \\ & (Trained $R_{2}$) & (Trained $R_{2}$) & (Tuned $R_{2} \rightarrow R_{3}$) & (Tuned $R_{2} \rightarrow R_{3}$) \\ \includegraphics[width=\sz\columnwidth]{images/breaking_the_rule/1_img.png} & \includegraphics[width=\sz\columnwidth]{images/breaking_the_rule/1_pred_dataR2_netR2.png} & \includegraphics[width=\sz\columnwidth]{images/breaking_the_rule/1_pred_dataR3_netR2.png} & \includegraphics[width=\sz\columnwidth]{images/breaking_the_rule/1_pred_dataR2_netR3.png} & \includegraphics[width=\sz\columnwidth]{images/breaking_the_rule/1_pred_dataR3_netR3.png} \end{tabular} \caption{\textbf{Breaking the rule.} Predictions of the CompositeTasking network are shown before and after fine-tuning from the old to the new semantic rule. Because the rules are similar in certain ways, the model can extrapolate even before fine-tuning. } \label{fig:breaking_the_rule} \end{figure} \subsection{Random Compositions} Finally, we are interested in seeing what happens if our model is evaluated on tasks chosen independently for each pixel at random, denoted as $\mathcal{R}_{rnd}$. Table~\ref{table:rnd_task_map} indicates that our model trained on the mosaic rule $\mathcal{R}_{1r}$ does not perform very well on the random rule $\mathcal{R}_{rnd}$. We conjecture this is because the rule $\mathcal{R}_{1r}$ assigns tasks only to large connected regions during training, and no incentive is given to learn the ability to switch tasks with high spatial frequency. Training on the random rule $\mathcal{R}_{rnd}$ consequently improves the performance on $\mathcal{R}_{rnd}$ significantly (Table~\ref{table:rnd_task_map}). A visualization of these results is presented in Figure \ref{fig:rnd_rule_pred}. Interestingly, although only a single output is used, we can clearly observe a meaningful execution of different tasks all over the image. \begin{table}[t] \centering \captionsetup{font=small} \caption{{\textbf{Evaluating on randomly chosen tasks at each pixel location independently}. Notice the performance difference between training on $\mathcal{R}_{1r}$ vs. $\mathcal{R}_{rnd}$, and testing on $\mathcal{R}_{rnd}$.
The edge evaluation is omitted because its evaluation protocol is not suitable for this setting.}} \label{table:rnd_task_map} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c|} \hline \rowcolor{mygray} Trained on rule & Evaluated on rule & SemSeg $\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ \\ \hline \hline $\mathcal{R}_{1r}$ & $\mathcal{S}$ & 62.45 & 52.59 & 16.93 & 67.81 \\ \hline $\mathcal{R}_{1r}$ & $\mathcal{R}_{rnd}$ & 35.89 & 21.11 & 64.36 & 66.71 \\ \hline $\mathcal{R}_{rnd}$ & $\mathcal{R}_{rnd}$ & 59.58 & 52.28 & 17.16 & 67.60 \\ \hline $\mathcal{R}_{rnd}$ & $\mathcal{S}$ & 52.26 & 51.88 & 22.65 & 65.34 \\ \hline \end{tabular} } \end{table} \begin{figure}[t] \centering \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{2pt} \newcommand{\sz}{0.3} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccc} \textbf{Image} & \textbf{Task Palette} & \textbf{Prediction}\\ \includegraphics[width=\sz\columnwidth]{images_suplementary/rnd_rule/rnd_img.png} & \includegraphics[width=\sz\columnwidth]{images_suplementary/rnd_rule/rnd_TM_gray.png} & \includegraphics[width=\sz\columnwidth]{images_suplementary/rnd_rule/rnd_pred.png} \end{tabular} \caption{\textbf{Random tasks chosen at each pixel independently.} The prediction includes all five tasks of rule $\mathcal{R}_{rnd}$.} \label{fig:rnd_rule_pred} \end{figure} \section{Discussion} We feel that this is only the tip of the iceberg. In \cite{mittal2020emoticon}, for example, it is crucial to fuse different tasks like face detection, pose estimation, scene understanding and depth estimation to obtain state-of-the-art performance for the high-level task of emotion recognition. Many state-of-the-art pipelines for high-level tasks share that approach\footnote{Andrej Karpathy, senior director of AI at Tesla, \href{https://www.youtube.com/watch?v=oBklltKXtDE&ab_channel=PyTorch}{recently said} that they use a multi-tasking system with $48$ shared backbones and $1000$ different output task heads for their self-driving Autopilot}, and more often than not, different predicted tasks matter only in different spatial regions of the input image. We see great potential for CompositeTasking here. Its compactness in terms of memory and computational efficiency can sometimes determine whether a solution can be practically deployed at all. Moreover, wasting resources when there is no need for it is never welcome. While supervising the high-level predictions, one can also attempt to learn the rule of what is beneficial to predict where, even if such a rule is not known a priori. This can bring a new level of understanding of how very complex deep learning models make decisions on high-level tasks, by observing the requests that the network makes during inference. \section{Conclusion} In this work, we introduced the concept of CompositeTasking as the fusion of multiple, spatially distributed tasks, motivated by the frequent availability of only sparse labels across tasks, and the desire for a compact multi-tasking network. To this end, we studied a novel task conditioning model -- a single encoder-decoder network that performs multiple, spatially varying tasks at once. We showed that CompositeTasking offers efficient multi-task learning from only sparse supervision, with performance competitive to dense supervision and a multi-headed multi-tasking design.
Moreover, we demonstrated the unique flexibility of our approach with regard to interactive task editing and rule transformations. \section{Supplementary Structure} In Section \ref{sec:architecture} of this supplementary material, we discuss our proposed network's architecture in more detail and provide a diagram with all its details. In Section \ref{sec:rules} we provide more details about some Task Palette rules. In Section \ref{sec:visual_results} we provide additional visual results of our method. In Section \ref{sec:implementation_details_supp} we provide the implementation details and hyperparameters used to conduct the experiments. In Section \ref{sec:more_exp} we provide further analysis, mostly in-depth details about the hyperparameter choices. Finally, in Section \ref{sec:limitations} we discuss some limitations of our proposed method. \section{Architecture} \label{sec:architecture} \begin{figure*}[t] \centering \captionsetup{font=small} \includegraphics[width=0.95\linewidth]{figures/full_composit_net.pdf} \caption{{\textbf{CompositeTasking network architecture}. \textcolor{blue}{Blue} blocks process features unconditionally, while the \textcolor{green}{green} blocks process features in a spatial task-conditioning manner. The network is composed of an \textcolor{blue}{encoder} and a \textcolor{green}{decoder}. The \textcolor{green}{decoder} is composed only of task composition blocks and upsampling operations. The task composition block is depicted in the lower-right corner of this diagram, while the upsampling operations are denoted with the blue rectangle containing the letter ``U''. The task composition block takes in a Task Palette embedding $\mathcal{E}(\mathcal{T})$, which is obtained using the \textcolor{gold}{task representation block}, depicted in the lower-left part of this diagram.}} \label{fig:full_composite_network} \end{figure*} The full CompositeTasking network diagram is presented in Figure~\ref{fig:full_composite_network}. The encoder is the well-known ResNet$34$ network, chosen as a trade-off between performance and memory/computational demands. In the diagram, the encoder is divided into $5$ blocks: B$1$, \ldots, B$5$. Each block B$i$ is the part of the encoder that takes features of spatial size $(\frac{H}{2^{i-1}}, \frac{W}{2^{i-1}})$ and reduces the spatial size to $(\frac{H}{2^i}, \frac{W}{2^i})$. In the case of ResNet$34$, the spatial size is reduced with a strided convolution. The Task Palette embedding $\mathcal{E}(\mathcal{T})$ is calculated once for each forward pass, with the task representation block from Figure~\ref{fig:full_composite_network}. Since all the task composition blocks use the same embedding $\mathcal{E}(\mathcal{T})$, a spatial pyramid of $\mathcal{E}(\mathcal{T})$ is created so that it is available in all spatial sizes $(\frac{H}{2^i}, \frac{W}{2^i})$, where $i=0,1,...,5$. \section{Task Palette Rules} \label{sec:rules} \noindent\textbf{Random mosaic rule $\mathcal{R}_{1r}$}: The image is spatially divided into four rectangles by drawing a vertical and a horizontal line through a point $\mathbf{c}=(c_x, c_y)$. The point is chosen randomly as $c_x \sim U[\frac{W}{4}, 3\frac{W}{4}]$, $c_y \sim U[\frac{H}{4}, 3\frac{H}{4}]$. Each region $r \in \{a,b,c,d\}$ receives its task $k_r$.
In other words: \begin{equation} t_{yx} = \begin{cases} k_a,\, \text{for} \, x \leq c_x, y \leq c_y \\ k_b,\, \text{for} \, x > c_x, y \leq c_y \\ k_c,\, \text{for} \, x \leq c_x, y > c_y \\ k_d,\, \text{for} \, x > c_x, y > c_y \end{cases} \end{equation} The specific tasks $k_a, \ldots, k_d$, as well as $\mathbf{c}$, can be changed every time the Task Palette is requested from this rule (see the code sketch in Section~\ref{sec:visual_results}). \noindent\textbf{The rule $\mathcal{R}_2$ requests}: \begin{itemize} \item \textbf{Surface normals} on pixels which belong to the semantic classes: bottle, chair, dining table, potted plant, sofa and tv monitor. In other words, this rule requests surface normals on common \emph{household objects} from the dataset. \item \textbf{Human body parts} on pixels which belong to \emph{humans}. \item \textbf{Segmentation} on pixels which belong to birds, horses, cows, cats, dogs and sheep. In other words, this rule requests segmentation on \emph{animals} from the dataset. \item \textbf{Saliency} on aeroplanes, bicycles, boats, buses, cars, motorbikes and trains. In other words, this rule requests saliency on \emph{vehicles} from the dataset. \item \textbf{Edges} everywhere else. \end{itemize} \noindent\textbf{The rule $\mathcal{R}_3$ requests}: \begin{itemize} \item \textbf{Surface normals} on pixels which belong to chairs, dining tables, sofas, bicycles, buses, cars, motorbikes and trains. In other words, it requests surface normals on \emph{some vehicles} and \emph{some common household objects} from the dataset. \item \textbf{Human body parts} on pixels which belong to \emph{humans}. \item \textbf{Saliency} on aeroplanes, boats, birds, horses, cows, cats, dogs, sheep, bottles, potted plants and tv monitors. In other words, it requests saliency on \emph{other vehicles}, \emph{other household objects} and \emph{animals} from the dataset. \item \textbf{Edges} everywhere else. \end{itemize} The rule $\mathcal{R}_3$ is designed to be similar to rule $\mathcal{R}_2$. It uses all the same tasks as rule $\mathcal{R}_2$, except for semantic segmentation, which has been left out. The tasks of predicting edges and human body parts are requested at the exact same locations. The tasks of surface normals and saliency are requested on a few classes where they are already requested in rule $\mathcal{R}_2$, but mostly on different classes. \section{Additional Qualitative Results} \label{sec:visual_results} Here we provide more visual examples of the CompositeTasking network's capabilities. \noindent\textbf{Random mosaics.} Additional visual examples from the experiment presented in Figure~\ref{fig:mosaic_prediction} are presented in Figure~\ref{fig:mosaic_prediction_supp}. We can see that the network successfully predicts different tasks at different pixel locations, all during the same forward pass. It shows the ability to sharply switch the predicted task across spatial locations, as dictated by the Task Palette $\mathcal{T}$. \noindent\textbf{Single task predictions.} Additional visual examples from the experiments presented in Figure~\ref{fig:st_predictions} are presented in Figure~\ref{fig:st_predictions_supp}. Although the network is designed to be used in a CompositeTasking fashion, here we see that it can make all task predictions successfully. In this case, each image is encoded only once, followed by multiple task-specific decodings.
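All of the requests shown in these examples boil down to constructing a Task Palette, i.e., a per-pixel map of task ids. For concreteness, the following minimal sketch shows how the single-task rule $\mathcal{S}$ and the random mosaic rule $\mathcal{R}_{1r}$ from Section~\ref{sec:rules} could be instantiated. This is illustrative NumPy code with hypothetical task ids and function names, not our exact implementation.
\begin{verbatim}
import numpy as np

# Hypothetical integer task ids; any consistent encoding works.
EDGES, SEMSEG, PARTS, NORMALS, SALIENCY = 1, 2, 3, 4, 5
TASKS = [EDGES, SEMSEG, PARTS, NORMALS, SALIENCY]

def palette_single_task(h, w, task):
    """Rule S: request the same task at every pixel."""
    return np.full((h, w), task, dtype=np.int64)

def palette_random_mosaic(h, w, rng):
    """Rule R_1r: split the image into four rectangles around a
    random center c = (cx, cy) and give each one a random task."""
    cx = int(rng.integers(w // 4, 3 * w // 4 + 1))
    cy = int(rng.integers(h // 4, 3 * h // 4 + 1))
    k_a, k_b, k_c, k_d = rng.choice(TASKS, size=4)
    t = np.empty((h, w), dtype=np.int64)
    t[:cy, :cx] = k_a   # x <= cx, y <= cy
    t[:cy, cx:] = k_b   # x >  cx, y <= cy
    t[cy:, :cx] = k_c   # x <= cx, y >  cy
    t[cy:, cx:] = k_d   # x >  cx, y >  cy
    return t

palette = palette_random_mosaic(256, 256, np.random.default_rng(0))
\end{verbatim}
Any injective assignment of integer ids to tasks works here, since the ids are only used to look up the task codes $z_k$ described in Section~\ref{sec:implementation_details_supp}.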
\noindent\textbf{Learning what to do where.} Additional visual examples from the experiments presented in Figure~\ref{fig:task_map_prediction} are presented in Figure \ref{fig:task_map_prediction_supp}. We can see the network's ability to predict the Task Palette. Using the predicted palette, the model successfully makes CompositeTasking predictions. \noindent\textbf{Task Palette editing.} Additional visual examples from the experiments presented in Figure~\ref{fig:rule_editing} are presented in Figure~\ref{fig:rule_editing_supp}. We can see that a user can make any modification to a given Task Palette and obtain the respective CompositeTasking prediction. \noindent\textbf{Breaking the rule.} Additional visual examples from the experiments presented in Figure~\ref{fig:breaking_the_rule} are presented in Figure~\ref{fig:breaking_the_rule_supp}. We see the model's ability to transfer the knowledge of the rule it was trained on ($\mathcal{R}_2$) to a new rule ($\mathcal{R}_3$). The performance gets even better when fine-tuning is done on the new rule. In the $1$st row we can see that the model was able to predict relatively good surface normals for the motorbike, even though it was never supervised to predict normals on motorbikes. After some fine-tuning on the new rule, its predictions get even better. Notice how some almost flat surfaces on the vehicle (which have the same surface orientation) get more consistent normal predictions after fine-tuning. Since the new rule $\mathcal{R}_3$ does not contain the task of semantic segmentation anymore, that task is forgotten, as can be seen in the $3$rd row. In the $4$th row, the human body parts result obtained using the rule $\mathcal{R}_3$ is shown. \begin{figure*}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{-2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} \multirow{1}{*}[15pt]{\rotatebox{90}{\textbf{Image}}} \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/1_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/2_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/3_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/4_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/5_img.png} \\ \multirow{1}{*}[15pt]{\rotatebox{90}{\textbf{Pred}}} \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/1_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/2_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/3_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/4_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/5_pred.png}\\ \multirow{1}{*}[15pt]{\rotatebox{90}{\textbf{Image}}} \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/6_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/7_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/8_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/9_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/10_img.png} \\ \multirow{1}{*}[15pt]{\rotatebox{90}{\textbf{Pred}}} \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/6_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/7_pred.png} &
\includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/8_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/9_pred.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/mosaic_pred/10_pred.png} \end{tabular} \caption{\textbf{Random mosaic compositions.} CompositeTasking network predictions on requests with the $\mathcal{R}_{1r}$ rule.} \label{fig:mosaic_prediction_supp} \end{figure*} \begin{figure*}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{-2pt} \newcommand{\sz}{0.1575} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{cccccc} \textbf{Image} & \textbf{Edges} & \textbf{Segmentation} & \textbf{Human parts} & \textbf{Surface normals} & \textbf{Saliency} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/2_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/2_pred_edges.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/2_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/2_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/2_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/2_pred_saliency.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/4_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/4_pred_edges.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/4_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/4_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/4_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/4_pred_saliency.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/5_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/5_pred_edges.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/5_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/5_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/5_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/5_pred_saliency.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/7_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/7_pred_edges.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/7_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/7_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/7_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/7_pred_saliency.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/1_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/1_pred_edges.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/1_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/1_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/1_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/1_pred_saliency.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/6_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/6_pred_edges.png} & 
\includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/6_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/6_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/6_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/6_pred_saliency.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/3_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/3_pred_edges.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/3_pred_seg.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/3_pred_parts.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/3_pred_normals.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/st_pred/3_pred_saliency.png} \end{tabular} \caption{\textbf{Single task predictions.} Even though our model is made for CompositeTasking, it can also make predictions on requests with the $\mathcal{S}$ rule. } \label{fig:st_predictions_supp} \end{figure*} \begin{figure*}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{-2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} \textbf{Image} & \textbf{Task palette} & \textbf{Prediction} & \textbf{Predicted Palette} & \textbf{Prediction} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/4_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/4_tmap_CT_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/4_pred_CT.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/4_tmap_TM_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/4_pred_TM.png}\\ \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/3_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/3_tmap_CT_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/3_pred_CT.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/3_tmap_TM_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/3_pred_TM.png}\\ \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/2_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/2_tmap_CT_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/2_pred_CT.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/2_tmap_TM_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/2_pred_TM.png}\\ \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/1_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/1_tmap_CT_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/1_pred_CT.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/1_tmap_TM_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/predicting_task_map/1_pred_TM.png} \end{tabular} \caption{\textbf{Learning what to do where.} CompositeTasking network predictions with the learned Task Palette. A separate network is learned to predict the Task Palette with $75.04 \%$ mIoU. 
} \label{fig:task_map_prediction_supp} \end{figure*} \begin{figure*}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{-2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} \textbf{Image} & \textbf{Task Palette} & \textbf{Prediction} & \textbf{Edited Palette} & \textbf{Prediction} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/3_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/3_task_map_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/3_pred_task_map.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/3_task_edit_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/3_pred_task_edit.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/2_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/2_task_map_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/2_pred_task_map.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/2_task_edit_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/2_pred_task_edit.png} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/1_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/1_task_map_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/1_pred_task_map.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/1_task_edit_gray.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/task_edit/1_pred_task_edit.png} \end{tabular} \caption{\textbf{Task Palette editing.} The Task Palette has been modified manually. A prediction of the CompositeTasking network is shown before and after modification. 
} \label{fig:rule_editing_supp} \end{figure*} \begin{figure*}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{-2pt} \newcommand{\sz}{0.189} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{ccccc} \textbf{Image} & \textbf{Pred $R_{2}$} & \textbf{Pred $R_{3}$} & \textbf{Pred $R_{2}$} & \textbf{Pred $R_{3}$} \\ & \textbf{(Trained $R_{2}$)} & \textbf{(Trained $R_{2}$)} & \textbf{(Tuned $R_{2} \rightarrow R_{3}$)} & \textbf{(Tuned $R_{2} \rightarrow R_{3}$)} \\ \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/2_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/2_pred_dataR2_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/2_pred_dataR3_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/2_pred_dataR2_netR3.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/2_pred_dataR3_netR3.png}\\ \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/4_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/4_pred_dataR2_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/4_pred_dataR3_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/4_pred_dataR2_netR3.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/4_pred_dataR3_netR3.png}\\ \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/3_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/3_pred_dataR2_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/3_pred_dataR3_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/3_pred_dataR2_netR3.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/3_pred_dataR3_netR3.png}\\ \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/1_img.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/1_pred_dataR2_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/1_pred_dataR3_netR2.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/1_pred_dataR2_netR3.png} & \includegraphics[width=\sz\linewidth]{images_suplementary/breaking_the_rule/1_pred_dataR3_netR3.png} \end{tabular} \caption{\textbf{Breaking the rule.} Predictions of the CompositeTasking network are shown before and after fine-tuning from the old to the new semantic rule. Because the rules are partly similar, the model can already extrapolate even before fine-tuning. } \label{fig:breaking_the_rule_supp} \end{figure*} \section{Implementation Details} \label{sec:implementation_details_supp} The networks are trained on the train partition of the dataset and evaluated on the validation partition, in the absence of a test partition. The images are resized to $256\times 256$ both for training and validation. When using the rule $\mathcal{R}_{1r}$ for training, a new center $\mathbf{c}$ and new region-task assignments $k_r$ are sampled for every batch. Training data was augmented using random horizontal flipping, shifting, scaling, cropping, Gaussian noise, adaptive histogram equalization, brightness change, sharpening, blurring, contrast change, hue change and saturation change.
Flipping, scaling and shifting were not used when predicting surface normals. Each distinct task $k$ of the Task Palette $\mathcal{T}$ is embedded into a latent vector $\mathbf{w}$ using the embedding network depicted in Figure~\ref{fig:full_composite_network}. The embedding network is a fully connected neural network with $6$ layers and leaky ReLU activations. To obtain an embedding for a specific task, its code $z_k$ is given to the embedding network. The codes are vectors of length $20K$, where $K$ is the number of tasks. The code of task $1$ is constructed by putting ones in the first $20$ elements of the vector, and zeros everywhere else. For task $2$, the ones are at positions $21,\ldots,40$, and so on. This way, the code vectors of different tasks are orthogonal to each other. The code vectors $z_k$ are also $L_2$-normalized to have unit length (see the code sketch in Section~\ref{sec:more_exp}). Since the embedding $\mathcal{E}$ from Figure \ref{fig:e_embed_net} is needed in the decoder in different spatial sizes, a pyramid of downsized versions is prepared, along with the original $\mathcal{E}$. We use downsampling with bilinear interpolation. The losses for each task $k$ are computed independently for every pixel belonging to that task in $\mathcal{T}$, and averaged to produce $\mathcal{L}_k (\mathcal{O}, \mathcal{Y}_k, \mathcal{T})$ from (\ref{eq:total_loss}). For semantic segmentation and human body parts we use the focal loss \cite{lin2017focalloss} with $\gamma=2$, which was shown to be more effective in our setup than the standard cross-entropy loss. We use a weighted cross-entropy loss for edges. The weight for the positive class (edges) is $0.95$ while the weight for the negative class is $0.05$, because edges are usually present in a substantially smaller portion of the image pixels. We also use a weighted cross-entropy loss for saliency. The weights are calculated in each batch as the inverse frequency of elements that belong to the positive and negative class, normalized to sum up to $1.0$. For surface normals we use $1 - CS(o,y)$, where $CS$ is the cosine similarity. The chosen loss weights $\lambda_k$ are $3$ for semantic segmentation, $4$ for human parts, $50$ for edges, $8$ for saliency and $4$ for surface normals. This provided the best trade-off in our experiments. The exact $3$D class anchors $\mathbf{a}_i$ (see Eq.~\ref{eq:seg_loss}) used for the tasks of semantic segmentation and human body parts can be found in Tables~\ref{table:seg_class_anchors}~and~\ref{table:parts_class_anchors}, respectively. \begin{table}[t] \centering \tiny \captionsetup{font=small} \caption{{\textbf{Class anchors $\mathbf{a}_i$ for the task of semantic segmentation}. These anchors define which part of the $3$D output space corresponds to which class, based on the nearest neighbor principle.
They also define the RGB class colors used for all the presented results.}} \label{table:seg_class_anchors} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|} \hline \rowcolor{mygray} Class id & Class name & $x_1$ & $x_2$ & $x_3$ \\ \hline \hline 0 & Background & $0$ & $0$ & $0$ \\ \hline 1 & Cat & $0$ & $0$ & $64$ \\ \hline 2 & Aeroplane & $0$ & $0$ & $128$ \\ \hline 3 & Chair & $0$ & $0$ & $192$ \\ \hline 4 & Potted plant & $0$ & $64$ & $0$ \\ \hline 5 & Sheep & $0$ & $64$ & $128$ \\ \hline 6 & Bicycle & $0$ & $128$ & $0$ \\ \hline 7 & Cow & $0$ & $128$ & $64$ \\ \hline 8 & Bird & $0$ & $128$ & $128$ \\ \hline 9 & Dining table & $0$ & $128$ & $192$ \\ \hline 10 & Sofa & $0$ & $192$ & $0$ \\ \hline 11 & Train & $0$ & $192$ & $128$ \\ \hline 12 & Boat & $128$ & $0$ & $0$ \\ \hline 13 & Dog & $128$ & $0$ & $64$ \\ \hline 14 & Bottle & $128$ & $0$ & $128$ \\ \hline 15 & Horse & $128$ & $0$ & $192$ \\ \hline 16 & TV monitor & $128$ & $64$ & $0$ \\ \hline 17 & Bus & $128$ & $128$ & $0$ \\ \hline 18 & Motorbike & $128$ & $128$ & $64$ \\ \hline 19 & Car & $128$ & $128$ & $128$ \\ \hline 20 & Person & $128$ & $128$ & $192$ \\ \hline \end{tabular} } \end{table} \begin{table}[t] \centering \tiny \captionsetup{font=small} \caption{{\textbf{Class anchors $\mathbf{a}_i$ for the task of human body parts}. These anchors define which part of the $3$D output space corresponds to which class, based on the nearest neighbor principle. They also define the RGB class colors used for all the presented results.}} \label{table:parts_class_anchors} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|} \hline \rowcolor{mygray} Class id & Class name & $x_1$ & $x_2$ & $x_3$ \\ \hline \hline 0 & Background & $0$ & $0$ & $0$ \\ \hline 1 & Head & $0$ & $0$ & $255$ \\ \hline 2 & Neck \& torso & $0$ & $255$ & $0$ \\ \hline 3 & Upper arms & $255$ & $0$ & $0$ \\ \hline 4 & Lower arms & $0$ & $255$ & $255$ \\ \hline 5 & Upper legs & $255$ & $0$ & $255$ \\ \hline 6 & Lower legs & $255$ & $255$ & $0$ \\ \hline \end{tabular} } \end{table} The model was optimized using the Adam optimizer with a weight decay of $0.00001$. The encoder is pre-trained on ImageNet, while the weights of the decoder are randomly initialized. Because of this, the encoder is optimized with a smaller learning rate of $0.00001$, while the decoder is optimized with a learning rate of $0.001$. If the loss does not improve over the course of $12$ epochs, the learning rates are multiplied by a factor of $0.3$. We use leaky ReLU as the activation function. Since the PASCAL dataset only provides training and validation partitions, the networks are trained for $100$ epochs, which was shown to be enough for them to converge. They were trained on Nvidia GPUs with either $12$ GB or $16$ GB of memory. A batch size of $10$ was used. \section{Further Analysis} \label{sec:more_exp} In this section, we present analyses that shed more light on our choice of hyper-parameters. The average per-task drop metric from Section~\ref{sec:general_behaviour} is also used here. Instead of computing the metric with respect to a single-task baseline, it is now computed with respect to the performance of the model with the chosen hyper-parameters. All the following analyses are conducted by evaluating on the single-task rule $\mathcal{S}$.
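Before turning to the individual analyses, we give the code sketch referenced in Section~\ref{sec:implementation_details_supp} for the task codes $z_k$: ones in the $k$-th block of $20$ positions, zeros elsewhere, followed by $L_2$ normalization. This is illustrative NumPy code under the stated assumptions, not our exact implementation.
\begin{verbatim}
import numpy as np

def task_code(k, num_tasks, block=20):
    """Code z_k of length 20*K: ones in the k-th block of 20
    positions, zeros elsewhere, then L2-normalized. Codes of
    different tasks are orthogonal by construction."""
    z = np.zeros(block * num_tasks)
    z[block * (k - 1): block * k] = 1.0
    return z / np.linalg.norm(z)

# Sanity check for K = 5 tasks: the codes form an orthonormal set.
Z = np.stack([task_code(k, 5) for k in range(1, 6)])
assert np.allclose(Z @ Z.T, np.eye(5))
\end{verbatim}
The embedding network then maps each such code $z_k$ to the latent vector $\mathbf{w}$ consumed by the task composition blocks.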
In the task composition blocks we use one shared convolution that processes the Task Palette embedding $\mathcal{E}(\mathcal{T})$ before it is converted to the affine transformation parameters $\gamma$ and $\beta$. In Table \ref{table:n_conv_spade} we see that one shared convolution was the best choice. Using more shared convolutions yields slightly worse performance at an increased computational cost. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Number of shared convolutions that process the Task Palette embedding $\mathcal{E}(\mathcal{T})$ in the task composition block}. }} \label{table:n_conv_spade} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} \# of convolutions & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline $1$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $2$ & 68.60 & 61.93 & 51.49 & 17.15 & 67.56 & -0.92\% \\ \hline $3$ & 68.30 & 62.01 & 52.28 & 17.16 & 67.60 & -0.68\% \\ \hline \end{tabular} } \end{table} We used a regular convolution with kernel size $3 \times 3$ in the task composition blocks of the decoder. In Table \ref{table:kernel_size_dec} we see that a kernel size of $1 \times 1$ achieves worse performance than our choice. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Kernel size of regular convolutions inside the decoder's task composition blocks}. }} \label{table:kernel_size_dec} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} Kernel size & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline $3 \times 3$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $1 \times 1$ & 66.20 & 61.50 & 51.05 & 17.87 & 67.16 & -2.89\% \\ \hline \end{tabular} } \end{table} We choose to do image-to-image translation with $3$ output channels. In Table \ref{table:n_out_ch} we observe that using more output channels brings no gain in predictive performance. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Number of output channels of the model}. }} \label{table:n_out_ch} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} \# of output channels & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline $3$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $6$ & 68.50 & 62.35 & 52.99 & 17.08 & 67.65 & -0.13\% \\ \hline $9$ & 68.50 & 62.42 & 52.50 & 17.39 & 67.89 & -0.59\% \\ \hline \end{tabular} } \end{table} When a Task Palette embedding $\mathcal{E}(\mathcal{T})$ is produced, it is down-scaled to all spatial sizes that the decoder needs. In Table \ref{table:interpolation_type} we compare our choice of bi-linear interpolation to nearest neighbours. We choose bi-linear interpolation because it achieves slightly better results. This discrepancy in performance may be caused by neglecting the sampling theorem, i.e.
ignoring that downsampled pixels on task boundaries represent more than one distinct task~\cite{zhang2019making}. Alternatively, nearest-neighbour interpolation can be used to trade slightly worse performance for slightly faster computation: it does not need to take into account all the values of $\mathcal{E}(\mathcal{T})$, but only the elements closest to the new pixel centers. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Type of interpolation used during the Task Palette embedding $\mathcal{E}(\mathcal{T})$ downsampling}. }} \label{table:interpolation_type} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} Interpolation & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline Bi-linear (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline Nearest neighbour & 67.60 & 60.92 & 51.37 & 17.20 & 67.51 & -1.65\% \\ \hline \end{tabular} } \end{table} In Table \ref{table:n_fc_z_w} we analyze the number of fully connected layers in the task representation block. If the number is chosen too small or too large, problems appear during learning. Especially with a very deep block of $12$ layers, the network has a hard time converging. Using $6$ and $9$ layers gave very similar results, and $6$ was chosen simply because it is computationally less demanding. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Number of fully connected layers in the task representation block}. }} \label{table:n_fc_z_w} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} \# of layers & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline $3$ & 68.00 & 7.61 & 42.62 & 16.37 & 71.53 & -19.77\% \\ \hline $6$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $9$ & 68.70 & 62.29 & 52.18 & 16.95 & 67.43 & -0.31\% \\ \hline $12$ & 52.80 & 36.46 & 13.69 & 67.60 & 44.93 & -94.33\% \\ \hline \end{tabular} } \end{table} From Table \ref{table:dim_w} we can see that the model is quite robust to the number of channels of the Task Palette embedding $\mathcal{E}(\mathcal{T})$. We chose the number of channels that gave the best performance. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Number of channels of the Task Palette embedding $\mathcal{E}(\mathcal{T})$}.
}} \label{table:dim_w} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} \# of channels & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline $32$ & 68.70 & 62.14 & 51.86 & 16.95 & 67.46 & -0.47\% \\ \hline $64$ & 68.60 & 62.06 & 52.67 & 17.06 & 67.86 & -0.23\% \\ \hline $128$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $256$ & 68.40 & 60.36 & 51.53 & 17.11 & 67.35 & -1.48\% \\ \hline $512$ & 1.40 & 11.22 & 4.53 & 30.87 & 18.64 & -85.26\% \\ \hline \end{tabular} } \end{table} In Table \ref{table:enc_backbone} we analyze what happens if we use encoder backbones of different capacities. A clear trend is visible: as the backbone capacity grows (more learnable parameters and computation), the predictive performance of the model improves. This was expected, and it shows one possible way to improve the performance further. The choice of ResNet$34$ trades off performance against memory/computational demands. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Encoder backbone}. }} \label{table:enc_backbone} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} Backbone & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline ResNet$18$ & 67.40 & 60.40 & 49.25 & 17.25 & 66.87 & -2.93\% \\ \hline ResNet$34$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline ResNet$50$ & 69.40 & 62.34 & 53.81 & 16.64 & 68.19 & +1.12\% \\ \hline ResNet$101$ & 69.60 & 63.75 & 55.78 & 16.71 & 69.55 & +2.69\% \\ \hline \end{tabular} } \end{table} In Table \ref{table:normlas_loss} we analyze the choice of the loss function for surface normals. As we can see, using the cosine similarity achieves much better results than the popular $L_1$ loss. Moreover, when using the $L_1$ loss, the loss values over the epochs looked much more unstable than with the cosine similarity. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Loss choice for the task of estimating surface normals}. }} \label{table:normlas_loss} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} Loss function & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline cosine similarity (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $l_1$ distance & 67.60 & 58.91 & 49.20 & 19.77 & 67.53 & -6.15\% \\ \hline \end{tabular} } \end{table} In Table \ref{table:seg_loss_gamma} we analyze the choice of $\gamma$ in the focal loss used for segmentation and human parts. We choose $\gamma=2$ because it achieved the best performance. Using $\gamma=0$ is equivalent to the regular cross-entropy. Even though the regular cross-entropy came close in performance to the focal loss, models supervised with it often failed to converge. \begin{table}[t] \centering\small \captionsetup{font=small} \caption{{\textbf{Choice of parameter $\gamma$ from focal loss used in segmentation and human parts}.
}} \label{table:seg_loss_gamma} \vspace*{-2pt} \resizebox{\columnwidth}{!}{ \setlength\tabcolsep{6pt} \renewcommand\arraystretch{1} \begin{tabular}{|c|c|c|c|c|c||c|} \hline \rowcolor{mygray} $\gamma$ & Edge$\uparrow$& SemSeg$\uparrow$ & Parts$\uparrow$ & Normals$\downarrow$ & Sal$\uparrow$ & $\Delta_m\%$$\downarrow$ \\ \hline \hline $0$ & 68.40 & 61.39 & 51.93 & 17.29 & 67.40 & -1.19\% \\ \hline $1$ & 68.10 & 61.99 & 51.61 & 17.46 & 67.05 & -1.52\% \\ \hline $2$ (\textbf{chosen}) & 68.60 & 62.45 & 52.59 & 16.93 & 67.81 & 0.00\% \\ \hline $3$ & 68.70 & 61.60 & 51.39 & 17.06 & 67.88 & -0.83\% \\ \hline \end{tabular} } \end{table} \section{Limitations} \label{sec:limitations} \begin{figure}[t] \centering \captionsetup{font=small} \scriptsize \setlength{\tabcolsep}{1pt} \vspace*{-2pt} \newcommand{\sz}{0.48} \newcommand{\cgs}{\hspace{3pt}} \begin{tabular}{cc} \textbf{Image} & \textbf{Pred} \\ \includegraphics[width=\sz\columnwidth]{images_suplementary/mosaic_pred/fail1_img.png} & \includegraphics[width=\sz\columnwidth]{images_suplementary/mosaic_pred/fail1_pred.png} \\ \includegraphics[width=\sz\columnwidth]{images_suplementary/mosaic_pred/fail2_img.png} & \includegraphics[width=\sz\columnwidth]{images_suplementary/mosaic_pred/fail2_pred.png} \end{tabular} \caption{\textbf{Unsuccessful random mosaic composition predictions.}} \label{fig:fail_mosaic_prediction} \end{figure} The proposed network can predict only one task for each spatial location during one forward pass. In some cases, this could be a limitation. When making predictions on high-level tasks, modern approaches predict many different low-level tasks and fuse them in order to make the final prediction. Although it is unnecessary to predict every task everywhere, as laid out in our introduction, it could certainly be necessary sometimes to perform more than one task per spatial location. As an example, let us think about predicting where to turn the steering wheel in a self-driving application. When a person is in the frame, we might want to detect them as a person (semantic segmentation), but we would also like to know the body part locations and the depth at that location, so we can reason about the person's pose, intended motion, and how close the person is to the car. Even in the case of our one-task-per-location model, however, predictions are not always perfect. We now look at some examples where the model does not make good predictions. In the $1$st row of Figure \ref{fig:fail_mosaic_prediction}, we can see that the model fails to predict the body parts and the semantic segmentation correctly. This often happens when there are many people in the scene, especially in unusual circumstances like in this example. In the $2$nd row of Figure \ref{fig:fail_mosaic_prediction}, we can see that the model segmented the dog as a human, probably because a dog riding on a boat was unlikely in the dataset. In Figure~\ref{fig:fail_st_predictions}, we can see that the model is not able to predict the human parts successfully, because it recognises most of the human body as the motorcycle, as can be noticed in the semantic segmentation prediction. Finally, in Figure~\ref{fig:fail_task_map_prediction} we can see what happens if the Task Palette is not predicted correctly. The model is able to make a very good prediction given the appropriate palette, but when the palette is predicted wrongly, probably because of many overlapping objects, the model makes a bad prediction.
{ "timestamp": "2021-06-21T02:02:44", "yymm": "2012", "arxiv_id": "2012.09030", "language": "en", "url": "https://arxiv.org/abs/2012.09030" }
\section{Introduction} The development of high-intensity laser technologies has opened the door to laser-driven light sources with photon energies from \SI{e-2}{eV} (THz radiation) to \SI{e7}{eV} (gamma-rays)\cite{mourou2019nobel,Renner2019}. Photons with energies of $285\sim540$ \si{eV} ($\lambda \approx 2.3\sim4.4$ \si{nm}), also known as water-window (WW) x-rays, are ideal for imaging the inner structure of live cells in vivo with high spatial resolution, as they can penetrate water while being strongly absorbed by carbon atoms\cite{Anne2010,Kordel2020}. Laser-driven WW x-ray sources have the advantages of micrometer source sizes, ultrashort durations and high brightness\cite{albert2016}. Further improving the conversion efficiency from laser to WW x-rays is a key issue for the development of laser-driven laboratory WW x-ray microscopes, and one route towards it is the use of novel targets. Besides regular planar solid targets\cite{Sheil2017,Arai2018,John2019}, plenty of novel targets such as liquid jet targets\cite{Groot2003}, gas-puff targets\cite{Muller2013,Wachulak2013}, carbon nanotube targets\cite{Nishikawa2004} and low-density foam targets\cite{Chakravarty2013,Hara2018} have been studied. Some of them showed appealing conversion efficiencies from laser to WW x-rays. As one such kind of novel target, nanowire array (NWA) targets have been used in high-power laser experiments for the enhanced generation of hot electrons\cite{Moreau2020}, energetic ions\cite{Dozieres2019,Ebert2020}, x-ray emissions\cite{Purvis2013,Hollinger2017} and neutrons\cite{Curtis2018}. Unlike planar targets, where the laser pulse only heats the target surface, in NWA targets the laser can penetrate into the array and interact with the side walls of the nanowires. As a result of such volumetric heating, the laser absorption of NWA targets is very high\cite{Habara2016}. Irradiated by ultrashort laser pulses with intensities exceeding \SI{e18}{W/cm^2}, the nanowires can turn into high-energy-density plasmas with keV temperatures and near-solid densities. The relatively large volume of the plasma results in a long hydrodynamic cooling time $T_h$. Here $T_h=L/C_s$, where $L$ is the plasma size and $C_s$ is the acoustic velocity\cite{Hollinger2017}. As a result, the radiative cooling time $T_r$ is shorter than $T_h$, which means a large portion of the plasma energy can be converted to x-ray emissions. A record conversion efficiency of 20\% from laser to x-rays with photon energies exceeding \SI{1}{keV} has been reported using Au NWA targets\cite{Hollinger2017}. The wavelength of the x-rays from NWA targets is determined by the target material. Most NWA targets in previous studies were synthesized by electro-depositing metal atoms into anodized alumina oxide (AAO) membranes. The wavelengths of the emitted x-rays from such metal NWA targets are mainly in the range of $0.1-0.8$ \si{nm}. For example, Purvis et al. measured x-rays in the wavelength range of $0.4-0.6$ \si{nm} from Au NWA targets\cite{Purvis2013}. X-ray emissions around \SI{0.15}{nm} were also reported from Ni and Co NWA targets\cite{Purvis2013,Bargsten2017}. For Si NWA targets, the wavelengths of the emitted x-rays extended to $0.6-0.8$ \si{nm}\cite{Samsonova2019,Ebert2020}. None of the above x-ray emissions lie in the WW region. In this paper, we utilize polyethylene (PE) NWA targets prepared by the heat-extrusion method instead of the electro-deposition method to generate WW x-rays around \SI{3.5}{nm}.
Experimental results show that the yield of WW x-rays from the PE NWA targets is more than one order of magnitude higher than that from planar targets. The influence of the nanowires' lengths on the x-ray emissions is experimentally investigated as well. We also perform a series of simulations to study the emission processes of WW x-rays from the NWA targets. \section{Experimental set-up} The experiments were performed at Peking University utilizing the CLAPA laser system\cite{Geng2018}. As depicted in Fig. \ref{f1}a), a \SI{30}{fs} laser pulse with a central wavelength of \SI{800}{nm} was focused on the targets at a \SI{90}{\degree} incidence angle by an f/3 off-axis parabolic mirror (OAP). The CLAPA laser facility contains a cross-polarized wave (XPW) system to achieve a nanosecond contrast of $10^{10}$. To prevent the expansion of the nanowires caused by prepulses and amplified spontaneous emission (ASE), we also employed a plasma mirror system to improve the laser contrast to $10^{-12}$ at \SI{40}{ps} before the main pulse. The on-target laser energy was about \SI{1.0}{J}. Considering the full width at half maximum (FWHM) spot size of \SI{4.0}{\micro m} $\times$ \SI{4.5}{\micro m} with an energy concentration ratio of 30\%, the peak intensity of the laser was \SI{4e19}{W/cm^2}. The corresponding maximum electric field was $E=$ \SI{1.6e13}{V/m}, which is strong enough to realize field ionization of carbon up to C$^{6+}$\cite{ADK1986}. \begin{figure}[htbp] \centering \includegraphics[width=11cm]{f1.jpg} \caption{The experimental set-up. a) A schematic diagram of the laser and the NWA targets. The laser incidence angle is \SI{90}{\degree}. A flat-field grazing-incidence spectrometer is placed at \SI{45}{\degree} to the laser axis in the reflection direction to measure the x-ray emissions. b) and c) SEM images of the PE NWA targets with different lengths of nanowires.}\label{f1} \end{figure} Two kinds of PE NWA targets with nanowire diameters of \SI{200}{nm} and \SI{500}{nm} were used in the experiments. The nanowires' lengths varied from \SI{2}{\micro m} to \SI{10}{\micro m}. Under the nanowires, \SI{0.2}{mm} thick PE foils were employed as supporting substrates. Planar PE foils with a thickness of \SI{0.2}{mm} were also shot for comparison. The NWA targets were prepared by the heat-extrusion method with the following steps. First, we attached a PE sheet to a suitable AAO template composed of uniform parallel pores. Second, the PE-attached AAO templates were heated and mechanically compressed in vacuum so that the PE molecules were extruded into the pores of the template. Third, the NWA targets were obtained by dissolving the AAO templates in NaOH solutions. The scanning electron microscope (SEM) images of the PE NWA targets are displayed in Fig. \ref{f1}b) and c). As one can see, most nanowires stand straight and are isolated from each other. The diameters (D), interval spaces (S) and lengths (L) of the nanowires, as depicted in Fig. \ref{f1}a), are determined by the templates. In order to precisely place the targets at the laser focal spot, a motorized target positioning system with an accuracy of \SI{2}{\micro m} was employed\cite{Shou2019}. The x-ray emissions from the NWA targets and the planar targets were measured by a flat-field grazing-incidence spectrometer placed at \SI{45}{\degree} to the laser axis in the reflection direction. A 1200 lines/mm laminar-type soft x-ray diffraction grating (Shimadzu 03-005) was employed in the spectrometer.
The WW x-ray spectra were recorded on-line using an x-ray charge-coupled device (CCD) camera (Andor DO940P-DN). The distance from the targets to the \SI{55}{\micro m} wide entrance slit of the spectrometer was \SI{0.98}{m}. Before the entrance slit, a magnet with a length of \SI{5}{cm} and a field strength of \SI{0.67}{T} was placed to deflect the ions and electrons generated in the laser-plasma interaction. \section{Results} Figures \ref{f2}a) - c) display typical raw data of the WW soft x-ray emissions measured by the flat-field spectrometer from the planar targets and the NWA targets with nanowire diameters of \SI{200}{nm} and \SI{500}{nm}, respectively. Compared to the planar targets, a pronounced enhancement of the x-ray emissions from the NWA targets can be observed. Taking into account the acceptance angle of the spectrometer, the diffraction efficiency of the flat-field grating\cite{Yamazaki1999,Dong2012} and the quantum efficiency of the x-ray CCD\cite{Andor}, we can obtain the x-ray spectra with the absolute spectral response as shown in Fig. \ref{f2}d). Here a 4$\pi$ solid angle of the x-ray radiation is assumed. The spectra of WW x-rays in the wavelength range of \SIrange{2.3}{3.1}{nm} are not included due to the extreme aberration of the spectrometer. As one can see, an enhancement of more than one order of magnitude in the x-ray emissions from both NWA targets over the planar targets is demonstrated. The 500-nm-diameter NWA targets show stronger emissions than the 200-nm-diameter ones. \begin{figure}[htbp] \centering \includegraphics[width=11cm]{f2.jpg} \caption{The WW x-ray spectra. a), b) and c) The raw data recorded by the x-ray CCD from planar targets, 200-nm-diameter NWA targets, and 500-nm-diameter NWA targets, respectively. The nanowires' lengths of the targets in b) and c) are both \SI{5}{\micro m}. d) The corresponding x-ray spectra of a) - c).}\label{f2} \end{figure} The measured WW x-rays are composed of continuum emissions and line emissions from carbon ions. The continuum emissions originate from the recombination of free electrons with carbon ions and from bremsstrahlung radiation. At the laser intensity of \SI{4e19}{W/cm^2}, MeV relativistic electrons can be generated from the nanowires' surfaces and deposit their energy in nearby nanowires. The bremsstrahlung radiation is therefore significantly higher than at lower intensities\cite{Sheil2017}, contributing to a stronger continuum emission. The line emissions riding on the strong continuum emission have central wavelengths of \SI{3.37}{nm} and \SI{4.03}{nm}, corresponding to the Ly$_\alpha$ and He$_\alpha$ emissions from C$^{5+}$ and C$^{4+}$, respectively. The K$_\alpha$ emission at \SI{4.48}{nm} from neutral carbon is observed neither from the planar nor from the NWA targets. This indicates that most emissions come from ionized carbon ions instead of excited neutral atoms. We speculate that the relativistic driving laser leads to a quick field ionization of the carbon atoms to C$^{4+}$ - C$^{6+}$ at the targets' surface, resulting in the strong Ly$_\alpha$ and He$_\alpha$ emissions as well as the suppression of the K$_\alpha$ emission. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{f3.jpg} \caption{Dependence of the WW x-ray yields on the lengths of nanowires for the 200-nm-diameter and 500-nm-diameter NWA targets. }\label{f3} \end{figure} We also studied the dependence of the WW x-ray yields on the nanowires' diameters and lengths.
The yields are calculated by integrating WW x-ray spectra like the one in Fig. \ref{f2}d). As shown in Fig. \ref{f3}, for targets with the same nanowire length, a slight enhancement of the WW x-ray emissions can be observed for the 500-nm-diameter NWA targets compared to the 200-nm-diameter ones. The yields of WW x-rays rise with increasing nanowire length. The conversion efficiency from laser to WW x-rays reaches 0.5\%/sr, or 6\% assuming a 4$\pi$ solid angle, for the optimal 10-$\rm \mu m$-long NWA targets. When the lengths of the nanowires exceed \SI{5}{\micro m}, the enhancement becomes less prominent. This tendency can be explained by the limited penetration depth of the driving laser in the NWA targets, which was reported as \SI{6}{\micro m} in previous work\cite{Bargsten2017}. Increasing the lengths of the nanowires from \SI{2}{\micro m} to \SI{5}{\micro m} linearly enlarges the interaction volume, which eventually results in a higher laser absorption as well as stronger x-ray emissions. This effect becomes insignificant once the nanowires are longer than the penetration depth of the laser. \section{Discussion} To investigate the electron heating and WW x-ray emission processes of the NWA targets, we first performed two-dimensional (2D) particle-in-cell (PIC) simulations utilizing the code EPOCH\cite{Arber2015}. The simulation box is \SI{6}{\micro m} $\times$ \SI{12}{\micro m} with a spatial resolution of \SI{2.5}{nm} $\times$ \SI{2.5}{nm}. A Gaussian laser pulse travels along the x axis from the left side of the simulation box with a central wavelength of \SI{800}{nm}. Its normalized vector potential, focal spot size and duration are $a_0=4$, \SI{4}{\micro m} and \SI{30}{fs}, respectively, the same as in the experiments. Here the normalized vector potential is $a_0=eE/m_ec\omega$, where $m_e$, $e$, $E$, $\omega$, and $c$ are the electron mass, the elementary charge, the peak electric field strength, the laser angular frequency, and the speed of light in vacuum, respectively\cite{macchi2013ion}. The NWA target is also set according to the experimental parameters. The PE plasma is composed of \SI{140}{n_c} electrons, \SI{20}{n_c} protons and \SI{20}{n_c} fully ionized carbon ions. Here $\textrm{n}_\textrm{c}=$ \SI{1.7e21}{cm^{-3}} is the critical density used for normalization. The diameter, interval space and length of the nanowires are set as \SI{500}{nm}, \SI{800}{nm} and \SI{3}{\micro m}, respectively. The nanowires are in the x region of \SIrange{1}{4}{\micro m}. A \SI{1}{\micro m} thick PE plasma base is also set in the x region of \SIrange{4}{5}{\micro m}. \begin{figure}[htbp] \centering \includegraphics[width=11cm]{f4.jpg} \caption{The plasma heating and WW x-ray emission processes. PE NWA targets with the parameters of D = \SI{500}{nm}, S = \SI{800}{nm} and L = \SI{3}{\micro m} are simulated. a) The spatial distribution of the electric field $E_y$ at the simulation time of $t=$ \SI{32}{fs}. $E_y$ is normalized by $E_0=m_ec\omega/e=$ \SI{4e12}{V/m}. b) The ratio of the electron energy $E_{\textrm{electron}}$ to the laser energy $E_{\textrm{laser}}$ in the simulation box for the planar and NWA targets. c) The distribution of the electron temperature at $t=$ \SI{100}{fs}. d) The energy spectrum of the electrons at $t=$ \SI{100}{fs}. The fitted bulk electron temperature is \SI{16}{keV} as shown by the red dashed line. e) Dependence of the bound-bound emissivity and mean charge state on the temperatures of carbon plasmas.
\begin{figure}[htbp] \centering \includegraphics[width=11cm]{f4.jpg} \caption{The plasma heating and WW x-ray emission processes. PE NWA targets with the parameters of D = \SI{500}{nm}, S = \SI{800}{nm} and L = \SI{3}{\micro m} are simulated. a) The spatial distribution of the electric field $E_y$ at the simulation time of $t=$ \SI{32}{fs}. $E_y$ is normalized by $E_0=m_ec\omega/e=$ \SI{4e12}{V/m}. b) The ratio of the electron energy $E_{\textrm{electron}}$ to the laser energy $E_{\textrm{laser}}$ in the simulation box for the planar and NWA targets. c) The distribution of the electron temperature at $t=$ \SI{100}{fs}. d) The energy spectrum of the electrons at $t=$ \SI{100}{fs}. The fitted bulk electron temperature is \SI{16}{keV} as shown by the red dashed line. e) Dependence of the bound-bound emissivity and mean charge state on the temperature of carbon plasmas. Two mass densities of \SI{0.4}{g/cm^3} (solid lines) and \SI{0.004}{g/cm^3} (dashed lines) are calculated with the atomic kinetic code FLYCHK. f) The measured x-ray spectrum as well as the simulated one according to the estimated temporal evolution of the plasmas.}\label{f4} \end{figure} Figure \ref{f4}a) displays the spatial distribution of the electric field $E_y$ at the simulation time of $t=$ \SI{32}{fs}. The strong $E_y$ among the nanowires indicates that the driving laser can penetrate the NWA targets and interact with the side walls of the nanowires. As a result, the energy transferred to the electrons in the NWA targets is enhanced by a factor of 4 compared to the planar targets, as shown in Fig. \ref{f4}b). Another remarkable feature is that $E_y$ decays along the axis of the nanowires from \SIrange{1}{4}{\micro m} in Fig. \ref{f4}a), indicating a limited penetration depth of the driving laser, consistent with the results in Fig. \ref{f3}. The penetration of the laser into the NWA targets results in volumetric heating of the targets, as depicted in Fig. \ref{f4}c): after the laser interaction, the distribution of the electron temperature at the simulation time of \SI{100}{fs} indicates a volumetrically heated plasma. The electron temperature is slightly higher in the centre of the laser spot due to the stronger laser field. We integrate the data in Fig. \ref{f4}c) to obtain the energy spectrum of the electrons shown in Fig. \ref{f4}d). Since the number of high-energy electrons in the tail of the spectrum is very small compared to that of the bulk electrons, we focus on the low-energy but far more numerous bulk electrons\cite{Sherlock2009, Rosmej2018}. The temperature of the bulk electrons in this simulation is estimated as \SI{16}{keV} at $t=$ \SI{100}{fs}, as shown by the red dashed line in Fig. \ref{f4}d). The PIC code can successfully simulate the laser heating process in the NWA targets; however, the emission of WW x-ray photons is not included in state-of-the-art PIC simulations. We therefore utilized the atomic kinetic code FLYCHK\cite{Chung2005} to investigate the emission process of the laser-heated plasmas. Calculations of the x-ray emission from steady-state carbon plasmas were first performed to study the dependence of the bound-bound emissivity and the mean charge state of carbon on the plasma temperature. As displayed in Fig. \ref{f4}e), the line emissions of carbon ions mainly occur at temperatures of a few hundred eV. At higher temperatures, most of the carbon atoms are ionized to C$^{6+}$, which suppresses the probability of line emission from bound electrons. Figure \ref{f4}e) also indicates that, besides the efficient laser absorption, the volumetric heating of solid-density plasmas is another reason for the high conversion efficiency of the NWA targets. This is reflected by the comparison of the bound-bound emissivity between plasmas with mass densities of \SI{0.4}{g/cm^3} (the mean density of the NWA targets) and \SI{0.004}{g/cm^3}: compared to the low-density plasma, the high-density NWA plasma has a larger emissivity and a shorter radiative cooling time. We also performed time-dependent simulations using FLYCHK to tentatively compute the WW x-ray spectrum by taking the cooling process of the plasma into account. The key point is to obtain the temporal evolution of the plasma temperature.
We cannot obtain it directly from PIC simulations, because simulating the whole process while avoiding numerical heating in the near-solid-density plasmas would require prohibitive computing resources. We therefore use a semiquantitative model to estimate the evolution of the plasma temperature. The PIC simulation indicates that a hot plasma with a diameter of \SI{10}{\micro m} and a bulk electron temperature of \SI{16}{keV} is formed at $t=$ \SI{100}{fs}; such a plasma, however, is too hot to emit WW x-rays efficiently. Thereafter, a heat dissipation process driven by the keV electrons takes place and results in a larger but colder plasma. This process is quite complicated due to the magnetic fields in the plasma\cite{Yang2021}. For simplicity, we only consider the evolution after the plasma temperature drops below $T_0=$ \SI{600}{eV}, when efficient WW x-ray emission sets in according to Fig. \ref{f4}e). Assuming that $TD^3$ is conserved during the expansion, the corresponding plasma diameter $D_0$ can be estimated as $(16000/600)^{1/3}\times10\approx$ \SI{30}{\micro m}. Due to the high electron collision frequencies in such a low-temperature plasma, a hydrodynamic cooling process dominates\cite{Purvis2013,Rolles2018}. Assuming that the diameter of the hot plasma increases with the cooling time at the electron acoustic velocity $C_s(t)=\sqrt{2T(t)/m_e}$, which is determined by the instantaneous temperature of the plasma, we have $T_0D_0^3=T(t)D^3(t)$, where $D(t)=D_0+\int_0^t C_s(\tau)d\tau$ is the instantaneous diameter of the hot plasma. With the relationship $dD(t)/dt=C_s(t)$, the derivative of $T(t)$ can be expressed as \begin{equation}\label{e1} \frac{dT(t)}{dt}=-\frac{3C_s(t)T(t)}{D(t)}=-\frac{3\sqrt{2/m_e}}{D_0T_0^{1/3}}T(t)^{11/6}. \end{equation} Finally, the temporal evolution of the plasma temperature can be derived as \begin{equation}\label{e2} T(t)=\left(T_0^{-5/6}+\frac{5}{2D_0}\frac{\sqrt{2/m_e}}{T_0^{1/3}}t\right)^{-6/5}. \end{equation} To include the effect of opacity, we also assume that the emitting area of the plasma surface increases as $D^2(t)$. Using the obtained plasma temperature and the size of the emitting area as input parameters, the time-dependent emissivity can be calculated by FLYCHK. Fixing the density of the NWA plasma at \SI{0.4}{g/cm^3}, the WW spectrum is then obtained by integrating the emissivity over time, as depicted in Fig. \ref{f4}f). The measured and simulated spectra agree quite well except for the continuum part in the wavelength range of \SIrange{3.1}{3.8}{nm}, where the calculated spectrum is lower than the measured one. This difference may be due to an underestimation of the bremsstrahlung radiation: we only include the x-ray emission in FLYCHK after the plasma has cooled below \SI{600}{eV}, whereas the electrons emit much more bremsstrahlung x-rays while the plasma is hotter.
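The closed form of Eq. \eqref{e2} follows from Eq. \eqref{e1} by separation of variables and can be checked numerically. The sketch below is illustrative only (eV, metre and second units, with $m_ec^2=\SI{511}{keV}$): it integrates Eq. \eqref{e1} together with $dD/dt=C_s$ by forward Euler and compares the result with Eq. \eqref{e2}; the two printed values agree to within a few per cent.

\begin{verbatim}
import numpy as np

me_c2, c = 511e3, 3e8  # electron rest energy (eV), speed of light (m/s)
T0, D0 = 600.0, 30e-6  # initial temperature (eV) and diameter (m)

def Cs(T):
    # electron acoustic velocity sqrt(2T/me) = c*sqrt(2T/(me c^2))
    return c * np.sqrt(2.0 * T / me_c2)

def T_analytic(t):
    # closed form of Eq. (2)
    K = 2.5 / D0 * c * np.sqrt(2.0 / me_c2) / T0 ** (1.0 / 3.0)
    return (T0 ** (-5.0 / 6.0) + K * t) ** (-1.2)

# forward-Euler integration of dT/dt = -3*Cs(T)*T/D with dD/dt = Cs(T)
dt, steps = 1e-16, 100_000
T, D = T0, D0
for _ in range(steps):
    T, D = T - 3.0 * Cs(T) * T / D * dt, D + Cs(T) * dt

print(T, T_analytic(steps * dt))  # a few tens of eV after 10 ps
\end{verbatim}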
\section{Conclusion} In conclusion, we report on WW soft x-ray emissions from PE NWA targets irradiated by femtosecond laser pulses at relativistic intensity. The optimal conversion efficiency is measured as 0.5\%/sr, more than one order of magnitude higher than that of the planar targets. The yields of WW x-rays increase with the length of the nanowires in the targets. We also performed simulations indicating that the high-efficiency generation of WW x-rays in NWA targets can be attributed to the high laser absorption and the short radiative cooling time in the near-solid-density plasmas. Such a high-efficiency light source based on NWA targets is well suited for laser-driven laboratory WW x-ray microscopes. \section*{Funding} National Grand Instrument Project (2019YFF01014402), NSFC innovation group project (11921006), and National Natural Science Foundation of China (Grant Nos. 11775010, 11535001, 61631001, 11905286). \section*{Acknowledgments} The PIC simulations were carried out on the High-Performance Computing Platform of Peking University. \section*{Disclosures} The authors declare no conflict of interest.
\section{Introduction} An \emph{$r$-edge-colouring} of a graph (or hypergraph) is a colouring of its edges with~$r$ colours. A \emph{monochromatic subgraph} of an $r$-edge-coloured graph is one in which all the edges have the same colour. Lehel conjectured that every $2$-edge-colouring of the complete graph on~$n$ vertices admits a partition of the vertex set into two monochromatic cycles of distinct colours, where the empty set, a single vertex and a single edge are considered to be \emph{degenerate cycles}. This conjecture was proved for large~$n$ by Łuczak, Rödl and Szemer{\'e}di~\cite{Luczak1998} using Szemer\'edi's Regularity Lemma. Allen~\cite{Allen2008} improved the bound on~$n$ by giving a different proof. Finally Bessy and Thomass\'e~\cite{Bessy2010} proved Lehel's conjecture for all $n \geq 1$. Similar problems have also been considered for colourings with a general number of colours. In particular, a lot of attention has been given to the problem of determining the number of monochromatic cycles that are needed to partition an $r$-edge-coloured complete graph. Erd\H{o}s, Gy\'arf\'as and Pyber~\cite{Erdos1991} proved that every $r$-edge-coloured complete graph can be partitioned into $O(r^2\log r)$ monochromatic cycles and conjectured that~$r$ monochromatic cycles would suffice. Their result was improved by Gy\'arf\'as, Ruszink\'o, S\'ark\"ozy and Szemer\'edi~\cite{Gyarfas2006} who showed that $O(r\log r)$ monochromatic cycles are enough. However, Pokrovskiy~\cite{Pokrovskiy2014} disproved the conjecture and proposed the weaker conjecture that each $r$-edge-coloured complete graph contains~$r$ monochromatic vertex-disjoint cycles that together cover all but at most~$c_r$ of the vertices, where~$c_r$ is a constant depending only on~$r$. Pokrovskiy~\cite{Pokrovskiy2016} subsequently proved that we can take $c_3 \leq 43000$ for large enough $n$. Recently, generalisations of Lehel's conjecture to hypergraphs have also been considered. For any positive integer~$k$, a \emph{$k$-uniform hypergraph}, or \emph{$k$-graph},~$H$ is an ordered pair of sets~$(V(H),E(H))$ such that $E(H) \subseteq \binom{V(H)}{k}$, where $\binom{S}{k}$ is the set of all subsets of~$S$ of size~$k$. We abuse notation by identifying the $k$-graph~$H$ with its edge set~$E(H)$. Hence by~$\s{H}$ we mean the number of edges of~$H$. Let $K_n^{(k)}$ be the complete $k$-graph on~$n$ vertices. In $k$-graphs there are several notions of cycle. For integers $1 \leq \ell < k < n$, a $k$-graph~$C$ on~$n$ vertices is called an \emph{$\ell$-cycle} if there is an ordering of its vertices $V(C) = \{v_0, \dots, v_{n-1}\}$ such that $E(C) = \{ \{v_{i(k-\ell)},\dots, v_{i(k-\ell)+k-1}\}\colon 0 \leq i \leq n/(k-\ell) -1\}$, where the indices are taken modulo~$n$. That is, an $\ell$-cycle is a $k$-graph with a cyclic ordering of its vertices such that its edges are sets of~$k$ consecutive vertices and consecutive edges share exactly~$\ell$ vertices. (Note that this requires $k-\ell$ to divide~$n$.) A single edge or any set of fewer than $k$ vertices is considered to be a \emph{degenerate $\ell$-cycle}. Further, $1$-cycles and $(k-1)$-cycles are called \emph{loose cycles} and \emph{tight cycles}, respectively.
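To make the definition concrete, the following short script (an illustration added for the reader, not part of the formal development; the helper name \texttt{ell\_cycle\_edges} is ours) enumerates the edges of an $\ell$-cycle directly from the definition.

\begin{verbatim}
def ell_cycle_edges(n, k, ell):
    """Edges of the k-uniform ell-cycle on vertices 0, ..., n-1."""
    assert 1 <= ell < k < n and n % (k - ell) == 0
    step = k - ell
    return [frozenset((i * step + j) % n for j in range(k))
            for i in range(n // step)]

# A tight 3-uniform cycle (ell = k - 1 = 2) on 5 vertices:
print(sorted(sorted(e) for e in ell_cycle_edges(5, 3, 2)))
# [[0, 1, 2], [0, 1, 4], [0, 3, 4], [1, 2, 3], [2, 3, 4]]
\end{verbatim}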
For loose cycles, Gy\'arf\'as and S\'ark\"ozy~\cite{gyarfas2013} showed that every $r$-edge-coloured complete $k$-graph on~$n$ vertices can be partitioned into~$c(k,r)$ monochromatic loose cycles. S\'ark\"ozy~\cite{Sarkozy2014} showed that, for~$n$ sufficiently large, $50kr\log(kr)$ loose cycles are enough. For tight cycles, Bustamante, Corsten, Frankl, Pokrovskiy and Skokan~\cite{Bustamante2020} showed that every $r$-edge-coloured complete $k$-graph can be partitioned into~$C(k,r)$ monochromatic tight cycles. See \cite{Gyarfas2016} for a survey on other results about monochromatic cycle partitions and related problems. In this paper, we investigate monochromatic tight cycle partitions in $2$-edge-coloured complete $k$-graphs on $n$ vertices. When $k=3$, Bustamante, H\`an and Stein \cite{Bustamante2017} showed that there exist two vertex-disjoint monochromatic tight cycles of distinct colours covering all but at most~$o(n)$ of the vertices. Recently, Garbe, Mycroft, Lang, Lo and Sanhueza-Matamala \cite{Garbe2019} proved that two monochromatic tight cycles suffice to cover all vertices; however, these cycles may not be of distinct colours. First we show that, for all $k \geq 3$, there are arbitrarily large $2$-edge-coloured complete $k$-graphs that cannot be partitioned into two monochromatic tight cycles of distinct colours. \begin{prop} \label{prop:extremal} For all $k \geq 3$ and $m \geq k+1$, there exists a $2$-edge-colouring of $K_{k(m+1)+1}^{(k)}$ that does not admit a partition into two tight cycles of distinct colours. \end{prop} It is natural to ask whether we can cover almost all vertices of a $2$-edge-coloured complete $k$-graph with two vertex-disjoint monochromatic tight cycles of distinct colours. The case $k=3$ was affirmed in~\cite{Bustamante2017}. Here, we show that this is also true when $k=4$. \begin{thm} \label{thm:1} For every $\varepsilon > 0$, there exists an integer~$n_1$ such that, for all $n \geq n_1$, every $2$-edge-coloured complete $4$-graph on $n$ vertices contains two vertex-disjoint monochromatic tight cycles of distinct colours covering all but at most~$\varepsilon n$ of the vertices. \end{thm} When $k=5$, we prove a weaker result: four monochromatic tight cycles are sufficient to cover almost all vertices. \begin{thm} \label{thm:2} For every $\varepsilon > 0$, there exists an integer $n_1$ such that, for all $n \geq n_1$, every $2$-edge-coloured complete $5$-graph on $n$ vertices contains four vertex-disjoint monochromatic tight cycles covering all but at most~$\varepsilon n$ of the vertices. \end{thm} To prove \cref{thm:1,thm:2}, we use the \emph{connected matching method}, which is often credited to {\L}uczak~\cite{Luczak1999}. We now sketch the proof of~\cref{thm:1}. Consider a $2$-edge-coloured complete $4$-graph $K_n^{(4)}$ on $n$ vertices. We start by applying the Hypergraph Regularity Lemma to $K_n^{(4)}$, more precisely the Regular Slice Lemma of Allen, B\"ottcher, Cooley and Mycroft~\cite{Allen2017} (see \cref{lem:regular slice}). We obtain a $2$-edge-coloured reduced graph~$\mathcal{R}$ that is almost complete. A monochromatic matching in a $k$-graph is a set of vertex-disjoint edges of the same colour. We say that it is tightly connected if, for any two edges~$f$ and~$f'$, there exists a sequence of edges $e_1,\dots,e_t$ of the same colour such that~$e_1 = f$,~$e_t=f'$ and $\s{e_i \cap e_{i+1}} = k-1$ for all $i \in [t-1]$. By~\cref{cor:matchings_to_cycles}, it suffices to find two vertex-disjoint monochromatic tightly connected matchings of distinct colours in the reduced graph $\mathcal{R}$. The main challenge is to identify the `tightly connected components' (see \cref{section:preliminaries} for the formal definition) in which we will find the matchings.
To do so, we introduce the concept of a `blueprint', which is a $2$-edge-coloured $2$-graph with the same vertex set as~$\mathcal{R}$. The key property is that connected components in the blueprint correspond to tightly connected components in~$\mathcal{R}$. We conclude the introduction by outlining the structure of the paper. In \cref{section:preliminaries}, we introduce some basic notation and definitions. In \cref{section:extremal}, we prove \cref{prop:extremal}. In \cref{section:regularity}, we introduce the statements about hypergraph regularity and prove the crucial \cref{cor:matchings_to_cycles} that allows us to reduce our problem of finding cycles in the complete graph to one of finding tightly connected matchings in the reduced graph. In \cref{section:blueprints}, we give the definition of a blueprint and set up some useful results. In \cref{section:matchings Kn4,section:matchings Kn5}, we prove \cref{thm:1,thm:2}, respectively. Finally, we make some concluding remarks in \cref{sec:concluding}. \section{Preliminaries} \label{section:preliminaries} If we say that a statement holds for $0 < a \ll b \leq 1$, then we mean that there exists a non-decreasing function $f\colon (0,1] \rightarrow (0,1]$ such that the statement holds for all $a, b \in (0,1]$ with $a \leq f(b)$. Similar expressions with more variables are defined analogously. If~$1/n$ appears in one of these expressions, then we implicitly assume that~$n$ is a positive integer. We often write $x_1\dots x_j$ for the set $\{x_1, \dots, x_j\}$. Moreover, for each positive integer $n$, we let $[n] = \{1, \dots, n\}$. Throughout this paper, any $2$-edge-colouring uses the colours red and blue. Let $H$ be a $2$-edge-coloured $k$-graph. We denote by~$H^{\red}$ (and~$H^{\blue}$) the subgraph of~$H$ on $V(H)$ induced by the red (and blue) edges of~$H$. Two edges~$f$ and~$f'$ in~$H$ are \emph{tightly connected} if there exists a sequence of edges $e_1,\dots,e_t$ such that~$e_1 = f$,~$e_t=f'$ and $\s{e_i \cap e_{i+1}} = k-1$ for all $i \in [t-1]$. A subgraph~$H'$ of~$H$ is \emph{tightly connected} if every pair of edges in~$H'$ is tightly connected in~$H$. A maximal tightly connected subgraph of~$H$ is called a \emph{tight component} of~$H$. Note that a tight component is a subgraph rather than a vertex subset as in the traditional graph case. A \emph{red tight component} and a \emph{red tightly connected matching} are a tight component and a tightly connected matching in $H^{\red}$, respectively. We define these terms similarly for \emph{blue}. Let $H$ be a $k$-graph and $S, W \subseteq V(H)$. We denote by~$H-W$ the $k$-graph with $V(H-W) = V(H) \setminus W$ and $E(H-W) = \left\{e \in E(H) \colon e \cap W = \varnothing \right\}$. We call~$H-W$ the \emph{$k$-graph obtained from~$H$ by deleting~$W$}. Further we let $H[W] = H - (V(H)\setminus W)$. Let $F$ be a $k$-graph or a set of $k$-element sets. We denote by $H-F$ the subgraph of~$H$ obtained by deleting the edges in~$F$. We define~$N_H(S,W)$ to be the set $\{e \in \binom{W}{k-|S|}\colon e\cup S \in H \}$ and we define~$d_H(S,W)$ to be its cardinality. Further we write~$N_H(S)$ and~$d_H(S)$ for $N_H(S,V(H))$ and $d_H(S,V(H))$, respectively. If $H$ is $2$-edge-coloured, then we write $N_H^{\red}(S,W)$, $d_H^{\red}(S,W)$, $N_H^{\blue}(S,W)$, $d_H^{\blue}(S,W)$ for $N_{H^{\red}}(S,W)$, $d_{H^{\red}}(S,W)$, $N_{H^{\blue}}(S,W)$, $d_{H^{\blue}}(S,W)$, respectively.
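Since tight components drive the entire argument, we include a small illustrative routine (Python, for illustration only; the helper name \texttt{tight\_components} is ours): two edges are adjacent when they share $k-1$ vertices, equivalently when they contain a common $(k-1)$-subset, and the tight components are the classes of the transitive closure of this adjacency.

\begin{verbatim}
from collections import defaultdict
from itertools import combinations

def tight_components(edges):
    """Partition the edges of a k-graph into tight components."""
    edges = [frozenset(e) for e in edges]
    parent = list(range(len(edges)))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Edges sharing k-1 vertices contain a common (k-1)-subset.
    by_shadow = defaultdict(list)
    for idx, e in enumerate(edges):
        for s in combinations(sorted(e), len(e) - 1):
            by_shadow[s].append(idx)
    for group in by_shadow.values():
        for a, b in zip(group, group[1:]):
            parent[find(a)] = find(b)

    comps = defaultdict(list)
    for idx, e in enumerate(edges):
        comps[find(idx)].append(sorted(e))
    return list(comps.values())

# {1,2,3} and {2,3,4} share two vertices; {5,6,7} is separate:
print(tight_components([{1, 2, 3}, {2, 3, 4}, {5, 6, 7}]))
# [[[1, 2, 3], [2, 3, 4]], [[5, 6, 7]]]
\end{verbatim}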
The \emph{link graph of~$H$ with respect to}~$S$, denoted by~$H_S$, is the $(k-\s{S})$-graph satisfying $V(H_S) = V(H)\setminus S$ and $E(H_S) = N_H(S)$. For $j \in [k-1]$, the \emph{$j$-th shadow of~$H$}, denoted by~$\partial^j H$, is the \mbox{$(k-j)$}-graph with vertex set $V(\partial^j H) = V(H)$ and edge set \[E(\partial^j H) = \left\{e \in \binom{V(H)}{k-j}\colon e \subseteq f \text{ for some } f \in E(H) \right\}. \] For $\mu,\alpha > 0$, we say that a $k$-graph~$H$ on~$n$ vertices is $(\mu,\alpha)$\emph{-dense} if, for each $i \in [k-1]$, we have $d_H(S) \geq \mu \binom{n}{k-i}$ for all but at most~$\alpha\binom{n}{i}$ sets $S \in \binom{V(H)}{i}$ and $d_H(S) = 0$ for all other $S \in \binom{V(H)}{i}$. \begin{prop} \label{prop:edges_in_dense} Let $0 \leq \alpha , \mu \leq1$ and let~$H$ be a $(\mu, \alpha)$-dense $k$-graph on~$n$ vertices. Then $\s{H} \geq (\mu - \alpha)\binom{n}{k}$. Moreover, if $\mu > 1/2$, then $H$ is tightly connected. \end{prop} \begin{proof} Note that \begin{align*} \s{H} = \frac{1}{k} \sum_{S \in \binom{V(H)}{k-1}} d_H(S) \geq \frac{1}{k} (1-\alpha)\binom{n}{k-1} \mu n \geq (\mu - \alpha) \binom{n}{k}. \end{align*} Now suppose that $\mu > 1/2$. We show that $H$ is tightly connected. Note that, for $S, S' \in \binom{V(H)}{k-1}$ with $d_H(S), d_H(S') > 0$, we have $d_H(S), d_H(S') \geq \mu n > n/2$ and thus \begin{align*} N_H(S) \cap N_H(S') \neq \varnothing. \end{align*} Let $f = x_1 \dots x_k$ and $f' = y_1 \dots y_k$ be two edges of $H$. Inductively choose vertices $z_1, \dots, z_{k-1} \in V(H)$ such that \[ z_i \in N_H(z_1 \dots z_{i-1} x_{i+1} \dots x_k) \cap N_H(z_1 \dots z_{i-1} y_{i+1} \dots y_k) \] for all $i \in [k-1]$. It follows that $f$ and $f'$ are tightly connected. \end{proof} The following proposition shows that any $k$-graph that has all but a small fraction of the possible edges contains a $(1- \varepsilon, \alpha)$-dense subgraph. The proof was inspired by the proof of Lemma 8.8 in \cite{Han2017}. A different generalisation of this lemma can also be found as Lemma $2.3$ in \cite{Lang2020}. \begin{prop} \label{prop:dense} Let $1/n \ll \alpha \ll 1/k \leq 1/2$. Let~$H$ be a $k$-graph on~$n$ vertices with $\s{H} \geq (1-\alpha)\binom{n}{k}$. Then there exists a subgraph $H'$ of $H$ such that $V(H') = V(H)$ and~$H'$ is $(1-2\alpha^{1/4k^2}, 2\alpha^{1/4k^2})$-dense. \end{prop} \begin{proof} We call a set $S \subseteq V(H)$ with $\s{S} \in [k-1]$ \emph{bad} if $d_H(S) < (1-\alpha^{1/2})\binom{n}{k-\s{S}}$. For $i \in [k-1]$, let~$\mathcal{B}_i$ be the set of all bad $i$-sets. For each $i \in [k-1]$, we have \[ (1-\alpha)\binom{k}{i}\binom{n}{k} \leq \binom{k}{i} \s{H} = \sum_{S \in \binom{V(H)}{i}} d_H(S) \leq \binom{n}{i} \binom{n}{k-i} - \alpha^{1/2}\binom{n}{k-i}\s{\mathcal{B}_i}. \] This implies \[ \s{\mathcal{B}_i} \leq \frac{1}{\alpha^{1/2}} \left( \binom{n}{i} - \frac{(1-\alpha)\binom{k}{i}\binom{n}{k}}{\binom{n}{k-i}} \right) \leq 2 \alpha^{1/2} \binom{n}{i}. \] Let $\beta = \alpha^{1/2k}$. For all $j \in \{k-1, k-2, \dots, 1\}$ in turn, we construct $\mathcal{A}_j \subseteq \binom{V(H)}{j}$ inductively as follows. We set $\mathcal{A}_{k-1} = \mathcal{B}_{k-1}$. Given $2 \leq j \leq k-1$ and $\mathcal{A}_j$, we define $\mathcal{A}_{j-1} \subseteq \binom{V(H)}{j-1}$ to be the set of all $X \in \binom{V(H)}{j-1}$ such that $X \in \mathcal{B}_{j-1}$ or $d_{\mathcal{A}_j}(X) \geq \beta^{1/2}n$. \begin{claim} \label{claim:dense1} For all $i \in [k-1]$, $\s{\mathcal{A}_i} \leq \beta^i \binom{n}{i}$. 
Moreover, if $1 \leq i < j \leq k-1$ and a set $S \in \binom{V(H)}{i}$ satisfies $d_{\mathcal{A}_j}(S) \geq \beta^{1/2(j-i)}\binom{n}{j-i}$, then $S \in \mathcal{A}_i$. \end{claim} \begin{proofclaim} We first prove the first part by induction on $k-i$. For $i = k-1$, we have $\s{\mathcal{A}_{k-1}} = \s{\mathcal{B}_{k-1}} \leq 2 \alpha^{1/2}\binom{n}{k-1} \leq \beta^{k-1} \binom{n}{k-1}$. Now suppose $2 \leq i \leq k-1$ and $\s{\mathcal{A}_i} \leq \beta^i\binom{n}{i}$. By double counting tuples $(X,w)$ with $X \in \mathcal{A}_{i-1} \setminus \mathcal{B}_{i-1}$ and $X \cup w \in \mathcal{A}_i$, we have $\br{\s{\mathcal{A}_{i-1}}- \s{\mathcal{B}_{i-1}}}\beta^{1/2}n \leq i \s{\mathcal{A}_i}$. Hence \begin{align*} \s{\mathcal{A}_{i-1}} &\leq \frac{i}{\beta^{1/2}n}\s{\mathcal{A}_i} + \s{\mathcal{B}_{i-1}} \leq \frac{i}{\beta^{1/2}n}\beta^i\binom{n}{i} + 2 \alpha^{1/2}\binom{n}{i-1} \\ &= \beta^{i-1/2} \binom{n-1}{i-1} + 2 \alpha^{1/2}\binom{n}{i-1} \leq \beta^{i-1} \binom{n}{i-1}. \end{align*} This proves the first part of the claim. We now prove the second part of the claim. Fix $i \in [k-1]$. We proceed by induction on $j-i$. For $j = i+ 1$, the statement holds by the definition of $\mathcal{A}_i$. Now let $S \in \binom{V(H)}{i}$ and $j \geq i+2$ be such that $d_{\mathcal{A}_j} (S) \geq \beta^{1/2(j-i)} \binom{n}{j-i}$. If $S \in \mathcal{B}_i$, then $S \in \mathcal{A}_i$. Recall that if $T \in \binom{V(H)}{j-1} \setminus \mathcal{A}_{j-1}$, then $d_{\mathcal{A}_j} (T) < \beta^{1/2}n$. We have \begin{align*} \beta^{1/2(j-i)}\binom{n}{j-i} & \leq d_{\mathcal{A}_j}(S) \leq \sum_{\substack{ T \in \mathcal{A}_{j-1} \\ S \subseteq T}} d_{\mathcal{A}_{j}}(T) + \sum_{\substack{ T \in \binom{V(H)}{j-1} \setminus \mathcal{A}_{j-1} \\ S \subseteq T}} d_{\mathcal{A}_{j}}(T) \\ &\leq n d_{\mathcal{A}_{j-1}}(S) + \beta^{1/2}n d_{\binom{V(H)}{j-1}\setminus \mathcal{A}_{j-1}}(S) \\ &\leq n d_{\mathcal{A}_{j-1}}(S) + \beta^{1/2}n \binom{n}{j-i-1}, \end{align*} and thus \[ d_{\mathcal{A}_{j-1}}(S) \geq \beta^{1/2(j-i-1)}\binom{n}{j-i-1}. \] Hence by the induction hypothesis we have $S \in \mathcal{A}_i$. \end{proofclaim} For each $j \in [k-1]$, let $F_j$ be the set of edges $e \in H$ for which there exists some $S \in \mathcal{A}_j$ with $S \subseteq e$. Let $F = \bigcup_{j \in [k-1]} F_j$ and let $H' = H - F$. We will show that $H'$ is the desired $k$-graph. For $i \in [k-1]$, let $\mathcal{S}_i$ be the set of all $S \in \binom{V(H)}{i}$ such that $d_F(S) \geq \beta^{1/2k}\binom{n}{k-i}$. \begin{claim} For $i \in [k-1]$, $\s{\mathcal{S}_i} \leq \beta^{1/2}\binom{n}{i}$. \end{claim} \begin{proofclaim} For $j \in [k-1]$, we have \[ \s{F_j} \leq \s{\mathcal{A}_j} \binom{n-j}{k-j} \overset{\text{\cref{claim:dense1}}}{\leq} \beta^j\binom{n}{j}\binom{n-j}{k-j} = \beta^j\binom{k}{j}\binom{n}{k}. \] Thus \[ \s{F} \leq \sum_{j\in [k-1]} \s{F_j} \leq \sum_{j \in [k-1]} \beta^j \binom{k}{j} \binom{n}{k} \leq 2^k \beta \binom{n}{k}. \] Now, for $i \in [k-1]$, we have \[ \frac{\s{\mathcal{S}_i}\beta^{1/2k}\binom{n}{k-i}}{\binom{k}{i}} \leq \s{F} \leq 2^k \beta \binom{n}{k} \] and thus $\s{\mathcal{S}_i} \leq \beta^{1/2}\binom{n}{i}$. \end{proofclaim} Consider $i \in [k-1]$. Note that $\s{\mathcal{S}_i \cup \mathcal{B}_i} \leq 2\alpha^{1/4k^2} \binom{n}{i}$. Now let $S \in \binom{V(H)}{i} \setminus \br{\mathcal{S}_i \cup \mathcal{B}_i}$. As $S \not\in \mathcal{B}_i$, we have $d_H(S) \geq (1-\alpha^{1/2}) \binom{n}{k-i}$. 
As $S \not\in \mathcal{S}_i$, we have \begin{align*} d_{H'}(S) &= d_H(S) - d_F(S) \geq d_H(S) - \beta^{1/2k}\binom{n}{k-i} \\ &\geq (1-\alpha^{1/2}- \beta^{1/2k}) \binom{n}{k-i} \geq (1 - 2\alpha^{1/4k^2})\binom{n}{k-i}. \end{align*} Consider $X \in \binom{V(H)}{i}$ with $d_{H'}(X) \neq 0$. We want to show that $d_{H'}(X) \geq (1- 2\alpha^{1/4k^2})\binom{n}{k-i}$. By the above, it suffices to show that $X \not\in \mathcal{B}_i \cup \mathcal{S}_i$. Let $e \in H'$ with $X \subseteq e$. Since $e \not\in F_i$, $X \not\in \mathcal{A}_i$ and thus $X \not\in \mathcal{B}_i$. It remains for us to show that $X \not\in \mathcal{S}_i$. Assume to the contrary that $X$ is contained in more than $\beta^{1/2k} \binom{n}{k-i}$ edges of $F$. Let $\mathcal{Y} = N_F(X)$, so $\s{\mathcal{Y}} \geq \beta^{1/2k} \binom{n}{k-i}$. For each $Y \in \mathcal{Y}$, fix a set $A_Y \in \bigcup_{j \in [k-1]} \mathcal{A}_j$ such that $A_Y \subseteq X \cup Y$ and let $T_Y = X \cap A_Y$ and $S_Y = Y \setminus A_Y$. If $A_Y \subseteq X$, then $A_Y \subseteq e \in H'$, a contradiction. Hence $A_Y \cap Y \neq \varnothing$ for all $Y \in \mathcal{Y}$. For $Y \in \mathcal{Y}$, we have $X \cap Y = \varnothing$, and thus $\s{T_Y} \leq \s{A_Y} -1 \leq k-2$. By an averaging argument, there exist $t \in \{0, 1, \dots, k-2\}$, $T \in \binom{X}{t}$, $a \in [k-1]$, $S \in \binom{V(H)}{k-i-a+t}$ and $\widetilde{\mathcal{Y}} \subseteq \mathcal{Y}$ such that, for all $Y \in \widetilde{\mathcal{Y}}$, we have $T_Y = T$, $\s{A_Y} = a$, $S_Y = S$ and \[ \s{\widetilde{\mathcal{Y}}} \geq \frac{\s{\mathcal{Y}}}{2^i (k-1) \binom{n}{k-i-a+t}} \geq \beta^{1/2(k-1)}\binom{n}{a-t}. \] Since $Y \setminus A_Y = S_Y = S$ and $\s{A_Y} = a$ for all $Y \in \widetilde{\mathcal{Y}}$, the $A_Y$ are distinct for all $Y \in \widetilde{\mathcal{Y}}$. Recall that $T \subseteq A_Y \in \mathcal{A}_a$ for each $Y \in \widetilde{\mathcal{Y}}$. If $T = \varnothing$, then $t=0$ and so $\s{\mathcal{A}_a} \ge \s{\widetilde{\mathcal{Y}}} > \beta^a \binom{n}{a}$ contradicting \cref{claim:dense1}. If $T \ne \varnothing$, then we have $d_{\mathcal{A}_a}(T) \ge \s{\widetilde{\mathcal{Y}}} \ge \beta^{1/2(k-1)}\binom{n}{a-t}$. \cref{claim:dense1} implies that $T \in \mathcal{A}_t$. Since $T \subseteq X \subseteq e$, we have $e \in F_t$ contradicting the fact that $e \in H' = H - \bigcup_{j \in [k-1]}F_j $. \end{proof} \section{Extremal example} \label{section:extremal} In this section, we prove \cref{prop:extremal}, that is, we prove that, for $k \geq 3$, there exist arbitrarily large $2$-edge-coloured complete $k$-graphs that do not admit a partition into two tight cycles of distinct colours. A \emph{$k$-uniform tight path} is a $k$-graph obtained by deleting a vertex from a tight cycle. First we need the following proposition. \begin{prop} \label{prop:path} Let $k \geq 3$ and let~$P$ and~$C$ be a $k$-uniform tight path and a tight cycle, respectively. We have the following. \begin{enumerate}[label = \upshape (\roman*)] \item \label{extrprop1} If~$X$ and~$Y$ partition~$V(P)$ such that $\s{e \cap Y} \geq 2$ for all $e \in P$, then $2(\s{X} - (k-1)) \leq (k-2) \s{Y}.$ \item \label{extrprop2} If~$X$ and~$Y$ partition~$V(C)$ such that $\s{e \cap Y} \geq 2$ for all $e \in C$, then $2\s{X} \leq (k-2) \s{Y}.$ \end{enumerate} \end{prop} \begin{proof} We first prove \ref{extrprop1}. Let~$M$ be a matching of maximum size in~$P$.
Since each edge of~$P$ contains at least~$2$ vertices of~$Y$, \[ \s{X} \leq \s{X \cap V(M)} + \s{V(P) \setminus V(M)} \leq (k-2)\s{M} + k-1 \leq \frac{(k-2)\s{Y}}{2} + k-1. \] Now we prove \ref{extrprop2}. Since $\s{e \cap Y} \geq 2$ and $\s{e \cap X} \leq k-2$ for each edge $e \in C$, we have \[ \s{X} = \frac{1}{k} \sum_{e \in C} \s{e \cap X} = \frac{1}{k} \sum_{e \in C} \frac{\s{e \cap X}}{\s{e \cap Y}} \s{e \cap Y} \leq \frac{1}{k} \sum_{e \in C} \frac{k-2}{2} \s{e \cap Y} = \frac{k-2}{2}\s{Y}. \] \end{proof} We are now ready to give our extremal example. Note that the case~$k=3$ of the extremal example is already given in \cite{Garbe2019}. Recall that, in a $k$-graph, we consider a single edge and any set of fewer than $k$ vertices to be degenerate cycles. \begin{proof}[Proof of \cref{prop:extremal}] Let $k\geq 3$, $m \geq k+1$ and $n = k(m+1)+1$. Let~$X$,~$Y$ and~$\{z\}$ be three disjoint vertex sets of~$K_n^{(k)}$ of sizes~$(k-1)m+k-2$,~$m+2$ and 1, respectively. We colour an edge~$e$ in~$K_n^{(k)}$ red if either $z \in e$ and $\s{e \cap Y} \geq 2$, or $z \not\in e$ and $\s{e \cap Y} = 1$; otherwise we colour it blue. Note that $K_n^{(k)} - z$ has the following three monochromatic tight components: \begin{align*} B_1 = \binom{X}{k},\ B_2 = \left\{e \in \binom{X \cup Y}{k}\colon \s{e \cap Y} \geq 2\right\},\ R = \left\{e \in \binom{X\cup Y}{k} \colon \s{e\cap Y} =1 \right\}. \end{align*} Note that~$B_1$ and~$B_2$ are blue and~$R$ is red. Suppose for a contradiction that~$K_n^{(k)}$ can be partitioned into a red tight cycle~$C_R$ and a blue tight cycle~$C_B$. First assume $z \in V(C_R)$. Since all the red edges containing $z$ are in a red tight component disjoint from $R$, we have $\s{V(C_R)} \leq k.$ Hence $\s{V(C_B)} = n - \s{V(C_R)} \geq n-k \geq km > k$ and $\s{V(C_B) \cap Y} = \s{Y \setminus V(C_R)} \geq m+2-(k-1) \geq 1$. So~$C_B$ is not degenerate and $C_B \subseteq B_2$. Any edge~$e \in C_B$ contains at least~$2$ vertices in~$Y$. By \cref{prop:path}\ref{extrprop2}, $2 \s{V(C_B) \cap X} \leq (k-2)\s{V(C_B) \cap Y}$. It follows that \begin{align*} 2(k-1)m -2 &= 2(\s{X} - (k-1)) \leq 2 \s{V(C_B)\cap X} \\ &\leq (k-2)\s{V(C_B) \cap Y} \leq (k-2)\s{Y} = (k-2)(m+2). \end{align*} This implies that $m \leq 2$, a contradiction. Hence, we may assume that $z \in V(C_B)$. This implies that $C_R \subseteq R$ or $\s{V(C_R)} \leq k-1$. Let $x_R = \s{V(C_R) \cap X}$, $y_R = \s{V(C_R) \cap Y}$, $x_B = \s{V(C_B) \cap X}$ and $y_B = \s{V(C_B) \cap Y}$. Let $P_B$ be the tight path $C_B -z$. Clearly $\s{V(P_B) \cap X} = x_B$ and $\s{V(P_B) \cap Y} = y_B$. Since $C_R \subseteq R$ or $\s{V(C_R)} \leq k-1$, \[ \s{V(C_R) \cap Y} \leq \max \left\{ \left\lfloor \frac{\s{X}}{k-1}\right\rfloor, k-1 \right\} = m < \s{Y}. \] Hence, $V(P_B) \cap Y \neq \varnothing$ and $\s{V(P_B)} \geq (n-1) - km \geq k$. We must have $P_B \subseteq B_2$. By \cref{prop:path}\ref{extrprop1}, we have that \begin{align} \label{eqxb} 2(x_B - (k-1)) \leq (k-2)y_B. \end{align} Thus \begin{align*} \s{V(P_B)} &= x_B + y_B \leq \frac{k}{2}y_B + k-1 \leq \frac{k}{2}\s{Y} + k-1 = \frac{k}{2}(m+2) + k-1 \\ &\leq mk = n-1 - k. \end{align*} This implies that $\s{V(C_R)} \geq k$. Hence $C_R \subseteq R$ and thus \begin{align} \label{eqxr} x_R = (k-1)y_R. \end{align} Since $x_R + x_B = \s{X} = (k-1)m+k-2$ and $y_R + y_B = \s{Y} = m+2$, \cref{eqxb} implies \begin{align*} (k-2)(m+2 - y_R) &\geq 2(\s{X} - x_R -(k-1)) \\ &= 2((k-1)m+k-2-(k-1)y_R-(k-1)), \end{align*} which implies $y_R \geq m-1$.
If $y_R = m-1$, then \eqref{eqxr} implies that $x_R = (k-1)(m-1)$ and thus $x_B = 2k-3$ and~$y_B=3$. Let $P_B = v_1\dots v_{2k}$. Either the edge $v_1 \dots v_k$ or the edge $v_{k+1}\dots v_{2k}$ contains at most one vertex of~$Y$, a contradiction to $P_B \subseteq B_2$. Thus we may assume $y_R \geq m$. Note that by \cref{eqxr}, \[ (k-1)y_R = x_R \leq \s{X} = (k-1)m+k-2 \] and so $y_R = m$. Hence $x_R = (k-1)m$ and thus $x_B =k-2$ and $y_B = 2$. Moreover,~$C_B$ is a copy of $K_{k+1}^{(k)}$ and hence contains an edge consisting of~$z$, the two vertices of $V(C_B) \cap Y$ and $k-3$ vertices of~$X$; such an edge contains~$z$ and at least two vertices of~$Y$ and is therefore red, contradicting that~$C_B$ is blue. \end{proof} \section{Hypergraph regularity} \label{section:regularity} In this section, we follow the notation of Allen, B\"ottcher, Cooley and Mycroft \cite{Allen2017}. A \emph{hypergraph} $\mathcal{H}$ is an ordered pair $(V(\mathcal{H}), E(\mathcal{H}))$, where $E(\mathcal{H}) \subseteq 2^{V(\mathcal{H})}$. Again, we identify the hypergraph $\mathcal{H}$ with its edge set $E(\mathcal{H})$. A subgraph $\mathcal{H}'$ of $\mathcal{H}$ is a hypergraph with $V(\mathcal{H}') \subseteq V(\mathcal{H})$ and $E(\mathcal{H}') \subseteq E(\mathcal{H})$. It is \emph{spanning} if $V(\mathcal{H}') = V(\mathcal{H})$. For $U \subseteq V(\mathcal{H})$, we define $\mathcal{H}[U]$ to be the subgraph of $\mathcal{H}$ with $V(\mathcal{H}[U]) = U$ and $E(\mathcal{H}[U]) = \{ e \in E(\mathcal{H}) \colon e \subseteq U\}$. We call $\mathcal{H}$ a \emph{complex} if $\mathcal{H}$ is down-closed, that is, if $e \in \mathcal{H}$ and $f \subseteq e$, then $f \in \mathcal{H}$. A \emph{$k$-complex} is a complex with only edges of size at most~$k$. We denote by $\mathcal{H}^{(i)}$ the spanning subgraph of~$\mathcal{H}$ containing only the edges of size $i$. Let $\mathcal{P}$ be a partition of $V(\mathcal{H})$ into parts $V_1, \dots, V_s$. Then we say that a set $S \subseteq V(\mathcal{H})$ is \emph{$\mathcal{P}$-partite} if $\s{S \cap V_i} \leq 1$ for all $i \in [s]$. For $\mathcal{P}' = \{V_{i_1}, \dots, V_{i_r}\} \subseteq \mathcal{P}$, we define the subgraph of $\mathcal{H}$ induced by $\mathcal{P}'$, denoted by $\mathcal{H}[\mathcal{P'}]$ or $\mathcal{H}[V_{i_1}, \dots, V_{i_r}]$, to be the subgraph of $\mathcal{H}[\bigcup \mathcal{P}']$ containing only the edges that are $\mathcal{P}'$-partite. The hypergraph $\mathcal{H}$ is $\mathcal{P}$-partite if all of its edges are $\mathcal{P}$-partite. In this case we call the parts of $\mathcal{P}$ the \emph{vertex classes} of $\mathcal{H}$. We say that $\mathcal{H}$ is \emph{$s$-partite} if it is $\mathcal{P}$-partite for some partition $\mathcal{P}$ of $V(\mathcal{H})$ into~$s$ parts. Let $\mathcal{H}$ be a $\mathcal{P}$-partite hypergraph. If~$X$ is a $k$-set of vertex classes of~$\mathcal{H}$, then we write~$\mathcal{H}_X$ for the $k$-partite subgraph of $\mathcal{H}^{(k)}$ induced by~$\bigcup X$, whose vertex classes are the elements of~$X$. Moreover, we denote by $\mathcal{H}_{X^<}$ the $k$-partite hypergraph with $V(\mathcal{H}_{X^<}) = \bigcup X$ and $E(\mathcal{H}_{X^<}) = \bigcup_{X'\subsetneq X} \mathcal{H}_{X'}$. In particular, if $\mathcal{H}$ is a complex, then $\mathcal{H}_{X^<}$ is a $(k-1)$-complex because~$X$ is a set of size $k$. Let $i \geq 2$, and let $\mathcal{P}_i$ be a partition of a vertex set~$V$ into~$i$ parts. Let~$H_i$ and $H_{i-1}$ be a $\mathcal{P}_i$-partite $i$-graph and a $\mathcal{P}_i$-partite $(i-1)$-graph on a common vertex set $V$, respectively.
We say that a $\mathcal{P}_i$-partite $i$-set in $V$ is \emph{supported on} $H_{i-1}$ if it induces a copy of the complete $(i-1)$-graph $K_i^{(i-1)}$ on~$i$ vertices in~$H_{i-1}$. We denote by $K_i(H_{i-1})$ the $\mathcal{P}_i$-partite $i$-graph on~$V$ whose edges are all $\mathcal{P}_i$-partite $i$-sets contained in~$V$ which are supported on~$H_{i-1}$. Now we define the \emph{density of~$H_i$ with respect to~$H_{i-1}$} to be \[ d(H_i \mid H_{i-1}) = \frac{\s{K_i(H_{i-1}) \cap H_i}}{\s{K_i(H_{i-1})}} \] if $\s{K_i(H_{i-1})} > 0$ and $d(H_i \mid H_{i-1}) = 0$ if $\s{K_i(H_{i-1})} = 0$. So $d(H_i \mid H_{i-1})$ is the proportion of $\mathcal{P}_i$-partite copies of $K_i^{(i-1)}$ in~$H_{i-1}$ which are also edges of~$H_i$. More generally, if $\mathbf{Q} = (Q_1, Q_2, \dots, Q_r)$ is a collection of~$r$ (not necessarily disjoint) subgraphs of~$H_{i-1}$, we define $K_i(\mathbf{Q}) = \bigcup_{j=1}^r K_i(Q_j)$ and \[ d(H_i \mid \mathbf{Q}) = \frac{\s{K_i(\mathbf{Q}) \cap H_i}}{\s{K_i(\mathbf{Q})}} \] if $\s{K_i(\mathbf{Q})} > 0$ and $d(H_i \mid \mathbf{Q}) = 0$ if $\s{K_i(\mathbf{Q})} = 0$. We say that~$H_i$ is \emph{$(d_i, \varepsilon, r)$-regular with respect to~$H_{i-1}$}, if we have $d(H_i \mid \mathbf{Q}) = d_i \pm \varepsilon$ for every $r$-set $\mathbf{Q}$ of subgraphs of~$H_{i-1}$ with $\s{K_i(\mathbf{Q})} > \varepsilon \s{K_i(H_{i-1})}$. We say that~$H_i$ is \emph{$(\varepsilon, r)$-regular with respect to~$H_{i-1}$} if there exists some~$d_i$ for which~$H_i$ is $(d_i, \varepsilon, r)$-regular with respect to~$H_{i-1}$. Finally, given an $i$-graph~$G$ whose vertex set contains that of~$H_{i-1}$, we say that~$G$ is \emph{$(d_i, \varepsilon, r)$-regular with respect to~$H_{i-1}$} if the $i$-partite subgraph of~$G$ induced by the vertex classes of~$H_{i-1}$ is $(d_i, \varepsilon, r)$-regular with respect to~$H_{i-1}$. We refer to the density of this $i$-partite subgraph of~$G$ with respect to~$H_{i-1}$ as the \emph{relative density of~$G$ with respect to $H_{i-1}$}. Now let $s \geq k \geq 3$ and let $\mathcal{H}$ be an $s$-partite $k$-complex on vertex classes $V_1, \dots, V_s$. For any set $A \subseteq [s]$, we write~$V_A$ for $\bigcup_{i \in A} V_i$. Note that, if $e \in \mathcal{H}^{(i)}$ for some $2 \leq i \leq k$, then the vertices of~$e$ induce a copy of $K_i^{(i-1)}$ in $\mathcal{H}^{(i-1)}$. Therefore, for any set $A \in \binom{[s]}{i}$, the density $d(\mathcal{H}^{(i)}[V_A] \mid \mathcal{H}^{(i-1)}[V_A])$ is the proportion of `possible edges' of $\mathcal{H}^{(i)}[V_A]$ which are indeed edges. We say that $\mathcal{H}$ is \emph{$(d_k, \dots, d_2, \varepsilon_k, \varepsilon, r)$-regular} if \begin{enumerate}[label=(\alph*)] \item for any $2 \leq i \leq k-1$ and any $A \in \binom{[s]}{i}$, the induced subgraph $\mathcal{H}^{(i)}[V_A]$ is $(d_i, \varepsilon, 1)$-regular with respect to $\mathcal{H}^{(i-1)}[V_A]$, and \item for any $A \in \binom{[s]}{k}$, the induced subgraph $\mathcal{H}^{(k)}[V_A]$ is $(d_k, \varepsilon, r)$-regular with respect to $\mathcal{H}^{(k-1)}[V_A]$. \end{enumerate} For a $(k-1)$-tuple $\mathbf{d} = (d_k, \dots, d_2)$, we write $(\mathbf{d}, \varepsilon_k, \varepsilon, r)$-regular to mean $(d_k, \dots, d_2, \varepsilon_k, \varepsilon, r)$-regular. We say that a $(k-1)$-complex $\mathcal{J}$ is \emph{$(t_0, t_1, \varepsilon)$-equitable} if it has the following properties.
\begin{enumerate}[label = (\alph*)] \item $\mathcal{J}$ is $\mathcal{P}$-partite for some $\mathcal{P}$ which partitions $V(\mathcal{J})$ into~$t$ parts of equal size, where $t_0 \leq t \leq t_1$. We refer to $\mathcal{P}$ as the \emph{ground partition} of $\mathcal{J}$, and to the parts of $\mathcal{P}$ as the \emph{clusters} of $\mathcal{J}$. \item There exists a \emph{density vector} $\mathbf{d} = (d_{k-1}, \dots, d_2)$ such that, for each $2 \leq i \leq k-1$, we have $d_i \geq 1/t_1$ and $1/d_i \in \mathbb{N}$, and $\mathcal{J}$ is $(\mathbf{d}, \varepsilon, \varepsilon, 1)$-regular. \end{enumerate} For any $k$-set~$X$ of clusters of $\mathcal{J}$, we denote by $\hat{\mathcal{J}}_X$ the $k$-partite $(k-1)$-graph $(\mathcal{J}_{X^<})^{(k-1)}$ and call $\hat{\mathcal{J}}_X$ a \emph{polyad}. Given a $(t_0, t_1, \varepsilon)$-equitable $(k-1)$-complex $\mathcal{J}$ and a $k$-graph~$G$ on $V(\mathcal{J})$, we say that~$G$ is \emph{$(\varepsilon_k, r)$-regular with respect to a $k$-set~$X$ of clusters of $\mathcal{J}$} if there exists some~$d$ such that~$G$ is $(d, \varepsilon_k, r)$-regular with respect to the polyad $\hat{\mathcal{J}}_X$. Moreover, we write $d_{G, \mathcal{J}}^*(X)$ for the relative density of~$G$ with respect to $\hat{\mathcal{J}}_X$; we may drop either subscript if it is clear from context. We can now give the crucial definition of a regular slice. \begin{defn}[Regular slice] Given $\varepsilon, \varepsilon_k > 0$ and $r, t_0, t_1 \in \mathbb{N}$, a graph~$G$ and a $(k-1)$-complex $\mathcal{J}$ on~$V(G)$, we call $\mathcal{J}$ a \emph{$(t_0, t_1, \varepsilon, \varepsilon_k,r)$-regular slice} for~$G$ if $\mathcal{J}$ is $(t_0, t_1, \varepsilon)$-equitable and~$G$ is $(\varepsilon_k, r)$-regular with respect to all but at most $\varepsilon_k \binom{t}{k}$ of the $k$-sets of clusters of~$\mathcal{J}$, where~$t$ is the number of clusters of $\mathcal{J}$. \end{defn} If we specify the density vector $\mathbf{d}$ and the number of clusters~$t$ of an equitable complex or a regular slice, then it is not necessary to specify~$t_0$ and~$t_1$ (since the only role of these is to bound $\mathbf{d}$ and~$t$). In this situation we write that $\mathcal{J}$ is $(\cdot, \cdot, \varepsilon)$-equitable, or is a $(\cdot, \cdot, \varepsilon, \varepsilon_k, r)$-regular slice for~$G$. Given a regular slice $\mathcal{J}$ for a $k$-graph~$G$, we define the $d$-reduced $k$-graph $\mathcal{R}_d^{\mathcal{J}}(G)$ as follows. \begin{defn}[The $d$-reduced $k$-graph] Let $k \geq 3$. Let~$G$ be a $k$-graph and let $\mathcal{J}$ be a $(t_0, t_1, \varepsilon, \varepsilon_k, r)$-regular slice for~$G$. Then, for~$d >0$, we define the \emph{$d$-reduced $k$-graph $\mathcal{R}_d^{\mathcal{J}}(G)$} to be the $k$-graph whose vertices are the clusters of $\mathcal{J}$ and whose edges are all $k$-sets~$X$ of clusters of $\mathcal{J}$ such that~$G$ is $(\varepsilon_k, r)$-regular with respect to~$X$ and $d^*(X) \geq d$. \end{defn} We now state the version of the Regular Slice Lemma that we need, which is a special case of~\cite[Lemma 10]{Allen2017}. \begin{lem}[Regular Slice Lemma {\cite[Lemma 10]{Allen2017}}] \label{lem:regular slice} Let $k \geq 3$. For all positive integers~$t_0$ and~$s$, positive~$\varepsilon_k$ and all functions $r\colon \mathbb{N} \rightarrow \mathbb{N}$ and $\varepsilon \colon \mathbb{N} \rightarrow (0,1]$, there are integers~$t_1$ and~$n_0$ such that the following holds for all $n \geq n_0$ which are divisible by~$t_1!$. Let $K$ be a $2$-edge-coloured complete $k$-graph on $n$ vertices.
Then there exists a $(k-1)$-complex~$\mathcal{J}$ on~$V(K)$ which is a $(t_0, t_1, \varepsilon(t_1), \varepsilon_k, r(t_1))$-regular slice for both~$K^{\red}$ and $K^{\blue}$. \end{lem} Given a $2$-edge-coloured complete $k$-graph~$H$, we want to apply the Regular Slice Lemma to $H^{\red}$ and $H^{\blue}$. The following lemma shows that in this setting the union of the corresponding reduced graphs $\mathcal{R}_d^\mathcal{J}(H^{\red}) \cup \mathcal{R}_d^\mathcal{J}(H^{\blue})$ is almost complete. \begin{lem}[{\cite[Lemma 8.5]{Garbe2019}}] \label{lem:reduced graph edge count} Let $k \geq 3$. Let $K$ be a $2$-edge-coloured complete $k$-graph and let $\mathcal{J}$ be a $(\cdot,\cdot,\varepsilon, \varepsilon_k,r)$-regular slice for both~$K^{\red}$ and~$K^{\blue}$. Let~$t$ be the number of clusters of $\mathcal{J}$. Then, provided that $d \leq 1/2$, we have $\s{\mathcal{R}_d^{\mathcal{J}}(K^{\red}) \cup \mathcal{R}_d^{\mathcal{J}}(K^{\blue})} \geq (1 - 2\varepsilon_k)\binom{t}{k}$. \end{lem} \begin{proof} Since $\mathcal{J}$ is a $(\cdot,\cdot,\varepsilon, \varepsilon_k,r)$-regular slice for both~$K^{\red}$ and~$K^{\blue}$ there are at least $(1-2 \varepsilon_k)\binom{t}{k}$ $k$-sets $X$ of clusters of $\mathcal{J}$ such that both~$K^{\red}$ and~$K^{\blue}$ are $(\varepsilon_k, r)$-regular with respect to $X$. Let $X$ be such a $k$-set. Since~$K^{\red}$ and~$K^{\blue}$ are complements of each other, we have $d_{K^{\red},\mathcal{J}}^*(X) + d_{K^{\blue},\mathcal{J}}^*(X) = 1$. Hence $d_{K^{\red},\mathcal{J}}^*(X) \geq 1/2$ or $d_{K^{\blue},\mathcal{J}}^*(X) \geq 1/2$ and thus, since $d \leq 1/2$, we have $X \in \mathcal{R}_d^{\mathcal{J}}(K^{\red}) \cup \mathcal{R}_d^{\mathcal{J}}(K^{\blue})$. \end{proof} Let $H$ be a $k$-graph. A \emph{fractional matching} of~$H$ is a function $\omega : E(H) \rightarrow [0, 1]$ such that for all $v \in V(H)$, $\sum_{e \in H: v \in e} \omega(e) \le 1$. The \emph{weight} of the fractional matching is defined to be $\sum_{e \in H} \omega(e)$. A fractional matching is \emph{tightly connected} if the subgraph induced by the edges~$e$ with $\omega (e)>0$ is tightly connected in~$H$. The following result from~{\cite{Allen2017}} converts a tightly connected fractional matching in the reduced graph into a tight cycle in the original graph. \begin{lem}[{\cite[Lemma 13]{Allen2017}}] \label{lem:cycle} Let $k,r,n_0,t$ be positive integers, and let $\psi, \varepsilon, \varepsilon_k, d_k, \dots, d_2$ be positive constants such that $1/d_i \in \mathbb{N}$ for each $2 \leq i \leq k-1$, and such that $1/n_0 \ll 1/t$, \[ \frac{1}{n_0} \ll \frac{1}{r}, \varepsilon \ll \varepsilon_k, d_{k-1}, \dots, d_2 \quad \text{and} \quad \varepsilon_k \ll \psi, d_k, \frac{1}{k}. \] Then the following holds for all integers $n \geq n_0$. Let~$G$ be a $k$-graph on~$n$ vertices, and $\mathcal{J}$ be a $(\cdot, \cdot, \varepsilon, \varepsilon_k, r)$-regular slice for~$G$ with~$t$ clusters and density vector $(d_{k-1}, \dots, d_2)$. Suppose that $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$ contains a tightly connected fractional matching with weight~$\mu$. Then~$G$ contains a tight cycle of length~$\ell$ for every $\ell \leq (1-\psi)k\mu n/t$ that is divisible by~$k$. \end{lem}
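To illustrate the notions just defined, the toy check below (an illustration only, not part of the proofs; the helper name \texttt{is\_fractional\_matching} is ours) assigns weight $1/3$ to each edge of the tight $3$-uniform cycle on five vertices. This yields a tightly connected fractional matching of weight $5/3$, strictly larger than the maximum integer matching, which has size one.

\begin{verbatim}
def is_fractional_matching(edges, weights):
    """Check that the total weight at every vertex is at most 1."""
    load = {}
    for e, w in zip(edges, weights):
        assert 0.0 <= w <= 1.0
        for v in e:
            load[v] = load.get(v, 0.0) + w
    return all(x <= 1.0 + 1e-9 for x in load.values())

# Tight 3-uniform cycle on 5 vertices; consecutive edges share 2
# vertices, so the support is tightly connected.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 0), (4, 0, 1)]
weights = [1.0 / 3.0] * 5
print(is_fractional_matching(edges, weights), sum(weights))
# True 1.666...: every vertex lies in three edges, so each is saturated.
\end{verbatim}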
We use the following fact, lemma and proposition to prove a stronger version of \cref{lem:cycle}, \cref{lem:main}, that allows us to control the location of the tight cycle. \begin{fact}[{\cite[Fact 7]{Allen2017}}] \label{fact} Suppose that $1/m_0 \ll \varepsilon \ll 1/t_1, 1/t_0, \beta, 1/k \leq 1/3$ and that $\mathcal{J}$ is a $(t_0, t_1, \varepsilon)$-equitable $(k-1)$-complex with density vector $(d_{k-1}, \dots, d_2)$ whose clusters each have size $m \geq m_0$. Let~$X$ be a set of~$k$ clusters of $\mathcal{J}$. Then \[ \s{K_k((\mathcal{J}_{X^<})^{(k-1)})} = (1 \pm \beta)m^k \prod_{i=2}^{k-1} d_i^{\binom{k}{i}}. \] \end{fact} \begin{lem}[Regular Restriction Lemma {\cite[Lemma 28]{Allen2017}}] \label{regrestriction} Suppose integers~$k,m$ and reals $\alpha, \varepsilon, \varepsilon_k, d_k, \dots, d_2 >0$ are such that \[ \frac{1}{m} \ll \varepsilon \ll \varepsilon_k, d_{k-1}, \dots, d_2 \quad \text{and} \quad \varepsilon_k \ll \alpha, \frac{1}{k}. \] For any $r,s \in \mathbb{N}$ and~$d_k > 0$, set $\mathbf{d} = (d_k, \dots, d_2)$, and let $\mathcal{G}$ be an $s$-partite $k$-complex whose vertex classes $V_1, \dots, V_s$ each have size~$m$ and which is $(\mathbf{d}, \varepsilon_k, \varepsilon, r)$-regular. Choose any $V_i' \subseteq V_i$ with $\s{V_i'} \geq \alpha m$ for each $i \in [s]$. Then the induced subcomplex $\mathcal{G}[V_1'\cup \dots \cup V_s']$ is $(\mathbf{d}, \sqrt{\varepsilon_k}, \sqrt{\varepsilon}, r)$-regular. \end{lem} The following proposition shows that a refinement of a regular slice is also a regular slice. \begin{prop} \label{prop:split} Let $1/m \ll \varepsilon \ll 1/N, 1/t_0, 1/t_1, 1/k \leq 1/3$. Let $\mathcal{J}$ be a $(t_0,t_1, \varepsilon)$-equitable $(k-1)$-complex with density vector $(d_{k-1}, \dots, d_2)$ and clusters $V_1, \dots, V_t$ each of size~$m$. Let $V_{i,1}, \dots, V_{i,N}$ be an equipartition of~$V_i$ for each $i \in [t]$. Then there exists an $(Nt_0, Nt_1, \sqrt{\varepsilon})$-equitable $(k-1)$-complex~$\widetilde{\mathcal{J}}$ with density vector $(d_{k-1}, \dots, d_2)$, ground partition $\{V_{i,j} \colon i \in [t], j \in [N]\}$ and $\widetilde{\mathcal{J}}[V_1, \dots, V_t] = \mathcal{J}$. \end{prop} \begin{proof} We construct~$\widetilde{\mathcal{J}}$ from $\mathcal{J}$ as follows. Let the ground partition of~$\widetilde{\mathcal{J}}$ be $\{V_{i,j} \colon i \in [t], j \in [N]\}$. Starting with the edges of $\mathcal{J}$, we iteratively add additional edges at random as follows. For each $2 \leq i \leq k-1$, beginning with~$i = 2$, we add, independently with probability~$d_i$, each $i$-edge that contains two vertices lying in vertex classes with the same first index and that is supported on the $(i-1)$-edges. We now show that with high probability~$\widetilde{\mathcal{J}}$ is the desired $(k-1)$-complex. Note that it suffices to show that with high probability~$\widetilde{\mathcal{J}}$ is $(\mathbf{d},\sqrt{\varepsilon},\sqrt{\varepsilon},1)$-regular. Let $\widetilde{\mathcal{J}}^{\leq i} = \bigcup_{j \in [i]} \widetilde{\mathcal{J}}^{(j)}$ and $\mathbf{d}^{\leq i} = (d_i, \dots, d_2)$. For $i \in [k-1]$, let~$B_i$ be the event that $\widetilde{\mathcal{J}}^{\leq i}$ is not $(\mathbf{d}^{\leq i}, \sqrt{\varepsilon}, \sqrt{\varepsilon}, 1)$-regular. Note that $B_1 = \varnothing$. Consider $2 \leq i \leq k-1$ and $A \in \binom{[t] \times [N]}{i}$. Let~$B_{i,A}$ be the event that $\widetilde{\mathcal{J}}^{(i)}[V_A]$ is not $(d_i, \sqrt{\varepsilon}, 1)$-regular with respect to $\widetilde{\mathcal{J}}^{(i-1)}[V_A]$.
\begin{claim} \label{claim:probability_bound} For $i \in [k-1]$ and $A \in \binom{[t] \times [N]}{i}$, we have $\mathbb{P}\left[B_{i,A} \mid \overline{B_{i-1}} \,\right] = e^{-\Omega(m^i)}$ as $m \rightarrow \infty$. \end{claim} \begin{proofclaim} Assume $\overline{B_{i-1}}$ holds. Let $A = \{(r_j,s_j)\colon j \in [i]\}$. Define $\widetilde{A} = \{r_j \colon j \in [i]\}$. If the~$r_j$ are distinct, then the claim holds by \cref{regrestriction} with $\mathcal{G} = \mathcal{J}[V_{\widetilde{A}}]$ and $\alpha = 1/N$. If not all the~$r_j$ are distinct, then $\s{K_i(\widetilde{\mathcal{J}}^{(i-1)}[V_A])} \geq \frac{1}{2}\br{\prod_{j=2}^{i-1}d_j^{\binom{i}{j}}}(m/N)^i$, by \cref{fact}. Thus for each subgraph~$Q$ of $\widetilde{\mathcal{J}}^{(i-1)}[V_A]$ such that $\s{K_i(Q)} > \sqrt{\varepsilon}\s{K_i(\widetilde{\mathcal{J}}^{(i-1)}[V_A])}$, a Chernoff bound implies that \begin{align*} &\mathbb{P}\left.\left[d(\widetilde{\mathcal{J}}^{(i)}[V_A] \mid Q) \neq d_i \pm \sqrt{\varepsilon} \right\vert \overline{B_{i-1}} \,\right] \\ = &\mathbb{P}\left.\left[ \s{\s{\widetilde{\mathcal{J}}^{(i)}[V_A] \cap K_i(Q)} - d_i\s{K_i(Q)}} > \frac{\sqrt{\varepsilon}}{d_i} d_i \s{K_i(Q)} \right\vert \overline{B_{i-1}} \,\right] \\ \leq &2\exp\br{-\frac{1}{3}\br{\frac{\sqrt{\varepsilon}}{d_i}}^2d_i \s{K_i(Q)}} \leq 2\exp\br{-\frac{1}{3}\frac{\varepsilon^{3/2}}{d_i}\s{K_i(\widetilde{\mathcal{J}}^{(i-1)}[V_A])}} \\ \leq &2\exp\br{-\frac{1}{6}\frac{\varepsilon^{3/2}}{d_i}\br{\prod_{j=2}^{i-1}d_j^{\binom{i}{j}}}\br{\frac{m}{N}}^i} \leq e^{-\Omega(m^i)}. \end{align*} Since there are at most $2^{(im)^{i-1}}$ choices for~$Q$, the claim follows by a union bound. \end{proofclaim} Note that if~$\widetilde{\mathcal{J}}$ is not $(\mathbf{d}, \sqrt{\varepsilon},\sqrt{\varepsilon},1)$-regular, then there exists some $i \in [k-1]$ and $A \in \binom{[t] \times [N]}{i}$ such that~$B_{i,A}$ holds. Further, by choosing~$i$ minimal we can ensure that $\overline{B_{i-1}}$ holds. Thus, by a union bound and \cref{claim:probability_bound}, we have \begin{align*} \mathbb{P}\left[\widetilde{\mathcal{J}} \text{ is not $(\mathbf{d}, \sqrt{\varepsilon},\sqrt{\varepsilon},1)$-regular}\right] \leq &\sum_{i=1}^{k-1}\sum_{A \in \binom{[t] \times [N]}{i}} \mathbb{P}\left[B_{i,A} \cap \overline{B_{i-1}} \, \right] \\ \leq &\sum_{i=1}^{k-1}\sum_{A \in \binom{[t] \times [N]}{i}} \mathbb{P}\left[B_{i,A}\mid \overline{B_{i-1}} \,\right] = o(1). \end{align*} \end{proof} The following lemma is a strengthening of \cref{lem:cycle}. We believe the constant~$\beta$ and the corresponding condition could be removed if one were to go through the proof of \cref{lem:cycle} to prove a stronger result. \begin{lem} \label{lem:main} Let $1/n \ll 1/r, \varepsilon \ll \varepsilon_k, d_{k-1}, \dots, d_2$ and $\varepsilon_k \ll \varepsilon' \ll \psi, d_k, \beta, 1/k \leq 1/3$ and $1/n \ll 1/t$ such that~$t$ divides~$n$ and $1/d_i \in \mathbb{N}$ for all $2 \leq i \leq k-1$. Let~$G$ be a $k$-graph on~$n$ vertices and $\mathcal{J}$ be a $(\cdot, \cdot,\varepsilon, \varepsilon_k, r)$-regular slice for~$G$. Further, let $\mathcal{J}$ have~$t$ clusters $V_1, \dots, V_t$ all of size~$n/t$ and density vector $\mathbf{d} = (d_{k-1}, \dots, d_2)$. Suppose that the reduced graph $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$ contains a tightly connected fractional matching~$\varphi$ with weight~$\mu$. Assume that all edges with non-zero weight have weight at least~$\beta$.
For each $i \in [t]$, let $W_i \subseteq V_i$ be such that $\s{W_i} \geq ((1-3\varepsilon')\varphi(V_i) + \varepsilon')n/t$. Then $G\left[\bigcup_{i \in [t]}W_i\right]$ contains a cycle of length~$\ell$ for each $\ell \leq (1-\psi)k\mu n/t$ that is divisible by~$k$. \end{lem} We first explain the main ideas of the proof. We would like to find a regular slice for $G' = G[\bigcup_{i \in [t]} W_i]$ to then apply \cref{lem:cycle}. The issue is that not all vertex classes in $G'$ have the same size. To get around this we take a refinement of the original partition and use \cref{prop:split} to find a new regular slice with that ground partition. The reduced graph for this new regular slice will be a blow up of the original reduced graph. We can find a corresponding tightly connected matching in this new reduced graph. Then we simply apply \cref{lem:cycle}. \begin{proof}[Proof of \cref{lem:main}] Let~$m= n/t$ and $\widetilde{m} = \lfloor \varepsilon'm/2 \rfloor$. For each $i \in [t]$, let $\widetilde{V}_i \subseteq V_i$ such that $\widetilde{m} \mid \s{\widetilde{V}_i}$ and $\s{V_i \setminus \widetilde{V}_i} \leq \varepsilon'm/2$. By \cref{regrestriction}, $\mathcal{J}[\widetilde{V}_1, \dots, \widetilde{V}_t]$ is $(\cdot, \cdot, \sqrt{\varepsilon})$-equitable with density vector $(d_{k-1}, \dots, d_2)$. Let $N = \lfloor m/ \widetilde{m}\rfloor$ and, for each $i \in [t]$, let $N_i = \lfloor ((1- 3\varepsilon')\varphi(V_i) + \varepsilon')N\rfloor \leq \lfloor \s{W_i}/\widetilde{m} \rfloor$. For each $i \in [t]$, let $V_{i,1}, \dots, V_{i,N}$ be an equipartition of $\widetilde{V}_i$ such that $V_{i,1}, \dots, V_{i,N_i} \subseteq W_i$. Let $\widetilde{W} = \{V_{i,j} \colon i \in [t], j \in [N_i]\}$ and $\widetilde{t} = \s{\widetilde{W}}$. By \cref{prop:split}, there exists a $(\cdot, \cdot, \varepsilon^{1/4})$-equitable $(k-1)$-complex $\mathcal{J}^*$ with density vector $(d_{k-1}, \dots, d_2)$ and ground partition $\{V_{i,j} \colon i \in [t], j \in [N]\}$ such that $\mathcal{J}[\widetilde{V}_1, \dots, \widetilde{V}_t] = \mathcal{J}^*[\widetilde{V}_1, \dots, \widetilde{V}_t]$. Let $\widetilde{\mathcal{J}} = \mathcal{J}_{\widetilde{W}}^*$, that is $\widetilde{\mathcal{J}}$ is the $(k-1)$-complex contained in $\mathcal{J}^*$ induced by the vertex classes in $\widetilde{W}$. Let $\widetilde{G}$ be the subgraph of $G[\bigcup \widetilde{W}]$ obtained by removing all edges contained in $k$-tuples of density less than~$d_k$ and in irregular $k$-tuples. We show that~$\widetilde{\mathcal{J}}$ is a regular slice for $\widetilde{G}$. Let~$X$ be a set of~$k$ clusters of~$\widetilde{\mathcal{J}}$. If the~$k$ clusters in~$X$ are all contained in distinct clusters of $\mathcal{J}$ that form a regular $k$-tuple of density at least~$d_k$, then let~$Y$ denote the $k$-set of these clusters. Note that $(G \cup \mathcal{J})[Y]$ is $((d, d_{k-1},\dots,d_2), \varepsilon_k, \varepsilon, r)$-regular, for some $d \geq d_k -\varepsilon_k$, and thus, by \cref{regrestriction}, $(\widetilde{G} \cup \widetilde{\mathcal{J}})[X]$ is $((d,d_{k-1}, \dots, d_2), \sqrt{\varepsilon_k}, \sqrt{\varepsilon}, r)$-regular. Hence $\widetilde{G}$ is $(d,\sqrt{\varepsilon_k},r)$-regular with respect to $(\widetilde{\mathcal{J}}_{X^<})^{(k-1)}$. Note that, for all other $k$-sets of clusters~$X$, the $k$-partite subgraph of $\widetilde{G}$ induced by the clusters in~$X$ is empty. For these $k$-sets of clusters, $\widetilde{G}$ is $(0,\sqrt{\varepsilon_k},r)$-regular with respect to the polyad $(\widetilde{\mathcal{J}}_{X^<})^{(k-1)}$. 
Thus~$\widetilde{\mathcal{J}}$ is a $(\cdot, \cdot, \varepsilon^{1/4}, \sqrt{\varepsilon_k}, r)$-regular slice for $\widetilde{G}$. Note that $\widetilde{\mathcal{R}} = \mathcal{R}_{d_k - 2\sqrt{\varepsilon_k}}^{\widetilde{\mathcal{J}}}(\widetilde{G})$ is a blow-up of $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$. Consider the tightly connected fractional matching~$\varphi$ on $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$ with weight~$\mu$. We construct a tightly connected matching on~$\widetilde{\mathcal{R}}$ as follows. For each $e \in \mathcal{R}_{d_k}^{\mathcal{J}}(G)$, we will pick a matching~$M_e$ in $\widetilde{\mathcal{R}}$ of size $\widetilde{\varphi}(e) = \lfloor (1-3\varepsilon')\varphi(e)N \rfloor$. Note that, for each $i \in [t]$, \begin{align} \label{eq:matching} \sum_{e \ni V_i} \widetilde{\varphi}(e) \leq \lfloor ((1-3\varepsilon')\varphi(V_i) + \varepsilon')N\rfloor = N_i. \end{align} For each vertex~$V_i$ in $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$ and each edge $e \in \mathcal{R}_{d_k}^{\mathcal{J}}(G)$ that contains~$V_i$, we choose disjoint sets $I_{i,e} \subseteq [N_i]$ such that $\s{I_{i,e}} = \widetilde{\varphi}(e)$. This is possible by \cref{eq:matching}. Recall that~$\widetilde{\mathcal{R}}$ is a blow-up of $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$. For each edge $e =\{V_{i_1}, V_{i_2}, \dots, V_{i_k}\} \in \mathcal{R}_{d_k}^{\mathcal{J}}(G)$, the subgraph~$\widetilde{\mathcal{R}}_e$ of~$\widetilde{\mathcal{R}}$ induced by the set of edges $\{\{V_{i_1,j_1}, \dots, V_{i_k,j_k}\} \colon j_1 \in I_{i_1,e}, \dots, j_k \in I_{i_k,e}\}$ is a balanced complete $k$-partite $k$-graph. Pick a perfect matching~$M_e$ in~$\widetilde{\mathcal{R}}_e$. Let $M= \bigcup_{e \in \mathcal{R}_{d_k}^{\mathcal{J}}(G)} M_e$. Note that~$M$ is a matching of size \begin{align*} \sum_{e \in \mathcal{R}_{d_k}^{\mathcal{J}}(G)}\widetilde{\varphi}(e) &= \sum_{e \in \mathcal{R}_{d_k}^{\mathcal{J}}(G)} \lfloor (1- 3\varepsilon')\varphi(e)N\rfloor \geq \sum_{\substack{e \in \mathcal{R}_{d_k}^{\mathcal{J}}(G) \\ \varphi(e) > 0}}((1- 3\varepsilon')\varphi(e)N-1) \\ &\geq (1-3\varepsilon')\mu N- \mu/\beta = \br{1-3\varepsilon'- \frac{1}{N\beta}}\mu N \\ &\geq (1-3\varepsilon'-\varepsilon'/\beta)\mu N \geq (1- \sqrt{\varepsilon'})\mu N \geq (1-2\sqrt{\varepsilon'})\mu \frac{m}{\widetilde{m}}. \end{align*} In the second inequality above we used the fact that, since $\varphi$ is a fractional matching with weight $\mu$ and all edges have weight at least $\beta$, there are at most $\mu / \beta$ edges of positive weight. Since~$\widetilde{\mathcal{R}}$ is a blow-up of $\mathcal{R}_{d_k}^{\mathcal{J}}(G)$,~$M$ is tightly connected. We conclude by applying \cref{lem:cycle} with $k,r,n,\widetilde{t},\psi^2, \varepsilon^{1/4}, \sqrt{\varepsilon_k}, d_k-2\sqrt{\varepsilon_k},d_{k-1}, \dots, d_2, \widetilde{\mathcal{J}},\widetilde{G}, \ell$ playing the roles of $k,r,n_0,t,\psi,\varepsilon,\varepsilon_k,d_k,\dots,d_2, \mathcal{J}, G, \ell$. \end{proof} For the next result, we need the following definition. \begin{defn} Let $\mu_k^s(\beta,\varepsilon, n)$ be the largest $\mu$ such that every $2$-edge-coloured $(1-\varepsilon, \varepsilon)$-dense $k$-graph on $n$ vertices contains a fractional matching with weight $\mu$ such that all edges with non-zero weight have weight at least $\beta$ and lie in $s$ monochromatic tight components. Let $\mu_k^s(\beta) = \liminf_{\varepsilon \to 0} \liminf_{n \to \infty} \mu_k^s(\beta,\varepsilon,n)/n$.
Similarly, let $\mu_k^*(\beta,\varepsilon,n)$ be the largest $\mu$ such that every $2$-edge-coloured $(1 - \varepsilon, \varepsilon)$-dense $k$-graph on $n$ vertices contains a fractional matching with weight $\mu$ such that all edges with non-zero weight have weight at least $\beta$ and lie in one red and one blue tight component. Let $\mu_k^*(\beta) = \liminf_{\varepsilon \to 0} \liminf_{n \to \infty} \mu_k^*(\beta,\varepsilon,n)/n$. \end{defn} The following is the crucial result that reduces finding cycles in the original graph to finding tightly connected matchings in the reduced graph. \begin{cor} \label{cor:matchings_to_cycles} Let $1/n \ll \eta, \beta, 1/k, 1/s$ with $k \ge 3$. Let $K$ be a $2$-edge-coloured complete $k$-graph on $n$ vertices. Then $K$ contains $s$ vertex-disjoint monochromatic tight cycles covering at least $(\mu_k^s(\beta) - \eta)k n$ vertices. Moreover, $K$ contains two vertex-disjoint monochromatic tight cycles of distinct colours covering at least $(\mu_k^*(\beta) - \eta)k n$ vertices. \end{cor} \begin{proof} We prove the first statement. The second statement can be proved similarly. Without loss of generality assume that $\eta \leq 1/3$. Let $d_k = 1/2$ and $1/t_0 \ll \varepsilon_k \ll \varepsilon' \ll \varepsilon \ll \eta, \beta, 1/k, 1/s$. Note that $\mu_k^s(\beta, \varepsilon, t) \ge (\mu_k^s(\beta) - \eta^2)t$ for all $t \ge t_0$. We choose functions~$\widetilde{\varepsilon}(\cdot)$ and~$r(\cdot)$ where~$\widetilde{\varepsilon}(\cdot)$ approaches zero sufficiently quickly and~$r(\cdot)$ increases sufficiently quickly such that for any integer $t^* \geq t_0$ and $d_2, \dots, d_{k-1} \geq 1/t^*$ we may apply \cref{lem:main} with~$\widetilde{\varepsilon}(t^*)$ and~$r(t^*)$ playing the roles of~$\varepsilon$ and~$r$, respectively. We apply \cref{lem:regular slice} to obtain~$n_0$ and~$t_1$. Let $\widetilde{\varepsilon} = \widetilde{\varepsilon}(t_1)$ and $r=r(t_1)$. Let $n_1 \geq n_0$ be large enough such that for all $n \geq n_1$ and $d_2, \dots, d_{k-1} \geq 1/t_1$ we may apply \cref{lem:main}. Let $n_2 = n_1 + t_1!$. We show that the corollary holds for all $n \geq n_2$. Let~$K$ be a $2$-edge-coloured complete $k$-graph on~$n$ vertices. Let $\widetilde{n} \leq n$ be the largest integer such that~$t_1!$ divides~$\widetilde{n}$. Let~$\widetilde{K}$ be a complete subgraph of~$K$ on~$\widetilde{n}$ vertices. Note that $\widetilde{n} \geq n_1$. By \cref{lem:regular slice}, there exists a $(t_0, t_1, \widetilde{\varepsilon}, \varepsilon_k, r)$-regular slice~$\mathcal{J}$ for both $\widetilde{K}^{\red}$ and $\widetilde{K}^{\blue}$. Let $t$ be the number of clusters of $\mathcal{J}$ and let $(d_{k-1}, \dots ,d_2)$ be the density vector of $\mathcal{J}$. Let $\widetilde{H} = \mathcal{R}_{d_k}^\mathcal{J}(\widetilde{K}^{\red}) \cup \mathcal{R}_{d_k}^\mathcal{J}(\widetilde{K}^{\blue})$ be a $2$-edge-coloured $k$-graph such that $\mathcal{R}_{d_k}^\mathcal{J}(\widetilde{K}^{\red}) \setminus \mathcal{R}_{d_k}^\mathcal{J}(\widetilde{K}^{\blue}) \subseteq \widetilde{H}^{\red}$ and $\mathcal{R}_{d_k}^\mathcal{J}(\widetilde{K}^{\blue}) \setminus \mathcal{R}_{d_k}^\mathcal{J}(\widetilde{K}^{\red}) \subseteq \widetilde{H}^{\blue}$. By \cref{lem:reduced graph edge count}, we have $\s{\widetilde{H}} \geq (1-2\varepsilon_k)\binom{t}{k}$. By \cref{prop:dense}, there exists a $(1-(2\varepsilon_k)^{1/(4k^2+1)}, (2\varepsilon_k)^{1/(4k^2+1)})$-dense subgraph $H \subseteq \widetilde{H}$ with $V(H) = V(\widetilde{H})$.
Since $\varepsilon_k \ll \varepsilon$, $H$ is $(1- \varepsilon, \varepsilon)$-dense. Let $\varphi$ be a fractional matching in $H$ of weight $\mu = \mu_k^s(\beta,\varepsilon,t) \geq (\mu_k^s(\beta) - 2 \eta^2)t$ such that all edges with non-zero weight have weight at least $\beta$ and lie in $s$ monochromatic tight components $K_1, \dots, K_s$ of $H$. For each $j \in [s]$, we define a fractional matching $\varphi_j$ in $H$ by setting $\varphi_j(e) = \varphi(e)$ if $e \in K_j$ and $\varphi_j(e) = 0$ otherwise. For each $j \in [s]$, let $\mu_j$ be the weight of $\varphi_j$. It follows that $\sum_{j \in [s]} \mu_j = \mu$. Let $V_1, \dots, V_t$ be the clusters of $\mathcal{J}$. For each $i \in [t]$ and $j \in [s]$, we define \[ w_{i,j} = \max\{\sum_{\substack{e \in H \\ V_i \in e}} \varphi_j(e) - s \varepsilon', \varepsilon'\}. \] For each $i \in [t]$, let~$V_{i,1}, \dots, V_{i,s}$ be disjoint subsets of~$V_i$ such that $\s{V_{i,j}} = \lceil w_{i,j}n/t \rceil$. By \cref{lem:main}, there exist tight cycles $C_1, \dots, C_s$ in~$K$ such that, for all $j \in [s]$, $\s{C_j} = (1-\eta^2) \mu_j k \widetilde{n}/t$, $C_j \subseteq K\left[\bigcup_{i \in [t]}V_{i,j}\right]$ and $C_j$ has the same colour as $K_j$. Hence~$C_1, \dots, C_s$ are vertex-disjoint and together cover \begin{align*} (1- \eta^2) \mu k \widetilde{n}/t \geq (1-\eta^2)(\mu_k^s(\beta) - \eta^2) k \widetilde{n} \geq (\mu_k^s(\beta) - \eta) k n \end{align*} vertices of~$K$. \end{proof} \section{Blueprints} \label{section:blueprints} Let~$H$ be a $2$-edge-coloured $k$-graph. We define what we call a \emph{blueprint for~$H$}, which is an auxiliary graph that can be used as a guide when finding connected matchings in~$H$. A form of the notion of blueprint for $k = 3$ already appeared in~\cite{Haxell2009}. \begin{defn} \label{defn:blueprint} Let~$\varepsilon >0$, $k \geq 3$ and let~$H$ be a $2$-edge-coloured $k$-graph on~$n$ vertices. We say that a $2$-edge-coloured $(k-2)$-graph~$G$ with $V(G) \subseteq V(H)$ is an \emph{$\varepsilon$-blueprint for~$H$}, if \begin{enumerate}[label = {$(\text{BP}\arabic*)$}, leftmargin= \widthof{BP1000}] \item \label{BP1} for every edge~$e \in G$, there exists a monochromatic tight component~$H(e)$ in~$H$ such that~$H(e)$ has the same colour as~$e$ and $d_{\partial H(e)}(e) \geq (1-\varepsilon)n$ and \item \label{BP2} for $e, e' \in G$ of the same colour with $\s{e \cap e'} = k-3$, we have $H(e) = H(e')$. \end{enumerate} We say that \emph{$e$ induces $H(e)$} and write~$R(e)$ or~$B(e)$ instead of~$H(e)$ if~$e$ is red or blue, respectively. We simply say that~$G$ is a \emph{blueprint}, when $H$ and $\varepsilon$ are clear from context. For $S \in \binom{V(H)}{k-3}$, all the red (blue) edges of a blueprint containing~$S$ induce the same red (blue) tight component, so we call that component the red (blue) tight component induced by~$S$. Note that any subgraph of a blueprint is also a blueprint. \end{defn} The main aim of this section is to prove the following lemma that establishes the existence of blueprints in $2$-edge-coloured $(1-\varepsilon, \alpha)$-dense graphs. \begin{lem} \label{lem:generalblueprint} Let $1/n \ll \varepsilon \leq \alpha \ll 1/k \leq 1/3$. Let~$H$ be a $2$-edge-coloured $(1-\varepsilon, \alpha)$-dense $k$-graph on~$n$ vertices. Then there exists a $3\sqrt{\varepsilon}$-blueprint~$G_*$ for~$H$ with $V(G_*) = V(H)$ and $\s{G_*} \geq (1-\alpha -24k\sqrt{\varepsilon})\binom{n}{k-2}$.
Moreover, if $k \geq 4$ and $\varepsilon \ll \alpha$, there exists a $(1-\alpha^{1/(4(k-2)^2+1)}, \alpha^{1/(4(k-2)^2+1)})$-dense spanning subgraph $G$ of $G_*$. \end{lem} We need a few simple preliminary results to prove \cref{lem:generalblueprint}. First we show that any $2$-edge-coloured $2$-graph with large minimum degree contains a large monochromatic connected subgraph. \begin{prop} \label{prop:dcomp} Let $0 < \beta \leq 1/6$ and let~$F$ be a $2$-edge-coloured $2$-graph with $\s{V(F)} \leq n$ and $\delta(F) \geq (1-\beta)n$. Then there exists a subgraph~$F'$ of~$F$ of order at least~$(1-\beta)n$ that contains a spanning monochromatic component and satisfies $\delta(F') \geq (1-2\beta)n$. \end{prop} \begin{proof} Let~$F'$ be a subgraph of~$F$ of maximum order that contains a spanning monochromatic component. Assume without loss of generality that~$F'$ contains a spanning red component. Let~$S = V(F')$ and $\overline{S} = V(F) \setminus V(F')$. Since $\delta(F) \geq (1-\beta)n$, we have that $\s{S} \geq (1-\beta)n/2$. Suppose, for a contradiction, that $\s{S} < (1-\beta)n$. Note that all edges between~$S$ and $\overline{S}$ are blue. If $\delta(F)-\s{S} + 1 > \s{\overline{S}}/2$, then each pair of vertices in~$S$ has a common neighbour in~$\overline{S}$ and so there is a blue component strictly containing~$S$, which contradicts the maximality of~$F'$. Therefore \[ \delta(F)-\s{S} + 1 \leq \s{\overline{S}}/2 = (\s{V(F)} - \s{S})/2 \leq (n - \s{S})/2.\] Hence \[\s{S} \geq 2 \delta(F) - n + 2 \geq 2(1-\beta)n -n + 2 = (1-2\beta)n+2.\] But now every pair of vertices in~$\overline{S}$ has a common neighbour in~$S$, since $\s{\overline{S}} \leq \s{V(F)} - \s{S} \leq 2 \beta n$ and so \[\delta(F) - \s{\overline{S}} + 1 \geq (1-\beta)n - 2\beta n + 1 = (1-3 \beta)n +1 > n/2.\] Thus $\overline{S} \cup N_F(\overline{S})$ is spanned by a blue component. But since \[ \s{\overline{S} \cup N_F(\overline{S})} \geq \delta(F) \geq (1-\beta)n, \] we have a contradiction. It is easy to see that $\delta(F') \geq (1-2\beta)n$. \end{proof} \begin{prop} \label{prop:mindeg} Let $1/n \ll \gamma \leq 1/9$. Let~$F$ be a $2$-graph with $\s{V(F)} \leq n$ and $\s{E(F)} \geq (1-\gamma)\binom{n}{2}$. Then there exists a subgraph of~$F$ with minimum degree at least $(1-3\sqrt{\gamma})n$. \end{prop} \begin{proof} Let $W = \{v \in V(F)\colon d(v) < (1- 2\sqrt{\gamma})n\}$. We have that \[(1-2\gamma)n^2 \leq 2 \s{E(F)} = \sum_{v \in V(F)}d(v) \leq n^2 - 2\sqrt{\gamma}n\s{W}.\] This implies that $\s{W} \leq \sqrt{\gamma}n$. Let~$F^* = F - W$. It follows that $\delta(F^*)\geq (1-2\sqrt{\gamma})n-\s{W} \geq (1-3\sqrt{\gamma})n. $ \end{proof} \begin{cor} \label{cor:Ecomp} Let $1/n \ll \varepsilon \leq 1/324$. Let~$F$ be a $2$-edge-coloured $2$-graph with $\s{V(F)} \leq n$ and $\s{E(F)} \geq (1-\varepsilon)\binom{n}{2}$. Then there exists a subgraph~$F'$ of~$F$ of order at least $(1-3\sqrt{\varepsilon})n$ that contains a spanning monochromatic component and satisfies $\delta(F') \geq (1-6\sqrt{\varepsilon})n$. \end{cor} \begin{proof} By \cref{prop:mindeg}, there exists a subgraph~$F^*$ of~$F$ with $\delta(F^*) \geq (1-3\sqrt{\varepsilon})n$. We conclude by applying \cref{prop:dcomp} with $F^*$ and $3\sqrt{\varepsilon}$ playing the roles of~$F$ and~$\beta$. \end{proof} \subsection{Proof of \texorpdfstring{\cref{lem:generalblueprint}}{the blueprint lemma}} Now we show that for any $(1-\varepsilon, \alpha)$-dense $2$-edge-coloured graph we can find a dense blueprint.
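Before giving the proof, it may help to spell out \cref{defn:blueprint} in the smallest case $k = 3$. Here a blueprint~$G$ is a $2$-edge-coloured $1$-graph, so each edge of~$G$ is a single vertex. Property~\ref{BP1} then states that each vertex $v \in G$ satisfies $d_{\partial H(v)}(v) \geq (1-\varepsilon)n$ for some monochromatic tight component~$H(v)$ of the same colour as~$v$, and, since any two distinct edges of~$G$ intersect in $k-3 = 0$ vertices, \ref{BP2} states that all red vertices of~$G$ induce the same red tight component and all blue vertices of~$G$ induce the same blue tight component. Exactly this case will reappear in \cref{lem:5-blueprint}, where we apply \cref{lem:generalblueprint} to a blueprint itself.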
\begin{proof}[Proof of \cref{lem:generalblueprint}] Let $F = \partial^2H$. Since~$H$ is $(1-\varepsilon, \alpha)$-dense, \[ E(F) = \left\{e \in \binom{V(H)}{k-2}\colon d_H(e)>0\right\} = \left\{e \in \binom{V(H)}{k-2}\colon d_H(e)\geq (1-\varepsilon)\binom{n}{2}\right\} \] and \begin{align} \label{eq:edges} \s{E(F)} \geq (1-\alpha)\binom{n}{k-2}. \end{align} We now colour each edge~$e$ of~$F$ as follows. Note that the link graph $H_e$ is a $2$-graph. We induce a $2$-edge-colouring on~$H_e$ by colouring the $2$-edge~$f \in H_e$ with the colour of the $k$-edge $e\cup f \in H$. By \cref{cor:Ecomp}, there exists a monochromatic component in~$H_e$ of order at least $(1-3\sqrt{\varepsilon})n$. Let~$K_e$ be such a component chosen arbitrarily. We colour the edge~$e$ according to the colour of~$K_e$. If~$e$ is red in~$F$, then we define $R(e) \subseteq H$ to be the red tight component containing all the edges~$e \cup f$ where $f \in K_e$. If~$e$ is blue in~$F$, then we define~$B(e)$ analogously. In the next claim we show that, for each $S \in \binom{V(H)}{k-3}$, almost all edges in~$F$ of the same colour containing~$S$ induce the same monochromatic tight component in~$H$. \begin{claim} For each $S \in \binom{V(H)}{k-3}$, there exist $\Gamma^{\red}(S) \subseteq N_F^{\red}(S)$ and $\Gamma^{\blue}(S) \subseteq N_F^{\blue}(S)$ with $\s{\Gamma^{\red}(S)} \geq \s{N_F^{\red}(S)} - 6 \sqrt{\varepsilon}n$ and $\s{\Gamma^{\blue}(S)} \geq \s{N_F^{\blue}(S)} - 6 \sqrt{\varepsilon}n$ such that, for all $y_1, y_2 \in \Gamma^{\red}(S)$, $R(S \cup y_1) = R(S \cup y_2)$ and, for all $y_1',y_2' \in \Gamma^{\blue}(S)$, $B(S \cup y_1') = B(S \cup y_2')$. \end{claim} \begin{proofclaim} We only prove the statement for~$N_F^{\red}(S)$ as the proof of the statement for~$N_F^{\blue}(S)$ is analogous. Assume $\s{N_F^{\red}(S)} > 6 \sqrt{\varepsilon}n$ (or else we simply set $\Gamma^{\red}(S) = \varnothing$). Let~$D$ be the directed graph with vertex set~$N_F^{\red}(S)$ and edge set \[ E(D) = \left\{y_1y_2 \colon y_1 \in V(K_{S \cup y_2})\right\}. \] Note that, for $y_1y_2 \in E(D)$, there exists an edge in $R(S \cup y_2)$ containing $S \cup y_1y_2$. So if~$y_1y_2$ is a double edge (that is, $y_1y_2, y_2y_1 \in E(D)$), then $R(S \cup y_1) = R(S \cup y_2)$. For $y \in N_F^{\red}(S)$, \[ d_D^-(y) \geq \s{N_F^{\red}(S) \cap V(K_{S \cup y})} \geq \s{N_F^{\red}(S)} -3\sqrt{\varepsilon}n, \] since $\s{V(K_{S\cup y})} \geq (1-3\sqrt{\varepsilon})n$. Hence the number of double edges in~$D$ is at least \[ \s{N_F^{\red}(S)}\br{\s{N_F^{\red}(S)}-3\sqrt{\varepsilon}n} - \frac{1}{2}\s{N_F^{\red}(S)}^2 = \frac{1}{2}\s{N_F^{\red}(S)}\br{\s{N_F^{\red}(S)}-6\sqrt{\varepsilon}n}. \] Thus there exists a vertex $y_0 \in N_F^{\red}(S)$ that is incident to at least $\s{N_F^{\red}(S)} -6\sqrt{\varepsilon}n$ double edges. Let $\Gamma^{\red}(S) = \{y_0\} \cup \{y \in N_F^{\red}(S)\colon yy_0, y_0y \in E(D)\}$. Note that $\s{\Gamma^{\red}(S)} \geq \s{N_F^{\red}(S)} - 6\sqrt{\varepsilon}n$ and $R(S \cup y) = R(S \cup y_0)$ for all $y \in \Gamma^{\red}(S)$. \end{proofclaim} Consider the multi-$(k-2)$-graph~$D^*$ with \[ E(D^*) = \left\{S \cup y \colon S \in \binom{V(H)}{k-3}, y \in \Gamma^{\red}(S)\cup \Gamma^{\blue}(S) \right\}. \] Note that \begin{align*} \s{E(D^*)} &= \sum_{S \in \binom{V(H)}{k-3}} \s{\Gamma^{\red}(S) \cup \Gamma^{\blue}(S)} \geq \sum_{S \in \binom{V(H)}{k-3}} (d_F(S) - 12 \sqrt{\varepsilon}n) \\ &\geq (k-2)\s{F}-24k\sqrt{\varepsilon} \binom{n}{k-2}. \end{align*} Note that every edge of~$D^*$ is an edge of~$F$ and has multiplicity at most~$k-2$ in~$D^*$.
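Writing~$m$ for the number of edges of~$F$ that have multiplicity exactly~$k-2$ in~$D^*$, we thus have \[ (k-2)\s{F} - 24k\sqrt{\varepsilon}\binom{n}{k-2} \leq \s{E(D^*)} \leq (k-2)m + (k-3)\br{\s{F} - m} = m + (k-3)\s{F}. \]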
So at least $\s{F} -24k\sqrt{\varepsilon}\binom{n}{k-2}$ edges $e \in \binom{V(H)}{k-2}$ have multiplicity~$k-2$ in~$D^*$. Let~$G_*$ be the $(k-2)$-graph on~$V(H)$ such that $e \in G_*$ if and only if~$e$ has multiplicity~$k-2$ in~$D^*$. So, by (\ref{eq:edges}), $\s{G_*} \geq \s{F} -24k\sqrt{\varepsilon}\binom{n}{k-2} \geq (1- \alpha -24k\sqrt{\varepsilon})\binom{n}{k-2}$. We now show that~$G_*$ is a $3\sqrt{\varepsilon}$-blueprint for~$H$. Consider any $e, e' \in G_*^{\red}$ with $\s{e \cap e'} = k-3$. Let $S = e\cap e'$, $y = e'\setminus S$ and $y' = e \setminus S$. Since $e,e' \in G_*^{\red}$, we have $y,y' \in \Gamma^{\red}(S)$ and so $R(e)=R(S\cup y) =R(S\cup y') = R(e')$. Further, for~$e \in G_*^{\red}$, we have $d_{\partial R(e)}(e) \geq \s{V(K_e)} \geq (1-3\sqrt{\varepsilon})n$. Analogous statements hold for edges of~$G_*^{\blue}$. If $k \geq 4$ and $\varepsilon \ll \alpha$, then $\s{G_*} \geq (1-4\alpha)\binom{n}{k-2}$ and thus by \cref{prop:dense} there exists a subgraph $G \subseteq G_*$ such that~$G$ is $(1-\alpha^{1/(4(k-2)^2+1)}, \alpha^{1/(4(k-2)^2+1)})$-dense and $V(G) = V(G_*) = V(H)$. \end{proof} \subsection{Some lemmas about blueprints} Let $H$ be a $k$-graph and $G$ be a blueprint for $H$. We write $H(G)$ for $\bigcup_{e \in G} H(e)$. We write $G^+$ for the subgraph of $H(G)$ with edge set \[ E(G^+) = \{e \in H(G) \colon f \subseteq e \text{ for some } f \in G\}, \] that is, the subgraph of $H(G)$ obtained by deleting all edges that do not contain an edge of $G$. Note that $G^+$ is a subgraph of $H$, not of $G$. For a red tight component~$R_*$ and a blue tight component~$B_*$ in $H$, we denote by $R_*^{k-2}$ and $B_*^{k-2}$ the edges of~$G$ that induce~$R_*$ and~$B_*$, respectively. We prove some lemmas that we will use several times later on. Roughly speaking, the following lemma says that if~$S$ is a set of~$k-4$ vertices of~$H$ contained in many edges of~both $R_*^{k-2}$ and $B_*^{k-2}$, then~$S$ is contained in an edge of~$R_*$ or~$B_*$. \begin{lem} \label{lem:vertexdeg} Let $1/n \ll \varepsilon \ll \alpha \ll 1.$ Let~$H$ be a $2$-edge-coloured $(1-\varepsilon, \alpha)$-dense $k$-graph on~$n$ vertices and~$G$ a $3\sqrt{\varepsilon}$-blueprint for~$H$. Let~$R_*$ and~$B_*$ be a red and a blue tight component of~$H$, respectively. Let $U \subseteq V(G)$ and $S \in \binom{U}{k-4}$ such that \[ d_{R_*^{k-2}}(S,U), d_{B_*^{k-2}}(S,U) \geq \varepsilon^{1/4}n^2. \] Then there exist $x,x',y,y' \in U$ such that $S \cup xx' \in R_*^{k-2}$, $S \cup yy' \in B_*^{k-2}$, $S \cup xx'y \in \partial R_*$, $S \cup yy'x \in \partial B_*$ and $S \cup xx'yy' \in H$. In particular, $(R_*^{k-2})^+[U] \cup (B_*^{k-2})^+[U] \neq \varnothing$. \end{lem} \begin{proof} Let $X_{R_*} = \{x \in U \colon d_{R_*^{k-2}}(S \cup x, U) \geq \varepsilon^{1/2}n\}$ and $X_{B_*} = \{x \in U \colon d_{B_*^{k-2}}(S \cup x, U) \geq \varepsilon^{1/2} n\}$. Note that \[ \varepsilon^{1/4} n^2 \leq d_{R_*^{k-2}}(S, U) = \frac{1}{2} \sum_{x \in U} d_{R_*^{k-2}}(S \cup x, U) \leq n \s{X_{R_*}} + \varepsilon^{1/2}n^2. \] Thus $\s{X_{R_*}} \geq (\varepsilon^{1/4} - \varepsilon^{1/2}) n \geq \frac{1}{2}\varepsilon^{1/4} n$. Similarly, $\s{X_{B_*}} \geq \frac{1}{2} \varepsilon^{1/4}n$. For each $x \in X_{R_*}$, let \begin{align*} Y_x &= \{y \in X_{B_*} \colon S \cup yy' \in B_*^{k-2} \text{ and } S \cup xyy' \in \partial B_* \text{ for some } y' \in U \} \\ &= \bigcup_{y' \in U} N_{B_*^{k-2}}(S \cup y') \cap N_{\partial B_*}(S \cup xy'). 
\end{align*} For each $y \in X_{B_*}$, there exists $y' \in U$ with $S \cup yy' \in B_*^{k-2}$. By \ref{BP1}, $d_{\partial B_*}(S \cup yy', X_{R_*}) \geq \s{X_{R_*}} - 3\sqrt{\varepsilon}n$. Hence each $y \in X_{B_*}$ is contained in at least $\s{X_{R_*}} - 3 \sqrt{\varepsilon}n$ of the sets~$Y_x$. By averaging, there exists an $x \in X_{R_*}$ such that \[ \s{Y_x} \geq \frac{(\s{X_{R_*}}- 3\sqrt{\varepsilon}n)\s{X_{B_*}}}{2 \s{X_{R_*}}} \geq \frac{1}{4} \s{X_{B_*}} \geq \frac{1}{8} \varepsilon^{1/4} n. \] Fix such an $x \in X_{R_*}$. For each $y \in Y_x$, choose a vertex $y' \in U$ such that $S \cup yy' \in B_*^{k-2}$ and $S \cup xyy' \in \partial B_*$. Let $X = N_{R_*^{k-2}}(S \cup x, U)$, so $\s{X} \geq \varepsilon^{1/2}n$, since $x \in X_{R_*}$. For each $y \in Y_x$, since $S \cup xyy' \in \partial B_*$ and $H$ is $(1-\varepsilon, \alpha)$-dense, there are at least $\s{X} - \varepsilon n$ vertices $x' \in X$ such that $S \cup xx'yy' \in H$. Thus, by averaging, there exists a vertex $x' \in X$ and a set $\widetilde{Y}_x \subseteq Y_x$ with \[ \s{\widetilde{Y}_x} \geq \frac{(\s{X} - \varepsilon n) \s{Y_x}}{2 \s{X}} \geq \frac{1}{4} \s{Y_x} \geq \frac{1}{32}\varepsilon^{1/4} n \] such that $S \cup xx'yy' \in H$ for all $y \in \widetilde{Y}_x$. Fix such an $x' \in X$. Since $S \cup xx' \in R_*^{k-2}$, we have that \[ \s{N_{\partial R_*}(S \cup xx') \cap \widetilde{Y}_x} \geq \s{\widetilde{Y}_x} - 3\sqrt{\varepsilon}n \geq \br{\frac{1}{32}\varepsilon^{1/4} -3\sqrt{\varepsilon}}n >0. \] Choose $y \in N_{\partial R_*} (S \cup xx') \cap \widetilde{Y}_x$. We have $S \cup xx' \in R_*^{k-2}$, $S \cup yy' \in B_*^{k-2}$, $S \cup xx'y \in \partial R_*$, $S \cup xyy' \in \partial B_*$ and $S \cup xx'yy' \in H$ as required. \end{proof} The following lemma shows that if we have a vertex set~$S \in \binom{V(G)}{k-3}$ such that $d_G^{\red}(S)$ and $d_G^{\blue}(S)$ are both large, then~$S$ is contained in many sets in $\partial R \cap \partial B$, where~$R$ and~$B$ are the red and blue tight components induced by the red and blue edges incident to~$S$, respectively. \begin{lem} \label{lem:shadow} Let $1/n \ll \varepsilon \ll 1$, $k \ge 3$ and $\delta > 5 \sqrt{\varepsilon}$. Let~$H$ be a $2$-edge-coloured $k$-graph on~$n$ vertices and~$G$ a $3\sqrt{\varepsilon}$-blueprint for~$H$. Let $T \in \binom{V(H)}{k-3}$. Let $S^{\blue} \subseteq N_G^{\blue}(T)$ and $S^{\red} \subseteq N_G^{\red}(T)$ be such that $|S^{\blue}|, |S^{\red}| \ge \delta n $. Then there exists a vertex $y \in S^{\blue}$ such that, for \begin{align*} \Gamma^{\red}_y = \{x \in S^{\red} \colon T \cup xy \in \partial R(T \cup x) \cap \partial B(T \cup y)\}, \end{align*} we have $\s{\Gamma^{\red}_y} \ge ( \delta - 6 \sqrt{\varepsilon})n $. Moreover, if $\delta \geq \varepsilon^{1/9}$, then $\s{\Gamma^{\red}_y}\geq (1-\varepsilon^{1/4}) \s{S^{\red}} $. The same statements hold when the colours are reversed. \end{lem} \begin{proof} Let $m_{\blue} = |S^{\blue}|$ and $m_{\red} = |S^{\red}|$. If $\delta < \varepsilon^{1/9}$, then we may assume that $m_{\blue} = m_{\red} = \lceil \delta n \rceil$ by deleting vertices in $S^{\blue}$ and $S^{\red}$ if necessary. Let~$D$ be the bipartite directed graph with vertex classes~$S^{\blue}$ and~$S^{\red}$ such that, for each $y \in S^{\blue}$ and $x \in S^{\red}$, we have $N_D^+(y) = N_{\partial B(T \cup y)}(T \cup y) \cap S^{\red}$ and $N_D^+(x) = N_{\partial R(T \cup x)}(T \cup x) \cap S^{\blue}$.
Since~$G$ is a $3\sqrt{\varepsilon}$-blueprint for~$H$, we have that \begin{align*} \s{ E(D) } \geq m_{\blue}(m_{\red} - 3\sqrt{\varepsilon}n) + m_{\red}(m_{\blue} - 3\sqrt{\varepsilon}n) = 2m_{\blue}m_{\red} - 3\sqrt{\varepsilon}n(m_{\blue} + m_{\red}). \end{align*} Thus the number of double edges in~$D$ is at least $m_{\blue}m_{\red} -3\sqrt{\varepsilon}n(m_{\blue}+m_{\red})$. For each $y \in S^{\blue}$, let $\Gamma_y = \{x \in S^{\red} \colon xy, yx \in D\}$. Hence there is some vertex $y \in S^{\blue}$ such that \begin{align*} \s{\Gamma_y} \geq m_{\red}-3\sqrt{\varepsilon}n\br{\frac{m_{\blue} + m_{\red}}{m_{\blue}}} \geq \begin{cases} (\delta - 6 \sqrt{\varepsilon})n, &\text{if } \delta < \varepsilon^{1/9}; \\ m_{\red}(1-\varepsilon^{1/4}), &\text{otherwise.} \end{cases} \end{align*} Note that if $xy, yx \in D$ with $x \in S^{\red}$ and $y \in S^{\blue}$, then $T \cup xy \in \partial R(T \cup x) \cap \partial B(T \cup y)$. Hence $\Gamma_y \subseteq \Gamma_y^{\red}$ and thus the lemma follows. \end{proof} Roughly speaking, in the next lemma we consider the following situation. Let $R$ be a red tight component in~$H$, $G$ be a blueprint for~$H$ and $R_G \subseteq G^{\red}$ be such that $H(R_G) \subseteq R$. We pick a maximal matching in~$R_G^+$ and let $U$ be the remaining vertices of~$H$ not in this matching, so $R_G^+[U]$ is empty. Then the lemma implies that the edges of~$G[U]$ induce fewer monochromatic tight components in~$H$ than one would expect. In particular, if $k = 4$, then almost all edges in $G[U]$ induce one of only two monochromatic tight components in~$H$. \begin{lem} \label{lem:reducing_components} Let $k \ge 4$ and $1/n \ll \varepsilon \ll \alpha, \delta \ll \eta \ll 1$. Let~$H$ be a $2$-edge-coloured $(1-\varepsilon, \alpha)$-dense $k$-graph and~$G$ a $3\sqrt{\varepsilon}$-blueprint for~$H$. Let $R$ be a red tight component in~$H$. Let~$R_G \subseteq G^{\red}$ be such that $H(R_G) \subseteq R$. Let $U \subseteq V(H)$ be such that $\s{U} \geq \eta n/2$ and $R_G^+[U] = \varnothing$. Let $S \in \binom{V(H)}{k-4}$ be such that the link graph $G_S$ of~$G$ satisfies $G_S^{\red}[U] \subseteq (R_G)_S$ and $\delta(G_S[U]) \geq \s{U} - \delta n$. Then there exists a subgraph $J_S$ of~$G_S[U]$ such that $\s{J_S} \geq \s{G_S[U]} - 7\delta^{1/4}n^2$ and $H(S \cup e) = H(S \cup e')$ for all $e,e' \in J_S$ of the same colour. In particular, if $k=4$, then the edges in $J_S$ induce only one red and one blue tight component in~$H$. The same statement holds when the colours are reversed. \end{lem} \begin{proof} Set $J_S^{\red} = G_S^{\red}[U]$. Note that for $e, e' \in J_S^{\red}$, we have $e, e' \in (R_G)_S$ and thus $H(S \cup e) = H(S \cup e') = R$ since $H(R_G) \subseteq R$. Therefore to prove the lemma, it suffices to prove that there exists $J_S^{\blue} \subseteq G_S^{\blue}[U]$ such that $\s{J_S^{\red}}+\s{J_S^{\blue}} \geq \s{G_S[U]} - 7\delta^{1/4}n^2$ and $H(S \cup e) = H(S \cup e')$ for all $e,e' \in J_S^{\blue}$. For simplicity we assume $k=4$ and $S = \varnothing$. It is easy to see that an analogous argument works in the general case. Thus for the rest of the proof, we omit the subscript $S$. Let $K = G[U]$. If $\s{K^{\blue}} < 2 \delta^{1/2}n^2$, then we are done by setting $J^{\blue} = \varnothing$ as \begin{align*} \s{J^{\red}} = \s{K^{\red}} = \s{K} - \s{K^{\blue}} \geq \s{K} - 2\delta^{1/2}n^2 \geq \s{K} - 7\delta^{1/4}n^2. \end{align*} Now assume $\s{K^{\blue}} \geq 2 \delta^{1/2}n^2$. Let $X = \{x \in V(K) \colon d_K^{\blue}(x) \geq \delta n\}$.
We have that \begin{align*} 2\delta^{1/2} n^2 \leq \s{K^{\blue}} &\leq \sum_{x \in U } d_K^{\blue}(x) \leq n\s{X} + \delta n^2. \end{align*} Thus $\s{X} \geq \delta^{1/2} n$. Let~${D}$ be the digraph with vertex set~$X$ such that, for each $x \in X$, \begin{align*} N_{D}^+(x) & = N_{K}^{\blue}(x, X) \cup \{ x' \in N_{K}^{\red}(x,X) \colon xx'y \in \partial R \cap \partial B(xy) \text{ for some } y \in N_{K}^{\blue}(x)\}. \end{align*} We now bound $\delta^+(D)$ as follows. If $d_K^{\red}(x,X) \geq \delta n$, then by applying \cref{lem:shadow} (with $x, N_G^{\blue}(x, U ), N_G^{\red}(x,X), \delta$ playing the roles of $T,S^{\blue},S^{\red}, \delta$), we deduce that \begin{align*} \s{\{x' \in N_{K}^{\red}(x,X) \colon xx'y \in \partial R(xx') \cap \partial B(xy) \text{ for some } y \in N_{K}^{\blue}(x)\}} \ge (1-\varepsilon^{1/4})d_K^{\red}(x,X). \end{align*} Recall that $R = R(xx')$ for all $x' \in N_{K}^{\red}(x,X)$, $\s{X} \geq \delta^{1/2}n$ and $\varepsilon \ll \delta$. Hence \begin{align*} d_{D}^+(x) &\geq d_K^{\blue}(x,X) + (1-\varepsilon^{1/4})d_K^{\red}(x,X) \geq (1-\varepsilon^{1/4})(d_K^{\blue}(x,X) + d_K^{\red}(x,X)) \\ & = (1-\varepsilon^{1/4})d_K(x,X) \geq (1-\varepsilon^{1/4})(\s{X} - \delta n) \geq (1-2\delta^{1/2})\s{X}. \end{align*} On the other hand, if $d_K^{\red}(x,X) < \delta n$, then \begin{align*} d_{D}^+(x)\geq d_K^{\blue}(x,X) \geq \s{X} - \delta n - d_K^{\red}(x,X) \geq \s{X} - 2\delta n \geq (1-2 \delta^{1/2})\s{X}. \end{align*} Therefore, we have $\delta^+(D) \geq (1-2\delta^{1/2})\s{X}$ and so $\s{E({D})} \geq (1-2\delta^{1/2}) \s{X}^2 \geq 2(1-2\delta^{1/2})\binom{\s{X}}{2}$. Let~$F $ be the graph with vertex set~$X$ in which~$xx'$ forms an edge if and only if it forms a double edge in~${D}$. Note that $|F| \ge (1-4\delta^{1/2}) \binom{\s{X}}{2}$. By \cref{prop:mindeg}, there exists a subgraph~$F ^*$ of~$F $ with $\delta(F ^*) \geq (1 -6\delta^{1/4})\s{X}$. Clearly, $F ^*$ is connected. Let $J^{\blue} = \{ xx' \in K^{\blue} \colon x \in V(F ^*)\}$. We have \begin{align*} \s{J^{\red} \cup J^{\blue}} &\geq \s{K} - \sum_{x' \in U \setminus X}d^{\blue}_K(x') - \s{X\setminus V(F ^*)} n\\ & \geq \s{K} - \delta n^2 - 6 \delta^{1/4} n^2 \geq \s{G[U ]} - 7\delta^{1/4}n^2. \end{align*} We now show that $B(x_1z_1 ) = B(x_2z_2 )$ for all $x_1z_1, x_2z_2 \in J^{\blue}$. Since~$F ^*$ is connected and $d_{J^{\blue}}(x) > 0$ for all $x \in V(F^*)$, it suffices to consider the case when $x_1 x_2 \in F ^*$. If $x_1x_2 \in K^{\blue}$, then $x_1z_1 , x_1x_2 , x_2z_2 \in G^{\blue}$ and so $B(x_1z_1 ) = B(x_1x_2 ) = B(x_2z_2 )$, since $G$ is a blueprint. Now assume that $x_1x_2 \in K^{\red}$. Since $x_1x_2 \in F ^* \subseteq F $, there are $y_1 \in N_K^{\blue}(x_1)$ and $ y_2 \in N_K^{\blue}(x_2)$ such that $x_1x_2y_1 \in \partial R \cap \partial B(x_1y_1 )$ and $x_1x_2y_2 \in \partial R \cap \partial B(x_2y_2 )$. Let $u \in N_H(x_1x_2y_1 ) \cap N_H(x_1x_2y_2 ) \cap U$. Since $R^+_G[U] = \varnothing$, we have $x_1x_2y_1u , x_1x_2y_2u \in H^{\blue}$. Hence, $B(x_1y_1 ) = B(x_2y_2 )$. Moreover, since $x_1y_1 , x_1z_1 , x_2y_2 , x_2z_2 \in G^{\blue}$, we have $B(x_1z_1 ) = B(x_1y_1 )= B(x_2y_2 ) = B(x_2z_2 )$ as required. \end{proof} \section{Monochromatic connected matchings in \texorpdfstring{$K_n^{(4)}$}{Kn(4)}} \label{section:matchings Kn4} In this section, we prove that every almost complete red-blue edge-coloured $4$-graph~$H$ contains a red and a blue tightly connected matching that are vertex-disjoint and together cover almost all vertices of~$H$. 
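Let us first note that such a result is asymptotically best possible. Recall that a fractional matching~$\varphi$ assigns to each edge a weight $\varphi(e) \geq 0$ such that $\sum_{e \ni v} \varphi(e) \leq 1$ for every vertex~$v$. Hence any fractional matching in a $k$-graph on~$t$ vertices has weight \[ \mu = \sum_{e} \varphi(e) = \frac{1}{k}\sum_{v} \sum_{e \ni v} \varphi(e) \leq \frac{t}{k}, \] so $\mu_k^s(\beta, \varepsilon, t) \leq t/k$ and thus $\mu_k^s(\beta), \mu_k^*(\beta) \leq 1/k$ for all $\beta$ and~$s$. In particular, the bound $\mu_4^*(1) \geq 1/4$ obtained below cannot be improved.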
\begin{lem} \label{lem:matchings} Let $1/n \ll \varepsilon \ll \alpha \ll \eta < 1$. Let~$H$ be a $2$-edge-coloured $(1-\varepsilon,\alpha)$-dense $4$-graph on~$n$ vertices. Then~$H$ contains two vertex-disjoint monochromatic tightly connected matchings of distinct colours such that their union covers all but at most~$3\eta n$ of the vertices of~$H$. \end{lem} Note that this implies $\mu_4^*(1, \varepsilon, n) \geq (1- 3\eta)n/4$ for $1/n \ll \varepsilon \ll \eta < 1$. Hence $\mu_4^*(1) \geq 1/4$. Therefore, together with \cref{cor:matchings_to_cycles}, \cref{lem:matchings} implies \cref{thm:1}. To prove \cref{lem:matchings} we first need the following lemma which chooses the initial tight components in~$H$ in which we find our tightly connected matchings. \begin{lem} \label{lem:4existence} Let $1/n \ll \varepsilon \ll \alpha \ll \eta < 1$. Let~$H$ be a $2$-edge-coloured $(1-\varepsilon,\alpha)$-dense $4$-graph on~$n$ vertices. Suppose that $H$ does not contain two vertex-disjoint monochromatic tightly connected matchings of distinct colours such that their union covers all but at most~$3\eta n$ of the vertices of~$H$. Then, there exists a red tight component~$R$ in $H$, a blue tight component~$B$ in~$H$, a $3\sqrt{\varepsilon}$-blueprint~$G$ for~$H$ and a matching~$M_0$ in $R \cup B$ such that the following holds, where $W_0 = V(G)\setminus V(M_0) $. \begin{enumerate}[label = \upshape (\roman*)] \item $ \delta(G) \geq (1-\alpha^{1/30})n$,\label{itm:4-2} \item $ R(e) =R$ and $ B(e') =B$ for all edges $e \in G^{\red}[V(M_0^{\red})\cup W_0]$ and all edges $e' \in G^{\blue}[V(M_0^{\blue})\cup W_0]$, \label{itm:4-3} \item $M_0 \subseteq (G^{\red})^+ \cup (G^{\blue})^+$, \label{itm:4-4} \item $(G^{\red})^+[W_0] \cup (G^{\blue})^+[W_0]$ is empty. \label{itm:4-5} \end{enumerate} \end{lem} \begin{proof} By \cref{lem:generalblueprint}, there exists a $3\sqrt{\varepsilon}$-blueprint $G_0$ for $H$ with $V(G_0) = V(H)$ and $\s{G_0} \geq (1-\alpha - 96\sqrt{\varepsilon}) \binom{n}{2} \geq (1-4\alpha)\binom{n}{2}$. By \cref{cor:Ecomp}, there exists a subgraph $G_1$ of $G_0$ of order at least $(1-6\sqrt{\alpha})n$ that contains a spanning monochromatic component and $\delta(G_1) \geq (1-12\sqrt{\alpha})n$. Note that $G_1$ is also a $3\sqrt{\varepsilon}$-blueprint for $H$. We assume without loss of generality that~$G_1$ contains a spanning red component. Since~$G_1$ is a blueprint, all the red edges in~$G_1$ induce the same red tight component~$R$ in~$H$. Let $R^+ = (G^{\red}_1)^+ \subseteq R$. Let~$M$ be a matching in~$R^+$ of maximum size. Let $U = V(G_1) \setminus V(M)$. Thus $\s{U} \geq \eta n$ (or else $\s{V(M)} \geq \s{V(G_1)} - \s{U} \ge (1-2\eta)n$, contradicting our assumption on~$H$). Moreover, $R^+[U] = \varnothing$. Since $\delta(G_1) \geq (1-12\sqrt{\alpha})n$, we have $\delta(G_1[U]) \geq \s{U} -\alpha^{1/3}n$. Hence, by \cref{lem:reducing_components} (with $4, G_1^{\red}, U, \varnothing, \alpha^{1/3}$ playing the roles of $k, R_G, U, S, \delta$), there exists a subgraph~$J$ of~$G_1[U]$ such that $\s{J} \geq \s{G_1[U]} - 2\alpha^{1/13}n^2$ and $H(e) = H(e')$ for all $e, e' \in J$ of the same colour. Let $G_2 = (G_1- G_1^{\blue}[U]) \cup J$ and $B = B(e)$ for $e \in J^{\blue}$. Note that $\s{G_2} \geq (1-\alpha^{1/14})\binom{n}{2}$. By \cref{prop:mindeg}, there exists a subgraph~$G$ of~$G_2$ such that $\delta(G) \geq (1 - \alpha^{1/30})n$, so \ref{itm:4-2} holds. Let $W = V(G) \setminus V(M)$. Next, we show that \ref{itm:4-3} and \ref{itm:4-4} hold but with $M,W$ instead of $M_0,W_0$.
Note that $M^{\blue} = \varnothing$, so \ref{itm:4-4} holds by our construction. Since $G^{\red} \subseteq G_1^{\red}$ and $G_1^{\red}$ is connected and a blueprint, $R(e) = R$ for all $e \in G^{\red}$. Note that $G^{\blue}[V(M^{\blue})\cup W] = G^{\blue} - V(M) \subseteq G_2^{\blue}[U] = J^{\blue}$, so $B(e) = B$ for all $e \in G^{\blue}[V(M^{\blue})\cup W]$. Hence \ref{itm:4-3} holds. We now add vertex-disjoint edges of $(G^{\red})^+[W] \cup (G^{\blue})^+[W]$ to~$M$ and call the resulting matching $M_0$. We deduce that $M_0$ satisfies \ref{itm:4-3}--\ref{itm:4-5}. \end{proof} We now prove \cref{lem:matchings}. \begin{proof}[Proof of \cref{lem:matchings}] Suppose, for a contradiction, that $H$ does not contain two vertex-disjoint monochromatic tightly connected matchings of distinct colours such that their union covers all but at most~$3\eta n$ of the vertices of~$H$. We call this the initial assumption. Apply \cref{lem:4existence} and obtain a red tight component~$R$, a blue tight component~$B$ in~$H$, a $3\sqrt{\varepsilon}$-blueprint~$G$ for~$H$ and a matching~$M_0$ in $R \cup B$ satisfying \cref{lem:4existence}\ref{itm:4-2}--\ref{itm:4-5}. We now fix $G$, $R$ and $B$. We use the following notation for the rest of the proof. For a matching $M$ in $R \cup B$, we set \begin{align*} W & = W(M) = V(G)\setminus V(M),\\ W_\textup{red} &= W_\textup{red}(M) = \{w \in W \colon d_{G[W]}^{\blue}(w) \leq 8\sqrt{\varepsilon}n\},\\ W_\textup{blue} & = W_\textup{blue}(M) = \{w \in W\colon d_{G[W]}^{\red}(w) \leq 8\sqrt{\varepsilon}n\}. \end{align*} Note that $\s{W}\geq \eta n$ by the initial assumption. Without loss of generality, $\s{W_\textup{blue} (M_0)} \le \s{W_\textup{red} (M_0)}$. We define $\mathcal{M}$ to be the set of matchings~$M$ in $R \cup B$ such that \begin{enumerate}[label = \upshape (\roman*$'$)] \item \label{itm:M1} $ R(e) =R$ and $ B(e') =B$ for all edges $e \in G^{\red}[W]$ and $e' \in G^{\blue}[V(M^{\blue})\cup W]$, \item $M^{\blue} \subseteq (G^{\blue})^+$, \label{itm:M2} \item $(G^{\red})^+[W] \cup (G^{\blue})^+[W]$ is empty. \label{itm:M3} \end{enumerate} Note that \ref{itm:M1} and \ref{itm:M2} are weaker than \cref{lem:4existence}\ref{itm:4-3} and~\ref{itm:4-4}, and \ref{itm:M3} is \cref{lem:4existence}\ref{itm:4-5}, so $M_0 \in \mathcal{M}$. Let $\mathcal{M}'$ be the set of $M \in \mathcal{M}$ also satisfying \begin{enumerate}[label = \upshape (\roman*$'$),resume] \item $\s{W_\textup{blue}} \le \s{W_\textup{red}}$.\label{itm:M4} \end{enumerate} Observe that $M_0 \in \mathcal{M}'$, so $\mathcal{M}'$ is nonempty. Let $\gamma = 10\alpha^{1/30}$. We now show that, for all $M \in \mathcal{M}$, $W_\textup{red}$ and~$W_\textup{blue}$ partition~$W$, and moreover one of them is small. \begin{claim} \label{claim:mathcalM} Let $M \in \mathcal{M}$. The following hold: \begin{enumerate}[label = \upshape (\alph*)] \item \label{itm:Ma} for all $w \in W$, either $d_{G[W]}^{\red}(w) \le 7\sqrt{\varepsilon}n$ or $d_{G[W]}^{\blue}(w) \le 7\sqrt{\varepsilon}n$, \item \label{itm:Mb}$W_\textup{red}$ and~$W_\textup{blue}$ partition~$W$, \item \label{itm:Mc} either $\s{ W_\textup{blue}} \le \gamma n $ or $ \s{ W_\textup{red}} \leq \gamma n$. \end{enumerate} In particular, if $M \in \mathcal{M}'$, then $\s{W_\textup{blue}} \le \gamma n $. \end{claim} \begin{proofclaim} Suppose that $w \in W$ satisfies $d_{G[W]}^{\red}(w), d_{G[W]}^{\blue}(w) > 7\sqrt{\varepsilon}n$.
By \cref{lem:shadow} (with $7\sqrt{\varepsilon},w, N_{G[W]}^{\red}(w),N_{G[W]}^{\blue}(w)$ playing the roles of $\delta, T, S^{\red}, S^{\blue}$), there exist $x \in N_{G[W]}^{\red}(w)$ and $y \in N_{G[W]}^{\blue}(w)$ such that $wxy \in \partial R \cap \partial B$. In particular, $d_H(wxy) \neq 0$ and thus $d_H(wxy) \geq (1-\varepsilon)n$, which implies that there exists a vertex $w' \in W$ such that $ww'xy \in H$. Note that $ww'xy \in (G^{\red})^+[W] \cup (G^{\blue})^+[W]$, contradicting~\ref{itm:M3}. Hence, $\min \{ d_{G[W]}^{\red}(w), d_{G[W]}^{\blue}(w)\} \le 7\sqrt{\varepsilon}n$. Since \cref{lem:4existence}\ref{itm:4-2} implies that $\delta(G[W]) \ge \s{W} - \alpha^{1/30} n > 16 \sqrt{\varepsilon}n$, we deduce that \ref{itm:Ma} and \ref{itm:Mb} hold. Recall that $\s{W} \ge \eta n > 2 \gamma n$. So one of $W_\textup{red}$ and $W_\textup{blue}$ has size greater than $\gamma n$. Suppose that both have size greater than $\gamma n$ (that is, \ref{itm:Mc} is false). Since $\delta(G) \geq (1-\alpha^{1/30})n = (1-\gamma/10)n$, we have that there are at least \begin{align*} \s{W_\textup{blue}} \br{\s{ W_\textup{red}} -\gamma n/10 - 8\sqrt{\varepsilon}n} \geq \s{W_\textup{blue}} \br{\s{ W_\textup{red} } - \gamma n/5} > 3\s{W_\textup{red}}\s{W_\textup{blue}}/4 \end{align*} blue edges between~$W_\textup{blue}$ and~$W_\textup{red}$ and similarly there are at least $3\s{ W_\textup{red}} \s{ W_\textup{blue}} /4$ red edges between~$W_\textup{blue}$ and~$W_\textup{red}$. Thus $e(W_\textup{red},W_\textup{blue}) > \s{W_\textup{red}}\s{W_\textup{blue}}$, a contradiction. \end{proofclaim} Let $M_* \in \mathcal{M}'$ be such that $(\s{M_*}, \s{M_*^{\red}})$ is lexicographically maximum. We write $W^*, W_\textup{red}^*,W_\textup{blue}^*$ for $W(M_*), W_\textup{red}(M_*),W_\textup{blue}(M_*)$, respectively. The next claim shows that almost all $4$-edges in~$H[W^*]$ are blue and lie in a single blue tight component. Indeed, this follows from the fact that almost all edges in~$G[W^*]$ are red and thus almost all triples in~$W^*$ are in~$\partial R$. \begin{claim} \label{claim:B'} There exists a blue tight component~$B'$ in~$H$ such that the number of triples $xyz \in \binom{W_\textup{red}^*}{3} \cap \partial B'$ with $d_{B'}(xyz, W_\textup{red}^*) \geq \s{W_\textup{red}^*} - \varepsilon n$ is at least $(1-\alpha^{1/31})\s{\binom{W_\textup{red}^*}{3}}$. \end{claim} \begin{proofclaim} Let~$\mathcal{T}$ be the set of triples $xyz \in \binom{W_\textup{red}^*}{3} \cap \partial R$ such that $xy \in G^{\red}$. Note that, for any $x \in W_\textup{red}^*$, $y \in N^{\red}_{G}(x, W_\textup{red}^*)$ and $z \in N_{\partial R}(xy, W_\textup{red}^*)$, we have $xyz \in \mathcal{T}$. Thus \begin{align*} \s{\mathcal{T}} &\geq \frac{1}{3!}\s{W_\textup{red}^*}\br{\s{W_\textup{red}^*}-\alpha^{1/30}n-8\sqrt{\varepsilon}n}\br{\s{W_\textup{red}^*}-3\sqrt{\varepsilon}n} \\ &\geq \frac{\s{W_\textup{red}^*}^3}{3!}\br{1-\frac{2\alpha^{1/30}n}{\s{W_\textup{red}^*}}} \geq \br{1-\alpha^{1/31}}\s{\binom{W_\textup{red}^*}{3}}, \end{align*} as $\s{W_\textup{red}^*} \geq \eta n/2$. By \ref{itm:M3}, we have that if $xyz \in \mathcal{T}$ and $w \in N_H(xyz, W_\textup{red}^*)$, then $wxyz \in H^{\blue}$. For $xyz \in \mathcal{T}$, let~$B(xyz)$ be the maximal blue tight component containing all the edges~$xyzw$, where $w \in N_H(xyz, W_\textup{red}^*)$. We say that~$xyz$ generates the blue tight component~$B(xyz)$. It suffices to show that all $xyz \in \mathcal{T}$ generate the same blue tight component. First we show that triples that share two vertices generate the same blue tight component.
Note that, for $xyz_1, xyz_2 \in \mathcal{T}$, we have $d_H(xyz_1, W_\textup{red}^*), d_H(xyz_2, W_\textup{red}^*) \geq \s{W_\textup{red}^*}-\varepsilon n > \s{W_\textup{red}^*}/2$ and thus there exists $w \in N_H(xyz_1)\cap N_H(xyz_2) \cap W_\textup{red}^*$. Since the edges~$wxyz_1$ and~$wxyz_2$ are blue, it follows that $B(xyz_1) = B(xyz_2)$. Now let $x_1y_1z_1, x_2y_2z_2 \in \mathcal{T}$, where $x_1y_1, x_2y_2 \in G^{\red}$. Let $w_1 \in N_{\partial R}(x_1y_1) \cap N_{\partial R}(x_2y_2) \cap N_{G^{\red}}(x_1)\cap N_{G^{\red}}(x_2) \cap W_\textup{red}^*$ and $w_2 \in N_{\partial R}(x_1w_1) \cap N_{\partial R}(x_2w_1) \cap W_\textup{red}^*$. It follows that $x_1y_1w_1, x_1w_1w_2, x_2w_1w_2, x_2y_2w_1 \in \mathcal{T}$. Hence $B(x_1y_1z_1) = B(x_1y_1w_1) = B(x_1w_1w_2) = B(x_2w_1w_2) = B(x_2y_2w_1) = B(x_2y_2z_2)$. Let~$B'$ be the unique blue tight component generated by all triples $xyz \in \mathcal{T}$. \end{proofclaim} The previous claim and a greedy argument imply that there is a matching~$M_*^{B'}$ in~$B'[W^*]$ that covers all but at most~$\eta n$ of the vertices in~$W^*$. Thus we may assume that $\s{M_*^{\blue}}\geq \eta n/4$, otherwise $\s{V(M_*^{\red}\cup M_*^{B'})} \geq n - 3\eta n $, which is a contradiction to the initial assumption. To complete the proof, we will show that in fact~$B' =B$, implying $M_*^{\red}$ and $M_*^{\blue} \cup M_*^{B'}$ are tightly connected matchings, a contradiction to the initial assumption. We now pick a special edge $e^* \in M_*^{\blue}$. The special property of~$e^*$ that we require is stated in~\cref{claim:edge restriction}. \begin{claim} \label{claim:e^*} There exist an edge $e^* = v_1^*v_2^*v_3^*v_4^* \in M_*^{\blue}$ and distinct vertices $w_1, \dots, w_4,$ $w_1', \dots, w_4' \in W_\textup{red}^*$ such that, for each $j \in [4]$, \begin{enumerate}[label = \upshape (\alph*)] \item all the red edges of~$G$ incident to~$v_j^*$ induce~$R$, or \label{itm:6.5-a} \item $v_j^*w_j \in G^{\blue}$ and $v_j^*w_jw_j' \in \partial R \cap \partial B$. \label{itm:6.5-b} \end{enumerate} \end{claim} \begin{proofclaim} For each edge $e \in M_*^{\blue}$, let $v_1^e,v_2^e,v_3^e,v_4^e$ be an enumeration of its vertices. It is easy to see that there exists $M_1^{\blue} \subseteq M_*^{\blue}$ with $\s{M_1^{\blue}} \geq \s{M_*^{\blue}}/16$ such that for each $j\in [4]$ we have that either \begin{enumerate}[label = \upshape (\alph*$ '$)] \item for all $e \in M_1^{\blue}$, there is a red edge in~$G$ between~$v_j^e$ and~$W_\textup{red}^*$, or \label{1} \item for all $e \in M_1^{\blue}$, all edges in $G$ between~$v_j^e$ and~$W_\textup{red}^*$ are blue. \end{enumerate} Let~$J_1$ be the set of $j \in [4]$ such that \ref{1} holds and $J_2 = [4]\setminus J_1$. Since each vertex in~$W_\textup{red}^*$ is incident to a red edge of~$G$ that induces~$R$ and~$G$ is a blueprint for~$H$, we have that, for all $e \in M_1^{\blue}$ and all $j \in J_1$, all the red edges incident to~$v_j^e$ induce~$R$. For every $j \in J_2$, we have that \begin{align*} \s{ G^{\blue} [\left\{v_j^e \colon e \in M_1^{\blue}\right\}, W_\textup{red}^*] } &\geq \s{ M_1^{\blue} } \br{ \s{ W_\textup{red}^*} - \alpha^{1/30}n } \geq \br{1-\alpha^{1/31}} \s{ M_1^{\blue} } \s{ W_\textup{red}^*}. \end{align*} Thus there exists $w_j \in W_\textup{red}^*$ such that~$w_jv_j^e$ is blue for at least $\s{M_1^{\blue}}(1-\alpha^{1/32})$ of the vertices~$v_j^e$, with $e \in M_1^{\blue}$. It is easy to see that we can choose the~$w_j$ to be distinct.
Hence there exist distinct vertices $w_1,w_2,w_3,w_4 \in W_\textup{red}^*$ and $M_2^{\blue} \subseteq M_1^{\blue}$ with $\s{M_2^{\blue}} \geq \s{M_1^{\blue}}/2 \geq \eta n/128$ such that for all $j \in J_2$ and all $e \in M_2^{\blue}$ we have that $w_jv_j^e \in G^{\blue}$. For $j \in J_2$, let $V_j= \{v_j^e \colon e \in M_2^{\blue}\}$ and note that $d_{G}^{\blue}(w_j,V_j) = \s{M_2^{\blue}} \geq \eta n/128$ and $ d_{G}^{\red}(w_j, W_\textup{red}^*) \geq \eta n/2$. For each $j \in J_2$, we apply \cref{lem:shadow} with colours reversed and $w_j, V_j, \widetilde{W}_\textup{red}^*$ playing the roles of $T, S^{\blue}, S^{\red}$ where $\widetilde{W}_\textup{red}^*$ denotes~$W_\textup{red}^*$ with all previously chosen vertices removed. Thus, we find distinct $w_j' \in W_\textup{red}^*\setminus \{w_1,w_2,w_3,w_4\}$ and $M_3^{\blue} \subseteq M_2^{\blue}$ with $\s{M_3^{\blue}} \geq \s{M_2^{\blue}}/2$ such that, for all $j \in J_2$ and all $e \in M_3^{\blue}$, we have that $v_j^ew_j \in G^{\blue}$ and $v_j^ew_jw_j' \in \partial R \cap \partial B$. We complete the proof by choosing $e^* = v_1^*v_2^*v_3^*v_4^* \in M_3^{\blue}$ and distinct vertices $w_j, w_j' \in W_\textup{red}^*$ for each $j \in J_1$. \end{proofclaim} Let $W' = W_\textup{red}^* \setminus \{w_1, \dots, w_4, w_1', \dots, w_4'\}$. \begin{claim} \label{claim:edge restriction} The graph $B[e^* \cup W']$ does not contain two vertex-disjoint edges each of which contains an edge of~$G^{\blue}$, and $R[e^* \cup W']$ is empty. In particular, there do not exist two vertex-disjoint edges~$f_1$ and~$f_2$ in $(R \cup B)[e^* \cup W']$ each containing an edge of~$G^{\blue}$. \end{claim} \begin{proofclaim} First suppose there exist two vertex-disjoint edges $f_1, f_2 \in B[e^* \cup W']$ each of which contains an edge of~$G^{\blue}$. By the maximality of $\s{M_*}$, both~$f_1$ and~$f_2$ must intersect~$e^*$. For simplicity, we only consider the case that $e^* \setminus (f_1 \cup f_2) = \{v_1^*\}$ (the other cases can be proved similarly). By \cref{claim:e^*}, we have that all red edges of~$G$ incident to~$v_1^*$ induce~$R$, or $v_1^*w_1 \in G^{\blue}$ and $v_1^*w_1w_1' \in \partial R \cap \partial B$. First suppose that $v_1^*w_1 \in G^{\blue}$ and $v_1^*w_1w_1' \in \partial R \cap \partial B$. Let $w_1'' \in N_H(v_1^*w_1w_1',W^*\setminus (f_1\cup f_2))$ and $f_3 = v_1^*w_1w_1'w_1''$. Let $M' = (M_*\setminus \{e^*\})\cup \{f_1,f_2,f_3\}$. Note that $W(M') \subseteq W^*$. Since $\s{W} \ge \eta n \ge 3 \gamma n$ and $\s{W^*_\textup{blue}} \le \gamma n$ by~\cref{claim:mathcalM}, we deduce that $M'$ satisfies~\ref{itm:M4}. Hence $M' \in \mathcal{M}'$, contradicting the maximality of $\s{M_*}$. Now assume that all the red edges of~$G$ incident to~$v_1^*$ induce~$R$. Let $M$ be a matching in $R \cup B$ containing $(M_* \setminus \{e^*\}) \cup \{f_1, f_2\}$ satisfying~\ref{itm:M2} and~\ref{itm:M3}. We now show that $M \in \mathcal{M}'$, which then contradicts the maximality of $\s{M_*}$. Recall that $v_1^* \in e^* \in M_*^{\blue}$, so \begin{align} W \subseteq (W^*\setminus (f_1 \cup f_2)) \cup \{v_1^*\} \text{ and } V(M^{\blue}) \cup W \subseteq V(M_*^{\blue}) \cup W^*. \label{eqn:WW_*} \end{align} Together with our assumption on $v_1^*$, $M$ satisfies~\ref{itm:M1}. Hence $M \in \mathcal{M}$. For all $w \in W \cap W_\textup{red}^*$, \begin{align*} d_{G[W]}^{\blue}(w) \overset{\eqref{eqn:WW_*}}{\leq} d_{G[W^*]}^{\blue}(w) + 1 \overset{\text{\cref{claim:mathcalM}\ref{itm:Ma}}}{\le} 7 \sqrt{\varepsilon}n + 1 \leq 8\sqrt{\varepsilon}n
\end{align*} and a similar inequality holds for all $w \in W \cap W_\textup{blue}^*$. This implies that $W_\textup{blue} \subseteq W^*_\textup{blue} \cup \{v_1^*\}$. Since $\s{W} \ge \eta n \ge 3 \gamma n$ and $\s{W^*_\textup{blue}} \le \gamma n$ by~\cref{claim:mathcalM}, we deduce that $M$ satisfies~\ref{itm:M4}. Hence, $M \in \mathcal{M}'$ as required, a contradiction. Therefore, $B[e^* \cup W']$ does not contain two vertex-disjoint edges each of which contains an edge of~$G^{\blue}$. If $R[e^* \cup W']$ contains an edge~$f$, then a similar argument holds with $f$ replacing $\{f_1, f_2\}$. Note that if $\s{M} = \s{M_*}$, then we obtain a contradiction by showing that $\s{M_*^{\red}} < \s{M^{\red}}$. \end{proofclaim} Since $e^* \in M_*^{\blue} \subseteq (G^{\blue})^+$, we may assume without loss of generality that $v_1^*v_2^* \in G^{\blue}$. The following claim shows that one of the vertices~$v_1^*$ and~$v_2^*$ has small blue degree in~$G$ to~$W'$ (and thus it has large red degree to~$W'$). \begin{claim} \label{claim:at most one} We have $d_{G}^{\blue}(v_1^*, W') \leq 3\gamma n$ or $d_{G}^{\blue}(v_2^*,W') \leq 3\gamma n$. \end{claim} \begin{proofclaim} Suppose to the contrary that we have $d_{G}^{\blue}(v_1^*, W'), d_{G}^{\blue}(v_2^*,W') > 3\gamma n$. By \cref{claim:edge restriction}, it suffices to show that we can find two vertex-disjoint edges~$f_1$ and~$f_2$ in $(R \cup B)[e^* \cup W']$ each containing an edge of~$G^{\blue}$. It is easy to see that we can greedily choose vertices $x \in N_{G}^{\blue}(v_1^*, W')$, $x' \in N_{G}^{\red}(x,W') \cap N_{\partial B}(v_1^*x, W')$ and $x'' \in N_{\partial R}(xx', W') \cap N_H(v_1^*xx',W')$. Set $f_1 = v_1^*xx'x''$. By our construction, $v_1^*xx' \in \partial B$ and $xx'x'' \in \partial R$, implying $f_1 \in (R\cup B)[e^* \cup W']$. Similarly there exists an edge $f_2 = v_2^*yy'y'' \in (R \cup B)[e^* \cup W']$ disjoint from $f_1$ with $y,y',y'' \in W'$. \end{proofclaim} Without loss of generality assume $d_{G}^{\blue}(v_1^*,W') \leq 3 \gamma n$ and so $d_{G}^{\red}(v_1^*, W') \geq \s{W'} - \alpha^{1/31}n$. Let $w \in N_{\partial B}(v_1^*v_2^*) \cap N_{G}^{\red}(v_1^*) \cap W'$, $w' \in N_{G}^{\red}(w) \cap N_{\partial R}(v_1^*w) \cap N_H(v_1^*v_2^*w) \cap W'$ and $w'' \in N_H(v_1^*ww',W')$. (We can find these vertices greedily one by one.) By \cref{claim:B'}, we may further assume that $ww'w'' \in \partial B'$. By construction, we have that $v_1^*ww' \in \partial R$ and thus \cref{claim:edge restriction} implies that both~$v_1^*v_2^*ww'$ and~$v_1^*ww'w''$ are blue. Since $v_1^*v_2^*w \in \partial B$, we deduce that $v_1^*v_2^*ww', v_1^*ww'w'' \in B$ and so $ww'w'' \in \partial B \cap \partial B'$. In particular, any edge of~$B'$ containing~$ww'w''$ shares three vertices with the blue edge $v_1^*ww'w'' \in B$, so $B = B'$ as required. \end{proof} \section{Monochromatic connected matchings in \texorpdfstring{$K_n^{(5)}$}{Kn(5)}} \label{section:matchings Kn5} The aim of this section is to prove the following lemma which shows that $2$-edge-coloured dense $5$-graphs can be almost partitioned into four monochromatic tightly connected matchings. \begin{lem} \label{lem:5-matchings} Let $1/n \ll \varepsilon \ll \alpha \ll \eta < 1$. Let~$H$ be a $2$-edge-coloured $(1-\varepsilon,\alpha)$-dense $5$-graph on~$n$ vertices. Then~$H$ contains four vertex-disjoint monochromatic tightly connected matchings such that their union covers all but at most~$3\eta n$ of the vertices of~$H$. \end{lem} Note that this implies $\mu_5^4(1, \varepsilon, n) \geq (1- 3\eta)n/5$ for $1/n \ll \varepsilon \ll \eta < 1$.
Hence $\mu_5^4(1) \geq 1/5$. Together with \cref{cor:matchings_to_cycles}, \cref{lem:5-matchings} implies \cref{thm:2}. We use the following notation throughout this section. Let~$H$ be a $2$-edge-coloured $5$-graph and let~$G$ be a blueprint for~$H$. Given a red tight component $R \subseteq H$, we write~$R^3$ for the edges of~$G$ that induce~$R$. We use analogous notation for blue tight components. Let $H$ be a $2$-edge-coloured dense $5$-graph. We first apply~\cref{lem:generalblueprint} to~$H$ to get a blueprint~$G$ for~$H$. Since $G$ is a $2$-edge-coloured dense $3$-graph, we can apply \cref{lem:generalblueprint} again to~$G$ to obtain a blueprint for~$G$, which is a $2$-edge-coloured $1$-graph. The following lemma summarises the structural information about~$H$ that we obtain in this way. \begin{lem} \label{lem:5-blueprint} Let $1/n \ll \varepsilon \ll \alpha \ll 1$. Let~$H$ be a $2$-edge-coloured $(1-\varepsilon, \alpha)$-dense $5$-graph on~$n$ vertices. Then there exists a $3$-graph~$G$ with $V(G) = V(H)$, two disjoint subsets~$V^{\red}$ and~$V^{\blue}$ of~$V(H)$, a red tight component $R \subseteq H$ and a blue tight component $B\subseteq H$ such that the following properties hold. \begin{enumerate}[label = \upshape (\roman*)] \item $G$ is a $(1-\alpha^{1/37}, \alpha^{1/37})$-dense $3\sqrt{\varepsilon}$-blueprint for~$H$. \item $\s{V(H) \setminus (V^{\red} \cup V^{\blue})} \leq \alpha^{1/75}n$. \item $d_{\partial R^3}(v) \geq (1-\alpha^{1/75})n$ for all $v \in V^{\red}$. \item $d_{\partial B^3}(v) \geq (1-\alpha^{1/75})n$ for all $v \in V^{\blue}$. \end{enumerate} \end{lem} \begin{proof} By \cref{lem:generalblueprint}, there exists a $(1-\alpha^{1/37}, \alpha^{1/37})$-dense $3\sqrt{\varepsilon}$-blueprint~$G$ for~$H$ with $V(G) = V(H)$. We apply \cref{lem:generalblueprint} to~$G$ and obtain an $\alpha^{1/75}$-blueprint~$J$ for~$G$ with $\s{J} \geq (1-\alpha^{1/75})n$. Note that, as a blueprint for a $3$-graph,~$J$ is a $1$-graph. Hence each edge of~$J$ contains precisely one vertex. By the definition of a blueprint, all the red edges of~$J$ induce the same red tight component~$R_G$ of~$G$. Let $V^{\red} = \bigcup J^{\red}$. Since~$R_G$ is a red tight component of~$G$, all its edges induce the same red tight component~$R$ of~$H$. Define~$V^{\blue}$ and~$B$ analogously. \end{proof} Two edges~$f$ and~$f'$ in~$H$ are \emph{loosely connected} if there exists a sequence of edges $e_1,\dots,e_t$ such that~$e_1 = f$,~$e_t=f'$ and $\s{e_i \cap e_{i+1}} \geq 1$ for all $i \in [t-1]$. A subgraph~$H'$ of~$H$ is \emph{loosely connected} if every pair of edges in~$H'$ is loosely connected. A maximal loosely connected subgraph of~$H$ is called a \emph{loose component} of~$H$. We now prove \cref{lem:5-matchings}. The proof works by first finding a maximal matching in $R \cup B$, where $R$ and $B$ are the components given by \cref{lem:5-blueprint}, and then finding maximal connected matchings among the remaining vertices. \begin{proof}[Proof of \cref{lem:5-matchings}] Assume, for a contradiction, that such matchings do not exist. We call this the initial assumption. Apply \cref{lem:5-blueprint} and obtain $V^{\red}, V^{\blue}, G, R^3, R, B^3, B$ and let $V^* = V^{\red} \cup V^{\blue}$. Since there are only a few vertices in $V(H) \setminus V^*$, we ignore these vertices from the start and construct our matchings in~$H[V^*]$. We begin by choosing a matching $M \subseteq (R \cup B)[V^*]$ of maximum size. Let $U = V^*\setminus V(M)$.
Note that $R[U] = B[U] = \varnothing$ by the maximality of~$M$, and $\s{U} \geq \eta n$ by the initial assumption. Let $U^{\red} = U \cap V^{\red}$ and $U^{\blue} = U \cap V^{\blue}$. The following claim shows that if $U^{\red}$ and~$U^{\blue}$ are both large, then $G[U]$ must contain many edges in~$R^3$ or many edges in~$B^3$. \begin{claim} \label{claim:structure} If $\s{U^{\red}}, \s{U^{\blue}} \geq \alpha^{1/309}n$, then $\max\{\s{R^3[U]},\s{B^3[U]}\} \geq \frac{1}{2}\s{U^{\red}}\s{U^{\blue}}\s{U} - 3\alpha^{1/155}n^3$. \end{claim} \begin{proofclaim} Define a bipartite graph~$K_0$ with vertex classes~$U^{\red}$ and~$U^{\blue}$ such that $x \in U^{\red}$ and $y \in U^{\blue}$ are joined by an edge if and only if $xy \in \partial R^3 \cap \partial B^3$. Recall that $d_{\partial R^3}(x) \geq (1-\alpha^{1/75})n$ and $d_{\partial B^3}(y) \geq (1-\alpha^{1/75})n$ for all $x \in U^{\red}$ and $y \in U^{\blue}$. Hence \[ \s{K_0} \geq \s{U^{\blue}}\s{U^{\red}} - \alpha^{1/75}n^2. \] Since~$G$ is $(1-\alpha^{1/37}, \alpha^{1/37})$-dense, we have $d_G(xy, U) \geq \s{U} - \alpha^{1/37}n$ for $xy \in K_0$. We now colour the edges of~$K_0$ such that $xy \in K_0$ is red if $d_{R^3}(xy, U) \geq \s{U} - 2\alpha^{1/76}n$ and blue if $d_{B^3}(xy, U) \geq \s{U} - 2\alpha^{1/76}n$. Since $K_0 \subseteq \partial R^3 \cap \partial B^3$, if $xyz \in G$ with $xy \in K_0$, then $xyz \in R^3 \cup B^3$. Hence it suffices to show that almost all edges of $K_0$ are of the same colour. Indeed, if at least $\s{U^{\red}}\s{U^{\blue}} - 3 \alpha^{1/154}n^2$ edges of~$K_0$ are red, then \[ \s{R^3[U]} \geq \frac{1}{2}(\s{U^{\red}}\s{U^{\blue}} - 3\alpha^{1/154}n^2) (\s{U} - 2 \alpha^{1/76}n) \geq \frac{1}{2} \s{U^{\red}}\s{U^{\blue}}\s{U} - 3\alpha^{1/155}n^3. \] We show that each edge $xy \in K_0$ is coloured either red or blue. It suffices to show that either $d_{R^3}(xy, U) < \alpha^{1/76}n$ or $d_{B^3}(xy, U) < \alpha^{1/76}n$. Indeed, if $d_{R^3}(xy, U), d_{B^3}(xy, U) \geq \alpha^{1/76}n$, then by \cref{lem:shadow}, there exist $u, u' \in U$ such that $xyu \in R^3$, $xyu' \in B^3$ and $xyuu' \in \partial R \cap \partial B$. For any $u'' \in N_H(xyuu', U)$, we would have $xyuu'u'' \in R[U] \cup B[U]$, a contradiction to the maximality of $M$. Moreover, by \cref{lem:vertexdeg}, we have that $\min \{ d_{K_0}^{\red}(u), d_{K_0}^{\blue}(u)\} \leq \alpha^{1/76}n$ for all $u \in U$. Let~$K_1$ be the graph obtained from~$K_0$ by, for each~$u \in U$, deleting all red edges incident to~$u$ if $d_{K_0}^{\red}(u) \leq \alpha^{1/76}n$ and all blue edges incident to~$u$ if $d_{K_0}^{\blue}(u) \leq \alpha^{1/76}n$. Note that $\s{K_1} \geq \s{U^{\red}}\s{U^{\blue}} - \alpha^{1/77}n^2$ and that, in $K_1$, each vertex is incident to edges of only one colour. It is not too hard to see that by deleting at most $2\alpha^{1/154}n^2$ additional edges, we can obtain a subgraph $K_2$ of $K_1$ for which each vertex has degree~$0$ or large degree. More precisely, for all $u \in U^{\red}$, \[ d_{K_2}(u) \geq \s{U^{\blue}} - 3\alpha^{1/308}n \text{ or } d_{K_2}(u) = 0 \] and, for all $u \in U^{\blue}$, \[ d_{K_2}(u) \geq \s{U^{\red}} - 3\alpha^{1/308}n \text{ or } d_{K_2}(u) = 0. \] Since each vertex is incident to edges of only one colour, and any two vertices in~$U^{\red}$ that have non-zero degree have a common neighbour, all edges in~$K_2$ are of the same colour. Since $\s{K_2} \geq \s{U^{\red}}\s{U^{\blue}} - 3\alpha^{1/154}n^2$, this concludes the proof.
\end{proofclaim} The following claim shows that there is a red tight component~$R_*$ and a blue tight component~$B_*$ of~$H$ such that almost all the edges in~$G[U]$ induce one of these components. \begin{claim} \label{claim:two_components} Let $\gamma = \alpha^{1/1110}$. There exists a red tight component~$R_*$ and a blue tight component~$B_*$ of~$H$ such that \begin{enumerate}[label = \upshape (\roman*)] \item \label{two_components_i} $\s{R_*^3[U]} \geq \s{G^{\red}[U]} - 8 \gamma^{1/5}n^3$ and $\s{B_*^3[U]} \geq \s{G^{\blue}[U]} - 8 \gamma^{1/5}n^3$, \item \label{two_components_ii} $\s{(R_*^3 \cup B_*^3)[U]} \geq (1-\gamma^{1/6})\binom{\s{U}}{3}$ and \item \label{two_components_iii} $R_* = R$ or $B_* = B$. \end{enumerate} \end{claim} \begin{proofclaim} First we show that, for each $u \in U$, there exists $J_u \subseteq G_u[U]$, where~$G_u$ is the link graph of~$G$ at~$u$, such that $\s{J_u} \geq \s{G_u[U]} - \alpha^{1/149}n^2$ and $R(e \cup u) = R(e' \cup u)$ for $e, e' \in J_u^{\red}$ and $B(e \cup u) = B(e' \cup u)$ for $e, e' \in J_u^{\blue}$. To show this, fix $u \in U$. Without loss of generality assume that $u \in U^{\red}$. By \cref{lem:5-blueprint}, $d_{\partial R^3}(u,U) \geq \s{U} - \alpha^{1/75}n$. Let $U_* = N_{\partial R^3}(u,U)$. Clearly, $\s{U_*} \geq \eta n/2$ and $G_u^{\red}[U_*] \subseteq R^3_u$. Moreover, for all $x \in U_*$, we have $d_G(ux) > 0 $ and thus, since~$G$ is $(1-\alpha^{1/37}, \alpha^{1/37})$-dense, $d_G(ux) \geq (1-\alpha^{1/37})n$. It follows that $\delta(G_u[U_*]) \geq \s{U_*} - \alpha^{1/37}n$. Thus by applying \cref{lem:reducing_components} with $R^3, u, U_*, \alpha^{1/37}$ playing the roles of $R_G, S, U, \delta$, there exists $J_u \subseteq G_u[U_*] \subseteq G_u[U]$ such that \[ \s{J_u} \geq \s{G_u[U_*]} - 7\alpha^{1/148}n^2 \geq \s{G_u[U]} - \alpha^{1/75}n^2 - 7\alpha^{1/148}n^2 \geq \s{G_u[U]} - \alpha^{1/149}n^2 \] and $H(u \cup e) = H(u \cup e')$ for $e, e' \in J_u$ of the same colour. Now consider the auxiliary multi-$3$-graph $D = \bigcup_{u \in U} \{ e \cup u \colon e \in J_u\}$. Note that \[ \s{D} = \sum_{u \in U} \s{J_u} \geq \sum_{u \in U}\left(\s{G_u[U]} - \alpha^{1/149}n^2\right) \geq 3\s{G[U]} - \alpha^{1/149}n^3. \] Let~$F$ be the subgraph of $G[U]$ for which $e \in F$ if and only if $e$ is an edge of multiplicity $3$ in~$D$. Since~$G$ is $(1-\alpha^{1/37}, \alpha^{1/37})$-dense, \cref{prop:edges_in_dense} implies that $\s{G} \geq (1 - 2 \alpha^{1/37}) \binom{n}{3}$. Hence \begin{align*} \s{G[U]} &\geq \binom{\s{U}}{3} - 2\alpha^{1/37}\binom{n}{3} \geq \binom{\s{U}}{3} - 2\alpha^{1/37} \binom{\s{U}/\eta}{3} \\ &\geq \binom{\s{U}}{3} - \frac{4\alpha^{1/37}}{\eta^3}\binom{\s{U}}{3} \geq (1-\alpha^{1/38})\binom{\s{U}}{3}. \end{align*} Therefore $\s{F} \geq \s{G[U]} - \alpha^{1/149}n^3 \geq (1-\alpha^{1/150})\binom{\s{U}}{3}$. Recall that $\gamma = \alpha^{1/1110}$. By \cref{prop:dense}, there exists a $(1-\gamma^{1/5}, \gamma^{1/5})$-dense subgraph $\widetilde{F} \subseteq F$ with $V(\widetilde{F}) = V(F) = U$ and, by \cref{prop:edges_in_dense}, $\s{\widetilde{F}} \geq (1-2\gamma^{1/5}) \binom{\s{U}}{3}$. Hence $\s{\widetilde{F}^{\red}} \geq \s{G^{\red}[U]} - 2\gamma^{1/5}n^3$. Let $S^{\red} = \{x \in U \colon d_{\widetilde{F}^{\red}}(x) \geq 6 \gamma^{1/5} n^2\}$. Let $F_0^{\red}$ be the subgraph of $\widetilde{F}^{\red}$ consisting of all edges that contain a vertex in $S^{\red}$. Note that $\s{F_0^{\red}} \geq \s{\widetilde{F}^{\red}} - 6 \gamma^{1/5}n^3 \geq \s{G^{\red}[U]} - 8\gamma^{1/5}n^3$.
We claim that all the edges in $F_0^{\red}$ induce the same red tight component $R_*$ in $H$. Let $e, e' \in \widetilde{F}^{\red}$ with $u \in e \cap e'$. Note that $e \setminus u, e' \setminus u \in J_u^{\red}$ and so $R(e) = R(e')$. Hence edges in the same loose component of $\widetilde{F}^{\red}$ induce the same red tight component in $H$. In particular, since $F_0^{\red} \subseteq \widetilde{F}^{\red}$, for $u \in S^{\red}$, all the edges in $N_{F_0^{\red}}(u)$ induce the same red tight component $R(u)$ of $H$. Let $u, v \in S^{\red}$. We want to show that $R(u) = R(v)$. We may assume that $u$ and $v$ are in distinct loose components $L$ and $L'$ of $\widetilde{F}^{\red}$, respectively. In particular, any edge of~$\widetilde{F}$ that intersects both~$V(L)$ and~$V(L')$ is in~$\widetilde{F}^{\blue}$. If $u, v \in V^{\red}$, then $d_{\partial R^3}(u), d_{\partial R^3}(v) \geq (1-\alpha^{1/75})n$, implying $R(u) = R = R(v)$. Thus we may assume that one of $u$ and $v$ is in $V^{\blue}$, say $v \in V^{\blue}$. Let $\Gamma_L(u) = \{ u' \in V(L) \colon d_L(uu') \geq \gamma^{1/5}n\}$ and $\Gamma_{L'}(v) = \{ v' \in V(L') \colon d_{L'}(vv') \geq \gamma^{1/5}n\}$. It is easy to see that $\s{\Gamma_L(u)}, \s{\Gamma_{L'}(v)} \geq 5 \gamma^{1/5}n$. Let~$D'$ be the bipartite directed graph with parts $\Gamma_L(u)$ and $\Gamma_{L'}(v)$ such that, for $u' \in \Gamma_L(u)$, \begin{align*} N_{D'}^+(u') = \{v' \in \Gamma_{L'}(v) \colon &uu'v' \in \widetilde{F}^{\blue} \text{ and } uu'u''v' \in \partial R(uu'u'') \cap \partial B(uu'v') \\ &\text{and } uu'u''vv' \in H \text{ for some } u'' \in N_L(uu')\}, \end{align*} and, for $v' \in \Gamma_{L'}(v)$, \begin{align*} N_{D'}^+(v') = \{u' \in \Gamma_{L}(u) \colon &vv'u' \in \widetilde{F}^{\blue} \text{ and } u'v \in \partial B^3 \text{ and } vv'v''u' \in \partial B \cap \partial R(vv'v'') \\ &\text{and } vv'v''uu' \in H \text{ for some } v'' \in N_{L'}(vv')\}. \end{align*} By \cref{lem:shadow}, the fact that $\widetilde{F}$ is $(1-\gamma^{1/5}, \gamma^{1/5})$-dense and the fact that $H$ is $(1-\varepsilon, \alpha)$-dense, we have, for $u' \in \Gamma_L(u)$, \[ d_{D'}^+(u') \geq \s{\Gamma_{L'}(v)} - \gamma^{1/5}n - \varepsilon^{1/4}n - \varepsilon n > \s{\Gamma_{L'}(v)}/2. \] Similarly, also using the fact that $d_{\partial B^3}(v) \geq (1-\alpha^{1/75})n$, we have, for $v' \in \Gamma_{L'}(v)$, \[ d_{D'}^+(v') \geq \s{\Gamma_{L}(u)} - \gamma^{1/5}n -\alpha^{1/75}n - \varepsilon^{1/4}n - \varepsilon n > \s{\Gamma_{L}(u)}/2. \] It follows that~$D'$ contains a double edge~$u'v'$, where $u' \in \Gamma_{L}(u)$ and $v' \in \Gamma_{L'}(v)$. Let $u'' \in N_{L}(uu')$ and $v'' \in N_{L'}(vv')$ be the vertices that are guaranteed to exist by the definition of~$D'$. Since $u'v \in \partial B^3$, we have that $vv'u' \in B^3$ and thus also $uu'v' \in B^3$. As $B[U] = \varnothing$, we have $vv'v''uu', uu'u''vv' \in H^{\red}$ and thus $R(uu'u'') = R(vv'v'')$. Hence $R(u) = R(v)$. We define~$F_0^{\blue}$ and $B_*$ in an analogous way. This proves \ref{two_components_i}. Note that \ref{two_components_ii} follows from \ref{two_components_i} using the facts $\s{U} \geq \eta n$ and $\s{G[U]} \geq (1- \alpha^{1/38})\binom{\s{U}}{3}$, which were noted earlier in this proof. We will now prove \ref{two_components_iii}. We distinguish between two cases.
\begin{enumerate}[label=\textbf{Case \arabic*:},wide, labelwidth=0pt,labelindent=0pt,parsep=0pt] \item \boldmath $\s{U^{\red}}, \s{U^{\blue}} \geq \gamma^{1/13}n.$ \unboldmath \\ By \cref{claim:structure}, we have $\max\{\s{R^3[U]}, \s{B^3[U]}\} \geq \frac{1}{2} \s{U^{\red}}\s{U^{\blue}}\s{U} - 3\alpha^{1/155}n^3$. Since $\frac{1}{2} \s{U^{\red}}\s{U^{\blue}}\s{U} - 3\alpha^{1/155}n^3 \geq \frac{1}{2}\gamma^{2/13}\eta n^3 - 3 \alpha^{1/155}n^3 \geq 2 \gamma^{1/6}n^3$, we have $R_*^3 \cap R^3 \neq \varnothing$ or $B_*^3 \cap B^3 \neq \varnothing$ and thus $R_* = R$ or $B_* = B$. \item \boldmath $\s{U^{\blue}} \leq \gamma^{1/13}n$ \textbf{or} $\s{U^{\red}} \leq \gamma^{1/13}n.$ \unboldmath \\ Say $\s{U^{\blue}} \leq \gamma^{1/13}n$. Then $\s{U^{\red}} = \s{U} - \s{U^{\blue}} \geq \s{U} - \gamma^{1/13}n$. Let $Q^3 = \{ T \in \binom{U}{3} \colon \binom{T}{2} \cap \partial R^3 \neq \varnothing\}$. Since $d_{\partial R^3}(u, U) \geq \s{U} - \alpha^{1/75}n$ for $u \in U^{\red}$, there can be at most $\s{U^{\red}}\alpha^{2/75}n^2$ triples that intersect~$U^{\red}$ and are not in~$Q^3$. Hence \begin{align*} \s{Q^3} &\geq \binom{\s{U}}{3} - \s{U^{\blue}}^3 - \s{U^{\red}}\alpha^{2/75}n^2 \\ &\geq \binom{\s{U}}{3} - \gamma^{3/13}n^3 - \alpha^{2/75}n^3 \geq \binom{\s{U}}{3} - 2\gamma^{1/5}n^3. \end{align*} Note that $\s{R^3[U]} \geq \s{Q^3 \cap G^{\red}[U]} \geq \s{G^{\red}[U]} - 2\gamma^{1/5}n^3$. Therefore, we have $R_* = R$. \qedhere \end{enumerate} \end{proofclaim} We define $R_\diamond = R \cup R_*$ and $B_\diamond = B \cup B_*$. Note that, by \cref{claim:two_components}\ref{two_components_iii}, $R_\diamond \cup B_\diamond$ is the union of at most three monochromatic tight components. Let $M_\diamond$ be a maximal matching in $(R_\diamond \cup B_\diamond)[V^*]$ containing $M$. Let $W = V^*\setminus V(M_\diamond)$. Since $M \subseteq M_\diamond$, we have $W \subseteq U$. By the initial assumption, we have $\s{W} \geq \eta n$. Note that $(R_* \cup B_*)[W] = \varnothing$ and, since $W \subseteq U$, $\s{(R_*^3\cup B_*^3)[W]} \geq \binom{\s{W}}{3}- \gamma^{1/6}n^3$. The following claim shows that almost all the edges in~$G[W]$ are of the same colour. \begin{claim} \label{claim:one_component} We have $\s{R_*^3[W]} \geq \binom{\s{W}}{3} - \gamma^{1/9}n^3$ or $\s{B_*^3[W]} \geq \binom{\s{W}}{3} - \gamma^{1/9}n^3$. \end{claim} \begin{proofclaim} Let $G_* = R_*^3 \cup B_*^3$. We define \begin{align*} W_\textup{red} = \{u \in W &\colon d_{G_*}(u,W) \geq 2\alpha n^2\text{ and } d_{B_*^3}(u,W) < \alpha n^2\}, \\ W_\textup{blue} = \{u \in W &\colon d_{G_*}(u,W) \geq 2\alpha n^2\text{ and } d_{R_*^3}(u,W) < \alpha n^2\}, \\ W_0 = \{u \in W &\colon d_{G_*}(u, W) < 2 \alpha n^2\}. \end{align*} Since $(R_* \cup B_*)[W] = \varnothing$, by \cref{lem:vertexdeg}, $W_\textup{red}, W_\textup{blue}$ and~$W_0$ partition~$W$. Let~$J$ be the subgraph of~$G_*[W]$ obtained by deleting all red edges containing a vertex in $W_\textup{blue} \cup W_0$ and all blue edges containing a vertex in $W_\textup{red} \cup W_0$. Note that $\s{J} \geq \s{G_*[W]} - 2\alpha n^3 \geq (1- \gamma^{1/7})\binom{\s{W}}{3}$ and $J \subseteq \binom{W_\textup{red}}{3} \dot\cup \binom{W_\textup{blue}}{3}$. Hence \begin{align} \label{ub} (1-\gamma^{1/7})\binom{\s{W}}{3} \leq \binom{\s{W_\textup{red}}}{3} + \binom{\s{W_\textup{blue}}}{3}. \end{align} Suppose that $\s{W_\textup{red}}, \s{W_\textup{blue}} \leq (1-\gamma^{1/8}) \s{W}$. By (\ref{ub}), we may assume without loss of generality that $\s{W_\textup{red}} \geq \s{W}/2$.
Noting that $x \mapsto x^3 + (\s{W} -x)^3 $ is an increasing function for $x \geq \s{W}/2$, we have \begin{align*} \binom{\s{W_\textup{red}}}{3} + \binom{\s{W_\textup{blue}}}{3} &\leq \frac{1}{6}\br{\s{W_\textup{red}}^3 + \s{W_\textup{blue}}^3} \leq \frac{1}{6}\br{\s{W_\textup{red}}^3 + \br{\s{W} - \s{W_\textup{red}}}^3} \\ &\leq ((1-\gamma^{1/8})^3 + \gamma^{3/8})\frac{\s{W}^3}{6} < (1-\gamma^{1/7})\binom{\s{W}}{3}, \end{align*} a contradiction to (\ref{ub}). Hence at least one of $W_\textup{red}$ and $W_\textup{blue}$ has size at least $(1-\gamma^{1/8})\s{W}$. Without loss of generality assume $\s{W_\textup{red}} \geq (1 - \gamma^{1/8})\s{W}$. Note that any edge of~$J$ contained in $W_\textup{red}$ is in~$R_*^3$, hence \begin{align*} \s{R^3_*[W]} \geq \s{J} - \s{W \setminus W_\textup{red}}n^2 \geq \binom{\s{W}}{3} - \gamma^{1/9}n^3. \end{align*} This proves the claim. \end{proofclaim} Now assume without loss of generality that $\s{R_*^3[W]} \geq \binom{\s{W}}{3} - \gamma^{1/9}n^3$. Note that almost all edges in $H[W]$ are blue (otherwise there would have to be an edge in $R_*[W]$, which would contradict the maximality of $M_\diamond$). More precisely, we have \[ \s{H^{\blue}[W]} \geq \frac{3!}{5!} \s{R_*^3[W]}(\s{W} - 3\sqrt{\varepsilon}n)(\s{W} - \varepsilon n) \geq (1-\gamma^{1/10})\binom{\s{W}}{5}. \] By \cref{prop:edges_in_dense,prop:dense}, there exists a $(1-\gamma^{1/1010}, \gamma^{1/1010})$-dense tightly connected subgraph $\widetilde{H}^{\blue}$ of $H^{\blue}[W]$ with $V(\widetilde{H}^{\blue}) = W$ and $\s{\widetilde{H}^{\blue}} \geq (1 - 2\gamma^{1/1010})\binom{\s{W}}{5}$. By an easy greedy argument, there exists a matching $M'$ in $\widetilde{H}^{\blue}$ that covers all but at most $\eta n$ of the vertices in~$W$. The matching $M' \cup M_\diamond$ covers all but at most $3\eta n$ of the vertices of~$H$. This contradicts the initial assumption. \qedhere \end{proof} \section{Concluding Remarks} \label{sec:concluding} For $k \ge 3$, let $f(k)$ be the minimum integer $m$ such that, for all large $2$-edge-coloured complete $k$-graphs, there exist $m$ vertex-disjoint monochromatic tight cycles covering almost all vertices. Note that $f(k)$ is well defined by~\cite{Bustamante2020}, but the bound there is very large. We have $f(3) = 2$ by~\cite{Bustamante2017}. \cref{thm:1,thm:2} imply $f(4) = 2$ and $f(5) \leq 4$, respectively. In general, we believe that $f(k) = 2$ for all~$k$. However, new ideas may be needed, as indicated by the following example. Let $A$ and $B$ be disjoint vertex sets. Let $K^{(k)}(A,B)$ be the $2$-edge-coloured complete $k$-graph on $A \cup B$ such that an edge $e$ is red if and only if $\s{e \cap A}$ is even. Now suppose that $\s{A} \gg \s{B}$. If $K^{(k)}(A,B)$ contains two vertex-disjoint monochromatic tight cycles of distinct colours covering almost all vertices, then one of the two cycles must lie entirely in~$A$, in the monochromatic tight component formed by the edges contained in~$A$. However, this tight component is not induced by any edge in the blueprint of $K^{(k)}(A,B)$ (which is $K^{(k-2)}(A,B)$ with colours swapped). Thus we ask the weaker question of whether one can bound $f(k)$ by some suitable function of~$k$. \section*{Acknowledgements} We thank Richard Lang and Nicol\'as Sanhueza-Matamala for their helpful comments. \bibliographystyle{abbrv}
{ "timestamp": "2020-12-17T02:15:46", "yymm": "2012", "arxiv_id": "2012.08875", "language": "en", "url": "https://arxiv.org/abs/2012.08875" }
\section*{Acknowledgment} The authors would like to thank the InfoRE company for the data contribution, the ReML-AI research group\footnote{\url{https://reml.ai}} for the data contribution and financial support, and the twenty-three annotators for their hard work in support of the shared task. Without their support, the task would not have been possible. \section{The ReINTEL 2020 Challenge} \subsection{Dataset Splitting} Splitting the data for a challenge is a difficult process: one must avoid evidence ambiguity and concept drift, which are the main causes of unstable rankings in data challenges. In this competition, we apply RDS~\cite{nguyen2020reinforced} to split the ReINTEL data into three sets: a public training set, a validation set, and a private test set. It is worth mentioning that RDS is a method to approximate optimal sampling for model diversification, with ensemble rewarding to attain maximal machine learning potential. Its novel stochastic choice rewarding serves as a viable mechanism for injecting model diversity into reinforcement learning. \subsubsection{Baselines} To apply RDS~\cite{nguyen2020reinforced} to the data splitting process, baseline learners are required to obtain rewards for the reinforced process. It is recommended to choose representative baseline learners so that the reinforced learner can better capture different learning behaviors. The use of these baseline learners is important since each learner behaves differently depending on the patterns contained in the target data. As a result, RDS helps to increase the diversity of the data samples across the different sets. Here we employ three models to classify reliable news using textual features, as follows: \begin{itemize} \item \textbf{Bi-LSTM}~\cite{Schuster:bi-lstm} is a bi-directional LSTM model. It has two LSTMs: one takes the input sequence in the forward direction, and the other takes the input sequence in the backward direction. The Bi-LSTM architecture increases the amount of information available to the network, gaining better performance in most sequence-related tasks. The Bi-LSTM network is a standard baseline for most text classification tasks. \item \textbf{CNN-Text}~\cite{kim-2014-convolutional} applies a CNN~\cite{CNN:1989} to word embeddings to perform classification. This simple architecture outperformed all other models at publication time. \item \textbf{EasyEnsemble}~\cite{EasyEnsemble:2009} represents a traditional approach to dealing with imbalanced datasets. For the vectorization, we trained a Sent2Vec~\cite{pgj2017unsup} model using the combined 1GB of Vietnamese Wikipedia text~\cite{vu:2019n} and 19GB of text from~\newcite{newscorpus:2018}. \end{itemize} \subsubsection{Learning Dynamics} To disentangle dataset shift and evidence ambiguity in the data splitting strategy, we apply the RDS stochastic choice reward mechanism~\cite{nguyen2020reinforced} to create the public training, public testing, and private testing sets. Figure~\ref{fig:learning_sto} illustrates the learning dynamics towards this goal.
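For concreteness, the following is a hedged sketch of the surrounding splitting mechanics only, with a plain stratified split from scikit-learn standing in for the reinforced, reward-driven selection of RDS~\cite{nguyen2020reinforced}; the split ratios are illustrative assumptions, not the ratios used in the challenge:
\begin{verbatim}
# Hedged stand-in for the splitting step: stratified sampling into
# public-train / public-test / private-test. The reinforced, reward-driven
# selection of RDS itself is NOT reproduced here; ratios are assumptions.
from sklearn.model_selection import train_test_split

def three_way_split(texts, labels, seed=42):
    # 80% public train, then split the rest 50/50 into public/private test
    x_train, x_rest, y_train, y_rest = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=seed)
    x_pub, x_priv, y_pub, y_priv = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_pub, y_pub), (x_priv, y_priv)
\end{verbatim}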
\begin{figure}[h] \centering \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{images/ReINTEL_STO_train_rest.pdf} \caption{Training Set} \label{fig:reintel_sto_trainrest} \end{subfigure} \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{images/ReINTEL_STO_vocab50k_valtest.pdf} \caption{Val \& Test Sets} \label{fig:reintel_sto_valtest} \end{subfigure} \caption{Learning dynamics for splitting the data into 3 sets (public training, public testing, and private testing) using the RDS Stochastic Choice Reward Mechanism~\cite{nguyen2020reinforced}.} \label{fig:learning_sto} \end{figure} \section{Results} \subsection{Data Format} Each instance includes 8 main attributes, with or without a binary target label. Table \ref{table: data_attribute} summarizes the key features of each attribute. \begin{table*}[ht!] \begin{tabular}{@{}llp{11.5cm}@{}} \toprule \textbf{No} & \textbf{Attribute} & \textbf{Description} \\ \midrule 1 & id & Unique ID of each post \\ 2 & user\_name & Anonymized post owner's identity \\ 3 & post\_message & Text content of the post \\ 4 & timestamp\_post & The time when the post was uploaded \\ 5 & num\_like\_post & Number of likes that the post received \\ 6 & num\_comment\_post & Number of comments that the post received \\ 7 & num\_share\_post & Number of shares that the post received \\ 8 & image & The image uploaded with the post \\ 9 & label & \begin{tabular}[c]{@{}l@{}}Manually annotated label indicating the reliability of the post\\ 1: Unreliable\\ 0: Reliable\end{tabular} \\ \bottomrule \end{tabular} \caption{Data attributes} \label{table: data_attribute} \end{table*} \subsection{Training/Testing Data} The challenge provides approximately 8,000 training examples with the respective target labels. The testing set consists of 2,000 examples without labels. \subsection{Result Submission} Participants must submit their results in the same order as the testing set, in the following format: \begin{verbatim} id1, label probability 1 id2, label probability 2 … \end{verbatim} \subsection{Evaluation Metric} The challenge task is evaluated using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC), a typical metric for classification tasks. Let us denote by $X$ a \textit{continuous random variable} that measures the `classification' score of a given news item. As a binary classification task, the news item is classified as \textit{"unreliable"} if $X$ is greater than a threshold parameter $T$, and \textit{"reliable"} otherwise. We denote by $f_1(x), f_0(x)$ the probability density functions of the score for \textit{"unreliable"} and \textit{"reliable"} news, respectively; hence the true positive rate $TPR(T)$ and the false positive rate $FPR(T)$ are computed as follows: \begin{align} TPR(T) &= \int_{T}^{\infty}f_1(x)dx \\ FPR(T) &= \int_{T}^{\infty}f_0(x)dx \end{align} and, since $FPR'(T) = -f_0(T)$, the AUC-ROC score is computed as: \begin{align} AUC\_ROC &= \int_{\infty}^{-\infty}TPR(T)FPR'(T)dT = \int_{-\infty}^{\infty}TPR(T)f_0(T)dT \end{align} Here, submissions are evaluated against ground-truth labels using the \textit{scikit-learn} implementation \footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html}}. \begin{table*}[h!] \caption{Top 6 teams on public-test and private-test with submitted papers and their final approaches.
The rank is based on the ROC-AUC scores on the private-test.} \centering \label{table2:approaches} \scalebox{0.85}{ \begin{tabular}{l|p{1.8cm}|c|c|p{6.6cm}|l|l} \toprule \multirow{2}{*}{\#} & \multirow{2}{*}{Team} & \multicolumn{2}{c|}{ROC-AUC} & \multirow{2}{*}{Final Approach} & \multirow{2}{*}{Ensemble?} & \multirow{2}{*}{Multimodal?} \\ \cline{3-4} & & Public-test & Private-test & & & \\ \hline 1 & Kurtosis & 0.9399 & \textbf{0.9521} & TF-IDF + SVD; Emb + SVD; NB, LightGBM, CatBoost & Yes & No\\ \hline 2 & NLP\_BK & 0.9360 & 0.9513 & Bert4News + phoBERT + XLM + MetaFeatures & Yes & No \\ \hline 3 & SunBear & {0.9418} & 0.9462 & RoBerta + MLP & Yes & No \\ \hline 4 & uit\_kt & - & 0.9452 & phoBERT + Bert4News & Yes & No\\ \hline 5 & Toyo-Aime & \textbf{0.9427} & 0.9449 & CNN + Bert + Fully connected & Yes & Yes \\ \hline 6 & ZaloTeam & - & 0.9378 & viBERT + viELECTRA + phoBERT & Yes & No \\ \bottomrule \end{tabular} } \end{table*} \subsection{Participation} Over the two-month course of the competition, 61 participants signed up for the challenge. 30\% of the participants competed in teams of two (6 teams) or four members (2 teams). 19 participants signed our corpus usage agreement. From the top 8 of the private-test leaderboard, 6 teams/participants submitted technical reports demonstrating their strategies and findings from the challenge. A summary of the competition participation can be seen in Table \ref{table: participation_summary}. \begin{table}[h!] \begin{tabular}{@{}lp{2.6cm}@{}} \toprule \textbf{Metric} & \textbf{Value} \\ \midrule Number of participants & 61 \\ Number of teams & 8 \\ Number of signed agreements & 19 \\ Number of submitted papers & 6 \\ \bottomrule \end{tabular} \caption{Participation summary} \label{table: participation_summary} \end{table} \subsection{Outcomes} In total, 657 successful entries were recorded. The highest results of the public-test and private-test phases were 0.9427 and 0.9521, respectively. Key descriptive statistics of the results in each phase are given in Table \ref{table: results_summary}. \begin{table}[h!] \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}llll@{}} \toprule & \textbf{Public Test} & \textbf{Private Test} & \textbf{Overall} \\ \midrule Total Entries & 571 & 86 & 657 \\ Highest ROC & 0.9427 & 0.9521 & 0.9474 \\ Mean ROC & 0.8463 & 0.8942 & 0.8703 \\ Std. ROC & 0.1215 & 0.1022 & 0.1119 \\ \bottomrule \end{tabular} } \caption{Results summary} \label{table: results_summary} \end{table} \section{Conclusion} The rise of misleading information on social media platforms has triggered the need for fact-checking and fake news detection. Therefore, the reliability of news has become a critical question in the modern age. In this paper, we introduce a novel dataset of nearly 10,000 SNS entries with reliability labels. The dataset covers a great variety of topics, ranging from healthcare to entertainment and economics. The annotation and validation processes are presented in detail, with several filtering rounds. With both linguistic and visual features, we believe that the corpus is suitable for future research on fake news detection and news distributor behaviours using NLP and computer vision techniques. In Vietnam, where datasets on SNSs are scarce, our corpus will serve as reliable material for other research. \section{Introduction} This challenge aims at identifying the reliability of information shared on social network sites (SNSs). With the rapid growth of SNSs (e.g.
Facebook, Zalo and Lotus), there are approximately 65 million Vietnamese users on board, with annual growth of 2.7 million in the most recent year, as reported by Digital 2020\footnote{\url{https://wearesocial.com/digital-2020}}. SNSs have become widely accessible platforms where users not only connect with friends but also freely create and share diverse content \cite{shu2017fake,zhou2019fake}. A number of users, however, have exploited these social platforms to distribute fake news and unreliable information to fulfill their personal or political purposes (e.g. the US election in 2016 \cite{allcott2017social}). It is not easy for ordinary users to recognize the unreliability; hence, they keep spreading fake content to their friends. The problem becomes more serious once an unreliable post becomes popular and gains belief in the community. This raises an urgent need to detect whether a piece of news on SNSs is reliable or not, a task that has gained significant attention recently~\cite{CSI:2017,shu2019defend,shu2019beyond,yang2019unsupervised}. The shared task focuses on reliable information identification on Vietnamese SNSs, referred to as ReINTEL. It is a part of the 7th annual workshop on Vietnamese Language and Speech Processing, VLSP 2020\footnote{\url{https://vlsp.org.vn/vlsp2020}} for short. As a binary classification task, participants are required to propose models to determine the reliability of SNS posts based on their content, image and metadata information (e.g. the numbers of likes, shares, and comments). The shared task consists of three phases, namely \textit{Warm up, Public Test, Private Test}, hosted on Codalab from October 21st, 2020 to November 30th, 2020. In summary, around 1,000 submissions were created by 8 teams and over 60 participants during the challenge period. As our first contribution, this shared task provides an evaluation framework for the reliable information detection task, where participants could leverage and compare their innovative models on the same dataset. Their knowledge contributions may help improve safety on online social platforms. Another valuable contribution is the introduction of a novel dataset for the reliable information detection task. The dataset is built on a fair human annotation of over 10,000 news items from SNSs in Vietnam. We hope this dataset will be a useful benchmark for further research. In this shared task, AUC-ROC is utilized as the primary evaluation metric. The remainder of the paper is organized as follows. The next section describes the data collection and annotation methodologies. Subsequently, the shared task description and evaluation are summarized in Section 3. In Section 4, we discuss the potential of language and vision transfer learning for the detection task. Section 5 describes the competition, approaches and respective results. Finally, Section 6 concludes the paper by suggesting potential applications for future studies and challenges. \section{The ReINTEL 2020 Dataset} \subsection{Data Collection} \begin{figure*} \centering \includegraphics[width=.97\linewidth,height=7.0cm]{images/AnnotationTool.pdf} \caption{Data Annotation Tool} \label{fig:reintel_anno_tool} \end{figure*} We collected the data over two months, from August to October 2020. There are two main sources of the data: SNSs and Vietnamese newspapers. As for the former source, public social media posts were retrieved from news groups and key opinion leaders (KOLs).
Much fake news, however, has been flagged and removed from social networking sites since the enforcement of the Vietnamese cybersecurity law in 2019 \cite{tuanson_2018}. Therefore, to include the deleted fake news, we gathered newspaper articles reporting these posts and recreated their content. All the collected data were originally posted in the period of March--June 2020. During this time, Vietnam was facing a second wave of Covid-19, with a drastic increase from 20 to 355 cases \cite{who}. The spread of Covid-19 resulted in an ‘infodemic’ in which misleading information was disseminated rapidly, especially on social media \cite{hou2020assessment, huynh2020covid}. Hence, this period is a rich source of fake news. Besides Covid-19, the items in our dataset cover a wide range of domains, including entertainment, sport, finance and healthcare. The result of the data collection stage is 10,007 items prepared for the annotation process. \subsection{Data Annotation} \label{dataset} \subsubsection{Annotator and Training} We recruited 23 human annotators to participate in the annotation process. The annotators received one week of training on identifying fact-related posts and on evaluating the reliability of a post based on primary features, including the news source, its image and its content. \subsubsection{Annotation Tool} Figure \ref{fig:reintel_anno_tool} shows the annotation tool interface, which is designed to support quick and easy annotation. The first section contains guideline questions to remind the annotators of the labeling criteria, including the news source credibility, the language appropriateness and the factual accuracy. The second section shows the post content, image and influence (i.e. the numbers of likes, comments and shares). In the third section, the annotators select a reliability score for the post. There is a 5-point reliability Likert scale for fact-based posts, with the following labels: 1 - Unreliable, 2 - Slightly unreliable, 3 - Neutral, 4 - Slightly reliable, 5 - Reliable. On the other hand, if the post is opinion-based and does not contain facts, the annotators should select the label ‘0 - No category’ instead. The last section is a list of labeled items that allows the annotators to review and, if necessary, update their decisions using the ‘Undo’ button. \subsubsection{Annotation Process} The annotation process was conducted from 9th to 19th October 2020. The annotators were divided into three groups to annotate the 10,007 items independently, so each item was annotated three times by different annotators. Once the annotators finished the 30,021 annotations (i.e. 10,007 items annotated three times), we filtered and summarized the results on a majority-vote basis. Firstly, we combine labels of the same essence: Categories 1 and 2 (Unreliable and Slightly unreliable) and Categories 4 and 5 (Slightly reliable and Reliable). After merging the categories, we select the majority votes as the final labels. If the majority vote is 1 or 2, the final label is 1 - Unreliable. If the majority vote is 4 or 5, the final label is 0 - Reliable. When the majority vote is 3 - Neutral, we finalize using ground truth labels. Lastly, if the majority agrees that the post is not fact-based (i.e. 0 - No category), we remove it from the set. For items with no majority vote (i.e. the three annotators have different opinions), we follow an alternate procedure. If the ground truth label is 1 - Unreliable, the final label is 1.
On the other hand, if the ground truth label is 0 - Reliable, we double-check to separate reliable news from opinion-based items. The process is illustrated in Figure \ref{fig:reintel_anno_process}. \begin{figure*} \centering \includegraphics[width=.99\linewidth,height=6.5cm]{images/AnnotationProcess.pdf} \caption{Data Annotation Process} \label{fig:reintel_anno_process} \end{figure*} \subsubsection{Content Filtering} Once the annotation process is finished, the data needs to go through one last step before being published for the competition: content filtering. In this step, we manually check to ensure that the data, including both text and images, published for the competition: \begin{enumerate} \item Does not violate any law, statute, ordinance, or regulation \item Will not give rise to any claims of invasion of privacy or publicity \item Does not contain, depict, include or involve any of the following: \begin{itemize} \item Political or religious views or other such ideologies \item Explicit or graphic sexual activity \item Vulgar or offensive language and/or symbols or content \item Personal information of individuals such as names, telephone numbers, and addresses \item Other forms of ethical violations \end{itemize} \end{enumerate} \section{Transfer Learning} Knowledge transfer has been found to be essential for downstream tasks with new datasets. If the transfer process is done correctly, it can greatly improve learning performance. Since ReINTEL is a multimodal challenge, both vision-based and language-based knowledge transfer were used by different teams. To ensure fairness among participants, we required all teams to register the pre-trained models they used. \autoref{tbl:pretrained_models} lists all pre-trained language and vision models registered by the participants. \subsection{Language Transfer Learning} For natural language processing tasks in Vietnamese, many pre-trained language models are available. In 2016, \newcite{word2vecvn_2016} introduced the first monolingual pre-trained models for Vietnamese based on Word2Vec~\cite{mikolov2013efficient}. The pre-trained Word2VecVN models proved useful in various tasks, such as named entity recognition~\cite{VU:2018}. In 2019, \newcite{vu:2019n} introduced the use of multiple pre-trained language models to achieve new state-of-the-art results in named entity recognition~\cite{Nguyen:19}. To date, many other monolingual language models for Vietnamese have become available, such as PhoBERT~\cite{phobert}, vElectra and ViBERT~\cite{the2020improving}.
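As a minimal sketch of how such a pre-trained encoder can be plugged into the reliability task (the model identifier \texttt{vinai/phobert-base} and the mean pooling are assumptions for illustration, not a description of any team's system; PhoBERT also expects word-segmented Vietnamese input):
\begin{verbatim}
# Hedged sketch: embedding a post with PhoBERT via HuggingFace transformers.
# The resulting vector could feed any downstream reliability classifier.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
encoder = AutoModel.from_pretrained("vinai/phobert-base")

def embed(post_message):
    inputs = tokenizer(post_message, return_tensors="pt",
                       truncation=True, max_length=256)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)  # mean-pooled post embedding
\end{verbatim}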
\begin{table*}[] \begin{tabular}{p{6cm}|l|l|p{5.7cm}} \toprule Model & Language & Vision & Description \\ \midrule Word2VecVN~\cite{word2vecvn_2016} & x & & Trained on 7GB texts of Vietnamese news \\ \hline FastText (Vietnamese version)~\cite{joulin2016fasttext} & x & & Trained on Vietnamese texts of the CommonCrawl corpus \\ \hline ETNLP~\cite{vu:2019n} & x & & Trained on 1GB texts of Vietnamese Wikipedia \\ \hline PhoBERT~\cite{phobert} & x & & Trained on 20GB texts of both Vietnamese news and Vietnamese Wikipedia \\ \hline Bert4News~\cite{bert4news} & x & & Trained on more than 20GB texts of Vietnamese news \\ \hline vElectra and ViBERT~\cite{the2020improving} & x & & vElectra was trained on 10GB texts, whereas ViBERT was trained on 60GB texts of Vietnamese news \\ \hline VGG16~\cite{simonyan2015deep} & & x & Trained on ImageNet~\cite{imagenet_cvpr09} \\ \hline YOLO~\cite{yolo:2018} & & x & Trained on ImageNet~\cite{imagenet_cvpr09} \\ \hline EfficientNet B7~\cite{EfficientNet:2019} & & x & Trained on ImageNet~\cite{imagenet_cvpr09} \\ \bottomrule \end{tabular} \caption{List of pre-trained models registered by all participants of the ReINTEL challenge in 2020.} \label{tbl:pretrained_models} \end{table*} \subsection{Vision Transfer Learning} Unlike language models, vision models are normally universal, and existing pre-trained models can be directly applied to most image processing tasks. Among the top 6 teams on the leaderboard, only one used multimodal features. This team, in fact, achieved $1^{st}$ rank on the public test (see \autoref{table2:approaches}), but it did not hold that rank on the private test. This hints that the reliability of news depends mainly on its content and other meta-information, such as the number of likes on social networks. Moreover, capturing the reliability of news using both vision and language information remains to be explored. \subsection{Language and Vision Transfer Learning} The use of both language and vision transfer learning is important for multimodal tasks. This line of research has attracted much attention, with various new language-vision models such as ViLBERT~\cite{vilbert:2019} and 12-in-1~\cite{12in1_cvpr:2020}. No participants employed this approach in the ReINTEL challenge, due to the lack of language-and-vision pre-trained models for Vietnamese. Moreover, applying this approach in a data challenge requires extensive computing resources. In the future, we expect to see more research in this direction because both images and texts are essential to SNS issues.
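To make this concrete, the following is a hedged sketch of the simplest possible language-and-vision combination, a late-fusion classifier over pre-computed embeddings (illustrative only; no participating team used this exact design, and the embedding dimensions are assumptions tied to typical text and image encoders):
\begin{verbatim}
# Hedged sketch of a late-fusion multimodal baseline: concatenate a text
# embedding with an image embedding and predict the reliability logit.
# Dimensions (768 for text, 1280 for image) are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalReliability(nn.Module):
    def __init__(self, text_dim=768, image_dim=1280, hidden=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1))  # logit for the "unreliable" class

    def forward(self, text_emb, image_emb):
        return self.head(torch.cat([text_emb, image_emb], dim=-1))
\end{verbatim}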
{ "timestamp": "2020-12-17T02:16:47", "yymm": "2012", "arxiv_id": "2012.08895", "language": "en", "url": "https://arxiv.org/abs/2012.08895" }
\section*{Introduction} Let $G$ be a real reductive group and $H\subseteq G$ a reductive subgroup. Then, for any irreducible unitary representation $(\pi,\mathcal{H}_\pi)$ of $G$, its restriction to $H$ decomposes into a direct integral of irreducible unitary representations $(\tau,\mathcal{H}_\tau)$ of $H$, i.e. there exists an $H$-equivariant unitary isomorphism \begin{equation} T:\mathcal{H}_\pi \to \int^\oplus_{\widehat{H}}\mathcal{H}_\tau\otimes\mathcal{M}_{\pi,\tau}\,d\mu_\pi(\tau),\label{eq:DirectIntegralDecomposition} \end{equation} where $\mathcal{M}_{\pi,\tau}$ is a family of Hilbert spaces (called \emph{multiplicity spaces}) and $\mu_\pi$ a Borel measure on the unitary dual $\widehat{H}$ of $H$. Here, $H$ acts only on the first factor of $\mathcal{H}_\tau\otimes\mathcal{M}_{\pi,\tau}$, so $$ m(\pi,\tau)=\dim\mathcal{M}_{\pi,\tau}\in\mathbb{N}\cup\{\infty\} $$ is the \emph{multiplicity of $\tau$ in $\pi$}. Note that the function $\tau\mapsto m(\pi,\tau)$ is unique up to a set of measure zero, and in this sense the direct integral decomposition is unique. The map $T$ is in general not \emph{pointwise defined}, i.e. there do not exist continuous linear maps $A_{\pi,\tau}:\mathcal{H}_\pi\to\mathcal{H}_\tau\otimes\mathcal{M}_{\pi,\tau}$ such that, for all $v\in\mathcal{H}_\pi$, $T(v)_\tau=A_{\pi,\tau}(v)$ for $\mu_\pi$-almost every $\tau\in\widehat{H}$. In fact, the existence of a non-zero $H$-equivariant continuous linear map $\mathcal{H}_\pi\to\mathcal{H}_\tau$ implies that $\tau$ occurs discretely inside $\pi|_H$, i.e. $\mu_\pi(\{\tau\})>0$. To also capture the continuous part of the decomposition, one has to restrict to a subspace of $\mathcal{H}_\pi$. We show that the dense subspace $\mathcal{H}_\pi^\infty$ of smooth vectors is sufficient for this purpose: \begin{thmalph}\label{thm:Main} The restriction of $T$ to the smooth vectors $\mathcal{H}_\pi^\infty$ is pointwise defined, i.e. for every $\tau\in\widehat{H}$ there exists an $H$-equivariant continuous linear map $A_{\pi,\tau}^\infty:\mathcal{H}_\pi^\infty\to\mathcal{H}_\tau\otimes\mathcal{M}_{\pi,\tau}$ such that, for every $v\in\mathcal{H}_\pi^\infty$, $$ T(v)_\tau = A_{\pi,\tau}^\infty(v) \qquad \mbox{for $\mu_\pi$-almost every $\tau\in\widehat{H}$.} $$ \end{thmalph} We remark that every $H$-equivariant continuous linear map $\mathcal{H}_\pi^\infty\to\mathcal{H}_\tau$ automatically maps into the smooth vectors $\mathcal{H}_\tau^\infty$ of $\tau$, i.e. is contained in $\Hom_H(\mathcal{H}_\pi^\infty,\mathcal{H}_\tau^\infty)$. Such operators are also called \emph{symmetry breaking operators} (see Kobayashi~\cite{Kob15}). The space of symmetry breaking operators has been studied intensively in connection with finite multiplicity/multiplicity one statements (see e.g. \cite{KO13,SZ12}) and branching laws for specific pairs of groups $(G,H)$ (see e.g. \cite{CO11,FW20,KS15,Moe17}). Theorem~\ref{thm:Main} shows that the direct integral decomposition \eqref{eq:DirectIntegralDecomposition} of an irreducible unitary representation can always be constructed in terms of symmetry breaking operators. This statement might have been common knowledge, but we could not find a reference in the literature.
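For orientation, we recall the familiar purely discrete case (a standard fact, stated here only for illustration): let $H=K$ be a maximal compact subgroup of $G$. By Harish-Chandra's admissibility theorem, $\pi|_K\cong\bigoplus_{\sigma\in\widehat{K}}\sigma^{\oplus m(\pi,\sigma)}$ with $m(\pi,\sigma)\leq\dim\sigma<\infty$, so $\mu_\pi$ may be taken to be the counting measure and the maps $A_{\pi,\sigma}$ are simply the orthogonal projections, defined on all of $\mathcal{H}_\pi$. Theorem~\ref{thm:Main} is thus of interest mainly when $H$ is non-compact and $\pi|_H$ has a non-trivial continuous part.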
Theorem~\ref{thm:Main} implies an upper bound for the multiplicity $m(\pi,\tau)$ in terms of the dimension of the space $\Hom_H(\pi^\infty|_H,\tau^\infty)=\Hom_H(\mathcal{H}_\pi^\infty,\mathcal{H}_\tau^\infty)$ of symmetry breaking operators between the smooth vectors of $\pi$ and $\tau$: \begin{coralph}\label{cor:Main} For $\mu_\pi$-almost every $\tau\in\widehat{H}$: $$ m(\pi,\tau)\leq\dim\Hom_H(\pi^\infty|_H,\tau^\infty). $$ \end{coralph} We remark that, although this upper bound is attained for some representations $\tau$, the right hand side might be strictly larger in many cases. This means that not every non-trivial symmetry breaking operator contributes to the decomposition of the unitary representation.\\ \textbf{Acknowledgments.} We thank an anonymous referee for pointing out the reference \cite{Li18}. The author was supported by a research grant from the Villum Foundation (Grant No. 00025373). \section{Direct integrals of Hilbert spaces} We briefly recall the construction of direct integrals of Hilbert spaces following the exposition in \cite[Section 8]{KS18}. All Hilbert spaces are assumed to be separable. Let $(\mathcal{H}_\lambda)_{\lambda\in\Lambda}$ be a family of Hilbert spaces indexed by a second-countable topological space $\Lambda$ and let $\mu$ be a $\sigma$-finite Borel measure on $\Lambda$. Denote by $\langle\cdot,\cdot\rangle_\lambda$ the inner product on $\mathcal{H}_\lambda$. We identify elements $s\in\prod_{\lambda\in\Lambda}\mathcal{H}_\lambda$ with sections $s:\Lambda\to\bigsqcup_{\lambda\in\Lambda}\mathcal{H}_\lambda$ satisfying $s(\lambda)\in\mathcal{H}_\lambda$ for every $\lambda\in\Lambda$. Suppose we are given a subspace $\mathcal{F}\subseteq\prod_{\lambda\in\Lambda}\mathcal{H}_\lambda$, called the space of \emph{measurable sections}, satisfying: \begin{itemize} \item For all $s,t\in\mathcal{F}$ the map $\Lambda\to\mathbb{C},\,\lambda\mapsto\langle s(\lambda),t(\lambda)\rangle_\lambda$ is measurable. \item If $s\in\prod_{\lambda\in\Lambda}\mathcal{H}_\lambda$ is such that $\lambda\mapsto\langle s(\lambda),t(\lambda)\rangle_\lambda$ is measurable for all $t\in\mathcal{F}$, then $s\in\mathcal{F}$. \item There exists a countable subset $(s_n)_{n\in\mathbb{N}}\subseteq\mathcal{F}$ such that $\{s_n(\lambda):n\in\mathbb{N}\}$ spans a dense subspace of $\mathcal{H}_\lambda$ for every $\lambda\in\Lambda$. \end{itemize} The family $(\mathcal{H}_\lambda)_{\lambda\in\Lambda}$ together with the measure $\mu$ and the subspace $\mathcal{F}$ of measurable sections is called a \emph{measurable family of Hilbert spaces}. The direct integral of such a family is defined as the Hilbert space $$ \int^\oplus_\Lambda\mathcal{H}_\lambda\,d\mu(\lambda) = \left.\left\{s\in\mathcal{F}:\int_\Lambda\langle s(\lambda),s(\lambda)\rangle_\lambda\,d\mu(\lambda)<\infty\right\}\right/\sim $$ of square integrable sections modulo the subspace of sections which are zero almost everywhere. This Hilbert space carries the obvious inner product. A continuous linear map $T:E\to\int^\oplus_\Lambda\mathcal{H}_\lambda\,d\mu(\lambda)$ from a topological vector space $E$ into a direct integral is said to be \emph{pointwise defined} if there exists a continuous linear map $T_\lambda:E\to\mathcal{H}_\lambda$ for every $\lambda\in\Lambda$ such that for every $v\in E$: $$ T(v)(\lambda)=T_\lambda(v) \qquad \mbox{for almost every }\lambda\in\Lambda. $$ \begin{theorem}[{Gelfand--Kostyuchenko, see e.g.
\cite[Theorem 1.5]{Ber88}}]\label{thm:GKThm} Every Hilbert--Schmidt operator $T:\mathcal{H}\to\int^\oplus_\Lambda\mathcal{H}_\lambda\,d\mu(\lambda)$ is pointwise defined. \end{theorem} The following statements are known by \cite[Lemma 1.3]{Ber88}. We include a short proof for convenience. \begin{lemma}\label{lem:DenseImageAndEquivariance} Assume that $T:\mathcal{H}\to\int^\oplus_\Lambda\mathcal{H}_\lambda\,d\mu(\lambda)$ is pointwise defined. \begin{enumerate} \item\label{lem:DenseImageAndEquivariance1} If $T$ has dense image, then almost every $T_\lambda:\mathcal{H}\to\mathcal{H}_\lambda$ has dense image. \item\label{lem:DenseImageAndEquivariance2} If $T$ is equivariant with respect to continuous representations of a Lie group $H$ on $\mathcal{H}$ and each $\mathcal{H}_\lambda$, then almost every $T_\lambda$ is equivariant. \end{enumerate} \end{lemma} \begin{proof} For part \eqref{lem:DenseImageAndEquivariance1}, let $(s_n)\subseteq\int^\oplus_\Lambda\mathcal{H}_\lambda\,d\mu(\lambda)$ be as in the definition of the direct integral. For each $n$ we let $$ \Lambda_n = \{\lambda\in\Lambda:s_n(\lambda)\notin\overline{T_\lambda(\mathcal{H})}\}. $$ We first show that every $\Lambda_n$ has measure zero. Assume to the contrary that some $\Lambda_n$ has positive measure. Then \begin{align*} \|s_n-T(v)\|^2 &= \int_\Lambda\|s_n(\lambda)-T_\lambda(v)\|^2\,d\mu(\lambda) \geq \int_{\Lambda_n} \|s_n(\lambda)-T_\lambda(v)\|^2\,d\mu(\lambda)\\ &= \int_{\Lambda_n} \|s_n(\lambda)_\perp\|^2+\|s_n(\lambda)_\|-T_\lambda(v)\|^2\,d\mu(\lambda) \geq \int_{\Lambda_n} \|s_n(\lambda)_\perp\|^2\,d\mu(\lambda), \end{align*} where $s_n(\lambda)_\|$ resp. $s_n(\lambda)_\perp$ denotes the orthogonal projection of $s_n(\lambda)$ to $\overline{T_\lambda(\mathcal{H})}$ resp. $T_\lambda(\mathcal{H})^\perp$. The latter integral is positive since $s_n(\lambda)_\perp\neq0$ on $\Lambda_n$ which is of positive measure. This shows that $\|s_n-T(v)\|^2\geq c$ for all $v\in\mathcal{H}$, where $c>0$ is independent of $v$, contradicting the fact that $T$ has dense image.\\ We have shown that $\Lambda_n$ is of measure zero for every $n$, hence the countable union $\bigcup_n\Lambda_n$ has measure zero. This implies for almost every $\lambda$ that $s_n(\lambda)\in\overline{T_\lambda(\mathcal{H})}$ for all $n$, so by the assumption that $(s_n(\lambda))_n$ spans a dense subspace of $\mathcal{H}_\lambda$ for every $\lambda$, we obtain that $\overline{T_\lambda(\mathcal{H})}=\mathcal{H}_\lambda$ for almost every $\lambda$. To show \eqref{lem:DenseImageAndEquivariance2}, denote by $\pi$ the representation of $H$ on $\mathcal{H}$ and by $\tau_\lambda$ the representation of $H$ on $\mathcal{H}_\lambda$, $\lambda\in\Lambda$. For fixed $h\in H$ and $v\in\mathcal{H}$, the $H$-equivariance of $T$ implies that $$ [T_\lambda\circ\pi(h)](v) = [\tau_\lambda(h)\circ T_\lambda](v) \qquad \mbox{for almost every }\lambda\in\Lambda. $$ Letting $h$ resp. $v$ run through a countable dense subset of $H$ resp. $\mathcal{H}$ shows the claim. \end{proof} \section{Sobolev norms on unitary representations} Let $G$ be a real reductive group, $\mathfrak{g}$ its Lie algebra and $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ a Cartan decomposition. Denote by $\Delta_\mathfrak{k}\in\mathcal{U}(\mathfrak{k})$ the Casimir element of $\mathfrak{k}$ with respect to an invariant inner product on $\mathfrak{k}$. For an irreducible unitary representation $(\pi,\mathcal{H})$ of $G$ we write $(\pi^\infty,\mathcal{H}^\infty)$ for the subrepresentation on the Fréchet space of smooth vectors.
For every $N\in\mathbb{N}$, $$ \|v\|_N^2 = \sum_{j=0}^N \|d\pi(\Delta_\mathfrak{k})^jv\|^2 \qquad (v\in\mathcal{H}^\infty) $$ defines a continuous norm $\|\cdot\|_N$ on $\mathcal{H}^\infty$ which dominates the Hilbert space norm $\|\cdot\|$ of $\mathcal{H}$. Therefore, the completion $\mathcal{H}^N$ of $\mathcal{H}^\infty$ with respect to the norm $\|\cdot\|_N$ naturally embeds into $\mathcal{H}$ and yields a scale of Hilbert spaces $$ \mathcal{H}^\infty\subseteq\ldots\subseteq\mathcal{H}^{N+1}\subseteq\mathcal{H}^N\subseteq\ldots\subseteq\mathcal{H}^0=\mathcal{H}. $$ \begin{lemma}\label{lem:SobolevEmbedding} For $4N>\dim\mathfrak{k}$ the embedding $\mathcal{H}^N\hookrightarrow\mathcal{H}$ is Hilbert--Schmidt. \end{lemma} \begin{proof} Note that we may assume $G$ to be connected by decomposing $\mathcal{H}$ into the direct sum of finitely many irreducible representations of the identity component $G_0$ of $G$.\\ Next, decompose $\mathcal{H}=\bigoplus_{\sigma\in\widehat{K}}\mathcal{H}[\sigma]$ into $K$-isotypic components. Then $d\pi(\Delta_\mathfrak{k})|_{\mathcal{H}[\sigma]}$ is a scalar multiple of the identity. To describe the scalar, we identify an irreducible representation $\sigma$ of $K$ with its highest weight in $i\mathfrak{t}^*$ with respect to a maximal torus $\mathfrak{t}\subseteq\mathfrak{k}$ and a system of positive roots. Then $$ d\pi(\Delta_\mathfrak{k})|_{\mathcal{H}[\sigma]} = -(|\sigma+\rho_\mathfrak{k}|^2-|\rho_\mathfrak{k}|^2)\id_{\mathcal{H}[\sigma]}, $$ where $|\cdot|$ denotes the norm on $\mathfrak{t}^*$ induced from the same invariant inner product on $\mathfrak{k}$ that was used for the construction of the Casimir element $\Delta_\mathfrak{k}$. It follows that the square of the Hilbert--Schmidt norm of the embedding $\mathcal{H}^N\hookrightarrow\mathcal{H}$ is given by $$ \sum_{\sigma\in\widehat{K}}\frac{\dim\mathcal{H}[\sigma]}{\sum_{j=0}^N(|\sigma+\rho_\mathfrak{k}|^2-|\rho_\mathfrak{k}|^2)^{2j}} \leq C\sum_{\sigma\in\widehat{K}} (1+|\sigma|)^{-4N}\dim\mathcal{H}[\sigma] $$ for some $C=C_N>0$, where $\rho_\mathfrak{k}\in i\mathfrak{t}^*$ is half the sum of all positive roots. By a result of Harish-Chandra~\cite[Theorem 4]{HC54} combined with the Weyl Dimension Formula, there exist $C',C''>0$ such that $$ \dim\mathcal{H}[\sigma] \leq C'(\dim\sigma)^2 \leq C''(1+|\sigma|)^{\dim\mathfrak{k}-\dim\mathfrak{t}} \qquad \mbox{for all }\sigma\in\widehat{K}. $$ Hence, the square of the Hilbert--Schmidt norm of the embedding $\mathcal{H}^N\hookrightarrow\mathcal{H}$ can be bounded by a multiple of $$ \sum_{\sigma\in\widehat{K}} (1+|\sigma|)^{\dim\mathfrak{k}-\dim\mathfrak{t}-4N}. $$ Summation is over the highest weight lattice in $i\mathfrak{t}^*$, hence the sum is finite if and only if $\dim\mathfrak{k}-\dim\mathfrak{t}-4N<-\dim\mathfrak{t}$. \end{proof} \begin{remark} Lemma~\ref{lem:SobolevEmbedding} essentially shows that $\mathcal{H}^\infty$ is a nuclear Fr\'{e}chet space, which is well-known by the work of Harish-Chandra. Using this, most of the statements in Theorem~\ref{thm:Main} and Corollary~\ref{cor:Main} are also contained in \cite[Chapter 3.3]{Li18}. \end{remark} \section{Proof of the main results} Combining Lemma~\ref{lem:SobolevEmbedding} with Theorem~\ref{thm:GKThm} shows that the restriction of $T$ to the Sobolev completion $\mathcal{H}_\pi^N$ is pointwise defined for $N$ sufficiently large.
Composing with the continuous linear embedding $\mathcal{H}_\pi^\infty\hookrightarrow\mathcal{H}_\pi^N$ and using Lemma~\ref{lem:DenseImageAndEquivariance}~\eqref{lem:DenseImageAndEquivariance2} shows Theorem~\ref{thm:Main}. Now let $A_{\pi,\tau}^\infty:\mathcal{H}_\pi^\infty\to\mathcal{H}_\tau\otimes\mathcal{M}_{\pi,\tau}$, $\tau\in\widehat{H}$, be as in Theorem~\ref{thm:Main}. For an orthonormal basis $(w_\alpha)_{\alpha=1,\ldots,m(\pi,\tau)}$ of $\mathcal{M}_{\pi,\tau}$, $m(\pi,\tau)=\dim\mathcal{M}_{\pi,\tau}\in\mathbb{N}\cup\{\infty\}$, write $$ A_{\pi,\tau}^\infty(v) = \sum_\alpha A_{\pi,\tau,\alpha}^\infty(v)\otimes w_\alpha \qquad (v\in\mathcal{H}_\pi^\infty) $$ with $A_{\pi,\tau,\alpha}^\infty\in\Hom_H(\mathcal{H}_\pi^\infty,\mathcal{H}_\tau)$. If $A_{\pi,\tau}^\infty$ has dense image, then the operators $A_{\pi,\tau,\alpha}^\infty$ have to be linearly independent, so $m(\pi,\tau)=\dim\mathcal{M}_{\pi,\tau}\leq\dim\Hom_H(\mathcal{H}_\pi^\infty,\mathcal{H}_\tau)$. Lemma~\ref{lem:DenseImageAndEquivariance}~\eqref{lem:DenseImageAndEquivariance1} implies that this is the case for almost every $\tau$, so Corollary~\ref{cor:Main} follows.
{ "timestamp": "2021-11-29T02:30:19", "yymm": "2012", "arxiv_id": "2012.08942", "language": "en", "url": "https://arxiv.org/abs/2012.08942" }
\section{Introduction} The recent development of state-of-the-art imaging modalities has revolutionized the way we perform our everyday activities. For example, in self-driving cars, infrared images from camera sensors positioned on the vehicle help to detect obstacles such as pedestrians at night. Remote sensing satellites, on the other hand, acquire multi-spectral and multi-resolution images that are needed for object detection and recognition from high altitudes. In medical diagnosis and treatment, Magnetic Resonance Imaging (MRI) provides a detailed view of the internal structures of the human brain, such as white and gray matter, whereas Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) images provide functional information like glucose metabolism and the extent of cerebral blood flow (CBF) or perfusion activity in specific regions of the brain. However, it is challenging to analyze such complementary information provided by these imaging modalities individually. Multi-modal image fusion solves this problem by combining two or more pre-registered images from single or multiple imaging modalities into a fused feature space. Ideally, a fusion algorithm should have low computational overhead, and it should also preserve input features for usage in the aforementioned real-world applications. In the early years of image fusion research, most methods followed a three-step approach. Firstly, the input images were converted into several feature maps using an appropriate image transformation method such as pyramid decomposition, wavelet transformation, or sparse representation. Then, the coefficients of the multimodal feature maps were combined using a pre-defined fusion strategy to get fused feature maps. Finally, the fused image was reconstructed by applying the inverse transformation to the fused feature maps. However, towards an ideal fusion model design, these methods mainly focused on enhancing the transformation and fusion strategies to pursue good perceptual results by defining intricate design rules. Recently, several unsupervised machine learning based CNNs have been proposed to achieve real-time image fusion results. However, these fusion networks lack the trust of end-users, since there are no tools to analyze the quality of the fused images and the networks act as black boxes. To overcome this challenge, neural network based image fusion methods evaluate the quality of the results using performance metrics such as the Structural Similarity Index (SSIM) \cite{ref9}, which compares the perceptual similarity between the input images and the fused image. However, the shortcoming of such evaluation is that these metrics do not visualize the influence of the input image features on the features of the fused image, as they do not consider the underlying heuristics of the hidden network layers and thus do not analyze the internal mechanics of these networks. Therefore, image fusion networks that have been evaluated only on these performance metrics have low interpretability. Recently, several visualization techniques have been proposed that broadly try to interpret neural network decisions. For example, gradient based visualization methods backpropagate the gradients through the hidden layers of the neural network to interpret the per-pixel influence of the input image on the output predictions.
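As a minimal sketch of this classification-style saliency computation in PyTorch (an illustration of the general idea, not the implementation of any cited method), a single backward pass from the class score yields a per-pixel influence map:
\begin{verbatim}
# Hedged sketch: jacobian-based saliency for a classifier, i.e. the
# gradient of the predicted class score w.r.t. every input pixel.
import torch

def class_saliency(model, image, target_class):
    image = image.clone().requires_grad_(True)  # (1, C, H, W)
    scores = model(image)                       # (1, num_classes)
    scores[0, target_class].backward()          # one backward pass suffices
    return image.grad.abs().max(dim=1)[0]       # per-pixel influence map
\end{verbatim}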
However, such methods are specifically suited to problems such as image classification, where the aim is to visually explain the class decision made by the neural network using a Jacobian based saliency map of the class score with respect to the input image. In contrast, the primary goal in interpreting per-pixel decisions in a fusion problem is to compute a Jacobian based saliency map for each pixel of the fused image with respect to each pixel of the input images. Such an analysis, however, leads to a very high computational overload, since the number of backpropagation iterations equals the number of fused pixels. To circumvent this overload, deep learning based software frameworks such as PyTorch and TensorFlow implicitly aggregate the per-pixel gradients from the input tensor elements to compute a single gradient value for each of the output tensor elements. In image fusion, this is not helpful, since the gradient information describing how each pixel in the input image influences the machine decision is lost. Hence, per-pixel saliency visualization based on the gradients of a neural network based fusion model remains untapped in the literature. Therefore, in this paper, we present a novel per-pixel saliency based visualization approach with real-time capabilities that guides the interpretation of image fusion based neural networks. The major novelties of this work are summarised below: \begin{itemize} \item We trained several state-of-the-art fusion based unsupervised CNNs under the same learning objective. Considering the importance of understanding fusion black boxes in life-sensitive domains such as medical imaging, we focused on interpreting the trained neural networks specifically for MRI-PET medical image fusion. \item We performed fast computation of per-pixel Jacobian based saliency maps of the fused image with respect to the input image pairs. To the best of our knowledge, it is a first-of-its-kind technique to visualize fusion networks by considering the backpropagation heuristics, which helps them to be more transparent in a real-time setup. \item We constructed guidance images for each input modality by using the gradients of the fused pixel with respect to the input pixel at the corresponding location in the input image. We also related the gradient values in each of the guidance images to the grayscale intensities of the fused image by combining the images in the color channels of an RGB image. \item We computed scatter plots between the gradients of the guidance images, which provide a visual overview of the correlation between the influences of the input modalities. For example, a positive correlation shows that the input modalities influence the fused image equally. \item We developed an interactive Graphical User Interface (GUI) named $\it{FuseVis}$ that combines all the visual interpretation tools in an efficient way. The FuseVis tool allows saliency maps to be computed in real time on mouseover, at the pixel pointed to by the mouse pointer. Our code is available at \href{https://github.com/nish03/FuseVis}{https://github.com/nish03/FuseVis}. \item Finally, we performed clinical case studies on MRI-PET image pairs using our FuseVis tool and visually interpreted the fusion results obtained from several different neural networks. We showed the usefulness of FuseVis in identifying the capability of the evaluated neural networks to solve clinically relevant problems.
\end{itemize} Section 2 provides a detailed literature review of existing work relevant to the problems of image fusion and neural network visualization. In Section 3, we explain the visual analysis goals, the proposed visualization concepts, and the layout of our FuseVis tool in detail. In Section 4, we provide the experimental details of the training setup, the hardware used, the architecture of the neural networks for image fusion, the loss function used, and the hyperparameter studies conducted. In Section 5, we present a clinical application of MRI-PET medical image fusion and discuss key visualization requirements for case studies in this field. In Section 6, we interpret the visualization results obtained from each of the fusion based neural networks using our FuseVis tool. In the same section, we also show the frame rates achieved during the mouseover operation, which support the real-time capabilities of our tool. Finally, in Section 7, we summarise the major contributions of this work and discuss its applicability to the real-time visualization of neural networks in general. \section{Related Work} In this section, we provide an overview of the literature relevant to this paper, divided into five sub-sections. The first sub-section summarises classical image fusion approaches, i.e. non-machine-learning based techniques proposed in the recent past, along with some relevant review papers on the image fusion problem. The second sub-section gives an overview of methods that combine the classical image fusion approach with pre-trained deep neural networks to attain fusion results. The next sub-section discusses recent methods in the field of unsupervised end-to-end deep learning based image fusion, specifically in the areas of multi-focus and multimodal image fusion. The fourth sub-section gives a detailed overview of recent state-of-the-art visualization techniques that have been proposed for interpreting and explaining black-box neural networks in general. Finally, we end the related work discussion with the current literature on the fast computation of gradients in a neural network setup. \subsection{Classical Image fusion approaches} Many review papers in the past, such as \cite{ref1, ref2, ref10}, provided an overview of different methods for multimodal and multi-focus image fusion. It is apparent from these papers that the classical method for the fusion of multimodal images involves activity level measurement and pre-defined fusion rules. Activity level measurement aims to transform the input images to obtain salient features by performing coefficient based activity measurement \cite{ref4}, window-based activity measurement \cite{ref36}, or region based activity measurement \cite{ref15}. The transformation strategies were mainly categorized into domains, namely multi-scale decomposition, sparse representation, and hybrid transformation, among others. This was followed by pre-defined fusion rules to output a fused image. Subsequently, the quality of the fused image was evaluated using performance metrics that involve human visual perception and objective statistical assessments. Multi-scale decomposition (MSD) \cite{ref4, ref11, ref12, ref13, ref18, ref14, ref15, ref16, ref17, ref19, ref20, ref21, ref22, ref23, ref24, ref25} separates the source image into low and high frequency sub-bands in order to separately analyze the base and detail level features.
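One representative instance of this classical decompose--fuse--reconstruct pipeline uses a Laplacian pyramid as the MSD transform, a choose-maximum rule for the detail bands, and averaging for the base band. The following is a hedged sketch of such a pipeline (illustrative only, not a reproduction of any specific cited method):
\begin{verbatim}
# Hedged sketch of classical MSD fusion with a Laplacian pyramid:
# decompose both inputs, fuse detail bands by choose-maximum and the
# base band by averaging, then reconstruct. Illustrative only.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # detail (high-frequency) band
        cur = down
    pyr.append(cur)            # coarsest (low-frequency) band
    return pyr

def fuse_msd(a, b, levels=4):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    bands = [np.where(np.abs(x) >= np.abs(y), x, y)
             for x, y in zip(pa[:-1], pb[:-1])]
    out = 0.5 * (pa[-1] + pb[-1])  # average the base band
    for detail in reversed(bands):
        out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0]))
        out = out + detail
    return out
\end{verbatim}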
Some of the popular transformation strategies under multi-scale image decomposition have been pyramids \cite{ref4, ref11}, discrete wavelets \cite{ref12,ref13,ref14}, dual-tree complex wavelets \cite{ref15}, contourlets \cite{ref16}, and shearlets \cite{ref17, ref18}. Multi-scale geometric analysis based MSD methods such as the non-subsampled contourlet transform (NSCT) \cite{ref19, ref20} and the non-subsampled shearlet transform (NSST) \cite{ref21, ref22, ref23} have also been proposed due to their effectiveness in image representation. Edge preserving filters such as bilateral filters \cite{ref24} and guided filters \cite{ref25} have been popular in the multi-scale decomposition literature since they retain the salient features of the input images. Sparse representation (SR) based transformation strategies \cite{ref5, ref27, ref28, ref29, ref30, ref42a} do not decompose the original image into low- and high-frequency bands but instead assume that both frequency bands share similar sparse coefficients. Hybrid transformation strategies combine more than one transformation procedure, such as curvelet-wavelet \cite{ref31}, combinations of multi-scale decomposition and sparse representation based methods \cite{ref32, ref33, ref34}, and Intensity-Hue-Saturation (IHS) with Principal Component Analysis (PCA) \cite{ref35}, among others. Finally, the image fusion rules applied after the image transformation have traditionally been designed to combine the features obtained from the multi-scale transform into a single fused image and have been a crucial factor in achieving state-of-the-art fusion performance \cite{ref10}. The three components that together comprise a robust fusion rule are coefficient grouping, coefficient combination, and consistency verification. The coefficient grouping and combination strategies include choose-maximum rules \cite{ref37}, weighted averaging \cite{ref4}, and guided-filtering based weighted averaging \cite{ref25}. Subsequently, consistency verification aims to ensure that neighboring coefficients are fused using the same fusion rule by refining the calculated weight map based on some priors \cite{ref38}. Finally, the fused images obtained from the classical methods were evaluated with several performance metrics \cite{ref39, ref40, ref41, ref42, ref9} since there is no gold standard for an ideal fused image. \subsection{Mixture of classical and deep learning based fusion approaches} In recent years, there have been several works that use deep learning based fusion networks \cite{ref3} to further enhance the fused image quality. However, these works focused on using pre-trained neural networks to extract high- and low-level image features for activity level measurement, while the fusion rules were still manually defined. For example, in \cite{ref43}, a CNN for medical image fusion was used to perform activity level measurement, and based on these measurements, coefficient grouping and coefficient combination were performed manually to obtain the fused image. On the other hand, in \cite{ref44}, the infrared and visible input images were first separated into low- and high-frequency features using a pyramid based multi-scale decomposition approach, and the high-frequency features were then fed into a pre-trained deep learning based framework to extract multi-channel high-frequency features.
Finally, the low-frequency features were fused using weighted averaging, while the high-frequency multi-channel features were fused using a pre-defined max-selection rule. In \cite{ref7}, low-resolution grayscale images were divided into high- and low-frequency features using the wavelet transformation, and the high-frequency features were fed into a trained neural network which provided a high-resolution version of the input. The absolute-max strategy was then applied to the high-resolution outputs, while simple averaging was used to fuse the low-frequency features. Finally, the inverse wavelet transformation was applied to retrieve a high-resolution output image. \cite{ref8}, on the other hand, used multi-scale convolutional neural networks to obtain multi-scale feature maps that were fused and then post-processed into decision maps, i.e., segmentation results of the fused image computed via the watershed transformation. This was done to evaluate the quality of the fused images produced by different image fusion algorithms. \subsection{Unsupervised end-to-end deep learning based fusion approaches} All the fusion methods discussed so far follow the traditional approach of defining the transformation and fusion strategies, while some use deep neural networks to extract feature maps from hidden layers for the activity level measurement; based on the generated feature maps, a final fusion rule is then defined. A second observation about these methods is that the networks are trained on image distributions whose properties are quite distinct from the images seen at inference time. Though such usage of convolutional neural networks has provided good results in these approaches, it undersells the ability of a standalone end-to-end learning algorithm to deliver a final fused result while eliminating pre- and post-processing steps such as the traditionally used transformation and fusion strategies. Recently, end-to-end deep learning based image fusion methods have emerged that formulate an optimization strategy in which a neural network models all the transformation, fusion, and reconstruction steps in its hidden layers and learns a fused image in an unsupervised manner using loss functions such as SSIM \cite{ref9}. \cite{ref6, ref45, ref50} were some of the early works in the field of multi-focus image fusion which used end-to-end convolutional neural networks that jointly generated activity level measurements and fusion rules. However, these methods trained the networks in a supervised setting by leveraging training data with available ground truth from image classification databases. \cite{ref61}, on the other hand, proposed a simple encoder-decoder based architecture for directly mapping the source images with a single camera focus to a multi-focus fused image, while \cite{ref54, ref59} proposed conditional generative adversarial network (CGAN) based approaches for multi-focus image fusion. In the field of multimodal medical image fusion, \cite{ref46} proposed a novel end-to-end fusion network to fuse MRI and PET image pairs by extracting high- and low-frequency features within the network layers, with SSIM as the loss function. \cite{ref47} used a similar approach for fusing multi-exposure RGB images. However, they first converted the RGB images into the YCbCr color space and used an end-to-end neural network to fuse only the luma component $Y$ of the input images, while the Cb and Cr components were fused using a weighted averaging strategy.
Finally, the fused YCbCr image was converted back into the RGB color space. There have also been several works in the field of visible and infrared image fusion that utilize an unsupervised deep learning based fusion framework. \cite{ref48} trained an end-to-end convolutional neural network using visible and infrared pedestrian images from both day and night to attain robust fusion results. \cite{ref49} used a different approach to fuse infrared and visible images, feeding only a single modality at a time to optimize the shared weights. Moreover, the feature extraction and feature reconstruction layers contained the trainable weights, while the fusion layer did not contain any trainable parameters. \cite{ref51} went a step further and defined a trainable fusion layer. \cite{ref53, ref57, ref60} applied the GAN approach to fusing infrared and visible images using end-to-end neural networks. \cite{ref55} was another such GAN based approach, but it used a single modality and fused multi-resolution input images. Finally, some works \cite{ref56} used a novel densely connected network with an unsupervised learning strategy to fuse source images for multiple fusion tasks, whereas \cite{ref58} proposed a new GAN based method that combined the identity of one input image with the shape of another input image. \subsection{Visualization techniques} Recently, several methods have interpreted the decisions made by neural networks by visualizing the model predictions. \cite{ref62} proposed a method for visualizing classification models by computing the gradients of the class predictions with respect to the input images. \cite{ref63}, on the other hand, computed the gradients of activation neurons in the hidden layers of the network with respect to the input image, visualizing the specific regions of the image that activate a given neuron the most. \cite{ref64} extended this work and generalized the gradient-based visual explanations proposed in \cite{ref63} to different types of neural network architectures. \cite{ref65} proposed an alternative visualization method that used a relevance score approach to visualize the contribution of each pixel of the input image to the classifier predictions. \cite{ref66} presented a method inherently similar to \cite{ref65}, but it also visualized the output prediction by calculating the contributions of all neurons in the network to every feature of the input image. Perturbation based visualization algorithms, however, do not need backpropagation heuristics and are model agnostic. \cite{ref67} was one of the first perturbation based approaches; it occluded parts of the input image and examined the output of classification based neural networks. The results showed that the model was indeed localizing the objects in the image, as the probability of the correct class dropped significantly when the object was occluded. \cite{ref68} was another model agnostic method where perturbations were performed in the neighborhood of the input image and the behavior of the model's predictions was then analyzed. Finally, the method weighted the perturbed data by its proximity to the original image and learned an interpretable model based on the old and the new predictions.
\cite{ref70} interpreted the model predictions by perturbing the pixel intensities of the input images, e.g., with noise, blurring, or occlusion, and modeling the change in the prediction probability of the output. \cite{ref69} showed that there are two important characteristics a visualization method must have to properly explain the regions of the input that are important for the model's decisions, namely sensitivity and implementation invariance. To satisfy both properties, the method integrated the gradients computed at multiple interpolated inputs, due to which it satisfied both the sensitivity and the implementation invariance requirements. The method was thus able to visualize the prediction of a neural network with respect to its inputs and required only a few instances of gradient computations. \cite{ref71} proposed a visualization approach specifically for medical image fusion, computing a heat map that showcases the mutual information between the source and the fused image. However, this method cannot interpret and explain the learning based fusion networks themselves since no backpropagation heuristics are involved. Although perturbation based visualization approaches are generally applicable to a wide range of problems, they need several iterations to obtain the visualization results, due to which they are not yet suitable for real-time deployment. Crucially, none of these visualization methods generalizes to fusion based deep neural networks since, in a fusion problem, the number of output predictions equals the number of output pixels. This incurs huge memory and time requirements due to the very large number of backpropagation passes involved. \cite{ref72} efficiently computed the gradients of the loss function by conducting backpropagation per training example. However, the idea only works for an image classification network with a single class prediction and therefore cannot be applied to an image fusion problem, where there is a large number of predictions in the form of fused pixels. Therefore, an efficient method is needed to decrease the computational cost of visualizing the high dimensional output predictions of deep neural networks. \section{Method} In this section, we will present the key visual analysis goals of a neural network for image fusion. Next, we will describe the proposed visualization concepts along with the mathematical formulations that help in achieving these goals. Finally, we will present an overview of our user interface, FuseVis, and describe its user interaction capabilities. \subsection{Visual analysis goals} A fused image provides important information related to the input image modalities in a combined feature space by preserving features from both input images. However, by visualizing only the fused images, it can be challenging to interpret the sensitivity of the fusion methods to the input images. It is especially difficult to interpret which fusion method reproduces features from which input image better and how sensitive each method is to changes in the pixel intensities of the input images. Such an analysis is important for assessing the stability of a fusion network to pixel intensity changes in specific regions of the input images.
\begin{figure}[!htb] \centering \includegraphics[width=15cm]{FuseVis.pdf} \caption{The figure shows the layout of the FuseVis user interface.} \label{fig:fig1} \end{figure} \subsection{Visualization concepts} We present several visualization strategies that help to understand the fusion results. The most important of these strategies is to compute gradient based saliency heat maps that indicate the relevance of each input pixel to a single pixel in the fused image. These heat maps are particularly valuable since they allow an easy and intuitive investigation of what the respective fusion model perceived to be important in the input image features. Therefore, gradient based saliency heat maps are useful for interpreting fusion blackboxes and understanding the characteristics of image fusion based neural networks. Using these saliency based heat maps, we can also estimate whether a neural network for image fusion is robust to new examples. \subsubsection{Jacobian images} We define per-pixel saliency heat maps, named $\textit{Jacobian images}$, which highlight the regions in the input images that most prominently influence the per-pixel fusion results. Let us interpret a fusion operator as a mapping $f:\mathbb{R}^{2n} \rightarrow \mathbb{R}^{n}$ that maps two input images $\boldsymbol{x_1}$ and $\boldsymbol{x_2}$ to a fused image $\boldsymbol{y} = f(\boldsymbol{x_1},\boldsymbol{x_2})$, where all images have the same dimensions and contain $n$ pixels. The saliency analysis then focuses on a single pixel $i \in \{1, \ldots, n\}$ of the fused image, called the $\textit{principal pixel}$ $y^{(i)}$, which can be selected interactively by the user; we use a superscript enclosed in parentheses to denote a pixel of an image. For each input image $\boldsymbol{x} \in \{\boldsymbol{x_1},\boldsymbol{x_2}\}$, we define the Jacobian image of $y^{(i)}$ as the image of the partial derivatives with respect to the input image $\boldsymbol{x}$: \begin{equation} \label{eqn:label1} \frac{\partial y^{(i)}}{\partial \boldsymbol{x}} = \Bigg( \frac{\partial y^{(i)}}{\partial x^{(1)}}, \frac{\partial y^{(i)}}{\partial x^{(2)}}, \ldots, \frac{\partial y^{(i)}}{\partial x^{(i)}}, \ldots, \frac{\partial y^{(i)}}{\partial x^{(n)}}\Bigg) \end{equation} The Jacobian computation involves backpropagating the gradients through the hidden layers of the neural network using chain rule based automatic differentiation. A Jacobian image visualizes the extent to which each pixel of the input image $\boldsymbol{x}$ influences the principal pixel $y^{(i)}$. Therefore, the Jacobian images reveal the sensitivity of the principal pixel $y^{(i)}$ in the fused image to changes in the pixel intensities of the input images. An example of a Jacobian image is shown in Figure \ref{fig:fig1} e). It can be expected that the fused principal pixel $y^{(i)}$ is highly sensitive to changes in the grayscale intensities at $x^{(i)}$. Additionally, the local neighborhood of the pixel $x^{(i)}$ might also have a direct influence on the fused principal pixel $y^{(i)}$.
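To make this computation concrete, the following minimal PyTorch sketch extracts the two Jacobian images of a chosen principal pixel with a single backward pass; the fusion network handle \texttt{net} and all variable names are illustrative assumptions rather than code from any specific fusion implementation. Computing all $n$ rows of the full Jacobian would require $n$ such passes, which is exactly the computational burden discussed in the introduction.
\begin{verbatim}
import torch

def jacobian_images(net, x1, x2, i):
    # net maps two (1, 1, H, W) tensors to a fused (1, 1, H, W) tensor;
    # i is the flat index of the principal pixel y^(i).
    x1 = x1.detach().clone().requires_grad_(True)
    x2 = x2.detach().clone().requires_grad_(True)
    y = net(x1, x2)
    # One backward pass from the scalar y^(i) yields one Jacobian row,
    # i.e., the partial derivatives of y^(i) w.r.t. every input pixel.
    g1, g2 = torch.autograd.grad(y.flatten()[i], (x1, x2))
    return g1.squeeze(), g2.squeeze()  # Jacobian images for x1 and x2
\end{verbatim}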
The Jacobian image based visualization concept also helps to compare the sensitivity of the fused principal pixel with respect to changes in the pixel intensities of each of the two input images, as well as to estimate which of the two input images has a greater neighborhood influence on the fused pixel. \subsubsection{Guidance images} Based on the observation that the pixel $x^{(i)}$ has by far the largest saliency value, we developed a visualization concept named $\textit{Guidance images}$ which only considers the gradient of each principal pixel $y^{(i)}$ with respect to the input pixel $x^{(i)}$ located at the corresponding coordinate in the input image: \begin{equation} \label{eqn:label2} \frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}} = \Bigg( \frac{\partial y^{(1)}}{\partial x^{(1)}}, \frac{\partial y^{(2)}}{\partial x^{(2)}}, \ldots, \frac{\partial y^{(i)}}{\partial x^{(i)}}, \ldots, \frac{\partial y^{(n)}}{\partial x^{(n)}}\Bigg) \end{equation} While Jacobian images allow a per-pixel interpretation of the fusion outputs, a guidance image aims to provide, for each input image, a static overview of its influence on the fused image. An example of a guidance image is shown in Figure \ref{fig:fig1} c). The two static guidance images summarize the Jacobian images and allow us to compare the major gradient values on an absolute scale. This fosters comparison of the same pixel location across the two input modalities as well as of different pixel locations within the same input image. Changes in the pixel intensities of an input image region with high gradients will significantly change the pixel intensities in the corresponding region of the fused image. On the contrary, significant changes in the pixel intensities of an input image region with low gradients are required to have a similar effect on the fused features in the same region. Due to this, the guidance images reveal which of the two input modalities has a higher influence on the fusion result in specific regions of interest, as well as where each input image has a large influence on the fusion result. \subsubsection{Guidance RGB images} During the interactive visual analysis with Jacobian images and guidance images, a typical analysis task is to compare the values in the guidance images of the two modalities in order to find out which modality influences the fused image more. Furthermore, one typically wants to compare a guidance image with the fused image. To simplify these local comparisons, we provide a single Guidance RGB image that encodes the two normalized guidance images as well as the fused image in the three color channels red, green, and blue. We perform max-min normalization to re-scale each guidance image to values between 0 and 1: for every guidance image, the minimum value of that image gets transformed into 0 and the maximum value gets transformed into 1 (see the sketch after this paragraph). An example of a Guidance RGB image is shown in Figure \ref{fig:fig1} i). In red are the regions where the MRI gradient is high, the PET gradient is low, and the fused image has relatively low pixel intensities. With increasing fused pixel intensities, the color changes from red to magenta. Similarly, green and cyan correspond to high PET gradients with low MRI gradient values. In blue regions, both gradient values are low and the fused pixels are bright. Yellow depicts regions where both gradients are high.
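As a minimal sketch of this channel encoding (assuming the two guidance gradient arrays and the fused image are already available as NumPy arrays; the function names are illustrative), the Guidance RGB image can be assembled as follows:
\begin{verbatim}
import numpy as np

def min_max_normalize(img):
    # Map the minimum of the image to 0 and its maximum to 1.
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def guidance_rgb(guidance_mri, guidance_pet, fused):
    # Encode guidance MRI, guidance PET, and the fused image in the
    # red, green, and blue channels, respectively.
    r = min_max_normalize(guidance_mri)
    g = min_max_normalize(guidance_pet)
    b = min_max_normalize(fused)
    return np.stack([r, g, b], axis=-1)  # (H, W, 3) image in [0, 1]
\end{verbatim}
With this encoding, for example, a pixel with a high MRI gradient, a low PET gradient, and a low fused intensity appears red, matching the color scheme described above.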
In this way, the Guidance RGB image allows us to quickly find different gradient constellations, and the selection of the principal pixel is also possible on the Guidance RGB image. \subsubsection{Scatterplot} While the Guidance RGB image gives a local overview of how the gradients of each modality behave in comparison to the fused image, a scatterplot can depict the correlation and more advanced statistical relationships between the gradient values of the guidance images. For example, a positive correlation between these gradients would mean that the input modalities equally influence the fused pixels, while a negative correlation would show that an increase in the influence of one modality leads to a decrease in the influence of the other modality. For this statistical analysis of the gradient values in the guidance images, we show a scatterplot with the MRI gradients along the x-axis and the PET gradients along the y-axis; an example is provided in Figure \ref{fig:fig1} k). The overall scatterplot in black shows a rather complicated relationship between the gradients. Therefore, we illustrate the points corresponding to pixels around the principal pixel in green. In this way, the local statistical relationships can be explored interactively. \begin{figure}[!htb] \centering \includegraphics[width=15cm]{Gamma.pdf} \caption{The first and second rows show the effect of the gamma correction on the \textit{Jacobian PET} and \textit{Guidance PET} images of the MaskNet network, with $\gamma_{corr1}$ and $\gamma_{corr2}$ varying between 0.1 and 2.0.} \label{fig:gamma correction} \end{figure} \subsubsection{Gamma correction} The Jacobian images inherently have low luminance since the gradient of the principal pixel is extremely bright compared to the gradients of its neighborhood pixels. Therefore, to better visualize the small gradient values in the Jacobian images, we apply a gamma correction, $\gamma_{corr1}$, to the Jacobian images as $\Big(\frac{\partial y^{(i)}}{\partial \boldsymbol{x}}\Big)^{\gamma_{corr1}}$. To also adjust the luminance of the guidance images, we define another gamma correction, $\gamma_{corr2}$, applied to the guidance images as $\Big(\frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}}\Big)^{\gamma_{corr2}}$ in an 8-bit RGB range. We chose two separate gamma corrections for the Jacobian and guidance images to independently fine-tune the luminance of these images. Gamma correction is an efficient method to improve the luminance since it has only a single parameter to fix for better visualization of the underlying gradient intensities. Currently, the gamma correction values vary between $0.1$ and $2$ in our FuseVis tool. An example of the effect of $\gamma_{corr1}$ and $\gamma_{corr2}$ on the Jacobian and guidance images, respectively, is shown in Figure \ref{fig:gamma correction}. It can be seen from the images that as $\gamma_{corr1}$ and $\gamma_{corr2}$ decreased below $1$, the luminance of the images increased, making the small gradients brighter, while values above $1$ decreased the luminance of the small gradients. \subsection{Overview of FuseVis tool} An overview of the FuseVis tool is shown in Figure \ref{fig:fig1}, where the images related to MRI are placed in the top row, the images related to PET in the bottom row, and all the modality combining images in the middle row. We placed the input and fused images on the left, the guidance images in the middle, and the Jacobian images on the right side of the tool.
The tool also consists of a dropdown menu to select the fusion method and two separate slider widgets, namely \textit{Gamma1}, which sets $\gamma_{corr1}$ for the gamma correction of the Jacobian images, and \textit{Gamma2}, which sets $\gamma_{corr2}$ for the gamma correction of the guidance images. The principal pixel can be chosen in any of the input images, the guidance images, or the fused image using the mouseover operation. The user performs the mouseover interaction by pressing and holding the left mouse button and hovering over the image. While performing this operation, the current mouse cursor position defines the principal pixel coordinate, and the local environment around the principal pixel is displayed in the zoomed-in versions of the images, which are always placed to the right of the respective images. The local environment of the principal pixel is defined by the red squared bounding box, and the current principal pixel coordinate, $i$, is positioned around the center of the bounding box, which is also shown as a red square in the zoomed images. \begin{figure}[!htb] \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.47_0.5_ssim.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.497_0.5_ssim.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.5_0.494_ssim.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.52_0.5_ssim.pdf} \end{subfigure}\\ \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.47_0.5_l2.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.497_0.5_l2.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.5_0.494_l2.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{gamma_0.52_0.5_l2.pdf} \end{subfigure}\\ \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{MRI_SSIM_all.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{MRI_L2_all.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{PET_SSIM_all.pdf} \end{subfigure}% \begin{subfigure}{0.25\textwidth} \includegraphics[width=\linewidth]{PET_L2_all.pdf} \end{subfigure}% \caption{Loss curves of the evaluated fusion based neural networks trained on MRI-PET image pairs.} \label{fig:loss curves} \end{figure} \section{Experimental Setup} As there are multiple real-world applications of image fusion, it is not possible to interpret fusion based neural networks for each of these experimental settings. Therefore, we focus on the evaluation of fusion based neural networks for a specific clinical application that deals with MRI-PET fusion for tumor resection planning. For this, we first train the different neural networks in a comparable setup. Then, we present medical case studies in which we state the key visualization requirements of a robust fusion method. In the same section, we select the features in the input images that are most important for the clinical diagnosis. Finally, we use our tool to find out which of the fusion methods is best suited for this specific clinical application. \subsection{Fusion networks} We visualized multiple end-to-end unsupervised learning based fusion approaches from the popular image fusion literature using our FuseVis tool.
The image fusion based neural networks that we evaluated are \textit{FunFuseAn} \cite{ref46}, \textit{MaskNet} \cite{ref52}, \textit{DeepFuse} \cite{ref47}, and \textit{DeepPedestrian} \cite{ref48}. All of these fusion based neural networks have distinct network architectures (as shown in Figure \ref{fig:networks}), built specifically for different types of image fusion applications. In addition to these four end-to-end unsupervised fusion based neural networks, we also evaluated a non-machine learning based fusion approach by implementing a simple weighted averaging technique and compared its visualization results with those of the fusion based neural networks. Weighted averaging based image fusion is a pixel-level method that assigns a higher weight to the brighter of the two input pixels at each location. Assuming pixel $x_1^{(i)}$ from one input image is extremely bright compared to pixel $x_2^{(i)}$ from the other input image, the weighted averaging method will give a higher weight $w_1^{(i)}$ to $x_1^{(i)}$ and a lower weight $w_2^{(i)}$ to $x_2^{(i)}$, thereby resulting in a bright fused pixel $y^{(i)}$: \begin{equation} \label{eqn:label3} w_1^{(i)} = \frac{x_1^{(i)}}{x_1^{(i)} + x_2^{(i)}}, \qquad w_2^{(i)} = \frac{x_2^{(i)}}{x_1^{(i)} + x_2^{(i)}}, \qquad y^{(i)} = w_1^{(i)} x_1^{(i)} + w_2^{(i)} x_2^{(i)} \end{equation} For example, with $x_1^{(i)} = 0.9$ and $x_2^{(i)} = 0.1$, the weights become $w_1^{(i)} = 0.9$ and $w_2^{(i)} = 0.1$, yielding a bright fused pixel $y^{(i)} = 0.82$. \subsection{Hardware-software setup} The work was performed with Python (version 2.7), and the deep learning library PyTorch was used to implement the fusion based neural networks. The Python wrapper of the GUI library \textit{Tkinter} was used to create the user interface of our FuseVis tool. All the visualization results were derived from the FuseVis tool on a 64-bit Windows operating system equipped with a single GeForce GTX 1080 Ti GPU and an Intel Core i7-8700K CPU (12 logical cores) @ 3.70 GHz with 64 GB RAM. The same hardware-software setup was used for the real-time computation of the Jacobian images. However, to speed up the training of the neural networks and the static computations of the guidance images, we used a more powerful hardware setup containing six NVIDIA Tesla V100 GPUs. \subsection{Training dataset} To train each of the fusion networks in a uniform setup, we obtained several MRI-T2 and PET image pairs of 50 different patients publicly available from the Alzheimer's Disease Neuroimaging Initiative (ADNI) \cite{ref73}, with patient ages varying between 55 and 90 years among both genders. The patients were chosen such that the training dataset covers the mild, moderate, and severe stages of Alzheimer's disease. All images were extracted as axial slices with a voxel size of 1.0 x 1.0 x 1.0 mm$^3$. The MRI-T2 images were N3m MPRAGE sequences, while the PET images were co-registered, averaged, and standardized with a uniform resolution for each of the subjects. We aligned the MRI-PET image pairs using the affine transformation tool of the 3D Slicer registration library. \subsection{Network architectures} The architectures of the evaluated fusion based neural networks are shown in Figure \ref{fig:networks}. A \textit{Conv 3 x 3 x 9 x 16} block, for example, denotes a convolution operation performed on an input with nine channels that results in an output with 16 channels. For this, 16 kernel filters, each of size \textit{3 x 3} with nine channels, are utilized. Each channel of a kernel filter is convolved with the corresponding input channel, and the results are summed up to generate a single feature map.
This operation is repeated for all the kernel filters to extract a number of output feature maps equal to the number of filters. The weights of the kernel filters were initialized from a uniform distribution, while the striding and padding operations were performed in such a way that there is no downsampling in any hidden layer. \textit{BatchNorm} denotes a batch normalization operation, and \textit{LReLU} denotes a leaky ReLU activation function with a slope of 0.2. Notably, each of the networks has a different architecture due to the varying image fusion tasks they solve. The architecture of \textit{FunFuseAn} \cite{ref46} is inspired by the classical non-machine learning based image fusion pipeline of feature extraction, fusion, and feature reconstruction, with specific application to MRI and PET image pairs. \textit{MaskNet} \cite{ref52} concatenates the input images and performs a dense feature fusion in the initial layers of the network before constructing a fused image. \textit{DeepFuse} \cite{ref47} defines an architecture where the feature extraction layers have tied weights for each of the input image modalities. This helps in increasing the brightness of an underexposed input image by using the luminance of an overexposed input image. \textit{DeepPedestrian} \cite{ref48} also concatenates the input images and utilizes a deep architecture with multiple residual blocks, designed specifically for infrared and visible image fusion. Since we aim to interpret the behavior of these image fusion based neural networks by performing a per-pixel gradient based saliency visualization using our FuseVis tool, we modified the original training setups of these fusion networks and defined a common strategy to train them. \subsection{Loss Function} An image fusion approach aims to preserve as many of the input image features as possible in the fused image. Therefore, the training objective should be defined in such a way that it measures the similarity between the input and the fused image and optimizes the network parameters to achieve perceptually similar results. The Structural Similarity Index (SSIM) \cite{ref9} is an efficient similarity metric that is also differentiable, due to which it can be adopted as the primary loss function for training each of the fusion based neural networks. However, empirically, setting only SSIM as the loss function leads to a change of brightness in the final fused image. The $\ell_2$ loss helps to overcome this shortcoming by better preserving the luminance component. Therefore, we define a combination of SSIM and $\ell_2$ losses to train each of the neural networks, as shown in Equation \ref{eqn:ref12}. Here, $\boldsymbol{x_1}$ and $\boldsymbol{x_2}$ are the MRI and PET training images respectively, the regularization hyperparameter $\lambda$ controls the weightage between the total $L_{SSIM}$ and total $L_{\ell_2}$ losses, while the hyperparameters $\gamma_{ssim}$ and $\gamma_{\ell_2}$ control the weightage between the individual SSIM and $\ell_2$ losses of the two input image modalities, respectively.
\begin{equation} \label{eqn:ref12} \begin{gathered} L_{SSIM}^{MRI} = 1 - SSIM(\boldsymbol{x_1}, \boldsymbol{y}), \qquad L_{SSIM}^{PET} = 1 - SSIM(\boldsymbol{x_2}, \boldsymbol{y})\\ L_{SSIM} = \gamma_{ssim} L_{SSIM}^{MRI} + (1-\gamma_{ssim}) L_{SSIM}^{PET} \\ L_{\ell_2}^{MRI} = ||\boldsymbol{y} - \boldsymbol{x_1}||_2, \qquad L_{\ell_2}^{PET} = ||\boldsymbol{y} - \boldsymbol{x_2}||_2 \\ L_{\ell_{2}} = \gamma_{\ell_2} L_{\ell_2}^{MRI} + (1-\gamma_{\ell_2}) L_{\ell_2}^{PET} \\ L_{total} = \lambda L_{SSIM} + (1 - \lambda) L_{\ell_2} \end{gathered} \end{equation} The loss convergence curves of all the evaluated unsupervised learning based fusion networks are shown in Figure \ref{fig:loss curves}. Each of the fusion networks was trained for 200 epochs, the batch size was fixed at 2, and the Adam optimizer was used during the backpropagation step with a learning rate of $2 \times 10^{-3}$. \subsection{Hyperparameter tuning} Ideally, a fused image should preserve features from both input images equally, with minimal addition of brightness artifacts. To achieve this, we followed a hyperparameter tuning strategy in which we aimed to find a plausible balance between the $L_{SSIM}^{MRI}$ and $L_{SSIM}^{PET}$ as well as the $L_{\ell_2}^{MRI}$ and $L_{\ell_2}^{PET}$ loss curves by training a single fusion network with multiple combinations of $\lambda$, $\gamma_{ssim}$, and $\gamma_{\ell_2}$ values. Once a good balance was found between these curves, we evaluated the $\lambda$ configuration that leads to the least brightness artifacts by analyzing the resulting fused images. We initially trained the \textit{FunFuseAn} network with $\lambda$ = 0.01, 0.2, 0.5, 0.8, and 0.99, and with $\gamma_{ssim}$ and $\gamma_{\ell_2}$ each equal to 0.1, 0.3, 0.5, 0.7, and 0.9. Therefore, a total of 125 instances of the \textit{FunFuseAn} network were learned in separate training environments. For each $\lambda$ configuration, we identified the combination of $\gamma_{ssim}$ and $\gamma_{\ell_2}$ that most nearly balances the SSIM and $\ell_2$ loss curves for both MRI and PET images. Then, we fine-tuned the $\gamma_{ssim}$ and $\gamma_{\ell_2}$ parameters for each $\lambda$ by training several more instances of the \textit{FunFuseAn} network in the vicinity of the previous best $\gamma_{ssim}$ and $\gamma_{\ell_2}$ in order to find the next configuration that balances the loss curves even better. \begin{table}[H] \caption{The table shows the partial loss values of the trained fusion networks after 200 epochs and the fine-tuned hyperparameter configurations.} \centering \begin{tabular}{|P{2.5cm}|P{1cm}|P{1cm}|P{1cm}|P{1.3cm}|P{1.3cm}|P{1.3cm}|P{1.3cm}|} \hline Network & $\lambda$ & $\gamma_{ssim}$ & $\gamma_{\ell_2}$ & $L_{SSIM}^{MRI}$ & $L_{SSIM}^{PET}$ & $L_{\ell_2}^{MRI}$ & $L_{\ell_2}^{PET}$ \\ \hline FunFuseAn & 0.99 & 0.47 & 0.5 & 0.2524 & 0.2147 & 0.0148 & 0.0094 \\ \hline MaskNet & 0.99 & 0.5 & 0.494 & 0.2208 & 0.2236 & 0.0109 & 0.0115 \\ \hline DeepFuse & 0.99 & 0.497 & 0.5 & 0.2824 & 0.2830 & 0.0385 & 0.0338\\ \hline DeepPedestrian & 0.99 & 0.52 & 0.5 & 0.2164 & 0.2219 & 0.0148 & 0.0175 \\ \hline \end{tabular} \label{tab:hyper} \end{table} After fixing the best $\gamma_{ssim}$ and $\gamma_{\ell_2}$ configuration for each $\lambda$, we determined which of the $\lambda$ values leads to negligible brightness artifacts in the fused image.
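For reference, the combined objective of Equation \ref{eqn:ref12} can be expressed in a minimal PyTorch sketch as shown below; the \texttt{ssim} argument stands in for any differentiable SSIM implementation and, like the remaining names, is an illustrative assumption rather than the exact training code.
\begin{verbatim}
import torch

def total_loss(y, x1, x2, ssim, lam=0.99, g_ssim=0.5, g_l2=0.5):
    # y: fused image, x1: MRI input, x2: PET input, all (B, 1, H, W).
    # ssim: differentiable SSIM function returning values in [0, 1].
    l_ssim_mri = 1 - ssim(x1, y)
    l_ssim_pet = 1 - ssim(x2, y)
    l_ssim = g_ssim * l_ssim_mri + (1 - g_ssim) * l_ssim_pet

    l_l2_mri = torch.norm(y - x1, p=2)   # ||y - x1||_2
    l_l2_pet = torch.norm(y - x2, p=2)   # ||y - x2||_2
    l_l2 = g_l2 * l_l2_mri + (1 - g_l2) * l_l2_pet

    # lam weighs the total SSIM loss against the total L2 loss.
    return lam * l_ssim + (1 - lam) * l_l2
\end{verbatim}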
Therefore, we trained new network instances with additional values of $\lambda$ = 0.9, 0.999, 0.9999, 0.99999, 0.999999, and 1.0 to properly evaluate the corner cases of our loss function. We observed that the brightness artifacts in the fused images worsened as $\lambda$ was increased above 0.999, with $\lambda$ = 1.0 producing the most intense brightness in the fused image. $\lambda$ = 0.99 was one configuration for which we observed a negligible change in the brightness of the fused image, which means that a weightage of 0.01 on the $\ell_2$ loss was sufficient to remove the brightness artifact problem of the SSIM loss function. $\lambda$ = 0.999 also introduced negligible brightness, but the fused image from this configuration lost some PET features, due to which we fixed $\lambda$ = 0.99 as the best hyperparameter for the \textit{FunFuseAn} network. Since $\lambda$ = 0.99 was sufficient to overcome the brightness artifact problem, we used the same value for all the other fusion networks and then evaluated several instances of these networks to obtain the best $\gamma_{ssim}$ and $\gamma_{\ell_2}$ that balance the loss curves. The loss curves in Figure \ref{fig:loss curves} were generated with the best hyperparameter configuration for each of the fusion based neural networks. The final hyperparameter values for the fusion networks can be seen in Table \ref{tab:hyper}. It is observable that all the networks were able to balance the loss curves, while the \textit{DeepFuse} network converged at higher loss values compared to the other fusion networks. \section{Medical Case studies} In this section, we discuss the specific clinical application of MRI-PET image fusion in detail. Then, we perform case studies on pre-registered MRI and PET image pairs, providing the pathological information about the patients from whom these images were acquired. Finally, we present key visualization requirements that make a fusion approach suitable for usage in this specific clinical application. \subsection{Glioma and its pathological features} Glioma is a type of brain tumor that originates in the glial cells that surround and support the neurons in the brain. MRI enables clinicians to estimate the size and the location of a glioma, thereby acting as an initial marker for malignancy. For example, in an MRI-T2 grayscale image, brighter pixel intensities imply hyperintense signal abnormalities, which are observable in brain regions with a tumorous lesion such as glioma. The grayscale intensities in these tumorous lesions can be similar to the grayscale intensities in the lateral ventricle region of the brain, which mostly contains cerebrospinal fluid (CSF) with high water content. However, there can be several sub-regions within the boundary of a high-grade glioma with varying pathological features, which are challenging to visualize by interpreting only the MRI images, as the MRI contrast is quite uniform within the tumor boundary. Hence, the aggressiveness and growth potential of the sub-tumor tissues cannot be adequately determined. Functional imaging modalities such as PET and SPECT are better equipped to differentiate between the sub-regions of a glioma by visualizing functional information like glucose metabolism and the extent of cerebral blood flow (CBF) or perfusion activity in these sub-regions. The innermost part of a glioma is the necrotic core, which contains dead tissue with no possibility of CBF and/or glucose metabolism, due to which it has very dark PET features.
The region that surrounds the necrotic tissue is called the enhancing tumor; it is filled with inflammation fluid, and the blood-brain barrier leaks in this region due to cancerous angiogenesis. Hence, this region has glucose metabolism and a blood supply that supports the cancerous cell growth, resulting in less dark PET features. The necrotic core and the enhancing tumor region constitute the bulk of the glioma that clinicians want to visualize using fused MRI and PET features in order to perform a precise resection of these two regions. The non-enhancing solid tumor is the outermost region of the glioma, which is generally not resected since the blood-brain barrier is still intact in these tissues. The bright regions in the PET image correspond to healthy tissues with a high level of blood perfusion and normal glucose metabolism. \subsection{Clinical test examples} The clinical test examples were acquired from the Harvard Whole Brain Atlas database \cite{ref74}, which combines MRI-T2 and PET/SPECT images from patients suffering from different types of glioma. The test examples were disjoint from the training dataset, and the networks had not seen such clinical pathology during training. The first clinical MRI-PET image pair is shown in the first four columns of Figure \ref{fig:fused}. The scans are of a patient who was suffering from \textit{Anaplastic Astrocytoma}, a rare and malignant brain lesion classified under the category of high-grade glioma. Lesions in the right and the left sides of the brain are visible in the MRI image with bright grayscale intensities. On the other hand, bright regions in the PET image convey normal blood flow, while very dark regions suggest no blood flow to the necrotic tissues. In the second clinical MRI-PET image pair, shown in the last four columns of Figure \ref{fig:fused}, the patient had a long history of tobacco usage and was originally suffering from \textit{Metastatic Bronchogenic Carcinoma}, a type of lung cancer. The patient began having headaches, and the scans revealed brain metastases that occurred due to the spread of cancer cells from the lungs to the brain, resulting in the diagnosis of a secondary brain tumor. A lesion in the right side of the brain with bright features is visible in the MRI image, while very dark PET features reveal no blood flow in the necrotic region. \subsection{Visualization requirements} As the anatomical MRI image cannot precisely delineate the sub-tumor boundaries within a glioma and the functional PET/SPECT image is unable to capture the overall extent of the tumor, one of the goals of MRI-PET image fusion is to better delineate the sub-peripheries of the glioma by envisioning the metabolic and/or CBF characteristics of its sub-regions. However, it is challenging to assess the suitability of these fused images for the mentioned clinical application if the results are not supported by supplementary visualization tools. This secondary visual analysis becomes critically important when several fusion methods provide similar results. Additionally, the recent image fusion methods are developed using unsupervised learning based deep neural networks that are non-transparent (i.e., a `black box') due to the difficulty of retracing the fusion decision in light of the huge parameter space. This shortcoming is especially problematic in sensitive medical domains like MRI-PET image fusion, where understanding and explaining the fusion results obtained from neural networks is required for a robust clinical decision.
Therefore, the following visual tools are required to assess the suitability of a fusion approach for usage in this clinical application: \begin{itemize} \item A fusion approach should assist clinicians in visualizing the extent of the very dark PET regions resembling the necrotic core with no blood flow, superimposed on the bright anatomical boundary of the whole tumor mass. This information is important for clinicians to estimate the extent to which a tumor resection is required. For example, in the first and the fifth columns of Figure \ref{fig:fused}, the principal pixel in this very dark PET region was chosen for visual analysis. \item A fusion approach should preserve the very bright PET features, which convey high blood perfusion and normal metabolism in healthy brain tissues, as they help clinicians in visualizing the regions with high brain activity due to external stimuli at a particular time. For example, in the third and seventh columns of Figure \ref{fig:fused}, the principal pixel in the bright PET region was chosen for visual analysis. \item A fusion approach should be stable and less sensitive to changes in input features from a clinically less significant modality. For example, changes in the grayscale MRI intensities within the necrotic core should not strongly influence the fused grayscale intensities, as this might corrupt the clinically important dark PET features. For example, the \textit{MaskNet} and \textit{DeepPedestrian} networks are less sensitive to changes in the MRI features, which can be seen in the guidance MRI and guidance PET images of these networks shown in Figure \ref{fig:guidance}. \item A fusion approach should be less sensitive to changes in grayscale pixel intensities located in one sub-region of the glioma (say, the enhancing tumor) when the principal pixel is located in another region of the glioma (say, the necrotic core). In other words, neighborhood pixels exterior to a local feature should have a negligible influence when the principal pixel lies in the interior of that feature. For example, fusion methods such as \textit{Weighted Averaging} and \textit{MaskNet} have no or very low gradients at the neighborhood pixels outside the very dark PET features resembling the necrotic core, as shown in the Jacobian images in Figure \ref{fig:jacobian}. \end{itemize} \section{Results and discussion} In this section, we present a detailed visual analysis of the fusion based neural networks using our FuseVis tool. First, we will analyze the fused images from each of the fusion methods and discuss their clinical significance. Then, we will discuss the visualization insights gained by performing the saliency analysis and examine the feasibility of the fusion methods for usage in the clinical application. Finally, we will present the timing results of our FuseVis tool and show the tool's real-time capabilities. \begin{figure}[!htb] \centering \includegraphics[width=15cm]{Fused_images.pdf} \caption{The figure shows the fusion results for each of the fusion methods. The zoomed images within the clinical regions of interest are always placed to the right of the unzoomed images.} \label{fig:fused} \end{figure} \subsection{Fused images} The fused images of the \textit{Weighted Averaging} method, shown in the third row of Figure \ref{fig:fused}, give very high weightage to bright input intensities in regions where the intensities of the other modality are darker.
Therefore, we perceive a reproduction of features from the relatively brighter regions of the MRI and PET images, due to which the \textit{Weighted Averaging} method is unable to preserve the clinically important dark PET features related to the necrotic core, but it favorably preserves the bright PET features resembling healthy tissues. The fused images of \textit{FunFuseAn}, shown in the fourth row of Figure \ref{fig:fused}, are quite similar to those of the \textit{Weighted Averaging} approach in that the network is not able to preserve the very dark PET features. However, the fused images from \textit{FunFuseAn} look relatively dull compared to the fused images from \textit{Weighted Averaging}, since the \textit{FunFuseAn} network mixes both MRI and PET features even though the boundary information about the necrotic region is lost. The analysis of the fused images from \textit{MaskNet} in the fifth row of Figure \ref{fig:fused} shows a significant loss of anatomical edges from MRI, such as the skull. However, contrary to \textit{Weighted Averaging} and \textit{FunFuseAn}, \textit{MaskNet} preserved the PET features better in both the dark and the bright regions, resembling the necrotic core and healthy tissues respectively, although the dark PET features are slightly blurred due to changes in the overall brightness of the fused image. The fused images obtained from \textit{DeepFuse}, shown in the sixth row of Figure \ref{fig:fused}, exhibit an even larger shift in grayscale intensities, due to which the method is unable to preserve the dark PET features even though the overall anatomical MRI structures are well preserved. The change in brightness can be explained by the fact that the $L_{\ell_2}^{MRI}$ and $L_{\ell_2}^{PET}$ losses converged at higher values compared to the other networks, due to which the brightness component of SSIM was not properly optimized. Additionally, the architecture of the \textit{DeepFuse} network was crafted for adding exposure to underexposed images by using the brightness components of both input image modalities, and it employs them to generate very bright fusion results. The fused images from \textit{DeepPedestrian}, shown in the seventh row of Figure \ref{fig:fused}, clearly delineate the boundary of the necrotic core by preserving the very dark PET features, which is of high clinical significance for medical professionals. However, the anatomical edges from MRI are lost in the fused image, which is also one of the main artifacts of the \textit{MaskNet} network. Another important observation is that the bright PET features resembling healthy tissues do not appear well preserved due to an overall brightness shift. \begin{figure}[!htb] \centering \includegraphics[width=15cm]{Guidance_images.pdf} \caption{The figure shows the guidance MRI and guidance PET images for each of the fusion methods in the clinical regions of interest. The $\gamma_{corr2}$ was fixed at 0.5 for all the guidance images.} \label{fig:guidance} \end{figure} \textit{Summary}: The analysis of the fused images shows that even though SSIM was used as a natural loss function for training the fusion networks in a common training setup, each of the fusion methods displayed very different fusion results. The \textit{MaskNet} and \textit{DeepPedestrian} methods provide clinically relevant results, where the boundaries of the necrotic core of the glioma are perceivable, with the dark PET features clearly superimposed on the anatomical tumor features of MRI in the fused image.
However, these methods do not provide loss-free fusion results, as anatomical information is missing in both approaches, while the dark PET features are not entirely preserved by the \textit{MaskNet} network. Crucially, it is not clear by only looking at the fused images of these two methods to what extent changes in the bright MRI features can affect the final fusion results within the clinically significant dark PET regions. This analysis is important for visualizing the stability of these networks to changes in the MRI features, which is a critical visualization requirement. Hence, we perform this analysis using the visualization concepts in the subsequent sub-sections. \subsection{Guidance images} In this section, we analyze the fusion methods with respect to their sensitivity to feature level changes in the input images by visualizing the \textit{Guidance MRI} and \textit{Guidance PET} images in Figure \ref{fig:guidance} and the \textit{Guidance RGB} images in Figure \ref{fig:guidance RGB} for the two clinically relevant regions. We chose as the first region bright MRI with very dark PET features, since it contains the necrotic core of the glioma that clinicians are interested in resecting. The second region of interest has dark MRI with bright PET features resembling healthy tissues, which is interesting for the visualization of high brain activity due to external stimuli. Ideally, a guidance MRI image should have low gradients, meaning a fusion method should be less sensitive to changes in both dark and bright MRI regions. \textit{Weighted Averaging}: For the region with bright MRI and dark PET intensities in both test examples, it can be observed that the guidance MRI image is significantly brighter than the guidance PET image, revealing a higher sensitivity to changes in the MRI features. This suggests that a small change in the MRI pixel intensities will sharply modify the fused pixel intensities, while a substantial alteration of the PET pixel intensities is required to have a considerable effect on the fused pixel intensities. This is an undesired outcome, as a fusion method should be stable to changes in the pixel intensities of MRI. The guidance RGB image shows magenta in these regions, as the red channel containing the guidance MRI image has high gradients and the blue channel containing the fused image has high pixel intensities compared to the green channel containing the guidance PET image with low gradients. On the other hand, the region with dark MRI and bright PET intensities shows the opposite effect, where the guidance PET image is relatively brighter compared to the guidance MRI image, since higher weightage is now given to the bright PET grayscale values. The guidance RGB image in this region shows cyan, since the green channel with the guidance PET image has high gradients while the blue channel with the fused image also has high intensities. \textit{FunFuseAn}: For the region with bright MRI and dark PET intensities, there are higher gradients in the guidance MRI image compared to the guidance PET image. This conveys that the fused pixels within the necrotic core have a higher sensitivity to changes in the MRI pixels. The guidance RGB image also shows red within the necrotic region, indicating that only the guidance MRI image has high intensities compared to the images in the other color channels. For the region with dark MRI and bright PET intensities, the guidance MRI and guidance PET images have low gradients.
This means that the fused pixels in this region are quite stable to changes in the MRI and PET pixel intensities. Hence, the color in the guidance RGB image is predominantly blue, as the fused pixel intensities are higher than the gradient values in both guidance images. Therefore, it is challenging to interpret which of the two input modalities has a higher influence on the fused image in this particular region. \textit{DeepFuse}: In the bright MRI and dark PET region resembling the necrotic core, there are relatively high gradient values in the guidance MRI image compared to the guidance PET image. This is the reason for the shades of magenta in the guidance RGB image. For the dark MRI and bright PET intensity region in the first test example, the pattern is similar: there are higher gradients in the guidance MRI image compared to the guidance PET image, due to which magenta is visible in the guidance RGB image. For the second test example, however, the guidance images from both MRI and PET have low gradients, indicating a low influence of changes in the input pixel intensities within this region. Hence, the guidance RGB image also appears light blue in this region. \begin{figure}[!htb] \centering \includegraphics[width=15cm]{Guidance_RGB_images.pdf} \caption{The figure shows the guidance RGB images for each of the fusion methods.} \label{fig:guidance RGB} \end{figure} \textit{MaskNet \& DeepPedestrian}: For the necrotic region with bright MRI and dark PET intensities in the first test example, the guidance PET image is relatively brighter than the guidance MRI image for both the \textit{MaskNet} and \textit{DeepPedestrian} networks. This conveys that both networks are highly sensitive to changes in the very dark PET intensities of the necrotic region compared to changes in the MRI intensities. Since both the guidance MRI image and the fused image have low intensities compared to the guidance PET image in this region, the guidance RGB image exhibits green. However, for the second test example, both guidance images have very low gradients in the same region, showcasing the stability of the networks to pixel intensity changes in this region. Since all the color channels of the guidance RGB image have low intensities, the guidance RGB image does not have a fixed color scheme and is dark. For the region with dark MRI and bright PET intensities, however, the behavior is similar in both test examples, where both guidance images have low gradients, due to which it is challenging to differentiate the influence of each of the modalities, while the \textit{MaskNet} network has learned the bright PET features better, leading to blue in the guidance RGB image. \textit{Summary}: The guidance images provided a static overview of the influence of the input principal pixels on the fused principal pixels and assisted in visualizing which of the two modalities has a higher influence in the clinical regions of interest. Therefore, they provided new insights related to the stability of the fusion networks in these regions which were not perceivable by looking at the fused images. The guidance images clearly show that the \textit{MaskNet} and \textit{DeepPedestrian} networks performed very differently compared to the other fusion methods.
For the region with the dark PET features resembling the necrotic core, all the methods except \textit{MaskNet} and \textit{DeepPedestrian} were sensitive to changes in the bright MRI intensities, which is not suitable for a reliable analysis of the necrotic tumor boundary, since the dark PET features might not be properly preserved in the fused image when the MRI features change. However, both \textit{MaskNet} and \textit{DeepPedestrian} preserved the dark PET features in the fused image and were quite stable to changes in the MRI pixel intensities, due to which these two methods are far more suitable for the clinical application than the other fusion methods. However, the guidance images do not reveal the influence of the neighborhood pixels on the fused principal pixel, which is important for deciding which of these two networks provides the better fusion approach. \begin{figure}[!htb] \centering \includegraphics[width=15cm]{Jacobian_images.pdf} \caption{The figure shows the Jacobian MRI and Jacobian PET images for each of the fusion methods in the clinical regions of interest. $\gamma_{corr1}$ was fixed at 0.3 for all the Jacobian images.} \label{fig:jacobian} \end{figure} \subsection{Jacobian images} The Jacobian images of the selected principal pixel are shown in Figure \ref{fig:jacobian}. The analysis of the Jacobian images helps in understanding the influence of the neighborhood pixels located around the principal pixel in the clinically relevant regions of the input image. It also assists in visually comparing the neighborhood influence of the MRI pixels with that of the PET pixels. \textit{Weighted Averaging}: The Jacobian MRI and Jacobian PET images within the region with bright MRI and dark PET intensities show, for both test examples, a higher gradient of the principal pixel in the Jacobian MRI image than in the Jacobian PET image. For the dark MRI and bright PET intensities, there is a relatively high gradient at the principal pixel of the Jacobian PET image, showcasing sensitivity to changes in the bright PET pixel. One crucial observation from each of the Jacobian images is that there is no influence of the neighborhood pixels on the prediction of the fused principal pixel, since \textit{Weighted Averaging} computes the fused pixel intensities per pixel. \textit{FunFuseAn}: Analyzing the Jacobian MRI and Jacobian PET images within the region of bright MRI and dark PET intensities, it can be observed that, similar to the \textit{Weighted Averaging} method, there is a higher gradient value at the principal pixel of the Jacobian MRI image. The immediate neighborhood pixels in the Jacobian MRI image also have significant gradient values, meaning that these pixels also influence the outcome of the fused principal pixel. This is clinically useful, since changes in the intensities of pixels located within the necrotic core should affect the fused principal pixel. Additionally, there are more positive gradient values for the outer neighborhood pixels in the Jacobian MRI image than in the Jacobian PET image. For the combination of dark MRI and bright PET intensities, however, there are low gradients at the principal pixel in both the Jacobian MRI and Jacobian PET images. This conveys that the fused principal pixel is relatively stable to small changes in the bright PET and dark MRI pixel intensities. Hence, visualizing the principal pixel in the Jacobian images does not give much information about which input principal pixel has a higher influence on the fused principal pixel.
On the other hand, the neighborhood pixel influence on the fused principal pixel is quite widespread, owing to the large number of positive gradients in both the Jacobian MRI and Jacobian PET images. \textit{MaskNet}: For the combination of bright MRI and dark PET intensities, the Jacobian images in the first test example convey quite distinct characteristics compared to the Jacobian images in the second test example. For the first test example, the Jacobian PET image has a much higher gradient at the principal pixel than the Jacobian MRI image. This reveals a high sensitivity of the fused principal pixel within the necrotic tumor core with respect to small changes in the pixel intensity of the PET principal pixel, whereas the fused principal pixel is quite stable to changes in the MRI principal pixel. Additionally, the influence on the fused principal pixel of the PET neighborhood pixels within the dark necrotic tumor core is much stronger than that of the MRI neighborhood pixels. This conveys that changes in the PET pixel intensities inside the dark necrotic core can greatly affect the fused principal pixel, while the PET pixels outside the necrotic core boundary have a negligible effect on it. However, in the second test example, both the Jacobian MRI and Jacobian PET images have low gradients at the principal pixel as well as in the neighborhood region within the necrotic core. This demonstrates that the network requires quite significant changes in these features to change the fusion output. For the combination of dark MRI and bright PET intensities in both test examples, the Jacobian MRI and Jacobian PET images have properties similar to \textit{FunFuseAn}: there are low gradients at the principal pixel in both images, so the two modalities influence the fused principal pixel equally. However, fewer neighborhood pixels in the Jacobian MRI and Jacobian PET images influence the principal pixel than for the \textit{FunFuseAn} network. \begin{figure}[!htb] \centering \includegraphics[width=\textwidth]{Scatterplots.pdf} \caption{The figure shows the scatterplots between the gradients of the guidance MRI and guidance PET images for each of the fusion methods. The green scatter points are the gradients for the pixels in the zoomed region of interest.} \label{fig:scatterplots} \end{figure} \textit{DeepFuse}: The Jacobian images in both test examples, when visualized in the regions of bright MRI and dark PET intensities, show a significantly high number of neighborhood pixels influencing the fused principal pixel, while a higher gradient value at the principal pixel can be seen in the Jacobian MRI image than in the Jacobian PET image. Therefore, the fused principal pixel is highly sensitive to changes in the MRI pixel while it is stable to changes in the PET pixel. Analyzing the Jacobian images for the combination of dark MRI and bright PET intensities, it is observable that the gradient values at the principal pixel are low, showcasing stability to the pixel intensity changes from both modalities, while widespread neighborhood pixels influence the fused principal pixel. \textit{DeepPedestrian}: The combination of bright MRI and dark PET intensities in the first test example shows different Jacobian results compared to the second test example.
It can be noticed that the gradient value at the principal pixel in the Jacobian MRI image is low, while the gradient at the principal pixel is high in the Jacobian PET image. This suggests that the fusion network is very stable to changes in the MRI pixel while being sensitive to changes in the PET pixel within the necrotic tumor core. There are also positive gradients for the neighborhood pixels within the clinically important necrotic sub-region of the glioma in the Jacobian PET image, due to which these pixels also influence the prediction of the fused principal pixel. In the Jacobian PET image, there is some influence from the neighborhood pixels outside the necrotic tumor core, but they have low gradients compared to the gradients of the neighborhood pixels within the necrotic core. The Jacobian images for the bright MRI and dark PET intensities in the second test example have unique characteristics, since both the Jacobian MRI and Jacobian PET images have very low gradients at the principal pixel as well as at the neighborhood pixels. Therefore, the network is very stable to changes in the pixel intensities in this specific region of interest. For the combination of dark MRI and bright PET intensities, both test examples show similar results: the neighborhood pixel influence on the fused principal pixel is relatively low in both input modalities, while the gradients at the principal pixel are also low, meaning that the network is not influenced by changes in the pixels of either input modality. \textit{Summary}: The analysis of the Jacobian images helped to visually compare the influence of the input neighborhood pixels from each input modality on the fused principal pixel. It was observed that, for most of the fusion-based neural networks, the neighborhood pixels have a significant influence on the fused principal pixel. This visual analysis was not possible by only interpreting the guidance and fused images, which makes the Jacobian images an additional means of interpreting the suitability of a particular fusion method in a clinical setup. It was observed that \textit{Weighted Averaging} has no neighborhood influence, which is explained by the fact that it is a per-pixel computation scheme. The \textit{MaskNet} network, however, provides clinically favorable results, as it has a high influence of neighborhood pixels within the necrotic core and a very low influence from the pixels outside the necrotic core boundary. \subsection{Scatterplots} The scatterplots shown in Figure \ref{fig:scatterplots} help in understanding the relationship between the gradients of the guidance MRI and guidance PET images: each point represents a pixel, with the gradient value in the guidance MRI image on the horizontal axis and the gradient value in the guidance PET image on the vertical axis. A positive correlation between these gradients would mean that both input modalities influence the fused pixel with equal strength. A negative correlation would show that an increase in the influence of one modality leads to a decrease in the influence of the other modality. In cases where the gradients from one modality are constant and only the gradients from the other modality vary, the feature in the modality with varying gradients is localized in only a subset of the pixels. If there is no correlation, we cannot draw any conclusion from the scatterplot.
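As a rough illustration of how such a scatterplot can be assembled, the following PyTorch sketch computes the same-location guidance gradients for every pixel and plots them against each other. The network \texttt{fusion\_net}, the image size, and the loop-based Jacobian evaluation are hypothetical placeholders and illustrative simplifications, not the actual FuseVis implementation.
\begin{verbatim}
import torch
import matplotlib.pyplot as plt

# Hypothetical stand-in for a trained fusion network mapping a stacked
# (MRI, PET) pair to a fused image; any nn.Module of this shape works.
fusion_net = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)

mri = torch.rand(1, 1, 32, 32, requires_grad=True)
pet = torch.rand(1, 1, 32, 32, requires_grad=True)

g_mri = torch.zeros(32, 32)  # guidance MRI: |d fused(i,j) / d mri(i,j)|
g_pet = torch.zeros(32, 32)  # guidance PET: |d fused(i,j) / d pet(i,j)|

for i in range(32):
    for j in range(32):
        fused = fusion_net(torch.cat([mri, pet], dim=1))
        # One backpropagation pass per output pixel: this yields the full
        # Jacobian row of fused(i, j) with respect to both input images.
        gm, gp = torch.autograd.grad(fused[0, 0, i, j], (mri, pet))
        g_mri[i, j] = gm[0, 0, i, j].abs()  # keep same-location entry only
        g_pet[i, j] = gp[0, 0, i, j].abs()

# Scatterplot of the guidance gradients: one point per pixel.
plt.scatter(g_mri.flatten().numpy(), g_pet.flatten().numpy(), s=4)
plt.xlabel("guidance MRI gradient")
plt.ylabel("guidance PET gradient")
plt.show()
\end{verbatim}
Running one backpropagation pass per output pixel is also why the static guidance images in Table \ref{tab:latency} take minutes to precompute, whereas a single Jacobian image for one principal pixel is obtained in milliseconds.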
Examining the scatterplots for each of the fusion methods, no clear relationship between the guidance MRI and guidance PET images can be observed, except for the positive correlation between the low gradient values of the guidance MRI and guidance PET images in the \textit{FunFuseAn} and \textit{DeepFuse} networks, as well as a negative correlation between the gradients in the \textit{Weighted Averaging} method. However, by looking at the gradients within the zoomed regions of interest, shown by green points, correlation patterns for a few of the fusion networks can be determined. The scatterplot for \textit{Weighted Averaging} shows a negative correlation between the guidance MRI and guidance PET images. Therefore, for the bright MRI and dark PET intensity setups in each of the two test examples, higher gradient values in the guidance MRI image cause the green scatter points to tilt towards the higher end of the horizontal axis, while for the dark MRI and bright PET intensity setup in the two test examples, the scatter points tilt in the opposite direction. This behavior shows that an increase in the influence of one modality leads to a decrease in the influence of the other, and that \textit{Weighted Averaging} prefers only brightness as a feature, which is clinically uninteresting. The scatterplot of \textit{FunFuseAn} shows a positive correlation between the lower gradient values of the guidance MRI and guidance PET images, and there is no correlation between the other gradient values. However, by visualizing the green points for dark MRI and bright PET intensities, we can observe a positive correlation between the gradient values, which means that both input modalities equally influence the fused pixel in this region. For the bright MRI and dark PET intensity setup, the scatter points are distributed in such a way that the gradients from the guidance MRI image are high and constant while the gradients from the guidance PET image vary. The scatterplots for the \textit{MaskNet} network in the bright MRI and dark PET intensity setup of the first test example show a positive correlation between the gradient values. In the second test example, though, a positive correlation is seen only between the very low gradient values of the guidance MRI and guidance PET images, while for higher gradients it is difficult to interpret a correlation. For the dark MRI and bright PET intensity setup, the gradients show a positive correlation in both test examples. The scatterplots for \textit{DeepFuse} do not show a clear correlation pattern between the gradients of the guidance MRI and guidance PET images, even after interpreting the green scatter points in the specific regions of interest. For the second test example in the bright MRI and dark PET region of \textit{DeepPedestrian}, the gradients of the guidance PET image vary considerably while the gradients of the guidance MRI image remain low and constant. In the dark MRI and bright PET intensity setup for both test examples, \textit{DeepPedestrian} has gradients that are closely clustered, and it is challenging to interpret a correlation. \textit{Summary}: The scatterplots helped in interpreting the correlation between the gradients of the guidance MRI and guidance PET images for each of the fusion methods, which was difficult to estimate by visualizing only the Jacobian and guidance images.
These correlations are important to visualize, since an ideal fusion method should not have equal influence from both input modalities in regions where one modality carries more clinically important features than the other. For example, in the region with dark PET features resembling the necrotic core, the gradients from the guidance MRI image should always be low and the scatterplot should not show a positive correlation between the gradients. The scatterplots of \textit{MaskNet} and \textit{DeepPedestrian} showed such characteristics, but only for one of the test examples. \subsection{Memory and frame rates} Table \ref{tab:latency} shows the timing results of the Jacobian computations as well as the frame rates of our FuseVis tool for the visualization of Jacobian images. As can be seen from the results, the computation of Jacobians is quite fast due to the powerful GPU hardware used during the experiments. The timings are averaged over 100 randomly selected principal pixels. The results reveal that \textit{FunFuseAn} and \textit{DeepFuse} are the fastest among the fusion-based networks. One reason is that these networks have far fewer parameters to optimize. Networks such as \textit{MaskNet} and \textit{DeepPedestrian} have a large number of trained parameters and dense hidden layers, due to which automatic differentiation through backpropagation requires more computation time. We also estimated frame rates within a static framework, where we iteratively saved the Jacobian images for each of the principal pixels without using our FuseVis tool. Then, we displayed these images in the FuseVis tool during the mouseover operation. For this, we measured a frame rate of almost 40 frames per second (fps), which is expected considering that no Jacobian computations with backpropagation passes were involved for each change of the principal pixel during the mouseover operation. However, the disadvantage of such a setup is that it takes around 3~GB of memory to save the Jacobian images locally for each input modality at 100 dots per inch (DPI) resolution, whereas our tool requires no additional memory. \begin{table}[H] \caption{The table shows the timing and frame rate results for each of the fusion-based neural networks.} \centering \begin{tabular}{|P{4.5cm}|P{1.5cm}|P{1.3cm}|P{1.3cm}|P{2cm}|} \hline Setting & FunFuseAn & MaskNet & DeepFuse & DeepPedestrian \\ \hline Jacobian computations & 0.003 s & 0.004 s & 0.003 s & 0.005 s \\ \hline FuseVis - Jacobian images & 20 fps & 15 fps & 20 fps & 10 fps \\ \hline Guidance images & 198 s & 265 s & 195 s & 360 s \\ \hline \end{tabular} \label{tab:latency} \end{table} \section{Conclusions} In this work, we presented a novel interactive approach to visually inspect fusion networks. To achieve this, we developed an easy-to-use \textit{FuseVis} tool that enables the end user to visualize Jacobian images of the selected principal pixel in real time while performing a mouseover interaction. The real-time generation of Jacobian images through an interactive user interface showcases the influence of the pixels in the input images on a fused pixel. It is an intuitive idea that helps to overcome the huge computational complexity of generating per-pixel Jacobian images, which involves several backpropagation passes. The FuseVis tool also enables the user to visualize guidance images, which estimate the sensitivity of the image fusion networks to changes in the input features.
Therefore, the guidance images are a very useful technique to get a concise overview of the huge information content of the Jacobian images, by analyzing the influence of the input pixel on the fused pixel at the same location for all pixels. We currently train the different fusion architectures in a similar experimental setting to foster a suitable comparison of black-box fusion algorithms for a critical application in medical diagnosis. However, our first-of-its-kind visualization tool can easily be used to interpret any neural network in its original experimental setup. By visually analyzing the neural networks for image fusion within a specific clinical application, it was found that only \textit{MaskNet} and \textit{DeepPedestrian} provided results that would be helpful to clinicians. However, these methods do not preserve all clinically relevant features efficiently, due to which they cannot yet be used in a clinical setup. Additionally, it is not trivial to obtain a fusion ground truth of an ideal fused image based on the specific clinical requirements. Therefore, there is a need to train fusion-based neural networks using annotations provided by medical artists. We expect our work to enable further research in the field of visual explanation and interpretation of fusion-based neural networks, as well as of neural networks for other image processing applications. \vspace{6pt} \funding{This work was supported by the European Social Fund and the Free State of Saxony under the NeuroFusion grant (project no. 100312752) and the SePIA grant (project no. 100299506).} \acknowledgments{Some of the computations were also performed on an HPC cluster at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden.} \conflictsofinterest{The authors declare that there is no conflict of interest.} \reftitle{References}
{ "timestamp": "2020-12-17T02:18:19", "yymm": "2012", "arxiv_id": "2012.08932", "language": "en", "url": "https://arxiv.org/abs/2012.08932" }
\section{Experimental details} The experimental set-up, as sketched in Fig.~1a of the main text, comprises a magnetically trapped BEC of $N_\mathrm{a} = 65\times 10^3$ $^{87}$Rb atoms, dispersively coupled to a narrow-band high-finesse optical cavity. The cavity field has a decay rate of $\kappa=~2\pi \times$4.55~kHz, which is of the same order of magnitude as the recoil frequency $\omega_\mathrm{rec} = E_\mathrm{rec}/\hbar =~2\pi \times$3.55~kHz. The wavelength of the pump laser is $\lambda_\mathrm{P} = 803\,$nm, which is red-detuned with respect to the relevant atomic transition of $^{87}$Rb at 795~nm. The maximum light shift per atom is $U_0 = -2\pi \times 0.36~\mathrm{Hz}$. We fix the effective detuning to $\delta_{\mathrm{eff}} \equiv \delta_{\mathrm{C}} - (1/2)N_\mathrm{a} U_0=-2 \pi \times 18.5~\mathrm{kHz}$, where $\delta_{\mathrm{C}} = \omega_{\mathrm{P}}-\omega_{\mathrm{C}}$ is the pump-cavity detuning. An experimental sequence starts by preparing the system in the self-organized density wave (DW) phase. This is achieved by linearly increasing the pump strength $\epsilon$ from zero to its final value $\epsilon_\mathrm{0}=3.3~E_\mathrm{rec}$ in 10~ms at the fixed effective detuning $\delta_\mathrm{eff}=-2\pi \times$18.5~kHz. \begin{figure}[!htbp] \centering \includegraphics[width=0.5\columnwidth]{sfig1.pdf} \caption{{$\mathbb{Z}_2$ symmetry breaking in space.} {(a)} Pump protocol starting in the DW phase, tuning into the BEC phase and back to the DW phase. {(b),(c)} Relative phases $\phi$ (blue) and intracavity photon numbers $N_\mathrm{P}$ (red), measured by the heterodyne detector for single experimental runs, showing the two typical outcomes (b) $\delta \phi \approx 0$ and (c) $\delta \phi \approx \pi$. {(d)} Histogram of the phase difference $\delta \phi$ for 397 experimental runs.} \label{sfig:1} \end{figure} \section{$\mathbb{Z}_2$ symmetry breaking in space} Due to optical path length drifts, we cannot compare the phase $\phi$ for DW realisations from different experimental runs. The stability of our balanced heterodyne detection is, however, sufficient to compare the phase for two subsequent DW realisations within the same experimental sequence, applying the pump protocol in Fig.~\ref{sfig:1}(a). In a perfect system, the phase difference $\delta \phi$ between two subsequent realisations of the DW phase can take two values, $\delta \phi = 0$ or $\delta \phi = \pi$, and does not depend on any system parameters. Since the underlying discrete symmetry breaks spontaneously, we expect equiprobable realisation of the two possible outcomes shown in Figs.~\ref{sfig:1}(b) and \ref{sfig:1}(c). The relative occurrence of $\delta \phi$ for 397 realisations is plotted in Fig.~\ref{sfig:1}(d) using a binning of $0.08~\pi$. The two maxima, corresponding to $\delta \phi = 0$ and $\delta \phi = \pi$, are clearly distinguishable, and the ratio of realisations with $\delta \phi \in[-\frac{\pi}{2},\frac{\pi}{2}]$ to those with $\delta \phi \in[\frac{\pi}{2},\frac{3}{2}\pi]$ is 1.15. This number is close to unity, which shows that the underlying spatial $\mathbb{Z}_2$ symmetry of our system is well established. \section{$\mathbb{Z}_2$ symmetry breaking in time} In this section we show how the $\mathbb{Z}_2$ symmetry breaking associated with the DW phase leads to a spontaneous breaking of the discrete $\mathbb{Z}_2$ time translation symmetry associated with the modulated pump strength.
Again, the stability of the phase reference of our heterodyne detection system is not sufficient to compare the phases $\phi$ for different experimental runs. Therefore, we follow a procedure similar to that of Fig.~\ref{sfig:1}, entering the DW phase twice within the same experimental run. After entering the DW phase for the second time, we start to modulate the pump strength in the same way as in Fig.~1 of the main text. The applied pump protocol is presented in Fig.~\ref{sfig:2}(a). As discussed for Fig.~\ref{sfig:1}(d), the phase difference $\delta \phi$ between two subsequent realisations of the DW phase can take two values, $\delta \phi = 0$ or $\delta \phi = \pi$, and does not depend on any system parameters. As a consequence, the time-phase difference between the observed subharmonic time-crystal oscillation and the oscillation of the pump strength is constrained to the possible values zero and $\pi$. This is seen by evaluating the relative phase $\delta \phi$ at the time $t_{\textrm{max}}$, where the modulated pump strength acquires a maximum, indicated by the vertical black line in Fig.~\ref{sfig:2}(a) at $t \approx 3 T_D$. Since the underlying discrete symmetry breaks spontaneously, we expect equiprobable realisation of the two possible outcomes shown in Figs.~\ref{sfig:2}(b) and \ref{sfig:2}(c). The normalised complex value $S_\mathrm{\phi}(\omega_\mathrm{D}/2)$ of the Fourier spectrum of the relative phase $\delta \phi(t)$ at the subharmonic frequency, rescaled by its maximum, is shown in Fig.~\ref{sfig:2}(d) for 423 experimental runs. The ratio of occurrences with $Re[S_\phi(\omega_\mathrm{D}/2)]<0$ to those with $Re[S_\phi(\omega_\mathrm{D}/2)]>0$ is 1.05, which shows that the discrete time translation symmetry associated with the modulation is well established. \begin{figure}[tbp] \centering \includegraphics[width=0.9\columnwidth]{sfig2.pdf} \caption{Spontaneous breaking of the {$\mathbb{Z}_2$ time translation symmetry.} {(a)} Pump protocol starting in the DW phase, tuning into the BEC phase and back to the DW phase. After a waiting time of 0.5~ms, the modulation strength $f_\mathrm{0}$ is linearly increased to $f_\mathrm{0}=0.3$. {(b),(c)} Relative phases $\phi$ (blue) and intracavity photon numbers $N_\mathrm{P}$ (red), measured by the heterodyne detector for single experimental runs, showing the two typical outcomes $\delta \phi = 0$ (b) or $\delta \phi = \pi$ (c). As a consequence, the time-phase difference between the subharmonic response and the modulated pump strength is also constrained to the values zero and $\pi$. This is seen by observing the relative phase $\delta \phi$ at the time $t_{\textrm{max}}$, where the modulated pump strength acquires a maximum, indicated by the vertical black dashed line at $t \approx 3 T_D$. {(d)} Normalised Fourier component $S_\mathrm{\phi}(\omega_\mathrm{D}/2)$ of the relative phase $\delta \phi(t)$, rescaled by its maximum, for 423 experimental runs.} \label{sfig:2} \end{figure} \section{Theoretical model} In the frame rotating at the pump frequency $\omega_\mathrm{P} = 2 \pi c / \lambda_\mathrm{P}$, the Hamiltonian of the system reads\cite{Ritsch2013,Nagy:2008hk} \begin{equation}\label{eq:2ham} \hat{H} = \hat{H}_\mathrm{C} + \hat{H}_\mathrm{A} + \hat{H}_\mathrm{AA} +\hat{H}_\mathrm{AC}.
\end{equation} In Eq.~\eqref{eq:2ham}, the Hamiltonian for the cavity with a single mode function $\cos(kz)$ is \begin{equation} \hat{H}_\mathrm{C} = -\hbar\delta_{\mathrm{C}} \hat{a}^{\dagger}\hat{a}, \end{equation} where $\hat{a}$ ($\hat{a}^{\dagger}$) is the cavity mode annihilation (creation) operator. The single-particle Hamiltonian for the atoms is given by \begin{equation} \hat{H}_\mathrm{A} =\int dy\, dz\, \hat{\Psi}^{\dagger}(y,z)\left[-\frac{\hbar^2}{2m}\nabla^2 + \epsilon\, \cos^2(ky) \right]\hat{\Psi}(y,z), \end{equation} where $m$ is the mass of an atom and $\hat{\Psi}(y,z)$ is the atomic field operator. The short-range collisional interaction between the atoms is captured by the Hamiltonian \begin{equation} \hat{H}_\mathrm{AA} =U_\mathrm{a} \int dy\, dz\, \hat{\Psi}^{\dagger}(y,z)\hat{\Psi}^{\dagger}(y,z)\hat{\Psi}(y,z)\hat{\Psi}(y,z). \end{equation} The effective 2D interaction strength is $U_\mathrm{a} = \sqrt{2\pi}a_s \hbar^2/m\ell_x$, where $a_s$ is the $s$-wave scattering length and $\ell_x$ is the harmonic oscillator length in the $x$ direction. The Hamiltonian for the light-matter interaction reads \begin{align} \hat{H}_\mathrm{AC} =\hbar U_0 \int dy &dz \hat{\Psi}^{\dagger}(y,z)\biggl[ \cos^2(kz)\hat{a}^{\dagger} \hat{a} \\ \nonumber &+ \alpha_{\mathrm{P}} \left( \hat{a} + \hat{a}^{\dagger} \right) \cos(ky) \cos(kz) \biggr]\hat{\Psi}(y,z), \end{align} where $\alpha_{\mathrm{P}} \equiv \sqrt{\epsilon/\hbar|U_0|}$ is the dimensionless amplitude of the pump field. The dynamics of the system follows from the Heisenberg-Langevin equations, \begin{align} \frac{\partial}{\partial t} \hat{\Psi} &= \frac{i}{\hbar}[\hat{H}, \hat{\Psi}] \\ \frac{\partial}{\partial t} \hat{a} &= \frac{i}{\hbar}[\hat{H}, \hat{a}] - \kappa \hat{a} + \xi, \end{align} where the stochastic noise term $\xi$ satisfies $\langle \xi^*(t)\xi(t') \rangle = \kappa \delta(t-t')$. We simulate the dynamics in the semiclassical limit by transforming $\hat{\Psi}$ and $\hat{a}$ into classical fields according to the truncated Wigner approximation (TWA) method \cite{Polkovnikov2010,Blakie2008,Carusotto2013}. \begin{figure}[!htbp] \centering \includegraphics[width=0.4\columnwidth]{sfig3.pdf} \caption{{Mean-field stability region of the DTC.} The blue area denotes the region in the $(f_0,\omega_{\mathrm{D}})$-plane where a stable period-doubling response exists within the mean-field model in the absence of the short-range interaction.} \label{sfig:3} \end{figure} The TWA is a semiclassical phase space method that goes beyond mean-field theory and can be utilised to test the robustness of time crystals against quantum and stochastic noise due to the dissipative cavity \cite{Cosme2019,Kessler2020}. We numerically integrate the resulting equations of motion for an ensemble of $10^3$ initial conditions, which sample the initial quantum noise in the fields and the stochastic noise due to the dissipative cavity. In our simulations, apart from $\epsilon_0$, which is chosen as $\epsilon_0 = 1.03~\epsilon_{\mathrm{cr}}$, where $\epsilon_{\mathrm{cr}}$ is the critical pump strength for the BEC-DW phase transition, we use the same parameters and protocol for the pump strength as in the experiment. When comparing calculations of $N_{\mathrm{P}}(t)$ and $C(t)$ for variable collisional interaction strengths $E_a$ in Fig.~4 of the main text, we adjust the pump strengths such that the number of intracavity photons in the DW phase is fixed to the same value.
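For concreteness, the following is a minimal numpy sketch of the TWA recipe described above, applied to a toy damped, driven cavity mode rather than to the full atom-cavity system; the drive amplitude, time step, and trajectory count are illustrative assumptions, not the values used in our simulations.
\begin{verbatim}
import numpy as np

# Toy TWA sketch for a single damped, driven cavity mode.
kappa = 2 * np.pi * 4.55e3     # cavity decay rate (rad/s)
delta = -2 * np.pi * 18.5e3    # effective detuning (rad/s)
eta = 1.0e4                    # coherent drive amplitude (illustrative)
dt, n_steps, n_traj = 1e-7, 20000, 1000

rng = np.random.default_rng(0)

# Sample initial conditions from the vacuum Wigner distribution:
# each quadrature carries variance 1/4, i.e. half a quantum of noise.
alpha = (rng.normal(size=n_traj) + 1j * rng.normal(size=n_traj)) / 2.0

for _ in range(n_steps):
    # Complex Wiener increment with <dW* dW> = dt, reproducing the input
    # noise correlator <xi*(t) xi(t')> = kappa * delta(t - t') above.
    dW = (rng.normal(size=n_traj)
          + 1j * rng.normal(size=n_traj)) * np.sqrt(dt / 2)
    alpha += ((1j * delta - kappa) * alpha - 1j * eta) * dt \
             + np.sqrt(kappa) * dW

# Wigner averages are symmetrically ordered; subtract the half quantum
# to estimate the intracavity photon number <a^dagger a>.
n_photons = np.mean(np.abs(alpha) ** 2) - 0.5
print(f"steady-state photon number estimate: {n_photons:.2f}")
\end{verbatim}
In the actual simulations, the atomic field is evolved alongside the cavity mode and the observables are averaged over the ensemble of $10^3$ such noisy trajectories.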
\section{Mean-field phase diagram} To obtain a rough orientation with regard to the system parameters suitable for the appearance of a dissipative time crystal (DTC) phase, we construct a dynamical phase diagram in the clean mean-field limit, wherein the short-range interaction, which breaks the mean-field description, is neglected. In particular, we seek period-doubling solutions that are stable for at least 40 modulation cycles. As depicted in Fig.~\ref{sfig:3}, modulation frequencies in the range $\omega_{\mathrm{D}}\in 2\pi \times [2,8]~\mathrm{kHz}$ provide an island with a stable DTC phase. This is consistent with the experimental results in Fig.~2(g) of the main text. Figs.~\ref{sfig:4}(a) and \ref{sfig:4}(b) show single-shot measurements of the evolution of the intracavity photon number $N_\mathrm{P}$ (red) and the relative phase $\phi$ between the pump and the cavity light field (blue). In Figs.~\ref{sfig:4}(c) and \ref{sfig:4}(d), the corresponding mean-field simulations including phenomenological atom loss are presented. \begin{figure}[!htbp] \centering \includegraphics[width=0.7\columnwidth]{sfig4.pdf} \caption{{Comparison of experimental data to mean-field simulations including phenomenological atom loss.} {(a)} Time sequence for the pump with modulation strength $f_\mathrm{0}=$~0.3 and modulation period $T_\mathrm{D} = 0.25\,$ms. In the time interval delimited by dashed lines, $f_\mathrm{0}$ is linearly ramped from zero to its desired value. {(b)} The corresponding response of the intracavity photon number $N_\mathrm{P}$ (red) and the relative phase $\phi$ between the pump and the cavity light field (blue). {(c),(d)} Corresponding mean-field simulations including atom loss.} \label{sfig:4} \end{figure} \section{Theoretical results with temporal disorder} \begin{figure}[!htbp] \centering \includegraphics[width=0.6\columnwidth]{sfig5.pdf} \caption{Numerical results from TWA for a noisy drive. {(a)-(c)} Short-time and {(d)-(f)} long-time dynamics. {(a),(d)} Single realization of the disordered drive [dark] and the clean periodic drive [light]. TWA results for the {(b),(e)} intracavity photon number and {(c),(f)} non-equal-time correlation. The modulation strength is $f_\mathrm{0}=$~0.3 and the modulation period is $T_\mathrm{D} = 0.25\,$ms.} \label{sfig:5} \end{figure} In Fig.~\ref{sfig:5}, we present the results of our TWA simulations for a noisy drive. Specifically, we add Gaussian white noise to the pump strength signal. An exemplary trace of the noisy drive is shown in Figs.~\ref{sfig:5}(a) and \ref{sfig:5}(d). Note, however, that the noise in the numerical results shown here is band-limited to 0.025 GHz, which is set by the integration step of our stochastic differential equation solver. In contrast, the noise in the experiment is band-limited to 50 kHz. This explains the more intermittent appearance of the noise in the pump signal when Fig.~\ref{sfig:5} is compared to Fig.~3 of the main text. Similar to the experiment, we quantify the noise strength by $n \equiv \sum_{\omega} |\mathcal{E}_\mathrm{noisy}(\omega)|/\sum_{\omega}|\mathcal{E}_\mathrm{clean}(\omega)|$, where $\mathcal{E}_\mathrm{noisy}$ ($\mathcal{E}_\mathrm{clean}$) is the Fourier spectrum of the pump in the presence (absence) of white noise. The noise strength used in Fig.~\ref{sfig:5} is $n=2.0$.
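As a small illustration of this measure, the following numpy sketch evaluates $n$ for a synthetic modulated drive; the drive form and noise amplitude are illustrative assumptions rather than the simulated pump protocol.
\begin{verbatim}
import numpy as np

# Sketch of the noise-strength measure n defined above, evaluated for a
# synthetic modulated pump; drive form and noise level are illustrative.
T_D = 0.25e-3                          # modulation period (s)
t = np.arange(0.0, 100 * T_D, 1e-6)    # time grid
f0 = 0.3

clean = 1.0 + f0 * np.cos(2 * np.pi * t / T_D)   # clean periodic drive
rng = np.random.default_rng(1)
noisy = clean + 0.05 * rng.normal(size=t.size)   # add white noise

# n = sum_w |E_noisy(w)| / sum_w |E_clean(w)| over the Fourier spectra
n = np.abs(np.fft.rfft(noisy)).sum() / np.abs(np.fft.rfft(clean)).sum()
print(f"noise strength n = {n:.2f}")
\end{verbatim}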
For this relatively weak temporal disorder, which breaks the discrete time translation symmetry imposed by the drive, our TWA results suggest that the system still exhibits long-lived period doubling without any sign of decay after $\sim 350$ driving cycles. This corroborates the robustness of the DTC against temporal perturbations, as explored experimentally in the main text. \section{Mapping to the Dicke model} The period-doubling instability of the DTC can be understood using a simple, albeit incomplete, description based on the mapping of the full atom-cavity Hamiltonian onto the Dicke model via the Schwinger-boson representation. Using the Holstein-Primakoff representation in the thermodynamic limit $N \to \infty$, the collective spin in the Dicke model can be transformed back into bosons, leading to a coupled oscillator system in which the coupling strength is periodically driven. This coupled oscillator Hamiltonian can then be diagonalised to obtain a Hamiltonian for the lower and upper polaritonic states, whose respective frequencies are parametrically driven. Thus, driving at twice the lower polariton frequency leads to an exponential instability, which translates into a period-doubling response in the full atom-cavity model due to the presence of dissipation and the nonlinearity of the cavity-mediated interaction between the atoms.
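Schematically, this final step can be captured by a single parametrically driven mode at the lower polariton frequency $\omega_\mathrm{LP}$; this Mathieu-type reduction is only illustrative and omits the second polariton, the dissipation, and the nonlinearity,
\begin{equation*}
\ddot{x} + \omega_\mathrm{LP}^2 \left[ 1 + f_0 \cos(\omega_\mathrm{D} t) \right] x = 0 ,
\end{equation*}
which develops an exponentially growing solution oscillating at $\omega_\mathrm{D}/2$ inside the principal parametric resonance lobe around $\omega_\mathrm{D} \approx 2\,\omega_\mathrm{LP}$. This subharmonic response is the period-doubled oscillation, whose growth is ultimately saturated by the dissipation and nonlinearity mentioned above.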
{ "timestamp": "2021-07-27T02:31:06", "yymm": "2012", "arxiv_id": "2012.08885", "language": "en", "url": "https://arxiv.org/abs/2012.08885" }
\title{Minimal crossing number implies minimal supporting genus} \author{Hans U.\ Boden} \address{Mathematics \& Statistics, McMaster University, Hamilton, Ontario} \email{\href{mailto:boden@mcmaster.ca}{boden@mcmaster.ca}, \href{mailto:will.rushworth@math.mcmaster.ca}{will.rushworth@math.mcmaster.ca}} \author{William Rushworth} \subjclass{57K12} \keywords{virtual links, link parity} \begin{abstract} A virtual link may be defined as an equivalence class of diagrams, or alternatively as a stable equivalence class of links in thickened surfaces. We prove that a minimal crossing virtual link diagram has minimal genus across representatives of the stable equivalence class. This is achieved by constructing a new parity theory for virtual links.
As corollaries, we prove that the crossing, bridge, and ascending numbers of a classical link do not decrease when it is regarded as a virtual link. This extends corresponding results in the case of virtual knots due to Manturov and Chernov. \end{abstract} \maketitle \section{Introduction}\label{Sec:intro} In this note we prove that a minimal crossing diagram of a virtual link minimises the genus of surfaces \( \Sigma \) such that the link possesses a representative in \( \Sigma \times I \). This extends the corresponding result in the case of virtual knots due to Manturov \cite{Manturov2010,Manturov13}. Our result affirmatively answers a basic question, open since the inception of virtual knot theory: is it possible to simultaneously minimise the complexity of a diagram and the complexity of the surface supporting it? In fact, we establish a stronger result: a minimal crossing diagram is automatically of minimal genus. A virtual link is an equivalence class of virtual link diagrams, up to the generalised Reidemeister moves \cite[Section 2]{Kauffman1998}. The \emph{classical crossing number} of a virtual link is the minimal number of classical crossings, taken over all diagrams of the link. Equivalently to the diagrammatic formulation, a virtual link may be defined as an equivalence class of smooth embeddings \( \bigsqcup S^1 \hookrightarrow \Sigma \times I \), for \( \Sigma \) a closed orientable surface, up to self-diffeomorphism and (de)stabilisation of \( \Sigma \times I \) \cite[Section 3]{Carter2000,Kauffman1998}. This last operation is the addition or removal of a \(1\)-handle of \( \Sigma \) disjoint from the embedding. The \emph{supporting genus} of a virtual link is the minimal genus of a surface \( \Sigma \) such that the link possesses a representative in \( \Sigma \times I \). Let \( D \) be a diagram of the virtual link \( L \). As described in \Cref{Sec:apps}, to \( D \) there is a naturally associated representative of \( L \) in \( \Sigma \times I \); the particular surface \( \Sigma \) produced in this way is known as the \emph{Carter surface of \( D \)}. A diagram of a virtual link \( L \) is said to be \emph{minimal genus} if its Carter surface has genus equal to the supporting genus of \( L \). \begin{theorem}\label{Thm:main} Let \( D \) be a virtual link diagram. If \( D \) is of minimal classical crossing number then it is a minimal genus diagram. \end{theorem} That is, a diagram that realises the classical crossing number also realises the supporting genus. This verifies \cite[Conjecture 5.1]{Boden-Karimi-2019} and yields the following result for classical links. \begin{corollary}\label{Cor:classical} The crossing number of a classical link does not decrease when it is considered as a virtual link. \end{corollary} \Cref{Thm:main} is proved by introducing a new parity theory for virtual links, before employing a parity projection argument. Parity is a powerful concept in virtual knot theory, and extending it to virtual links is an important task (see \cite[Section 1]{Rushworth2019}). The parity theory for links that we introduce is applicable to a restricted class of virtual link diagrams, and it is the natural extension of the so-called \emph{homological parity} for virtual knots due to Manturov \cite{Manturov2010,Manturov13}. In addition to a combinatorial definition, we present a topological definition of our parity theory in \Cref{Sec:top}. \Cref{Thm:main} is a consequence of \Cref{Thm:sub}.
The latter result guarantees that, given a diagram of a virtual link \( L \), one may convert classical crossings to virtual crossings to produce a minimal genus diagram of \( L \). As examples of its utility, we apply \Cref{Thm:sub} to show that the bridge and ascending numbers of a virtual link are realised on minimal genus diagrams (see \Cref{Prop:bridge,Prop:asc}). The corresponding result regarding the bridge number of a virtual knot is due to Chernov \cite{Chernov2013} and Manturov \cite{Manturov13}. The parity constructed in this note appears to be well-suited to answering questions of the form: given a quantity extracted from a virtual link diagram, can we minimise it on a minimal genus representative? \Cref{Thm:main}, \Cref{Prop:bridge}, and \Cref{Prop:asc} are instances of this question in the case of crossing, bridge, and ascending number, and it is interesting to consider other instances. In particular, we wish to advertise the following open question: can the unknotting number of a virtual knot be realised on a minimal genus diagram? We construct the requisite parity theory in \Cref{Sec:parity}, before proving \Cref{Thm:main} and related results in \Cref{Sec:apps}. \subsubsection*{Conventions} All surfaces are closed and orientable, and are denoted by \( \Sigma \). We denote by \( RI \), \(RII\), and \(RIII\) the classical Reidemeister moves. For our purposes a \emph{link in a thickened surface} is a smooth embedding \( \bigsqcup S^1 \hookrightarrow \Sigma \times I \), considered up to isotopy, where \( I = [0,1] \). A \emph{diagram} of a link in a thickened surface is a link diagram drawn on a surface. Two diagrams of a given link in a thickened surface are related by a finite sequence of the moves \( RI \), \(RII\), \(RIII\), and isotopy, where the Reidemeister moves occur in disc neighbourhoods of \( \Sigma \). We denote diagrams of links in thickened surfaces by the \verb|\mathfrak| character \( \mathfrak{D} \), reserving Roman characters for virtual links and their diagrams. \subsubsection*{Acknowledgements} We thank Homayun Karimi, Andrew Nicas, and a referee for their helpful comments on an earlier version of this work. We are indebted to Adam Sikora for a number of comments that significantly improved this work. We also thank Zhiyun Cheng, Micah Chrisman, Vassily Manturov, and Puttipong Pongtanapaisan for their valuable feedback. \section{A homological parity for links}\label{Sec:parity} In this section we introduce a new theory of parity on virtual links (well-defined for certain sequences of generalised Reidemeister moves). This parity may be thought of as an extension of the homological parity for knots \cite{Manturov2010,Manturov13}, the definition of which is non-local: the full knot diagram is used to determine the parity of a crossing. As a consequence, the construction does not extend to links. The parity that we define in this section is local in nature -- it is computed directly at a crossing -- so that it may naturally be applied to links with arbitrarily many components. We begin with the combinatorial definition of our parity theory in \Cref{Sec:comb}, before presenting an equivalent topological definition in \Cref{Sec:top}. \subsection{Combinatorial definition}\label{Sec:comb} Our construction proceeds as follows. Working at the level of link diagrams on surfaces, we introduce a function, \( f_{\mathscr{C}} \), on the crossings of such diagrams.
We show that \( f_{\mathscr{C}} \) satisfies the appropriate version of the parity axioms for link diagrams on surfaces, and thus descends to a \emph{bona fide} parity on virtual link diagrams, well-defined for sequences of generalised Reidemeister moves defined by isotopies of links in thickened surfaces. \begin{figure} \includegraphics[scale=0.75]{Detour.pdf} \caption{An example of the detour move on virtual link diagrams: a segment containing only virtual crossings may be removed and replaced arbitrarily (with any new crossings produced being virtual).} \label{Fig:detour} \end{figure} Given an isotopy of links in thickened surfaces there is no canonical way to define a sequence of generalised Reidemeister moves: there is a choice of how some isotopies are realised as detour moves (as in \Cref{Fig:detour}). However, this choice does not affect the resulting parity, nor our proof of \Cref{Thm:main} (and related results). If the reader wishes, they can remove this choice by working \emph{mutatis mutandis} with sequences of Gauss diagrams (in place of virtual link diagrams), as detour moves do not appear in such sequences \cite{GPV}. First, we state the axioms of a parity for link diagrams on surfaces. These axioms correspond directly to those given by Ilyutko, Manturov, and Nikonov for virtual knots and other knotted objects \cite{Ilyutko2014,Nikonov-2016}. \begin{definition} \label{Def:parityaxioms} Consider the category whose objects are link diagrams on surfaces, and morphisms are sequences of Reidemeister moves (where such moves take place on disc neighbourhoods). Given an assignment of a function \( f(\mathfrak{D}) \) to every object \( \mathfrak{D} \), with domain the set of crossings of \( \mathfrak{D} \) and codomain \( \mathbb{Z}_2 \), we refer to the image of a crossing under \( f(\mathfrak{D}) \) as \emph{the parity} of the crossing; crossings that are mapped to \( 0 \) are \emph{even}, and those mapped to \( 1 \) are \emph{odd}. Such an assignment of functions is \emph{a parity} if it satisfies the following axioms: \begin{enumerate}[start=0] \item If diagrams \( \mathfrak{D} \) and \( \mathfrak{D}' \) are related by a single Reidemeister move, then the parities of the crossings that are not involved in this move do not change. \item If \( \mathfrak{D} \) and \( \mathfrak{D} ' \) are related by a Reidemeister I move that eliminates a crossing, then the parity of that crossing is even. \item If \( \mathfrak{D} \) and \( \mathfrak{D} ' \) are related by a Reidemeister II move eliminating the crossings \( c_1 \) and \( c_2 \), then \( c_1 \) and \( c_2 \) are both even or both odd. \item If \( \mathfrak{D} \) and \( \mathfrak{D} ' \) are related by a Reidemeister III move then the parities of the three crossings involved in the move are unchanged. Further, these three crossings are all even, all odd, or exactly two are odd. \qedhere \end{enumerate} \end{definition} An extremely useful application of a parity theory is \emph{parity projection}. Consider a sequence of virtual link diagrams \begin{equation*} D_1 \rightarrow D_2 \rightarrow \cdots \rightarrow D_n \end{equation*} related by generalised Reidemeister moves. Given a parity, define \( p(D_i) \) to be the diagram obtained from \(D_i \) by replacing every odd crossing with a virtual crossing (that is, \( \raisebox{-4pt}{\includegraphics[scale=0.35]{cc1.pdf}} \to \raisebox{-4pt}{\includegraphics[scale=0.35]{cc3.pdf}} \)). 
Virtual links may be defined in terms of Gauss diagrams; on the Gauss diagram of \( D_i \), parity projection corresponds to simply deleting the chords associated to odd crossings. We obtain the new sequence \begin{equation*} p(D_1) \rightarrow p(D_2) \rightarrow \cdots \rightarrow p(D_n). \end{equation*} That \( p(D_i) \) is related to \( p(D_{i+1}) \) by a generalised Reidemeister move is a consequence of the parity axioms. This operation is known as parity projection; for full details see \cite[Section \(1.3\)]{Manturov13}. We make use of a parity projection argument in the proof of \Cref{Thm:main}. We use simple closed curves to define colourings of link diagrams on a surface, and use such colourings to define a parity theory. \begin{definition}[\( \gamma \)-colouring]\label{Def:gcolour} Let \( \mathfrak{D} \) be a link diagram on \( \Sigma \), and \( \gamma \subset \Sigma \) a simple closed curve that intersects \( \mathfrak{D} \) transversely and away from crossings, such that every component of \( \mathfrak{D} \) has even intersection number with \( \gamma \). A \emph{\( \gamma \)-colouring of \(\mathfrak{D}\)} is a colouring of the components of \( \mathfrak{D} \) with exactly one of two colours, such that the colour switches when passing through \( \gamma \). The colour of a component does not change at a crossing. \end{definition} An example of a \( \gamma \)-colouring is given in \Cref{Ex:gcolour}; our convention is to depict the curves \( \gamma \) in red, and to use the colours blue and green for the components of link diagrams. Notice that \Cref{Def:gcolour} is well-posed due to the intersection condition. For a fixed curve \( \gamma \), a diagram of a link of \( m \) components possesses either \(0\) or \( 2^m \) \( \gamma \)-colourings; the diagram possesses \(0\) \(\gamma \)-colourings if and only if there is a component that intersects \( \gamma \) an odd number of times. \begin{definition}\label{Def:gparity1} Suppose that \( \mathfrak{D} \) and \(\gamma \) are as in \Cref{Def:gcolour}. Let \( \mathscr{C} \) be a \(\gamma \)-colouring of \( \mathfrak{D} \). Define a function on the crossings of \( \mathfrak{D} \), denoted \( f_{\mathscr{C}} \), as follows: \begin{equation}\label{Eq:gparity} \begin{aligned} &f_{\mathscr{C}} \left(\, \raisebox{-11pt}{\includegraphics[scale=0.5]{even1.pdf}} \! \!\! \right) = f_{\mathscr{C}} \left(\, \raisebox{-11pt}{\includegraphics[scale=0.5]{even2.pdf}}\! \!\! \right) = 0 \quad \text{and} &f_{\mathscr{C}} \left(\, \raisebox{-11pt}{\includegraphics[scale=0.5]{odd.pdf}}\! \!\! \right) = 1. \end{aligned} \end{equation} \end{definition} \begin{example}\label{Ex:gcolour} A link diagram on a torus, and a \( \gamma \)-colouring of it. With respect to the given \( \gamma \)-colouring, two crossings are even and two are odd. \begin{equation*} \begin{matrix} \includegraphics[scale=0.5]{wh.pdf} &\qquad &\includegraphics[scale=0.5]{whcol.pdf} \\ \end{matrix} \end{equation*} \end{example} Let \( \mathfrak{D} \), \( \mathfrak{D}' \) be diagrams on \( \Sigma \) related by a single Reidemeister move, and \( \gamma \subset \Sigma \) a simple closed curve. Suppose that \(\mathfrak{D} \) possesses a \(\gamma \)-colouring \( \mathscr{C} \); it is clear that \( \mathscr{C} \) induces a \(\gamma \)-colouring of \( \mathfrak{D}' \). \begin{proposition}\label{Prop:gparity1} Let \( \mathfrak{D} \), \( \gamma \), and \( \mathscr{C} \) be as in \Cref{Def:gparity1}. The function \( f_{\mathscr{C}} \) is a parity.
\end{proposition} \begin{proof} That \( f_{\mathscr{C}} \) satisfies the axioms of \Cref{Def:parityaxioms} may be verified by directly comparing the Reidemeister moves to \Cref{Eq:gparity}, recalling that these moves are supported on disc neighbourhoods of \( \Sigma \). We content ourselves with some example verifications. The following hold for any \( \gamma \)-colouring (a possible position of the curve \( \gamma \) is denoted by the red arc): \begin{equation*} \begin{matrix} \includegraphics[scale=0.8]{r1even.pdf} & \quad & \includegraphics[scale=0.8]{r2odd.pdf} & \quad & \includegraphics[scale=0.8]{r3mix.pdf} \\ 0~\text{odd crossings} & & 0 ~\text{or}~2~ \text{odd crossings} & & 0 ~\text{or}~2~ \text{odd crossings} \end{matrix} \end{equation*} \end{proof} Given a sequence of diagrams \( \mathfrak{D}_i \) on \( \Sigma \) such that \( \mathfrak{D}_i \) is related to \( \mathfrak{D}_{i+1} \) by a Reidemeister move, there is a naturally associated sequence of virtual link diagrams \( D_i \), related by generalised Reidemeister moves. Note that the sequence of virtual link diagrams may be longer than that of diagrams on \( \Sigma \): the former sequence may include detour moves, which are not present in the latter. We use \Cref{Prop:gparity1} to define a parity for such (sequences of) virtual link diagrams. \begin{proposition}\label{Prop:gparity2} Let \( \lbrace \mathfrak{D}_i \rbrace \), \( 1 \leq i \leq n \), be a sequence of diagrams on \( \Sigma \) such that \( \mathfrak{D}_i \) is related to \( \mathfrak{D}_{i+1} \) by a Reidemeister move, and \(\lbrace D_j \rbrace \), \( 1 \leq j \leq m \) with \( m \geq n \), the associated sequence of virtual link diagrams. Suppose that \( \mathfrak{D}_i \) possesses a \( \gamma \)-colouring, \( \mathscr{C} \), for some simple closed curve \( \gamma \); by abuse of notation also denote by \( \mathscr{C} \) the induced \( \gamma \)-colouring on \( \mathfrak{D}_j \), \( i \neq j \). Then \( f_{\mathscr{C}} \) descends to a parity on the virtual link diagrams \( D_j \). \end{proposition} \begin{proof} Suppose that the move \( D_{j} \rightarrow D_{j+1} \) is a detour move. Then there is a one-to-one correspondence between the classical crossings of \( D_{j} \) and \( D_{j+1} \), and both diagrams are associated to the same diagram on \( \Sigma \). It follows that the parity of a crossing is unchanged across the move \( D_{j} \rightarrow D_{j+1} \). Now suppose that the move \( D_{j} \rightarrow D_{j+1} \) is a generalised Reidemeister move induced by a Reidemeister move \( \mathfrak{D}_{i} \rightarrow \mathfrak{D}_{i+1} \) on \( \Sigma \). \Cref{Prop:gparity1} guarantees that \( f_{\mathscr{C}} \) satisfies the axioms of \Cref{Def:parityaxioms} for the move \( \mathfrak{D}_{i} \rightarrow \mathfrak{D}_{i+1} \). Consider the function induced by \( f_{\mathscr{C}} \) on the virtual link diagrams \( D_j \); it follows immediately from the observations above that this function satisfies the parity axioms for a move \( D_{j} \rightarrow D_{j+1} \). \end{proof} Henceforth we shall use \( f_{\mathscr{C}} \) to denote both the parity for link diagrams on \( \Sigma \) and the induced parity for associated virtual link diagrams. We refer to both as the \emph{\( \mathscr{C} \)-parity}. \subsection{Topological definition}\label{Sec:top} The \( \mathscr{C} \)-parity enjoys a topological definition in terms of covering spaces, which we now describe (for a similar construction in the case of the Gaussian parity for virtual knots, see \cite[Section 5]{Boden2017}).
Given a link diagram \( \mathfrak{D} \) on \( \Sigma \), suppose that \( \gamma \subset \Sigma \) is a simple closed curve with even intersection number with every component of \( \mathfrak{D} \). Let \( \pi : \widetilde{\Sigma} \times I \to \Sigma \times I \) be a double cover of \( \Sigma \times I \) formed by cutting two copies of \( \Sigma \times I \) along \( \gamma \times I \), and identifying the resulting boundaries (see \Cref{Fig:cover}). The diagram \( \mathfrak{D} \) represents a link, \( \mathfrak{L} \), in \( \Sigma \times I \), and \( {\pi}^{-1} \left( \mathfrak{L} \right) \) is a link with twice as many components as \( \mathfrak{L} \) (this is a consequence of the intersection condition on \( \mathfrak{D} \) and \( \gamma \)). A \( \gamma \)-colouring of \( \mathfrak{D} \) is equivalent to a choice of lifting of \(\mathfrak{L}\) to a link \( \widetilde{\mathfrak{L}} \) in \(\widetilde{\Sigma} \times I\): the colouring may be used to indicate a choice of preferred component of \( {\pi}^{-1} \left( \mathfrak{L} \right) \) for each component of \( \mathfrak{L} \) (an example is given in \Cref{Fig:cover}). Specifically, segments of distinct colours are lifted to distinct sheets of the cover \(\pi: \widetilde{\Sigma}\times I \to \Sigma \times I\). A subtlety is presented by the fact that \( {\pi}^{-1} ( \gamma \times I ) \) consists of two connected components. Given a \( \gamma \)-colouring of \( \mathfrak{D} \), the specific intersection between the components of \( \widetilde{\mathfrak{L}} \) and \( {\pi}^{-1} ( \gamma \times I ) \) may be determined by choosing orientations for \(\mathfrak D\) and \( \gamma \), and comparing the sign of the intersection point \(\mathfrak{D} \cap \gamma\) with the colours of \(\mathfrak D\) as it crosses \(\gamma\). The lift does not depend on the orientations of \(\mathfrak D\) and \( \gamma \). Further, it is clear that such a choice of \( \widetilde{\mathfrak{L}} \) defines a \( \gamma \)-colouring of \( \mathfrak{D} \), via the correspondence between sheets of the cover and colours of the segments. It follows that the parity constructed in \Cref{Sec:comb} may alternatively be defined in terms of a topological choice of lift, as opposed to a combinatorial choice of colouring. Recall that, given a parity theory, parity projection is the removal of odd crossings by converting them to virtual crossings. In the topological setting described here, the link \( \widetilde{\mathfrak{L}} \) is precisely the link obtained from \( \mathfrak{L} \) by (the appropriate notion of) parity projection. Specifically, let \(D\) be a virtual link diagram defined by \( \mathfrak{D} \), and \( \mathscr{C} \) the \( \gamma \)-colouring associated to the choice of lift \( \widetilde{\mathfrak{L}} \). If \( p(D) \) is the diagram obtained from \( D \) by parity projection with respect to \( f_{\mathscr{C}} \), then \( p(D) \) is a diagram of the virtual link represented by \( \widetilde{\mathfrak{L}} \). It follows that parity projection is realised as the operation of lifting \( \mathfrak{L} \) to a prescribed double cover of \( \Sigma \times I\).
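To make the combinatorial content of this projection concrete, the following Python sketch performs parity projection on a toy chord-list representation of a Gauss diagram. The data format, and the reading of \Cref{Def:gparity1} as ``a crossing is odd exactly when its two strands carry different colours'', are our illustrative assumptions rather than code accompanying this paper.
\begin{verbatim}
# Toy chord-list model of a Gauss diagram: each classical crossing
# records the colours of the two strands passing through it.

def parity(crossing):
    # Even (0) when both strands carry the same colour, odd (1)
    # otherwise; this is our reading of the definition of f_C above.
    a, b = crossing["colours"]
    return 0 if a == b else 1

def project(chords):
    # Parity projection: delete the chords associated to odd crossings.
    return [c for c in chords if parity(c) == 0]

# Four crossings, coloured as in the torus example above:
# two same-colour (even) and two mixed-colour (odd) crossings.
diagram = [
    {"label": "c1", "colours": ("blue", "blue")},
    {"label": "c2", "colours": ("blue", "green")},
    {"label": "c3", "colours": ("green", "green")},
    {"label": "c4", "colours": ("green", "blue")},
]

print([c["label"] for c in project(diagram)])  # -> ['c1', 'c3']
\end{verbatim}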
\begin{figure} \centering \includegraphics[scale=0.5]{cover.pdf} \caption{The \( \gamma \)-colouring of \Cref{Ex:gcolour} is equivalent to the depicted choice of lift.}\label{Fig:cover} \end{figure} \section{Applications of homological parity for links}\label{Sec:apps} With the \( \mathscr{C} \)-parity in place, we use it to obtain our main result in \Cref{Sec:thm}, before presenting further applications in \Cref{Sec:lem}. \subsection{Proof of \Cref{Thm:main}}\label{Sec:thm} We begin by recalling some necessary definitions. \begin{definition}[Carter surface \cite{Carter91,Kamada2000}]\label{Def:cs} Let \( D \) be a virtual link diagram. Consider \( D \) as an abstract \(4\)-valent graph (in which classical crossings are vertices, virtual crossings are not); there is an orientable surface with boundary, \( F \), that deformation retracts onto this graph. The \emph{Carter surface} of \( D \) is the closed orientable surface obtained by gluing discs to the boundary of \( F \). \end{definition} For a virtual link diagram, \( D \), the construction of the Carter surface of \( D \) naturally produces a link diagram \( \mathfrak{D} \) on the Carter surface, such that \( D \) corresponds to \( \mathfrak{D} \). This construction is unaffected by detour moves; the Carter surface associated to a virtual link diagram depends only on its underlying Gauss diagram. A \emph{subdiagram} of a virtual link diagram \( D \) is a diagram obtained by converting a (possibly empty) subset of the classical crossings of \(D\) to virtual crossings. A \emph{proper subdiagram} is formed by converting a non-empty subset of classical crossings. \Cref{Thm:main} is a consequence of the following result that enjoys wider utility: applications to ascending number and bridge number are given in \Cref{Sec:lem}. \begin{theorem}\label{Thm:sub} A diagram of a virtual link \(L\) possesses a subdiagram that also represents \(L\) and has minimal genus. \end{theorem} \begin{proof} Let \( D \) be a diagram of the virtual link \(L\), and \( \Sigma \) the Carter surface of \( D \). Denote by \( \mathfrak{D} \) the diagram on \( \Sigma \) defined by \( D \). If \( \Sigma \) realises the supporting genus of \( L \) then \( D \) is the desired minimal genus subdiagram. Suppose that \( \Sigma \) does not realise the supporting genus of \( L \). By Kuperberg's Theorem \( \mathfrak{ D } \) is related by a finite sequence of Reidemeister moves on \( \Sigma \) to a diagram \( \mathfrak{D}' \), such that there exists an essential simple closed curve, \( \gamma \subset \Sigma \), disjoint from \( \mathfrak{D}' \). Denote this sequence as \begin{equation}\label{Eq:ss} \mathfrak{D} = \mathfrak{D}_1 \rightarrow \mathfrak{D}_2 \rightarrow \cdots \rightarrow \mathfrak{D}_n = \mathfrak{D} '. \end{equation} Without loss of generality, we may assume that \(\gamma\) is such that \Cref{Eq:ss} is the shortest possible sequence of this kind. It follows that \(\mathfrak{D}_i\) is cellularly embedded for \(1\leq i < n\) (that is, the complement of a neighbourhood of \( \mathfrak{D}_i \) in \( \Sigma \) is a disjoint union of discs). The sequence of \Cref{Eq:ss} defines a sequence of virtual link diagrams related by generalised Reidemeister moves, denoted as \begin{equation}\label{Eq:sd} D = D_1 \rightarrow D_2 \rightarrow \cdots \rightarrow D_m = D ', \end{equation} with \( m \geq n \).
Our assumption on the length of the sequence of \Cref{Eq:ss} implies that \( \mathfrak{D}_{n-1} \rightarrow \mathfrak{D}_n \) is an \( RII \) move of the form \begin{equation*} \raisebox{-38pt}{\includegraphics[scale=0.8]{fr2.pdf}}. \end{equation*} We may further assume that \( \gamma \) (depicted by the red arc) is disjoint from \( \mathfrak{D}_{n-1} \) outside of the disc neighbourhood depicted above. Observe that \( \mathfrak{D}_{n-1} \) has intersection number \(0\) with \( \gamma \), and that there exists a \( \gamma \)-colouring of \( \mathfrak{D}_{n-1} \) such that all components possess the same colour, except in the region supporting the \(RII\) move, where the colouring is: \begin{equation*} \raisebox{-28pt}{\includegraphics[scale=0.85]{fr2col.pdf}}. \end{equation*} Denote this distinguished \( \gamma \)-colouring by \( \mathscr{C} \). Every diagram \( \mathfrak{D}_i \) in the sequence possesses a \( \gamma \)-colouring induced by \( \mathscr{C} \); by abuse of notation also denote these \( \gamma \)-colourings by \( \mathscr{C} \). Notice that every crossing of \( \mathfrak{D}_n \) is even with respect to the parity \( f_{\mathscr{C}} \). The same is true for \( D_m \). This analysis of the sequence of \Cref{Eq:ss}, combined with \Cref{Prop:gparity2}, allows us to project the sequence of \Cref{Eq:sd} to obtain a new sequence \begin{equation}\label{Eq:s2} p(D) = p(D_1) \rightarrow p(D_2) \rightarrow \cdots \rightarrow p(D_m) = p(D '), \end{equation} where \( p (D_i) \) denotes the virtual link diagram obtained from \( D_i \) via parity projection with respect to \( f_{\mathscr{C}} \). Every crossing of \( D_m \) is even with respect to \( f_{\mathscr{C}} \), so that \( p ( D_m ) = D_m \), and we may concatenate the sequence of \Cref{Eq:sd} with that of \Cref{Eq:s2} (in reverse) to obtain a sequence of generalised Reidemeister moves from \( D \) to \( p(D) \). Thus \( D \) and \( p(D) \) both represent the virtual link \(L\). We claim that the sequence of \Cref{Eq:s2} preserves the Carter surface. That is, that the diagrams \( p(D_i) \) and \( p(D_j) \) have homeomorphic Carter surfaces, for all \( 1 \leq i,j \leq m \). To see this, first recall that the only generalised Reidemeister move that may possibly alter the homeomorphism type of the Carter surface is an \( RII \) move. Suppose that \( p ( D_{i} ) \rightarrow p ( D_{i+1} ) \) is an \( RII \) move; then \( D_{i} \rightarrow D_{i+1} \) is an \( RII \) move involving even crossings (with respect to \( f_{\mathscr{C}} \)). To verify that \( p ( D_{i} ) \rightarrow p ( D_{i+1} ) \) preserves the Carter surface we employ the topological definition of the \( \mathscr{C} \)-parity. The move \( D_{i} \rightarrow D_{i+1} \) is associated to a move \( \mathfrak{D}_{j} \rightarrow \mathfrak{D}_{j+1} \) of \Cref{Eq:ss}. By assumption this latter move occurs on a disc neighbourhood of \( \Sigma \), and \( \mathfrak{D}_{j} \), \( \mathfrak{D}_{j+1} \) are cellularly embedded. It follows that there is a disc, \( \Delta \), as indicated in \Cref{Fig:oddr2}. Suppose that \( \pi : \widetilde{\Sigma} \to \Sigma \) is the double cover associated to \( \gamma \) (as described in \cref{Sec:top}); one component of \( {\pi}^{-1} ( \Delta ) \) must be as depicted in \Cref{Fig:oddr2}.
Let \( \widetilde{\mathfrak{D}}_{j} \), \( \widetilde{\mathfrak{D}}_{j+1} \) denote the lifts of \(\mathfrak{D}_{j} \), \( \mathfrak{D}_{j+1} \) prescribed by \( \mathscr{C} \) (the diagrams \( \widetilde{\mathfrak{D}}_{j} \), \( \widetilde{\mathfrak{D}}_{j+1} \) are diagrams on \( \widetilde{\Sigma} \)). As described in \Cref{Sec:top}, lifting with respect to \( \pi \) is equivalent to parity projection with respect to \( f_{\mathscr{C}} \). In particular, \( \widetilde{\mathfrak{D}}_{j} \) and \( p ( D_{i} ) \) have the same Gauss diagram, as do \( \widetilde{\mathfrak{D}}_{j+1} \) and \( p ( D_{i+1} ) \). It follows that we may obtain the Carter surface of \( p ( D_{i} ) \) by cutting out a neighbourhood of \( \widetilde{\mathfrak{D}}_{j} \) in \( \widetilde{\Sigma} \), and capping its boundary with discs; the Carter surface of \( p ( D_{i+1} ) \) is obtained in the same manner from \( \widetilde{\mathfrak{D}}_{j+1} \). Depending on whether the \( RII \) move \( \widetilde{\mathfrak{D}}_{j} \rightarrow \widetilde{\mathfrak{D}}_{j+1} \) adds or removes crossings, one of the diagrams involved is as depicted in the top-right of \Cref{Fig:oddr2}. Notice that the arcs of this diagram involved in the \( RII \) move are part of the boundary of a single disc component of \( {\pi}^{-1} ( \Delta ) \). It follows that \( p ( D_{i} ) \) and \( p ( D_{i+1} ) \) have homeomorphic Carter surfaces, and the sequence of \Cref{Eq:s2} preserves the Carter surface. \begin{figure} \includegraphics[scale=0.5]{oddr2.pdf} \caption{Lifting an \( RII \) move involving even crossings, using the cover associated to \( \gamma \).} \label{Fig:oddr2} \end{figure} Next, we claim that \( D \) contains an odd crossing with respect to \( f_{\mathscr{C}} \), so that \( p(D) \) is a proper subdiagram of \(D\) that represents \(L\). Assume towards a contradiction that \( D \) does not contain an odd crossing with respect to \( f_{\mathscr{C}} \). Then \( p(D) = D \), so that \( p(D) \) has Carter surface \( \Sigma \) also. As \( p(D_m) = D_m \) the diagram \( p(D_m) \) has Carter surface \( \Sigma ' \), obtained from \( \Sigma \) by destabilising along \( \gamma \). This destabilisation must change the homeomorphism type of \( \Sigma \): if \( \gamma \) is non-separating then \( \Sigma \) and \( \Sigma ' \) have different genera, and if \( \gamma \) is separating then \( \Sigma \) and \( \Sigma ' \) have a different number of connected components. It follows that the Carter surfaces of \( p(D) \) and \( p(D_m) \) are not homeomorphic. But the sequence of \Cref{Eq:s2} (starting at \( p(D) \) and ending at \( p(D_m) \)) preserves the homeomorphism type of the Carter surface, hence the desired contradiction. In conclusion, we have produced a diagram \( p(D) \) that represents \( L \) and is a proper subdiagram of \( D \). If \( p(D)\) is not a minimal genus diagram of \( L \), then we may repeat the process described above (with a different curve in place of \( \gamma \), guaranteed to exist by Kuperberg's Theorem). The proof is completed by iterating this process: after a finite number of iterations a minimal genus subdiagram of \( D \) is obtained, that must also represent \(L\) (the number of iterations required is bounded above by the number of classical crossings of \( D \)). \end{proof} \begin{proof}[Proof of \Cref{Thm:main}] Suppose that \( D \) realises the crossing number of \( L \). Apply \Cref{Thm:sub} to obtain a new diagram of \(L\), denoted \( D' \). It is guaranteed that \( D' \) is minimal genus and a subdiagram of \( D \).
By hypothesis \( D \) is of minimal crossing number, hence \( D = D' \), so that \( D \) is a minimal genus diagram. \end{proof} \subsection{Realising the bridge and ascending numbers on minimal genus diagrams}\label{Sec:lem} We present further examples of the utility of \Cref{Thm:sub}. First, we use it to prove that the bridge number of a virtual link is realised on a minimal genus diagram. This extends the corresponding result in the case of virtual knots due to Chernov \cite{Chernov2013} and Manturov \cite{Manturov13}. A \emph{bridge} of a virtual link diagram is an arc that contains one or more overcrossings (and any number of virtual crossings). The \emph{bridge number} of a link \(L\) is the minimum number of bridges over all diagrams for \(L\) (for further details see \cite{Hirasawa2011}). It is \emph{a priori} conceivable that a classical link may admit a virtual link diagram with fewer bridges than any of its classical diagrams: we use \Cref{Thm:sub} to show that this cannot occur. \begin{proposition}\label{Prop:bridge} The bridge number of a virtual link is realised on a minimal genus diagram. In particular, the bridge number of a classical link does not decrease when it is considered as a virtual link. \end{proposition} \begin{proof} Let \( D \) be a diagram that realises the bridge number of a virtual link \( L \). Apply \Cref{Thm:sub} to \( D \) to produce a minimal genus subdiagram of \( D \), which also represents \( L \), and observe that changing classical crossings to virtual crossings cannot increase the bridge number of a diagram. \end{proof} The meridional rank conjecture \cite[Problem 1.11]{Kirby} posits that the bridge number of a classical link is equal to the meridional rank of the link group. An affirmative answer to this conjecture would provide an alternative proof of the fact that the bridge number of a classical link does not decrease when it is considered as a virtual link (as established in \Cref{Prop:bridge}). In fact, the meridional rank conjecture implies the stronger statement that the bridge number of a classical link does not decrease when it is considered as a \emph{welded} link. (For further details see \cite{Boden2015}.) Next, we consider the ascending number, a numerical invariant introduced by Ozawa \cite{Ozawa2010} (also known as the \emph{warping degree} \cite{Shimizu2011}). The definition readily extends to virtual links, as follows. An oriented virtual knot diagram is said to be \emph{ascending} if one encounters only undercrossings, or crossings that have previously been met, when traversing the diagram from an arbitrary basepoint. There is a similar definition for oriented virtual link diagrams (given basepoints on, and an ordering of, the link components). The ascending number of an oriented virtual link diagram \( D \), denoted \( a(D) \), is the minimum number of crossing changes needed to make it ascending. The ascending number of an oriented virtual link is the minimal ascending number of a diagram representing the link. \Cref{Thm:sub} applies to show that the ascending number of a classical link does not decrease when passing to the virtual category. \begin{proposition}\label{Prop:asc} The ascending number of a virtual link is realised on a minimal genus diagram. The ascending number of a classical link does not decrease when it is considered as a virtual link.
\end{proposition} \begin{proof} The claim follows easily from \Cref{Thm:sub}: notice that if \(D\) is a virtual link diagram with subdiagram \(D'\) then \(a(D') \leq a(D).\) \end{proof} We conclude this note by advertising the following open question. The \emph{unknotting number} of a classical or virtual knot \( K \) is the minimal number of crossing changes \( \raisebox{-4pt}{\includegraphics[scale=0.35]{cc1.pdf}} \to \raisebox{-4pt}{\includegraphics[scale=0.35]{cc2.pdf}} \) needed to convert a diagram of \( K \) to an unknot diagram. The \emph{unlinking number} of a virtual link \( L \) is defined similarly. Not all virtual knots (links) can be unknotted (unlinked) by crossing changes, in which case the unknotting (unlinking) number is defined to be infinite. \begin{question} Is the unknotting number of a classical knot preserved when it is considered as a virtual knot? Is the same true for the unlinking number of a classical link? For virtual knots and links, are the unknotting and unlinking numbers attained on minimal genus diagrams? \end{question} The unknotting number of a classical knot is bounded above by its ascending number \cite{Ozawa2010}, so that \Cref{Prop:asc} provides evidence in favour of a positive answer to the classical cases of the question posed above. Another interesting open question is obtained by replacing `unlinking' with `splitting'. (Again, not all virtual links can be split by crossing change, in which case the splitting number is defined to be infinite.) Finally, we note that the operation of converting classical crossings to virtual crossings is an unknotting, unlinking, and splitting operation for virtual links. It is interesting to consider questions analogous to those above in the context of this operation. \bibliographystyle{plain}
{ "timestamp": "2021-02-03T02:04:53", "yymm": "2012", "arxiv_id": "2012.09000", "language": "en", "url": "https://arxiv.org/abs/2012.09000" }
\section{Introduction} Changing some specific features (such as sentiment, tense, or human face pose) of a given text or image is very important in many applications. These features are usually embedded in the weights of ``black-box'' deep neural networks. Hence, disentangling the latent space of the neural network becomes a valuable step in such tasks. Early work used disentangled latent variables to control latent features of images and text. For example, \newcite{chen2016infogan} use scalar latent variables to control the writing style of handwritten digits as well as the pose of 3D-rendered human faces by maximizing the mutual information between the latent variable and the generator. Furthermore, there is previous research that aims to completely separate the latent space into disentangled components. For example, \newcite{john-etal-2019-disentangled} use multiple adversarial optimizers to decrease the dependency between the latent vectors of style and content. A severe limitation of previous works is adversarial training, which is difficult to carry out and usually requires substantial resources. In particular, when multiple style vectors are extracted, each representing a specific feature, a separate discriminator is needed for each kind of feature, according to \newcite{john-etal-2019-disentangled}. Therefore, a generator paired with so many discriminators makes the training process extremely complicated. In this paper, we distinguish the concepts of style type and value: (1) a \textit{style type} is a style class that represents a specific feature of text or an image, e.g., sentiment, tense, or face direction; and (2) a \textit{style value} is one of the different values within a style type, e.g., sentiment (positive/negative), or tense~(past/now/future). We propose a unified distribution-controlling method that gives a unique representation to each style value\ in each style type. We assume that the representations of each style value\ are sampled from separate Gaussian distributions. Then, we generate multiple style vectors (for different style types) and a content vector from the input text, and force the style vectors to be close to the ground-truth style-value\ distributions. To ensure that the semantic information is not lost, we sample a style vector from the ground-truth style-value\ distribution for each style type, and combine all the style vectors and the content vector into one vector. Then, we force the combined vector to reconstruct the original sentence. To avoid adversarial training, we propose a more efficient loss function for style-content disentanglement, which is applicable to both vanilla and variational autoencoders. We further propose loss functions for the disentanglement among multiple style types. In addition, we point out a severe problem in multi-type\ disentanglement, called \textit{training bias}. We prove mathematically that our multi-type\ disentanglement loss function can alleviate the training bias problem. We conducted experiments on two datasets: Yelp Service Reviews~\cite{Shen2017Style} and Amazon Product Reviews~\cite{Fu2018Style}. The experimental results show that our method can provide a comparable disentanglement effect and an even better style-transfer effect without the help of adversarial training. The experimental results also show our method's effectiveness for alleviating the training bias.
The contributions of this paper are briefly as follows: \begin{itemize} \item We propose a unified distribution-controlling method for disentanglement, which provides unique representations for each style value\ in each style type\ and provides a natural advantage for multi-type\ disentanglement. \item We propose loss functions to disentangle style and content without adversarial training. Our method is applicable even in the situation where one type\ contains multiple style values, both in vanilla and variational autoencoders. \item We propose loss functions for disentangling multiple style types, which can also alleviate the bias caused by multi-type\ training data. The effectiveness of these loss functions rests on a solid theory and is also shown in experiments. \end{itemize} The rest of this paper is organized as follows. Section~\ref{sec:approach} introduces the details of our approach and the losses designed for style-content and multi-type\ disentanglement. We then conduct experiments in Section~\ref{sec:exp}, discuss related work in Section~\ref{sec:rel}, and finally conclude our work in Section~\ref{sec:con}. \section{Approach}\label{sec:approach} In this section, we first make an assumption about style vectors, called the \textit{unified distribution assumption}. On top of this, we then build our model architecture in Fig.~\ref{fig:arch}. We further propose two loss functions, for style-content disentanglement (which encourages the style vectors and the content vector not to affect each other) and multi-type\ disentanglement (which reduces the mutual influence among the style vectors). \subsection{Unified Distribution-Controlling Method} Intuitively, the style vectors generated from sentences belonging to one specific style value\ should have the same representation. Since this is a very strong condition, we use a relaxed form of this requirement, the \textit{unified distribution assumption}, in which we require that all the vectors that belong to one specific style value\ follow a unified distribution, the parameters of which can also be updated during training. We use a Gaussian distribution with a different mean and variance for each style value. Then, we require the disentangled vectors to satisfy the following requirements. \begin{itemize} \item Disentangled vectors that belong to the same style value\ should obey the same Gaussian distribution. \item The Gaussian distributions corresponding to any two different style values\ should be independent of each other. \end{itemize} The control under the {unified distribution assumption} is as follows. When conducting style transfer, we only need to sample a new style vector from the target style value\ and replace the original one; the corresponding style of the generated sentence will then also be transferred. \subsection{Model} We now describe our model in detail. As shown in Fig.~\ref{fig:arch}, the basic architecture of our method is an autoencoder. We first encode the input sentence and generate several disentangled style vectors and a content vector from the encoded sentence. Each style vector is required to be close to the correct unified style distribution. Ideally, after the sentence is perfectly disentangled into style and content vectors, if a style vector is replaced by the unified representation of the same style value, the original sentence can be correctly reconstructed from the new style vector and the original content vector.
Therefore, in the final step, we sample a vector from the correct unified style distribution to replace the original one and reconstruct the original sentence. \begin{figure} \centering \includegraphics[width=\linewidth]{graphics/arch.pdf} \smallskip \caption{Architecture of our method: the circles labeled with $\mu_{i}$ and $\Sigma_{i}$ represent the style-value\ distributions, while the zig-zag arrows abbreviate the sampling process from the correct style value.} \label{fig:arch} \end{figure} \subsubsection{Autoencoder.} In disentanglement approaches, the best way to prevent the loss of information is to use autoencoders, which encode the input sentence to a latent space and then reconstruct the original sentence from this space. In our method, we use two kinds of autoencoders: a vanilla and a variational autoencoder~\cite{kingma2014auto}. Each training example is composed of a sentence $x$ and a set of style values\ corresponding to different style types: $t_{1\star},\ldots,t_{G\star}$\footnote{We use $t_{ij}$ to represent the $j$-th value\ of the $i$-th style type, and ``$\star$'' means that we have the same action for each style type~(when it replaces $i$) or value~(when it replaces $j$). } ($G$ is the number of style types). Given the input token sequence $x=\{x_1, x_2,\ldots, x_n\}$, we use an LSTM-based~\cite{hochreiter1997long} vanilla and a variational autoencoder to build the reconstruction loss: \begin{equation} \begin{small} \begin{aligned} J_\text{AE} =& \mbox{$-\sum_t\log P(x_t|h, x_1, x_2,\ldots, x_{t-1})$}\\ J_\text{VAE} =&-\int q_E(h|x)\log [P(x|h)]dh \\ &+ \lambda_\text{KL}\text{KL}(q_E(h|x)||P(h)),\\ \end{aligned} \end{small} \end{equation} where $h$ is the latent vector, $P(x|h)$ is the decoder part, $P(h)$ is the prior distribution of $h$ (usually, $\mathcal N(0,1)$), and $\lambda_\text{KL}$ is a hyperparameter. The latent vector is then split into $G$ style vectors $s_1,\ldots,s_G$ of equal length and a content vector $c$. \subsubsection{Style Attachment Loss on Latent Style Space.} For any style type, we need to make the style vector $s_\star$ ``appear'' as if it were sampled from the correct style value's unified distribution. For simplicity, we omit the subscripts and use $s$ for the style vector and $t$ for the style value\ in this section. So, we need to maximize the probability of the style vector $s$ belonging to style value\ $t$, denoted $P(t|s)$. In Fig.~\ref{fig:arch}, the Gaussian distributions of the style values\ have parameters $\mu_{\star j}$, $\Sigma_{\star j}$, $j\,{\in}\, \{1,\ldots,|T|\}$ ($T$ represents the set of style values, $|T|$ represents the number of style values). Then, the probability density function (PDF) of the $j$-th style value\ $T_{\star j}$ is as follows: \begin{equation}\label{eq:gauss} \begin{small} \begin{aligned} p_\text{Nor}(s|T_{\star j})=\frac{\exp\Big(-\frac{1}{2}(s-\mu_{\star j})^\top\Sigma_{\star j}^{-1}(s-\mu_{\star j})\Big)}{\sqrt{(2\pi)^d \det(\Sigma_{\star j})}}, \end{aligned} \end{small} \end{equation} where $d$ denotes the dimension of the style vector, and ``Nor'' is short for ``Normal distribution''. Then, we use Bayes' theorem to calculate this probability, as shown in Eq.~\ref{eq:asmsfm}: \begin{equation}\label{eq:asmsfm} P(t|s) = \frac{p_\text{Nor}(s|t)p(t)}{p(s)} = \frac{p_\text{Nor}(s|t)p(t)}{\sum_{t'\in T} (p_\text{Nor}(s|t')p(t'))}, \end{equation} where $p(t)$ is the prior distribution of the style values, which is determined by the dataset.
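To make Eq.~\ref{eq:asmsfm} concrete, the following is a minimal illustrative sketch of the posterior computation (the use of SciPy and the variable names are assumptions of the sketch, not a description of our actual implementation):
\begin{verbatim}
# Hedged sketch of Eq. (3): posterior P(t|s) over style values,
# from per-value Gaussian densities and the label prior p(t).
import numpy as np
from scipy.stats import multivariate_normal

def style_posterior(s, means, covs, prior):
    # means[j], covs[j]: parameters of the j-th style-value
    # Gaussian; prior[j]: p(t_j) from the label frequencies.
    log_joint = np.array([
        multivariate_normal.logpdf(s, mean=m, cov=c) + np.log(p)
        for m, c, p in zip(means, covs, prior)])
    log_joint -= log_joint.max()       # numerical stabilisation
    post = np.exp(log_joint)
    return post / post.sum()           # Bayes normalisation

# Attachment loss for one example with gold style value t_idx:
#   L_al = -log style_posterior(s, means, covs, prior)[t_idx]
\end{verbatim}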
Therefore, we define the style attachment loss as the negative log-likelihood (NLL) loss according to the labeled style value: \begin{equation} \mbox{$ L_\text{al} = -\frac{1}{|D|}\sum_{m=1}^{|D|}\log P(t^{(m)}|s^{(m)})$,} \end{equation} where $|D|$ denotes the size of the training set, and $t^{(m)}$ represents the label of the $m$-th case in the training set. \subsubsection{Style Classification Loss.} We need to make each unified distribution really map to the corresponding style value, so we sample vectors from each style value\ distribution and force them to be classified as that style value. We still use the Gaussian distribution to calculate the classification loss. We first sample $M$ vectors from the distributions $\mathcal N(\mu_{\star j},\Sigma_{\star j})$, denoted $\tilde s^{(m)}_{\star j}$ (which is the $m$-th sample from the $j$-th style value\ distribution). Then, we calculate the probability that $\tilde s^{(m)}_{\star j}$ belongs to the style value\ $T_{\star j}$, as in Eq.~\ref{eq:pc}: \begin{equation}\label{eq:pc} P_\text{c}(T_{\star j}|\tilde s^{(m)}_{\star j}) = \frac{p_\text{Nor}(\tilde s^{(m)}_{\star j}|T_{\star j})}{\sum_{t'\in T} p_\text{Nor}(\tilde s^{(m)}_{\star j}|t')}. \end{equation} Since the distribution parameters $\mu_{\star j}$ and $\Sigma_{\star j}$ also need to be updated in the training phase, we use a reparameterization trick~\cite{devroye1996random,kingma2014auto} to make the sampling process differentiable: \begin{equation} \epsilon\sim\mathcal N(\textbf{O},\textbf{I}),\quad \tilde s^{(m)}_{\star j} = A_{\star j}\epsilon+\mu_{\star j}, \end{equation} where $\textbf{O}$ is the all-zero vector, $\textbf{I}$ is the identity matrix, and $A_{\star j}A_{\star j}^\top\,{=}\,\Sigma_{\star j}$. So, we take $A_{\star j}$, $j\,{\in}\, \{1,2,\ldots,|T|\}$, as the distribution parameters that need to be trained instead of $\Sigma_{\star j}$. Then, we define the classification loss as an NLL~loss: \begin{equation} \mbox{$ L_\text{cl} = -\frac{1}{M|T|}\sum_{j=1}^{|T|}\sum_{m=1}^{M}\log P_\text{c}(T_{\star j}|\tilde s^{(m)}_{\star j})$.} \end{equation} \subsection{Style-Content Disentanglement}\label{sec:sc} We need to guarantee that the content vector does not contain anything about the style. Since we need to disentangle the content vector from the style vectors both before and after the style-value\ sampling, we propose to minimize two mutual-information terms: $I(c,t)$ between the content vector $c$ and the style labels $t$, and $I(c,s)$ between the content vector~$c$ and the style vectors $s$. \paragraph{$\textbf{I(c,t)}$:} To minimize the mutual information $I(c,t)$, we only need to minimize its upper bound, which can be stated as follows (a detailed proof is given in the extended paper): \begin{equation}\label{eq:ict} \begin{small} \begin{aligned} I(c,t)&=\mbox{$\mathbb{E}_x\Big[\int_c\sum_{t}p(c,t|x)\log\frac{p(c,t|x)}{p(c|x)p(t)}\Big]$}\\ &\le\mbox{$\mathbb{E}_x\Big[\sum_{t'}p(t')KL(p(c|t,x)||p(c|t',x))\Big]$,}\\ \end{aligned} \end{small} \end{equation} where $p(t')$ is a constant for a specific dataset, and $p(c|t,x)$ needs to be modeled by another Gaussian distribution $\mathcal N_c(\mu'_t,\Sigma'_t)$; we then force all the content vectors with label $t$ to obey this distribution. To achieve that, we minimize the negative log-likelihood of the content vectors given the style labels in a batch:
\begin{equation}\label{eq:probnll} \mbox{$L_\text{prob-nll}=-\frac{1}{M}\sum_{m=1}^M\log p_c(c^{(m)}|t^{(m)})$,} \end{equation} where $p_c(c^{(m)}|t^{(m)})$ is obtained from the Gaussian distribution $\mathcal N_c(\mu'_t,\Sigma'_t)$ in a similar way as $p_\text{Nor}(s|T_{\star j})$ in Eq.~\ref{eq:gauss}, and~$M$ represents the batch size. So, the loss function for style-content disentanglement is as follows: \begin{equation} \begin{aligned} L_\text{sc}&= \mbox{$\mathbb{E}_x\Big[\sum_{t'}p(t')KL(p_c(c|t,x)||p_c(c|t',x))\Big]$}\\ &+\lambda_\text{sc}L_\text{prob-nll}, \\ \end{aligned} \end{equation} where $\lambda_\text{sc}$ is a predefined hyperparameter, and $p_c(c|t)$ is constrained by the loss in Eq.~\ref{eq:probnll}. The KL-divergence term is tractable because $p_c$ is a Gaussian distribution. According to the final form of $L_\text{sc}$, our method is similar to the previously proposed variational fair autoencoder~\cite{louizos2015variational}. In their method, they propose a maximum mean discrepancy penalty as a regularizer to the model, which encourages the statistical moments of two classes to be the same. Unlike them, we assign a prior distribution to each label and encourage these distributions to be the same, which is easy to apply to multi-class cases. On the other hand, the method of compressing the input $x$ out of $c$~\cite{moyer2018invariant} is more suitable for variational autoencoders. In comparison, our derivation of $I(c,t)$ is based on our Gaussian assumption, which is applicable to both vanilla and variational autoencoder architectures. \paragraph{$\textbf{I(c,s)}$:} \label{sec:ics} We prove (in the extended paper) that minimizing $I(c,s)$ is equivalent to minimizing $KL(p(c|x)||p(c))$ and $KL(p(s|x)||p(s))$, which is just the regularization term in variational autoencoders. So, we do not need any additional loss function for $I(c,s)$. \subsection{Multi-type\ Disentanglement}\label{sec:mg} Multiple style types\ often occur together in a text or image, such as sentiment polarity and tense. We address two problems in this section: (1) make the style vectors $s_1,\ldots,s_G$ ($G$ is the number of style types) for different style types\ independent of each other, and (2) when the training set has labels of multiple types, there will be a \textit{training bias} that makes different types\ affect each other. We can write the training bias as $p(T_{i\star}|T_{j\star})>p(T_{i\star}), i\ne j$, where $T_{i\star}$ and $T_{j\star}$ stand for style value\ labels in style types\ $i$ and $j$, respectively.\footnote{Ideally, it should be $p(T_{i\star}|T_{j\star})=p(T_{i\star}), i\ne j$.} For example, if positive sentiment always occurs together with the past tense, then the positive sentiment style vector tends to carry information of the past tense. Implicit disentanglement~\cite{higgins2017beta,chen2018isolating} uses unsupervised methods for scalar multi-type\ disentanglement, where each dimension of the latent vector encodes a specific feature (style type). Inspired by $\beta$-TC\-VAEs~\cite{chen2018isolating}, multi-type\ vector disentanglement can also be done by minimizing the \textit{total correlation} term: \begin{equation}\label{eq:ltc} \mbox{$KL(q(s_1,\ldots,s_G)||\prod_i q(s_i))$,} \end{equation} where $G$ denotes the number of different types. In our unified distribution setting, the $s_i$'s are generated from their own style-value\ distributions rather than from the input~$x$.
So, $q(s_1,s_2,\ldots,s_G)$ can be factorized as in Eq.~\ref{eq:qsss} ($T_{1x},\ldots,$ $T_{Gx}$ are the corresponding style values\ of $x$ in the $G$ types): \begin{equation}\label{eq:qsss} \begin{aligned} \mbox{$q(s_1,s_2,\ldots,s_G)$}& = \mbox{$\mathbb E_x[\prod_iq(s_i|T_{1x},\ldots,T_{Gx})]$}\\ &=\mbox{$\sum_x\Big[\frac{\prod_i^Gq(T_{1x},\ldots,T_{Gx}|s_i)q(s_i)}{p(x)^{G-1}}\Big]$.}\\ \end{aligned} \end{equation} The proof is given in the extended paper. Since the $T_{ix}$ are potentially related, in general we have: \begin{equation} \begin{small} \begin{aligned} \mbox{$q(T_{1x},\ldots,T_{Gx}|s_i)=\prod_j^G q(T_{jx}|s_i,T_{kx(k=1\ldots G,k\ne j)})$.} \end{aligned} \end{small} \end{equation} Then, we have the following theorem (proved in the extended paper) to achieve multi-type\ disentanglement. Here, $\mathcal H(\cdot)$ denotes the entropy of a probability distribution. \begin{theorem}\label{the:tts}{\it For random vector variables $s_1, \ldots, s_G$ and values\ $t_1,\ldots, t_G$ sampled from $G$ style types\footnote{For clarity, $t_i$ is a random variable, $T_{ix}$ is a constant label.}, if $\mathcal H(p(t_i|s_i))\,{=}\,0$, for all $i$, and $\mathcal H(p(t_j|s_i))$ $=$ MAX,\footnote{We use ``=\,MAX'' to represent ``reaches the maximum value''.} for all $i,j$ with $j\ne i$, then for all $i,j$ with $j\ne i$, it holds that $p(s_j|t_i)=p(s_j)$ and $p(t_i|t_j,s_i)=p(t_i|s_i)$. }\end{theorem} To alleviate the training bias, we also need to ensure that $p(t_i|t_j)=p(t_i)$ instead of just $p(t_i|t_j,s_i)=p(t_i|s_i)$. So, we also need to make $\mathcal H(p(t_j|t_i))$, for all $i,j$ with $j\ne i$, reach the maximum value. According to Theorem~\ref{the:tts}, when we guarantee that $\mathcal H(p(t_i|s_i))=0$, for all $i$, and $\mathcal H(p(t_j|s_i))=\text{MAX}$ and $\mathcal H(p(t_j|t_i))=\text{MAX}$, both for all $i,j$ with $j\ne i$, we can make $q(t_{1x},\ldots,t_{Gx}|s_i)=\prod_j^G q(t_{jx}|s_i)$. Then, the \textit{total correlation} term reaches its minimum value $0$ (proved in the extended paper). Accordingly, we obtain the loss function for multi-type\ disentanglement: \begin{equation}\label{eq:mgd} \small \mbox{$L_{m} = \sum_i\sum_{j,j\ne i}\Big[\mathcal H(p(t_i|s_i)) - \mathcal H(p(t_j|s_i))- \mathcal H(p(t_j|\tilde s_i))\Big]$,} \end{equation} where $\tilde s_i$ is a sample from the distribution of style value\ $t_i$. \begin{table*}[!t] \centering \begin{small} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{}& \multicolumn{2}{c|}{Yelp} & \multicolumn{3}{c|}{Amazon} \\ \cline{3-7} \multicolumn{2}{|c|}{}&\newcite{john-etal-2019-disentangled} & Our method&\newcite{john-etal-2019-disentangled} &\multicolumn{2}{c|}{Our method}\\ \hline \multicolumn{2}{|c|}{Style Type} &\multicolumn{2}{c|}{Sentiment}&\multicolumn{2}{c|}{Sentiment}& Tense\\ \hline \multicolumn{2}{|c|}{Random Guess} &\multicolumn{2}{c|}{61.30}&\multicolumn{2}{c|}{50.82}& 52.30\\ \hline \multirow{3}{*}{Vanilla} & Style$^\uparrow$ &\textbf{97.40}&97.31&82.10&\textbf{83.15}&92.50\\ \cline{2-7} &Content$^\downarrow$ &65.80&\textbf{65.48}&67.50&\textbf{52.30} &64.40\\ \cline{2-7} & Style + Content$^\uparrow$ &\textbf{97.40}&97.31&81.90&\textbf{83.15}&92.50\\ \hline \multirow{3}{*}{VAE} &Style$^\uparrow$ &\textbf{97.40}&97.37&81.00&\textbf{81.75}&92.40\\ \cline{2-7} &Content$^\downarrow$ &69.70&\textbf{63.02}&69.30&\textbf{52.30}&64.40\\ \cline{2-7} & Style + Content$^\uparrow$&\textbf{97.40}&97.38&81.00&\textbf{81.75}&92.40\\ \hline \end{tabular} \end{small} \smallskip \caption{The classification accuracies on each space.
An up arrow means that higher values are better, while a down arrow means that lower values are better. Note that for the content space, lower accuracy is better, because the goal of disentanglement is that the content space contains no style information. For the tense style, there are no such evaluations in previous work, so we did not list any baseline results in this table.} \label{tab:de} \end{table*}% \subsection{Training and Inference} In the training phase, we minimize the following objective: \begin{equation}\label{eq:total} J=J_\text{AE}+\lambda_aL_\text{al}+\lambda_cL_\text{cl}+\lambda_sL_\text{sc} +\lambda_mL_{m}. \end{equation} The term $J_\text{AE}$ can be replaced by $J_\text{VAE}$ for variational autoencoder architectures. $\lambda_a, \lambda_c, \lambda_s$, and $\lambda_m$ in Eq.~\ref{eq:total} are predefined hyperparameters. In the inference phase, if we would like to change the current style value\ to another style value\ (within type\ $i$), we proceed as follows. First, we encode the sentence to a latent representation $h$ and split $h$ to obtain $s_1,\ldots,s_G$ and $c$. Second, we sample a new style vector $\tilde s_i$ from the target style distribution. Finally, we replace $s_i$ with $\tilde s_i$ and generate a sentence in the target style with the decoder. \section{Experiments}\label{sec:exp} In this section, we answer the following three questions: (1)~Is the disentanglement effect comparable to previous works? (2) Is the style-transfer performance comparable to, or even better than, that of previous works? (3) Will the proposed method alleviate the training bias problem? \subsection{Data and Preprocessing} We tested our method on two datasets: Yelp Service Reviews\footnote{\url{https://github.com/shentianxiao/language-style-transfer}}~\cite{Shen2017Style,Zhao2018Adversarially} and Amazon Product Reviews\footnote{\url{https://github.com/fuzhenxin/textstyletransferdata}}~\cite{Fu2018Style}. Both datasets contain sentiment values\ (positive and negative). We annotated tense labels\footnote{The tense label files are available at: \url{https://drive.google.com/drive/folders/1I1hOZChFPFc2LYreWp1W6l4WpdS0pnVK?usp=sharing}} in the Amazon dataset by matching the sentence tokens against a time-word corpus collected from the TimeBank dataset\footnote{\url{http://timeml.org}}~\cite{pustejovsky2003timebank}. This is consistent with the method described in \newcite{hu2017toward}. We do not label the Yelp dataset in the same way, because its time information is so vague that automatic annotation would introduce too many errors and mislead our analysis.
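As an illustration, the matching heuristic can be sketched as follows (the word sets below are small hypothetical samples, not the actual corpus extracted from TimeBank):
\begin{verbatim}
# Hedged sketch of the keyword-based tense annotation.
# The word sets are illustrative samples, not the real corpus.
PAST    = {"was", "were", "had", "did", "yesterday", "ago"}
PRESENT = {"is", "are", "am", "does", "today", "now"}
FUTURE  = {"will", "shall", "tomorrow", "gonna"}

def annotate_tense(sentence):
    tokens = set(sentence.lower().split())
    if tokens & FUTURE:
        return "future"
    if tokens & PAST:
        return "past"
    if tokens & PRESENT:
        return "now"
    return None   # no time word matched; tense left unlabeled
\end{verbatim}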
\begin{figure*}[!t] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{cccc} \multicolumn{2}{c}{-----------Yelp-----------} & \multicolumn{2}{c}{-----------Amazon-----------}\\ \includegraphics[width=0.25\textwidth]{./graphics/yelp_vanilla_style.pdf}&\includegraphics[width=0.25\textwidth]{./graphics/yelp_vanilla_content.pdf}&\includegraphics[width=0.25\textwidth]{./graphics/amazon_vanilla_style1.pdf}&\includegraphics[width=0.25\textwidth]{./graphics/amazon_vanilla_style2.pdf}\\ (a) Vanilla, Sentiment&(b) Vanilla, Content &(c) Vanilla, Sentiment &(d) Vanilla, Tense\\ \includegraphics[width=0.25\textwidth]{./graphics/yelp_VAE_style.pdf}&\includegraphics[width=0.25\textwidth]{./graphics/yelp_VAE_content.pdf}&\includegraphics[width=0.25\textwidth]{./graphics/amazon_VAE_style1.pdf}&\includegraphics[width=0.25\textwidth]{./graphics/amazon_VAE_style2.pdf}\\ (e) VAE, Sentiment&(f) VAE, Content &(g) VAE, Sentiment&(h) VAE, Tense\\ \end{tabular} } \smallskip \caption{The t-SNE visualization of each latent space. (a), (b), (e), and (f) are the style space (``$s$'') and content space (``$c$'') of the Yelp dataset. (c), (d), (g), and (h) are the two style spaces (sentiment ``$s_1$'' and tense ``$s_2$'') of the Amazon dataset, which are generated simultaneously in multi-type\ disentanglement. All latent spaces are generated by two different methods: a vanilla autoencoder and a variational autoencoder.} \label{fig:vis} \end{figure*} \subsection{Disentanglement Effect} We use two metrics to evaluate the disentanglement effect. For the first metric, we train separate logistic regression classifiers on all the generated style vectors and content vectors. We compare the performance with previous work in Table~\ref{tab:de}. According to Table~\ref{tab:de}, the high accuracies of the style vectors on Yelp and Amazon sentiment/tense are nearly identical to the corresponding accuracies of the combined space (style + content), which means that the style vectors contain nearly all the sentiment/tense information. In contrast, the performance of the content vectors is only comparable to random guessing. These results indicate that our disentanglement method is very effective. In Fig.~\ref{fig:vis}, we show the t-SNE visualization of each latent space\footnote{In practice, we use the multi-core version for acceleration: \url{https://github.com/DmitryUlyanov/Multicore-TSNE}}~\cite{maaten2008visualizing}. For the vanilla autoencoders, we can see that in (a), (c), and (d), the different style values, such as positive/negative for sentiment and past/now/future for tense, clearly occupy separate regions. The margins between any two style values\ are sufficient for discrimination. In Fig.~\ref{fig:vis} (b), the content space is indistinguishable due to its lack of sentiment information. For the variational autoencoders, in Fig.~\ref{fig:vis} (e), (f), (g), and (h), all the spaces have very regular shapes. All disentangled groups of style values, including positive/negative for sentiment and past/now/future for tense, appear as separate ellipses. The reason for this phenomenon is that we force the KL divergence of each style value's distribution to be small, while at the same time forcing different style values\ to be discriminated. Therefore, different style values\ tend to stay close together, but with a large margin between them.
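The probing metric behind Table~\ref{tab:de} can be sketched as follows (a minimal illustration assuming the latent vectors have been exported as NumPy arrays; scikit-learn and the function name are assumptions of the sketch):
\begin{verbatim}
# Hedged sketch of the probing classifiers behind Table 1.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(vectors, labels):
    # Fit a logistic regression on one latent space and report
    # held-out accuracy; high accuracy on the style space and
    # chance-level accuracy on the content space indicate
    # successful disentanglement.
    X_tr, X_te, y_tr, y_te = train_test_split(
        vectors, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# probe_accuracy(style_vecs, sentiment_labels)    # should be high
# probe_accuracy(content_vecs, sentiment_labels)  # near chance
\end{verbatim}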
\subsection{Style-Transfer Performance} To be consistent with previous works, we use five evaluation metrics to evaluate the style-transfer effect of our disentanglement method, and list all the results in Table~\ref{tab:overall}. (1) Style-transfer accuracy (STA): we trained two external sentence classifiers for sentiment and tense using TextCNN~\cite{kim2014convolutional}, following previous works~\cite{hu2017toward,Fu2018Style,john-etal-2019-disentangled}, and then used them to measure the sentiment/tense accuracy of the style-transferred sentences with the target style value\ as ground truth. The external classifiers achieve acceptable accuracies on the validation set (97.68\% on Yelp, 82.3\% on Amazon Sentiment, and 96.5\% on Amazon Tense), which provides a reliable approximation of the sentiment/tense accuracy. \begin{table*}[htp] \centerline{ \resizebox{\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c| c|c|c |c|c|c|c|} \hline & \multicolumn{5}{c|}{Yelp} &\multicolumn{6}{c|}{Amazon} \\\cline{2-12} & STA & CS & WO &PPL & BLEU & STA/Sentiment &STA/Tense & CS & WO & PPL & BLEU \\\hline \newcite{Zhao2018Adversarially} & 0.818& 0.883 &0.272 & 85 & - & 0.552& - &0.926 &0.169 & 75 & - \\\hline \newcite{li-etal-2018-delete} & 0.862 &\textbf{0.941} &0.522 & 70 & - & \color{gray}{0.430} & -&\color{gray}{\textbf{0.976}} &\color{gray}{\textbf{0.799}}& \color{gray}{65}&- \\\hline \newcite{xu2018unpaired} & 0.803 &0.924 &0.427 & 470 & - & 0.723 & -&0.912 &0.222 & 332&- \\\hline Logeswaran et al.~\shortcite{logeswaran2018content}& 0.905&0.879 &0.503 &133 & 17.4 & 0.857 & 0.942 &0.908 &0.356 & 187& 16.6\\\hline \newcite{lample2019multipleattribute}& 0.877 & 0.856 &0.459 & 48& 14.6& 0.896 & 0.965 &0.897 &0.372 & 92& 18.7\\\hline \newcite{john-etal-2019-disentangled}~(Vanilla)& 0.883 &0.915 &0.549 & 52 & 18.7 & 0.720& 0.926 &0.921 &0.354& 73& 16.5\\\hline \newcite{john-etal-2019-disentangled}~(VAE) & 0.934 &0.904 &0.473& 32 & 17.9 & 0.822 & 0.945& 0.900 &0.196& 63&9.8 \\\hline \hline Ours~(Vanilla) & 0.877 & 0.908 & \textbf{0.554 } & 45 & 16.1 & 0.789 & 0.963& \textbf{0.930} & \textbf{0.387}& 68& 15.4 \\\hline Ours~(Vanilla)$-L_\text{cl}$ & 0.605 & 0.852 & 0.419 & 51 & 13.6 & 0.679 & 0.908& 0.896 & 0.382 & 74& 12.5\\\hline Ours~(Vanilla)$-L_\text{sc}$ & \color{gray}{0.383} & \color{gray}{0.875} & \color{gray}{0.550} & \color{gray}{42} & 15.8 & 0.746 & 0.881& 0.904 & \textbf{0.387}& 65& 16.9\\\hline Ours~(VAE) & \textbf{0.944} & 0.912 & 0.455 & 27 & \textbf{21.2} & \textbf{0.902} & \textbf{0.993}& 0.900 & 0.338& 44&\textbf{20.1} \\\hline Ours~(VAE)$-L_\text{cl}$ & 0.734 & 0.860 & 0.391 & 32 & 15.8 & 0.751 & 0.932& 0.907 & 0.310 & 57& 13.4 \\\hline Ours~(VAE)$-L_\text{sc}$ & 0.656 & 0.868 & 0.438 & \textbf{25} & 19.8 & 0.720 & 0.877& 0.898 & 0.345& \textbf{43}&17.8 \\\hline \end{tabular} }} \smallskip \caption{The overall style-transfer performance. For the tense style, we do not have previous works to compare with, so we only list our own results. STA: style-transfer accuracy, CS: cosine similarity w/o sentiment/tense words, WO: word overlap w/o sentiment/tense words. For the sentiment type, the transfer directions are ``Neg$\rightarrow$Pos'' and ``Pos$\rightarrow$Neg''. For the tense type, the transfer directions are ``Past$\rightarrow$Now'', ``Now$\rightarrow$Future'', and ``Future$\rightarrow$Past''. Note that the grey numbers have very low STA, which means the corresponding models largely fail at style transfer; we therefore highlight the best values among the remaining results.}
\label{tab:overall} \end{table*}% (2) Cosine similarity (CS): We computed the cosine similarity between the original sentence's vector and the style-transferred sentence's vector. Each sentence vector is obtained by concatenating the \textit{max}, \textit{min}, and \textit{mean} of the word vectors after deleting the sentiment/tense words. This is again consistent with previous works~\cite{Fu2018Style,john-etal-2019-disentangled}. The goal of this metric is to evaluate the semantic similarity between the original sentence and the style-transferred sentence apart from sentiment words. (3) Word overlap (WO): Following \newcite{john-etal-2019-disentangled}, we calculated the unigram overlap between the original and the style-transferred sentence, which is defined as the ratio between the number of words in the intersection set and the number of words in the union set of the two sentences. (4) Perplexity (PPL): We applied a trigram Kneser-Ney~\cite{kneser1995improved} language model as the perplexity evaluator. We trained two language models separately on the respective datasets, and used the trained models to evaluate the fluency of the generated sentences. A lower PPL value represents a more fluent sentence. (5) BLEU: We calculated the BLEU 1$\sim$4 scores between the original sentence and the style-transferred sentence, and took the average of BLEU 1$\sim$4 as the BLEU score. Again, the sentiment/tense words are deleted before evaluation. According to Table~\ref{tab:overall}, on Yelp and Amazon sentiment, our method is comparable with previous works on the CS and WO metrics except for \citep{li-etal-2018-delete}. But our variational autoencoder (VAE) architectures can outperform \citep{li-etal-2018-delete} by a large margin on the STA metric. Also, our VAE architectures can outperform all the previous methods in STA on the two datasets (with Student's t-test, $p<0.01$). In the Amazon tense experiment, although we do not have previous works to compare with, we achieved a high STA value, while the CS and WO values remained comparable with the same metrics on Amazon sentiment. We found that the CS and WO values of vanilla autoencoders (AEs) are higher than those of VAEs, because VAEs are more flexible in the latent space, so the generated sentences are much more varied than those of vanilla AEs. For ablation tests, we remove $L_\text{cl}$ and $L_\text{sc}$ from the objective for comparison. We did not try to remove $L_\text{al}$, because the whole model would become disconnected by doing that. According to Table~\ref{tab:overall}, after $L_\text{cl}$ is removed, the value of STA decreases a lot, because the model can no longer discriminate different style values\ without $L_\text{cl}$. On the other hand, when we remove $L_\text{sc}$, the STA value gets even lower. This is likely because the content space is flooded with style information without style-content disentanglement; thus, the sentence cannot be completely transferred to the target style.
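For concreteness, the CS and WO metrics can be sketched as follows (a minimal illustration; the function names and the NumPy-based pooling are assumptions of the sketch, not our exact evaluation scripts):
\begin{verbatim}
# Hedged sketches of the CS and WO metrics described above.
import numpy as np

def word_overlap(src, tgt, style_words=frozenset()):
    # Unigram overlap |A & B| / |A | B|, after removing the
    # sentiment/tense words.
    a = set(src.lower().split()) - style_words
    b = set(tgt.lower().split()) - style_words
    return len(a & b) / len(a | b) if (a | b) else 0.0

def sentence_vector(word_vecs):
    # word_vecs: array of shape (n_words, dim); concatenate the
    # max-, min-, and mean-pooled word vectors.
    return np.concatenate(
        [word_vecs.max(0), word_vecs.min(0), word_vecs.mean(0)])

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
\end{verbatim}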
\begin{table}[!t] \centerline{ \resizebox{0.8\linewidth}{!}{ \begin{tabular}[t]{|l|c|c|c|} \hline & & Sentiment & Tense \\ \hline Origin &STA &0.8150 &0.9715\\ \hline \multirow{2}{*}{Vanilla} &STA$_\text{keep}$ &0.7170 &0.9220\\ &$\Delta$ &\textbf{0.0980} &\textbf{0.0495}\\ \hline \multirow{2}{*}{Vanilla - $L_{m}$} &STA$_\text{keep}$ &0.7020 &0.6920\\ &$\Delta$ & 0.1130 &0.2795\\ \hline \multirow{2}{*}{VAE} &STA$_\text{keep}$ &0.7850 &0.8825\\ &$\Delta$ & \textbf{0.0300} & \textbf{0.0890} \\ \hline \multirow{2}{*}{VAE - $L_{m}$} &STA$_\text{keep}$ & 0.7260 & 0.8220\\ &$\Delta$ &0.0890 &0.1495\\ \hline \end{tabular} }} \smallskip \caption{\textbf{STA$_\text{keep}$} stands for the current style type's STA after another type\ is transferred, i.e., we observe the STA of \textit{sentiment} when we are transferring the sentence's \textit{tense}, and the STA of \textit{tense} when we are transferring \textit{sentiment}. $\Delta=\text{STA}-\text{STA}_\text{keep}$. The line ``Origin'' represents the CNN-predicted result (STA) of the original sentences.} \label{tab:delta} \end{table}% \subsection{Alleviation of Training Bias} When the training bias is alleviated, the value\ of a style type\ tends not to be affected by the transfer of another style type's value. So, we measure the style preservation of a baseline style type\ (which is held constant) while transferring the other style type. Therefore, a lower deviation of STA from the original style type\ is better, and removing the loss $L_{m}$ should produce poorer results. The effect of our multi-type\ loss function on alleviating training bias is shown in Table~\ref{tab:delta}, where we list the model's accuracies on one style type\ when the other style type\ is transferred (STA$_\text{keep}$ score). We report the STA$_\text{keep}$ of vanilla AEs and VAEs on the two types\ (sentiment and tense), and we remove the loss $L_{m}$ from the models for comparison. We found that the STA$_\text{keep}$ score of our full model can be very close to the original sentence's STA score. But if we remove the $L_{m}$ term, the accuracy of either type\ decreases a lot after the style value\ of the other type\ is transferred. This fact illustrates the effectiveness of our multi-type\ disentanglement method. \subsection{Human Evaluation} We conducted a human evaluation on the Yelp dataset and the Amazon dataset, following previous works. We randomly sampled 1,000 cases from the sentences generated by each model and asked 6 data graders to give each case a sentiment label or tense label (for transfer accuracy (TA)) and two scores, on content preservation (CP) and language quality (LQ). Each score is between 1 and 5. The detailed annotation principles are listed in the extended paper. We randomly shuffled the generated sentences to conduct the grading process in a strictly blind fashion. The human evaluation results are shown in Table~\ref{tab:he}. Our measures of inter-rater agreement (Krippendorff's alpha values~(\citeyear{krippendorff2004content})) are also listed in Table~\ref{tab:he}; all of them are acceptable according to Krippendorff's criterion~(\citeyear{krippendorff2004content}).
\begin{table}[!t] \centerline{ \resizebox{\linewidth}{!}{ \begin{tabular}{|l|l|c|c|c|c|c|c|} \hline && \multicolumn{3}{c|}{Sentiment}& \multicolumn{3}{c|}{Tense}\\\cline{3-8} && TA & CP & LQ& TA & CP & LQ\\\hline \multirow{6}{*}{Yelp} &\newcite{Zhao2018Adversarially} &75.42 &3.23& 3.86 & - & -& -\\\cline{2-8} &\newcite{john-etal-2019-disentangled}~(Vanilla) &82.11& 3.52 &4.02& - & -& -\\\cline{2-8} &\newcite{john-etal-2019-disentangled}~(VAE) &85.70 &3.70 &4.26& - & -& -\\\cline{2-8} &Ours (Vanilla) &84.28 & 3.69& 4.32& - & -& -\\\cline{2-8} &Ours (VAE) & \textbf{86.04}& \textbf{3.78}&\textbf{4.39}& - & -& -\\\cline{2-8} & IRA & 0.75 & 0.71& 0.82& - & -& -\\\cline{2-8} \hline \hline \multirow{5}{*}{Amazon} &\newcite{john-etal-2019-disentangled}~(Vanilla) &76.35& 3.01 &3.65&87.34 &2.97 & 3.96\\\cline{2-8} &\newcite{john-etal-2019-disentangled}~(VAE) &79.60 &3.26 &3.76& 88.92&3.14 & 4.14\\\cline{2-8} &Ours (Vanilla) &79.03 & 3.34& 3.74&91.09 & 3.21&4.09 \\\cline{2-8} &Ours (VAE) & \textbf{83.28}& \textbf{3.52}&\textbf{4.08}&\textbf{93.45} &\textbf{3.58} & \textbf{4.23}\\\cline{2-8} & IRA & 0.82 & 0.78& 0.85& 0.93& 0.89& 0.87\\\hline \end{tabular} }} \smallskip \caption{Human evaluation results on the Yelp and Amazon datasets. Here, IRA represents inter-rater agreement.} \label{tab:he} \end{table}% \section{Related Work}\label{sec:rel} Disentanglement is a very important method for interpretable deep learning models~\cite{chen2019looks,sha2020estimating,sha2021learn}. Disentanglement works can be split into implicit disentanglement and explicit disentanglement. We summarize their characteristics as follows, focusing on the sentiment and tense styles for the comparison with previous works. Implicit disentanglement disentangles meaningful factors from a variational space, but it is not known in advance into how many disentangled components the latent space will be separated. $\beta$-VAEs~\cite{higgins2017beta} and $\beta$-TCVAE~\cite{chen2018isolating} are unsupervised methods that extend variational autoencoders (VAEs)~\cite{kingma2014auto,rezende2014stochastic} and learn disentangled representations by putting a penalty on the \textit{total correlation (TC)} term. There are also a number of methods that extend $\beta$-VAEs and analyze different variants under specific problems~\cite{moyer2018invariant,mathieu2018disentangling,kumar2017variational,esmaeili2018structured,hoffman2016elbo,narayanaswamy2017learning,kim2018disentangling,rezende2018taming,shao2020controlvae}. However, in the training process of implicit disentanglement methods, some components may be pruned~\cite{stuhmer2019independent}, which may lead to incorrect interpretations of the data. In comparison, explicit disentanglement separates the latent space into more interpretable components and controls them using latent variables. For example, \newcite{chen2016infogan} present a GAN-based method that maximizes the mutual information between a scalar variable and a generator. Then, the scalar variable can be taken as a controller for the style of the generated text or image. Typically, adversarial methods are used to guarantee that different disentangled factors are independent~\cite{john-etal-2019-disentangled,romanov2019adversarial}. However, adversarial methods suffer from oscillating and unstable model parameters, which makes training hard to converge.
Also, some research~\cite{elazar-goldberg-2018-adversarial,moyer2018invariant} pointed out that adversarial training is not reliable for learning disentangled, invariant representations. One of the most common applications of disentanglement is text/image style transfer. Apart from some disentanglement-free approaches~\cite{preoctiuc2018user,logeswaran2018content,dai-etal-2019-style,lample2019multipleattribute,dankers2019modelling}, there are three main approaches to style transfer: (1) example imitation: extract a high-level feature and force the input text/image's feature to approach the example's feature~\cite{gatys2016image}; (2) scalar-variable tuning: after disentangling the input into latent scalar variables, slightly tune a scalar variable up or down, expecting the generated text/image to change accordingly~\cite{chen2016infogan,hu2017toward,kumar2017variational,malandrakis-etal-2019-controlled}; and (3) vector-variable replacement: after disentangling the input sentence, sample some vectors of the target style and replace the original style vector with their average~\cite{john-etal-2019-disentangled}. However, an averaged style vector usually means that we have to sample some examples in the target style and calculate the average of their style vectors, which is inconvenient compared to our unified style representation. Apart from sentiment and tense, there are also other kinds of styles. \newcite{kang2019xslue} proposed a benchmark for style classification and discussed various kinds of styles, including \textit{emotion}, \textit{age}, and \textit{formality}. \newcite{kang2019male} proposed a parallel persona-style dataset. \section{Conclusion}\label{sec:con} In this paper, we proposed a unified distribution method as well as multiple loss functions to avoid adversarial training in the disentanglement process. Our method is easy to apply to multi-type disentanglement. We conducted style-disentanglement and style-transfer experiments to demonstrate the effectiveness of our method. \section*{Acknowledgments} This work was supported by the EPSRC grant ``Unlocking the Potential of AI for English Law'', a JP Morgan PhD Fellowship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund. We also acknowledge the use of Oxford's Advanced Research Computing (ARC) facility, of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1), and of GPU computing support by Scan Computers International Ltd. \small \section{Cover Letter} Dear reviewers: We have made the following changes to our paper since the EMNLP submission: \subsection{As the response to R1:} (1) (Weakness 1) We changed the terms ``style genre'' and ``style type'' to ``style type'' and ``style value'' for better understanding. (2) (Weakness 2) We reran the model and report the BLEU score in Table~\ref{tab:overall}. The reason why we did not report BLEU in the EMNLP submission is that the related previous works also did not report BLEU, and we needed to compare against them. Although WO and CS are not the usual metrics for language generation, they are usually reported in style-transfer tasks. (3) (Weakness 3) For the human evaluation, we added more details on how questions were presented to the annotators in the supplementary material. (4) (Weakness 4) In Table~\ref{tab:he}, there was a typo: our CP value is 3.78 instead of 3.68, which is the highest. We have corrected this. (5) (Questions Sec 1) We have changed the confusing terminologies.
\section{Conclusion}\label{sec:con} In this paper, we proposed a unified distribution method as well as multiple loss functions to avoid adversarial training in the disentangling process. Our method is easy to apply to multi-type disentanglement. We conducted style disentanglement experiments and style transfer experiments to prove the effectiveness of our method. \section*{Acknowledgments} This work was supported by the EPSRC grant ``Unlocking the Potential of AI for English Law'', a JP Morgan PhD Fellowship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund. We also acknowledge the use of Oxford's Advanced Research Computing (ARC) facility, of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1), and of GPU computing support by Scan Computers International Ltd. \small \section{Cover Letter} % Dear reviewers: We have made the following changes to our paper since the EMNLP submission: \subsection{As the response to R1:} (1) (Weakness 1) We changed the terms ``style genre'' and ``style type'' to ``style type'' and ``style value'' for better understanding. (2) (Weakness 2) We reran the model and report BLEU scores in Table~\ref{tab:overall}. The reason why we did not report BLEU in the EMNLP submission is that the related previous works also did not report BLEU, and we needed to compare against them. Although WO and COS are not the usual metrics for language generation, they are usually reported in style transfer tasks. (3) (Weakness 3) For the human evaluation, we added more details on how questions were presented to the annotators in the supplementary material. (4) (Weakness 4) In Table~\ref{tab:he}, there was a typo: our CP value is 3.78 instead of 3.68, which is the highest. We have corrected this. (5) (Questions Sec 1) We have changed the confusing terminologies. We have changed the description of ``training bias'' and ``style vector + content vector -> reconstruct sentence''; we believe it is much easier to understand now. (6) (Questions Sec 3) NLL is described more clearly now. We have stated in the paper that Yelp is not very clear in tense. (7) (Questions Sec 5) In the supplementary material, we further listed some style transfer cases with two style types transferred at the same time, which are multi-type examples. (8) (Questions Sec 6) We added the human evaluation of tense. \subsection{As the response to R2:} (1) (Reasons to reject 1) We have added a high-level introduction before each loss function, and thoroughly proofread this part to make each notation identical throughout the paper. We have changed the style discriminator loss to style classification loss for clarity. (2) (Reasons to reject 2.1) For the experiments, our goal is to achieve a comparable disentangling result while not using an adversarial method, so a comparable result is reasonable. Besides, some of our style transfer results are better than previous works. (3) (Reasons to reject 2.2) ``Multi-genre makes some noise'': of course it will introduce some noise; the goal of multi-genre is to simultaneously disentangle more than one style genre. In Fig.~\ref{fig:vis}, in (g) VAE, Sentiment, and (h) VAE, Tense, we can see that the disentanglement effect is very good for both style genres. (4) (Reasons to reject 3) We added two further baselines~\cite{logeswaran2018content,lample2019multipleattribute} in Table~\ref{tab:overall}. We had ablation tests in the EMNLP paper: the ablation tests for $L_{disc}$ and $L_{sc}$ are in Table~\ref{tab:overall}, and Table~\ref{tab:delta} contains an ablation test for $L_{mg}$. Different loss functions have different effects on the performance. (5) (Reasons to reject 4) For the style transfer experiment, our goal is not to become state-of-the-art; our goal is to be comparable to the previous state-of-the-art while not using an adversarial method. Actually, some of our results are higher than previous work. (6) (Reasons to reject 5) We have cited these papers. (7) (Question 1) Eq.~\ref{eq:ict} is based on the encoding result instead of the process, so whether it is a VAE or an AE does not affect the proof. (8) (Question 2) They definitely need to be separated, and it is not about separating content words; it is about separating the semantic information of sentences as a whole. (9) (Question 3) Pretrained language models like BERT bring much noisy information from the pretraining data, which is bad for disentangling. So, we do not see any feasible method to apply them to disentangling tasks. (10) (Question 4) We have cited TimeBank. \subsection{As the response to R3:} (1) (Weakness 1) $L_\text{prob-nll}$ tries to diminish the style information in the content vector. Each style type first collects all its corresponding content vectors, and then the KL divergence between different style types is reduced. That is what is meant by $L_{sc}$. The ablation test of $L_{sc}$ is shown in Table~\ref{tab:delta}. We have proofread this part to make it clearer. The value of $\lambda_{sc}$ was missing in the supplementary material; it should be 0.8. (2) (Weakness 2) We have compared our work with Logeswaran et al.\ 2018 and Lample et al.\ 2019, and we outperform them. (3) (Weakness 3) There were some mistakes in the description: both results should be ``comparable'', which is also our goal. We have proofread this part to make it clearer.
(4) (Weakness 4) We made a mistake here: after checking the records, our CP number should be 3.78, which is the highest. (5) (Weakness 5) We have added the setting of $\lambda_{KL}$ in the supplementary material. The generating method of $s_1, \dots, s_G$ is described in the Model/Autoencoder part. The line of ``Vanilla $-$ $L_{sc}$'' has also been changed to grey. We also provided better descriptions of the notations $h$ in Eq.~(1), $T$ in Eq.~(2), $O$ and $I$ in Eq.~(6), and $H$ in Eq.~(14). We have made the Gaussian terms clearer: the style Gaussian is $\mathcal N$, with probability density function $p_{Nor}$, and the content Gaussian is $\mathcal N_c$, with probability density function $p_{c}$. (6) (Question 1) From the experiments, $L_{mg}$ does not much affect STA, CS, WO, and PPL. So, we just show its effect on alleviating the training bias in Table~\ref{tab:delta}. \end{document}
{ "timestamp": "2021-08-04T02:13:40", "yymm": "2012", "arxiv_id": "2012.08883", "language": "en", "url": "https://arxiv.org/abs/2012.08883" }
\section{Introduction} Research on benchmarking learning algorithms for NLP tasks~\cite{1904.08067,2004.03705,yadav-bethard-2018-survey,guntara-etal-2020-benchmarking} has largely focused on the quality of the models, measured by accuracy metrics such as the F1 score. The costs of training the models and of using them for prediction, which include processing time, memory resources, computing power, and human expertise, are often ignored. As NLP is increasingly adopted across industries, one of the biggest hurdles in this early adoption is determining which methods to use. Companies want to provide the best model but often struggle with the resources to build it~\cite{magoula2020ai}. This is especially true for companies operating within a large emerging market with under-resourced languages, which often means a lack of human expertise, limited data, short experiment time, and budget constraints. Companies already serving a large number of users also need to consider how the model scales and keeps prediction time fast, even without expensive GPU servers. In this paper, we focus on the NLP industry landscape of the large emerging market of Indonesia, where highly demanded products are based on text classification and sequence labeling~\cite{ruliputra-2019-enterprise}. These two tasks have under-resourced annotated data in the Indonesian language~\cite{wilie2020indonlu}. We run experiments over a number of datasets using several learning algorithms. We report F1 scores, training time, resulting model size, and average prediction time. We discuss how our experiment results have influenced our business decisions and how they can help other companies adopt NLP technologies faster and more effectively. \section{Indonesian NLP Landscape} \input{sec_indo_nlp} \section{Learning Algorithms} \label{sec:learning-alg} \input{sec_lern_alg} \section{Algorithm benchmark} \input{sec_alg_bench} \section{Experiment Results} \input{sec_exp_res} \section{Discussion} \input{sec_discuss} \section{Related Work} \input{sec_related} \section{Conclusion} Transformer-based pre-trained models outperform statistical methods on various NLP tasks. However, they require extra cost in terms of training time, memory to store the model, and prediction time, compared to statistical approaches. They also rely heavily on GPUs, which are still relatively uncommon and expensive as a cloud service. Our benchmark showed that the accuracy difference between the Transformer-based and statistical approaches is 7\% at worst; it is therefore recommended that early adopters use simple methods for their production environment. After having enough resources to host large models, using pre-trained Transformers with the right (ideally distilled) base model should give the best results. We call for more research into efficient models to give more incentives to industry players in emerging markets to use the Transformer from the get-go. \bibliographystyle{splncs04} \subsection{Tasks and Datasets} We run our experiments across multiple real industry datasets. However, some of the data used in our experiments are private and thus cannot be published. Nevertheless, we provide data descriptions and statistics (see Table~\ref{eda-data}) to give a general view of what the data is about. \begin{table} \centering \caption{Data statistics. Train: total training data. Dev: total validation data. Test: total testing data. $N$: total data. $c$: number of labels. $d$: average data per class for classification.
$l$: average sentence length. $V$: vocab size. $ts$: average sentence length of the 100 sampled prediction data.} \begin{tabular}{l|rrrr|rrrr} \hline \textbf{} & \textbf{Smltk} & \textbf{Health} & \textbf{Telco} & \textbf{Sent} & \textbf{EntK} & \textbf{POS} & \textbf{TermA} & \textbf{Prod} \\ \hline {Train} & {11134} & {57938} & {11520} & {28717} & {10955} & {3000} & {7222} & {1365} \\ {Dev} & {1280} & {6894} & {1440} & {3191} & {1250} & {1000} & {802} & {854} \\ {Test} & {1272} & {6897} & {1440} & {4748} & {1372} & {1000} & {2006} & {853} \\ {$N$} & {13686} & {71729} & {14400} & {36656} & {13577} & {5000} & {10030} & {3072} \\ {$c$} & {96} & {5} & {144} & {2} & {14} & {3} & {23} & {69} \\ {$d$} & {142.51} & {14345.8} & {100} & {18328} & {-} & {-} & {-} & {-} \\ {$l$} & {5.08} & {8.56} & {5.27} & {15.6} & {12.36} & {15.72} & {26.11} & {9.61} \\ {$V$} & {2878} & {16892} & {3357} & {22896} & {18004} & {5211} & {15624} & {5655} \\ {$ts$} & {4.88} & {9.06} & {5.16} & {19.14} & {12.13} & {15.81} & {26.47} & {9.93} \\ \hline \end{tabular} \label{eda-data} \vspace{-10px} \end{table} \noindent{\textbf{Text Classification Task}} \begin{itemize} \item \textbf{Smltk} Bot intent classification for small talk (e.g.\ greetings, joking, etc.). The language is informal and the labels are imbalanced. \item \textbf{Health} Text classification for conversations between doctors and patients. The data is semi-formal and grouped into five labels: patient's complaint, patient's action, doctor's diagnosis, doctor's recommendation, and other. \item \textbf{Telco} Intent classification for a telecommunication's bot. It contains semi-formal questions and instructions with balanced data across labels. \item \textbf{Sent}\footnote{\url{https://www.kaggle.com/grikomsn/lazada-indonesian-reviews}} Sentiment analysis data about product reviews from an e-commerce platform. The data was annotated based on users' ratings. \end{itemize} \noindent{\textbf{Sequence Labeling Task}} \begin{itemize} \item \textbf{EntK} Extended NER with 14 different labels, including person, location, email, phone, datetime, number, currency, and 5 different units. This is our internal dataset that was manually gathered and annotated. \item \textbf{POS}\footnote{\url{https://github.com/kmkurn/id-pos-tagging}} POS tagging dataset from the PAN Localization Project~\cite{dinakaramani2014designing}. We use one of the data splits that was done by~\cite{kurniawan2018}. \item \textbf{TermA}\footnote{\url{https://github.com/jordhy97/final\_project}} A semi-formal review dataset from AiryRoom, a hotel aggregator platform. The data is annotated into aspects and their sentiment~\cite{fernando2019aspect}. \item \textbf{Prod}\footnote{\url{https://github.com/derhif/enamex-center}} This dataset contains product titles with annotated attributes from several e-commerce websites in Indonesia~\cite{articlerifat}. \end{itemize} \subsection{Experiment Setup} For training, we use a single Tesla T4 GPU with 15GB of memory. For prediction, we compare the same GPU machine with an Intel(R) Xeon(R) CPU @ 2.20GHz (4 cores) with 26.75 GB of memory. We use TF-IDF weighted $n$-grams ($n =$ 1, 2) as the word representation in our classic methods; a minimal sketch of this setup is shown below. For FastText, we use the pretrained 300-dimensional Indonesian word vectors\footnote{\url{https://fasttext.cc/docs/en/crawl-vectors.html}} and fine-tune them on our training dataset.
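As an illustration, the following minimal Python sketch (our reconstruction of the classic setup, not the exact production code) builds a TF-IDF weighted $n$-gram ($n=1,2$) classifier with scikit-learn; the toy texts and labels are invented for the example.
\begin{verbatim}
# Minimal sketch of the classic baseline: TF-IDF weighted
# unigrams and bigrams feeding a logistic regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training data (invented for illustration).
texts = ["halo apa kabar", "saya mau beli pulsa"]
labels = ["smalltalk", "topup"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("lr", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["halo, selamat pagi"]))
\end{verbatim}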
For pre-trained language models, we compare two base models. The first one is the multilingual BERT (mBERT) base model\footnote{\url{https://huggingface.co/bert-base-multilingual-cased}}~\cite{devlin-etal-2019-bert}, which covers 104 different languages, including Indonesian. This model is often used as the base pre-trained model for non-English datasets. Secondly, we use IndoNLU's~\cite{wilie2020indonlu} lite model\footnote{\url{https://huggingface.co/indobenchmark/indobert-lite-base-p1}}, an ALBERT~\cite{lan2020albert} base model trained on Indonesian data. We use the AdamW optimizer~\cite{loshchilov2017decoupled} with learning rates of 1e-3, 1e-4, and 1e-5. The batch size is set to 16, following the recommended hyperparameter settings~\cite{wilie2020indonlu,devlin-etal-2019-bert}, for all datasets and methods to ensure fairness in the experiment. We use early stopping based on 3 consecutive non-improving validation losses. \subsection{Evaluation Metrics} Other than the model quality, we also observe the training time, the size of the model, and its loading and prediction time. \noindent {\textbf{F1 Score:}} F1-macro is used as our evaluation metric to average over classes. This metric is used for both binary and multi-label classification, as well as for sequence labeling. \noindent {\textbf{Training Resources:}} When training the neural models, we use one GPU and track the total training time. Because hyperparameter tuning is costly, we run the training once using the recommended settings. We also track every saved file that represents the model to calculate the total model size. In the case of a company whose business provides a Platform-as-a-Service (PaaS), this size corresponds to the amount of storage and memory needed to load the model, and it scales with the number of users/clients. \noindent{\textbf{Load Time:}} Loading the model into memory is a prerequisite before it can be used. In the case where we have hundreds of models on a machine with limited resources, it is impossible to always host all models, especially if the models are not used often. Periodically, a model would be removed from memory and rebuilt when it is needed, so loading time is information that needs to be taken into account. \noindent{\textbf{Prediction Time:}} We compare the prediction time between using one CPU and one GPU. For each dataset, we take 100 stratified random samples based on token length. We run prediction one-by-one on the 100 samples and then sum the prediction times. To improve the accuracy of our experiment, we rerun the prediction using pytest-benchmark,\footnote{\url{https://pypi.org/project/pytest-benchmark}} which automatically minimizes outliers, for 100 rounds. For the classic algorithms, we only run on the CPU. \section{Loading and Prediction Time} Tables~\ref{model-load} and~\ref{model-infer} give a detailed comparison of model loading time and prediction time for each method and dataset on CPU and GPU.
\begin{table}[ht] \vspace{-20px} \caption{\label{model-load} Load time in seconds for CPU and GPU } \centering \begin{tabular}{lrrrrrrrr} \hline \textbf{Method} & \textbf{CPU} & \textbf{GPU} & \textbf{CPU} & \textbf{GPU} & \textbf{CPU} & \textbf{GPU} & \textbf{CPU} & \textbf{GPU} \\ \hline {} & \multicolumn{2}{c}{\textbf{Smalltalk}} & \multicolumn{2}{c}{\textbf{Healthcare}} & \multicolumn{2}{c}{\textbf{Telco}} & \multicolumn{2}{c}{\textbf{Sentiment}} \\ \hline {\texttt{LR}} & {0.015} & {--} & {0.034} & {--} & {0.009} & {--} & {0.044} & {--} \\ {\texttt{SVM}} & {0.017} & {--} & {0.035} & {--} & {0.011} & {--} & {0.044} & {--} \\ {\texttt{Bi-LSTM}} & {1.022} & {1.169} & {7.363} & {7.338} & {1.245} & {1.243} & {6.220} & {6.425} \\ {\texttt{CNN}} & {1.510} & {1.368} & {8.029} & {8.020} & {1.639} & {1.657} & {7.651} & {7.408} \\ {\texttt{mBERT}} & {9.195} & {9.121} & {21.371} & {21.031} & {13.354} & {12.574} & \multicolumn{2}{c}{NA} \\ {\texttt{IndoNLU}} & {7.607} & {7.643} & {21.34} & {21.23} & {8.847} & {8.829} & {14.29} & {14.27} \\ \hline {} & \multicolumn{2}{c}{\textbf{EntK}} & \multicolumn{2}{c}{\textbf{POS}} & \multicolumn{2}{c}{\textbf{TermA}} & \multicolumn{2}{c}{\textbf{Prod}} \\ \hline {\texttt{CRF}} & {0.014} & {--} & {0.013} & {--} & {0.007} & {--} & {0.030} & {--} \\ {\texttt{Bi-LSTM}} & {0.409} & {0.416} & {0.403} & {0.407} & {0.131} & {0.123} & {0.100} & {0.099} \\ {\texttt{CNN}} & {1.531} & {1.433} & {1.931} & {1.868} & {0.478} & {0.456} & {0.199} & {0.167} \\ {\texttt{mBERT}} & {9.175} & {8.253} & {9.117} & {8.213} & {8.942} & {8.656} & {9.013} & {8.214} \\ {\texttt{IndoNLU}} & {6.420} & {6.265} & {6.311} & {6.197} & {6.360} & {6.251} & {6.346} & {6.229} \\ \hline \end{tabular} \vspace{-20px} \end{table} \begin{table}[ht] \vspace{-20px} \caption{\label{model-infer} Prediction time in seconds for CPU and GPU } \centering \begin{tabular}{lrrrrrrrr} \hline \textbf{Method} & \textbf{CPU} & \textbf{GPU} & \textbf{CPU} & \textbf{GPU} & \textbf{CPU} & \textbf{GPU} & \textbf{CPU} & \textbf{GPU} \\ \hline {} & \multicolumn{2}{c}{\textbf{Smalltalk}} & \multicolumn{2}{c}{\textbf{Healthcare}} & \multicolumn{2}{c}{\textbf{Telco}} & \multicolumn{2}{c}{\textbf{Sentiment}} \\ \hline {\texttt{LR}} & {0.369} & {--} & {0.395} & {--} & {1.344} & {--} & {0.214} & {--} \\ {\texttt{SVM}} & {0.101} & {--} & {0.179} & {--} & {0.121} & {--} & {0.212} & {--} \\ {\texttt{Bi-LSTM}} & {0.099} & {0.101} & {0.164} & {0.122} & {0.126} & {0.112} & {0.276} & {0.155} \\ {\texttt{CNN}} & {0.120} & {0.107} & {0.159} & {0.128} & {0.148} & {0.125} & {0.187} & {0.188} \\ {\texttt{mBERT}} & {4.470} & {0.901} & {6.442} & {1.071} & {5.400} & {1.045} & \multicolumn{2}{c}{NA} \\ {\texttt{IndoNLU}} & {3.425} & {0.963} & {5.680} & {1.147} & {4.417} & {1.129} & {7.377} & {1.114} \\ \hline {} & \multicolumn{2}{c}{\textbf{EntK}} & \multicolumn{2}{c}{\textbf{POS}} & \multicolumn{2}{c}{\textbf{TermA}} & \multicolumn{2}{c}{\textbf{Prod}} \\ \hline {\texttt{CRF}} & {0.063} & {--} & {0.115} & {--} & {0.039} & {--} & {0.195} & {--} \\ {\texttt{Bi-LSTM}} & {0.604} & {0.444} & {0.916} & {0.494} & {0.671} & {0.443} & {0.670} & {0.439} \\ {\texttt{CNN}} & {0.539} & {0.373} & {0.958} & {0.666} & {0.612} & {0.456} & {0.520} & {0.422} \\ {\texttt{mBERT}} & {109.581} & {134.869} & {112.975} & {134.451} & {111.513} & {134.653} & {110.023} & {140.688} \\ {\texttt{IndoNLU}} & {14.532} & {12.146} & {18.508} & {12.649} & {15.843} & {10.668} & {13.920} & {9.474} \\ \hline \end{tabular} \vspace{-20px} \end{table} \subsection{Classical ML} Logistic 
regression (LR) and support vector machines (SVM) are often used to train models for text classification. Before the advent of deep learning, SVM models often topped the charts in text classification tasks~\cite{manevitz2001one}. These two methods are also popular and within the repertoire of typical Indonesian data scientists and engineers. The Conditional Random Field (CRF) is a common technique for sequence labeling. Even in deep neural methods, CRF is often used as the last layer to improve performance~\cite{ma-hovy-2016-end,CHEN2017221}. In this work, we only use basic features, such as orthography, prefix, suffix, bigram, and trigram, without external knowledge. \subsection{Bi-LSTM} \label{sec:bi-lstm} Bidirectional LSTMs (Bi-LSTMs) are able to capture information from long sequences in both the forward and backward directions. They show good results~\cite{huang2015bidirectional,CHEN2017221} and until recently were the go-to state-of-the-art technique for sequential data like text. We stack two Bi-LSTM layers to capture information at both the character and word level, as shown in Figure~\ref{fig:char-word-bilstm}. The result from the character level is concatenated with the word embedding before being passed into another Bi-LSTM layer. For text classification, we concatenate the output from the forward and the backward word-level layers before passing it to a dense layer to get the result. \begin{figure}[htp] \centering \includegraphics[width=200pt]{bilstm_bilstm.eps} \caption{Character-word level Bi-LSTM. 'Aku Budi' means 'I am Budi'.} \label{fig:char-word-bilstm} \vspace{-10px} \end{figure} \subsection{Convolutional Neural Network} Although it has roots in computer vision, the Convolutional Neural Network (CNN) has been shown to perform well for text classification~\cite{kim-2014-convolutional,johnson-zhang-2015-effective} via 1-dimensional convolutions that capture sequences of words. We use Kim's~\cite{kim-2014-convolutional} architecture for text classification with some adjustments. Instead of word2vec~\cite{41224}, we use FastText~\cite{joulin2016bag} to handle out-of-vocabulary (OOV) words. For sequence labeling, we change the previous character-level representation (Subsection~\ref{sec:bi-lstm}, Figure~\ref{fig:char-word-bilstm}) to a CNN before concatenating it to the word-level embedding and feeding it into the Bi-LSTM layer. \subsection{Transformers} The Transformer has become the latest state-of-the-art method in many NLP tasks, as it has been shown to outperform other neural models like RNNs or LSTMs~\cite{2004.03705}. With sufficient computing power, it can run faster (relative to RNNs) because of its ability to run in parallel~\cite{vaswani2017attention}. The Transformer also gave rise to pre-trained language models such as BERT~\cite{devlin-etal-2019-bert} and ALBERT~\cite{lan2020albert}, the latter being a lighter version of BERT with lower memory consumption. Here, we use inductive transfer learning to extract knowledge from existing language models and fine-tune it on the downstream tasks. This method has shown the biggest improvements and is widely used in many NLP applications~\cite{ruder2019transfer}.
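To make the prediction-time measurement concrete, the following is a simplified Python sketch of the one-by-one CPU timing loop; the actual experiments use pytest-benchmark over 100 rounds, and the repeated sample sentence here is a stand-in for the stratified samples.
\begin{verbatim}
# Simplified sketch of one-by-one prediction timing on CPU.
# The real setup repeats this with pytest-benchmark for 100 rounds.
import time
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "indobenchmark/indobert-lite-base-p1"
tokenizer = AutoTokenizer.from_pretrained(name)
# Note: the classification head is freshly initialized here; a
# fine-tuned checkpoint would be used in the actual benchmark.
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

samples = ["contoh kalimat untuk prediksi"] * 100  # placeholder samples
start = time.perf_counter()
with torch.no_grad():
    for text in samples:
        inputs = tokenizer(text, return_tensors="pt")
        _ = model(**inputs).logits.argmax(dim=-1)
print("total prediction time (s):", time.perf_counter() - start)
\end{verbatim}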
{ "timestamp": "2021-04-16T02:06:33", "yymm": "2012", "arxiv_id": "2012.08958", "language": "en", "url": "https://arxiv.org/abs/2012.08958" }
\section*{Introduction} An extended pseudo-metric space, here called an ep-metric space, is a set $X$ together with a function $d:X \times X \to [0,\infty]$ such that the following conditions hold: \begin{itemize} \item[1)] $d(x,x)=0$, \item[2)] $d(x,y) = d(y,x)$, \item[3)] $d(x,z) \leq d(x,y) + d(y,z)$. \end{itemize} There is no condition that $d(x,y)=0$ implies $x$ and $y$ coincide --- this is where the adjective ``pseudo'' comes from, and the gadget is ``extended'' because we are allowing an infinite distance. A metric space is an ep-metric space for which $d(x,y)=0$ implies $x=y$, and all distances $d(x,y)$ are finite. The traditional objects of study in topological data analysis are finite metric spaces $X$, and the most common analysis starts by creating a family of simplicial complexes $V_{s}(X)$, the Vietoris-Rips complexes for $X$, which are parameterized by a distance variable $s$. To construct the complex $V_{s}(X)$, it is harmless at the outset to list the elements of $X$, or give $X$ a total ordering --- one can always do this without damaging the homotopy type. Then $V_{s}(X)$ is a simplicial complex (and a simplicial set), with simplices given by strings \begin{equation*} x_{0} \leq x_{1} \leq \dots \leq x_{n} \end{equation*} of elements of $X$ such that $d(x_{i},x_{j}) \leq s$ for all $i,j$. If $s \leq t$ then there is an inclusion $V_{s}(X) \subset V_{t}(X)$, and varying the distance parameter $s$ gives a diagram (functor) $V_{\ast}(X): [0,\infty] \to s\mathbf{Set}$, taking values in simplicial sets. Following Spivak \cite{fuzzy-Spivak} (sort of), one can take an arbitrary diagram $Y: [0,\infty] \to s\mathbf{Set}$, and produce an ep-metric space $\operatorname{Re}(Y)$, called its realization. This realization functor has a right adjoint $S$, called the singular functor, which takes an ep-metric space $Z$ and produces a diagram $S(Z): [0,\infty] \to s\mathbf{Set}$ in simplicial sets.
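Spelling out the adjunction (this display is implicit in the text, and is not part of the original): for a diagram $Y: [0,\infty] \to s\mathbf{Set}$ and an ep-metric space $Z$, there is a natural bijection
\begin{equation*}
\hom(\operatorname{Re}(Y),Z) \cong \hom(Y,S(Z)),
\end{equation*}
where the left-hand side consists of morphisms of ep-metric spaces, and the right-hand side consists of natural transformations of diagrams.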
One needs good cocompleteness properties to construct the realization functor $\operatorname{Re}$. Ordinary metric spaces are not well behaved in this regard, but it is shown in the first section (Lemma \ref{lem 3}) that the category of ep-metric spaces has all of the colimits one could want. Then $\operatorname{Re}(Y)$ can be constructed as a colimit of finite metric spaces $U^{n}_{s}$, one for each simplex $\Delta^{n} \to Y_{s}$ of some section of $Y$. The metric space $U^{n}_{s}$ is the set $\{0,1, \dots ,n\}$, equipped with a metric $d$, where $d(i,j) = s$ for $i \ne j$. A morphism $U^{n}_{s} \to Z$ of ep-metric spaces is a list $(x_{0},x_{1}, \dots ,x_{n})$ of elements of $Z$ such that $d(x_{i},x_{j}) \leq s$ for all $i,j$. Such lists have nothing to do with orderings on $Z$, and could have repeats. With a bit of categorical homotopy theory, one shows (Proposition \ref{prop 7}) that $\operatorname{Re}(Y)$ is the set of vertices of the simplicial set $Y_{\infty}$ (evaluation of $Y$ at $\infty$), equipped with a metric that is imposed by the proof of Lemma \ref{lem 3}. One wants to know about the homotopy properties of the unit map $\eta: Y \to S(\operatorname{Re}(Y))$, especially when $Y$ is an old friend such as the Vietoris-Rips system $V_{\ast}(X)$. But $\operatorname{Re}(V_{\ast}(X))$ is the original metric space $X$ (Example \ref{ex 13}), the object $S(X)$ is the diagram $[0,\infty] \to s\mathbf{Set}$ with $(S(X)_{t})_{n} = \hom(U^{n}_{t},X)$, and the unit $\eta: V_{t}(X) \to S_{t}(X)$ in simplicial sets takes an $n$-simplex $\sigma: \Delta^{n} \to V_{t}(X)$ to the list $(\sigma(0),\sigma(1), \dots ,\sigma(n))$ of its vertices. We show in Section 3 (Theorem \ref{th 16}, the main result of this paper) that the map $\eta: V_{t}(X) \to S_{t}(X)$ is a weak equivalence for all distance parameter values $t$. The proof proceeds in two main steps, and involves technical results from the theory of simplicial approximation. The steps are the following: \smallskip \noindent 1)\ We show (Lemma \ref{lem 14}) that the map $\eta$ induces a weak equivalence $\eta_{\ast}: BNV_{t}(X) \to BNS_{t}(X)$, where $\eta_{\ast}: NV_{t}(X) \to NS_{t}(X)$ is the induced comparison of posets of non-degenerate simplices. Here, $V_{t}(X)$ is a simplicial complex, so that $BNV_{t}(X)$ is a copy of the subdivision $\operatorname{sd}(V_{t}(X))$, and is therefore weakly equivalent to $V_{t}(X)$. \medskip \noindent 2)\ There is a canonical map $\pi: \operatorname{sd} S_{t}(X) \to BNS_{t}(X)$, and the second step in the proof of Theorem \ref{th 16} is to show (Lemma \ref{lem 15}) that this map $\pi$ is a weak equivalence. \smallskip \noindent It follows that the map $\eta$ induces a weak equivalence $\operatorname{sd}(V_{t}(X)) \to \operatorname{sd}(S_{t}(X))$, and Theorem \ref{th 16} is a consequence. \medskip The fact that the space $S_{t}(X)$ is weakly equivalent to $V_{t}(X)$ for each $t$ means that we have yet another system of spaces $S_{\ast}(X)$ that models persistent homotopy invariants for a data set $X$. One should bear in mind, however, that $S_{t}(X)$ is an infinite complex. To see this, observe that if $x_{0}$ and $x_{1}$ are distinct points in $X$ with $d(x_{0},x_{1}) \leq t$, then all of the lists \begin{equation*} (x_{0},x_{1},x_{0},x_{1}, \dots ,x_{0},x_{1}) \end{equation*} define non-degenerate simplices of $S_{t}(X)$. \tableofcontents \section{ep-metric spaces} An {\it extended pseudo-metric space} \cite{HMc-2018} (or an {\it uber metric space} \cite{fuzzy-Spivak}) is a set $Y$, together with a function $d:Y \times Y \to [0,\infty]$, such that the following conditions hold: \begin{itemize} \item[a)] $d(x,x)=0$, \item[b)] $d(x,y) = d(y,x)$, \item[c)] $d(x,z) \leq d(x,y) + d(y,z)$. \end{itemize} Following \cite{Scocc-thesis}, I use the term {\it ep-metric spaces} for these objects, which will be denoted by $(Y,d)$ in cases where clarity is required for the metric. \medskip Every metric space $(X,d)$ is an ep-metric space, by composing the distance function $d: X \times X \to [0,\infty)$ with the inclusion $[0,\infty) \subset [0,\infty]$. \medskip A morphism between ep-metric spaces $(X,d_{X})$ and $(Y,d_{Y})$ is a function $f: X \to Y$ such that \begin{equation*} d_{Y}(f(x),f(y)) \leq d_{X}(x,y). \end{equation*} These morphisms are sometimes said to be non-expanding \cite{HMc-2018}. I shall use the notation $ep-\mathbf{Met}$ to denote the category of ep-metric spaces and their morphisms. \medskip \begin{example}[Quotient ep-metric spaces]\label{ex 1} Suppose that $(X,d)$ is an ep-metric space and that $p: X \to Y$ is a surjective function.
For $x,y \in Y$, set \begin{equation}\label{eq 1} D(x,y) = \inf_{P}\ \sum_{i}\ d(x_{i},y_{i}), \end{equation} where $P$ ranges over sequences of pairs of points $(x_{i},y_{i})$ of $X$, $0 \leq i \leq k$, such that $p(x_{0})=x$, $p(y_{k})=y$, and $p(y_{i})=p(x_{i+1})$ for $i < k$. Certainly $D(x,x) = 0$ and $D(x,y) = D(y,x)$. One thinks of each $P$ in the definition of $D(x,y)$ as a ``polygonal path'' from $x$ to $y$. Polygonal paths concatenate, so that $D(x,z) \leq D(x,y) + D(y,z)$, and $D$ gives the set $Y$ an ep-metric space structure. This is the {\it quotient} ep-metric space structure on $Y$. If $x,y$ are elements of $X$, the pair $(x,y)$ is a polygonal path from $p(x)$ to $p(y)$, so that $D(p(x),p(y)) \leq d(x,y)$. It follows that the function $p$ defines a morphism $p: (X,d) \to (Y,D)$ of ep-metric spaces. \end{example} \begin{example}[Dividing by zero]\label{ex 2} Suppose that $(X,d)$ is an ep-metric space. There is an equivalence relation on $X$, with $x \sim y$ if and only if $d(x,y) =0$. Write $p: X \to X/\sim\ =: Y$ for the corresponding quotient map. Given a polygonal path $P = \{ (x_{i},y_{i}) \}$ from $x$ to $y$ in $X$ as above, $d(y_{i},x_{i+1}) = 0$, so the sum corresponding to $P$ in (\ref{eq 1}) can be rewritten as \begin{equation*} d(x,y_{0}) + d(y_{0},x_{1}) + d(x_{1},y_{1}) + \dots + d(x_{k},y). \end{equation*} It follows from the triangle inequality that $d(x,y) \leq D(p(x),p(y))$, whereas $D(p(x),p(y)) \leq d(x,y)$ by construction. Thus, if $D(p(x),p(y)) = 0$, then $d(x,y)=0$ so that $p(x)=p(y)$. \end{example} \begin{lemma}\label{lem 3} The category $ep-\mathbf{Met}$ of ep-metric spaces is cocomplete. \end{lemma} \begin{proof} The empty set is the initial object for this category. Suppose that $(X_{i},d_{i}), i \in I$, is a list of ep-metric spaces. Form the set theoretic disjoint union $X = \sqcup_{i}\ X_{i}$, and define a function \begin{equation*} d: X \times X \to [0,\infty] \end{equation*} by setting $d(x,y) = d_{i}(x,y)$ if $x,y$ belong to the same summand $X_{i}$, and $d(x,y)= \infty$ otherwise. Any collection of morphisms $f_{i}: X_{i} \to Y$ in $ep-\mathbf{Met}$ defines a unique function $f=(f_{i}): X \to Y$, and this function is a morphism of $ep-\mathbf{Met}$ since \begin{equation*} d(f(x),f(y)) = d(f_{i}(x),f_{j}(y)) \leq \infty = d(x,y) \end{equation*} if $x \in X_{i}$ and $y \in X_{j}$ with $i \ne j$. Suppose given a pair of morphisms \begin{equation*} \xymatrix{ A \ar@<1ex>[r]^{f} \ar@<-1ex>[r]_{g} & X } \end{equation*} in $ep-\mathbf{Met}$, and form the set theoretic coequalizer $p: X \to C$. The function $p$ is the canonical map onto the set of equivalence classes of $X$ that are defined by the relations $f(a) \sim g(a)$ for $a \in A$. We give $C$ the quotient ep-metric space structure, as in Example \ref{ex 1}. Suppose that $\alpha: (X,d_{X}) \to (Z,d_{Z})$ is a morphism of ep-metric spaces such that $\alpha \cdot f = \alpha \cdot g$. Write $\alpha_{\ast}: C \to Z$ for the unique function such that $\alpha_{\ast} \cdot p = \alpha$. Suppose given a polygonal path $P = \{ (x_{i},y_{i}) \}$ from $x$ to $y$ in $X$. Then $\alpha(y_{i}) = \alpha(x_{i+1})$, so that \begin{equation*} d_{Z}(\alpha(x),\alpha(y)) \leq \sum_{i}\ d_{Z}(\alpha(x_{i}),\alpha(y_{i})) \leq \sum_{i}\ d_{X}(x_{i},y_{i}). \end{equation*} This is true for every polygonal path from $x$ to $y$ in $X$, so that \begin{equation*} d_{Z}(\alpha_{\ast}p(x),\alpha_{\ast}p(y)) \leq d_{C}(p(x),p(y)). \end{equation*} It follows that $\alpha_{\ast}: (C,d_{C}) \to (Z,d_{Z})$ is a morphism of ep-metric spaces.
\end{proof} \begin{example}[``Bad'' filtered colimit]\label{ex 4} If one starts with a diagram of metric spaces, the colimit $C$ that is produced by Lemma \ref{lem 3} is an ep-metric space, and it may be that $d(x,y) = 0$ in the coequalizer $C$ for some elements $x,y$ with $x \ne y$. In particular, suppose that $X_{s} = \{ (\frac{1}{s\sqrt 2},0),(0,\frac{1}{s\sqrt 2})\} \subset \mathbb{R}^{2}$ for $0 < s < \infty$. Write $p_{s} = (\frac{1}{s\sqrt 2},0)$ and $q_{s} = (0,\frac{1}{s\sqrt 2})$ in $X_{s}$. Then $d(p_{s},q_{s}) = \frac{1}{s}$. For $s \leq t$ there is an ep-metric space map $X_{s} \to X_{t}$ which is defined by $p_{s} \mapsto p_{t}$ and $q_{s} \mapsto q_{t}$. The filtered colimit $\varinjlim_{s}\ X_{s}$ has two distinct points, namely $p_{\infty}$ and $q_{\infty}$, and $d(p_{\infty},q_{\infty}) \leq d(p_{s},q_{s}) = \frac{1}{s}$ for all $s >0$. It follows that $d(p_{\infty},q_{\infty}) = 0$, whereas $p_{\infty} \ne q_{\infty}$. \end{example} \begin{lemma}\label{lem 5} Suppose that $X$ is an ep-metric space. Then there is an isomorphism of ep-metric spaces \begin{equation*} \psi: \varinjlim_{F}\ F \xrightarrow{\cong} X, \end{equation*} where $F$ varies over the finite subsets of $X$, with their induced ep-metric space structures. \end{lemma} \begin{proof} The collection of finite subsets of $X$ is filtered, and the set $X$ is a filtered colimit of its finite subsets, so the function underlying the ep-metric space map $\psi$ is a bijection. Write $d_{\infty}$ for the metric on the filtered colimit. If $x,y \in X$ and $d(x,y) = s \leq \infty$ in $X$, then there is a finite subset $F$ with $x,y \in F$ such that $d(x,y) = s$ in $F$. The list $(x,y)$ is a polygonal path from $x$ to $y$ in $F$, so that $d_{\infty}(x,y) \leq d(x,y)$. On the other hand, $d(x,y) \leq d_{\infty}(x,y)$ since $\psi$ is a morphism of ep-metric spaces. It follows that $d(x,y) = d_{\infty}(x,y)$, and so $\psi$ is an isomorphism. \end{proof} An ep-metric space $(X,d)$ has an associated system of posets $P_{\ast}(X)$, where $P_{s}(X)$ is the collection of finite subsets $F$ of $X$ such that $d(x,y) \leq s$ for any two members $x,y$ of $F$. This construction defines a system of abstract simplicial complexes $V_{\ast}(X)$, which can be constructed entirely within simplicial sets when $X$ has a total ordering. In that case, the $n$-simplices of the simplicial set $V_{s}(X)$ are the strings $x_{0} \leq x_{1} \leq \dots \leq x_{n}$ such that $d(x_{i},x_{j}) \leq s$. The diagram $V_{\ast}(X): [0,\infty] \to s\mathbf{Set}$ is the Vietoris-Rips system. The spaces $V_{s}(X)$ are independent of the ordering on $X$ up to weak equivalence, because there is a canonical weak equivalence (a ``last vertex map'') $\gamma: BP_{s}(X) \to V_{s}(X)$ of systems, while the spaces $BP_{s}(X)$ are defined independently of the ordering. In classical terms, the nerve $BP_{s}(X)$ of the poset $P_{s}(X)$ (non-degenerate simplices of the Vietoris-Rips complex $V_{s}(X)$) is the barycentric subdivision of $V_{s}(X)$. A small worked example of the Vietoris-Rips filtration is given below.
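To illustrate the Vietoris-Rips filtration (this small computation is ours, and is not part of the original text): let $X = \{a,b,c\}$ with $d(a,b)=1$, $d(b,c)=2$ and $d(a,c)=3$. Then
\begin{equation*}
V_{s}(X) =
\begin{cases}
\{a\} \sqcup \{b\} \sqcup \{c\} & \text{if $0 \leq s < 1$,} \\
\{a,b\} \sqcup \{c\} & \text{if $1 \leq s < 2$,} \\
\{a,b\} \cup \{b,c\} & \text{if $2 \leq s < 3$,} \\
\Delta^{\{a,b,c\}} & \text{if $s \geq 3$,}
\end{cases}
\end{equation*}
where $\{a,b\}$ denotes the $1$-simplex spanned by $a$ and $b$, and $\Delta^{\{a,b,c\}}$ is the full $2$-simplex on the three vertices.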
\begin{example}[Excision for path components]\label{ex 6} Suppose that $X$ and $Y$ are finite subsets of an ep-metric space $Z$, with the induced ep-metric space structures. Consider the inclusions of finite ep-metric spaces \begin{equation*} \xymatrix{ X \cap Y \ar[r] \ar[d] & Y \ar[d] \\ X \ar[r] & X \cup Y } \end{equation*} inside $Z$. Write $X \cup_{m} Y$ for the corresponding pushout in the category of ep-metric spaces. The unique map \begin{equation*} X \cup_{m} Y \to X \cup Y \end{equation*} of ep-metric spaces is the identity on the underlying point set. Write $d_{m}$ for the metric on $X \cup_{m} Y$. Then $d_{m}(x,y)$ is the minimum of the sums \begin{equation}\label{eq 2} \sum\ d(x_{i},x_{i+1}), \end{equation} indexed over paths \begin{equation*} P:\ x = x_{0},x_{1}, \dots ,x_{n}=y, \end{equation*} such that for each $i$ the points $x_{i},x_{i+1}$ are either both in $X$ or both in $Y$. All sums in (\ref{eq 2}) are finite, and $d_{m}(x,y)$ is realized by a particular path $P$ since $X$ and $Y$ are finite. Note that $d(x,y) \leq d_{m}(x,y)$, by construction, and that $d(x,y) = d_{m}(x,y)$ if $x,y$ are both in either $X$ or $Y$. There are induced simplicial set maps \begin{equation*} V_{s}(X) \cup V_{s}(Y) \to V_{s}(X \cup_{m} Y) \to V_{s}(X \cup Y), \end{equation*} all of which are the identity on vertices. There is a $1$-simplex $\sigma = \{x,y\}$ of $V_{s}(X \cup_{m} Y)$ if and only if there is a path \begin{equation*} P:\ x=x_{0},x_{1}, \dots ,x_{n}=y \end{equation*} consisting of $1$-simplices in either $X$ or $Y$, such that \begin{equation*} \sum d(x_{i},x_{i+1}) \leq s. \end{equation*} Then all $d(x_{i},x_{i+1}) \leq s$, so that $x$ and $y$ are in the same path component of $V_{s}(X) \cup V_{s}(Y)$. It follows that there is an induced isomorphism \begin{equation}\label{eq 3} \pi_{0}(V_{s}(X) \cup V_{s}(Y)) \cong \pi_{0}V_{s}(X \cup_{m} Y). \end{equation} The isomorphisms (\ref{eq 3}) extend to isomorphisms \begin{equation}\label{eq 4} \pi_{0}(V_{s}(X) \cup V_{s}(Y)) \cong \pi_{0}V_{s}(X \cup_{m} Y) \end{equation} for arbitrary subsets $X$ and $Y$ of an ep-metric space $Z$, by an application of Lemma \ref{lem 5}. \end{example} \section{Metric space realizations} Write $U^{n}_{s}$ for the collection of axis points $x_{i}=\frac{s}{\sqrt 2}e_{i}$, $0 \leq i \leq n$, where \begin{equation*} e_{i} = (0, \dots ,\overset{i+1}{1}, \dots ,0) \in \mathbb{R}^{n+1}. \end{equation*} Observe that $d(x_{i},x_{j}) = s$ in $\mathbb{R}^{n+1}$ for $i \ne j$. Another way of looking at it: $U^{n}_{s}$ is the set $\mathbf{n} = \{0,1,\dots ,n\}$ with $d(i,j) = s$ for $i \ne j$. An ep-metric space morphism $f: U^{n}_{s} \to Y$ consists of points $f(x_{i})$, $0 \leq i \leq n$, such that $d_{Y}(f(x_{i}),f(x_{j})) \leq s$ for all $i,j$. \medskip Write $s\mathbf{Set}^{[0,\infty]}$ for the category of diagrams (functors) $X: [0,\infty] \to s\mathbf{Set}$ and their natural transformations, which take values in simplicial sets and are defined on the poset $[0,\infty]$. I usually write $s \mapsto X_{s}$ for such a diagram $X$. In particular, $X_{\infty}$ is the value that the diagram $X$ takes at the terminal object of $[0,\infty]$. Suppose that $K$ is a simplicial set. The representable diagram $L_{s}K$ satisfies the universal property \begin{equation*} \hom(L_{s}K,X) \cong \hom(K,X_{s}). \end{equation*} One shows that \begin{equation*} (L_{s}K)_{t} = \begin{cases} \emptyset & \text{if $t<s$,} \\ K & \text{if $t \geq s$.} \end{cases} \end{equation*} The set of maps $L_{s}\Delta^{n} \to X$ can be identified with the set of $n$-simplices of the simplicial set $X_{s}$. A morphism $L_{t}\Delta^{m} \to L_{s}\Delta^{n}$ consists of a relation $s \leq t$ and a simplicial map $\theta: \Delta^{m} \to \Delta^{n}$. In the presence of such a morphism, the function $\theta: \mathbf{m} \to \mathbf{n}$ defines an ep-metric space morphism $U^{m}_{t} \to U^{n}_{s}$, since $d(\theta(i),\theta(j)) \leq s \leq t = d(i,j)$ for $i \ne j$. \subsection{The realization functor} Suppose that $X: [0,\infty] \to s\mathbf{Set}$ is a diagram.
The category $\mathbf{\Delta}/X$ of simplices of $X$ has maps $L_{s}\Delta^{n} \to X$ as objects and commutative diagrams \begin{equation*} \xymatrix@C=10pt{ L_{t}\Delta^{m} \ar[rr]^{\theta} \ar[dr]_{\tau} && L_{s}\Delta^{n} \ar[dl]^{\sigma} \\ & X } \end{equation*} as morphisms. Equivalently, a simplex of $X$ is a simplicial set map $\Delta^{n} \to X_{s}$, and a morphism of simplices is a diagram \begin{equation}\label{eq 5} \xymatrix{ \Delta^{m} \ar[r]^{\theta} \ar[d]_{\tau} & \Delta^{n} \ar[d]^{\sigma} \\ X_{t} & X_{s} \ar[l] } \end{equation} Every simplex $\Delta^{n} \to X_{s}$ determines a simplex \begin{equation*} \Delta^{n} \to X_{s} \to X_{\infty}, \end{equation*} and we have a functor $r: \mathbf{\Delta}/X \to \mathbf{\Delta}/X_{\infty}$, where $\mathbf{\Delta}/X_{\infty}$ is the simplex category of the simplicial set $X_{\infty}$. There is an inclusion $i: \mathbf{\Delta}/X_{\infty} \to \mathbf{\Delta}/X$, and the composite $r \cdot i$ is the identity. The maps \begin{equation}\label{eq 6} \xymatrix{ \Delta^{n} \ar[r]^{1} \ar[d]_{r(\sigma)} & \Delta^{n} \ar[d]^{\sigma} \\ X_{\infty} & X_{s} \ar[l] } \end{equation} define a natural transformation $h: i\cdot r \to 1$. There is a functor \begin{equation}\label{eq 7} \mathbf{\Delta}/X \to s\mathbf{Set} \end{equation} which takes a morphism (\ref{eq 5}) to the map $\theta: \Delta^{m} \to \Delta^{n}$. The translation category $E_{X}$ for the functor (\ref{eq 7}) is a simplicial category that has objects consisting of pairs $(\sigma,x)$, where $\sigma: \Delta^{n} \to X_{s}$ and $x \in \Delta^{n}$ (of a fixed dimension). A morphism $(\tau,y) \to (\sigma,x)$ of $E_{X}$ is a morphism $\theta: \tau \to \sigma$ as in (\ref{eq 5}) such that $\theta(y) = x$. The path component simplicial set $\pi_{0}E_{X}$ of the category $E_{X}$ is isomorphic to the colimit \begin{equation*} \varinjlim_{L_{s}\Delta^{n} \to X}\ \Delta^{n}. \end{equation*} There is a corresponding translation category $E_{X_{\infty}}$ for the functor which takes the simplex $\Delta^{n} \to X_{\infty}$ to the simplicial set $\Delta^{n}$, and there is an induced functor $i_{\ast}: E_{X_{\infty}} \subset E_{X}$. The functor $r: \mathbf{\Delta}/X \to \mathbf{\Delta}/X_{\infty}$ induces a functor $r_{\ast}: E_{X} \to E_{X_{\infty}}$. The composite $r_{\ast} \cdot i_{\ast}$ is the identity on $E_{X_{\infty}}$, and the map (\ref{eq 6}) defines a natural transformation $i_{\ast} \cdot r_{\ast} \to 1$ of functors $E_{X_{\infty}} \to E_{X_{\infty}}$. The translation categories $E_{X}$ and $E_{X_{\infty}}$ are therefore homotopy equivalent, and thus have isomorphic simplicial sets of path components. It follows that there are isomorphisms of simplicial sets \begin{equation}\label{eq 8} X_{\infty} \xleftarrow{\cong} \varinjlim_{\Delta^{n} \to X_{\infty}}\ \Delta^{n} \xrightarrow{\cong} \varinjlim_{L_{s}\Delta^{n} \to X}\ \Delta^{n}. \end{equation} Suppose that $X: [0, \infty] \to s\mathbf{Set}$ is a diagram, and set \begin{equation*} \operatorname{Re}(X) = \varinjlim_{L_{s}\Delta^{n} \to X}\ U^{n}_{s} \end{equation*} in the category of ep-metric spaces. It follows from the identifications of (\ref{eq 8}) that $\operatorname{Re}(X)$ is the set of vertices of $X_{\infty}$, equipped with an ep-metric space structure. If $x$ and $y$ are two such vertices, and are the boundary of a $1$-simplex \begin{equation*} \Delta^{1} \to X_{s} \to X_{\infty}, \end{equation*} then $x$ and $y$ are in the image of a map $U^{1}_{s} \to \operatorname{Re}(X)$, so that $d(x,y) \leq s$.
If there is a sequence of $1$-simplices $\omega_{i}: \Delta^{1} \to X_{s_{i}}$ that define a polygonal path \begin{equation*} P: x=x_{0} \leftrightarrows x_{1} \leftrightarrows \dots \leftrightarrows x_{k} =y \end{equation*} of $1$-simplices in $X_{\infty}$, then $d(x,y) \leq \sum_{i}\ s_{i}$ by the triangle inequality. Formally, we set \begin{equation}\label{eq 9} d(x,y) = \inf_{P}\ \{ \sum_{i}\ s_{i}\}, \end{equation} provided such polygonal paths exist. Otherwise, we set $d(x,y) = \infty$. The resulting metric $d$ is the metric which is imposed on the set of vertices of $X_{\infty}$ by the requirement that \begin{equation*} \operatorname{Re}(X) = \varinjlim_{L_{s}\Delta^{n} \to X}\ U^{n}_{s} \end{equation*} in the category of ep-metric spaces --- see Lemma \ref{lem 3}. We have shown the following: \begin{proposition}\label{prop 7} Suppose that $X: [0,\infty] \to s\mathbf{Set}$ is a diagram. Then the ep-metric space $\operatorname{Re}(X)$ has underlying set given by the set of vertices of $X_{\infty}$, with metric defined within path components by (\ref{eq 9}). Elements $x$ and $y$ that are in distinct path components have $d(x,y) = \infty$. \end{proposition} \begin{example}[Realization of Vietoris-Rips systems]\label{ex 8} Suppose that $X$ is a finite ep-metric space, and that $X$ is totally ordered. The realization $\operatorname{Re}(V_{\ast}(X))$ has $X$ as its underlying set, and $V_{\infty}(X) = \Delta^{X}$ is a finite simplex, which is connected, so that there is a finite polygonal path in $X$ between any two points $x,y \in X$. We have a relation \begin{equation*} d(x,y) \leq \sum_{i}\ s_{i} \end{equation*} in $X$ for any polygonal path $P$ which is defined by $1$-simplices $\omega_{i} \in V_{s_{i}}(X)$. This means that $d(x,y)$ in $X$ coincides with the distance between $x$ and $y$ in the ep-metric space $\operatorname{Re}(V_{\ast}(X))$. It follows from Lemma \ref{lem 5} and the previous paragraphs that the identity on the set $X$ induces an isomorphism of ep-metric spaces \begin{equation*} \phi: \operatorname{Re}(V_{\ast}(X)) \xrightarrow{\cong} X. \end{equation*} \end{example} \begin{example}[Degree Rips systems]\label{ex 9} Continue with a finite totally ordered ep-metric space $X$ as in Example \ref{ex 8}, let $k$ be a positive integer, and consider the degree Rips system $L_{\ast,k}(X)$. We choose $k$ such that the system of complexes $L_{\ast,k}(X)$ is non-empty, i.e.\ such that $k \leq \vert X \vert$. Then $L_{t,k}(X) = V_{t}(X)$ for $t$ sufficiently large, and $X$ is the underlying set of $\operatorname{Re}(L_{\ast,k}(X))$. The maps \begin{equation*} \operatorname{Re}(L_{\ast,k}(X)) \to \operatorname{Re}(V_{\ast}(X)) \to X \end{equation*} are isomorphisms of metric spaces, by a cofinality argument. \end{example} Here is a special case: \begin{lemma} Suppose that $K$ is a simplicial complex and that $s > 0$. Then $\operatorname{Re}(L_{s}K) = \operatorname{Re}(L_{s}\operatorname{sk}_{1}(K))$, and $\operatorname{Re}(L_{s}K)$ is the set of vertices $K_{0}$ with a metric $d$ defined by \begin{equation*} d(x,y) = \begin{cases} \infty & \text{if $[x] \ne [y]$ in $\pi_{0}(K)$,} \\ \min_{P}\ s \cdot k & \text{if $[x]=[y]$,} \end{cases} \end{equation*} where $P$ varies through the polygonal paths \begin{equation*} P: x = x_{0} \leftrightarrows x_{1} \leftrightarrows \dots \leftrightarrows x_{k} =y \end{equation*} of $1$-simplices between $x$ and $y$. \end{lemma} \begin{proof} Write $\operatorname{Re}(K) = \operatorname{Re}(L_{s}K)$.
The simplicial set $K$ is a colimit of its simplices, and so there is an isomorphism \begin{equation*} \varinjlim_{\Delta^{n} \to K}\ L_{s}\Delta^{n} \xrightarrow{\cong} L_{s}K. \end{equation*} It follows that there is an isomorphism \begin{equation*} \varinjlim_{\Delta^{n} \to K}\ U^{n}_{s} \xrightarrow{\cong} \operatorname{Re}(L_{s}K). \end{equation*} Suppose that $n \geq 2$. Then $\partial\Delta^{n}$ and $\Delta^{n}$ have the same vertices, and any two vertices $x,y$ are on a common face $\Delta^{n-1} \subset \Delta^{n}$. It follows that $d(x,y)=s$ in $\operatorname{Re}(\partial\Delta^{n})$ and $\operatorname{Re}(\Delta^{n})$, and the induced map \begin{equation*} \operatorname{Re}(\partial\Delta^{n}) \to \operatorname{Re}(\Delta^{n}) \end{equation*} is an isomorphism for $n \geq 2$. The displayed metric $d$ on the vertices of $K$ defines an ep-metric space $\operatorname{Re}(K)$, with maps $\sigma_{\ast}: U^{n}_{s} \to \operatorname{Re}(K)$ for all simplices $\sigma: \Delta^{n} \to K$; these maps are natural with respect to the simplicial structure of $K$. Any compatible family of ep-metric space morphisms $f_{\sigma}: U^{n}_{s} \to Y$ determines a unique function $f: K_{0} \to Y$. Also, $d(f(x),f(y)) \leq s$ if $x,y$ are in a common simplex $\Delta^{1} \to K$. If $P$ is a polygonal path between $x$ and $y$ as above, then $d(f(x),f(y)) \leq k \cdot s$. This is true for all such polygonal paths, so $d(f(x),f(y)) \leq d(x,y)$. If $x$ and $y$ are in distinct components of $K$, then $d(f(x),f(y)) \leq d(x,y) = \infty$. It follows that $f$ is a morphism of ep-metric spaces, so that $\operatorname{Re}(K)$ has the required universal property. \end{proof} The following result says that the realization $\operatorname{Re}(X)$ of a diagram $X: [0,\infty] \to s\mathbf{Set}$ depends only on the associated diagram of graphs $\operatorname{sk}_{1}(X)$. \begin{lemma} Suppose that $X: [0,\infty] \to s\mathbf{Set}$ is a diagram. Then the inclusion $\operatorname{sk}_{1}X \subset X$ induces an isomorphism \begin{equation*} \operatorname{Re}(\operatorname{sk}_{1}X) \xrightarrow{\cong} \operatorname{Re}(X). \end{equation*} \end{lemma} \begin{proof} The diagram of $1$-skeleta $\operatorname{sk}_{1}X$ is a colimit \begin{equation*} \varinjlim_{L_{s}\Delta^{n} \to X}\ L_{s}\operatorname{sk}_{1}\Delta^{n}, \end{equation*} since the functor $\operatorname{sk}_{1}$ preserves colimits. There are commutative diagrams \begin{equation*} \xymatrix{ \operatorname{Re}(L_{s}\operatorname{sk}_{1}\Delta^{n}) \ar[r] \ar[d]_{\cong} & \operatorname{Re}(\operatorname{sk}_{1}X) \ar[d] \\ \operatorname{Re}(L_{s}\Delta^{n}) \ar[r] & \operatorname{Re}(X) } \end{equation*} that are natural in the simplices of $X$, and it follows that the induced map $\operatorname{Re}(\operatorname{sk}_{1}X) \to \operatorname{Re}(X)$ is an isomorphism, as required. \end{proof} \subsection{Partial realizations} Suppose again that $X: [0,\infty] \to s\mathbf{Set}$ is a diagram in simplicial sets. We construct partial realizations by writing \begin{equation*} \operatorname{Re}(X)_{s} = \varinjlim_{L_{t}\Delta^{n} \to X,\ t\leq s}\ U^{n}_{t}. \end{equation*} This is the colimit of a functor taking values in ep-metric spaces, which is defined on the full subcategory $\mathbf{\Delta}/X_{\leq s}$ of $\mathbf{\Delta}/X$ having objects $L_{t}\Delta^{n} \to X$ with $t \leq s$.
A map $L_{t}\Delta^{n} \to X$ can be identified with a simplex $\Delta^{n} \to X_{t}$, and the relation $t \leq s$ defines a simplex $\Delta^{n} \to X_{t} \to X_{s}$, so that we have a functor \begin{equation*} r: \mathbf{\Delta}/X_{\leq s} \to \mathbf{\Delta}/X_{s}, \end{equation*} along with an inclusion $i: \mathbf{\Delta}/X_{s} \subset \mathbf{\Delta}/X_{\leq s}$. The composite $r \cdot i$ is the identity, and the composite $i \cdot r$ is homotopic to the identity, just as before. There is a functor $\mathbf{\Delta}/X_{\leq s} \to s\mathbf{Set}$ which takes a simplex $\Delta^{n} \to X_{t}$ to the simplicial set $\Delta^{n}$. By manipulating path components of homotopy colimits, one finds isomorphisms \begin{equation*} X_{s} \xleftarrow{\cong} \varinjlim_{\Delta^{n} \to X_{s}}\ \Delta^{n} \xrightarrow{\cong} \varinjlim_{L_{t}\Delta^{n} \to X,\ t\leq s}\ \Delta^{n} \end{equation*} that are analogous to the isomorphisms of (\ref{eq 8}). It follows, as in Proposition \ref{prop 7}, that the set underlying the ep-metric space $\operatorname{Re}(X)_{s}$ is the set of vertices of the simplicial set $X_{s}$. The metric $d$ on $(X_{s})_{0}$ is defined as before: $d(x,y) = \infty$ if $x$ and $y$ are not in the same path component of $X_{s}$. Otherwise, \begin{equation}\label{eq 10} d(x,y) = \inf_{P}\ \{\sum t_{i}\}, \end{equation} indexed over all polygonal paths \begin{equation*} P: x = x_{0} \leftrightarrows x_{1} \leftrightarrows \dots \leftrightarrows x_{k} =y \end{equation*} that are defined by $1$-simplices $\omega_{i}: \Delta^{1} \to X_{t_{i}}$ with $t_{i} \leq s$. We then have the following analogue of Proposition \ref{prop 7}: \begin{proposition}\label{prop 12} Suppose that $X: [0,\infty] \to s\mathbf{Set}$ is a functor. Then the ep-metric space $\operatorname{Re}(X)_{s}$ has underlying set given by the set of vertices of $X_{s}$, with metric defined within path components by (\ref{eq 10}). Elements $x,y$ in distinct path components have $d(x,y) = \infty$. \end{proposition} The map $X_{s} \to X_{\infty}$ defines a map $\operatorname{Re}(X)_{s} \to \operatorname{Re}(X)$, and $\operatorname{Re}(X)_{\infty} = \operatorname{Re}(X)$. There is an isomorphism of ep-metric spaces \begin{equation} \varinjlim_{s}\ \operatorname{Re}(X)_{s} \xrightarrow{\cong} \operatorname{Re}(X), \end{equation} since the element $\infty$ is terminal in $[0,\infty]$. \begin{example}[Partial metrics for Vietoris-Rips complexes]\label{ex 13} Suppose that $X$ is a finite totally ordered ep-metric space. Consider the associated functor $V_{\ast}(X): [0,\infty] \to s\mathbf{Set}$. The associated ep-metric space $\operatorname{Re}(X)_{s}$ has underlying set $X$. We have $d(x,y) = \infty$ if $x,y$ are in distinct path components of $V_{s}(X)$. Otherwise, \begin{equation*} d(x,y) = \inf_{P}\ \{ \sum\ d(x_{i},x_{i+1}) \}, \end{equation*} indexed over all polygonal paths \begin{equation*} P: x=x_{0},x_{1}, \dots ,x_{n}=y, \end{equation*} with $d(x_{i},x_{i+1}) \leq s$. If $d(x,y) = t \leq s$ in $X$ then $d(x,y)=t$ in $\operatorname{Re}(X)_{s}$. Otherwise, the distance between $x$ and $y$ in the same path component of $\operatorname{Re}(X)_{s}$ is more interesting --- it is achieved by a particular path $P$ since $X$ is finite, and $d(x,y)$ is a type of weighted path length. We see in Example \ref{ex 8} that there is an isomorphism of ep-metric spaces $\phi: \operatorname{Re}(V_{\ast}(X)) \xrightarrow{\cong} X$.
It follows that there is an ep-metric space map $\phi_{s}: \operatorname{Re}(X)_{s} \to X$ which is the identity on the underlying point set $X$, and compresses distances. \end{example} \section{The singular functor} The right adjoint $S$ of the realization functor $\operatorname{Re}$ is defined for an ep-metric space $Y$ by \begin{equation*} S(Y)_{s,n} = \hom(U_{s}^{n},Y), \end{equation*} where $\hom(U_{s}^{n},Y)$ is the collection of ep-metric space morphisms $U^{n}_{s} \to Y$. Equivalently, $S(Y)_{s,n}$ is the set of families of points $( x_{0}, x_{1}, \dots ,x_{n} )$ in $Y$ such that $d(x_{i},x_{j}) \leq s$. A simplex $( x_{0}, x_{1}, \dots ,x_{n} )$ is alternatively a function $\mathbf{n} \to Y$ (a ``bag of words''), with a distance restriction. There is no requirement that the elements $x_{i}$ are distinct. This simplex is non-degenerate if and only if $x_{i} \ne x_{i+1}$ for $0 \leq i \leq n-1$. \medskip Suppose that an ep-metric space $X$ is totally ordered, as in Example \ref{ex 8} above. Then, in view of the discussion of Example \ref{ex 8}, the canonical map $\eta: V_{\ast}(X) \to S\operatorname{Re}(V_{\ast}(X))$ consists of functions $\eta: V_{t}(X) \to S_{t}(X)$ which send simplices $\sigma: x_{0} \leq x_{1} \leq \dots \leq x_{n}$ with $d(x_{i},x_{j}) \leq t$ to the list of points $(x_{0},x_{1}, \dots ,x_{n})$. If $\sigma$ is non-degenerate, so that the vertices $x_{i}$ are distinct, then $\eta(\sigma)$ is a non-degenerate simplex of $S_{t}(X)$. \medskip The poset $NZ$ of non-degenerate simplices of a simplicial set $Z$ has $\sigma \leq \tau$ if there is a subcomplex inclusion $\langle \sigma \rangle \subset \langle \tau \rangle$, where $\langle \sigma \rangle$ is the subcomplex of $Z$ which is generated by the simplex $\sigma$. Equivalently, $\sigma \leq \tau$ if there is an ordinal number map $\theta$ such that $\theta^{\ast}(\tau) = \sigma$. The map $\eta$ induces a morphism $\eta_{\ast}: NV_{t}(X) \to NS_{t}(X)$ of posets of non-degenerate simplices. \begin{lemma}\label{lem 14} Suppose that $X$ is a totally ordered ep-metric space. Then the induced simplicial set map \begin{equation*} \eta_{\ast}: BNV_{t}(X) \to BNS_{t}(X) \end{equation*} of associated nerves is a weak equivalence. \end{lemma} \begin{proof} Given a non-degenerate simplex $\sigma \in S_{t}(X)$, write $L(\sigma)$ for its list of distinct elements. Suppose that $\langle \tau \rangle \subset \langle \sigma \rangle$, where $\tau$ and $\sigma$ are non-degenerate simplices of $S_{t}(X)$. Then $\tau = s \cdot d(\sigma)$ for an (iterated) face map $d$ and degeneracy $s$. Then \begin{equation*} L(\tau) = L(s \cdot d(\sigma)) = L(d(\sigma)) \subset L(\sigma). \end{equation*} It follows that the assignment $\sigma \mapsto L(\sigma)$ defines a poset morphism \begin{equation*} L: NS_{t}(X) \to NV_{t}(X). \end{equation*} The composite \begin{equation*} NV_{t}(X) \xrightarrow{\eta} NS_{t}(X) \xrightarrow{L} NV_{t}(X) \end{equation*} is the identity on $NV_{t}(X)$. Consider the composite poset morphism \begin{equation}\label{eq 12} NS_{t}(X) \xrightarrow{L} NV_{t}(X) \xrightarrow{\eta} NS_{t}(X). \end{equation} Given a non-degenerate simplex $\tau = (y_{0}, \dots ,y_{r})$ of $S_{t}(X)$, write $L(\tau) = (s_{0}, \dots ,s_{k})$ for the list of distinct elements of $\tau$, in the order specified by the total order for $X$.
Then the list \begin{equation*} V(\tau) = (y_{0}, \dots ,y_{r},s_{0}, \dots ,s_{k}) \end{equation*} is a simplex of $S_{t}(X)$, since each $s_{j}$ is some $y_{i_{j}}$, and there are relations \begin{equation*} \langle \tau \rangle \leq \langle V(\tau) \rangle \geq \langle L(\tau) \rangle \end{equation*} as subcomplexes of $S_{t}(X)$. The simplex $V(\tau)$ has the form $V(\tau) = s(V_{\ast}(\tau))$ for a unique iterated degeneracy $s$ and a unique non-degenerate simplex $V_{\ast}(\tau)$ (see Lemma \ref{lem 18}), and $\langle V(\tau) \rangle = \langle V_{\ast}(\tau) \rangle$. Suppose that $\gamma$ is non-degenerate in $S_{t}(X)$ and that $\gamma \in \langle \tau \rangle$. Then $\gamma = d(\tau)$ for some face map $d$, and $\gamma = (x_{0}, \dots ,x_{k})$ is a sublist of $\tau = ( y_{0}, \dots ,y_{r})$. The ordered list $L(\gamma)$ of distinct elements of $\gamma$ is a sublist of $L(\tau)$, and $V(\gamma)$ is a sublist of $V(\tau)$. There is a diagram of relations \begin{equation*} \xymatrix{ \langle \tau \rangle \ar[r] & \langle V_{\ast}(\tau) \rangle & \langle L(\tau) \rangle \ar[l] \\ \langle \gamma \rangle \ar[r] \ar[u] & \langle V_{\ast}(\gamma) \rangle \ar[u] & \langle L(\gamma) \rangle \ar[l] \ar[u] } \end{equation*} It follows that the composite (\ref{eq 12}) is homotopic to the identity on the poset $NS_{t}(X)$, and the lemma follows. \end{proof} The subdivision $\operatorname{sd}(Z)$ of a simplicial set $Z$ is defined by \begin{equation*} \operatorname{sd}(Z) = \varinjlim_{\Delta^{n} \to Z}\ BN\Delta^{n}. \end{equation*} The poset morphisms $N\Delta^{n} \to NZ$ that are induced by simplices $\Delta^{n} \to Z$ together induce a map \begin{equation*} \pi: \operatorname{sd}(Z) \to BNZ. \end{equation*} It is known \cite{J34} (and not difficult to prove) that the map $\pi$ is a bijection for simplicial sets $Z$ that are polyhedral. A polyhedral simplicial set is a subobject of the nerve of a poset. All oriented simplicial complexes are polyhedral in this sense. Examples include the Vietoris-Rips complexes $V_{s}(X)$ associated to a totally ordered ep-metric space $X$, since \begin{equation*} V_{s}(X) \subset V_{\infty}(X) = BX, \end{equation*} where $BX$ is the nerve of the totally ordered poset $X$. \begin{lemma}\label{lem 15} Suppose that $X$ is an ep-metric space. Then the map \begin{equation*} \pi: \operatorname{sd}(S_{t}(X)) \to BNS_{t}(X) \end{equation*} is a weak equivalence. \end{lemma} \begin{proof} We show that all subcomplexes $\langle \sigma \rangle$ which are generated by non-degenerate simplices $\sigma$ of $S_{t}(X)$ are contractible. Then Lemma 4.2 of \cite{J34} implies that the map $\pi$ is a weak equivalence. A non-degenerate simplex $\sigma$ has the form $\sigma = (x_{0},x_{1}, \dots ,x_{k})$ with $x_{i} \ne x_{i+1}$. The simplices $\tau$ of $\langle \sigma \rangle$ have the form \begin{equation*} \tau = \theta^{\ast}\sigma = (x_{\theta(0)}, \dots ,x_{\theta(m)}), \end{equation*} where $\theta: \mathbf{m} \to \mathbf{k}$ is an ordinal number morphism. For each such $\theta$, the list \begin{equation*} (x_{0},x_{\theta(0)}, \dots, x_{\theta(m)}) \end{equation*} defines a simplex $\tau_{\ast}$ of $\langle \sigma \rangle$, since $\tau_{\ast} = \psi^{\ast}(\sigma)$ for the ordinal number morphism $\psi$ defined by $\psi(0)=0$ and $\psi(i)=\theta(i-1)$ for $i \geq 1$.
\end{equation*} The simplices $\tau_{\ast}$ define functors \begin{equation*} \xymatrix{ x_{0} \ar[r] \ar[d] & x_{0} \ar[r] \ar[d] & \dots \ar[r] & x_{0} \ar[d] \\ x_{\theta(0)} \ar[r] & x_{\theta(1)} \ar[r] & \dots \ar[r] & x_{\theta(m)} } \end{equation*} or homotopies, which consist of simplices of $\langle \sigma \rangle$ that patch together to give a contracting homotopy $\langle \sigma \rangle \times \Delta^{1} \to \langle \sigma \rangle$. \end{proof} \begin{theorem}\label{th 16} Suppose that $X$ is a totally ordered ep-metric space. Then there is a diagram of weak equivalences \begin{equation*} \xymatrix{ BNV_{t}(X) \ar[d]_{\eta_{\ast}} & \operatorname{sd}(V_{t}(X)) \ar[l]_{\pi}^{\cong} \ar[r]^-{\gamma} \ar[d]^{\eta_{\ast}} & V_{t}(X) \ar[d]^{\eta} \\ BNS_{t}(X) & \operatorname{sd}(S_{t}(X)) \ar[l]^{\pi} \ar[r]_-{\gamma} & S_{t}(X) } \end{equation*} In particular, the map $\eta: V_{t}(X) \to S_{t}(X)$ is a weak equivalence. This diagram is natural in $t$. \end{theorem} \begin{proof} The map $\eta_{\ast}: BNV_{t}(X) \to BNS_{t}(X)$ is a weak equivalence by Lemma \ref{lem 14}. The map $\pi: \operatorname{sd}(S_{t}(X)) \to BNS_{t}(X)$ is a weak equivalence by Lemma \ref{lem 15}. The instances of the maps $\gamma$ are weak equivalences \cite{J34}. It follows that the maps $\eta_{\ast}: \operatorname{sd}(V_{t}(X)) \to \operatorname{sd}(S_{t}(X))$ and $\eta: V_{t}(X) \to S_{t}(X)$ are weak equivalences. \end{proof} \begin{remark} The total ordering on the ep-metric space $X$ in Theorem \ref{th 16} is intimately involved in the definition of the Vietoris-Rips system $V_{\ast}(X)$, the morphism \begin{equation*} \eta: V_{t}(X) \to S_{t}(\operatorname{Re}(V_{\ast}(X))) = S_{t}(X), \end{equation*} and all induced maps $\eta_{\ast}$. The unit $\eta: Z \to S(\operatorname{Re}(Z))$ is not a sectionwise weak equivalence in general. One can show that $S_{\infty}(\operatorname{Re}(Z))$ is the nerve of the trivial groupoid on the vertex set of $Z_{\infty}$ (Proposition \ref{prop 7}), and is therefore contractible, whereas the space $Z_{\infty}$ may not be contractible. For example, if $K$ is a simplicial set, then there is an identification $K = (L_{s}K)_{\infty}$. It may be that the map \begin{equation*} \eta: BP_{t}(X) \to S_{t}(\operatorname{Re}(BP_{\ast}(X))) \end{equation*} is a weak equivalence for arbitrary ep-metric spaces $X$, but this has not been proved. Such a result would give a non-oriented version of Theorem \ref{th 16}. \end{remark} The following is a classical result, which is included here for the sake of completeness. This result is usually neither expressed nor proved in the form displayed here. \begin{lemma}\label{lem 18} Suppose that $\sigma$ is an $n$-simplex of a simplicial set $X$. Then there is a unique iterated degeneracy $s$ and a unique non-degenerate simplex $x$ such that $\sigma = s(x)$. \end{lemma} An iterated degeneracy is a surjective ordinal number map $s: \mathbf{n} \to \mathbf{k}$. Such a map induces a function $s: X_{k} \to X_{n}$ for a simplicial set $X$. Lemma \ref{lem 18} says that $\sigma = s(x)$ for some iterated degeneracy $s$ and a non-degenerate simplex $x$, and that this representation is unique. \begin{proof}[Proof of Lemma \ref{lem 18}] Suppose that $\sigma = s(x) = s'(x')$ where $s,s'$ are iterated degeneracies and $x,x'$ are non-degenerate. Then $x = d(\sigma)$ for some face map $d$ such that $d \cdot s = 1$, and so $d(s'(x')) = s''(d''(x'))$ for some iterated degeneracy $s''$ and face map $d''$. But $x$ is non-degenerate, so that $s''=1$ and $x = d''(x')$.
Similarly, $x' = \tilde{d}(x)$ for some face map $\tilde{d}$. But then $x$ and $x'$ have the same dimension, so that $d''=1$ and $x=x'$. If $s \ne s'$, there is a face map $d$ such that $d \cdot s=1$ but $d \cdot s' \ne 1$. Then $\sigma = s(x) = s'(x)$ with $x$ non-degenerate, and \begin{equation*} d(\sigma) = x = d(s'(x)) = s''(d''(x)) \end{equation*} for some iterated degeneracy $s''$ and face map $d''$, at least one of which is non-trivial. But $x$ is non-degenerate, so that $s''=1$, and $x=d''(x)$ only if $d''=1$. But then $d \cdot s' = 1$, contradicting the choice of $d$, so that $s = s'$. \end{proof} \bibliographystyle{plain}
{ "timestamp": "2020-12-17T02:21:23", "yymm": "2012", "arxiv_id": "2012.09026", "language": "en", "url": "https://arxiv.org/abs/2012.09026" }
\section{Introduction} Analytical nuclear gradients are the foundation of the quantum chemical elucidation of complex reaction mechanisms via molecular dynamics simulations and minimum-energy and transition-state structure optimization. However, the routine calculation of \textit{ab initio} energies and forces with accurate wave function methods is prohibited by their steep computational cost, e.g., coupled-cluster singles, doubles, and perturbative triples [CCSD(T)] scales as\cite{scuseria_comparison_1990} $N^7$ and full configuration interaction scales as\cite{olsen_passing_1990} $N!$, where $N$ is a measure of system size. In recent years, machine learning has opened up a new way of mitigating the cost of quantum chemical calculations. \cite{bartok_gaussian_2010,rupp_fast_2012,lavecchia_machine-learning_2015,gawehn_deep_2016,raccuglia_machine-learning-assisted_2016,wei_neural_2016,smith_ani-1_2017,chmiela_machine_2017,kim_virtual_2017,ulissi_address_2017,segler_neural-symbolic_2017,schutt_schnet_2017,butler_machine_2018,lubbers_hierarchical_2018,popova_deep_2018,chmiela_towards_2018,smith_less_2018,s_smith_outsmarting_2018,christensen_operators_2019,unke_physnet_2019,profitt_shared-weight_2019,christensen_fchl_2020,zaverkin_gaussian_2020,park_accurate_nodate} A particular approach that has proven to be highly data efficient and transferable across chemical space is the molecular-orbital-based machine learning (MOB-ML) method \cite{welborn_transferability_2018, cheng_universal_2019, cheng_regression_2019}. MOB-ML relies on information from local molecular orbitals to predict a post-Hartree--Fock correlation energy as a pair-wise sum at drastically reduced cost. \cite{welborn_transferability_2018, cheng_universal_2019, cheng_regression_2019} The gradient theory for MOB-ML is comparable to that of non-canonical wave function-based correlation methods, due to factors that include orbital localization and the non-variational energy expression. There exists only a handful of local wave function-based correlation methods for which this effort has been performed. \cite{schutz_analytical_2004, pinski_analytical_2019} In this work, we establish a general Lagrangian framework to obtain the analytical nuclear gradients of the MOB-ML energy. The framework enforces orthogonality, localization, and Brillouin constraints on the molecular orbitals (section~\ref{sec:mobml_gradient}). A noteworthy aspect of this framework is that it is agnostic to the training data used for the MOB-ML model, thereby yielding accurate gradient predictions for wave function theory methods for which analytical gradients have not yet been derived or implemented. Furthermore, the computational cost of evaluating the MOB-ML energy gradient is comparable to that of a Hartree--Fock (HF) gradient or a hybrid density functional theory (DFT) gradient, such that it is orders of magnitude faster than evaluating the gradients of \textit{ab initio} wave function theories. We numerically validate the MOB-ML gradient theory by comparison to energy finite differences in section~\ref{sec:results}. Furthermore, we show that using only training data based on energy calculations (not gradients), MOB-ML efficiently and accurately yields gradients for diverse sets of molecules (section~\ref{sec:results}). Comparison of MOB-ML to other ML methods on the example of the ISO17 data set highlights the data efficiency and high transferability of MOB-ML for gradient predictions.
We show that MOB-ML optimized structures for molecules in the ISO17 set are systematically improved with respect to the reference HF method, and we compare the performance to that of a standard DFT functional. \section{MOB-ML Analytical Nuclear Gradients} \subsection{MOB-ML Energy Theory} \label{sec:energy} MOB-ML relies on molecular orbital information from an HF calculation to predict a wave function correlation energy. The working equation for the MOB-ML energy is \cite{welborn_transferability_2018, cheng_universal_2019, cheng_regression_2019} \begin{equation} \label{eq:energy} E_{\text{MOB-ML}} \sqbrak*{\mathbf{f}} = E_{\text{corr}} \sqbrak*{\mathbf{f}} + E_{\text{HF}} \text{,} \end{equation} where $E_{\text{HF}}$ is the HF energy, and $E_{\text{corr}} \sqbrak{\mathbf{f}}$ is the machine-learned correlation energy, \begin{equation} \begin{split} E_{\text{corr}} \sqbrak*{\mathbf{f}} & = \sum_{i} \epsilon_{ii} \sqbrak*{\mathbf{f}_{i}} + 2 \sum_{i > j} \epsilon_{ij} \sqbrak*{\mathbf{f}_{ij}} \text{.} \end{split} \end{equation} The matrix of feature vectors, $\mathbf{f}$, is divided into two sub-classes. The first sub-class is made up of the diagonal components of $\mathbf{f}$, $\mathbf{f}_{i}$, which represent the valence-occupied orbital $i$. The second sub-class is made up of the off-diagonal components of $\mathbf{f}$, $\mathbf{f}_{ij}$, which represent the interaction between the valence-occupied orbitals $i$ and $j$. Both diagonal and off-diagonal feature vectors are composed of elements from the HF Fock matrix in the MO basis, $\mathbf{F}$, and the MO repulsion integrals, $\boldsymbol\resizebox{!}{\CapLen}{$\kappa$}$, where \begin{equation} \begin{split} \sqbrak*{\boldsymbol\resizebox{!}{\CapLen}{$\kappa$}^{pq}}_{mn} &= ( pq | mn ) \\ & = \sum_{\mu \nu \kappa \sigma} C_{\mu p} C_{\nu q} C_{\kappa m} C_{\sigma n} (\mu \nu | \kappa \sigma) \text{.} \end{split} \end{equation} Here, $(\mu \nu | \kappa \sigma)$ are the four-center atomic orbital integrals with $\mu$, $\nu$, $\kappa$, and $\sigma$ representing atomic orbital indices. We restrict the MO indices of $\mathbf{F}$ and $\boldsymbol\resizebox{!}{\CapLen}{$\kappa$}$ to the valence-occupied and valence-virtual MOs and we only include 2-center Coulomb- and exchange-type MO integrals, $\sqbrak*{\boldsymbol\resizebox{!}{\CapLen}{$\kappa$}^{pp}}_{qq}$ and $\sqbrak*{\boldsymbol\resizebox{!}{\CapLen}{$\kappa$}^{pq}}_{pq}$, respectively. We evaluate the feature vectors following the protocol that is specified in Ref.~\citen{husch_enforcing_2020}. \subsection{MOB-ML Gradient Theory} \label{sec:mobml_gradient} \subsubsection{Lagrangian framework} \label{sec:mobml_lagrangian} MOB-ML is a non-variational theory for which the analytical nuclear gradient theory can be derived within a Lagrangian framework, \begin{equation} \begin{split} \frac{\text{d} E_{\text{MOB-ML}} }{\text{d} q} = \frac{\text{d} \mathcal{L}}{\text{d} q} &= \frac{ \partial \mathcal{L}}{ \partial q} + \frac{ \partial \mathcal{L}}{ \partial \mathbf{C}} \frac{\partial \mathbf{C}}{ \partial q} \\ & = \frac{ \partial \mathcal{L}}{ \partial q} \text{,} \end{split} \end{equation} where $q$ refers to a nuclear coordinate. The calculation of the nuclear response of the HF MOs, $\partial \mathbf{C} / \partial q$, is avoided because the Lagrangian $\mathcal{L}$ is minimized with respect to all of its variational parameters, which are the MO coefficients, $\mathbf{C}$.
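The logic of this construction can be made concrete with a small numerical toy model. The following Python sketch is our own illustration and is not part of the MOB-ML implementation: a scalar constraint $g(c,q)=0$ stands in for the HF equations and implicitly fixes a parameter $c(q)$, the energy $E(c,q)$ is non-variational in $c$, and a single multiplier $z$ chosen so that $\partial (E + z\,g)/\partial c = 0$ recovers the total derivative $\mathrm{d}E/\mathrm{d}q$ from explicit partial derivatives alone. All function choices below are arbitrary placeholders.
\begin{verbatim}
import numpy as np

# Toy stand-ins, purely illustrative:
# g(c, q) = 0 determines c(q), like the HF equations determine C(q);
# E(c, q) is non-variational in c, like the MOB-ML energy.
def g(c, q):      return c - np.sin(q)          # implies c(q) = sin(q)
def E(c, q):      return q * c**2 + np.cos(c)
def dE_dc(c, q):  return 2.0 * q * c - np.sin(c)
def dE_dq(c, q):  return c**2                   # explicit partial derivative
def dg_dc(c, q):  return 1.0
def dg_dq(c, q):  return -np.cos(q)

q = 0.7
c = np.sin(q)                                   # "solve" the constraint

# Choose z so that dL/dc = dE/dc + z * dg/dc = 0 (the Z-vector step).
z = -dE_dc(c, q) / dg_dc(c, q)

# Total derivative from the Lagrangian: dE/dq = dL/dq, partials only.
grad_lagrangian = dE_dq(c, q) + z * dg_dq(c, q)

# Check against central finite differences of E(c(q), q).
h = 1e-6
grad_fd = (E(np.sin(q + h), q + h) - E(np.sin(q - h), q - h)) / (2 * h)
print(abs(grad_lagrangian - grad_fd))           # agrees to ~1e-10
\end{verbatim}
The same pattern carries over to the full theory, with the single multiplier $z$ replaced by the family of multipliers introduced next.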
The MOB-ML energy Lagrangian is \begin{equation} \label{eq:lagrangian} \begin{split} \mathcal{L} & \sqbrak*{\mathbf{C},\mathbf{x},\mathbf{z},\mathbf{z}^{\text{core}},\mathbf{z}^{\text{val-occ}},\mathbf{z}^{\text{val-vir}}, \boldsymbol{\lambda} } = \\ & E_{\text{MOB-ML}} \sqbrak*{\mathbf{f}} + \sum_{pq} x_{pq} \paran*{ \mathbf{C}^{\dagger} \mathbf{S} \mathbf{C} - \mathbf{I} }_{pq} \\ & + \sum_{ai} z_{ai} F_{ai} \Big|_{i \in \text{occ}, a \in \text{vir}} + \sum_{ri} z_{ri}^{\text{core}} F_{ri} \\ & + \sum_{i>j} z^{\text{val-occ}}_{ij} r_{ij} + \sum_{ab} z^{\text{val-vir}}_{ab} r_{ab} \\ & + \sum_{wa} \lambda_{wa} P_{wa} \text{,} \end{split} \end{equation} where $\mathbf{x},\mathbf{z},\mathbf{z}^{\text{core}},\mathbf{z}^{\text{val-occ}},\mathbf{z}^{\text{val-vir}},\text{ and } \boldsymbol{\lambda}$ are the Lagrange multipliers. We refer to the core MOs with column indices $r, s$, to the valence-occupied localized MOs (LMOs) with column indices $i, j, k, l$, to the valence-virtual LMOs with column indices $a, b$, and to the non-valence-virtual MOs with column indices $w, x$. The indices $m, n, p, q$ are used to index generic molecular orbitals. The first term on the right hand side (RHS) of Eq.~\ref{eq:lagrangian} is the MOB-ML energy described by Eq.~\ref{eq:energy}. The second term on the RHS constrains the HF MOs, $\mathbf{C}$, to be orthonormal; its contribution to the gradient is commonly referred to as the Pulay force \cite{pulay_ab_1969}. The third term on the RHS enforces the Brillouin conditions, which account for the dependence of the correlation energy on the HF optimized molecular orbitals. The frozen-core conditions, $F_{ri}=0$, account for neglecting the correlation energy contributions from the core orbitals. The localization conditions, $r_{i j} = 0$ and $r_{a b} = 0$, account for how the valence-occupied and valence-virtual MOs are localized, respectively. In this work, we employ Foster--Boys localization \cite{foster_canonical_1960} and intrinsic bond orbital (IBO) localization \cite{knizia_intrinsic_2013}, but it is straightforward to generalize to other localization methods. The valence-virtual conditions, $P_{wa} = 0$, reflect how the valence-virtual MOs are obtained through a unitary transformation of the virtual MOs. This unitary transformation corresponds to the column space of a projection matrix formed by projecting the virtual MOs onto the IAOs. The complementary null space of this projection matrix corresponds to the non-valence-virtual orbitals. This projection matrix is defined as \begin{equation} \mathbf{P} = \mathbf{C}^{\text{IAO,}\dagger}_{\text{vir}} \mathbf{C}^{\text{IAO}}_{\text{vir}} \text{,} \end{equation} where \begin{equation} \label{eq:ciao_vir} \mathbf{C}^{\text{IAO}}_{\text{vir}} = \mathbf{X}_{\text{occ}}^{\text{IAO}, \dagger} \mathbf{S}_1 \mathbf{C}_{\text{vir}} \text{,} \end{equation} and where $\mathbf{C}_{\text{vir}}$ is the virtual MO coefficient matrix. The matrix $\mathbf{X}^{\text{IAO}}$ transforms between the original AO and IAO basis sets and is expanded in Appendix~\ref{appendix:IBO}.
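As an aside, the linear algebra behind this valence-virtual split can be sketched in a few lines of Python. The matrices below are randomly generated placeholders of our own choosing, used only to show the shapes and the column-space/null-space separation; in an actual calculation, $\mathbf{S}_1$, $\mathbf{C}_{\text{vir}}$, and $\mathbf{X}^{\text{IAO}}_{\text{occ}}$ come from the preceding HF and IAO steps.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_ao, n_vir, n_iao = 20, 12, 8  # placeholder dimensions (our own choice)

# Placeholder inputs; in practice these come from the HF/IAO machinery.
S1    = np.eye(n_ao)            # AO overlap (orthonormal AOs for simplicity)
C_vir = np.linalg.qr(rng.standard_normal((n_ao, n_vir)))[0]  # virtual MOs
X_occ = np.linalg.qr(rng.standard_normal((n_ao, n_iao)))[0]  # IAO block

# Eq. (ciao_vir): project the virtual MOs onto the IAO space.
C_iao_vir = X_occ.T @ S1 @ C_vir           # shape (n_iao, n_vir)

# Projection matrix whose column space spans the valence-virtual MOs.
P = C_iao_vir.T @ C_iao_vir                # shape (n_vir, n_vir)

# Eigenvectors with non-zero eigenvalue rotate C_vir into valence-virtual
# orbitals; the (near-)null space gives the non-valence-virtual orbitals.
w, U = np.linalg.eigh(P)
val_vir     = C_vir @ U[:, w > 1e-8]
non_val_vir = C_vir @ U[:, w <= 1e-8]
print(val_vir.shape, non_val_vir.shape)    # (20, 8) and (20, 4) here
\end{verbatim}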
All together, this yields the following analytical nuclear gradient, \begin{equation} \label{eq:mobml_grad} \begin{split} \frac{\text{d} E_{\text{MOB-ML}} }{\text{d} q} &= E_{\text{ML}}^{(q)} + E_{\text{HF}}^{(q)} + \sum_{pq} x_{pq} \paran*{ \mathbf{C}^{\dagger} \mathbf{S}^{(q)} \mathbf{C} }_{pq} \\ & + \sum_{ai} z_{ai} F_{ai}^{(q)} \Big|_{a \in \text{vir}, i \in \text{occ}} + \sum_{ri} z^{\text{core}}_{ri} F_{ri}^{(q)} \\ & + \sum_{i>j} z^{\text{val-occ}}_{ij} r_{ij}^{(q)} + \sum_{ab} z^{\text{val-vir}}_{ab} r_{ab}^{(q)} \\ & + \sum_{wa} \lambda_{wa} P_{wa}^{(q)} \text{,} \end{split} \end{equation} where the superscript $(q)$ denotes the explicit derivative of the quantity with respect to a nuclear coordinate. Eq.~\ref{eq:mobml_grad} is the general MOB-ML analytical nuclear gradient and we will now outline how to determine the Lagrange multipliers for our particular use case to arrive at a final working equation. \subsubsection{Minimizing with respect to MO coefficients} \label{subsubsec:minimizeL} All of the Lagrange multipliers ($\mathbf{x}$, $\mathbf{z}$, $\mathbf{z}^{\text{core}}$, $\mathbf{z}^{\text{val-occ}}$, $\mathbf{z}^{\text{val-vir}}$ and $\boldsymbol{\lambda}$) are determined by minimizing the MOB-ML Lagrangian with respect to its variational parameters, which are the MO coefficients, $\mathbf{C}$. Differentiating the Lagrangian with respect to these parameters yields \begin{equation} \label{eq:stationary_cond} \begin{split} \sum_{\mu} C_{\mu p} &\frac{\partial \mathcal{L}}{\partial C_{\mu q}} = E_{pq} + 2 x_{p q} + \paran*{ \mathbf{D} \sqbrak*{ \mathbf{z} } }_{p q} \\ & + \paran*{ \mathbf{D} \sqbrak*{ \mathbf{z}^{\text{core}} } }_{p q} + \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} }_{p q} \\ & + \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} }_{p q} + \paran*{ \mathbf{D} \sqbrak*{ \boldsymbol{\lambda} } }_{pq} = 0 \text{,} \\ \end{split} \end{equation} where \begin{equation} \begin{split} E_{pq} & = \sum_{\mu} C_{\mu p} \frac{\partial \paran*{ E_{\text{corr}} \sqbrak*{\mathbf{f}} + E_{\text{HF}} } }{\partial C_{\mu q}} \\ & = 4 F_{pq} \Big|_{q \in \text{occ}} + \paran*{ \mathbf{F} \bar{\mathbf{D}}^{\text{F}} } _{pq} \Big|_{q \in \text{loc}} \\ & + 2 \paran*{ \mathbf{g} \sqbrak*{\mathbf{C} \bar{\mathbf{D}}^{\text{F}} \mathbf{C}^\dagger }}_{pq} \Big|_{q \in \text{occ}} \\ & + 2 \sum_{m} \sqbrak*{ \boldsymbol{\resizebox{!}{\CapLen}{$\kappa$}}^{pq} }_{mm} \paran*{ D^{\text{J}}_{qm} + D^{\text{J}}_{mq}} \Big|_{mq \in \text{loc}} \\ & + 2 \sum_{m} \sqbrak*{ \boldsymbol{\resizebox{!}{\CapLen}{$\kappa$}}^{pm} }_{qm} \paran*{ D^{\text{K}}_{qm} + D^{\text{K}}_{mq} } \Big|_{mq \in \text{loc}} \\ & + \sum_{nm} R^n_{pm} \paran*{D^{\text{R},n}_{qm} + D^{\text{R},n}_{mq}} \Big|_{mq \in \text{val-occ}} \text{,} \\ \end{split} \end{equation} \begin{equation} \begin{split} ( \mathbf{D} &\sqbrak*{\mathbf{z}} )_{pq} = \\ & \quad \sum_{\mu} C_{\mu p} \Bigg( \sum_{a i} z_{ai} \frac{\partial F_{ai}}{\partial C_{\mu q}} \Bigg) \Big|_{i \in \text{occ}, a \in \text{vir}} \\ & = \paran*{ \mathbf{F} \mathbf{z}}_{p q} \Big|_{q \in \text{occ}} \\ & \quad + \paran*{ \mathbf{F} \mathbf{z}^{\dagger} }_{p q} \Big|_{q \in \text{vir} } + 2 \paran*{ \mathbf{g}[ \bar{\mathbf{z}} ] }_{p q} \Big{|}_{q \in \text{occ}} \text{,} \\ \end{split} \end{equation} \begin{equation} \label{eq:dzcore} \begin{split} & \paran*{ \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} }_{pq} = \sum_{\mu} C_{\mu p} \Bigg( \sum_{r k} z^{\text{core}}_{rk} \frac{\partial F_{rk}}{\partial C_{\mu q}} \Bigg) \\ & = 
\paran*{ \mathbf{F} \mathbf{z}^{\text{core}}}_{p q} \Big|_{q \in \text{val-occ}} \\ & \quad + \paran*{ \mathbf{F} \mathbf{z}^{\text{core},\dagger} }_{p q} \Big|_{q \in \text{core} } + 2 \paran*{ \mathbf{g}[ \bar{\mathbf{z}}^{\text{core}} ] }_{p q} \Big{|}_{q \in \text{occ}} \text{,} \end{split} \end{equation} \begin{equation} \begin{split} \label{eq:azloc} \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} }_{p q} & = \sum_{\mu} C_{\mu p} \Bigg( \sum_{i>j} z_{i j}^{\text{val-occ}} \frac{\partial r_{i j}}{\partial C_{\mu q}} \Bigg) \text{,} \\ \end{split} \end{equation} \begin{equation} \begin{split} \label{eq:azvalvir} \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} }_{p q} & = \sum_{\mu} C_{\mu p} \Bigg( \sum_{a>b} z_{a b}^{\text{val-vir}} \frac{\partial r_{a b}}{\partial C_{\mu q}} \Bigg) \text{,} \end{split} \end{equation} and \begin{equation} \label{eq:dlambda} \begin{split} ( \mathbf{D} &\sqbrak*{ \boldsymbol{\lambda} } )_{p q} = \sum_{\mu} C_{\mu p} \Bigg( \sum_{wa} \lambda_{wa} \frac{\partial P_{wa}}{\partial C_{\mu q}} \Bigg) \\ & = ( \mathbf{P} \boldsymbol{\lambda} )_{pq} \Big|_{q \in \text{non-val-vir}} \text{.} \end{split} \end{equation} Eqs.~\ref{eq:azloc} and \ref{eq:azvalvir} are expanded in Appendices \ref{appendix:Boys} and \ref{appendix:IBO}, respectively, $\mathbf{F}$ is the HF Fock matrix, $\mathbf{g}$ includes all of the usual HF two-electron terms, $\mathbf{R}^n$ is expanded in Appendix \ref{appendix:Boys}, the condition $q \in \text{loc}$ restricts the sum to valence-occupied and valence-virtual MOs, $\bar{\mathbf{z}} = \mathbf{z} + \mathbf{z}^{\dagger}$, $\bar{\mathbf{z}}^{\text{core}} = \mathbf{z}^{\text{core}} + \mathbf{z}^{\text{core,}\dagger}$, and $\bar{\mathbf{D}}^{\text{F}} = \mathbf{D}^{\text{F}} + \mathbf{D}^{\text{F}, \dagger}$. The matrices $\mathbf{D}^{\text{F}}$, $\mathbf{D}^{\text{J}}$, and $\mathbf{D}^{\text{K}}$ are calculated by \begin{equation} \label{eq:denM} \begin{split} D^{\text{M}}_{pq} &= \sum_{i} \frac{\partial \epsilon_{ii} \sqbrak*{\mathbf{f}_{i}} }{\partial \mathbf{f}_{i}} \frac{\partial \mathbf{f}_{i}}{\partial M_{pq}} \Big|_{pq \in \text{loc}} \\ & + 2 \sum_{i > j} \frac{\partial \epsilon_{ij} \sqbrak*{\mathbf{f}_{ij}} }{\partial \mathbf{f}_{ij}} \frac{\partial \mathbf{f}_{ij}}{\partial M_{pq}} \Big|_{pq \in \text{loc}} \text{,} \\ \end{split} \end{equation} where $M_{pq}$ refers to $F_{pq}$, $\sqbrak*{\boldsymbol{\resizebox{!}{\CapLen}{$\kappa$}}^{pp}}_{qq}$ and $\sqbrak*{\boldsymbol{\resizebox{!}{\CapLen}{$\kappa$}}^{pq}}_{pq}$, respectively. The matrix $\mathbf{D}^{\text{R},n}$ is \begin{equation} \label{eq:denR} \begin{split} D^{\text{R},n}_{pq} &= 2 \sum_{i > j} \frac{\partial \epsilon_{ij} \sqbrak*{\mathbf{f}_{ij}} }{\partial \mathbf{f}_{ij}} \frac{\partial \mathbf{f}_{ij}}{\partial R^n_{pq}} \Big|_{pq \in \text{val-occ}} \text{.} \\ \end{split} \end{equation} The partial derivatives $\frac{\partial \epsilon_{ii} \sqbrak*{\mathbf{f}_{i}} }{\partial \mathbf{f}_{i}}$ and $\frac{\partial \epsilon_{ij} \sqbrak*{\mathbf{f}_{ij}} }{\partial \mathbf{f}_{ij}}$ on the RHS of Eqs.~\ref{eq:denM} and \ref{eq:denR} are the derivatives of the machine learning prediction with respect to the feature vectors. We emphasize that any machine learning method (e.g., Gaussian process regression, regression clustering, a neural network, etc.)
can be readily used in this gradient framework without modification given $\frac{\partial \epsilon_{ii} \sqbrak*{\mathbf{f}_{i}} }{\partial \mathbf{f}_{i}}$ and $\frac{\partial \epsilon_{ij} \sqbrak*{\mathbf{f}_{ij}} }{\partial \mathbf{f}_{ij}}$. Furthermore, we note that the following analytical nuclear gradient derivation generalizes to any type of feature-vector design and construction, so long as the feature-vector elements are obtained from $\mathbf{F}$ and $\boldsymbol{\resizebox{!}{\CapLen}{$\kappa$}}$. The partial derivatives $\frac{\partial \mathbf{f}_{i}}{\partial M_{pq}}$, $\frac{\partial \mathbf{f}_{ij}}{\partial M_{pq}}$ and $\frac{\partial \mathbf{f}_{ij}}{\partial R^n_{pq}}$ are expanded in the supplementary material. We now proceed to solve for each of the Lagrange multipliers. First, combining the stationary conditions described by Eq.~\ref{eq:stationary_cond} with the auxiliary condition $\boldsymbol{x} = \boldsymbol{x}^{\dagger}$ yields the linear Z-vector equations \begin{equation} \label{eq:LinearZvec} \begin{split} \paran*{ 1 - \mathcal{P}_{p q} } &( \mathbf{E} + \mathbf{D} \sqbrak*{\mathbf{z}} + \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} \\ & + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{ \boldsymbol{\lambda} } )_{p q} = 0 \text{,} \end{split} \end{equation} where $\mathcal{P}_{p q}$ permutes the indices $p$ and $q$. These equations are used to solve for $\mathbf{z}$, $\mathbf{z}^{\text{core}}$, $\mathbf{z}^{\text{val-occ}}$, $\mathbf{z}^{\text{val-vir}}$, and $\boldsymbol{\lambda}$. The matrix $\boldsymbol{x}$ is then obtained as \begin{equation} \label{eq:overlap_lag} \begin{split} x_{pq} & = - \tfrac{1}{4} \paran*{ 1 + \mathcal{P}_{p q} } \big( \mathbf{E} + \mathbf{D} \sqbrak*{ \mathbf{z}} + \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} \\ & \quad + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{ \boldsymbol{\lambda}} \big)_{p q} \text{.} \end{split} \end{equation} The Lagrange multipliers $\mathbf{z}^{\text{val-occ}}$ are solved by considering the (valence-occupied)-(valence-occupied) part of Eq.~\ref{eq:LinearZvec}, yielding \begin{equation} \label{eq:LinearZvec_occ_occ} \begin{split} \paran*{ 1 - \mathcal{P}_{i j} } ( & \mathbf{E} + \mathbf{D} \sqbrak*{ \mathbf{z}} + \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} \\ & + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{ \boldsymbol{\lambda}} )_{i j} = 0 \text{.} \end{split} \end{equation} Eq.~\ref{eq:LinearZvec_occ_occ} can be further simplified by showing that \begin{equation} \begin{split} \paran*{ 1 - \mathcal{P}_{i j} } \paran*{\mathbf{D} \sqbrak*{ \mathbf{z}} }_{i j} & = 0 \text{,} \\ \paran*{ 1 - \mathcal{P}_{i j} } \paran*{\mathbf{D} \sqbrak*{ \mathbf{z}^{\text{core}}} }_{i j} & = 0 \text{,}\\ \paran*{ 1 - \mathcal{P}_{i j} } \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} }_{i j} & = 0 \text{,} \\ \paran*{ 1 - \mathcal{P}_{i j} } \paran*{\mathbf{D} \sqbrak*{ \boldsymbol{\lambda} } }_{i j} &= 0 \text{.} \\ \end{split} \end{equation} As a result, $\mathbf{z}^{\text{val-occ}}$ is independent of all other Lagrange multipliers, which simplifies Eq.~\ref{eq:LinearZvec_occ_occ} to \begin{equation} \label{eq:zvalocc_cpl} E_{i j} - E_{j i} + \sum_{k > l} \paran*{ \mathcal{B}_{i j k l} - \mathcal{B}_{j i k l} } z^{\text{val-occ}}_{k l} = 0 \text{,} \end{equation} where the 4-dimensional
tensor $\mathcal{B}$ is expanded in Appendix \ref{appendix:Boys}. The linear system of equations defined by Eq.~\ref{eq:zvalocc_cpl} constitutes the Z-vector coupled perturbed localization (Z-CPL) equations, which are used to solve for $\mathbf{z}^{\text{val-occ}}$. Subsequently, Eq.~\ref{eq:azloc} can be used to compute $\mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}}$. The Lagrange multipliers $\mathbf{z}^{\text{core}}$ are solved by considering the core-(valence-occupied) part of Eq.~\ref{eq:LinearZvec}, yielding \begin{equation} \label{eq:LinearZvec_core_val} \begin{split} \paran*{ 1 - \mathcal{P}_{r i} } ( & \mathbf{E} + \mathbf{D} \sqbrak*{ \mathbf{z}} + \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} \\ & + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{\boldsymbol{\lambda}} )_{r i} = 0 \text{,} \end{split} \end{equation} which further simplifies to \begin{equation} \label{eq:zcore} \begin{split} E_{ri} - E_{ir} + \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} }_{ri} + \paran*{ \mathbf{F} \mathbf{z}^{\text{core}} - \mathbf{z}^{\text{core}} \mathbf{F} }_{r i} = 0 \text{.} \end{split} \end{equation} These are the Z-vector equations used to solve for $\mathbf{z}^{\text{core}}$. Subsequently, Eq.~\ref{eq:dzcore} can be used to calculate $\mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}}$. The Lagrange multipliers $\mathbf{z}^{\text{val-vir}}$ are solved by considering the (valence-virtual)-(valence-virtual) part of Eq.~\ref{eq:LinearZvec}, yielding \begin{equation} \label{eq:LinearZvec_vir_vir} \begin{split} \paran*{ 1 - \mathcal{P}_{a b} } (& \mathbf{E} + \mathbf{D} \sqbrak*{ \mathbf{z}} + \mathbf{D} \sqbrak*{ \mathbf{z}^{\text{core}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} \\ & + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{\boldsymbol{\lambda}} )_{a b} = 0 \text{,} \end{split} \end{equation} which further simplifies to \begin{equation} \label{eq:zcpl_valvir} E_{a b} - E_{b a} + \sum_{c > d} \mathcal{C}_{a b c d} z^{\text{val-vir}}_{c d} = 0 \text{,} \end{equation} where the 4-dimensional tensor $\mathcal{C}$ is expanded in Appendix~\ref{appendix:IBO}. These are the Z-CPL equations, which are used to solve for $\mathbf{z}^{\text{val-vir}}$. Subsequently, Eq.~\ref{eq:azvalvir} can be used to compute $\mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}}$. The Lagrange multipliers $\boldsymbol{\lambda}$ are solved by considering the (non-valence-virtual)-(valence-virtual) part of Eq.~\ref{eq:LinearZvec}, yielding \begin{equation} \begin{split} \paran*{ 1 - \mathcal{P}_{w a} } (& \mathbf{E} + \mathbf{D} \sqbrak*{\mathbf{z}} + \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} \\ &+ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{\boldsymbol{\lambda}})_{w a} = 0 \text{,} \\ \end{split} \end{equation} which further simplifies to \begin{equation} \begin{split} & E_{wa} - E_{aw} + \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} }_{wa} - ( \mathbf{P} \boldsymbol{\lambda})_{aw} = 0 \text{.} \\ \end{split} \end{equation} These are the Z-vector equations used to solve for $\boldsymbol{\lambda}$. Subsequently, Eq.~\ref{eq:dlambda} can be used to compute $\mathbf{D} \sqbrak*{\boldsymbol{\lambda}}$.
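Each of these Z-vector systems is linear in the unknown multipliers; for instance, Eq.~\ref{eq:zcore} has the Sylvester-like form $\mathbf{F}\mathbf{z}^{\text{core}} - \mathbf{z}^{\text{core}}\mathbf{F} = \mathbf{B}$ for a fixed right-hand side. The following Python sketch is a toy dense solve with synthetic matrices of our own choosing, ignoring the block restriction of the indices; it is not the iterative solver used in practice, but it makes the linear structure explicit by vectorizing the commutator.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 6                                 # toy dimension (our own choice)

F = rng.standard_normal((n, n))
F = 0.5 * (F + F.T)                   # a symmetric "Fock" matrix

# Build a right-hand side B in the range of z -> F z - z F by starting
# from a known z_true, so that the toy system is consistent.
z_true = rng.standard_normal((n, n))
B = F @ z_true - z_true @ F

# vec(F z - z F) = (I (x) F - F^T (x) I) vec(z), with column-major vec.
I = np.eye(n)
M = np.kron(I, F) - np.kron(F.T, I)

# M is singular (anything commuting with F is in its kernel): use lstsq.
z_vec, *_ = np.linalg.lstsq(M, B.flatten(order="F"), rcond=None)
z = z_vec.reshape((n, n), order="F")

print(np.linalg.norm(F @ z - z @ F - B))   # ~1e-13, equation satisfied
\end{verbatim}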
Finally, the Lagrange multipliers $\mathbf{z}$ are solved by considering the virtual-occupied part of Eq.~\ref{eq:LinearZvec}, yielding \begin{equation} \begin{split} ( 1 &- \mathcal{P}_{a i} ) ( \mathbf{E} + \mathbf{D} \sqbrak*{\mathbf{z}} + \mathbf{D} \sqbrak*{\mathbf{z}^{\text{core}}} + \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} \\ &+ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} + \mathbf{D} \sqbrak*{\boldsymbol{\lambda}} )_{a i} \Big|_{a \in \text{vir}, i \in \text{occ}} = 0 \text{,} \\ \end{split} \end{equation} which further simplifies to \begin{equation} \label{eq:DFTinDFT_External_Occ} \begin{split} & E_{ai} - E_{ia} + \paran*{ 2 \mathbf{g} \sqbrak*{ \bar{\mathbf{z}}^{\text{core}} } }_{a i} \\ & \quad + \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-occ}}} }_{ai} - \paran*{ \mathbf{a} \sqbrak*{\mathbf{z}^{\text{val-vir}}} }_{ia} \\ & \quad - \paran*{\mathbf{P} \boldsymbol{\lambda} }_{i a} + \paran*{ \mathbf{F} \mathbf{z} - \mathbf{z} \mathbf{F} + 2 \mathbf{g} \sqbrak*{ \bar{\mathbf{z}} } }_{a i} = 0 \text{.} \\ \end{split} \end{equation} Here, the MO indices $a$ and $i$ refer to the full virtual and occupied spaces, respectively. These are the Z-vector coupled perturbed Hartree--Fock (Z-CPHF) equations. With the solutions to all Z-vector equations, we can return to Eq.~\ref{eq:overlap_lag} to solve for $\boldsymbol{x}$. \subsubsection{Incorporating molecular-orbital localization} \label{sec:mobml_total_gradient} To provide the working expression of Eq.~\ref{eq:mobml_grad} in terms of derivative AO integrals, we must specify the molecular-orbital localization method. For this derivation, we choose the Foster--Boys and IBO localization methods to localize the valence-occupied and valence-virtual orbitals, respectively, such that \begin{equation} \label{eq:mobml_grad_ao} \begin{split} \frac{\partial \mathcal{L}}{\partial q} & = \text{tr} \sqbrak*{ \mathbf{d}_{\text{a}} \mathbf{h}^{(q)} } + \text{tr} \sqbrak*{ \mathbf{X}_1 \mathbf{S}_{1}^{(q)} } + \text{tr} \sqbrak*{ \mathbf{X}_2 \mathbf{S}_{2}^{(q)} } \\ & + \text{tr} \sqbrak*{ \mathbf{X}_{12} \mathbf{S}_{12}^{(q)} } + \sum_{n} \text{tr} \sqbrak*{ \mathbf{W}_n \paran*{\mathbf{R}^{n} }^{(q)} } \\ & + \tfrac{1}{2} \sum_{\mu \nu \kappa \sigma} D_{\mu \nu \kappa \sigma} (\mu \nu | \kappa \sigma)^{(q)} \text{,} \\ \end{split} \end{equation} where $\mathbf{h}$ is the standard one-electron Hamiltonian, $\mu$, $\nu$, $\kappa$ and $\sigma$ label AO basis functions in the original basis, $(\mu \nu| \kappa \sigma)$ are the two-electron repulsion integrals, $\mathbf{S}_2$ is the overlap matrix of the minimal AO basis (MINAO) used in the IBO procedure, and $\mathbf{S}_{12}$ is the overlap matrix between the original AO and MINAO basis sets. The effective one-particle density $\mathbf{d}_{\text{a}}$ is defined as \begin{equation} \mathbf{d}_{\text{a}} = \boldsymbol{\gamma} + \tfrac{1}{2}\mathbf{C} \bar{\mathbf{D}}^{\text{F}} \mathbf{C}^\dagger + \tfrac{1}{2} \mathbf{C} \bar{\mathbf{z}} \mathbf{C}^\dagger + \tfrac{1}{2} \mathbf{C} \bar{\mathbf{z}}^{\text{core}} \mathbf{C}^\dagger \text{,} \end{equation} where $\boldsymbol{\gamma}$ is the full system HF density.
The effective two-particle density $\mathbf{D}$ is defined as \begin{equation} \begin{split} D_{\mu \nu \kappa \sigma} &= \paran*{ \mathbf{d}_{\text{b}} }_{\mu \nu} \gamma_{\kappa \sigma} - \tfrac{1}{2} \paran*{ \mathbf{d}_{\text{b}} }_{\mu \kappa} \gamma_{\nu \sigma} \\ & + 2 \sum_{pq} D^{\text{J}}_{pq} C_{\mu p} C_{\nu p} C_{\kappa q} C_{\sigma q} \\ & + 2 \sum_{pq} D^{\text{K}}_{pq} C_{\mu p} C_{\kappa p} C_{\nu q} C_{\sigma q} \text{,} \end{split} \end{equation} where the effective one-particle density $\mathbf{d}_{\text{b}}$ is defined as \begin{equation} \mathbf{d}_{\text{b}} = \boldsymbol{\gamma} + \mathbf{C} \bar{\mathbf{D}}^{\text{F}} \mathbf{C}^\dagger + \mathbf{C} \bar{\mathbf{z}} \mathbf{C}^\dagger + \mathbf{C} \bar{\mathbf{z}}^{\text{core}} \mathbf{C}^\dagger \text{.} \end{equation} The matrices $\mathbf{X}_1$, $\mathbf{X}_2$, $\mathbf{X}_{12}$, and $\mathbf{W}_n$ are defined as \begin{equation} \begin{split} \label{eq:X1} \mathbf{X}_1 & = \mathbf{C} \mathbf{x} \mathbf{C}^{\dagger} + \sum_{a > b} \frac{ \partial r_{ab} }{ \partial \mathbf{S}_1 } z_{ab}^{\text{val-vir}} \\ & \quad + \sum_{w a} \frac{ \partial P_{wa} }{ \partial \mathbf{S}_1 } \lambda_{wa} \text{,} \\ \mathbf{X}_2 & = \sum_{a > b} \frac{ \partial r_{ab} }{ \partial \mathbf{S}_2 } z_{ab}^{\text{val-vir}} + \sum_{w a} \frac{ \partial P_{wa} }{ \partial \mathbf{S}_2 } \lambda_{wa} \text{,} \\ \mathbf{X}_{12} &= \sum_{a > b} \frac{ \partial r_{ab} }{ \partial \mathbf{S}_{12} } z_{ab}^{\text{val-vir}} + \sum_{w a} \frac{ \partial P_{wa} }{ \partial \mathbf{S}_{12} } \lambda_{wa} \text{} \\ \end{split} \end{equation} and \begin{equation} \label{eq:Wxyz} \begin{split} \mathbf{W}_n &= \sum_{i > j} \frac{ \partial r_{ij} }{ \partial \mathbf{R}^n } z_{ij}^{\text{val-occ}} + \mathbf{C} \mathbf{D}^{\text{R},n} \mathbf{C}^\dagger \text{,} \end{split} \end{equation} where Eq.~\ref{eq:X1} is expanded in Appendix~\ref{appendix:IBO} and Eq.~\ref{eq:Wxyz} is expanded in Appendix~\ref{appendix:Boys}. \section{Computational Details} In this work, we perform calculations on three different data sets: (i) the thermalized water data set published in Ref.~\citenum{cheng_universal_2019}, (ii) a thermalized set of organic molecules featuring up to seven heavy atoms (QM7b-T) \cite{cheng_universal_2019}, and (iii) the ISO17 data set of conformers taken from molecular dynamics (MD) trajectories for constitutional isomers with the chemical formula C$_7$O$_2$H$_{10}$ \cite{schutt_schnet_2017}. All MOB-ML energy and analytical gradient calculations are implemented in and performed with \textsc{entos qcore}\cite{manby_entos_nodate}. The DF-HF calculations for the QM7b-T set\cite{cheng_universal_2019} and the ISO17 set\cite{schutt_schnet_2017} are performed with a cc-pVTZ \cite{dunning_gaussian_1989} basis set and a cc-pVTZ-JKFIT density fitting basis. \cite{weigend_fully_2002} The DF-HF calculations for the water data set are performed with an aug-cc-pVTZ \cite{kendall_electron_1992} and an aug-cc-pVTZ-JKFIT\cite{weigend_fully_2002} basis set. We employ a molecular orbital convergence threshold of $\texttt{orbital\_grad\_threshold} = 1 \times 10^{-8}$~a.u. In all MOB-ML calculations, the Foster--Boys \cite{foster_canonical_1960} localization method is used to localize the valence-occupied MOs. The valence-virtual space is either localized with Foster--Boys localization (QM7b-T, ISO17) or the IBO localization method \cite{knizia_intrinsic_2013} (water).
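For orientation, the Foster--Boys objective that is maximized during localization, $\sum_i \langle i | \hat{\mathbf{r}} | i \rangle \cdot \langle i | \hat{\mathbf{r}} | i \rangle$, can be illustrated with a short Python sketch. The sketch below performs schematic $2\times 2$ Jacobi rotations with a crude grid search over the rotation angle, using random placeholder dipole matrices of our own choosing; production codes instead use a closed-form optimal angle and real dipole integrals.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 5                                  # toy number of occupied orbitals

# Placeholder dipole matrices <i| r_k |j>, k = x, y, z (symmetric, random).
r = [0.5 * (m + m.T) for m in rng.standard_normal((3, n, n))]

def boys_objective(r):
    # sum_i <i|r|i> . <i|r|i>, the quantity Foster-Boys localization maximizes
    return sum(np.sum(np.diag(rk) ** 2) for rk in r)

def rotate(r, i, j, theta):
    # 2x2 Jacobi rotation of orbitals i and j, applied to every dipole matrix
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j], G[i, j], G[j, i] = c, c, s, -s
    return [G.T @ rk @ G for rk in r]

angles = np.linspace(0.0, np.pi, 361)  # crude grid search (theta = 0 included)
for sweep in range(10):                # a few Jacobi sweeps over all pairs
    for i in range(n):
        for j in range(i + 1, n):
            best = max(angles, key=lambda t: boys_objective(rotate(r, i, j, t)))
            r = rotate(r, i, j, best)
    print(sweep, boys_objective(r))    # monotonically non-decreasing
\end{verbatim}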
The diagonal and off-diagonal feature vectors are constructed following the procedure outlined in Ref.~\citenum{husch_enforcing_2020}. For all Z-CPHF calculations, a convergence threshold of $1 \times 10^{-8}$~a.u. is specified. All WF calculations are performed in Molpro \cite{werner_molpro_2019-1} with the frozen-core approximation and with density fitting. All WF pair energy calculations employ the non-canonical MP2 \cite{moller_note_1934,schutz_low-order_1999,hetzer_low-order_2000,werner_fast_2003} or non-canonical coupled-cluster singles, doubles, and perturbative triples [CCSD(T)] \cite{scuseria_closedshell_1987,scheiner_analytic_1987,scuseria_comparison_1990,lee_analytic_1991,schutz_low-order_2000,schutz_local_2000,werner_efficient_2011} correlation treatments with the cc-pVTZ, cc-pVTZ-MP2FIT, \cite{weigend_efficient_2002} aug-cc-pVTZ and aug-cc-pVTZ-MP2FIT \cite{weigend_efficient_2002} basis sets. An interface between Molpro and \textsc{entos qcore} is used such that WF pair energies are calculated using the DF-HF LMOs produced by \textsc{entos qcore}. All WF gradient calculations employ the canonical MP2 or CCSD(T) correlation treatments with the aug-cc-pVTZ, aug-cc-pVTZ-JKFIT and aug-cc-pVTZ-MP2FIT basis sets. For all Z-CPHF calculations needed for the WF gradient, an iterative solver with a convergence threshold of $1 \times 10^{-9}$~a.u. is used. The MOB-ML models for water are trained on non-canonical CCSD(T)/aug-cc-pVTZ pair correlation energies. When constructing the feature vector, all non-zero elements from the Fock and $\boldsymbol{\resizebox{!}{\CapLen}{$\kappa$}}$ matrices are used. All linear regression (LR) models are trained using Scikit-Learn. \cite{pedregosa_scikit-learn_2011} All Gaussian process regression (GPR) \cite{rasmussen_gaussian_2006} models use the Matern 5/2 kernel \cite{rasmussen_gaussian_2006,genton_classes_nodate} and are optimized using the scaled conjugate gradient option in GPy. \cite{gpy_gpy_2012} All regression clustering models are trained following the framework outlined in Ref.~\citenum{cheng_regression_2019} using a GPR within each cluster. The MOB-ML models for the QM7b-T data set and the ISO17 data set are trained on non-canonical MP2/cc-pVTZ pair correlation energies. Feature selection is performed using random forest regression \cite{breiman_random_2001} with the mean decrease of accuracy criterion, which is sometimes referred to as permutation importance.\cite{breiman_statistical_2001} All GPR models use the Matern 5/2 kernel and are optimized using the scaled conjugate gradient option in GPy. \section{Results and Discussion} \label{sec:results} First, we compare the MOB-ML analytical gradient to the numerical gradient for an exemplary molecule to illustrate the correctness of our derivation and implementation in Table~\ref{tab:analytical_vs_numerical}. \begin{table} \caption{Mean absolute error (MAE) of the MOB-ML analytical nuclear gradient with respect to the MOB-ML numerical nuclear gradient for a non-equilibrium geometry of water. The numerical nuclear gradients were obtained with a two-step central difference formula with a step size of $5 \times 10^{-4}$ bohr. The non-equilibrium geometry of water has bond lengths of 0.986$\text{\AA}$ and 0.958$\text{\AA}$, and a bond angle of 94.5$^{\circ}$.
All MOB-ML models are trained on data for 100 water geometries.} \label{tab:analytical_vs_numerical} \begin{tabular*}{\columnwidth}{p{0.55\columnwidth} c} \hline Regression technique & MAE (hartree/bohr) \\ \hline Linear regression & $1.45 \times 10^{-8}$ \\ Gaussian process regression & $3.75 \times 10^{-8}$ \\ Clustered Gaussian process regression & $2.28 \times 10^{-8}$ \\ \hline \end{tabular*} \end{table} Table~\ref{tab:analytical_vs_numerical} shows that the mean absolute errors (MAE) of the analytical MOB-ML gradients of a distorted water molecule with respect to the numerical ones are on the order of $10^{-8}$~hartree/bohr for all MOB-ML models. A similar MAE is commonly found when comparing analytical and numerical gradients of pure electronic structure methods. \cite{lee_analytic_1991,schutz_analytical_2004,lee_analytical_2019,pinski_analytical_2019} Additionally, Table~\ref{tab:analytical_vs_numerical} shows that the difference between the numerical and analytical gradients is largely independent of the regression technique (linear regression, Gaussian process regression, or a clustered Gaussian process regression) applied within the MOB-ML model. More generally, this illustrates (as also pointed out in Section~\ref{subsubsec:minimizeL}) that any desired regression technique can be applied within MOB-ML without changes to the gradient framework, provided that the regression prediction is differentiable with respect to the features. As a second demonstration, we consider the thermally accessible potential energy surface of a single water molecule, following our previous work. \cite{cheng_universal_2019} Fig.~\ref{fig:water_learning_curve} shows the MAE for the energy predictions and for the associated analytical gradients we obtained with MOB-ML models trained on CCSD(T) energies computed for thermalized water geometries. \begin{figure} \includegraphics[width=\columnwidth]{learning_curve.png} \caption{ MOB-ML learning curves for CCSD(T) energies (top panel) and gradients (bottom panel) for a single water molecule. Mean absolute errors (MAE) for the predictions are reported as a function of the number of water geometries used for training data; only CCSD(T) energies (not gradients) are used for the MOB-ML training data. The green circles correspond to the mean MAE obtained from 50 random samples of the training data; the green shaded area corresponds to the 90\% confidence interval for the predictions and for the gradients obtained from 50 random samples. The black horizontal line at $0.3$~mH/bohr in the bottom panel indicates the commonly used threshold to determine geometry optimization convergence. } \label{fig:water_learning_curve} \end{figure} As already highlighted in Ref.~\citen{cheng_regression_2019}, the MAE for the energy prediction decreases steeply with the number of training geometries, and we reach an MAE of $2 \times 10^{-4}$~kcal/mol when training on correlation energies for 100 training geometries. Additionally, we see that the MAE of the analytical MOB-ML gradients with respect to the analytical CCSD(T) gradients strictly decreases with an increasing amount of training data, although the training data in this context are correlation energy labels and not gradients. The MAE of the MOB-ML analytical gradient is $9 \times 10^{-3}$~kcal/mol/\AA\ when training on correlation energies for 100 water geometries. We can contextualize this result by considering that the threshold commonly used to determine if a structure optimization is converged is $0.36$~kcal/mol/$\text{\AA}$.
The MAE for the gradient drops below this threshold when training on as few as three to nine water geometries. This demonstrates that MOB-ML is able to describe potential energy surfaces to a high accuracy and with a high data efficiency. In Fig.~\ref{fig:qm7b_learning}, we show that this result generalizes to a diverse set of molecules. To this end, we first study the QM7b-T data set, which comprises a thermalized set of 7211 organic molecules with seven or fewer heavy atoms. \cite{cheng_thermalized_2019} Fig.~\ref{fig:qm7b_learning} shows the MAE for the MOB-ML energy prediction and for the associated analytical gradient with respect to the corresponding MP2 quantities as a function of the number of MP2 reference energy calculations. \begin{center} \begin{figure}[htbp] \includegraphics[width=\columnwidth]{qm7b_learning_curve.png} \caption{ MOB-ML learning curves for MP2 energies (top panel) and gradients (bottom panel) for the QM7b-T data set. Mean absolute errors (MAE) for the predictions are reported as a function of the number of randomly selected molecules used for training data; only MP2 energies (not gradients) are used for the MOB-ML training data.} \label{fig:qm7b_learning} \end{figure} \end{center} As already reported in Ref.~\citen{husch_enforcing_2020}, the learning curve for the energy decreases steeply, and we obtain an MAE of 1.0 kcal/mol when training on about 70 structures. The decrease in the MAE for the energy prediction is accompanied by a decrease in the MAE for the analytical MOB-ML gradient with respect to the analytical MP2 gradient. We reach an MAE of $2.08$ kcal/mol/\AA~when training on 220 structures. To compare MOB-ML for gradient predictions with other machine learning methods, we now also examine the ISO17 data set \cite{schutt_schnet_2017}. The ISO17 data set consists of conformers taken from MD trajectories for constitutional isomers with the chemical formula C$_7$O$_2$H$_{10}$. Table~\ref{tab:iso17} shows the performance of two MOB-ML models, one trained on 220 QM7b-T structures and one trained on 100 ISO17 structures, and summarizes the MAEs obtained with other ML models in the literature, i.e., SchNet, \cite{schutt_schnet_2017} FCHL\cite{christensen_operators_2019}, PhysNet \cite{unke_physnet_2019}, the shared-weight neural network (SWNN) \cite{profitt_shared-weight_2019}, GM-sNN, \cite{zaverkin_gaussian_2020} and GNNFF. \cite{park_accurate_nodate} The MOB-ML models are the only ML models which are on average chemically accurate, although the MOB-ML models were only trained on energies for 100 ISO17 molecules and 220 QM7b-T molecules, respectively. The fact that a model trained on a small set of seven-heavy-atom molecules, which are smaller in size than the ISO17 molecules and chemically more diverse (QM7b-T additionally contains the elements N, S, and Cl), performs so well showcases again how transferable and data efficient MOB-ML models are. The next best model in terms of the energy MAE is GM-sNN, which was trained on energies and gradients for 400k ISO17 structures and achieves an MAE of 1.97 kcal/mol. The force MAE of the MOB-ML models (1.63 and 1.64 kcal/mol/\AA, respectively) is comparable to that of GM-sNN (1.66 kcal/mol/\AA), while MOB-ML employs only 0.025\% of the training data. MOB-ML is significantly more accurate in the forces than other models trained on energies alone, i.e., SchNet, which obtained an MAE of 5.71 kcal/mol/\AA, and SWNN, which obtained an MAE of 6.61 kcal/mol/\AA.
The only model which is more accurate in terms of the force MAE is PhysNet, which is trained on energies and forces for 400k ISO17 structures. PhysNet obtains a force MAE of 1.38 kcal/mol/\AA. Given the demonstrated learnability of forces, it is very likely that MOB-ML could be trained to be more accurate by including more training data. Furthermore, analytical gradients have not been derived for all reference theories, which considerably limits the scope of these machine learning methodologies. For example, the popular local coupled cluster methods\cite{schwilk_scalable_2017, guo_communication_2018, nagy_optimization_2018} do not currently have analytical gradient theories. \onecolumngrid \begin{center} \begin{table}[htbp] \caption{Comparison of the mean absolute error for the prediction of energies and atomic forces for the unknown test set of the ISO17 data set obtained with different ML methods. The ML methods employ different training-set sizes and are trained on different labels. Energy and force errors are reported in kcal/mol and kcal/mol/$\text{\AA}$, respectively. } \label{tab:iso17} \begin{tabular*}{\columnwidth}{p{0.155\columnwidth} p{0.155\columnwidth} p{0.155\columnwidth} p{0.155\columnwidth} p{0.155\columnwidth} p{0.155\columnwidth}} \hline Method & Training Size & \multicolumn{2}{c}{Trained on energy labels} & \multicolumn{2}{c}{Trained on energy+gradient labels} \\ & & Energy MAE & Force MAE & Energy MAE & Force MAE \\ \hline SchNet\cite{schutt_schnet_2017} & 400,000 & 3.11 & 5.71 & 2.40 & 2.18 \\ FCHL\cite{christensen_operators_2019} & 1,000 & --- & --- & 3.70 & 3.50 \\ PhysNet \cite{unke_physnet_2019} & 400,000 & --- & --- & 2.94 & 1.38 \\ SWNN \cite{profitt_shared-weight_2019} & 400,000 & 3.72 & 6.61 & 8.57 & 6.74\\ GM-sNN\cite{zaverkin_gaussian_2020} & 400,000 & --- & --- & 1.97 & 1.66 \\ GNNFF \cite{park_accurate_nodate} & 400,000 & --- & --- & --- & 2.02 \\ \textbf{MOB-ML} & \textbf{100} & \textbf{0.84} & \textbf{1.64} & --- & --- \\ \textbf{MOB-ML} & \textbf{220$^*$} & \textbf{0.76} & \textbf{1.63} & --- & ---\\ \hline \end{tabular*} $^*$This MOB-ML model was trained on 220 randomly selected structures from the QM7b-T data set. \end{table} \end{center} \twocolumngrid Although MOB-ML compares favorably to other ML methods, it remains to be shown whether the MOB-ML gradients are sufficiently accurate for practical applications. Therefore, we now use the MOB-ML gradients to perform the common quantum-chemical task of optimizing molecular structures. We optimize the constitutional isomers in ISO17 with MP2 and with MOB-ML and compare the resulting structures via the root mean square deviation (RMSD) of the atomic positions in Figure~\ref{fig:iso17_rmsd}. \begin{center} \begin{figure}[htbp] \includegraphics[width=\columnwidth]{iso17_rmsd_v2.png} \caption{Histogrammed root mean square deviations (RMSD) of HF structures (blue), B3LYP-D3 structures (orange), and MOB-ML structures (green) with respect to MP2 structures for the unique isomers in the ISO17 data set. The MOB-ML model was trained on 220 randomly selected QM7b-T structures. } \label{fig:iso17_rmsd} \end{figure} \end{center} Figure~\ref{fig:iso17_rmsd} shows that the MOB-ML optimized structures are very similar to the reference MP2 optimized structures, with a mean RMSD of 0.01~\AA. The MOB-ML optimized structures are significantly and systematically closer to the reference MP2 structures than the HF-optimized structures, which exhibit an average RMSD of 0.03~\AA.
Moreover, the MOB-ML structures are more similar to the reference MP2 structures than those obtained from B3LYP-D3, a typical DFT exchange-correlation functional. The B3LYP-D3 structures exhibit an average RMSD of 0.03~\AA~with respect to the MP2 reference structures. \section{Conclusions} In this work, we have presented the derivation and implementation of the formally complete MOB-ML analytical nuclear gradient theory within a general Lagrangian framework. We have validated our derivation and implementation by comparison of numerical and analytical gradients. The MOB-ML gradient framework can be applied in conjunction with any desired fitting technique (e.g., Gaussian process regression or neural networks) and any desired recipe for assembling the MOB-ML feature information. Furthermore, the framework for evaluating the gradient of a predicted high-accuracy wave function energy is independent of the wave function method MOB-ML was trained to predict. Hence, we can take the analytical gradient of a MOB-ML method trained to predict an arbitrarily accurate wave function theory. MOB-ML was previously shown to predict high-accuracy wave function energies at the cost of a molecular orbital evaluation. We have now shown that a MOB-ML model trained on correlation energies alone also yields highly accurate gradients for potential energy surfaces of a single molecule and for sets of diverse molecules. Specifically, we presented a MOB-ML model which obtains a force MAE of 1.64 kcal/mol/\AA\ for the ISO17 set when trained only on reference energies for 100 molecules, beating the next best literature model trained only on energies, SchNet (5.71 kcal/mol/\AA), which was trained on 400k molecules \cite{schutt_schnet_2017}. The transferability and data efficiency become even clearer when considering that we obtain an MAE of 1.63 kcal/mol/\AA\ for the ISO17 set when training on 220 QM7b-T molecules, which are smaller in size (seven versus nine heavy atoms) and which are more diverse in terms of chemical composition. The force accuracy of a MOB-ML model trained on energies for 220 QM7b-T molecules is on par with some of the best ML models trained on energies and forces for hundreds of thousands of ISO17 molecules. Furthermore, we have demonstrated that a force MAE of this magnitude translates into structures which are very close to reference structures. Specifically, we obtain a mean RMSD of 0.01~\AA\ with respect to MP2 optimized structures for the ISO17 data set, which is three times smaller than for HF or B3LYP-D3 optimized structures. Natural objectives for future work include (i) the inclusion of gradients in the training process to boost the performance in the very low data regime; (ii) the extension to an open-shell framework; (iii) the adaptation of the Lagrangian framework to derive the analytical gradients of the MOB-ML energy with respect to quantities such as electric and magnetic fields. \begin{acknowledgments} This work is supported in part by the U.S. Army Research Laboratory (W911NF-12-2-0023), the U.S. Department of Energy (DE-SC0019390), the Caltech DeLogi Fund, and the Camille and Henry Dreyfus Foundation (Award ML-20-196). S.J.R.L. thanks the Molecular Software Sciences Institute (MolSSI) for a MolSSI investment fellowship. T.H. acknowledges funding through an Early Postdoc Mobility Fellowship by the Swiss National Science Foundation (Award P2EZP2\_184234).
Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the DOE Office of Science under contract DE-AC02-05CH11231. \end{acknowledgments} \section*{Supplementary Material} The Supplementary Material contains the partial derivatives of the feature vector elements. \section*{DATA AVAILABILITY STATEMENT} The data that support the findings of this study are available within the article and its supplementary material. The data set used in Table~\ref{tab:analytical_vs_numerical} and Fig.~\ref{fig:water_learning_curve} is available from Ref.~\citenum{cheng_thermalized_2019}. The data set used in Fig.~\ref{fig:qm7b_learning} is available from Ref.~\citenum{cheng_thermalized_2019}. The data set used in Table~\ref{tab:iso17} and Fig.~\ref{fig:iso17_rmsd} is available from Ref.~\citenum{schutt_schnet_2017}.
{ "timestamp": "2020-12-17T02:16:53", "yymm": "2012", "arxiv_id": "2012.08899", "language": "en", "url": "https://arxiv.org/abs/2012.08899" }
\section{Introduction and the main result} The concept of chaos is at the core of dynamical systems theory. The understanding that random behavior can arise from the evolution of deterministic systems, such as discrete dynamical systems on compact metric spaces, led many mathematicians to try to formalize the notion of chaos. The first to do that, to the best of our knowledge, was Guckenheimer \cite{G} in the setting of one-dimensional maps. R. Devaney, in his book \cite{D}, gathered some of these attempts in a definition of what are now known as Devaney chaotic systems. The dynamical property that captures the central idea of chaos is the \emph{sensitivity to initial conditions}. This is best illustrated by Edward Lorenz and his ideas of the instability of the atmosphere and the butterfly effect \cite{L}. We now define it precisely. \begin{definition} A map $f:X\rightarrow X$ defined in a compact metric space $(X,d)$ is \emph{sensitive} if there is $\varepsilon>0$ such that for every $x\in X$ and every $\delta>0$ there exist $y\in X$ with $d(x,y)<\delta$ and $n\in\mathbb{N}$ satisfying $$d(f^{n}(x),f^{n}(y))>\varepsilon.$$ The number $\varepsilon$ is called the \emph{sensitivity constant} of $f$. \end{definition} Sensitivity means that for each initial condition there are arbitrarily close distinct initial conditions having completely different futures. We can explain sensitivity in a few distinct ways. Denoting by $$B(x,\delta)=\{y\in X; \,\,\, d(y,x)<\delta\}$$ the ball centered at $x$ with radius $\delta$, sensitivity implies the existence of $\varepsilon>0$ such that for every ball $B(x,\delta)$ with $x\in X$ and $\delta>0$, there exists $n\in\mathbb{N}$ such that $$\diam(f^n(B(x,\delta)))>\varepsilon,$$ where $\diam(A)=\sup\{d(a,b); a,b\in A\}$ denotes the diameter of $A$. Thus, sensitivity expands every ball of positive radius beyond diameter $\varepsilon$ under iteration. We can explain sensitivity using local stable sets as follows. We define the $\varepsilon$\emph{-stable set} of $x$ by $$W^s_\varepsilon(x):=\{y\in X\;;\;d(f^n(y), f^n(x))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}\}.$$ Roughly speaking, this is the set of initial conditions whose futures are similar to the future of $x$. It follows that $f$ is sensitive if, and only if, there exists $\varepsilon>0$ such that for any $x\in X$, the set $W^s_{\varepsilon}(x)$ does not contain any neighborhood of $x$. Thus, sensitivity can be seen as a condition on all local stable sets of the space. In this paper we study sensitivity for homeomorphisms and how it implies the existence of several initial conditions with similar pasts. The idea is to understand whether sensitivity can also be seen as a condition on all local unstable sets, ensuring they are non-trivial in distinct scenarios. Recall the definition of the $\varepsilon$\emph{-unstable set} of $x$: $$W^u_\varepsilon(x):=\{y\in X\;;\;d(f^{-n}(y), f^{-n}(x))\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}\}.$$ Analogously, this is the set of initial conditions whose pasts are similar to the past of $x$. The main theorem of this paper proves the existence of a compact and perfect subset of any local unstable set, assuming sensitivity and the shadowing property.
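Before turning to the shadowing property, we remark that sensitivity is easy to observe numerically. The following Python sketch, which is our own illustration and not part of the arguments below, iterates the doubling map $x \mapsto 2x \bmod 1$ on the circle (a standard example of a sensitive map, though not a homeomorphism) and shows two initial conditions at distance $10^{-8}$ being separated beyond $\varepsilon = 1/4$ after a few dozen iterates.
\begin{verbatim}
# Sensitivity of the doubling map x -> 2x (mod 1) on the circle,
# with the circle metric d(x, y) = min(|x - y|, 1 - |x - y|).
def f(x):
    return (2.0 * x) % 1.0

def d(x, y):
    return min(abs(x - y), 1.0 - abs(x - y))

x, y = 0.2, 0.2 + 1e-8           # two initial conditions, delta = 1e-8
for n in range(40):
    if d(x, y) > 0.25:           # a sensitivity constant epsilon = 1/4
        print(f"d(f^{n}(x), f^{n}(y)) > 1/4 at n = {n}")
        break
    x, y = f(x), f(y)
\end{verbatim}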
\begin{definition} We say that a homeomorphism $f:X\rightarrow X$ satisfies the \emph{shadowing property} if, given $\varepsilon>0$, there is $\delta>0$ such that for each sequence $(x_n)_{n\in\mathbb{Z}}\subset X$ satisfying $$d(f(x_n),x_{n+1})<\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{Z}$$ there is $y\in X$ such that $$d(f^n(y),x_n)<\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{Z}.$$ In this case, we say that $(x_n)_{n\in\mathbb{Z}}$ is a $\delta$-pseudo orbit of $f$ and that $(x_n)_{n\in\mathbb{Z}}$ is $\varepsilon$-shadowed by $y$. \end{definition} The following is our main result. \begin{mainthm}\label{B} Let $f\colon X\to X$ be a homeomorphism of a compact metric space $X$ satisfying the shadowing property. \begin{enumerate} \item If $f$ is sensitive, with sensitivity constant $\varepsilon>0$, then for each $x\in X$ there is a compact and perfect set $$C_x\subset W^u_\varepsilon(x).$$ \item If $f^{-1}$ is sensitive, with sensitivity constant $\varepsilon>0$, then for each $x\in X$ there is a compact and perfect set $$C_x\subset W^s_\varepsilon(x).$$ \end{enumerate} \end{mainthm} \begin{proof} This proof is inspired by the proof of Proposition 2.2.2 in \cite{ArtigueDend}. Assume that $f$ is a sensitive homeomorphism with sensitivity constant $\varepsilon>0$. The shadowing property assures the existence of $\delta\in(0,\varepsilon)$ such that every $\delta$-pseudo orbit of $f$ is $\varepsilon/2$-shadowed. Given $x\in X$, we can use the sensitivity of $f$ to obtain $x_1\in X$ such that $d(x, x_1)<\delta$ and $x_1\notin W^s_{\varepsilon}(x)$. Consider the sequence $(x^k)_{k\in\mathbb{Z}}$ defined as follows: $$x^k=\left\{\begin{array}{ll} f^k(x), & k< 0\\ f^k(x_1), & k\geq 0.\\ \end{array}\right.$$ The sequence $(x^k)_{k\in\mathbb{Z}}$ is a $\delta$-pseudo orbit of $f$, and then by the shadowing property there is $$c_1(x)\in W^u_{\varepsilon/2}(x)\cap W^s_{\varepsilon/2}(x_1).$$ Note that $c_1(x)\neq x$ since $c_1(x)\in W^s_{\varepsilon/2}(x_1)$ and $x_1\notin W^s_{\varepsilon}(x)$, and consider the set $$C_1=\{x, c_1(x)\}.$$ Let $\varepsilon_1>0$ be such that $$\varepsilon_1<\min\{\varepsilon/4,d(x,c_1(x))/2\}$$ and choose $\delta_1\in(0,\varepsilon_1)$, given by the shadowing property, such that every $\delta_1$-pseudo orbit of $f$ is $\varepsilon_1$-shadowed. We can use the sensitivity of $f$ for each $y\in C_1$ to obtain $y_1=y_1(y)$ such that $$d(y,y_1)<\delta_1 \,\,\,\,\,\, \text{and} \,\,\,\,\,\, y_1\notin W^s_\varepsilon(y).$$ The sequence $(y^k)_{k\in\mathbb{Z}}$ given by $$y^k=\left\{\begin{array}{ll} f^k(y), & k<0\\ f^k(y_1), & k\geq0\\ \end{array}\right.$$ is a $\delta_1$-pseudo orbit of $f$, so the shadowing property assures the existence of $$c_2(y)\in W^u_{\varepsilon_1}(y)\cap W^s_{\varepsilon_1}(y_1).$$ Note that $c_2(y)\in W^u_\varepsilon(x)$ for every $y\in C_1$ since $$y\in W^u_{\varepsilon/2}(x), \,\,\,\,\,\, c_2(y)\in W^u_{\varepsilon_1}(y) \,\,\,\,\,\, \text{and} \,\,\,\,\,\, \varepsilon/2+\varepsilon_1<\varepsilon.$$ Also, $c_2(y)\neq y$ since $c_2(y)\in W^s_{\varepsilon_1}(y_1)$ and $y_1\notin W^s_{\varepsilon}(y)$. Moreover, $$c_2(y)\neq z \,\,\,\,\,\, \text{for each} \,\,\,\,\,\, z\in C_1$$ because $d(c_2(y),y)\leq\varepsilon_1$ and $d(y,z)>2\varepsilon_1$ if $z\in C_1\setminus\{y\}$.
Thus, the set $$C_2=C_1\cup \{ c_2(y); \,\,y\in C_1\}$$ has $2^2$ elements, $C_2\subset W^u_\varepsilon(x)$ and for each $y\in C_{1}$ there is $c_2(y)\in C_2$ such that $$d(c_2(y),y)<\frac{\varepsilon}{2^2}.$$ We can construct, by an induction process, an increasing sequence of sets $(C_k)_{k\in\mathbb{N}}$ such that $C_k$ has $2^k$ elements, $C_k\subset W^u_\varepsilon(x)$ and for each $y\in C_{k-1}$ there exists $c_k(y)\in C_k$ such that $$d(c_k(y),y)<\frac{\varepsilon}{2^k}.$$ Thus, we can consider the set $$C_x=\overline{\bigcup_{k\geq 1}C_k},$$ which is a compact set contained in $W^u_\varepsilon(x)$, since $W^u_\varepsilon(x)$ is closed and $C_k\subset W^u_\varepsilon(x)$ for every $k\in\mathbb{N}$. To see that $C_x$ is perfect, let $z\in C_x$. If $z\notin C_k$ for every $k\in\mathbb{N}$, then clearly $z$ is accumulated by points of $C_x$. If $z\in C_k$ for some $k\in\mathbb{N}$, then $$z\in C_n \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\geq k,$$ since $(C_k)_{k\in\mathbb{N}}$ is an increasing sequence. Thus, for each $\alpha>0$ we can choose $N>k$ such that $$\frac{\varepsilon}{2^N}<\alpha$$ and since $z\in C_N$ it follows that there exists $c_{N+1}(z)\in C_{N+1}$ satisfying $$d(c_{N+1}(z),z)<\frac{\varepsilon}{2^{N+1}}<\alpha.$$ So, for each $z\in C_x$ and $\alpha>0$ we can find $c_{N+1}(z)\in C_x$, distinct from $z$, such that $d(z,c_{N+1}(z))<\alpha$. This proves that $z$ is an accumulation point of $C_x$ and that $C_x$ is perfect. The proof for the case where $f^{-1}$ is sensitive is analogous and we leave the details to the reader. \end{proof} In the next section we obtain consequences of this theorem, generalizing results in \cite{ACCV} and \cite{CC2}. \section{Positive cw-expansivity and shadowing} In \cite{Kato93} and \cite{Kato93B}, Kato introduced a positive notion of cw-expansiveness. \begin{definition} A map $f:X\rightarrow X$ is \emph{positively cw-expansive} if there exists $c>0$ such that $W^s_c(x)$ is totally disconnected for every $x\in X$. Equivalently, for every non-trivial continuum $C\subset X$ there exists $n\in\mathbb{N}$ such that $$\diam(f^n(C))>c.$$ \end{definition} Positively cw-expansive maps defined on continua (compact, connected and non-trivial metric spaces) satisfy sensitivity to initial conditions. Indeed, every non-empty open set contains a non-trivial continuum whose diameter increases beyond $c$ when iterated to the future, so local stable sets cannot contain any open set of the space. Kato exhibited examples of positively cw-expansive homeomorphisms and proved that they cannot be defined on Peano continua (see Corollary 1.7 in \cite{Kato93B}). We will prove that the restriction to certain hyperbolic sets provides examples of positively cw-expansive homeomorphisms satisfying the shadowing property. We omit classical definitions here, such as hyperbolicity, the non-wandering set $\Omega(f)$, attractors, manifolds and foliations, because in the proof we only need Theorem 1 in \cite{ABD}, which states that a hyperbolic set contained in $\Omega(f)$ either has empty interior or is the whole ambient manifold. \begin{theorem} Let $f\colon M\to M$ be a diffeomorphism defined on a manifold and let $\Lambda\subset M$ be a hyperbolic attractor of $f$. If $\Lambda \subset \Omega(f)$, $\Lambda\neq M$ and its stable (unstable) foliation is one-dimensional, then $f|_{\Lambda}$ ($f^{-1}|_{\Lambda}$) is positively cw-expansive. \end{theorem} \begin{proof} We assume that the stable foliation has dimension one and prove that $f|_{\Lambda}$ is positively cw-expansive.
By contradiction, suppose that $f|_{\Lambda}$ is not positively cw-expansive, that is, for each $\varepsilon>0$ there exist a non-trivial continuum $C\subset\Lambda$ and $x\in \Lambda$ such that $C\subset W^s_\varepsilon(x)$. Let $$A=\bigcup_{x\in C}W^u_\varepsilon(x)$$ and note that $\Lambda$ being an attractor implies that $$W^u_\varepsilon(x)\subset \Lambda \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, x\in \Lambda.$$ This implies that $A\subset \Lambda$ since $C\subset \Lambda$. Since $C$ is a non-degenerate continuum contained in some stable set, which is a one-dimensional manifold, it follows that $C$ is also a one-dimensional manifold. Since the local unstable sets form a continuous family of disks transverse to $C$, this ensures that the interior of $A$ is not empty. Thus the interior of $\Lambda$ is not empty, since $A\subset \Lambda$. By Theorem 1 in \cite{ABD} we have that $\Lambda=M$, contradicting the hypothesis $\Lambda\neq M$. \end{proof} Surface DA attractors and the Solenoid are examples of hyperbolic attractors illustrating this result (see \cite{Robinson} for details of these attractors). All these examples are positively cw-expansive (and hence sensitive) homeomorphisms satisfying the shadowing property. They have in common that local stable sets are uncountable. Indeed, it is proved in \cite{MM} that the Hausdorff dimension of $W^s_{\varepsilon}(x)\cap\Lambda$ is positive in the case where $\Lambda$ is a basic piece of an Axiom A diffeomorphism of a surface. We now start to discuss the case where all local stable sets are countable. \begin{definition} We say that a homeomorphism $f:X\rightarrow X$ is \emph{positively countably-expansive} if there exists $\varepsilon>0$ such that $W^s_\varepsilon(x)$ is countable for every $x\in X$. Equivalently, for every uncountable $C\subset X$, there exists $n\in\mathbb{N}$ such that $$\diam(f^n(C))>\varepsilon.$$ \end{definition} Any homeomorphism defined on a countable compact metric space is clearly positively countably-expansive. Thus, the identity map on a countable compact metric space is positively countably-expansive and satisfies the shadowing property (see Theorem 2.3.2 in \cite{AH}). The question we now start to discuss is: \begin{question} Does there exist a positively countably-expansive homeomorphism with the shadowing property defined on an uncountable compact metric space? \end{question} We answer this question negatively in two particular cases: the first assuming transitivity and the second assuming the L-shadowing property. \begin{definition} A map $f\colon X\to X$ is called \emph{transitive} if for any pair $U,V\subset X$ of non-empty open subsets, there exists $n\in\mathbb{N}$ such that $$f^n(U)\cap V\neq\emptyset.$$ \end{definition} In this case, there is a residual set of points whose future orbits are dense in the space. A point $x\in X$ is called \emph{chain-recurrent} if for each $\varepsilon>0$ there exists a non-trivial finite $\varepsilon$-pseudo orbit starting and ending at $x$. The set of all chain-recurrent points is called the \textit{chain recurrent set} and is denoted by $CR(f)$. This set can be split into disjoint, compact and invariant subsets, called the \emph{chain-recurrent classes}. The \emph{chain-recurrent class} of a point $x\in X$ is the set of all points $y\in X$ such that for each $\varepsilon>0$ there exists a periodic $\varepsilon$-pseudo orbit containing both $x$ and $y$. If $f$ is transitive, then the whole space $X$ is a chain recurrent class. Now we define the L-shadowing property.
\begin{definition} A homeomorphism $f$ satisfies the \emph{L-shadowing property} if for every $\varepsilon>0$ there exists $\delta>0$ such that for every sequence $(x_k)_{k\in\mathbb{Z}}\subset X$ satisfying $$d(f(x_k),x_{k+1})\leq\delta\,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{Z} \,\,\,\,\,\, \text{and}$$ $$d(f(x_k),x_{k+1})\to0 \,\,\,\,\,\, \text{when} \,\,\,\,\,\, |k|\to\infty,$$ there is $z\in X$ satisfying $$d(f^k(z),x_k)\leq\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\in\mathbb{Z} \,\,\,\,\,\, \text{and}$$ $$d(f^k(z),x_k)\to0\,\,\,\,\,\, \text{when} \,\,\,\,\,\, |k|\to\infty.$$ The sequence $(x_k)_{k\in\mathbb{Z}}$ is called a $\delta$-\emph{limit-pseudo-orbit} of $f$ and we say that $z$ $\varepsilon$-\emph{limit-shadows} the sequence $(x_k)_{k\in\mathbb{Z}}$. \end{definition} The L-shadowing property was introduced in \cite{CC2} and further explored in \cite{ACCV} and \cite{ACCV3}. This is a stronger version of the shadowing property that holds for continuum-wise hyperbolic systems, as proved in \cite{ACCV3}, and it implies a spectral decomposition of the chain recurrent set (see \cite{ACCV}). The chain-recurrent classes of homeomorphisms satisfying the L-shadowing property are either expansive or admit arbitrarily small \emph{topological semihorseshoes} (see Theorem B in \cite{ACCV}), which are compact periodic sets whose restriction is semiconjugate to a shift of two symbols. In particular, topological semihorseshoes are uncountable sets with positive entropy contained in arbitrarily small dynamical balls. The following is our second main result. \begin{mainthm}\label{C} Let $f:X\rightarrow X$ be a positively countably-expansive homeomorphism defined on a compact metric space $X$. If at least one of the following conditions is satisfied: \begin{itemize} \item[(1)] $f$ is transitive and has the shadowing property; \item[(2)] $f$ has the L-shadowing property; \end{itemize} then $X$ is countable. \end{mainthm} We will split the proof of this theorem into items (1) and (2), and before proving each item we state a few definitions and results that are important in the proof. \begin{definition} We say that a map $f\colon X\to X$ is \emph{equicontinuous} if the sequence of iterates $(f^n)_{n\in\mathbb{N}}$ is an equicontinuous sequence of maps. Equivalently, for every $\varepsilon>0$ there exists $\delta>0$ such that $$d(x,y)<\delta \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, d(f^n(x),f^n(y))<\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{N}.$$ In this case, $W^s_{\varepsilon}(x)$ contains the ball of radius $\delta$ centered at $x$, for every $x\in X$. \end{definition} A classical result in topological dynamics is the Auslander-Yorke dichotomy: a minimal homeomorphism of a compact metric space is either sensitive or equicontinuous. In \cite[Corollary 1]{Moot} Moothathu proved that a transitive homeomorphism satisfying the shadowing property is either sensitive or equicontinuous. We remark that a consequence of Theorem \ref{B} is the following result. \begin{corollary}\label{sen} If $f:X\rightarrow X$ is a positively countably-expansive homeomorphism defined on a compact metric space $X$ and satisfying the shadowing property, then $f^{-1}$ is not sensitive. \end{corollary} \begin{proof} If $f^{-1}$ is sensitive, Theorem \ref{B} assures the existence of compact and perfect sets inside every local stable set, but this implies that local stable sets are uncountable, contradicting the hypothesis of positive countable expansivity.
\end{proof} \begin{proof}[Proof of Theorem \ref{C} (1)] Let $f$ be a positively countably-expansive homeomorphism that is transitive and satisfies the shadowing property. Note that $f^{-1}$ is also transitive and satisfies the shadowing property, so $f^{-1}$ is either sensitive or equicontinuous. Corollary \ref{sen} assures that $f^{-1}$ cannot be sensitive, so it is equicontinuous. Since $f$ is a homeomorphism, \cite[Theorem 3.4]{AG} assures that $f$ is also equicontinuous. Let $c>0$ be a constant of positive countable expansivity of $f$ and choose $\delta>0$, given by equicontinuity, such that $$d(x,y)<\delta \,\,\,\,\,\, \text{implies} \,\,\,\,\,\, y\in W^s_{c}(x).$$ Thus, every open set of diameter smaller than $\delta$ is contained in a single $c$-stable set and, hence, is countable. Since $X$ is compact, we can choose a finite open cover with elements of diameter smaller than $\delta$. This implies that $X$ is countable, since it is written as a finite union of countable sets. \end{proof} Now we define the limit shadowing property. \begin{definition} A sequence $(x_k)_{k\in\mathbb{N}}\subset X$ is called a \emph{limit pseudo-orbit} if it satisfies $$d(f(x_k),x_{k+1})\rightarrow 0 \,\,\,\,\,\, \text{when} \,\,\,\,\,\, k\rightarrow\infty.$$ The sequence $(x_k)_{k\in\mathbb{N}}$ is \emph{limit-shadowed} if there exists $y\in X$ such that $$d(f^k(y),x_k)\rightarrow 0 \,\,\,\,\,\, \text{when} \,\,\,\,\,\,k\rightarrow \infty.$$ We say that $f$ has the \emph{limit shadowing property} if every limit pseudo-orbit is limit-shadowed. \end{definition} This property was introduced by Eirola, Nevanlinna and Pilyugin in \cite{ENP}; see also \cite{Cthesis}, \cite{C}, \cite{C2}, \cite{CK} and \cite{P1}. The L-shadowing property refines both the shadowing and the limit shadowing properties, since homeomorphisms satisfying the L-shadowing property also satisfy both shadowing and limit shadowing. Indeed, the shadowing property is proved in \cite[Proposition 2]{CC2} and the limit shadowing property can be seen as a consequence of \cite[Theorem 2.4]{ACCV}. We exhibit in the next result a much simpler proof of this fact. \begin{proposition} If $f\colon X\to X$ is a homeomorphism satisfying the L-shadowing property, then it satisfies the limit shadowing property. \end{proposition} \begin{proof} Let $(x_k)_{k\in\mathbb{N}}$ be a limit pseudo-orbit of $f$ and let $\varepsilon=\diam(X)$. The L-shadowing property assures the existence of $\delta>0$ such that any $\delta$-limit-pseudo-orbit is $\varepsilon$-limit-shadowed. Choose $N\in\mathbb{N}$ such that $$d(f(x_k),x_{k+1})<\delta \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, k\geq N$$ and consider the sequence $(y_k)_{k\in\mathbb{Z}}$ defined by $$y_k=\begin{cases} x_{N+k}, & k\geq0\\ f^{k}(x_N), & k<0. \end{cases}$$ This is clearly a $\delta$-limit-pseudo-orbit and, hence, there exists $z\in X$ that $\varepsilon$-limit-shadows it. In particular, $f^{-N}(z)$ limit-shadows $(x_k)_{k\in\mathbb{N}}$. \end{proof} In the following we use the limit shadowing property and the finiteness of the number of distinct chain recurrent classes to write the whole space as the union of the stable sets of chain-recurrent points.
\begin{proposition}\label{L} If $f\colon X\to X$ is a homeomorphism defined on a compact metric space and satisfying the L-shadowing property, then $$X=\bigcup_{x\in CR(f)}W^s(x).$$ \end{proposition} \begin{proof} It is proved in \cite{CC2} that the L-shadowing property implies that the chain recurrent set is a finite union of distinct chain recurrent classes $$CR(f)=\bigcup_{i=1}^nC_i.$$ This implies that the restriction of $f$ to each of these classes satisfies the L-shadowing property and, in particular, the limit shadowing property. The argument in \cite[Theorem 3.2.2]{AH} proves that $$X=\bigcup_{i=1}^nW^s(C_i),$$ where $W^s(C)=\{y\in X; \lim_{k\to\infty}d(f^k(y),C)=0\}$. Hence, if $z\in X$, then $$z\in W^s(C_i) \,\,\,\,\,\, \text{for some} \,\,\,\,\,\, i\in\{1,\dots,n\}.$$ We can project the orbit of $z$ into the class $C_i$ by considering a sequence of points $(x_k)_{k\in\mathbb{N}}\subset C_i$ minimizing the distance between $f^k(z)$ and $C_i$. It follows from $$\lim_{k\to\infty}d(f^k(z),C_i)=0$$ that $(x_k)_{k\in\mathbb{N}}$ is a limit pseudo-orbit of $f$, and then the limit shadowing property assures the existence of $x\in C_i$ that limit-shadows $(x_k)_{k\in\mathbb{N}}$. In particular we obtain $$\lim_{k\to \infty}d(f^k(z),f^k(x))=0, \,\,\,\,\,\, \text{i.e.,} \,\,\,\,\,\, z\in W^s(x).$$ \end{proof} \begin{proof}[Proof of Theorem \ref{C} (2)] Let $f$ be a positively countably-expansive homeomorphism satisfying the L-shadowing property, and let $\varepsilon>0$ be a constant of positive countable expansivity of $f$. In this case, there is only a finite number of distinct chain recurrent classes and the restriction of $f$ to each of these classes is transitive, has the shadowing property and, by hypothesis, is positively countably-expansive. Then item (1) assures that each chain recurrent class is countable, and since there is only a finite number of them, the chain recurrent set is countable. Proposition \ref{L} ensures that $$X=\bigcup_{x\in CR(f)}W^s(x)$$ and, hence, to prove that $X$ is countable it is enough to prove that $W^s(x)$ is countable for every $x\in CR(f)$. Since $$W^s(x)\subset\bigcup_{n\in\mathbb{N}\cup\{0\}}f^{-n}(W^s_{\varepsilon}(f^n(x))) \,\,\,\, \text{ for every} \,\,\,\,\,\, x\in CR(f),$$ the existence of $x\in CR(f)$ such that $W^s(x)$ is uncountable implies that $$f^{-n}(W^s_{\varepsilon}(f^n(x)))$$ would be uncountable for some $n\in\mathbb{N}\cup\{0\}$. Consequently $W^s_{\varepsilon}(f^n(x))$ would be uncountable, yielding a contradiction. \end{proof} This theorem generalizes Theorems A and B in \cite{CC2} and Theorem G in \cite{ACCV} to the case of positive countable expansivity. In \cite{CC2} it is proved that positively n-expansive homeomorphisms with the additional assumptions of transitivity and shadowing, or the L-shadowing property, can only be defined on finite spaces. More generally, it is proved in \cite{ACCV} that positively finite-expansive homeomorphisms satisfying the shadowing property can only be defined on finite spaces. \section{Examples} In this last section we introduce more examples of sensitive homeomorphisms satisfying the shadowing property. The first class of examples consists of the continuum-wise expansive homeomorphisms defined by Kato in \cite{Kato93}. \begin{definition} We say that $f\colon X\to X$ is \emph{continuum-wise expansive} if there exists $c>0$ such that $W^u_c(x)\cap W^s_c(x)$ is totally disconnected for every $x\in X$.
Equivalently, for each non-trivial continuum $C\subset X$, there exists $n\in\mathbb{Z}$ such that \[\diam(f^n(C))>c.\] The number $c>0$ is called a cw-expansivity constant of $f$ and the set $W^u_c(x)\cap W^s_c(x)$ is called the dynamical ball of $x$ with radius $c$. \end{definition} Cw-expansive homeomorphisms defined on Peano continua (locally connected continua) are sensitive (see \cite{Hertz}). Thus, any cw-expansive homeomorphism defined on a Peano continuum and satisfying the shadowing property is an example of a sensitive homeomorphism with the shadowing property. Examples of these systems can be found in \cite{ACCV3}: the pseudo-Anosov diffeomorphism of $\mathbb{S}^2$ and, more generally, the continuum-wise hyperbolic homeomorphisms. We can construct sensitive homeomorphisms with the shadowing property that are not cw-expansive as follows. Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces and $X\times Y$ be the product space endowed with the metric $$d_{X\times Y}((x, y),(x', y'))=\max\{d_X(x,x'), d_Y(y,y')\}.$$ Let $f:X\rightarrow X$ and $g:Y\rightarrow Y$ be homeomorphisms and consider the product homeomorphism $f\times g:X\times Y\rightarrow X\times Y$ defined by $$f\times g\;(x,y)=(f(x),g(y))$$ for every $(x,y)\in X\times Y$. \begin{theorem} If $f$ and $g$ have the shadowing property and one of them is sensitive, then $f\times g$ is sensitive and has the shadowing property. Moreover, if either $f$ or $g$ is not cw-expansive, then $f\times g$ is not cw-expansive. \end{theorem} \begin{proof} Suppose that $f$ and $g$ have the shadowing property and $f$ is sensitive. Let $\varepsilon>0$ be a sensitivity constant of $f$. Given $(x,y)\in X\times Y$ and $\delta>0$, the sensitivity of $f$ assures the existence of $x'\in X$ with $d_X(x', x)<\delta$ and $n\in\mathbb{N}$ such that $$d_X(f^n(x),f^n(x'))>\varepsilon.$$ Note that $$d_{X\times Y}((x',y), (x,y))=\max\{d_X(x',x), d_Y(y,y)\}=d_X(x',x)<\delta$$ and $$\begin{array}{rcl} d_{X\times Y}((f\times g)^n(x',y), (f\times g)^n(x, y))&=& d_{X\times Y}((f^n(x'),g^n(y)), (f^n(x),g^n(y)))\\ & & \\ &=&\max\{d_X(f^n(x'),f^n(x)), d_Y(g^n(y),g^n(y))\}\\ & & \\ &=&d_X(f^n(x'), f^n(x))\;>\;\varepsilon.\\ \end{array}$$ This proves that $f\times g$ is sensitive. The fact that the product of homeomorphisms with the shadowing property also has the shadowing property is known; see \cite[Theorem 2.3.5]{AH}. Now, suppose that $g$ is not cw-expansive, i.e., for every $\varepsilon>0$ there exists a non-degenerate continuum $C\subset Y$ such that $$\diam(g^n(C))<\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{Z}.$$ Then for each $x\in X$, the subset $\{x\}\times C$ of $X\times Y$ is a non-degenerate continuum satisfying $$\diam((f\times g)^n(\{x\}\times C))<\varepsilon \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, n\in\mathbb{Z}.$$ Therefore, $f\times g$ is not cw-expansive. \end{proof} \begin{example} Let $f:X\rightarrow X$ be an Anosov diffeomorphism and $g:Y\rightarrow Y$ a Morse-Smale diffeomorphism. By the theorem above, the product homeomorphism $f\times g$ is sensitive, has the shadowing property and is not cw-expansive, since $f$ is sensitive and has the shadowing property, and $g$ has the shadowing property and is not cw-expansive. \end{example} We also note that the shift map $\sigma$ on $[0,1]^{\mathbb{Z}}$ is sensitive (see \cite[Corollary 7.5]{AIL}), satisfies the shadowing property (see \cite[Theorem 2.3.12]{AH}) and is not cw-expansive.
Indeed, for each $\varepsilon>0$ and $\underline{x}=(x_i)_{i\in\mathbb{Z}}\in [0,1]^{\mathbb{Z}}$, the non-degenerate continuum \[C_{\underline{x}}=\prod_{i\in\mathbb{Z}}([x_i-\varepsilon,x_i+\varepsilon]\cap [0,1])\] is contained in $W^s_\varepsilon(\underline{x})\cap W^u_\varepsilon(\underline{x})$. To see this, we consider the following metric on $[0,1]^{\mathbb{Z}}$: for $\underline{x}=(x_i)_{i\in\mathbb{Z}},\; \underline{y}=(y_i)_{i\in\mathbb{Z}} \in [0,1]^{\mathbb{Z}}$, let \[d(\underline{x},\underline{y}) = \sup_{i\in\mathbb{Z}}\frac{|x_i-y_i|}{2^{|i|}}.\] Thus, if $\underline{y}=(y_i)_{i\in\mathbb{Z}}\in C_{\underline{x}}$, then \[y_i\in [x_i-\varepsilon,x_i+\varepsilon] \,\,\,\,\,\, \text{for every} \,\,\,\,\,\, i\in\mathbb{Z}\] and this implies that \begin{eqnarray*} d(\sigma^n(\underline{x}), \sigma^n(\underline{y})) &=& \sup_{i\in\mathbb{Z}}\frac{|x_{i+n}-y_{i+n}|}{2^{|i|}}\\ &\leq&\sup_{i\in\mathbb{Z}}\frac{\varepsilon}{2^{|i|}}\\ &\leq& \varepsilon \end{eqnarray*} for every $n\in\mathbb{Z}$. Thus, the shift map $\sigma$ on $[0,1]^{\mathbb{Z}}$ is also an example of a sensitive homeomorphism with the shadowing property that is not cw-expansive. \section*{Acknowledgements} The second author was supported by Capes and the Alexander von Humboldt Foundation under the project number 88881.162174/2017-1, and also by CNPq grant number 405916/2018-3 and by Fapemig grant number APQ-01047-18. The authors thank R\'egis Var\~ao for participating in our first virtual meetings during the preparation of the paper and Welington Cordeiro for discussions about sensitivity.
{ "timestamp": "2021-12-07T02:16:43", "yymm": "2012", "arxiv_id": "2012.08894", "language": "en", "url": "https://arxiv.org/abs/2012.08894" }
\section{Introduction} The ability to refer to the past is essential for robots in their long-term interaction with humans. The goal of this paper is to validate the benefits of a history-aware robot, especially when it continuously receives text instructions to perform a series of pick-and-place manipulations. We claim that robots can benefit from referring to their task history in the following aspects: (1) a robot can interpret text instructions that omit details or use co-referential expressions, and (2) it can infer visual information occluded due to the robot's fixed point of view. We propose the task of history-dependent manipulation to present the advantage of a manipulation robot that can refer to its task history. Figure~\ref{fig:first_fig} shows an example of a history-dependent manipulation task. In this task, a human who knows the shape of the target structure keeps instructing the robot to manipulate blocks. While the human observes the workspace from multiple perspectives, the robot can visually observe the workspace only with an RGBD camera at a fixed position. Under these conditions, the robot is required to refer to its task history (1) to understand language expressions that assume the robot's ability to recall the past, and (2) to overcome the occlusion in its visual observation. In Figure~\ref{fig:first_fig}, the blue fonts of the text instruction show an example of the first case. After the robot manipulates the `front green block', the next instruction orders the robot to manipulate `another green block'. If the robot can recall that the object manipulated in the previous operation was the `front green block', it can understand what `another green block' refers to, namely the rear green block that has not been manipulated before. The red fonts of the text instruction show an example of the second case. After the robot stacks the front green block on the yellow block, the yellow block becomes invisible to the robot at the next stage. Since there is no limitation on the human's perspective, he/she can instruct the robot to place the next target object behind the yellow block, which is occluded in the robot's perspective. Although the yellow block is invisible to the robot, this problem can be solved if the robot can recall the visual information obtained during the previous operation. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig/first_fig.png} \end{center} \caption{An example scenario of history-dependent manipulation, where the robot visually grounds the instruction by referring to its task history. By referring to the past, the robot can understand instructions that omit details or use co-referential expressions (blue fonts), and can infer visually occluded information (red fonts). In the depth images, the yellow area indicates a closer distance to the sensor.} \vspace{-8mm} \label{fig:first_fig} \end{figure} In this paper, we propose a synthetic dataset of various scenarios of the history-dependent manipulation task, and a deep neural network based methodology to solve the challenges of history-dependent manipulation. Our synthetic dataset provides 1200 scenarios of history-dependent manipulation, and each scenario is composed of a series of pick-and-place manipulations for building structures with given blocks. For each pick-and-place manipulation, a set of relevant instructions and an RGBD image observing the workspace before the operation are provided.
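To make the structure of each sample concrete, the following minimal Python sketch shows how one pick-and-place step of a scenario could be organized; every field name, path, and value below is a hypothetical illustration, not the dataset's actual schema.
\begin{verbatim}
# Hypothetical sketch of one pick-and-place record of a scenario;
# all field names, paths, and values are illustrative assumptions,
# not the dataset's actual schema.
step = {
    "rgb_path": "scenario_0001/step_03_rgb.png",
    "depth_path": "scenario_0001/step_03_depth.png",
    "instructions": [
        "Take the red block and place it on the left side of the yellow block.",
        "Place the red block on the left side of the yellow block.",
    ],
    "target_object_box": (112, 84, 140, 112),  # dummy pixel box (x1, y1, x2, y2)
    "target_position_box": (60, 84, 88, 112),  # dummy pixel box (x1, y1, x2, y2)
    "history_time_index": [0, 1],              # earlier steps to refer to
}
print(step["instructions"][0])
\end{verbatim}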
Note that our synthetic dataset does not include the complex characteristics of real images, natural language, and robot motions. If the dataset contained such complex environmental factors, we were concerned that the added complexity would increase the uncertainty of the proposed model's final prediction, which would be a hindrance to empirically showing the benefits of history-aware robots. Therefore, we have focused on providing a benchmark dataset without such confounding elements. Our synthetic dataset focuses solely on validating the effectiveness of a robot that can refer to its task history. We believe the scalability and feasibility issues can be discussed and resolved one by one, after forming a consensus among researchers about the importance of history-aware robots. The remainder of this paper is structured as follows. The related works are introduced in Section~\ref{sec:rel_work}. Section~\ref{sec:dataset} describes the proposed dataset regarding our history-dependent manipulation problem. Section~\ref{sec:method} presents the proposed method based on attention-based deep neural networks, which can serve as a benchmark model for future research. Section~\ref{sec:exp} discusses the qualitative and quantitative experimental results, and presents a real-world demonstration of the proposed approach based on CycleGAN. \section{Related Work} \label{sec:rel_work} \subsection{History-Dependent Robot Behavior} Various studies have been conducted to enable a robot to understand human utterances based on history and to eventually perform a goal task. In the field of visual navigation, the task of vision-and-dialog navigation has emerged recently \cite{dialog_navi, dialog_navi_2}. The goal is to enable a robot to successfully reach a target by continuously communicating with humans through language. In this task, a robot needs to keep asking a human how to navigate until reaching the target, and the human responds to the robot. Since the language expressions may refer to past conversations or the robot's past trajectory, the robot needs to perceive the dialog and navigation history to follow the given instructions properly. In the field of manipulation, \cite{tellmedave} suggested a system addressing the challenges of grounding a given language instruction to a proper manipulation depending on the environment and task context. In their work, anaphoric references are understood based on history (e.g., ``Pick up the \textit{snack}, and microwave \textit{it}''). \cite{tgg_mit} proposed a probabilistic model which is able to accumulate knowledge based on past visual and language information, and employ that knowledge to ground the current language command. These works have highlighted the importance of understanding history by enabling robots to perceive language expressions based on the human's assumption of the robot's ability to recall the past. In addition to this, our study further suggests that understanding history can also help robots infer information about objects that are invisible due to occlusion, which is another crucial aspect not covered by previous relevant studies.
\subsection{Visually Grounding Referring Expressions} Since our robot needs to ground the given language instructions in the input RGBD image, our history-dependent manipulation task has a similarity with studies in the computer vision research field whose objective is to comprehend referring expressions describing a target object in an input image \cite{rel_det_1, rel_det_2, rel_seg_1, rel_seg_2, rel_seg_3, rel_mattnet, rel_cvpr2020}. After several studies that can be categorized by whether the proposed model finds the target object based on a detection box \cite{rel_det_1, rel_det_2} or a segmentation map \cite{rel_seg_1, rel_seg_2}, recent models are able to both locate and segment the referred target object \cite{rel_mattnet, rel_cvpr2020}. The task of visually grounding referring expressions has also been related to object manipulation \cite{rel_mani_dataset, rel_mani_brown}, enabling robots to find the target object to manipulate. \cite{rel_mani_dataset} proposed a dataset that can be used to train a robot or agent to pick up the target object referred to by human language instructions. \cite{rel_mani_brown} developed a system which can distinguish the target object referred to by a human language instruction when multiple objects belonging to the same class are given. These studies are similar to ours in that their models can retrieve the object referred to by a given language expression. However, since they do not assume that a human can instruct an agent over time, their objective does not include enabling agents to understand the historical information accrued through the past. \subsection{Visual Dialog} Since our robot needs to infer the historical information accumulated through the past to understand the current language instruction, our study has common ground with the study of visual dialog agents. The task of visual dialog focuses on developing an agent which has the ability to communicate with humans in natural language about the visual content of a given image \cite{visual_dialog}. The objective of the visual dialog agent is to solve the classification problem of choosing the correct answer to a question related to the input image when the dialog history for the image is also provided as input. To acquire the correct answer, the agent needs to ground the question in the image and infer the contextual information from the dialog history. \cite{visual_dialog} pioneered this research field by proposing a relevant dataset and benchmark model, and related studies are still actively being conducted \cite{visual_dialog_rel_1, visual_dialog_rel_2, visual_dialog_rel_3}. This task is related to our study since the visual dialog agent interacts with humans multiple times, perceiving historical information. However, our research is distinguished from the task of visual dialog since the visual information provided to our robot changes each time the robot manipulates an object by following the given language instruction. In other words, our task requires the robot to track not only the history of linguistic interactions, but also the history of how the objects in the workspace have been manipulated so far. \begin{figure*}[t] \begin{center} \includegraphics[width=0.93\linewidth]{fig/data_fig.png} \end{center} \caption{An example of the proposed synthetic dataset for a single history-dependent manipulation task. Depth images are omitted due to the space limit.
\textit{History time index} denotes the time indices of the past manipulations that need to be referred to. The difference between the current time index and the history time index is the \textit{history dependency distance}.} \vspace{-5mm} \label{fig:dataset} \end{figure*} \section{Synthetic Image-and-Text Dataset} \label{sec:dataset} \subsection{Task Scenario} We focus on a situation where text instructions ordering the robot to move blocks are given one by one. Each task of history-dependent manipulation starts with a workspace in which several blocks of various colors are scattered in random locations. The task ends when all the given blocks have been moved to build structures. During the task, the robot observes the given workspace vertically from above using an RGBD camera, and executes the given instructions one by one. Since the robot's camera is fixed at a specific location, some blocks may be invisible due to occlusion. With our simulator based on CoppeliaSim (previously known as V-REP) \cite{coppeliaSim}, a dataset reflecting this task scenario has been collected. Our dataset consists of 1200 tasks of history-dependent manipulation (1000 for training, 200 for test), where each task is a series of pick-and-place manipulations to build structures, as shown in Figure~\ref{fig:dataset}. For each pick-and-place, an RGBD image of the workspace before the manipulation, a set of language instructions, the bounding box information of the ground truth target object and position, and the time index of the past pick-and-place manipulation that needs to be referred to are provided. For each RGBD image, 3 to 12 corresponding text instructions are provided, and each history-dependent manipulation task consists of 4 to 8 pick-and-place manipulations. In total, the dataset comprises $6092$ RGBD images and $43194$ text instructions. \subsection{Text Instruction} The process of generating instructions for each pick-and-place manipulation can be divided into three sub-processes: (1) generating phrases for target objects, (2) generating phrases for target locations, and (3) combining them into a complete sentence. We excluded ambiguous instructions, so that each generated instruction indicates only a single pick-and-place manipulation. \subsubsection {Phrase Generation for Target Objects} In our dataset, a target block can be described based on its color, position, nearby blocks, and the task history. Regarding color, our dataset mentions it explicitly (e.g., the red block). But if another block with the same color needs to be mentioned in the same sentence, the color information can be annotated as in `Stack the left red block above the another \textit{same colored} block'. The block position is described based on its $xyz$ position when observed from the human (simulator) perspective, such as `the block \textit{on the left side}'. In addition, the relative position with respect to other blocks is also used to describe the block, as in `the \textit{foremost} block'. If the target block cannot be annotated based on its color and position, it is referred to by its adjacent blocks, such as `the block \textit{closest to} the red block'. For blocks that have been manipulated before, expressions such as `the block that you just moved' can be used. Blocks that have not been manipulated yet can also be mentioned based on the task history.
For example, when all blocks except the current target block have been manipulated to build a structure, the current target block can be referred to as `the \textit{last remaining} block'. \subsubsection{Phrase Generation for Target Positions} After generating phrases for the target object to move, our simulator generates phrases that describe where the object should be placed. In our task, target positions can be categorized into two types. First, there are positions within the workspace that can be clearly described, such as `center', `left side', `right front corner'. Second, there are positions adjacent to other blocks, which can be described as `on the left side of the yellow block'. In this case, a set of phrases for the block (`the yellow block'), which is the reference object used to describe a target position, is generated based on the object phrase generation method mentioned above, and words annotating the nearby location (`on the left side') are appended to the generated phrases for the reference object. \subsubsection{Sentence Generation} Let \texttt{(V1)} denote the verb representing `pick-up', \texttt{(V2)} denote the verb representing `place', \texttt{(T)} denote the phrases for the target object, and \texttt{(P)} denote the phrases for the target position. Based on this, a complete sentence can be generated by assembling these elements as below: \begin{itemize} \item $(\texttt{V1})+(\texttt{T})+\texttt{`and'}+(\texttt{V2})+\texttt{`it'}+(\texttt{P})$ \begin{itemize} \item e.g., Take the red block and place it on the left side of the yellow block. \end{itemize} \item $(\texttt{V2}) + (\texttt{T}) + (\texttt{P})$ \begin{itemize} \item e.g., Place the red block on the left side of the yellow block. \end{itemize} \end{itemize} \subsection{History Dependency Annotation} \subsubsection{History Dependency Types} After the first pick-and-place manipulation, some text instructions need to be interpreted by referring to the history of how blocks have been manipulated. Situations where the robot needs to understand its task history can be categorized as follows: \begin{itemize} \item When a given instruction explicitly requires recalling how blocks have been moved \item When a block occluded by others needs to be recalled to interpret the instruction \end{itemize} We call the first type of dependency \textit{explicit history dependency} (blue fonts in Figure~\ref{fig:dataset}), and the second type \textit{implicit history dependency} (red fonts in Figure~\ref{fig:dataset}). For the \textit{explicit} annotation, it is examined whether the instruction contains expressions that need to be interpreted by referring to the task history. For the \textit{implicit} annotation, it is examined whether the instruction mentions an occluded block. The type of history dependency is annotated based on the phrases for the target object (`pick-up') and the phrases for the object used as a reference to describe the target position (`place'). Thus, our dataset provides annotations on what type of history dependency is required to understand the `pick-up' and `place' operations of the current instruction. \subsubsection{History Dependency Distances} Beyond the type of history dependency, our dataset also provides annotations of the time indices of the past manipulations that need to be referred to, which we call the \textit{`history time index'}. In the case of explicit history dependency, the time index of the previous pick-and-place manipulation referred to by the current instruction is labelled.
For the implicit history dependency, the time indices of when the occluded object moved to its current location and when it was occluded are labelled. We denote the difference between the current time index and the history time index as the \textit{`history dependency distance'}, which is a criterion for how far the model can refer to the past. Figure~\ref{fig:dataset} shows an example of the history dependency distance. In stage 1, the yellow block that was moved in stage 0 is referred to by the phrase `the block which you moved'. In this case, the history time index for this phrase is $\{0\}$, and the corresponding history dependency distance is $\{1-0\}=\{1\}$ since the current time index is $\{1\}$. After stage 1, the yellow block becomes invisible to the robot since the red block has been stacked on it. When this yellow block is mentioned in stage 2, the history time index is $\{0, 1\}$ since the time indices of when it moved and when it was occluded need to be considered. The corresponding history dependency distance is computed accordingly as $\{2-0, 2-1\}=\{2, 1\}$, since the current time index is $\{2\}$. \section{Methodology} \label{sec:method} To solve history-dependent manipulation, we propose a model based on deep neural networks, as shown in Figure~\ref{fig:model}. It takes an RGBD image and a text instruction as inputs, and locates two bounding boxes, for the target object and the target position. The model is composed of modules responsible for encoding valuable features from the image, the language, and the task history. Based on the features extracted by the image, language, and history modules, the classification module predicts the two bounding boxes for the target object and position. After the prediction, the latent vectors from the classification module's first fully connected layer that correspond to the selected proposals are saved as task history vectors for the current manipulation. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig/model_fig_small.png} \end{center} \caption{The proposed model for the task of history-dependent manipulation. Best viewed in color.} \label{fig:model} \end{figure} \subsection{Attention Module} Before describing the main modules of our model, we first explain the attention module, a sub-module frequently employed in the image, language, and history modules. Let us assume that feature vectors encoded from the image, language, and task history can be represented as a set of vectors $\mathbf{X}=\{x_1,\ldots,x_N\}$, where $x_i \in \mathbb{R}^{D_x}$. Based on the query-key-value-style attention mechanism proposed in \cite{transformer}, our attention module determines where to attend within the information in $\mathbf{X}$. Let $\mathbf{q} \in \mathbb{R}^{D_{qkv}}$ denote the given query vector, and let $W_k, W_v \in \mathbb{R}^{D_{qkv} \times D_x}$ denote the matrices projecting $x_i$ into key and value vectors $W_k x_i, W_v x_i \in \mathbb{R}^{D_{qkv}}$.
Based on this, a feature vector $\mathbf{f}_X$ representing $\mathbf{X}$ is obtained as a weighted sum of the values, where the weight for the $i$-th value $W_v x_i$ is the score of the $i$-th key $W_k x_i$, computed from the cosine similarity between $\mathbf{q}$ and $W_k x_i$, as follows: \begin{align} \mathbf{f}_X =& \textrm{Attention}(\mathbf{q}, \mathbf{X}; W_k, W_v) = \sum_{i=1}^{N}\mathcal{A}_i (W_v x_i), \\ \mathcal{A}_i =& \frac{\textrm{score}({\mathbf{q}, W_k x_i})}{\sum_{j=1}^{N}\textrm{score}({\mathbf{q}, W_k x_j})},\; \textrm{score}(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a}^T\mathbf{b}}{\|\mathbf{a}\|\|\mathbf{b}\|} \end{align} \subsection{Image Module} Given an RGBD image observing the workspace, the image module first extracts a spatial feature based on the backbone network from \cite{fpn}. Note that our model needs to understand the objects' positional information to perceive the relationship between the language and the image when a phrase such as `rightmost purple block' is given as input. For this purpose, \textit{spatial coordinates}, whose width and height are the same as those of the generated spatial feature and which contain the positional information of each spatial location, are concatenated to the generated spatial feature. The implementation of the spatial coordinates is the same as the approach proposed in \cite{spatial_coordinate}. Let $\mathcal{F}_{I} = \{\mathcal{F}_I(1, 1),\ldots,\mathcal{F}_I(W, H)\}$ denote the obtained spatial feature with coordinates, where $W$ and $H$ denote its width and height, and $\mathcal{F}_I(i, j)\in \mathbb{R}^{D_I}$ denotes the feature at spatial location $(i, j)$. Based on $\mathcal{F}_{I}$, the image module obtains image patches of candidate objects to pick up and candidate positions to place them. In this regard, we employ approaches that have been widely used in object detection \cite{faster_rcnn}. Based on $\mathcal{F}_{I}$, a region proposal network (RPN) generates proposals, which are candidate bounding boxes of where `pick-up' or `place' can be performed. Afterwards, a proposal feature $\mathcal{F}_{P} \in \mathbb{R}^{N_P \times D_P}$ (dark blue rectangle in Figure~\ref{fig:model}), the feature of the visual information contained in the $N_P$ proposals, is obtained by a region-of-interest (RoI) pooling layer. Here, $\mathcal{F}_{P}(i) \in \mathbb{R}^{D_P}$ denotes the feature for the $i$-th proposal. $\mathcal{F}_{P}$ is also fed into a box-regression layer to more accurately adjust the size of the generated bounding boxes, as in \cite{faster_rcnn}; this layer is omitted from Figure~\ref{fig:model} due to the space limit. In addition, the image module obtains a feature vector representing the visual information about all objects observed in the input image. For this, the attention module is used to obtain the objectness feature $\mathbf{f}_{obj}$ (purple rectangle in Figure~\ref{fig:model}) by focusing more on the spatial locations containing objects. Let $\mathbf{q}_{obj} \in \mathbb{R}^{D_{qkv}}$ denote the query vector for capturing the objectness of each feature vector in $\mathcal{F}_{I}$. Note that $\mathbf{q}_{obj}$ is one of the model's trainable parameters. Then, the objectness feature is obtained as follows: \begin{equation} \mathbf{f}_{obj} = \textrm{Attention}(\mathbf{q}_{obj}, \mathcal{F}_I; W_k^I, W_v^I), \end{equation} where $W_k^I, W_v^I \in \mathbb{R}^{D_{qkv} \times D_I}$ denote the matrices for projecting each $\mathcal{F}_I(i, j)$ to key and value vectors.
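As a concrete illustration of this attention module, the following minimal NumPy sketch (a sketch under assumed dimensions and random stand-in data, not the actual implementation) computes the weighted sum of values with cosine-similarity scores and applies it to obtain the objectness feature from a flattened spatial feature map.
\begin{verbatim}
import numpy as np

def attention(q, X, W_k, W_v):
    # f_X = sum_i A_i (W_v x_i), where A_i is the cosine similarity
    # between q and the key W_k x_i, normalized over i (plain
    # normalization of the scores, as in the equation above).
    keys = X @ W_k.T                        # (N, D_qkv)
    values = X @ W_v.T                      # (N, D_qkv)
    scores = (keys @ q) / (np.linalg.norm(keys, axis=1)
                           * np.linalg.norm(q))
    weights = scores / scores.sum()         # attention weights A_i
    return weights @ values

# Stand-in spatial feature map F_I with N = W x H locations; the
# dimensions and random initialization below are assumptions.
D_x, D_qkv, N = 64, 32, 14 * 14
rng = np.random.default_rng(0)
F_I = rng.normal(size=(N, D_x))
W_k_I = rng.normal(size=(D_qkv, D_x))
W_v_I = rng.normal(size=(D_qkv, D_x))
q_obj = rng.normal(size=D_qkv)              # trainable objectness query

f_obj = attention(q_obj, F_I, W_k_I, W_v_I)
print(f_obj.shape)                          # (32,)
\end{verbatim}
In the trained model, $W_k$, $W_v$, and the query vectors are learned parameters rather than random matrices; the sketch only fixes the computation they enter into.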
\subsection{Language Module} Let $L=\{w_i\}_{i=1 \ldots N_L}$ denote a given text instruction consisting of $N_L$ words, where $w_i$ denotes the $i$-th word in the sentence $L$. The language module encodes the given sentence $L$ into a set of word embedding vectors $E=\{e_i\}_{i=1 \ldots N_L}$, where $e_i \in \mathbb{R}^{D_E}$ denotes the word embedding vector of $w_i$. After encoding $E$ with a bi-directional LSTM \cite{lstm}, a set of vectors $\mathcal{F}_{L}=\{\textrm{Bi-LSTM}(e_i)\}_{i=1,\ldots,N_L}$ is obtained. Here, $\mathcal{F}_{L}(i) \in \mathbb{R}^{D_L}$ denotes the encoded vector of the $i$-th word. Since $L$ contains all information about which object to pick up and where to place it, rather than representing the given sentence with a single feature, our language module encodes the information for `pick-up' and `place' separately based on the attention module. To extract the language information about the target object to pick up, our model has a query vector $\mathbf{q}_{pick}^L \in \mathbb{R}^{D_{qkv}}$, a trainable parameter that captures the language information related to `pick-up'; an analogous query vector $\mathbf{q}_{place}^L$ is used for `place'. For obtaining key and value vectors from $\mathcal{F}_{L}$, the projection matrices $W_k^L, W_v^L \in \mathbb{R}^{D_{qkv} \times D_{L}}$ are used. Based on these, the language features for `pick-up' and `place' are obtained as follows (dark green rectangles in Figure~\ref{fig:model}): \begin{align} \mathbf{f}^L_{pick} &= \textrm{Attention}(\mathbf{q}_{pick}^L, \mathcal{F}_L; W_k^L, W_v^L) \\ \mathbf{f}^L_{place} &= \textrm{Attention}(\mathbf{q}_{place}^L, \mathcal{F}_L; W_k^L, W_v^L) \end{align} \subsection{History Module} Assume that the model has a set of vectors representing the task history, $H=\{h_i\}_{i=1 \ldots N_H}$, where $h_i \in \mathbb{R}^{D_H}$ denotes a vector encoding the history of the $i$-th operation. In our model, the information related to `pick-up' and `place' is stored separately (i.e., $h_0$ for the first `pick-up', $h_1$ for the first `place', $h_2$ for the second `pick-up', and so on). How $h_i$ is obtained is described in Section~\ref{sec:classify_module}. Based on a bi-directional LSTM, the history module first encodes $H$ into $\mathcal{F}_{H}=\textrm{Bi-LSTM}(H)$, where $\mathcal{F}_{H}(i) \in \mathbb{R}^{D_H}$ represents a vector encoding the $i$-th history. Using the attention module, the history module then encodes $\mathcal{F}_{H}$ with respect to `pick-up' and `place', so that $H$ can be understood with respect to each sub-task. For example, if the input $L$ is ``Take the last block and stack it above the block that you just moved'', the model needs to focus on the whole history to understand what to pick up, but only on the latest history to understand where to place it. To capture the history information related to the current `pick-up' task, the relevant language feature $\mathbf{f}_{pick}^L$ and the time information of the current `pick-up' are employed to construct a query vector $\mathbf{q}_{pick}^H \in \mathbb{R}^{D_{qkv}}$. Regarding the time information, let $T_{pick}, T_{place} \in \mathbb{R}^{T_{max}}$ denote the one-hot encoded time indices of when the current pick-and-place happens, and let $W_T \in \mathbb{R}^{D_T \times T_{max}}$ denote the matrix projecting them into a high-dimensional space. For example, for the first pick-and-place manipulation, the time indices would be $T_{pick}=[1, 0, 0, \ldots]^T$ and $T_{place}=[0, 1, 0, \ldots]^T$.
Based on the language and time information for each of `pick-up' and `place', the query vector for each sub-task is generated, and the history features for `pick-up' and `place' are obtained as follows (brown rectangles in Figure~\ref{fig:model}): \begin{align} \mathbf{f}^{H}_{pick} &=\textrm{Attention}(\mathbf{q}_{pick}^H, \mathcal{F}_{H}; W_k^H, W_v^H) \\ \mathbf{f}^H_{place} &= \textrm{Attention}(\mathbf{q}_{place}^H, \mathcal{F}_{H}; W_k^H, W_v^H) \\ \mathbf{q}_{pick}^H &= W_q^H [\mathbf{f}^L_{pick}; W_T T_{pick}] \\ \mathbf{q}_{place}^H &= W_q^H [\mathbf{f}^L_{place}; W_T T_{place}] \end{align} where $W_q^H \in \mathbb{R}^{D_{qkv} \times (D_{qkv}+D_T)}$ and $W_k^H, W_v^H \in \mathbb{R}^{D_{qkv} \times D_H}$ are the projection matrices for the query, key, and value vectors. \subsection{Classification Module} \label{sec:classify_module} The classification module predicts which proposal is most suitable for `pick-up' or `place'. To estimate whether the object in the $i$-th proposal is proper for `pick-up' or `place', the classification module constructs its input vectors $\mathbf{I}_{pick}$ and $\mathbf{I}_{place}$ from the features of the $i$-th proposal, the objectness, the language, and the time, such that: \begin{align} \mathbf{I}_{pick}(i) &= [\mathcal{F}_{P}(i); \mathbf{f}_{obj}; \mathbf{f}^L_{pick}; W_T T_{pick}] \\ \mathbf{I}_{place}(i) &= [\mathcal{F}_{P}(i); \mathbf{f}_{obj}; \mathbf{f}^L_{place}; W_T T_{place}] \end{align} After passing each $\mathbf{I}_{pick}(i), \mathbf{I}_{place}(i)$ through a fully connected layer and a leaky ReLU layer, intermediate features $\mathbf{H}_{pick}(i), \mathbf{H}_{place}(i)$ are obtained. Then, the scores $s_{pick}(i)$ and $s_{place}(i)$, representing how suitable the $i$-th proposal is for `pick-up' or `place', are obtained after passing $[\mathbf{H}_{pick}(i); \mathbf{f}^H_{pick}]$ and $[\mathbf{H}_{place}(i); \mathbf{f}^H_{place}]$ through the last fully connected layer. Among the $N_P$ proposals, assume that the $p_1$-th proposal with the maximum $s_{pick}$ value is chosen for `pick-up', and the $p_2$-th proposal with the maximum $s_{place}$ value is chosen for `place'. Then, $\mathbf{H}_{pick}(p_1)$ and $\mathbf{H}_{place}(p_2)$, which represent the meaningful visual, language, and time information used to solve the current manipulation task, are appended to the set of history vectors $H$. \begin{table*} \centering \caption{Performance w/ or w/o the history module. In each cell, the three numbers denote the accuracy (\%) of pick/place/both. } \begin{tabular}{|l|l|l|l|l|} \hline & \multicolumn{4}{c|}{ Existence of History Dependency in Instructions} \\ \cline{2-5} & None & Phrase of Pick & Phrase of Place & Phrase of Both \\ \hline w/ History Module & 89.2 / 80.7 / 73.0 & \textbf{95.5} / 63.5 / \textbf{62.2} & \textbf{89.0} / \textbf{62.7} / \textbf{57.3} & \textbf{97.5} / \textbf{49.2} / \textbf{49.2} \\ \hline \begin{tabular}[c]{@{}l@{}}w/o History Module\end{tabular} & \textbf{91.5} / \textbf{86.5} / \textbf{79.5} & 82.7 / \textbf{69.2} / 56.4 & 85.5 / 52.2 / 44.3 & 90.7 / 36.4 / 34.7 \\ \hline \end{tabular} \label{table:ablation} \end{table*} \section{Experiments} \label{sec:exp} \subsection{Network Training} Based on the training dataset composed of $1000$ tasks of history-dependent manipulation, our model is trained in an end-to-end way. Note that a task consists of a series of pick-and-place manipulations, and our dataset provides several instructions describing each manipulation.
Therefore, to iterate over one task, one sentence is randomly sampled as the input for each manipulation. For the word embedding vectors, we used the pretrained model from \cite{glove}. For each manipulation, after the network generates the scores $s_{pick}$ and $s_{place}$ for the generated proposals, a binary cross-entropy function is used to compute the loss for each sub-task, denoted $\mathcal{L}_{pick}$ and $\mathcal{L}_{place}$. At the beginning of the training phase, the ground truth bounding box information is given to the network to generate the history vectors $H$. After that, we gradually increase the ratio of feeding the estimated bounding box information to the network to generate $H$, so that the network is gradually exposed to its own generation errors. This method is based on the existing study of scheduled sampling \cite{scheduled_sampling}. In addition to the classification losses, the losses from the RPN and the box-regression layer of the image module, denoted $\mathcal{L}_{RPN}$ and $\mathcal{L}_{reg}$, are computed in the same way as proposed in \cite{faster_rcnn}. Then, the final loss for a single pick-and-place manipulation becomes \begin{equation} \mathcal{L}=\lambda_1(\mathcal{L}_{pick}+\mathcal{L}_{place})+\lambda_2 \mathcal{L}_{RPN} + \lambda_3 \mathcal{L}_{reg}, \nonumber \end{equation} where $\lambda_i$ is a balancing hyper-parameter. In our experiments, we used $\lambda_1=0.5, \lambda_2=0.05, \lambda_3=1.0$. After averaging $\mathcal{L}$ over all manipulations composing a task, it is minimized by the Adam optimizer \cite{adam} with a learning rate of $0.0001$. \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{fig/qual_res.png} \end{center} \caption{An example of a qualitative result. The task proceeds from top to bottom. Input depth images are omitted due to the lack of space, and more results can be found in the supplementary material. For the input instructions, please refer to the $x$-axis values of the language attentions. In the $x$-axis labels of the generated attentions for history vectors, \texttt{pick\_i} and \texttt{place\_i} denote the history vector stored after the $i$-th manipulation. Note that in the 3rd manipulation, the `left purple block' exists but is occluded by the yellow block, which was manipulated in the 2nd operation. } \label{fig:qual_exp} \end{figure*} \subsection{Qualitative Result} Figure~\ref{fig:qual_exp} shows an example of a qualitative result, where the trained network estimates the target object and position based on a series of input image and language pairs from our test dataset, ordered from top to bottom. For the input text instructions, please refer to the $x$-axis values of the language attentions. The generated objectness attention, where the yellow area indicates a higher attention value, shows that the image module has successfully learned to focus on the areas where objects are located. Regarding the generated attentions for the text instruction, proper attention is given to the phrases related to `pick-up' or `place'. However, the attention generated for the task history does not seem sufficient to explain our model's inference results. Examining other cases from the test dataset, we observed that the history attention tends to be distributed more evenly than the language attention, so that the entire task history can be considered.
To generate a history attention that is more suitable for explaining the reasoning process of the model, we claim that a separate agent generating the corresponding attention is necessary; this will be our future work. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig/dist_exp.png} \end{center} \caption{The accuracy with respect to the history dependency distance of the pick-and-place manipulations. Note that a larger history dependency distance means that the instruction can only be understood by perceiving task history from longer ago.} \label{fig:dist_exp} \end{figure} \subsection{Quantitative Result} To demonstrate the effectiveness of referring to the task history, we conducted an ablation study comparing the task accuracy of our network with and without the history module. To verify the performance, the accuracy of single pick-and-place manipulations was measured. If the IoU (Intersection over Union) between the ground truth and predicted bounding boxes is greater than $0.5$, the result is counted as correct. For measuring the performance of the proposed model with the history module, we followed the studies of visual dialog~\cite{visual_dialog} and measured the accuracy of the $n$-th manipulation by allowing the model to encode the history correctly based on the ground truth bounding box information of `pick-up' and `place' until the ($n$-1)-th manipulation. When training the proposed model without a history module, since it does not have the ability to understand the task history, only the training data of pick-and-place manipulations without history dependency were employed. Table~\ref{table:ablation} shows the measured accuracy for each model. To subdivide the results, the accuracies of \texttt{PICK}, \texttt{PLACE}, and \texttt{BOTH} are shown. Here, \texttt{PICK} and \texttt{PLACE} refer to the accuracy of classifying the target object or position, and \texttt{BOTH} refers to the accuracy of classifying both the target object and position. `Existence of history dependency in instructions' denotes whether the task history needs to be referred to in order to interpret the phrases related to the target object (phrase of pick) or position (phrase of place). Here, `None' denotes that the instruction does not require understanding the history, and `Phrase of Both' denotes that the history needs to be perceived to understand the phrases of both the target object and position. Based on the results, we emphasize that the proposed model with a history module performs better when understanding instructions with a history dependency. However, the results also show that the model without a history module can perform better on instructions that do not require history dependency. We claim that this is because the model with a history module always refers to the task history. In other words, we suspect that the obtained history feature is a hindrance when the model tries to understand text instructions that do not require history dependency. The overall accuracy of \texttt{PLACE} is low. We claim that this is because the phrases describing the target position have much more linguistic complexity than those describing the target object. Meanwhile, the performance of the model without a history module is not bad when the history needs to be understood to interpret the phrase describing the target object.
We suspect that the model has exploited a characteristic of the target object candidates: objects that have not yet been manipulated tend to be placed in random orientations, while objects that have already been manipulated tend to be placed upright. Figure~\ref{fig:dist_exp} shows how the accuracy of each model changes with respect to the history dependency distance of the instructions. Based on this result, we claim that the proposed model with a history module is more robust than the one without, since its accuracy remains more stable even when older task history must be perceived. Although the overall performance is not remarkable, the results show that the proposed model with a history module performs better on tasks that require reference to the task history, and we believe that our model can serve as a baseline for future studies. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig/real_exp.png} \end{center} \caption{An example of applying our network in the real world. The instruction is grounded in the image transferred by CycleGAN \cite{CycleGAN}. In the transferred images, white and red boxes show the predicted target object and position. In the depth images, brighter or yellow areas indicate closer distances to the sensor.} \vspace{-5mm} \label{fig:real_exp} \end{figure} \subsection{Real World Experiment} Our network, trained on synthetic data, has also been applied to the real world. As shown in Figure~\ref{fig:real_exp}, we attached a RealSense camera to the Baxter robot's right hand and had the robot place its arm at a fixed position to observe the workspace. To bridge the gap between RGBD images from the real world and the simulator, we employed CycleGAN \cite{CycleGAN}, which can translate images between different domains. To train CycleGAN, we collected 800 RGBD images of the real-world workspace and used the simulator-based images from our training dataset. As shown in Figure~\ref{fig:real_exp}, the trained CycleGAN transforms new real-world RGBD images to look like those from the simulator, so that our model can execute the task on the transferred images. \section{Conclusion} \label{sec:conclusion} In this paper, we introduced the task of history-dependent manipulation, which aims to enable a robot to refer to its task history when executing a series of manipulations specified by text instructions. To address the challenges of this task, we proposed a synthetic dataset and a deep neural network based methodology, which can provide a baseline for relevant future studies. Experiments show that our network learns to interpret the image, text, and task history through attention mechanisms that determine which parts of the information to focus on. Using CycleGAN, our network trained on the synthetic dataset was also applied to the real world, suggesting the scalability of our work.
{ "timestamp": "2020-12-17T02:19:21", "yymm": "2012", "arxiv_id": "2012.08977", "language": "en", "url": "https://arxiv.org/abs/2012.08977" }
\section{Introduction} \label{sec:intro} \noindent It is useful to distinguish between simple and complex cells in the visual cortex, as proposed by \cite{hubel-1962}. The original classification is based on four characteristic properties of simple cells, as follows. Firstly, the receptive field has distinct excitatory and inhibitory subregions. Secondly, there is spatial summation within each subregion. Thirdly, there is antagonism between the excitatory and inhibitory subregions. Fourthly, the response to any stimulus can be predicted from the receptive field map. Cells that do not show these characteristics can be classified, by convention, as complex. The distinction between simple and complex cells has endured, subject to certain qualifications (although an alternative view is described by \citealp{mechler-2002a}). In particular, it has been argued that the principal characteristic of complex cells is the \emph{phase invariance} of the response \citep{movshon-1978b,carandini-2006}. This means that a complex cell, which is tuned to a particular orientation and spatial frequency, is not sensitive to the precise \emph{location} of the stimulus within the receptive field \citep{kjaer-1997,mechler-2002b}. If the stimulus consists of a moving (or flickering) grating, then phase invariance can be quantified by the relative modulation $a_1/a_0$, where $a_1$ is the amplitude associated with the fundamental frequency of the response, and $a_0$ is the mean response \citep{movshon-1978b,devalois-1982,skottun-1991}. If this ratio is less than one, then the cell can be classified as complex. The standard model of the simple cell is based on a linear filter that is localized in position, spatial frequency and orientation. The output of this filter is subject to a nonlinearity, such as squaring, and may also be normalized by the responses of nearby simple cells. The theoretical framework of this model is well advanced (as reviewed by \citealp{dayan-2001,carandini-2005}), owing to the assumed linearity of the underlying spatial filters. Although the physiology of the complex cell is increasingly well-understood (as reviewed by \citealp{spitzer-1988,alonso-1998,martinez-2003}), the appropriate theoretical framework is less clear. It is useful to adopt the distinction between `position' and `phase' models that was made by \cite{fleet-1996} in the analysis of binocular processing. It is \emph{invariance} to position or phase that is of interest in the present context. A position-invariant model of the complex cell can be constructed from a set of simple cells, as described by \cite{hubel-1962}. Suppose, for example, that the simple cells are represented by linear filters of a common orientation, but different spatial positions. If the responses of these filters are summed, then the corresponding complex cell will be tuned to an oriented element that appears anywhere in the union of the simple cell receptive fields \citep{spitzer-1988}. This scheme is the basis of the \emph{subunit model}, which was further developed and tested by \cite{movshon-1978a,movshon-1978b}. The results of these experiments are consistent with the idea that the complex response is based on a group of spatially \emph{linear} subunits. The most straightforward way to combine the individual responses would be by rectifying and summing them, but the maximum could also be taken \citep{riesenhuber-1999}. The main disadvantage of the subunit model is that it is \emph{too general} in its basic form.
For example, it allows arbitrarily complicated receptive fields to be constructed, as there is no intrinsic constraint on the positions, orientations or spatial frequencies of the subunits. Alternatively, a phase-invariant model can be constructed from a pair of filters of different shapes \citep{adelson-1985}. If the filters have a quadrature relationship, then their responses to a sinusoidal stimulus will differ in phase by $\pi/2$. It follows that the sum of the squared outputs will be invariant to the phase of the input. The application of this \emph{energy model} to spatial vision \citep{morrone-1988,emerson-1992,atherton-2002,wundrich-2004} is motivated by the observed phase-differences in the receptive fields of adjacent simple cells \citep{pollen-1981}. Indeed, these receptive fields can be well-represented by odd and even Gabor functions \citep{daugman-1985,jones-1987,pollen-1983}. There are two problems with the energy model, in the present context. Firstly, there is no generally agreed way to combine energy mechanisms across different frequencies and orientations (one approach is described by \citealp{fleet-1996}). This is an obstacle to the construction of mechanisms that show more complicated invariances, such as those found in areas MT and MST \citep{orban-2008}. Secondly, the quadrature filters that are best-suited to the energy model are not convenient for the general description of \twod\ image-structure. The concept of phase itself becomes somewhat complicated in two or more dimensions \citep{felsberg-2001}, and the quadrature representation of more complex image-features, such as edge curvature, is unclear. It must be emphasized that \twod\ images contain important structures (e.g.\ luminance saddle-points) that have no analogue in the \oned\ signals to which the energy model is ideally suited. A realistic framework for spatial vision must be capable of representing the full variety of \twod\ structures \citep{dobbins-1987,petitot-2003,shahar-2004}. The present work is motivated by the difficulty of extending the energy model to more complicated \twod\ image-features and spatial transformations. These extensions require a representation of the local image \emph{geometry}, rather than the local phase and frequency structure. The new approach, like the energy model, is based on a set of odd and even linear filters that are located at the same position. The outputs of these filters are nonlinearly combined, again as in the energy model. The combination, however, involves the implicit construction of spatially-offset subunits. The complete model, in this sense, can be seen as a re-formulation of the \cite{hubel-1962} scheme. \subsection{Formal Overview} \label{sec:present} \noindent A minimal overview of the new model will now be given. Let $S(\mat{x})$ be the original scalar image, where $\mat{x} = (x,y)^\tp$, and consider a spatial array of simple cells, parameterized by preferred frequency and orientation. These receptive fields will be modelled by $k$-th order directional derivatives of the Gaussian kernel, \mbox{$G_k(\mat{x},\sigma,\theta) = (\mat{v}\cdot\nabla)^k G_0(\mat{x},\sigma)$}, where $\sigma$ is the spatial scale, $\theta$ is the orientation, and $\mat{v} = (\cos\theta,\sin\theta)^\tp$. The simple cell representation $S_k(\mat{x},\sigma,\theta)$ is given by the convolution of these filters with the image: \begin{equation} S_k(\mat{x},\sigma,\theta) = G_k(\mat{x},\sigma,\theta)\star S(\mat{x}).
\label{eqn:simple} \end{equation} In particular, consider the magnitude of the first derivative signal, $|S_1(\mat{x},\sigma,\theta)|$. This will be large if there is a step-like edge at $\mat{x}$, with the luminance boundary perpendicular to $\mat{v}$. Now suppose that the edge is shifted by some amount in direction $\mat{v}$. This means that the magnitude $|S_1(\mat{x},\sigma,\theta)|$ will fall, but the nonlinear function \begin{equation} C(\mat{x},\sigma,\theta) = \underset{t}{\max}\mspace{4mu} \bigl| S_1(\mat{x}+t\mat{v},\sigma,\theta) \bigr|, \eqwhere |t| \le \rho \label{eqn:complex} \end{equation} will remain large, unless the shift exceeds the range $\rho$. Equations (\ref{eqn:simple} \& \ref{eqn:complex}) will be the basic models of simple cells $S_k(\mat{x},\sigma,\theta)$ and complex cells $C(\mat{x},\sigma,\theta)$ in this paper (full derivations are given in sec.~\ref{sec:offset}). The complex cell, which inherits the scale and orientation tuning $(\sigma,\theta)$, has a receptive field of radius $\rho$, centred on position $\mat{x}$. It can be seen that (\ref{eqn:complex}) is just a special case of the \cite{hubel-1962} subunit model, with simple cells distributed along the spatial axis $\mat{v}$, and `max' being the combination rule. It has already been argued, in section \ref{sec:intro}, that this model is too general. For example, there is no natural limit on the size $\rho$ of the complex receptive field in (\ref{eqn:complex}). Suppose, however, that access to the first-order directional structure \emph{around} position $\mat{x}$ is replaced by access to the higher-order directional structure \emph{at} position $\mat{x}$. Mathematically, this means that the function $S_1(\mat{x}+t\mat{v},\sigma,\theta)$ of the scalar $t$ is replaced by the values $S_k(\mat{x},\sigma,\theta)$ indexed by $k=1,\ldots, K$. This is interesting for three reasons. Firstly, the model becomes inherently local, because the filters $G_k$ that compute the values $S_k$ are now centred at the same point $\mat{x}$. Secondly, the filters $G_k$ are symmetric or antisymmetric about the point $\mat{x}$, and resemble the Gabor functions used in the energy model. Thirdly, the values $S_k$ can be obtained from a linear transformation of the $K$-th order local jet at $\mat{x}$, and so this scheme is compatible with scale-space theory (reviewed at the start of section \ref{sec:offset}). To be more specific, it will be shown that the first-order structure in the neighbourhood $\mat{x}+t\mat{v}$, as in (\ref{eqn:complex}), can be estimated from a linear combination of the directional derivatives $S_k$, \begin{equation} S_1(\mat{x}+t\mat{v},\sigma,\theta) \approx \sum_{k=1}^K P_k(t)\mspace{2mu} S_k(\mat{x},\sigma,\theta) \label{eqn:approx-simple} \end{equation} where the functions $P_k(t)$ are fixed polynomials. This approximation will then be substituted into the right-hand side of (\ref{eqn:complex}). It will be shown in section \ref{sec:functional} that the approximation (\ref{eqn:approx-simple}) can be motivated by a Maclaurin expansion in powers of $t$. This can also be interpreted, as shown in figure~\ref{fig:construct}, as the synthesis of spatially offset filters, using the Gaussian derivatives as a basis. A matrix formulation of this model will be given in section \ref{sec:matrix}. An optimal (and image-independent) construction of the polynomials $P_k(t)$ will be given in section \ref{sec:unconstrained}. The case in which the derivatives on the right-hand side of (\ref{eqn:approx-simple}) are in another direction $\phi \ne \theta$ is treated in section \ref{sec:constrained}.
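Before developing the model formally, a minimal numerical sketch may help to fix ideas. The following fragment (our own illustration, not the implementation used in the paper) computes the simple response (\ref{eqn:simple}) and the complex response (\ref{eqn:complex}) for the special case $\theta=0$, in which the maximization over $|t|\le\rho$ becomes a running maximum along each image row; general orientations require the steerable machinery of section \ref{sec:offset}. The default values $\sigma=2$ and $\rho=3$ pixels correspond to $\rho=1.5\sigma$.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter1d

def complex_response_theta0(image, sigma=2.0, rho=3):
    """S_1 for theta = 0, then C = max over offsets |t| <= rho (pixels),
    taken along the direction of differentiation v = (1, 0)."""
    s1 = gaussian_filter(image.astype(float), sigma, order=(0, 1))
    return maximum_filter1d(np.abs(s1), size=2 * rho + 1, axis=1)
\end{verbatim}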
\subsection{Contributions \& Organization} \label{sec:overview} \noindent The model presented in this paper is quite different from previous approaches, as explained above. The main contribution is a `differential' model of the complex cell, which is exactly steerable, and which fits naturally into the geometric approach to image analysis \citep{koenderink-1987,koenderink-1990}. This shows that it is possible to analyze the local image geometry, and to obtain a shift-invariant response, using a common set of filters. The body of the paper is organized as follows. The new model is developed in section \ref{sec:offset}, using linear algebra and least-squares optimization. The accuracy of the estimated filters is evaluated in section~\ref{sec:evaluation}. This section also derives the exact response of the ideal filters to several basic stimuli. Some preliminary experiments on natural images are reported. The biological interpretation of the model, and its predictions, are discussed in section \ref{sec:discussion}. Future directions, and the relationship of the model to scale-space theory \citep{koenderink-1984}, are also discussed. \section{Differential Model} \label{sec:offset} \noindent The following notation will be used here. Matrices and vectors are written in bold, e.g.\ $\mat{M}$, $\mat{v}$, where $\mat{M}^\tp$ is the transpose, and $\mat{M}^+$ is the Moore-Penrose inverse \citep{press-1992}. The convolution of functions is $F(x)\star G(x) = \int_{-\infty}^\infty F(x-y)\, G(y)\, \mathrm{d}y$. Some properties of the Gaussian derivatives $G_k(x,\sigma)$ will now be reviewed. There is no particular spatial scale at which a natural image should be analyzed. It is therefore desirable to represent the image in a \emph{scale space}, so that a range of resolutions can be considered \citep{koenderink-1987}. The preferred way to do this is by convolution with a Gaussian kernel. It follows that the structure of the image, at a given scale, can be analyzed via the spatial derivatives of the corresponding Gaussian. The \mbox{$k$-th} order derivatives of a \oned\ Gaussian, $G_k = \mathrm{d}^k\!/\mathrm{d}x^k\, G_0$, can be expressed as \begin{align} G_k(x,\sigma) &= \left( \frac{-1}{\sigma\sqrt{2}} \right)^{\!k} H_k\left( \frac{x}{\sigma\sqrt{2}} \right)\, G_0(x,\sigma) \label{eqn:deriv} \\[1ex] G_0(x,\sigma) &= \exp\left(\frac{-x^2}{2\sigma^2}\right) \label{eqn:gderivs} \end{align} where $G_0(x,\sigma)$ is the original Gaussian, $k$ is a positive integer, and $H_k(x)$ is the \mbox{$k$-th} Hermite polynomial. The first seven Gaussian derivatives are shown in column one of figure \ref{fig:construct}. It will also be useful to introduce two normalizations of the Gaussian derivatives: \begin{equation} G_k^0(x,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, G_k(x,\sigma) \eqand G_k^1(x,\sigma) = \frac{1}{2}\, G_k(x,\sigma) \label{eqn:gauss-normalized} \end{equation} which are defined so that $\int |G_k^k(x)|\, \mathrm{d}x = 1$ for $k=0,1$. In particular, $G_0^0$ and $G_1^1$ are the \mbox{$L^1$-normalized} blurring and differentiating filters, respectively. This superscript notation will not be used unless a particular normalization is important (e.g.\ in sec.~\ref{sec:response}).
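For illustration, the expressions (\ref{eqn:deriv} \& \ref{eqn:gderivs}) can be evaluated directly. The following minimal Python sketch (our own, not part of the formal development) uses the physicists' Hermite polynomials, and checks $G_1 = \mathrm{d}G_0/\mathrm{d}x$ by finite differences.

\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval

def gaussian_derivative(x, k, sigma):
    """Unnormalized k-th Gaussian derivative G_k(x, sigma)."""
    u = x / (sigma * np.sqrt(2.0))
    h = np.zeros(k + 1)
    h[k] = 1.0                      # coefficient vector selecting H_k
    return ((-1.0 / (sigma * np.sqrt(2.0))) ** k
            * hermval(u, h) * np.exp(-x**2 / (2.0 * sigma**2)))

# sanity check: G_1 is the derivative of G_0
x = np.linspace(-6, 6, 1001)
assert np.allclose(np.gradient(gaussian_derivative(x, 0, 1.0), x),
                   gaussian_derivative(x, 1, 1.0), atol=1e-3)
\end{verbatim}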
The two-dimensional Gaussian derivative, in direction $\theta$ with $\mat{x}=(x,y)^\tp$, will be written $G_k(\mat{x},\sigma,\theta) = (\mat{v}\cdot\nabla)^k G_0(\mat{x},\sigma)$, as in section \ref{sec:present}. Two special properties of these filters should be noted. Firstly, the filter $G_k(\mat{x},\sigma,\theta)$ is \emph{separable} in the local coordinate-system that is defined by the direction of differentiation. This means that the \twod\ filter can be obtained from the product of \oned\ filters $G_k(x_\theta,\sigma)$ and $G_0(y_\theta,\sigma)$. Secondly, the Gaussian derivatives are \emph{steerable}, meaning that $G_k(\mat{x},\sigma,\theta)$ can be obtained from a linear combination of derivatives in other directions, $G_k(\mat{x},\sigma,\phi_j)$, where $j=1,\ldots, k+1$. These facts make it possible to analyze a multidimensional filter, in many cases, in terms of \oned\ functions \citep{freeman-1991}. The first derivative, $G_1$, will be used as the basic model of a complex subunit (which is also a simple cell receptive field). This choice is motivated by two observations. Firstly, it is well established that gradient filters can be used to detect edges, as well as more complex image-features \citep{canny-1986,harris-1988}. Secondly, $G_1$ is the first zero-mean filter in the local-jet representation of the image, which is physiologically and mathematically convenient. The extension to higher-order subunits is straightforward, as discussed in section \ref{sec:extensions}. \subsection{Filter Arrays} \label{sec:arrays} \noindent This section will put the system of simple cells, introduced in section \ref{sec:present}, into a standard signal processing framework. This will be done in \oned, in order to simplify the notation. The extension to \twod\ is straightforward. The \oned\ version of the simple cell response (\ref{eqn:simple}) is $S_k(u,\sigma) = \int_{-\infty}^\infty G_k(u-x,\sigma) S(x) \mathrm{d}x$. If $k=1$, and the filter $G_1$ is offset by an amount $t$, then the convolution can be expressed as \begin{equation} \begin{aligned} S_1(t-u,\sigma) &= \int_{-\infty}^\infty G_1(t-u-x,\sigma)\, S(x) \, \mathrm{d}x \\ &= -\int_{-\infty}^\infty G_1(x-t,\sigma)\, S(x-u) \, \mathrm{d}x. \end{aligned} \label{eqn:corrl} \end{equation} Here the antisymmetry of $G_1(x,\sigma)$ has been used to show that the result is equal to the negative \emph{correlation} of the filter and signal. It is evident that if $t$ could be kept equal to $u$, then the signal shift would have no effect on the result. The prototypical stimulus for $G_1$ is the step-edge $S(x)=\mathrm{sgn}(x)$. In this case the filter and signal are anti-correlated, and so the final response (\ref{eqn:corrl}) is non-negative. \subsection{Functional Representation} \label{sec:functional} \noindent It was established in section \ref{sec:arrays} that the response $S_1(t-u,\sigma)$ can be constructed from the offset filters $G_1(x-t,\sigma)$. This means that the desired approximation (\ref{eqn:approx-simple}) can be treated as a filter-design problem. The following notation will be adopted for the offset filters: \begin{equation} \begin{aligned} F(0,x) &= G_1(x,\sigma)\\ F(t,x) &\approx G_1(x-t,\sigma) \end{aligned} \label{eqn:base} \end{equation} which also depend on the spatial scale and derivative order, but it will not be necessary to make this explicit in the notation. It will suffice to analyze a single filter which, without loss of generality, is located at the origin $\mat{x}=(0,0)^\tp$ of the spatial coordinate-system.
The \emph{linear response} of this filter is defined in relation to (\ref{eqn:corrl}) as \begin{equation} R(t,u) = -\int_{-\infty}^\infty F(t,x)\, S(x-u) \, \mathrm{d}x. \label{eqn:resp} \end{equation} The complex response at $\mat{x}=(0,0)^\tp$, with reference to (\ref{eqn:complex}), can now be expressed as a function of the signal translation $u$: \begin{equation} C(u) = \max_t\, \bigl| R(t,u) \bigr| \eqwhere |t| \le \rho. \label{eqn:max} \end{equation} The actual value of $u$, in general, has no particular significance. It will be more important to consider the response $R(t,u)$ as $u$ changes. In particular, suppose that $|R(t,u)|$ is high at the stimulus position $u=u_0$. If the response is insensitive to slight translation of the signal, then $\partial^2 C \big/ \partial u^2 \approx 0$ at $u_0$. The approximation problem in (\ref{eqn:base}) will now be addressed. The filter $F(t,x)$ can be defined in relation to the Maclaurin expansion of $G_1(x-t,\sigma)$ with respect to the offset $t$, as indicated in (\ref{eqn:base}). If image-derivatives up to order $K$ are available, then the approximation is \begin{equation} F(t,x) = \sum_{k=0}^{K-1} \frac{(-t)^k}{k!}\, G_{k+1}(x,\sigma). \label{eqn:deriv-series} \end{equation} The key observation is that the filters $G_{k+1}(x,\sigma)$ in (\ref{eqn:deriv-series}) are precisely those that compute the local jet coefficients, of order $1,\ldots, K$, at the point $x=0$. In other words, the family of shifted filters $F(t,x)$ has been obtained from the family of \emph{non-shifted} derivatives $G_k(x,\sigma)$. It can be seen from (\ref{eqn:deriv} \& \ref{eqn:gderivs}) and (\ref{eqn:deriv-series}) that the estimated function $F(t,x)$ decreases to zero for large $|x|$, owing to the exponential tails of $G_0$. The definition (\ref{eqn:deriv-series}) is usable in practice, as will be shown in section \ref{sec:error}. There are, however, two difficulties with the scheme described above. Firstly, although $F(t,x)$ is an approximation of $G_1(x-t,\sigma)$, the nature of this approximation (a polynomial with the given derivatives) may be inappropriate. Secondly, as expected, the approximation (\ref{eqn:deriv-series}) is not well-behaved for large $|t|$. Both of these problems can be addressed by replacing the Maclaurin series with a more flexible construction of $F(t,x)$. This is done by substituting a general polynomial $P_k(t)$ in place of the monomial $(-t)^{k-1}/(k-1)!$ that weights $G_k(x,\sigma)$ in (\ref{eqn:deriv-series}), leading to \begin{equation} F(t,x) = \sum_{k=1}^K P_k(t)\, G_k(x,\sigma). \label{eqn:filt} \end{equation} The $K$ polynomials $P_k(t)$ are constructed from standard monomial basis functions $t^j$ and coefficients $c_{jk}$. The order of each polynomial will be $K-1$, for consistency with the original series approximation (\ref{eqn:deriv-series}). It follows that the polynomials are \begin{equation} P_k(t) = \sum_{j=0}^{K-1}\, t^j c_{jk} \eqwhere 1 \le k \le K. \label{eqn:poly} \end{equation} The problem has now been altered to that of finding $K^2$ appropriate coefficients $c_{jk}$. This will be treated, in sections \ref{sec:unconstrained} and \ref{sec:constrained}, as the optimization of \[ \arg\min_{\mat{C}} \iint \, \bigl|F(t,x)-G_1(x-t,\sigma)\bigr|^2\mspace{3mu}\mathrm{d}x\,\mathrm{d}t \] where $F(t,x)$ is the family of filters defined in (\ref{eqn:filt}), and $\mat{C}$ is the matrix of coefficients $c_{jk}$. This optimization scheme generalizes immediately to filters in any number of dimensions.
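As a concrete illustration of (\ref{eqn:deriv-series}), the following sketch (our own; the helper repeats the Gaussian-derivative fragment shown earlier) synthesizes an offset filter from non-shifted derivatives at the origin.

\begin{verbatim}
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def gaussian_derivative(x, k, sigma):   # as in the earlier sketch
    u = x / (sigma * np.sqrt(2.0))
    h = np.zeros(k + 1); h[k] = 1.0
    return ((-1/(sigma*np.sqrt(2)))**k * hermval(u, h)
            * np.exp(-x**2 / (2*sigma**2)))

def offset_filter_maclaurin(t, x, sigma, K):
    """Maclaurin approximation F(t,x) of G_1(x - t, sigma), built from the
    non-shifted derivatives G_1, ..., G_K."""
    return sum((-t)**k / math.factorial(k) * gaussian_derivative(x, k+1, sigma)
               for k in range(K))
\end{verbatim}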
The simple Maclaurin scheme (\ref{eqn:deriv-series}) remains a useful model, because the optimal polynomials are, in practice, close to the original monomials $P_k(t)\approx t^{(k-1)}$, as can be seen in figure~\ref{fig:construct}. It is important to note that, once the coefficients $c_{jk}$ have been estimated, the location of the synthetic filter $F(t,x)$ can be varied \emph{continuously} with respect to the offset $t$. Any set of translated filters $F(t_i,x)$ can be obtained, provided $|t_i|\le \rho$ for $i=1,\ldots,M$, by re-sampling the monomial basis functions as $t_i^j$, and then repeating (\ref{eqn:filt} \& \ref{eqn:poly}). Furthermore, the principle of \emph{shiftability} states that the convolution $f(x-t)\star s(x)$ can be represented in a finite basis $f(x-t_i)$, provided that the filter $f$ is bandlimited \citep{simoncelli-1992,perona-1995}. The filters $F(t,x)$ are not bandlimited, but they do decay exponentially, as the Fourier transforms have a Gaussian factor. This means, in practice, that the linear response $R(t,u)$ in (\ref{eqn:resp}) can be represented by a suitable discretization $R(t_i,u)$, where the shift-resolution $\Delta t = 2\rho / (M-1)$ can be chosen to achieve any desired accuracy. \subsection{Matrix Representation} \label{sec:matrix} \noindent It will be convenient to represent the filter construction in terms of matrices. This results in a compact formulation, and prepares for the least-squares estimation procedure that will be introduced in section \ref{sec:unconstrained}. Suppose that $M$ filters, each of length $N$, are to be constructed, and that the highest available derivative is of order $K$. Each filter will be represented as a row-vector, so that the collection of offset filters forms an $M\times N$ matrix $\mat{F}$. Note that this representation applies in any number of dimensions, provided that the positions of the filter-samples are consistently identified with the column-indices of $\mat{F}$. The columns of another matrix $\mat{P}$ will contain the $K$ polynomials $P_k(t)$ from equation (\ref{eqn:poly}). These polynomials must be sampled at $M$ points $t_i$, hence $\mat{P}$ has dimensions $M\times K$. Let the sampled monomial basis functions $t_i^j$ be the columns of the matrix $\mat{B}$, which must therefore have the same dimensions, $M\times K$. The monomials are weighted by the $K\times K$ matrix of coefficients $\mat{C}$, such that \begin{equation} \mat{P} = \mat{B} \mat{C} \label{eqn:poly-mat} \end{equation} where column $k$ of $\mat{C}$ contains the coefficients of $P_k$. Let each row of the $K\times N$ matrix $\mat{G}$ contain the sampled Gaussian derivative $G_k(x_j,\sigma)$. Each offset filter should be a linear combination of the Gaussian derivatives, constructed from the polynomials $P_k$. It follows from (\ref{eqn:filt}) that \begin{equation} \mat{F} = \mat{P}\mat{G}. \label{eqn:filt-mat} \end{equation} Let the column-vector $\mat{s}$ contain the sampled signal, $ \mat{s} = \bigl(S(x_1),\ldots,S(x_N)\bigr)^{\!\tp} $. This means that the response-vector $ \mat{r} = \bigl(R(t_1,u),\ldots,R(t_M,u)\bigr)^{\!\tp} $ is obtained according to (\ref{eqn:resp} \& \ref{eqn:filt-mat}) as \begin{equation} \begin{aligned} \mat{r} &= -\mat{F}\mat{s}\\[.25ex] &= -\mat{P} \mat{G} \mat{s}. \end{aligned} \label{eqn:resp-vec} \end{equation} This clearly shows that the response $\mat{r}$ is simply a linear transformation, $-\mat{P}$, of the \mbox{$K$-th} order Gaussian jet, $ \mat{G}\mat{s} = \bigl(S_1(x,\sigma),\ldots,S_K(x,\sigma)\bigr)^{\!\tp} $.
The implication is that the filter-bank $\mat{F}$ need not be explicitly constructed; rather, the response $\mat{r}$ is computed directly from the $K$ image derivatives $S_k$ at $\mat{x}$. \begin{figure}[!ht] \begin{center} \includegraphics{fig-01-cairo} \caption{{Construction of offset filters}. \textbf{Column 1:} The Gaussian derivatives $G_k(x,\sigma)$, scaled for display, of orders $1,\ldots,K$, where $K=7$. \textbf{Column 2:} The corresponding polynomial interpolation functions $P_k(t)$, of order $K-1$. Note that $P_k(t)$ resembles the monomial $t^{(k-1)}$. \textbf{Column 3:} Estimated filters $F(t_j,x)$, which are offset versions of $G_1(x,\sigma)$. \textbf{Column 4:} Ideal filters $F_\star(t_j,x)$. The synthesis equation is $F(t_j,x) = \sum_k P_k(t_j) G_k(x,\sigma)$, where each weight $P_k(t_j)$ corresponds to the $j$-th dot on the $k$-th polynomial in column two. } \label{fig:construct} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics{fig-02} \caption{{Synthesis of orientation-tuned subunits}. The nine filters were synthesized from eight oriented derivatives centred at the origin (with $\sigma=1$). The steered-additive solution of section~\ref{sec:addsteered} was used. \textbf{Middle row:} The $G_1$ filter is steered to the axes of a hexagonal lattice; $\theta = 0^\circ\!, 60^\circ\!, 120^\circ$. \textbf{Top and bottom rows:} Offset filters, synthesized at shifts of $t=\pm \rho$ in the corresponding directions (with $\rho=1.5$). Note that each column can be interpreted as three subunits of an orientation-tuned complex cell. The filter amplitudes are scaled to the range $[-1,1]$, with contour lines separated by increments of 0.2 units.} \label{fig:hexfilt} \end{center} \end{figure} \subsection{Unconstrained Estimation} \label{sec:unconstrained} \noindent It will now be shown that the $K^2$ unknown coefficients, contained in the matrix $\mat{C}$, can be obtained by standard least-squares methods. It should be emphasized that this is a \emph{filter-design} problem; the matrix $\mat{P}$ is fixed for all signals, and the response $\mat{r}$ is obtained according to (\ref{eqn:resp-vec}). Let the $M\times N$ matrix $\mat{F}_\star$ contain the \emph{true} derivative filters, such that the \mbox{$ij$-th} element is $G_1(x_j-t_i,\sigma)$. The approximation $\mat{F}\approx\mat{F}_\star$ can be expressed, according to (\ref{eqn:poly-mat} \& \ref{eqn:filt-mat}), as the product \begin{equation} \mat{F}_\star \approx \mat{B} \mat{C} \mat{G}, \quad\text{of which}\quad \mat{C} = \mat{B}^+ \mat{F}_\star \mat{G}^+ \label{eqn:approx} \end{equation} is the solution in the least-squares sense. This formulation requires two Moore-Penrose inverses, which can be computed from the singular-value decompositions of the monomial basis and Gaussian derivative matrices $\mat{B}$ and $\mat{G}$, respectively. It is, however, more efficient to solve this problem using \emph{QR} decompositions, as follows. There are, in practice, more offsets than derivative filters ($M > K$), as well as more spatial samples than derivative filters ($N > K$). The basis matrix has full column-rank $K$, and can be factored as $\mat{B}=\mat{Q}_B\mat{R}_B$. The derivative matrix has full row-rank $K$, and so its transpose can be factored as $\mat{G}^\tp=\mat{Q}_G\mat{R}_G$. It follows that the solution (\ref{eqn:approx}) can be obtained via $\mat{B}^+ = \mat{R}_B^{-1}\mat{Q}_B^\tp$ and $\mat{G}^+ = \mat{Q}_G \mat{R}_G^{-\!\tp}$.
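For concreteness, the following sketch (our own illustration; the grid sizes echo the evaluation protocol of section \ref{sec:error}) assembles $\mat{B}$, $\mat{G}$ and $\mat{F}_\star$ on discrete grids and solves (\ref{eqn:approx}) with Moore-Penrose pseudoinverses. The \emph{QR} route described above is an equivalent, more efficient alternative. The function \texttt{gaussian\_derivative} is the fragment given earlier.

\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval

def gaussian_derivative(x, k, sigma):   # as in the earlier sketches
    u = x / (sigma * np.sqrt(2.0))
    h = np.zeros(k + 1); h[k] = 1.0
    return ((-1/(sigma*np.sqrt(2)))**k * hermval(u, h)
            * np.exp(-x**2 / (2*sigma**2)))

sigma, rho, K, M, N = 1.0, 1.5, 7, 51, 101
t = np.linspace(-rho, rho, M)                  # offsets t_i
x = np.linspace(-6*sigma, 6*sigma, N)          # spatial samples x_j
B = np.stack([t**j for j in range(K)], axis=1)                 # M x K
G = np.stack([gaussian_derivative(x, k, sigma)                 # K x N
              for k in range(1, K + 1)])
F_star = np.stack([gaussian_derivative(x - ti, 1, sigma)       # M x N
                   for ti in t])

C = np.linalg.pinv(B) @ F_star @ np.linalg.pinv(G)   # C = B^+ F* G^+
F = B @ C @ G                                        # synthesized filters
print(np.sqrt(np.mean((F - F_star)**2)))             # RMS approximation error
\end{verbatim}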
\subsection{Constrained Estimation} \label{sec:constrained} \noindent The least-squares construction of the filters $\mat{F}$ was described in the preceding section. The method is quite usable, but has two shortcomings. Firstly, if one of the shifts $t_i$ is zero, then $F(0,x) \approx G_1(x,\sigma)$, but it would be preferable to make this an \emph{exact} equality, so that the original filter is returned as in (\ref{eqn:base}). The second shortcoming of the method in section \ref{sec:unconstrained} is that, in two or more dimensions, the \emph{orientation} of the derivative filters in the basis-set $\mat{G}$ may not match that of the target-set $\mat{F}_\star$. Both of these problems will be solved below. \subsubsection{Additive Solution} \label{sec:additive} The requirement $F(0,x) = G_1(x,\sigma)$ is satisfied by an \emph{additive} model, in which the polynomial $P_1(t)$ that weights $G_1(x,\sigma)$ is always unity, and all other polynomials pass through zero when $t=0$. This implies the following partitioning of the derivative, monomial and coefficient matrices: \begin{equation} \mat{G} = \begin{pmatrix} \mat{g}_1^\tp \\[.75ex] \mat{G}_\Delta \end{pmatrix} \quad \mat{B} = \begin{pmatrix} \mat{1} \mspace{10mu} \mat{B}_\Delta \end{pmatrix} \quad \mat{C} = \begin{pmatrix} 1 & \mat{0}^\tp \\[.5ex] \mat{0} & \mat{C}_\Delta \end{pmatrix} \end{equation} where $\mat{1}$ is the column-vector of $M$ ones, and $\mat{0}$ is the column-vector of $(K-1)$ zeros. The $1\times N$ vector $\mat{g}_1^\tp$ contains the first derivative filter $G_1(x,\sigma)$, while the $(K-1)\times N$ matrix $\mat{G}_\Delta$ contains the higher-order filters. The columns of the $M\times (K-1)$ matrix $\mat{B}_\Delta$ contain the sampled monomials, \emph{excluding} the constant vector $\mat{1}$. The unknown matrix $\mat{C}$ will be recovered in the form indicated, where $\mat{C}_\Delta$ has dimensions $(K-1)\times(K-1)$. The product of $\mat{B}$ and $\mat{C}$, as in (\ref{eqn:poly-mat}), now gives $\mat{P} = (\mat{1}\mspace{10mu} \mat{P}_\Delta)$, where the columns of $\mat{P}_\Delta=\mat{B}_\Delta\mat{C}_\Delta$ are polynomials \emph{without} constant terms. It follows that the product $\mat{P}\mat{G}$, as in (\ref{eqn:filt-mat}), gives the additive approximation \begin{equation} \mat{F}_\star \approx \mat{1}\mat{g}_1^\tp + \mat{B}_\Delta\mat{C}_\Delta\mat{G}_\Delta \label{eqn:additive} \end{equation} where $\mat{1}\mat{g}_1^\tp$ is the rank-one matrix containing $M$ identical rows $\mat{g}_1^\tp$. Note that if the \mbox{$i$-th} row of $\mat{F}_\star$ corresponds to $t=0$, then the \mbox{$i$-th} row of $\mat{B}_\Delta\mat{C}_\Delta$ must be zero, this being the evaluation of the polynomials $\sum_{j=1}^{K-1} t^j c_{jk}$ at $t=0$. It follows that the \mbox{$i$-th} row of $\mat{F}_\star$ is exactly recovered from (\ref{eqn:additive}) as $\mat{g}_1^\tp$, and so the constraint $F(0,x) = G_1(x,\sigma)$ has been imposed. The unknown coefficients $\mat{C}_\Delta$ are recovered by subtracting $\mat{1}\mat{g}_1^\tp$ from $\mat{F}_\star$, and then proceeding by analogy with (\ref{eqn:approx}). This leads to \[ \mat{C}_\Delta = \mat{B}_\Delta^+ \bigl(\mat{F}_\star-\mat{1}\mat{g}_1^\tp\bigr) \mat{G}_\Delta^+ \] where the matrices $\mat{B}_\Delta^+$ and $\mat{G}_\Delta^+$ can be obtained from the \emph{QR} factorizations of $\mat{B}_\Delta$ and $\mat{G}_\Delta$, as before.
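In code, the additive solution is a small modification of the unconstrained fit. The following sketch (our own illustration) takes the matrices $\mat{B}$, $\mat{G}$ and $\mat{F}_\star$ constructed as in the previous fragment, and implements the partitioned estimate.

\begin{verbatim}
import numpy as np

def solve_additive(B, G, F_star):
    """Additive constrained fit: P_1 = 1, higher polynomials vanish at t=0.
    B: M x K monomials, G: K x N derivatives, F_star: M x N target filters."""
    ones = np.ones((B.shape[0], 1))
    B_d = B[:, 1:]                 # monomials t^1, ..., t^{K-1}
    g1, G_d = G[:1, :], G[1:, :]   # first- and higher-order derivative rows
    C_d = (np.linalg.pinv(B_d)
           @ (F_star - ones @ g1) @ np.linalg.pinv(G_d))
    return ones @ g1 + B_d @ C_d @ G_d   # synthesized additive filters
\end{verbatim}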
\subsubsection{Steered Solution} \label{sec:steered} In two (or more) dimensions, it is assumed that the desired filters $G_1(\mat{x}-t\mat{v},\sigma,\theta)$ have a common orientation, where $\mat{v}=(\cos\theta,\sin\theta)^\tp$ is the direction of the derivative in \twod. This leads to invariance with respect to translations of the signal in the given direction. The basis filters $G_k(\mat{x},\sigma,\phi_\ell)$, however, will typically have a range of orientations $\phi_\ell \ne \theta$. This problem can be solved as follows. Recall from section \ref{sec:offset} that the \mbox{$k$-th} order Gaussian derivative is \emph{steerable} with a basis of size $k+1$. Now suppose that row $k$ of the matrix $\mat{G}$ is replaced by $k+1$ rows, containing sampled filters $G_k(\mat{x},\sigma,\phi_\ell)$ at $\ell=1,\ldots,k+1$ distinct orientations $\phi_\ell$. The enlarged matrix $\mat{G}_\phi$ now has dimensions $M_K \times N$, where \begin{equation} M_K = \sum_{k=1}^K (k+1) = \tfrac{1}{2} K(K+3). \label{eqn:steernum} \end{equation} It follows that there is a $K\times M_K$ `steering' matrix $\mat{D}$ such that $\mat{G} = \mat{D}\mat{G}_\phi$ is exactly the $K \times N$ matrix of derivatives at the desired orientation. Moreover, if the approach of section \ref{sec:unconstrained} is applied to the $M_K \times N$ matrix $\mat{G}_\phi$, then a solution \begin{equation} \mat{F}=\mat{B}\mat{C}_\phi\mat{G}_\phi = \mat{B} \mat{C} \mat{G} \end{equation} will be obtained. It follows that the two coefficient matrices are related by $\mat{C}_\phi = \mat{C}\mat{D}$. In summary, if the matrix $\mat{G}_\phi$ contains a \emph{sufficient} number $M_K$ of differently oriented filters, then a set of translated filters $\mat{F}$ can be approximated in any common orientation $\theta$. There is no change to the algorithm described in section \ref{sec:unconstrained}. It should, however, be noted that (\ref{eqn:steernum}) shows a trade-off between translation invariance and steerability. Larger translation invariance requires more derivatives, but these become increasingly difficult to steer. \subsubsection{Additive Steered Solution} \label{sec:addsteered} The steered solution, as described in the previous section, will not automatically be additive, in the sense of (\ref{eqn:additive}). This problem will be solved, with reference to section~\ref{sec:additive}, by putting an explicitly steered filter $\mat{g}_\theta^\tp$ in place of $\mat{g}_1^\tp$. The first derivative can be steered with respect to a basis of filters at distinct orientations $\phi_1$ and $\phi_2$ (these would be the first two rows of the $M_K\times N$ matrix $\mat{G}_\phi$). The standard steering equation \citep{freeman-1991} can be simplified in this case to \[ \begin{pmatrix} \cos\theta\\ \sin\theta \end{pmatrix} = \begin{pmatrix} \cos\phi_1 & \cos\phi_2\\ \sin\phi_1 & \sin\phi_2 \end{pmatrix} \begin{pmatrix} p_1\\ p_2 \end{pmatrix} \] where $\theta$, $\phi_1$ and $\phi_2$ are known angles. This system can be solved exactly for the unknown coefficients $p_1$ and $p_2$, resulting in \begin{equation} \begin{gathered} p_1 = \sin(\phi_2-\theta) / \delta \eqand p_2 = \sin(\theta-\phi_1) / \delta \\ \eqwhere \delta = \sin(\phi_2 - \phi_1). \end{gathered} \end{equation} It may be noted that if $\phi_1=0$ and $\phi_2=\pi/2$, then the solution reduces to the usual coefficients $p_1=\cos\theta$ and $p_2=\sin\theta$ for the construction of the directional derivative from $\mathrm{d}/\mathrm{d}x$ and $\mathrm{d}/\mathrm{d}y$.
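The closed-form weights are trivial to implement; the following minimal sketch (our own) also verifies the special case noted above.

\begin{verbatim}
import numpy as np

def steering_weights(theta, phi1, phi2):
    """Weights p1, p2 such that the first derivative at orientation theta is
    p1 * G_1(phi1) + p2 * G_1(phi2), per the closed-form solution above."""
    delta = np.sin(phi2 - phi1)
    return np.sin(phi2 - theta) / delta, np.sin(theta - phi1) / delta

# sanity check: phi1 = 0, phi2 = pi/2 recovers (cos(theta), sin(theta))
p1, p2 = steering_weights(0.3, 0.0, np.pi / 2)
assert np.isclose(p1, np.cos(0.3)) and np.isclose(p2, np.sin(0.3))
\end{verbatim}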
The additive steered approximation can now be defined, using the new filter $G_1(\mat{x},\sigma,\theta)$, as \begin{equation} \mat{F}_\star \approx \mat{1}\mat{g}_\theta^\tp + \mat{B}_\Delta \mat{C}_{\phi\Delta} \mat{G}_{\phi\Delta} \eqwhere \mat{g}_\theta^\tp = p_1\mat{g}_{\phi_1}^\tp + p_2\mat{g}_{\phi_2}^\tp. \label{eqn:addsteer} \end{equation} This system can be solved in the same way as (\ref{eqn:additive}). Note that the higher-order filters in $\mat{G}_{\phi\Delta}$ will be implicitly steered, as described in section \ref{sec:steered}. \section{Evaluation} \label{sec:evaluation} \noindent Two issues are addressed in this evaluation, as follows. \emph{Approximation}: The accuracy of the least-squares algorithms from sections \ref{sec:unconstrained} and \ref{sec:constrained} is established in section~\ref{sec:error}. \emph{Characterization}: The response of the underlying model from section~\ref{sec:present} to basic stimuli, as well as to natural images, is analyzed in sections~\ref{sec:response} and \ref{sec:images}, respectively. Note that the issues of approximation and characterization are addressed separately, in order to avoid mixing different sources of error. Hence section \ref{sec:error} will evaluate the approximate filters $F$, while sections \ref{sec:response} and \ref{sec:images} will analyze the ideal filters $F_\star$. \subsection{Approximation Error} \label{sec:error} \noindent The accuracy of the filter approximations will be evaluated in this section, and it will be shown that the least-squares methods are superior to the original Maclaurin expansion. The evaluation is based on the root mean-square (RMS) error between the target and synthetic filters. The accuracy of a given filter-synthesis method is determined by two variables: the range of offsets, and the number of available derivatives (size of the basis). Better approximations can, in general, be obtained by reducing the range of offsets and/or increasing the size of the basis. The range $\rho=1\sigma$ is the smallest that results in a unimodal impulse response, as will be shown in section \ref{sec:impulse}. It is therefore important to analyze the corresponding approximation. In addition, the larger range $\rho=1.5\sigma$ will be analyzed. This leaves the size of the basis (for which there is no prior preference) to be varied in each case. The method of evaluation is illustrated in figure~\ref{fig:deform}. It can be seen that the furthest-offset filters begin to depart from the target shape. The RMS difference between the ideal and approximate filters, for each test, was measured over 51 offsets $t_i$ in the range $\pm\rho$. Each filter was sampled at 101 points $x_j$ in the range $\pm 6\sigma$, which contains the significantly nonzero part of all filters (see fig.~\ref{fig:deform}). The RMS error, for basis sizes $K=3,\ldots,10$, is shown in fig.~\ref{fig:errors}. The meaning of the RMS error, in terms of filter distortion, can be gauged with reference to fig.~\ref{fig:deform}. For example, it can be seen that the Maclaurin model with $K=8$ and $\rho=1.5$ is poor, and this corresponds to a point around the middle of the RMS axis in fig.~\ref{fig:errors}. In the case of the Maclaurin approximation (top row), it can be seen that the error increases rapidly and monotonically with respect to the offset. The pattern is more complicated for the least-squares approximations, because the error has been minimized over an interval $\pm\rho$, which effectively truncates the basis functions in $x$.
Nonetheless, the lines corresponding to the different basis-sizes remain nested; they cannot cross, because increasing the size of the basis cannot make the approximation worse. It is, however, possible for the lines to meet. In particular, the unconstrained lines meet in pairs at $t=0$. This is because the target function at zero offset is anti-symmetric. It follows that the incorporation of a \emph{symmetric} basis function $G_{2k}$ cannot improve an existing approximation of order $2k-1$. In the case of the additive approximation, all lines meet at $t=0$, where the error is zero by construction. \begin{figure}[!ht] \begin{center} \includegraphics{fig-03} \vspace*{-2ex} \caption{{Filter deformation}. \textbf{Top left:} The eighth-order Maclaurin synthesis (\ref{eqn:deriv-series}) of filters with $\sigma=1$, over the range of offsets $|t|\le1.5\sigma$. Large errors are visible in the most extreme filters. \textbf{Top right:} The approximation is much worse if the order of the basis is reduced by one. \textbf{Bottom left, right:} The least-squares approximation is much better, even if the additivity constraint is enforced.} \label{fig:deform} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics{fig-04} \vspace*{-3ex} \caption{{Error vs. offset}. \textbf{Top row:} The Maclaurin error rises quickly as the target filter is offset from the centre of the basis. Each line represents a different basis-size, $k=3,\ldots,12$, as indicated. The left and right plots show ranges $\rho=1\sigma$, and $\rho=1.5\sigma$, respectively. \textbf{Middle row:} The unconstrained least-squares approximation is much better, especially for high-order bases. \textbf{Bottom row:} The additive approximation is also good, and ensures that the error is zero when there is no offset (as in the Maclaurin case).} \label{fig:errors} \end{center} \end{figure} \subsection{Response to Basic Signals} \label{sec:response} \noindent A number of basic signals will be introduced below, and the ideal responses will be derived; further examples are given in \cite{hansard-2010}. The responses are `ideal' in the sense that the error of the least-squares approximations (sec.\ \ref{sec:unconstrained}--\ref{sec:addsteered}) will be ignored. This is primarily in order to obtain useful results, but there are two further justifications. Firstly, it has been demonstrated in the preceding section that the approximations are good, over an appropriate range $\rho$. Secondly, the approximation error can be made arbitrarily low, by using a large enough basis for the given range. Recall that the offset filters $F(t,x)$ are shifted copies of the Gaussian derivative $G_1(x,\sigma)$. It follows from (\ref{eqn:resp}) that the response function is covariant to the shift $t$, in the sense that \begin{equation} R(t,u) = R(0,u-t). \label{eqn:resp-trans} \end{equation} It therefore suffices to obtain the linear response for the case $t=0$, as the other responses are simply translations of this function. The linear response in this case is $R(0,u) = G_1^1(x,\sigma)\star S(x-u)$, by analogy with (\ref{eqn:corrl}). Note that the normalized filter has been used, as defined in (\ref{eqn:gauss-normalized}). It follows that $R(0,u)$ can be obtained by blurring the signal with the filter $G_0^1(x,\sigma)$, and then differentiating the result. The complex response $C(u)$ is given by the max operation (\ref{eqn:max}). Evidently $C(u)$ is the \emph{upper envelope} of the family $|R(t,u)|$, but it is possible to be more precise than this.
In particular, the shift-insensitivity of the model can be quantified by determining the intervals of $u$ over which $C(u)$ is constant, as described below. The response $|R(0,u)|$ to a basic signal $S(x-u)$ can be either symmetric or antisymmetric, and either periodic or aperiodic. However, a common property of the responses considered here is that the local maxima are all of equal height. Let $|R(0,u^\star)| = R^\star$ be a local maximum, and suppose that $u$ is within range of this maximum, meaning that $|u-u^\star|\le\rho$. It follows that $C(u) = R^\star$, because the maximum in (\ref{eqn:max}) will be found at $t=u-u^\star$, and $|R(u-u^\star,u)| = |R(0,u^\star)| = R^\star$ by (\ref{eqn:resp-trans}). An intuitive summary of this is that each local maximum $|R(0,u^\star)|$ generates a plateau $C(u^\star\pm\rho) = R^\star$ in the complex response. In order to make use of this interpretation, the function $V(u)$ will be defined as the signed distance $u-u^\star$ to the nearest local maximum of $|R(0,u)|$. It follows that \begin{equation} C(u) = \begin{cases} R^\star &\text{if}\mspace{10mu} |V(u)| \le \rho\\ \underset{|t|\le\rho}{\max}\mspace{5mu} |R(t,u)| &\text{otherwise}. \end{cases} \label{eqn:response} \end{equation} This explicitly identifies the intervals, $|V(u)|\le\rho$, over which $C(u)$ is constant. Note that if $|V(u)|>\rho$ then the original definition (\ref{eqn:max}) is used. The functions $R(0,u)$ and $V(u)$, as well as the constant $R^\star$, will now be derived for each of the basic signals. It should be emphasized that $V(u)$ and $R^\star$ are only used to \emph{characterize} the response; they are \emph{not} part of the computational model. \subsubsection{Impulse} \label{sec:impulse} The first test signal to be considered is the unit impulse, which can be used to characterize the initial linear stage of the model. The impulse is defined as \begin{equation} S_\sigma(x) = \delta(x) \label{eqn:impulse-sig} \end{equation} where $\delta(x)$ is the Dirac distribution. It follows that the linear response $G_1^1\star S_\sigma$ is just the original normalized derivative filter, \begin{equation} R_\sigma(0,u) = G_1^1(u,\sigma). \label{eqn:impulse-resp} \end{equation} The maxima of the linear response can be found by differentiating $R_\sigma(0,u)$, and setting the result to zero. The derivative contains a factor $\sigma^2-u^2$, and so the zeros are at $\pm\sigma$. The peak is at $-\sigma$, and it follows that the maximum response is \begin{equation} R^\star_\sigma = G_1^1(-\sigma,\sigma). \label{eqn:impulse-const} \end{equation} Both extrema become peaks in $|R_\sigma(0,u)|$, and the extent of the response plateau is determined by the minimum distance from these. The distance function for the impulse response can now be defined as \begin{equation} V_\sigma(u) = u - \mathrm{sgn}_+\mspace{-1mu}(u) \,\sigma, \label{eqn:impulse-dist} \end{equation} where $\mathrm{sgn}_+$ is the sign-function with the convention $\mathrm{sgn}_+(0)=1$. If $u=0$ then $|V_\sigma(u)| = \sigma$, and it follows from (\ref{eqn:response}) that $C(u) \ne R^\star_\sigma$ if $\sigma > \rho$. It has already been established that $|R_\sigma(0,u)|$ has maxima of $R^\star_\sigma$ at $\pm\sigma$, which implies that the response $C(u)$ will be \emph{bimodal} unless \begin{equation} \sigma \le \rho \label{eqn:impulse-limit} \end{equation} as illustrated in figure \ref{fig:impulse}. This condition is therefore imposed, as it would be undesirable to have a bimodal response to a unimodal signal.
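These definitions are easy to check numerically. The following minimal sketch (our own illustration; the grid resolution is an arbitrary choice, and the continuous maximization over $t$ is approximated on the grid) evaluates the impulse response (\ref{eqn:impulse-resp}) and forms the complex response (\ref{eqn:max}) for $\rho=\sigma$ and $\rho=0.5\sigma$, reproducing the unimodal and bimodal cases of figure~\ref{fig:impulse}.

\begin{verbatim}
import numpy as np
from scipy.ndimage import maximum_filter1d

sigma, du = 1.0, 0.01
u = np.arange(-5.0, 5.0, du)
R0 = -(u / (2 * sigma**2)) * np.exp(-u**2 / (2 * sigma**2))  # G_1^1(u)

def complex_response(R0, rho, du):
    """C(u) = max over |t| <= rho of |R(0, u - t)|, via covariance."""
    size = 2 * int(round(rho / du)) + 1
    return maximum_filter1d(np.abs(R0), size=size)

C_uni = complex_response(R0, 1.0 * sigma, du)  # rho = sigma: unimodal
C_bi = complex_response(R0, 0.5 * sigma, du)   # rho = 0.5 sigma: bimodal
\end{verbatim}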
In general, $\rho$ should be made as large as possible for a given $\sigma$, in order to achieve as much shift-invariance as possible. Recall, for example, that the least-squares approximations in section \ref{sec:error} were demonstrated for $\rho=1.5\sigma$. \begin{figure}[!ht] \begin{center} \includegraphics{fig-05} \vspace*{-.1in} \caption{{Impulse response}. \textbf{Left column:} the top plot shows the unit impulse, $S_\sigma(x)$, as in (\ref{eqn:impulse-sig}). The middle plot shows the response $C(u)$. The bottom plot shows the distance function $|V_\sigma(u)|$, as in (\ref{eqn:impulse-dist}), along with the value of the maximum offset $\rho=1\sigma$. The response is constant, $C(u) = R^\star_\sigma$, when $|V_\sigma(u)| \le \rho$. The critical case, $\sigma=1$, $\rho=1$, is plotted. \textbf{Right column:} As before, except $\rho=0.5\sigma$. The response becomes bimodal, which shows the importance of the condition $\sigma \le \rho$.} \label{fig:impulse} \end{center} \end{figure} \subsubsection{Step} The second test signal to be considered is the unit step function. This is arguably the most important example, because it is the basic model for a luminance \emph{edge}. Indeed, the current model is optimized for the detection of step-like edges, owing to the use of the \emph{first} derivative as the offset filter \citep{canny-1986}. The step can be defined from the standard sign-function, as follows: \begin{equation} S_\alpha(x) = \frac{\alpha}{2}\bigl(1+\mathrm{sgn}(x)\bigr). \label{eqn:step-sig} \end{equation} The unit step function is related to the integral $\varPhi(u,\sigma)$ of the normalized Gaussian function $G_0^0(x,\sigma)$ in the following way: \begin{align} \varPhi(u,\sigma) &= \int_{-\infty}^u G^0_0(x,\sigma)\ \mathrm{d}x \\[1ex] &= \frac{1/\alpha}{\sigma\sqrt{\pi/2}}\int_{-\infty}^\infty G^1_0(x,\sigma)\mspace{3mu}S_\alpha(u-x)\ \mathrm{d}x \end{align} The latter integral is the convolution of $G_0^1$ with $S_\alpha$, and hence $\varPhi(u,\sigma)$ is proportional to the smoothed step-edge. The linear response is given by the derivative, \begin{align} R_\alpha(0,u) &= \alpha\sigma\sqrt{\pi/2}\, \frac{\mathrm{d}}{\mathrm{d}u} \, \varPhi(u,\sigma) \\[.75ex] &= \frac{\alpha}{2} \, G_0(u,\sigma) \label{eqn:step-resp} \end{align} This shows that the basic response is simply an un-normalized Gaussian, located at the step-discontinuity. The maximum response and the signed-distance function are evidently \begin{equation} R^\star_\alpha = {\alpha}/{2} \eqand V_\alpha(u) = u. \label{eqn:step-dist} \end{equation} Let $\bigl(u,R(u)\bigr)$ be the Cartesian coordinates of the response curve. The final response can be constructed from the Gaussian (\ref{eqn:step-resp}) by inserting the plateau $(\pm\rho,\alpha/2)$ in place of the maximum point $(0,\alpha/2)$. This is illustrated in figure \ref{fig:step}. \begin{figure}[!ht] \begin{center} \includegraphics{fig-06} \vspace*{-.1in} \caption{{Step response}. \textbf{Left column:} The top plot shows the step-edge $S_\alpha(x)$, as defined in (\ref{eqn:step-sig}). The middle plot shows the response $C(u)$. The bottom plot shows the distance function $V_\alpha(u)$, as in (\ref{eqn:step-dist}). Note that the response is unconditionally unimodal in this case. \textbf{Right column:} As before, except with Gaussian noise (SD 0.075) added independently at each point.
The response $C(u)$ is not significantly affected.} \label{fig:step} \end{center} \end{figure} \subsubsection{Cosine} The third class of signals to be considered consists of the sines and cosines. These are of central importance, owing to their role in the Fourier synthesis of more complicated signals. Furthermore, these functions are used to construct the \twod\ grating patterns that are commonly used to characterize complex cells. It will be convenient to base the analysis on the cosine function \begin{equation} S_\xi(x) = \cos\bigl(2\pi \xi x\bigr) \label{eqn:cos-sig} \end{equation} where $\xi$ is the frequency. The Fourier transforms $g(x) \mapsto \mathcal{F}_x[g](\eta)$ of the filter $G_0^1(x,\sigma)$ and signal $S_\xi(x)$ are \begin{align} \mathcal{F}_x\bigl[G_0^1\bigr](\eta) &= \sigma\sqrt{\pi/2}\, G_0\bigl(\eta,1/(2\pi\sigma)\bigr) \\[.75ex] \mathcal{F}_x\bigl[S_\xi\bigr](\eta) &= \tfrac{1}{2}\bigl(\delta(\eta-\xi) + \delta(\eta+\xi)\bigr) \end{align} respectively, where $\eta$ is the frequency variable. The convolution $G_0^1\star S_\xi$ can be obtained from the inverse Fourier transform of the product $\mathcal{F}_x\bigl[G_0^1\bigr]\,\mathcal{F}_x\bigl[S_\xi\bigr]$. The resulting cosine is attenuated by a scale-factor $\mathcal{F}_x\bigl[G_0^1\bigr](\xi)$, because $\mathcal{F}_x\bigl[S_\xi\bigr](\eta)$ is zero unless $|\eta|=\xi$. Differentiating $\cos(2\pi\xi x)$ with respect to $x$ gives $-\sin(2\pi\xi x)$, along with a second scale-factor of $2\pi\xi$. The amplitude of the linear response is given by the product of the two scale-factors $2\pi\xi$ and $\mathcal{F}_x\bigl[G_0^1\bigr](\xi)$, which can be expressed as \begin{equation} R^\star_\xi = \sigma\xi\pi^{3/2} \sqrt{2}\, G_0\bigl(\xi,1/(2\pi\sigma)\bigr). \label{eqn:cos-const} \end{equation} It can be seen that the amplitude depends on the scale $\sigma$ of the filter, as well as on the frequency $\xi$ of the signal. The complete linear response is given by \begin{equation} R_\xi(0,u) = -R^\star_\xi \sin\bigl(2\pi\xi u\bigr). \label{eqn:cos-resp} \end{equation} Note that a phase-shift $u_0$ can be introduced, if required, by substituting $u-u_0$ for $u$. The rectified sine $|R_\xi(0,u)|$ is another periodic function, of twice the frequency. The peaks of this function are separated by a distance $\frac{1}{2\xi}$, and so \begin{equation} V_\xi(u) = \biggl(u\bmod \frac{1}{2\xi}\biggr) - \frac{1}{4\xi} \label{eqn:cos-dist} \end{equation} is a suitable distance function for the cosine signal (\ref{eqn:cos-sig}). The case of sine signals is analogous, with $\sin$ replaced by $\cos$ in the linear response (\ref{eqn:cos-resp}), and $u$ replaced by $u-\frac{1}{4\xi}$ in the distance function (\ref{eqn:cos-dist}). \begin{figure}[!ht] \begin{center} \includegraphics{fig-07} \vspace*{-.1in} \caption{{Cosine response}. \textbf{Left column}: The top plot shows the cosine signal $S_\xi(x)$, as in (\ref{eqn:cos-sig}), of frequency $\xi=\frac{1}{6}$. The middle plot shows the response $C(u)$, which is constant. The bottom plot shows the distance function $|V_\xi(u)|$, as in (\ref{eqn:cos-dist}). Note that the critical case is plotted, in which the peaks of $|V_\xi(u)|$ touch the line $\rho=1.5\sigma$. The response is also constant for any higher frequency. \textbf{Right column:} As before, but for a lower frequency, $\xi=\frac{1}{9}$. The distance function now crosses the line $\rho=1.5\sigma$, and corresponding `notches' appear in $C(u)$.} \label{fig:cos} \end{center} \end{figure} It is important to see that the system response is \emph{entirely} constant for frequencies that are not too low.
Specifically, the extreme values of (\ref{eqn:cos-dist}), with respect to $u$, are $\pm\frac{1}{4\xi}$, from which it follows that the response is identically $R^\star_\xi$ if $\xi \ge \frac{1}{4\rho}$. The corresponding constraint on the wavelength $\frac{1}{\xi}$ is \begin{equation} 1/\xi \le 4\rho. \label{eqn:wavelength-lim} \end{equation} In order to interpret this result, recall that $\rho \ge \sigma$ is required for a unimodal impulse response (\ref{eqn:impulse-limit}). Furthermore, in section \ref{sec:error}, it was shown that $\rho\approx 1.5\sigma$ is achievable in practice. This means that a constant response can be expected for frequencies as low as $\xi=\frac{1}{6\sigma}$. \subsection{Response to Natural Images} \label{sec:images} \noindent This section makes a basic evaluation of the differential model, using the objective function of `slow feature analysis' \citep{wiskott-2002,berkes-2005}. The procedure is as follows. Each $1024\times 768$ greyscale image is decomposed into $i=1,\ldots,36$ orientation channels $\theta_i$ at scale $\sigma=2$~pixels. This corresponds to a set of simple-cell responses $S_1(\mat{x},\sigma,\theta_i)$, with an angular separation of $5^\circ$. The steerability of $S_1$ is \emph{not} used (i.e.\ a separate convolution is done for each $\theta_i$) in order to minimize any angular bias in the image sampling. A set of straight tracks \begin{equation} \mat{x}_{ijk} = \mat{p}_j \pm k\Delta\,(\cos\theta_i,\, \sin\theta_i)^\tp, \eqwhere j=1,\ldots,m \eqand k=0,\ldots,n \end{equation} is sampled from each \twod\ response. The $m=100$ random points $\mat{p}_j$ are sampled from a uniform distribution over the image; the sign $\pm$ is also random. The resolution $\Delta$ is set to one pixel, and the number of steps along each path is $n=99$. This gives a total of $100^2$ samples from each orientation channel. The responses at non-integral positions $\mat{x}_{ijk}$ are obtained by bilinear interpolation. The samples are non-negative by definition, and a global scale factor $\gamma$ is used to make the overall \mbox{$ijk$-mean} of $\gamma S_1(\mat{x}_{ijk},\sigma,\theta_i)$ equal to $\frac{1}{2}$. The mean simple-cell response is then computed in each orientation channel, \begin{equation} E_S(i) = \frac{1}{mn} \sum_{j=1}^{m} \sum_{k=0}^{n} \, \gamma S_1\bigl(\mat{x}_{ijk},\sigma,\theta_i\bigr) \end{equation} where the scaling by $\gamma$ ensures that $\sum_i E_S(i) = \frac{1}{2}$. The mean quadratic variation along the paths is also computed, in each orientation channel: \begin{equation} Q_S(i) = \frac{1}{mn} \sum_{j=1}^{m} \sum_{k=1}^{n} \, \Bigl| \gamma S_1\bigl(\mat{x}_{ijk},\sigma,\theta_i\bigr) - \gamma S_1\bigl(\mat{x}_{ij[k-1]},\sigma,\theta_i\bigr) \Bigr|^2. \end{equation} The coordinates $\mat{x}_{ijk}$ and $\mat{x}_{ij[k-1]}$ represent adjacent points (separated by $\Delta$) on the \mbox{$j$-th} path in the \mbox{$i$-th} channel. In summary, $E_S(i)$ measures the average response for orientation $\theta_i$, and $Q_S(i)$ measures the average spatial variation of this response in direction $\theta_i$. Slow feature analysis finds filters that minimize the quadratic variation. The orientation tuning and slowness measurements are plotted in figures~\ref{fig:tracks-cos} and \ref{fig:tracks-img} as a function of $\theta_i$, by attaching vertical bars $\pm\frac{1}{2}\sqrt{Q_S(i)}$ to each point $E_S(i)$.
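For reference, the two measurements amount to a short array computation. The following sketch is our own illustration, assuming that the interpolated track responses $\gamma S_1(\mat{x}_{ijk},\sigma,\theta_i)$ have already been collected into an array; the normalizations follow the definitions above.

\begin{verbatim}
import numpy as np

def tuning_and_slowness(samples):
    """samples: array of shape (channels, m, n+1) holding the rescaled
    responses gamma * S_1 along the tracks; returns E(i) and Q(i)."""
    m, n = samples.shape[1], samples.shape[2] - 1
    E = samples.sum(axis=(1, 2)) / (m * n)                    # mean response
    Q = (np.diff(samples, axis=2)**2).sum(axis=(1, 2)) / (m * n)
    return E, Q
\end{verbatim}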
Each test is then repeated, using the complex response $C(\mat{x},\sigma,\theta)$ in place of the simple response $S_1(\mat{x},\sigma,\theta)$, giving measurements $E_C(i)$ and $Q_C(i)$. Three test-images with a dominant global orientation are used. Firstly, a vertical cosine grating, $I_{\cos}(x,y) = \frac{1}{2}\bigl(1+\cos(2\pi\xi x)\bigr)$. The range is set to $\rho=1.5\sigma$, as usual, and the wavelength is set to $\frac{1}{\xi} = 8\sigma$. These values do \emph{not} satisfy the limit (\ref{eqn:wavelength-lim}), which ensures that the complex response will not be trivial. The simple-cell response is shown in figure~\ref{fig:tracks-cos} (top left), and two effects should be noted. Firstly, the response is tuned to the dominant orientation $\theta=0$, as can be seen from the unimodal shape of the curve. Secondly, there is a large variation in the response when the tracks are orthogonal to the grating, as shown by the large bars around $\theta=0$. This is because the filter falls in and out of phase with the image as it moves horizontally. The corresponding complex response is shown in figure~\ref{fig:tracks-cos} (top right). It can be seen that the orientation tuning is preserved, while the response variation is greatly reduced. Figure \ref{fig:tracks-cos} (bottom) repeats the test, but with noise added to the cosine grating, $I = 0.25 \times I_{\cos} + 0.75 \times I_{\mathrm{uni}}$, where each pixel in $I_{\mathrm{uni}}$ is independently sampled from the uniform distribution on $[0,1]$. This means that a variable simple response is obtained as the filter moves in any direction, because the image is now truly \twod. The complex response reduces the variation, as shown in figure~\ref{fig:tracks-cos} (bottom). Figure \ref{fig:tracks-img} (top) shows results for a real image which has an orientation-structure similar to that of the grating. The results are analogous. Finally, the same test is performed on a natural image, which contains a mixture of foliage and rocks. Figure \ref{fig:tracks-img} (bottom) shows that, although there is no dominant orientation in the stimulus, the complex response remains much less variable than the simple response. \begin{figure}[!ht] \begin{center} \includegraphics{fig-08} \vspace*{-0.1in} \caption{{Cosine response}. \textbf{Top left:} Average simple cell response $E_S(i)$ to a vertical cosine grating of wavelength $8\sigma$. The curve indicates the mean response in each of 36 orientation channels $\theta_i$, and has a clear peak at zero. The vertical bars $\pm\frac{1}{2}\sqrt{Q_S(i)}$ indicate the \textsc{rms} spatial variation of the response in the preferred direction of each orientation channel. \textbf{Top right:} Complex cell response $E_C(i)$ to the same image. The orientation tuning is preserved, but the variability $\pm\frac{1}{2}\sqrt{Q_C(i)}$ of the response is greatly reduced. \textbf{Bottom left, right:} As before, but with noise added to the image. } \label{fig:tracks-cos} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics{fig-09} \vspace*{-0.1in} \caption{{Image response}. \textbf{Top:} As in figure~\ref{fig:tracks-cos}, but using a real image that contains a dominant vertical orientation. Left: The simple response shows variation across all orientation channels $Q_S(i)$. Right: The variation of the complex response $Q_C(i)$ is much lower. 
\textbf{Bottom:} As before, but using a natural image, with no dominant orientation.} \label{fig:tracks-img} \end{center} \end{figure} \section{Discussion} \label{sec:discussion} It has been shown that a differential model of the complex cell can be constructed from the local jet representation. The differential model, which works naturally in a basis of steerable filters, can be viewed as a constrained version of the \cite{hubel-1962} subunit model. \subsection{Neural Implementation} The qualitative components in the present approach are similar to those of the Gabor energy model. Both models are based on oriented linear filters, which are centred at the same position. Likewise, both models output a combination of the nonlinearly transformed filter responses. The derivative operators $G_k(x,\sigma)$, in the differential model, are interpreted as simple cells, at a common location $x$ \citep{hawken-1987,young-2001a}. These are constructed from LGN inputs, according to the classical model \citep{hubel-1962}. The offset filters $F(t_i,x)$ have two possible interpretations, shown in fig.~\ref{fig:implement}, as follows. Firstly, the offset filters $F(t_i,x)$ can be interpreted as an intermediate layer of simple cells, each of which has an RF that is a linear combination of other simple cell RFs. Let the row-vector $\mat{f}_i^\tp$ represent $F(t_i,x)$, and let $\mat{s}$ be the signal vector. It follows that the linear response is \begin{equation} r_i = \mat{f}_i^\tp\!\mat{s} \end{equation} where $\mat{f}_i^\tp = \mat{p}_i^\tp\!\mat{G}$, according to the filter-design equation $\mat{F}=\mat{P}\mat{G}$ in (\ref{eqn:filt-mat}). The task of the complex cell $C$ is to compute the (absolute) maximum of the linear responses, $r_i$. This interpretation corresponds to the \textsc{hmax} model \citep{riesenhuber-1999,lampl-2004}, but with additional relationships imposed on the underlying simple cells. An advantage of this interpretation is that it predicts a majority of simple cells with few oscillations, as is observed \citep{ringach-2002}. This follows from the fact that the offset filters are of lower order than their derivatives, together with the fact that an unlimited number of offsets can be obtained from a given derivative basis. Furthermore, suppose that a number of complex cells (e.g.~of different orientations) $C_n$ are constructed from the same basis $\mat{G}$. A new layer of low-order offset filters $\mat{F}_n$ is required for each complex cell, and so the high-order filters in $\mat{G}$ are soon outnumbered in the ensemble $\{\mat{G}, \mat{F}_1, \mat{F}_2, \ldots\}$. An alternative physiological implementation is as follows. Suppose that the complex cell has $i=1,\ldots,M$ basal dendrites, each of which branches out to the $K$ simple cells $G_k(x,\sigma)$. The linear response can then be expressed as \begin{equation} r_i = \mat{p}_i^\tp (\mat{G} \mat{s}) \end{equation} where $\mat{G}\mat{s}$, the Gaussian jet response, is computed first. This interpretation requires no intermediate simple cells. Instead, it places a fixed weight $P_{ik}$ on each dendritic branch, and requires a summation to be performed within each dendrite. This seems to be quite compatible with the observed cell morphology; a typical complex cell has a small number of basal dendrites which, unlike those of simple cells, are extensively branched \citep{kelly-1974,gilbert-1979}.
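The two interpretations differ only in the bracketing of the same matrix product, $(\mat{P}\mat{G})\mat{s}$ versus $\mat{P}(\mat{G}\mat{s})$; the following toy sketch (in Python, with random placeholder filters for $\mat{G}$, $\mat{P}$ and the signal $\mat{s}$) makes this explicit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 6, 4, 128               # jet size, offsets, signal length
G = rng.standard_normal((K, N))   # rows: derivative (simple-cell) filters
P = rng.standard_normal((M, K))   # rows: offset coefficients p_i^T
s = rng.standard_normal(N)        # input signal

# Interpretation 1: intermediate simple cells with RFs F = P G.
r1 = (P @ G) @ s
# Interpretation 2: dendritic summation of the jet response G s,
# with fixed branch weights P_ik.
r2 = P @ (G @ s)

assert np.allclose(r1, r2)        # identical, by associativity
C = np.max(np.abs(r1))            # complex-cell output: max over subunits
print(C)
\end{verbatim}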
The dendritic interpretation is more economical in the number of simple cells required, but less compatible with the observed simple-cell statistics \citep{ringach-2002}. It should be noted that a mixture of the two interpretations in fig.~\ref{fig:implement} is quite possible. For example, there could be a few intermediate simple cells, with the remaining filters implemented by dendritic summation. In all cases, each complex cell is associated with odd and even simple cells $\mat{g}_k$, as is observed \citep{pollen-1981}. \begin{figure} \begin{center} {\includegraphics{./fig-10.pdf}} \caption{Two neural implementations of the complex cell $C$. These are schematic representations, with reduced numbers of derivative filters $\mat{g}_k$ and offset filters $\mat{f}_i$. Left: The offset filters are identified with an intermediate layer of simple cells. Right: The offset filters are implicit in the weighted sums $\mat{p}_1^\tp\!\mat{G}$ and $\mat{p}_2^\tp\!\mat{G}$ performed by the dendritic tree of $C$.} \label{fig:implement} \end{center} \end{figure} The maximum (\ref{eqn:max}) over the $|r_i|$ can be approximated by a barycentric combination, $\sum_i w_i |r_i|$. An appropriate vector of weights can be computed as $w_i = |r_i|^\beta \big/ \bigl(\alpha + \sum_j |r_j|^\beta\bigr)$. This is a form of `softmax' \citep{bridle-1989}, with parameters $\alpha$ and $\beta$, as described in \citep{hansard-2010}. Several neural models of this operation have been proposed \citep{riesenhuber-1999,yu-2002}. For example, the $w_i$ can be interpreted as the outputs of a network of mutually inhibitory neurons, which receive copies of the subunit responses $r_i$. There is experimental evidence for similar arrangements, with respect to both simple and complex cells \citep{heeger-1992,carandini-1994,lampl-2004}. \subsection{Experimental Predictions} \label{sec:predictions} This section will demonstrate that the response of the new model to broad-band stimuli can easily be distinguished from that of the standard energy model. Some qualitative predictions will also be discussed. The response of the new model to sinusoidal stimuli is similar to that of the energy model. Both responses are phase-invariant, provided that the stimulus frequency is not too low (see fig.~\ref{fig:cos}). Consider, however, a luminance step-edge that is flashed (or shifted) across the RF. The Gabor energy is approximately Gaussian, as a function of the edge-position, with a peak in the centre of the RF. The differential response is much flatter, as shown in fig.~\ref{fig:step}. This suggests that an empirical measure of \emph{kurtosis} could be used to distinguish between the two responses, as will be shown below. Let the odd-symmetric Gabor filter be defined as $F_\parallel(x,\xi,\tau) = -G(x,\tau) \sin(2\pi\xi x)$, so that it matches the polarity of $G_1$. The even filter, following \citep{lehky-2005}, is defined by the numerical Hilbert transform $F_{\!\perp} = \mathcal{H}(F_\parallel)$, in order to avoid the nonzero DC component that arises in the cosine-based definition. The Gabor energy of a signal $S$ is then $R_F^2 = (F_\parallel\star S)^2 + (F_{\!\perp}\star S)^2$. The envelope width $\tau$ of each Gabor filter is determined from the constraint that the bandwidth be equal to 1.5 octaves, which is realistic for complex cells \citep{daugman-1985}. A self-similar family of Gabor pairs, parameterized by frequency $\xi$, can now be defined.
Quasi-Newton optimization is used to determine a frequency $\xi_0$, for which $F_\parallel(x,\xi_0,\tau)\approx G_1(x,1)$ in the $L^2$ sense. Ten Gabor pairs with frequencies $\xi_k = \xi_0 2^{-k\Delta\xi}$ are constructed, where $k=0,\ldots,9$. The corresponding differential model, with target filter $G_1(x,\xi_0/\xi_k)$, is also constructed for each pair. Four possible ranges are considered for the differential models, by setting $\rho/\sigma = 1.0$, 1.25, 1.5, 1.75. Let $p_\xi(u)$ be the response distribution, which gives the firing-rate for an image-edge, as a function of its offset $u$ from the centre $u=0$ of the complex RF. If $P_\xi(u)$ is the cumulative distribution $P_\xi(u)=\int_{-\infty}^{u} p_\xi(t)\,\mathrm{d}t$, then the (uncentred) kurtosis $\kappa$ of any $p_\xi(u)$ can be estimated \citep{crow-1967} by \begin{equation} K(p_\xi) = \frac {P_\xi^{-1}(1-a) - P_\xi^{-1}(a)} {P_\xi^{-1}(1-b) - P_\xi^{-1}(b)}. \label{eqn:kurtosis} \end{equation} The values $a=0.025$ and $b=0.25$ are used here (making $K$ the ratio of the 95\% and 50\% confidence intervals). This estimate, which can be computed by linear interpolation between the samples of $P_\xi(u)$, has the following interpretation. Suppose that the offsets $u_a$ and $u_b$ are associated with firing-rates $a$ and $b$ respectively. The response distributions are symmetric, and so the statistic \eqref{eqn:kurtosis} is simply the distance ratio $K(p_\xi)=u_a/u_b$, as illustrated in fig.~\ref{fig:kurtosis}. The kurtosis could also be estimated from the empirical moments of $p_\xi(u)$, but (\ref{eqn:kurtosis}) is much less sensitive to noise in the tails of the distribution. \begin{figure} \begin{center} \begin{minipage}[c]{.44\linewidth} \centering \vspace*{-6mm} {\includegraphics{./fig-11a.pdf}} {\includegraphics{./fig-11b.pdf}} \end{minipage} \hfil\begin{minipage}[c]{.52\linewidth} \centering {\includegraphics[scale=1]{./fig-11c.pdf}} \end{minipage}\hfil \vspace*{-0.1in} \caption{Top Left: Gaussian Derivative (black) and Gabor (grey) filters, matched subject to the bandwidth condition. Bottom Left: The kurtosis statistic \eqref{eqn:kurtosis} is the ratio of the two horizontal lines, shown here on a Gaussian distribution. Right: Kurtosis of the edge-response. Each line represents a complex cell model, parameterized by preferred frequency $\xi$. The top pair is obtained from the Gabor energy and its square-root. The bottom pair delimits the range of possible differential models. The Gabor and differential responses are easily distinguished. The dashed lines are estimates for reference distributions.} \label{fig:kurtosis} \end{center} \end{figure} The distributions considered here lie in and around the range from the uniform distribution ($\kappa=\frac{9}{5}$, $K = 1.9$) to the normal distribution ($\kappa=3$, $K\approx2.91$). It can be seen from fig.~\ref{fig:kurtosis} that the Gabor response is approximately Gaussian, whereas the differential responses are much flatter. Furthermore, note that the line $\rho=\sigma$ in fig.~\ref{fig:kurtosis} shows the \emph{maximum} kurtosis of the differential model (determined by (\ref{eqn:impulse-limit})), which is still much lower than that of the Gabor energy. Moreover, it can be seen that the kurtosis is approximately independent of frequency, which simplifies the comparison. It should be noted that a much better fit between $G_1$ and $F_\parallel$ can be obtained if the bandwidth constraint is relaxed. This, however, makes the energy responses even more kurtotic.
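The quantile-ratio statistic \eqref{eqn:kurtosis} is easy to compute, both for analytic reference distributions and from a sampled response profile; a minimal sketch (in Python, using the reference values quoted above as a check) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm, uniform

def quantile_kurtosis(dist, a=0.025, b=0.25):
    """Ratio of the 95% and 50% central intervals of a frozen
    scipy distribution: the statistic K of eq. (kurtosis)."""
    return (dist.ppf(1-a) - dist.ppf(a)) / (dist.ppf(1-b) - dist.ppf(b))

print(quantile_kurtosis(norm()))     # ~2.906 (Gaussian reference)
print(quantile_kurtosis(uniform()))  # 1.9    (uniform reference)

def quantile_kurtosis_empirical(u, p, a=0.025, b=0.25):
    """Same statistic from a sampled response profile p(u) >= 0, by
    linear interpolation of the empirical CDF, as in the text."""
    P = np.cumsum(p) / np.sum(p)
    q = lambda t: np.interp(t, P, u)
    return (q(1-a) - q(a)) / (q(1-b) - q(b))
\end{verbatim}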
The differential model makes several predictions about the configuration of simple and complex cells. Firstly, like the energy model, it predicts that both odd and even filters are required by the complex cell. Unlike the energy model, it does not require an exact quadrature relationship (indeed, the $G_k$ basis is not orthogonal). More generally, an important property of the differential model is its robustness to deviations from the ideal simple-cell RF profiles. The derivative-of-Gaussian basis was used in the present derivation for mathematical clarity. However, all that is required is a basis that spans the space of desired subunit filters $F(t_i,x)$. The differential model also predicts a relationship between the scale $\sigma$ of the subunits and the radius $\rho$ of the resulting complex receptive field. This prediction, as in the case of the energy model, is probably too strict (i.e.\ larger complex receptive fields should be possible). However, as discussed in the following section, the complex receptive fields can be extended by allowing multiple scales $\sigma_j$ in the basis set of the differential model. A qualitative prediction of the present model is that high-order derivative filters are required, in order to approximate the target filter over a sufficient range $\rho$. In particular, it was shown in section \ref{sec:impulse} that, for a unimodal impulse response, $\rho \ge \sigma$ is required. This means, in practice, that derivative filters of order five and beyond must be used in the approximation, as can be seen from figure \ref{fig:errors}. This is interesting, because very oscillatory filters have been observed in V1 \citep{young-2001b}. These have a natural role as high-frequency processors in the Gabor model. Their role is less clear in the geometric approach, because estimates of the high-order image derivatives are of limited use. The present work suggests that these filters could have a different role, in providing a basis for spatially offset filters of low order. \subsection{Future Directions} \label{sec:extensions} \noindent There are several directions in which this model could be developed. One straightforward extension is to allow filters of different scales (as well as different orders) in the basis set. Preliminary experiments confirm that this extends the range $\rho$ of translation invariance, as would be expected. This means that the complex cell receptive field could be made larger, relative to those of the underlying simple cells. Another extension would be to allow a variety of offset-filter shapes (with odd, even and mixed symmetry), rather than just the first derivative used here. This would lead to better agreement with the physiological data, which indicates a variety of receptive field shapes among the complex subunits \citep{gaska-1987,touryan-2005,sasaki-2007}. It would also be interesting to explore the relationship of the present work to the normalization model \citep{heeger-1992a,rust-2005}, and to other models of motion and spatial processing \citep{johnston-1992,georgeson-2007}. The present work has concentrated on local shift-invariance, because this is a defining characteristic of complex cells. However, mechanisms that have other geometric invariances can be constructed in the same scale space framework.
For example, consider the effect of a geometric scaling $(x,y)\rightarrow (\alpha x, \alpha y)$ on the operator $G_k^0(x,y,\sigma,\theta)$, which represents the \mbox{$k$-th} derivative of the normalized \twod\ Gaussian, in direction $\theta$. The scaling $\alpha$ has no effect on the shape of the RF, as can be seen from the equation $G_k^0(\alpha x,\alpha y, \alpha\sigma, \theta) = G_k^0(x,y,\sigma,\theta) \big/ \alpha^{2+k}$. This leads to simple relationships between the responses of the RF family $G_k^0(x,y,\sigma_\ell,\theta)$, where $\ell=1,\ldots,L$ defines a range of scales \citep{koenderink-1992,lindeberg-1998}. Future work will consider more complicated geometric invariances (e.g.~\twod\ affine), in connection with the larger RFs that are found in extrastriate areas. Another direction would be to consider how the differential model could be \emph{learned} from natural image data, by analogy with \citep{wiskott-2002,berkes-2005,karklin-2009}. This could be done by fixing the local jet filters (i.e.\ simple cells), and then optimizing the linear transformation $\mat{P}$. The transformation could be parameterized by coefficients $\mat{C}$, given a basis $\mat{B}$ of smooth functions (e.g.\ the polynomials that were used here). Alternatively, $\mat{P}$ could be optimized directly, subject to smoothness constraints on the columns $P_k(t)$. The variability of the response $C(u)$ would be a suitable objective function for the learning process, by analogy with slow-feature analysis models \citep{wiskott-2002,berkes-2005}. The combination of the geometric and statistical approaches to image analysis is, more generally, a very promising direction. \section*{Acknowledgements} The authors would like to thank the reviewers for their help with the manuscript.
{ "timestamp": "2020-12-17T02:21:24", "yymm": "2012", "arxiv_id": "2012.09027", "language": "en", "url": "https://arxiv.org/abs/2012.09027" }
\section{Introduction} This paper is in line with recent efforts to promote first-order methods as a viable alternative to interior-point methods (IPM) for solving large-scale conic optimization problems, in particular large-scale semidefinite programming (SDP) relaxations of polynomial optimization problems (POPs). We show that a wide class of POPs has a useful property, namely the constant trace property (CTP), and that this property can be exploited in combination with first-order methods to solve the large-scale SDP relaxations associated with a POP. So far, this property has been exploited only in a few cases, the most prominent examples being Shor's relaxation of MAX-CUT \cite{yurtsever2019scalable}, in which the authors are able to handle SDP matrices of huge size, and equality constrained POPs on the sphere \cite{mai2020hierarchy}. Given polynomials $f,g_i,h_j$, let us consider the following POP with $n$ variables, $m$ inequality constraints and $l$ equality constraints: \begin{equation}\label{eq:POP.def.intro} f^\star:=\min\{f(\mathbf x)\,:\quad g_i(\mathbf x)\ge 0\,,\,i\in[m]\,,\quad h_j(\mathbf x)= 0\,,\,j\in[l]\}\,, \end{equation} where $[m]:=\{1,\dots,m\}$ and $[l]:=\{1,\dots,l\}$. In general, POP \eqref{eq:POP.def.intro} is non-convex and NP-hard. It is well known that, under mild conditions, the optimal value $f^\star$ of POP \eqref{eq:POP.def.intro} can be approximated as closely as desired by the so-called Moment-Sums of squares (Moment-SOS) hierarchy \cite{lasserre2001global}. There are many important applications of POP \eqref{eq:POP.def.intro} and of the Moment-SOS hierarchy; the interested reader is referred to the monograph \cite{henrion2020}. \paragraph{Computational cost of moment relaxations.} The $k$-th order moment relaxation for POP \eqref{eq:POP.def.intro} can be rewritten in compact form as the following standard SDP: \begin{equation}\label{eq:SDP.form.intro} \tau = \inf _{\mathbf X \in \mathcal S^+} \{ \left< \mathbf C,\mathbf X\right>\,:\,\left< \mathbf A_j, \mathbf X\right>= b_j\,,\,j\in [\zeta]\}\,, \end{equation} where $\mathcal{S}^+$ is the set of positive semidefinite (psd) matrices in block diagonal form $\mathbf X=\diag(\mathbf X_1,\dots,\mathbf X_\omega)$, with $\mathbf X_j$ being a block of size $s^{(j)}$ for $j\in [\omega]$, and $\zeta$ is the number of affine constraints. We denote the largest block size by $s^{\max}:=\max_{j\in[\omega]}s^{(j)}$. We say that SDP \eqref{eq:SDP.form.intro} has the \emph{constant trace property} (CTP) if there exists a positive real number $a$ such that $\trace(\mathbf X) = a$ for every feasible solution $\mathbf X$ of SDP \eqref{eq:SDP.form.intro}. We also say that POP \eqref{eq:POP.def.intro} has CTP when every moment relaxation of POP \eqref{eq:POP.def.intro} has CTP. Table \ref{tab:comparison.SDP.solver} lists several available methods for solving SDP \eqref{eq:SDP.form.intro}. In particular, observe that two of them, CGAL and SBM, are first-order methods that exploit CTP. In \cite{yurtsever2019scalable}, the authors combined CGAL with the Nystr\"om sketch (yielding SketchyCGAL), which requires dramatically less storage than other methods and is very efficient for solving Shor's relaxation of large-scale MAX-CUT instances. \begin{table} \caption{\small Complexity comparison of several methods for solving SDP.
IP: interior-point methods; ADMM: the alternating direction method of multipliers; SBM: spectral bundle methods; CGAL: conditional gradient-based augmented Lagrangian.} \label{tab:comparison.SDP.solver} \small \begin{center} \begin{tabular}{|m{1.6cm}|m{1.8cm}|m{1.2cm}|m{1.8cm}|m{3.8cm}|} \hline Method & Software & SDP type & Convergence rate & Most expensive step per iteration\\ \hline IP \cite{helmberg1996interior} (second-order)& Mosek \cite{mosek}& Arbitrary & $\mathcal{O}(\log(1/\varepsilon))$ \cite{vandenberghe1996semidefinite} & Solving a system of linear equations, in $\mathcal{O}((s^{\max})^6)$ \cite[Table 1]{vandenberghe2005interior}\\ \hline ADMM \cite{boyd2011distributed} (first-order)& SCS \cite{o2016conic}, COSMO \cite{garstka2019cosmo} & Arbitrary & $\mathcal{O}(\varepsilon^{-1})$ \cite{hong2017linear}& Solving a positive definite linear system by $LDL^\top$ decomposition, in $\mathcal{O}((s^{\max})^6)$\\ \hline SBM \cite{helmberg2000spectral} (first-order)& ConicBundle \cite{helmberg2014spectral} & with CTP & $\mathcal{O}({\log(1/\varepsilon)}/{\varepsilon})$ \cite{ding2020revisit} & Solving a positive definite linear system, in $\mathcal{O}((s^{\max})^6)$\\ \hline CGAL \cite{yurtsever2019conditional} (first-order) & SketchyCGAL \cite{yurtsever2019scalable}& with CTP & $\mathcal{O}(\varepsilon^{-1/2})$ & Computing the smallest eigenvalue by Arnoldi iteration, in $\mathcal{O}(s^{\max})$ \cite{lee2009k}\\ \hline \end{tabular} \end{center} \end{table} Note that the SDP relaxation \eqref{eq:SDP.form.intro} of POP \eqref{eq:POP.def.intro} at step $k$ of the Moment-SOS hierarchy has $\omega=m+1$ blocks whose largest size is $s^{\max}=\binom{n+k}{n}$, while the number of affine constraints is $\zeta=\mathcal{O}(\binom{n+k}{n}^2)$. Thus the computational cost for solving SDP \eqref{eq:SDP.form.intro} grows very rapidly with $k$. Fortunately, it is usually possible to reduce the size of this SDP relaxation by exploiting certain structures of POP \eqref{eq:POP.def.intro}. Table \ref{tab:exploting.sparsity} lists some of these structures. \begin{itemize} \item Correlative sparsity (CS), term sparsity (TS) and their combination (CS-TS) are applied to POPs \eqref{eq:POP.def.intro} in the case where the data $f,g_i,h_j$ are sparse polynomials. The main idea of CS, TS and CS-TS is to break the moment matrices and localizing matrices (which are the psd matrices in the Moment-SOS relaxation) into many smaller blocks according to certain sparsity patterns derived from the POP. If the largest block size is relatively small (say $s^{\max}\le 100$), then the corresponding SDP can be solved efficiently. But if the largest block size is still large (say $s^{\max}\ge 200$), then the corresponding SDP remains hard to solve. \item In the previous work \cite{mai2020hierarchy}, the first three authors exploited CTP for equality constrained POPs on the sphere and converted the resulting SDP relaxations to spectral minimization problems, which can be solved efficiently by the limited-memory bundle method (LMBM). This method returns approximate optimal values of SDP relaxations involving $2000\times 2000$ matrices, for which Mosek encounters memory issues and SketchyCGAL is much less efficient. Importantly, the moment SDP-relaxation of an equality constrained POP has a \emph{single} psd matrix. In contrast, for a POP involving a ball constraint (with possibly other inequality constraints), the resulting moment SDP-relaxations include several psd matrices.
Unfortunately, for such SDPs LMBM usually returns inaccurate values even when CTP holds, because of ill-conditioning issues: LMBM only updates the dual variables, so it is hard to ensure that the KKT conditions hold. We can overcome these ill-conditioning issues by relying on a primal-dual algorithm such as CGAL. It turns out that CGAL (without sketching) is suitable for this type of SDP. For an SDP involving a single matrix, SketchyCGAL stores updated matrices by means of the Nystr\"om sketch. In our experimental setting, we rather consider CGAL without sketching, which boils down to relying on implicit updated matrices. It turns out that this strategy is much faster than the one based on the Nystr\"om sketch, but does not provide the primal (matrix) solution. \end{itemize} \begin{table} \caption{\small Several special structures for reducing complexity of the Moment-SOS relaxations.} \label{tab:exploting.sparsity} \small \begin{center} \begin{tabular}{|m{1.6cm}|m{2cm}|m{7.3cm}|} \hline Structure & Software & POP type \\ \hline CS \cite{Waki06SparseSOS, lasserre2006convergent}& SparsePOP \cite{waki2008algorithm}& $f=\sum_{j\in[p]} f_j$ and $f_j, (g_i)_{i\in J_j}, (h_i)_{i\in W_j}$ share the same variables for every $j\in[p]$ and $p>1$\\ \hline TS \cite{wang2019tssos, wang2019second}& TSSOS & $f,g_i, h_j$ involve only a few terms\\ \hline CS-TS \cite{wang2020cs}& TSSOS & Both CS and TS hold\\ \hline CTP \cite{mai2020hierarchy} & SpectralPOP & Equality constrained POPs on a sphere ($m=0$ and $h_1:=R-\|\mathbf x\|_2^2$) \\ \hline \end{tabular} \end{center} \end{table} \paragraph{SDP relaxations of non-convex quadratically constrained quadratic programs.} A non-convex quadratically constrained quadratic program (QCQP) is a special instance of POP \eqref{eq:POP.def.intro} for which the degrees of the input polynomials are at most two. Famous instances of non-convex QCQPs include the MAX-CUT problem and the optimal power flow (OPF) problem \cite{josz2018lasserre}; in addition, we recall that LCQPs have an equivalent MAX-CUT formulation \cite{Lasserre-Maxcut}. They also have applications in deep learning, e.g., the computation of Lipschitz constants \cite{chen2020semialgebraic} and the stability analysis of recurrent neural networks \cite{ebihara2020l_2}. In practice, non-convex QCQPs usually involve a large number of variables (say $n\ge 1000$), and their associated SDP relaxations \eqref{eq:SDP.form.intro} can be classified into two groups as follows: \begin{itemize} \item \textbf{The first-order relaxation}: $k=1$ (also known as Shor's relaxation in the literature). In this case the number of affine constraints in SDP \eqref{eq:SDP.form.intro} is typically not larger than the largest block size, i.e., $\zeta \le s^{\max}$. It can be efficiently solved by most SDP solvers, in particular with SketchyCGAL \cite{yurtsever2019scalable}. Nevertheless, the first-order relaxation may provide only a lower bound for the optimal value of POP \eqref{eq:POP.def.intro}. In this case, one needs to solve the second and perhaps even higher-order relaxations to obtain tighter bounds or achieve the global optimal value. \item \textbf{The second and higher-order relaxations}: $k\ge 2$. In this case the number of affine constraints in SDP \eqref{eq:SDP.form.intro} is typically much larger than the largest block size ($\zeta\gg s^{\max}$). Unfortunately, most SDP solvers then cannot handle large-scale SDPs of this form.
In our previous work \cite{mai2020hierarchy}, we proposed a remedy for the particular case of second-order SDP relaxations of equality constrained POPs on the sphere, by relying on first-order solvers such as LMBM. \end{itemize} \paragraph{Common issues of solving large-scale SDP relaxations.} When solving the second and higher-order SDP relaxations, SDP solvers often encounter the following issues: \begin{itemize} \item\textbf{Storage}: Interior-point methods (IPM) are often chosen by users because of their highly accurate output. These methods are efficient for solving medium-scale SDPs. However, they frequently fail due to lack of memory when solving large-scale SDPs (say $s^{\max}>500$ and $\zeta>2\times10^5$ on a standard laptop). First-order methods (e.g., ADMM, SBM, CGAL) then provide an alternative to IPM that avoids the memory issue, since their cost per iteration is much lower than that of IPM. At the price of losing convexity, one can also rely on heuristic methods and replace the full matrix $\mathbf X$ in SDP \eqref{eq:SDP.form.intro} by a simpler one, in order to save memory. For instance, the Burer-Monteiro method \cite{burer2005local} considers a low-rank factorization of $\mathbf X$. However, to get correct results the rank cannot be too low \cite{waldspurger2020rank}, and this limitation makes the method unsuitable for the second and higher-order relaxations of POPs. CGAL does not suffer from this limitation: it maintains the convexity of SDP \eqref{eq:SDP.form.intro} and can also run with an implicit matrix $\mathbf X$, as described in Remarks \ref{re:implicit} and \ref{re:implicit.block}. \item \textbf{Accuracy}: Nevertheless, first-order methods have slower convergence rates than interior-point methods. Their performance depends heavily on the problem scaling and conditioning. As a result, when solving large-scale SDPs with first-order methods it is often difficult to obtain highly accurate results; in practice, one can usually only expect the relative gap of the value returned by first-order SDP solvers w.r.t. the exact value to be below 1\%. \end{itemize} The goal of this paper is to provide a method which returns the optimal value of the second-order moment SDP-relaxation and which is suitable for a class of large-scale non-convex QCQPs with CTP. Ideally, (i) it should avoid the memory issue, and (ii) the relative gap of the approximate value returned by this method w.r.t. the exact value should be less than 1\%. \paragraph{Contribution.} We show (i) that a large class of POPs has the \emph{constant trace property}, and (ii) that this property can be exploited for solving their associated semidefinite relaxations via appropriate first-order methods. More precisely, our contribution is threefold: \begin{enumerate} \item In Section \ref{sec:sufficient.CTP} we show that if a positive real number belongs to the interior of every truncated quadratic module associated to the inequality constraints, then the corresponding POP has CTP. Moreover, we prove that this condition always holds when a ball constraint is present. \item In Section \ref{sec:obtain.CTP.SDP} we provide a linear programming approach to check whether a POP has CTP. With this approach we prove in Section \ref{sec:special.POP.CTP} that several special classes of POPs (including POPs on a ball, annulus, simplex) have CTP.
\item Our final contribution is to handle sparse large-scale POPs by integrating sparsity-exploiting techniques into the CTP-exploiting framework. \end{enumerate} For practical implementation we provide a software library called ctpPOP, which models each moment SDP-relaxation of a POP as a standard SDP with CTP and then solves this SDP with CGAL, or with the spectral method (SM) combined with nonsmooth optimization solvers (LMBM or PBM). In Section \ref{sec:benchmark} we provide extensive numerical experiments to illustrate the efficiency and scalability of ctpPOP with the CGAL solver. In all our randomly generated POPs with different sparsity structures, the relative gap of the optimal value provided by CGAL w.r.t. the optimal value provided by Mosek is below 1\%. Because of its very cheap cost per iteration, CGAL is more suitable for particularly bulky SDPs (such as moment SDP-relaxations of POPs) than other solvers (e.g. COSMO). For instance, for minimizing a \emph{dense} quadratic polynomial on the unit ball with up to $100$ variables, CGAL returns the optimal value of the second-order moment SDP relaxation within $6$ hours on a standard laptop, while Mosek (considered the state-of-the-art IPM SDP solver) runs out of memory. Similarly, for minimizing a \emph{sparse} quadratic polynomial involving a thousand variables, with a ball constraint on each clique of variables, CGAL spends around two thousand seconds to solve the second-order moment SDP-relaxation, while Mosek again runs out of memory. The largest clique of this POP involves $41$ variables. The classical optimal power flow (OPF) problem without constraints on current magnitudes (as in \cite{josz2016ac,godard2019novel}) can be formulated as a POP with ball and annulus constraints. For many instances, Shor's relaxation already provides the global optimum. However, for illustration purposes we have compared CGAL and Mosek for solving the second-order CS-TS relaxation for one instance ``case89\_pegase\_\_api" from the PGLib-OPF database\footnote{https://github.com/power-grid-lib/pglib-opf}. The largest block size and the number of equality constraints of this SDP are around 1.7 thousand and 8 million, respectively. While Mosek failed because of memory issues, CGAL still returned the optimal value in 2 days, with a relative gap of less than 0.6\% w.r.t. a local optimal value. \section{Notation and preliminary results} With $\mathbf{x} = (x_1,\dots,x_n)$, let ${\mathbb R}[\mathbf{x}]$ stand for the ring of real polynomials and let $\Sigma[\mathbf{x}]\subseteq{\mathbb R}[\mathbf{x}]$ be the subset of sum of squares (SOS) polynomials. Their restrictions to polynomials of degree at most $d$ and $2d$ are denoted by ${\mathbb R}[\mathbf{x}]_d$ and $\Sigma[\mathbf{x}]_d$ respectively. For $\alpha = (\alpha_1,\dots,\alpha_n) \in {\mathbb N}^n$, let $|\alpha|:= \alpha_1 + \dots + \alpha_n$. Let ${\mathbb N}^n_d:=\{\alpha\in{\mathbb N}^n : |\alpha|\le d\}$. Let $(\mathbf{x}^{ \alpha})_{\alpha\in{\mathbb N}^n}$ be the canonical monomial basis of ${\mathbb R}[\mathbf{x}]$ (sorted w.r.t. the graded lexicographic order) and $\mathbf v_d(\mathbf{x})$ be the vector of monomials of degree up to $d$, with length $\s(d,n) := \binom{n+d}{n}$. When it is clear from the context, we also write $\s(d)$ instead of $\s(d,n)$.
A polynomial $p\in{\mathbb R}[\mathbf{x}]_d$ can be written as $p(\mathbf{x})\,=\,\sum_{\alpha\in{\mathbb N}^n_d} p_\alpha\,\mathbf{x}^\alpha\,=\,\mathbf{p}^\top\mathbf v_d(\mathbf{x})$, where $\mathbf{p}=(p_\alpha)\in{\mathbb R}^{\s(d)}$ is the vector of coefficients in the canonical monomial basis. For $p\in{\mathbb R}[\mathbf{x}]$, let $\lceil p\rceil:=\lceil{\rm deg}(p)/2\rceil$. For a positive integer $m$, let $[m]:=\{1,2,\ldots,m\}$. The $l_1$-norm of a polynomial $p$ is given by the $l_1$-norm of its vector of coefficients $\mathbf{p}$, that is $\|\mathbf{p}\|_1 := \sum_{\alpha} |p_\alpha|$. Given $\mathbf{a}\in{\mathbb R}^n$, the $l_2$-norm of $\mathbf{a}$ is $\|\mathbf{a}\|_2:=(a_1^2+\dots+a_n^2)^{1/2}$ and the maximum norm of $\mathbf{a}$ is $\|\mathbf{a}\|_{\infty}:=\max\{|a_j|:j\in[n]\}$. Given a subset $\mathcal{S}$ of real symmetric matrices, let $\mathcal{S}^+:=\{\mathbf X\in \mathcal{S}\,:\, \mathbf X\succeq 0\}$. For $I\subseteq [n]$, let $\mathbf x(I):=\{x_j:j\in I\}$ and ${\mathbb N}^I_d:=\{\alpha\in{\mathbb N}^n_d:\supp(\alpha)\subseteq I\}$. \paragraph{Polynomial optimization problem.} A polynomial optimization problem (POP) is defined as \begin{equation}\label{eq:POP.def} f^\star:=\inf \{f(\mathbf x)\ :\ \mathbf x\in S(g)\cap V(h)\}\,, \end{equation} where $S(g)$ and $V(h)$ are a basic semialgebraic set and a real variety defined respectively by: \begin{eqnarray} \nonumber S(g) &:=&\{\,\mathbf x\in{\mathbb R}^n:\: g_i(\mathbf x)\ge 0\,,\,i\in [m]\,\}\\ \label{eq:semialg.set.real.variety} V(h) &:=&\{\,\mathbf x\in{\mathbb R}^n:\: h_j(\mathbf x)=0\,,\,j\in [l]\,\}\,, \end{eqnarray} for some polynomials $f,g_i,h_j\in{\mathbb R}[\mathbf x]$ with $g:=\{g_i\}_{i\in[m]}$, $h:=\{h_j\}_{j\in[l]}$. We will assume that POP \eqref{eq:POP.def} has at least one global minimizer. \paragraph{Riesz linear functional.} Given a real-valued sequence $\mathbf y=(y_\alpha)_{\alpha\in{\mathbb N}^n}$, define the Riesz linear functional $L_{\mathbf y}:{\mathbb R}[ \mathbf x ] \to {\mathbb R}$, $f\mapsto {L_{\mathbf y}}( f ) := \sum_{\alpha} f_\alpha y_\alpha$. Let $d$ be a positive integer. A real infinite (resp. finite) sequence $( y_\alpha)_{\alpha\in {\mathbb N}^n}$ (resp. $( y_\alpha)_{\alpha \in {\mathbb N}^n_d}$) has a \emph{representing measure} if there exists a finite Borel measure $\mu$ such that $y_\alpha = \int_{{\mathbb R}^n} {\mathbf x^\alpha d\mu(\mathbf x)}$ for every $\alpha \in {{\mathbb N}^n}$ (resp. $\alpha \in {{\mathbb N}^n_d}$). In this case, $( y_\alpha)_{\alpha \in {\mathbb N}^n}$ is called the moment sequence of $\mu$. We denote by $\supp(\mu)$ the support of a Borel measure $\mu$. \paragraph{Moment/Localizing matrix.} The moment matrix of order $d$ associated with a real-valued sequence $\mathbf y=(y_\alpha)_{\alpha \in {\mathbb N}^n}$ and $d\in {\mathbb N}^{>0}$, is the real symmetric matrix $\mathbf M_d(\mathbf y)$ of size $\s(d)$, with entries $( y_{\alpha + \beta })_{\alpha,\beta\in {\mathbb N}^n_d} $. The localizing matrix of order $d$ associated with $\mathbf y=(y_\alpha)_{\alpha \in {\mathbb N}^n}$ and $p = \sum_{\gamma} p_\gamma \mathbf x^\gamma \in {\mathbb R}[\mathbf x]$, is the real symmetric matrix $\mathbf M_d(p\,\mathbf y)$ of size $\s(d)$ with entries $(\sum_\gamma {{p_\gamma }{y_{\gamma + \alpha + \beta }}})_{\alpha, \beta\in {\mathbb N}^n_d}$.
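As an illustration, the moment matrix $\mathbf M_d(\mathbf y)$ can be assembled directly from its definition; a minimal sketch (in Python, independent of the ctpPOP library; the moment sequence \texttt{y} is stored as a dictionary keyed by exponent tuples, and the basis ordering is illustrative rather than graded lexicographic):
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def monomials(n, d):
    """All exponent tuples alpha in N^n with |alpha| <= d,
    i.e. the index set of the monomial basis v_d(x)."""
    alphas = []
    for t in range(d + 1):
        for c in combinations_with_replacement(range(n), t):
            a = [0] * n
            for i in c:
                a[i] += 1
            alphas.append(tuple(a))
    return alphas

def moment_matrix(y, n, d):
    """M_d(y): entry (alpha, beta) is y_{alpha + beta}."""
    basis = monomials(n, d)
    return np.array([[y[tuple(a + b for a, b in zip(al, be))]
                      for be in basis] for al in basis])

# Example: moments of the Dirac measure at x = (1, 2), so y_alpha = x^alpha.
n, d, x = 2, 1, np.array([1.0, 2.0])
y = {a: np.prod(x**np.array(a)) for a in monomials(n, 2*d)}
print(moment_matrix(y, n, d))  # rank-one psd matrix v_d(x) v_d(x)^T
\end{verbatim}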
\paragraph{Quadratic module.} Given $g=\{g_i:i\in[m]\}\subseteq{\mathbb R}[\mathbf{x}]$, the \emph{quadratic module} associated with $g$ is defined by $Q(g): = \{ \sigma_0+\sum_{i \in[m]}\sigma_ig_i \ :\ \sigma_0 \in \Sigma[ \mathbf x]\,,\, \sigma_i \in \Sigma[ \mathbf x]\}$, and for a positive integer $k$, the set $Q_k(g): = \{ \sigma_0+\sum_{i \in[m]}\sigma_ig_i \,:\, \sigma_0 \in \Sigma[ \mathbf x]_k\,,\, \sigma_i \in \Sigma[ \mathbf x]_{k-\lceil g_i \rceil}\}$ is the truncation of $Q(g)$ of order $k$. \paragraph{Ideal.} Given $h=\{h_i:i\in[l]\}\subseteq{\mathbb R}[\mathbf{x}]$, the set $I(h): = \{ \sum_{j \in[l]}\psi_jh_j \ :\ \psi_j \in {\mathbb R}[ \mathbf x]\}$ is the \emph{ideal} generated by $h$, and the set $I_k(h): = \{ \sum_{j \in[l]}\psi_jh_j \,:\, \psi_j \in {\mathbb R}[ \mathbf x]_{2(k-\lceil h_j \rceil)}\}$ is the truncation of $I(h)$ of order $k$. \paragraph{Archimedeanity.} If there exists $R>0$ such that $R-\|\mathbf x\|_2^2\in Q(g)+I(h)$, then $S(g)\cap V(h)\subseteq \mathcal B_R$, where $\mathcal{B}_R:=\{\mathbf x\in{\mathbb R}^n\,:\,\|\mathbf x\|_2\le \sqrt R\}$. In this case, we say that $Q(g)+I(h)$ is \emph{Archimedean} \cite{lasserre2010moments}. \paragraph{The Moment-SOS hierarchy \cite{lasserre2001global}.} Given a POP \eqref{eq:POP.def}, consider the following associated hierarchy of SOS relaxations indexed by $k\in{\mathbb N}^{\ge k_{\min}}$ with $k_{\min}:=\max\{\lceil f\rceil, \{\lceil g_i\rceil\}_{i\in[m]},\{\lceil h_j\rceil\}_{j\in[l]}\}$: \begin{equation}\label{eq:sos.hierarchy.0} \rho_k\,:=\,\sup \{\,\xi \in{\mathbb R}\ :\ f-\xi \in Q_k(g)+I_k(h)\}\,. \end{equation} For each $\sigma\in\Sigma[\mathbf x]_d$, there exists $\mathbf G\succeq 0$ such that $\sigma=\mathbf v_d^\top\mathbf G\mathbf v_d$. Thus for each $k\in{\mathbb N}^{\ge k_{\min}}$, \eqref{eq:sos.hierarchy.0} can be rewritten as an SDP: \begin{equation}\label{eq:sos.hierarchy} \rho_k = \sup \limits_{\xi,\mathbf G_i,\mathbf u_j} \left\{\xi\ \left| \begin{array}{rl} &\mathbf G_i \succeq 0\,,\,i\in\{0\}\cup[m]\,,\, f-\xi=\mathbf v_k^\top \mathbf G_0 \mathbf v_k\\ &\quad\quad\quad\qquad\qquad+\sum_{i\in[m]} g_i \mathbf v_{k-\lceil g_i\rceil}^\top \mathbf G_i \mathbf v_{k-\lceil g_i\rceil}\\ &\quad\quad\quad\qquad\qquad+ \sum_{j\in[l]} h_j \mathbf v_{2(k-\lceil h_j\rceil)}^\top \mathbf u_j \end{array}\right. \right\}\,. \end{equation} For every $k\in {\mathbb N}^{\ge k_{\min}}$, the dual of \eqref{eq:sos.hierarchy} reads as \begin{equation}\label{eq:moment.hierarchy} \tau_k \,:= \,\inf \limits_{\mathbf y \in {{\mathbb R}^{\s({2k})} }} \left\{ L_{\mathbf y}(f)\ \left|\begin{array}{rl} & \mathbf M_k(\mathbf y) \succeq 0\,,\:y_{\mathbf 0}\,=\,1\\ &\mathbf M_{k - \lceil g_i \rceil }(g_i\;\mathbf y) \succeq 0\,,\,i\in[m]\\ &\mathbf M_{k - \lceil h_j \rceil }(h_j\;\mathbf y) = 0\,,\,j\in[l] \end{array} \right. \right\}\,. \end{equation} If $Q(g)+I(h)$ is Archimedean, then both $(\rho_k)_{k\in{\mathbb N}^{\ge k_{\min}}}$ and $(\tau_k)_{k\in{\mathbb N}^{\ge k_{\min}}}$ converge to $f^\star$. For details on the Moment-SOS hierarchy and its various applications the interested reader is referred to \cite{lasserre2010moments}. \section{Exploiting CTP for dense POPs} \label{sec:ctp.dense.pop} This section is devoted to developing a framework to exploit CTP for dense POPs. We provide a sufficient condition for a POP to have CTP, as well as a series of linear programs to check whether the sufficient condition holds. In addition we show that several special classes of POPs have CTP.
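To make the moment relaxation \eqref{eq:moment.hierarchy} concrete before turning to CTP, the following sketch (in Python, assuming the modeling package \texttt{cvxpy} with its default SDP solver; it is an illustration, not part of ctpPOP) solves the order-$1$ relaxation of the toy problem $\min\{x : 1-x^2\ge 0\}$, whose optimal value is $-1$. Note in passing that every feasible point satisfies $\trace \mathbf M_1(\mathbf y)+\mathbf M_0(g_1\mathbf y)=(1+y_2)+(1-y_2)=2$, a first glimpse of the constant trace property studied below:
\begin{verbatim}
import cvxpy as cp

# Moment relaxation of order k = 1 for: min x  s.t.  g1(x) = 1 - x^2 >= 0.
# Moment variables: y0 = 1 (normalization), y1, y2.
y1, y2 = cp.Variable(), cp.Variable()

M1 = cp.bmat([[1, y1], [y1, y2]])   # moment matrix M_1(y)
loc = 1 - y2                        # localizing matrix M_0(g1 y), size 1

prob = cp.Problem(cp.Minimize(y1), [M1 >> 0, loc >= 0])
prob.solve()
print(prob.value)   # approx -1, attained by the Dirac measure at x = -1
\end{verbatim}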
\subsection{CTP for dense POPs} \label{sec:spectral.relax} First let us define CTP for a POP. To simplify notation, for every $k\in{\mathbb N}^{\ge k_{\min}}$, denote by $\mathcal{S}_k$ the set of real symmetric matrices of size $s_k:=\s(k)+\sum_{i\in[m]}\s(k-\lceil g_i\rceil)$ in block diagonal form $\mathbf X=\diag(\mathbf X_0,\dots,\mathbf X_m)$, where $\mathbf X_0$ is of size $\s(k)$ and $\mathbf X_i$ is of size $\s(k-\lceil g_i\rceil)$ for $i\in [m]$.\\ Letting $\mathbf D_k(\mathbf y):=\diag(\mathbf M_k(\mathbf y),\mathbf M_{k-\lceil g_1\rceil}(g_1\mathbf y),\dots, \mathbf M_{k-\lceil g_m\rceil}(g_m\mathbf y))$, SDP \eqref{eq:moment.hierarchy} can be rewritten in the form: \begin{equation}\label{eq:dual.diag.moment.mat} \tau_k \,:= \,\inf\limits_{\mathbf y \in {\mathbb R}^{\s(2k)} }\left\{ L_{\mathbf y}(f)\ \left|\begin{array}{rl} &\mathbf D_k(\mathbf y) \in \mathcal S_k^+\,,\,y_{\mathbf 0}=1\,,\\ &\mathbf M_{k-\lceil h_i\rceil}(h_i\mathbf y)=0\,,\,i\in[l] \end{array} \right. \right\}\,. \end{equation} \begin{definition}\label{def:ctp} (CTP for a POP) We say that POP \eqref{eq:POP.def} has CTP if for every $k\in {\mathbb N}^{\ge k_{\min}}$, there exist $a_k>0$ and a positive definite matrix $\mathbf P_{k}\in \mathcal{S}_k$ such that for all $\mathbf y \in {\mathbb R}^{\s(2k)}$, \begin{equation} \left. \begin{array}{rl} &\mathbf M_{k-\lceil h_i\rceil}(h_i\mathbf y)=0\,,\,i\in[l]\,,\\ &y_{\mathbf 0}=1 \end{array} \right\}\Rightarrow \trace(\mathbf P_{k} \mathbf D_k(\mathbf y) \mathbf P_{k})=a_k\,. \end{equation} \end{definition} In other words, we say that POP \eqref{eq:POP.def} has CTP if each moment relaxation \eqref{eq:dual.diag.moment.mat} has an equivalent form involving a psd matrix whose trace is constant. In this case, we call $a_k$ the constant trace and $\mathbf P_k$ the basis transformation matrix. In the next subsection, we provide a sufficient condition for POP \eqref{eq:POP.def} to have CTP. \begin{example} (CTP for equality constrained POPs on a sphere \cite{mai2020hierarchy}) If $g=\emptyset$ and $h_1=R-\|\mathbf x\|_2^2$ for some $R>0$, then POP \eqref{eq:POP.def} has CTP with $a_k=(R+1)^k$ and $\mathbf P_{k}:=\diag((\theta^{1/2}_{k,\alpha})_{\alpha\in{\mathbb N}^n_k})$, where $(\theta_{k,\alpha})_{\alpha\in{\mathbb N}^n_k}\subseteq {\mathbb R}^{>0}$ satisfies $(1+\|\mathbf x\|_2^2)^k=\sum_{\alpha\in{\mathbb N}^n_k}\theta_{k,\alpha}\mathbf x^{2\alpha}$, for all $k\in{\mathbb N}^{\ge k_{\min}}$. \end{example} We now provide a general method to solve a POP with CTP. We first convert the $k$-th order moment relaxation \eqref{eq:dual.diag.moment.mat} of this POP to a standard primal SDP problem with CTP and then leverage appropriate first-order algorithms that exploit CTP to solve the resulting SDP problem. Suppose POP \eqref{eq:POP.def} has CTP. For every $k\in{\mathbb N}^{\ge k_{\min}}$, letting $\mathbf X=\mathbf P_{k}\mathbf D_k(\mathbf y)\mathbf P_{k}$, \eqref{eq:dual.diag.moment.mat} can be rewritten as \begin{equation}\label{eq:SDP.form} \tau_k = \inf _{\mathbf X\in \mathcal{S}_k^+} \{ \left< \mathbf C_k,\mathbf X\right>\,:\,\mathcal{A}_k \mathbf X=\mathbf b_k\}\,, \end{equation} where $\mathcal{A}_k:\mathcal{S}_k\to {\mathbb R}^{\zeta_k}$ is a linear operator such that $\mathcal{A}_k\mathbf X=(\left< \mathbf A_{k,1},\mathbf X\right>,\dots,\left< \mathbf A_{k,\zeta_k},\mathbf X\right>)$ with $\mathbf A_{k,i} \in \mathcal{S}_k$, $i\in[\zeta_k]$, $\mathbf C_k \in \mathcal{S}_k$ and $\mathbf b_k\in {\mathbb R}^{\zeta_k}$.
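The sphere example above is easy to check numerically: for the moments of any probability measure supported on $\|\mathbf x\|_2^2=R$, one has $\trace(\mathbf P_k\mathbf M_k(\mathbf y)\mathbf P_k)=\sum_\alpha \theta_{k,\alpha}y_{2\alpha}=L_{\mathbf y}\bigl((1+\|\mathbf x\|_2^2)^k\bigr)=(R+1)^k$. A self-contained sketch (in Python; the sampled measure and problem sizes are arbitrary placeholders) follows:
\begin{verbatim}
import numpy as np
from itertools import product
from math import factorial

n, k, R = 2, 2, 3.0
rng = np.random.default_rng(1)

# Empirical moments of a probability measure on the sphere ||x||_2^2 = R.
pts = rng.standard_normal((500, n))
pts *= np.sqrt(R) / np.linalg.norm(pts, axis=1, keepdims=True)
mom = lambda a: np.mean(np.prod(pts**np.array(a), axis=1))

# Monomial exponents of degree <= k, and the moment matrix M_k(y).
basis = [a for a in product(range(k+1), repeat=n) if sum(a) <= k]
M = np.array([[mom(tuple(i+j for i, j in zip(al, be))) for be in basis]
              for al in basis])

# theta_{k,alpha}: multinomial coefficients of (1+||x||^2)^k in x^{2 alpha}.
theta = lambda a: (factorial(k)
                   / (factorial(k - sum(a)) * np.prod([factorial(i) for i in a])))
P = np.diag([np.sqrt(theta(a)) for a in basis])

print(np.trace(P @ M @ P), (R + 1)**k)  # both equal (R+1)^k = 16
\end{verbatim}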
Appendix \ref{sec:convert.standart.SDP} describes how to convert SDP \eqref{eq:dual.diag.moment.mat} to the form \eqref{eq:SDP.form}. The dual of SDP \eqref{eq:SDP.form} reads as \begin{equation}\label{eq:SDP.form.dual} \rho_k = \sup _{\mathbf z\in{\mathbb R}^{\zeta_k}} \,\{ \,\mathbf b_k^\top\mathbf z\,:\, \mathcal{A}_k^\top \mathbf z-\mathbf C_k\in \mathcal S_k^+\}\,, \end{equation} where $\mathcal{A}_k^\top:{\mathbb R}^{\zeta_k}\to \mathcal{S}_k$ is the adjoint operator of $\mathcal{A}_k$, i.e., $\mathcal{A}_k^\top\mathbf z=\sum_{i\in[\zeta_k]} z_i\mathbf A_{k,i}$. After replacing $(\mathcal{A}_k, \mathbf{A}_{k,i}, \mathbf b_k, \mathbf C_k, \mathcal{S}_k, \zeta_k, s_k, \tau_k, \rho_k, a_k)$ by $(\mathcal{A}, \mathbf{A}_{i}, \mathbf b, \mathbf C, \mathcal{S}, \zeta, s, \tau, \rho, a)$, the primal-dual \eqref{eq:SDP.form}-\eqref{eq:SDP.form.dual} has an equivalent formulation as the primal-dual \eqref{eq:SDP.form.0}-\eqref{eq:SDP.form.dual.0}; see also Appendix \ref{sec:sdp.ctp.dense} with $\omega=m+1$ and $s^{\max}=\s(k)$. Then two first-order algorithms (CGAL and SM) are leveraged for solving the primal-dual \eqref{eq:SDP.form.0}-\eqref{eq:SDP.form.dual.0}; see Appendix \ref{sec:sdp.ctp.dense} and Appendix \ref{sec:spectral.method.dense}. \subsection{A sufficient condition for a POP to have CTP} \label{sec:sufficient.CTP} In this section, we provide a sufficient condition for POP \eqref{eq:POP.def} to have CTP. For $k\in{\mathbb N}^{\ge k_{\min}}$, let $Q_k^\circ(g)$ be the interior of the truncated quadratic module $Q_k(g)$, i.e., $Q_k^\circ(g):=\{\mathbf v_k^\top \mathbf G_0 \mathbf v_k+\sum_{i\in[m]} g_i \mathbf v_{k-\lceil g_i\rceil}^\top \mathbf G_i \mathbf v_{k-\lceil g_i\rceil} \,:\, \mathbf G_i\succ 0, \quad i\in\{0\}\cup[m]\}$. \begin{theorem}\label{theo:suff.cond.CTP} The following statements hold: \begin{enumerate} \item If one of the following equivalent conditions holds for all $k\in{\mathbb N}^{\ge k_{\min}}$: \begin{eqnarray} \nonumber {\mathbb R}^{>0}\subseteq Q_k^\circ(g)+I_k(h) & \Leftrightarrow & \forall \delta>0\,, \ \delta\in Q_k^\circ(g)+I_k(h)\\ \label{eq:suffi.con.ideal} &\Leftrightarrow & 1\in Q_k^\circ(g)+I_k(h)\,, \end{eqnarray} then POP \eqref{eq:POP.def} has CTP, as in Definition \ref{def:ctp}. \item Assume that $h=\emptyset$ and $S(g)$ has nonempty interior. Then POP \eqref{eq:POP.def} has CTP if and only if \begin{equation}\label{eq:pos.real.inter.quad} {\mathbb R}^{>0}\subseteq Q_k^\circ(g)\,,\, \forall k\in{\mathbb N}^{\ge k_{\min}}\,. \end{equation} \end{enumerate} \end{theorem} The proof of Theorem \ref{theo:suff.cond.CTP} is postponed to Appendix \ref{proof:suff.cond.CTP}. The following lemma will be used later on. \begin{lemma}\label{lem:equality} Let $R>0$. For all $k\in{\mathbb N}^{\ge 1}$, one has \begin{equation} (R+1)^k=(1+\|\mathbf x\|^2_2)^k+(R-\|\mathbf x\|^2_2)\sum_{j=0}^{k-1}(R+1)^j(1+\|\mathbf x\|^2_2)^{k-j-1}\,. \end{equation} \end{lemma} \begin{proof} Let $k\in{\mathbb N}^{\ge 1}$. Letting $a=R+1$ and $b=1+\|\mathbf x\|^2_2$, the desired equality follows from $a^k-b^k=(a-b)\sum_{j=0}^{k-1}a^jb^{k-1-j}$. \end{proof} The next result states that the sufficient condition in Theorem \ref{theo:suff.cond.CTP} holds whenever a ball constraint is present in the POP's description. For a real symmetric matrix $\mathbf A$, denote the largest eigenvalue of $\mathbf A$ by $\lambda_{\max}(\mathbf A)$.
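Lemma \ref{lem:equality} is a telescoping identity and can be verified symbolically; a quick sketch (in Python, using \texttt{sympy}, with the symbol \texttt{t} standing for $\|\mathbf x\|_2^2$):
\begin{verbatim}
import sympy as sp

R, t = sp.symbols('R t')  # t stands for ||x||_2^2
for k in range(1, 6):
    lhs = (R + 1)**k
    rhs = (1 + t)**k + (R - t) * sum((R + 1)**j * (1 + t)**(k - j - 1)
                                     for j in range(k))
    assert sp.expand(lhs - rhs) == 0  # the identity of the Lemma holds
print("verified for k = 1..5")
\end{verbatim}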
\begin{theorem}\label{theo:generic.ctp} If $R-\|\mathbf x\|_2^2\in g$ for some $R>0$ then the inclusions \eqref{eq:pos.real.inter.quad} hold and therefore POP \eqref{eq:POP.def} has CTP. \end{theorem} \begin{proof} Without loss of generality, set $g_m:=R-\|\mathbf x\|_2^2$ and let $k\in{\mathbb N}^{\ge k_{\min}}$ be fixed. By Lemma \ref{lem:equality}, $(R+1)^k=\Theta+g_m\Lambda$, where $\Theta:=(1+\|\mathbf x\|^2_2)^k$ and $\Lambda:=\sum_{j=0}^{k-1}(R+1)^j(1+\|\mathbf x\|^2_2)^{k-j-1}$. Note that: \begin{itemize} \item $\Theta=\sum_{\alpha\in{\mathbb N}^n_{k}}\theta_{\alpha}\mathbf x^{2\alpha}=\mathbf v_k^\top \mathbf G_0\mathbf v_k$ for some $(\theta_{\alpha})_{\alpha\in{\mathbb N}^n_{k}} \subseteq {\mathbb R}^{>0}$; \item $\Lambda=\sum_{\alpha\in{\mathbb N}^n_{k-1}}\lambda_{\alpha}\mathbf x^{2\alpha}=\mathbf v_{k-1}^\top \mathbf G_m\mathbf v_{k-1}$ for some $(\lambda_{\alpha})_{\alpha\in{\mathbb N}^n_{k-1}} \subseteq {\mathbb R}^{>0}$. \end{itemize} Here $\mathbf G_0=\diag((\theta_{\alpha})_{\alpha\in{\mathbb N}^n_{k}})$ and $\mathbf G_m=\diag((\lambda_{\alpha})_{\alpha\in{\mathbb N}^n_{k-1}})$ are both positive definite. Then we have $(R+1)^k=\mathbf v_k^\top \mathbf G_0\mathbf v_k+g_m\mathbf v_{k-1}^\top \mathbf G_m\mathbf v_{k-1}$. Denote by $\mathbf I_t$ the identity matrix of size $\s(t)$ for $t\in{\mathbb N}$. Let $\mathbf W$ be a real symmetric matrix such that $\sum_{i\in[m-1]} g_i\mathbf v_{k-\lceil g_i \rceil}^\top\mathbf I_{k-\lceil g_i \rceil} \mathbf v_{k-\lceil g_i \rceil} = \mathbf v_k^\top \mathbf W\mathbf v_k$. Since $\mathbf G_0\succ 0$, there exists $\delta>0$ such that $\mathbf G_0-\delta \mathbf W\succ 0$. Indeed, \begin{equation} \mathbf G_0-\delta \mathbf W\succ 0\Leftrightarrow \mathbf I_k\succ\delta \mathbf G_0^{-1/2} \mathbf W \mathbf G_0^{-1/2} \Leftrightarrow 1> \delta \lambda_{\max}(\mathbf G_0^{-1/2} \mathbf W \mathbf G_0^{-1/2})\,, \end{equation} yielding the selection $\delta=1/(|\lambda_{\max}(\mathbf G_0^{-1/2} \mathbf W \mathbf G_0^{-1/2})|+1)$. Then \begin{equation*} (R+1)^k=\mathbf v_k^\top (\mathbf G_0-\delta \mathbf W)\mathbf v_k+\delta \sum_{i\in[m-1]} g_i\mathbf v_{k-\lceil g_i \rceil}^\top\mathbf I_{k-\lceil g_i \rceil} \mathbf v_{k-\lceil g_i \rceil}+g_m\mathbf v_{k-1}^\top \mathbf G_m\mathbf v_{k-1}\,, \end{equation*} which implies $(R+1)^k\in Q^\circ_k(g)$, which in turn yields the desired conclusion. \end{proof} The next result is a consequence of Theorem \ref{theo:generic.ctp}. It states that if a POP has a ball constraint then the corresponding Moment-SOS relaxations satisfy Slater's condition. \begin{corollary}\label{coro:stric.fea.sol.sos} Assume that $R-\|\mathbf x\|_2^2\in g$ for some $R>0$. Then Slater's condition holds for SDP \eqref{eq:sos.hierarchy} for all $k\ge k_{\min}$. As a consequence, strong duality holds for the primal-dual \eqref{eq:sos.hierarchy}-\eqref{eq:moment.hierarchy} for all $k\ge k_{\min}$. \end{corollary} \begin{proof} It suffices to prove that SDP \eqref{eq:sos.hierarchy} has a strictly feasible solution for all $k\ge k_{\min}$. Let $k\ge k_{\min}$ be fixed. By \cite[Proposition 5.8]{marshall}, there exist $\sigma_0\in\Sigma[\mathbf{x}]_k$, $\sigma\in\Sigma[\mathbf{x}]_{k-1}$ and $\lambda\in{\mathbb R}$ such that $f+\lambda=\sigma_0+(R-\|\mathbf{x}\|_2^2)\sigma$. Thus $f+\lambda\in Q_k(g)$. By Theorem \ref{theo:generic.ctp}, $1\in Q_k^\circ(g)$ and therefore $f+1+\lambda\in Q^\circ_k(g)$, which yields the desired conclusion.
\end{proof} \begin{remark}\label{rem:direct.cons.trace} From the proofs of Theorem \ref{theo:generic.ctp} and Theorem \ref{theo:suff.cond.CTP}, the constant trace $a_k$ and the basis transformation matrix $\mathbf P_k$ (Definition \ref{def:ctp}) can be taken as \begin{equation*} a_k=(R+1)^k\quad\text{and}\quad \mathbf P_k=\diag((\mathbf G_0-\delta \mathbf W)^{1/2},\sqrt{\delta}\mathbf I_{k-\lceil g_1 \rceil},\dots,\sqrt{\delta}\mathbf I_{k-\lceil g_{m-1} \rceil},\mathbf G_m^{1/2})\,. \end{equation*} However, this choice has poor numerical properties. In the next subsection we provide a series of linear programs (LPs), inspired by the inclusions \eqref{eq:suffi.con.ideal}, to obtain a constant trace $a_k$ and a basis transformation matrix $\mathbf P_k$ with better numerical performance. \end{remark} \subsection{Verifying CTP for POPs by solving linear programs} \label{sec:obtain.CTP.SDP} For any $k\in{\mathbb N}^{\ge k_{\min}}$, let $\hat{\mathcal S}_{k}$ be the set of real diagonal matrices of size $\s(k)$ and consider the following LP: \begin{equation}\label{eq:find.CTP.diag} \inf \limits_{\xi,\mathbf G_i,\mathbf u_j} \left\{\xi\ \left|\begin{array}{rl} &\mathbf G_0-\mathbf I_0 \in \hat{\mathcal S}_{k}^+\,,\,\mathbf G_i-\mathbf I_i \in \hat{\mathcal S}_{k-\lceil g_i\rceil}^+\,,\,i\in[m]\,,\\ &\xi=\mathbf v_k^\top \mathbf G_0 \mathbf v_k+\sum_{i\in[m]} g_i \mathbf v_{k-\lceil g_i\rceil}^\top \mathbf G_i \mathbf v_{k-\lceil g_i\rceil}\\ &\qquad\qquad\qquad\qquad+ \sum_{j\in[l]} h_j \mathbf v_{2(k-\lceil h_j\rceil)}^\top \mathbf u_j \end{array} \right.\right\}\,, \end{equation} where $\mathbf I_i$ is the identity matrix of matching size for $i\in\{0\}\cup [m]$. \begin{lemma}\label{lem:feas.LP} If LP \eqref{eq:find.CTP.diag} has a feasible solution $(\xi_k,\mathbf G_{i,k},\mathbf u_{j,k})$ for every $k\in{\mathbb N}^{\ge k_{\min}}$, then POP \eqref{eq:POP.def} has CTP with $a_k=\xi_k$ and $\mathbf P_k=\diag(\mathbf G_{0,k}^{1/2},\dots,\mathbf G_{m,k}^{1/2})$. \end{lemma} The proof of Lemma \ref{lem:feas.LP} is similar to that of Theorem \ref{theo:suff.cond.CTP} with $a_k=\xi_k$ and $\mathbf G_{i}=\mathbf G_{i,k}$, $i\in\{0\}\cup [m]$. Since small constant traces are highly desirable for efficiency of first-order algorithms (e.g. CGAL), we search for an optimal solution of LP \eqref{eq:find.CTP.diag} instead of just a feasible solution. \begin{remark} One can extend the classes of diagonal matrices $\hat{\mathcal S}_{k}$, $\hat{\mathcal S}_{k-\lceil g_i\rceil}$ in \eqref{eq:find.CTP.diag} to obtain a smaller constant trace. For instance, one can define $\hat{\mathcal S}_{k}$, $\hat{\mathcal S}_{k-\lceil g_i\rceil}$ to be the classes of symmetric block diagonal matrices with block size $2$. As shown in \cite[Lemma 4.3]{wang2019second}, \eqref{eq:find.CTP.diag} then becomes a second-order cone program (SOCP) which can also be solved efficiently. \end{remark}
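LP \eqref{eq:find.CTP.diag} is a plain coefficient-matching linear program and is cheap to set up. As an illustration (a hand-assembled toy instance, in Python, using \texttt{scipy.optimize.linprog}; this is not the ctpPOP implementation), take $n=1$, $k=1$, $g_1=1-x^2$ and $h=\emptyset$: with $\mathbf v_1=(1,x)^\top$ and diagonal Gram matrices $\mathbf G_0=\diag(a_0,a_1)$, $\mathbf G_1=(b_0)$, matching coefficients in $\xi=a_0+a_1x^2+b_0(1-x^2)$ gives $\xi=a_0+b_0$ and $a_1=b_0$:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Decision variables z = (xi, a0, a1, b0); minimize xi subject to
# xi - a0 - b0 = 0 (constant term), a1 - b0 = 0 (x^2 term),
# and a0, a1, b0 >= 1 (i.e. G_i - I psd and diagonal).
c = [1.0, 0.0, 0.0, 0.0]
A_eq = [[1.0, -1.0, 0.0, -1.0],
        [0.0,  0.0, 1.0, -1.0]]
b_eq = [0.0, 0.0]
bounds = [(None, None), (1, None), (1, None), (1, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # xi = 2 = (R+1)^k with R = 1: the constant trace a_1
\end{verbatim}
The recovered value $\xi_1=2$ agrees with the constant trace $a_1=(R+1)^1$ of the unit-ball case. \subsection{Special classes of POPs with CTP} \label{sec:special.POP.CTP} In this section we identify two classes of POPs whose CTP can be verified by LP \eqref{eq:find.CTP.diag}. \subsubsection{POPs with ball or annulus constraints on subsets of variables} Consider the following assumption on the inequality constraints of POP \eqref{eq:POP.def}.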
\begin{assumption}\label{ass:bound.cons} There exist a nonnegative integer $r\le m/2$ and \begin{itemize} \item $\overline R_{i}>\underline R_i>0$, $T_i\subseteq [n]$ for $i\in [r]$; \item $\overline R_{j}>0$, $T_{j}\subseteq [n]$ for $j\in [m]\backslash[2r]$ \end{itemize} such that \begin{enumerate}[(1)] \item $(\cup_{i\in[r]} T_i) \cup (\cup_{j\in[m]\backslash[2r]} T_{j})=[n]$; \item $g_i:=\|\mathbf x(T_i)\|^2_2-\underline R_i$, $g_{i+r}:=\overline R_{i}-\|\mathbf x(T_i)\|^2_2$ for $i\in [r]$; \item $g_{j}:=\overline R_{j}-\|\mathbf x(T_{j})\|^2_2$ for $j\in [m]\backslash[2r]$. \end{enumerate} \end{assumption} Notice that if Assumption \ref{ass:bound.cons} holds then POP \eqref{eq:POP.def} has $r$ annulus constraints and $(m-2r)$ ball constraints on subsets of variables. Moreover, $Q(g)+I(h)$ is Archimedean due to (1) in Assumption \ref{ass:bound.cons}. \begin{example} Assumption \ref{ass:bound.cons} holds in the following cases: \begin{enumerate}[(1)] \item $m=1$, $r=0$ and $g_1:=\overline R_1-\|\mathbf x\|_2^2$, i.e., $S(g)$ is a ball; \item $m=n$, $r=0$ and $g_i:=\overline R_i-x_i^2$ for $i\in[n]$, i.e., $S(g)$ is a box; \item $m=2$, $r=1$ and $g_1:=\|\mathbf x\|_2^2 - \underline R_1$, $g_2:=\overline R_1-\|\mathbf x\|_2^2$ ($\overline R_1>\underline R_1>0$), i.e., $S(g)$ is an annulus. \end{enumerate} \end{example} \begin{proposition}\label{prop:suff.cond.feas} If Assumption \ref{ass:bound.cons} holds then LP \eqref{eq:find.CTP.diag} has a feasible solution for every $k\in{\mathbb N}^{\ge k_{\min}}$, and therefore POP \eqref{eq:POP.def} has CTP. \end{proposition} The proof of Proposition \ref{prop:suff.cond.feas} is postponed to Appendix \ref{proof:suff.cond.feas}. \subsubsection{POPs with inequality constraints of equivalent degree} \label{sec:CTP.equidegree} We say that polynomials $p_1,\dots,p_t$ are of equivalent degree if $\lceil p_1\rceil =\dots=\lceil p_{t}\rceil$. \begin{assumption}\label{ass:equidegree} Let $m\ge 3$, let $\{g_j\}_{j\in[m-2]}$ be of equivalent degree, and let $L>0$ and $R>0$ be such that $g_{m-1}=L-\sum\limits_{j\in[m-2]}g_j$ and $g_m=R-\|\mathbf x\|_2^2$. \end{assumption} \begin{proposition}\label{prop:CTP.equidegree} If Assumption \ref{ass:equidegree} holds then LP \eqref{eq:find.CTP.diag} has a feasible solution for every $k\in{\mathbb N}^{\ge k_{\min}}$, and therefore POP \eqref{eq:POP.def} has CTP. \end{proposition} \begin{example} Let $R,L>0$ satisfy $R\ge L^2$ and \begin{equation}\label{eq:stand.simplex} m=n+2\,,\, g_i=x_i \textrm{ for } i\in [n]\,,\, g_{n+1}=L-\sum_{i\in[n]}x_i \text{ and }\ g_{n+2}=R-\|\mathbf x\|_2^2\,. \end{equation} Then Assumption \ref{ass:equidegree} holds and $S(g)$ is a simplex. \end{example} When $S(g)$ is compact, we can always reformulate POP \eqref{eq:POP.def} such that Assumption \ref{ass:equidegree} holds. Suppose $S(g)\subseteq \mathcal{B}_R$ for some $R>0$. Let $u=\max_{i\in[m]}\lceil g_i\rceil$. Set $\tilde g_i:=g_i(1+\|\mathbf x\|_2^2)^{u-\lceil g_i\rceil}$ for $i\in[m]$. Let $L$ be a positive number such that $\sum_{i\in[m]}\tilde g_i\le L$ on $S(g)$. Set $\tilde g_{m+1}:=L- \sum_{i\in[m]}\tilde g_i$ and $\tilde g_{m+2}:=R-\|\mathbf x\|_2^2$. \begin{remark}\label{rem:choose.L} For the latter case, one can choose any positive number $L\ge (R+1)^u\sum_{i\in[m]} \|g_i\|_1$.
Indeed, for any $\mathbf z\in S(g)$, and since $\|\mathbf z\|_2^2\le R$: \begin{equation*} |\mathbf z^\alpha|=\prod_{i\in[n]} |z_i|^{\alpha_i}\le \prod_{i\in[n]} (1+\|\mathbf z\|_2^2)^{\alpha_i/2}=(1+\|\mathbf z\|_2^2)^{|\alpha|/2}\le (1+R)^{t}\,,\,\forall \alpha\in {\mathbb N}^n_{2t}\,. \end{equation*} This implies that for every $i\in[m]$, \begin{equation*} \tilde g_i(\mathbf z)\le (1+R)^{u-\lceil g_i\rceil} \sum_{\alpha\in{\mathbb N}^n_{2\lceil g_i\rceil}}|g_\alpha| |\mathbf z^\alpha|\le (1+R)^{u-\lceil g_i\rceil} (R+1)^{\lceil g_i\rceil} \|g_i\|_1 = (1+R)^{u}\|g_i\|_1\,. \end{equation*} Thus we have $\sum_{i\in[m]}\tilde g_i\le (1+R)^{u}\sum_{i\in[m]}\|g_i\|_1$ on $S(g)$. \end{remark} \begin{corollary}\label{coro:equi.pop.ctp} With the above notation, $S(g\cup \{\tilde g_{m+1}, \tilde g_{m+2}\})=S(g)$ and LP \eqref{eq:find.CTP.diag} has a feasible solution when replacing $g$ by $g\cup \{\tilde g_{m+1},\tilde g_{m+2}\}$ for each $k\in{\mathbb N}^{\ge k_{\min}}$. As a result, POP \eqref{eq:POP.def} is equivalent to the new POP \begin{equation} f^\star:=\inf \{f(\mathbf x)\ :\ \mathbf x\in S(g\cup \{\tilde g_{m+1},\tilde g_{m+2}\})\cap V(h)\} \end{equation} which has CTP. \end{corollary} The proof of Corollary \ref{coro:equi.pop.ctp} is postponed to Appendix \ref{proof:coro:equi.pop.ctp}. In the case where POP \eqref{eq:POP.def} does not have CTP and $S(g)$ is compact, Corollary \ref{coro:equi.pop.ctp} provides a way to construct an equivalent POP by including two additional redundant constraints. The CTP of this new POP can then be verified by LP. \subsection{Main algorithm} Algorithm \ref{alg:sol.nonsmooth.hier.B} below solves POP \eqref{eq:POP.def} whose CTP can be verified by LP. \begin{algorithm} \caption{SpecialPOP-CTP} \label{alg:sol.nonsmooth.hier.B} \small \textbf{Input:} POP \eqref{eq:POP.def} and a relaxation order $k\in{\mathbb N}^{\ge k_{\min}}$\\ \textbf{Output:} The optimal value $\tau_k$ of SDP \eqref{eq:SDP.form} \begin{algorithmic}[1] \State Solve LP \eqref{eq:find.CTP.diag} with an optimal solution $(\xi_k,\mathbf G_{i,k},\mathbf u_{j,k})$; \State Let $a_k=\xi_k$ and $\mathbf P_k=\diag(\mathbf G_{0,k}^{1/2},\dots,\mathbf G_{m,k}^{1/2})$; \State Compute the optimal value $\tau_k$ of SDP \eqref{eq:SDP.form} by running an algorithm based on first-order methods that exploits CTP; \end{algorithmic} \end{algorithm} Examples of algorithms based on first-order methods that exploit CTP are CGAL (Algorithm \ref{alg:CGAL} in Appendix \ref{sec:sdp.ctp.dense}) and SM (Algorithm \ref{alg:sol.SDP.CTP.0} in Appendix \ref{sec:spectral.method.dense}). \section{Exploiting CTP for POPs with CS} \label{sec:CTP.correlative.POP} In this section, we extend the CTP-exploiting framework to POPs with sparsity. For clarity of exposition we only consider \emph{correlative sparsity} (CS). However, in Appendix \ref{sec:sparse.POP.TS.CSTS} we also treat \emph{term sparsity} (TS) \cite{wang2019tssos} as well as \emph{correlative-term sparsity} (CS-TS) \cite{wang2020cs}. Since the methodology is very similar to that in the dense case described earlier, we omit details and only present the main results. To begin with, we recall some basic facts on exploiting CS for POP \eqref{eq:POP.def}, initially proposed by Waki et al. \cite{waki2006sums}. \subsection{POPs with CS}\label{sec:sparse.POP} For $\alpha\in{\mathbb N}^n$, let $\supp(\alpha):=\{j\in[n]:\alpha_j>0\}$. Assume $I\subseteq [n]$. Given $\mathbf y=(y_\alpha)_{\alpha\in{\mathbb N}^n_{2d}}$, the moment (resp.
localizing) submatrix associated to $I$ of order $d$ is defined by $\mathbf M_d(\mathbf y,I):=(y_{\alpha+\beta})_{\alpha,\beta\in {\mathbb N}^I_d}$ (resp. $\mathbf M_d(q\mathbf y, I):=(\sum_{\gamma}q_\gamma y_{\alpha+\beta+\gamma})_{\alpha,\beta\in {\mathbb N}^I_d}$ for $q\in{\mathbb R}[x(I)]$). Let $\mathbf v_d^I:=(\mathbf x^\alpha)_{\alpha\in {\mathbb N}^I_d}$ with length $\s(|I|,d):=\binom{|I|+d}{d}$. Assume that $\{I_j\}_{j\in[p]}$ (with $n_j:=|I_j|$) are the maximal cliques of (a chordal extension of) the correlative sparsity pattern (csp) graph associated to POP \eqref{eq:POP.def}, as defined in \cite{waki2006sums}. Let $\{J_j\}_{j\in[p]}$ (resp. $\{W_j\}_{j\in[p]}$) be a partition of $[m]$ (resp. $[l]$) such that for all $i\in J_j$, $g_i\in {\mathbb R}[x(I_j)]$ (resp. $i\in W_j$, $h_i\in {\mathbb R}[x(I_j)]$), $j\in[p]$. For each $j\in [p]$, let $m_j:=|J_j|$, $l_j:=|W_j|$ and $g_{J_j}:=\{g_i\,:\,i\in J_j\}$, $h_{W_j}:=\{h_i\,:\,i\in W_j\}$. Then $Q(g_{J_j})$ (resp. $I(h_{W_j})$) is a quadratic module (resp. an ideal) in ${\mathbb R}[x(I_j)]$, for $j\in[p]$. For each $k\in {\mathbb N}^{\ge k_{\min}}$, consider the following sparse SOS relaxation: \begin{equation}\label{eq:primal.cs.SOS} \rho_{k}^{\text{cs}} := \sup\ \left\{\xi\,:\,f-\xi\in\sum_{j\in[p]} (Q_k(g_{J_j})+I_k(h_{W_j}))\right\}\,. \end{equation} It is equivalent to the SDP: \begin{equation}\label{eq:primal.cs.SDP} \rho_{k}^{\text{cs}} = \sup \limits_{\xi,\mathbf G_i^{(j)},\mathbf u_i^{(j)}} \left\{ \xi\ \left|\begin{array}{rl} &{\mathbf G}_i^{(j)} \succeq 0\,,\,i\in\{0\}\cup J_j\,,\,j\in [p]\,,\\ &f-\xi=\sum_{j\in[p]}\left((\mathbf v_k^{I_j})^\top {\mathbf G}_0^{(j)} \mathbf v_k^{I_j}\right.\\ &\qquad\qquad+\sum_{i\in J_j} g_i (\mathbf v_{k-\lceil g_i\rceil}^{I_j})^\top {\mathbf G}_i^{(j)} \mathbf v_{k-\lceil g_i\rceil}^{I_j}\\ &\qquad\qquad\left.+\sum_{i\in W_j} h_i (\mathbf v_{2(k-\lceil h_i\rceil)}^{I_j})^\top {\mathbf u}_i^{(j)} \right) \end{array} \right. \right\}\,. \end{equation} The dual of \eqref{eq:primal.cs.SDP} reads: \begin{equation}\label{eq:dual.cs.SDP} \tau_k^{\text{cs}} \,:= \,\inf \limits_{\mathbf y \in {{\mathbb R}^{\s({2k})} }} \left\{ L_{\mathbf y}(f)\ \left|\begin{array}{rl} & \mathbf M_k(\mathbf y, I_j) \succeq 0\,,\,j\in[p]\,,\,y_{\mathbf 0}\,=\,1\,,\\ &\mathbf M_{k - \lceil g_i \rceil }(g_i\;\mathbf y,I_j) \succeq 0\,,\,i\in J_j\,,\,j\in[p]\,,\\ &\mathbf M_{k - \lceil h_i \rceil }(h_i\;\mathbf y,I_j) = 0\,,\,i\in W_j\,,\,j\in[p] \end{array}\right. \right\}\,. \end{equation} It is shown in \cite[Theorem 3.6]{lasserre2006convergent} that convergence of the primal-dual \eqref{eq:primal.cs.SDP}-\eqref{eq:dual.cs.SDP} to $f^\star$ is guaranteed if there are additional ball constraints on each clique of variables. \subsection{Exploiting CTP for POPs with CS} \label{sec:def.CTP.each.clique} Consider POP \eqref{eq:POP.def} with CS described in Section \ref{sec:sparse.POP}.
For every $j\in[p]$ and for every $k\in{\mathbb N}^{\ge k_{\min}}$, letting $\mathbf D_k(\mathbf y,I_j):=\diag(\mathbf M_k(\mathbf y,I_j),(\mathbf M_{k-\lceil g_i\rceil}(g_i\mathbf y,I_j))_{i\in J_j})$ for $\mathbf y\in{\mathbb R}^{\s(2k)}$, SDP \eqref{eq:dual.cs.SDP} can be rewritten as \begin{equation}\label{eq:cs-ts.SDP.simple} \tau_{k}^{\text{cs}} \,=\,\inf \limits_{\mathbf y \in {{\mathbb R}^{\s({2k})} }} \left\{ L_{\mathbf y}(f)\ \left|\begin{array}{rl} & \mathbf D_k(\mathbf y, I_j) \succeq 0\,,\,j\in[p]\,,\,y_{\mathbf 0}\,=\,1\,,\\ &\mathbf M_{k - \lceil h_i \rceil }(h_i\;\mathbf y,I_j) = 0\,,\,i\in W_j\,,\,j\in[p] \end{array}\right.\right\}\,. \end{equation} We define CTP for POPs with CS as follows. \begin{definition}\label{def:ctp.cs}(CTP for a POP with CS) We say that POP \eqref{eq:POP.def} with CS has CTP if for every $k\in{\mathbb N}^{\ge k_{\min}}$ and for every $j\in[p]$, there exist a positive number $a_k^{(j)}$ and a positive definite matrix $\mathbf P_{k}^{(j)}$ of the same size as $\mathbf D_k(\mathbf y,I_j)$ such that for all $\mathbf y \in {\mathbb R}^{\s(2k)}$, \begin{equation} \left. \begin{array}{rl} &\mathbf M_{k-\lceil h_i\rceil}(h_i\mathbf y,I_j)=0\,,\,i\in W_j\,,\\ &y_{\mathbf 0}=1 \end{array} \right\}\Rightarrow \trace(\mathbf P_{k}^{(j)} \mathbf D_k(\mathbf y,I_j) \mathbf P_{k}^{(j)})=a_k^{(j)}\,. \end{equation} \end{definition} The following result provides a sufficient condition for a POP with CS to have CTP. \begin{theorem}\label{theo:generic.ctp.cs} Assume that there is a ball constraint on each clique of variables, i.e., \begin{equation}\label{eq:ball.on.clique} \forall\ j\in[p],\, R_j-\|\mathbf x(I_j)\|_2^2\in g \textrm{ for some }R_j>0\,. \end{equation} Then one has ${\mathbb R}^{>0}\subseteq Q_k^\circ(g_{J_j})$, for all $k\in{\mathbb N}^{\ge k_{\min}}$ and for all $j\in [p]$. As a consequence, POP \eqref{eq:POP.def} has CTP. \end{theorem} The proof of Theorem \ref{theo:generic.ctp.cs}, which is very similar to that of Theorem \ref{theo:generic.ctp} applied to each clique of variables, is omitted. Again by considering each clique of variables, the following result can be obtained from Theorem \ref{theo:generic.ctp.cs} in the same way Corollary \ref{coro:stric.fea.sol.sos} was obtained. \begin{corollary}\label{coro:slater.con.cs} If \eqref{eq:ball.on.clique} holds then Slater's condition for SDP \eqref{eq:primal.cs.SDP} holds for all $k\in{\mathbb N}^{\ge k_{\min}}$. \end{corollary} We are now in a position to provide a general method to solve POPs with CS which have CTP.\\ Consider POP \eqref{eq:POP.def} with CS described in Section \ref{sec:sparse.POP}. Assume that POP \eqref{eq:POP.def} has CTP and let $k\in{\mathbb N}^{\ge k_{\min}}$ be fixed. For every $j\in[p]$, we denote by $\mathcal{S}_{j,k}$ the set of real symmetric matrices of size $\s(k,n_j)+\sum_{i\in J_j}\s(k-\lceil g_i\rceil,n_j)$ in a block diagonal form: $\mathbf X=\diag(\mathbf X_{0},(\mathbf X_i)_{i\in J_j})$ such that $\mathbf X_{0}$ is a block of size $\s(k,n_j)$ and $\mathbf X_i$ is a block of size $\s(k-\lceil g_i\rceil,n_j)$ for $i\in J_j$.
Letting \begin{equation}\label{eq:convert.momentmat.sparse} \mathbf X_j=\mathbf P_{k}^{(j)}\mathbf D_k(\mathbf y,I_j)\mathbf P_{k}^{(j)}\,,\,j\in[p]\,, \end{equation} SDP \eqref{eq:cs-ts.SDP.simple} can be rewritten as: \begin{equation}\label{eq:SDP.form.sparse} \tau_{k}^{\text{cs}} = \inf _{\mathbf X_j\in \mathcal{S}_{j,k}^+} \left\{ \sum_{j\in[p]}\left< \mathbf C_{j,k},\mathbf X_j\right>\,:\,\sum_{j\in[p]}\mathcal{A}_{j,k} \mathbf X_j=\mathbf b_k\right\}\,, \end{equation} where for every $j\in[p]$, $\mathcal{A}_{j,k}:\mathcal{S}_{j,k}\to {\mathbb R}^{\zeta_k}$ is a linear operator of the form $\mathcal{A}_{j,k}\mathbf X=(\left< \mathbf A_{j,k,1},\mathbf X\right>,\dots,\left< \mathbf A_{j,k,\zeta_k},\mathbf X\right>)$ with $\mathbf A_{j,k,i} \in \mathcal{S}_{j,k}$, $i\in[\zeta_k]$, $\mathbf C_{j,k} \in \mathcal{S}_{j,k}$, $j\in[p]$ and $\mathbf b_k\in {\mathbb R}^{\zeta_k}$. See Appendix \ref{sec:convert.standart.SDP.cs} for the conversion of SDP \eqref{eq:cs-ts.SDP.simple} to the form \eqref{eq:SDP.form.sparse}. The dual of SDP \eqref{eq:SDP.form.sparse} reads: \begin{equation}\label{eq:SDP.form.sparse.dual} \rho^{\text{cs}}_k = \sup_{\mathbf y\in{\mathbb R}^{\zeta_k}}\, \left\{\, \mathbf b_k^\top\mathbf y\,:\, \mathcal{A}_{j,k}^\top \mathbf y-\mathbf C_{j,k}\in \mathcal{S}^+_{j,k}\,,j\in[p]\,\right\}\,, \end{equation} where $\mathcal{A}_{j,k}^\top:{\mathbb R}^{\zeta_k}\to \mathcal{S}_{j,k}$ is the adjoint operator of $\mathcal{A}_{j,k}$, i.e., $\mathcal{A}_{j,k}^\top\mathbf z=\sum_{i\in[\zeta_k]} z_i\mathbf A_{j,k,i}$, $j\in[p]$. By Definition \ref{def:ctp.cs}, it holds that for every $k\in{\mathbb N}^{\ge k_{\min}}$, \begin{equation} \left. \begin{array}{rl} &\mathbf X_j\in \mathcal S_{j,k}\,,\,j\in[p]\,,\\ &\sum_{j\in[p]}\mathcal{A}_{j,k} \mathbf X_j=\mathbf b_k \end{array} \right\} \Rightarrow \trace(\mathbf X_j)=a_k^{(j)}\,,\,j\in[p]\,. \end{equation} After replacing $(\mathcal{A}_{j,k}, \mathbf{A}_{j,k,i}, \mathbf b_k, \mathbf C_{j,k}, \mathcal{S}_{j,k}, \zeta_k, \tau_{k}^{\text{cs}}, a_k^{(j)})$ by $(\mathcal{A}_j, \mathbf{A}_{j,i}, \mathbf b, \mathbf C_j, \mathcal{S}_j, \zeta, \tau, a_j)$, SDP \eqref{eq:SDP.form.sparse} becomes SDP \eqref{eq:SDP.form.0.blocks}; see Appendix \ref{sec:sdp.ctp.blocks} with $\omega_j=m_j+1$ and $s^{\max}=\max_{j\in[p]}\s(k,n_j)$. If there is a ball constraint on each clique of variables then, by Corollary \ref{coro:slater.con.cs}, strong duality holds for the pair \eqref{eq:SDP.form.sparse}-\eqref{eq:SDP.form.sparse.dual}, for every $k\in{\mathbb N}^{\ge k_{\min}}$. The two algorithms (CGAL and SM) based on first-order methods are then leveraged to solve the primal-dual pair \eqref{eq:SDP.form.sparse}-\eqref{eq:SDP.form.sparse.dual}; see Appendix \ref{sec:sdp.ctp.blocks} and Appendix \ref{app:spectral.SDP.ctp}. \subsection{Verifying CTP for POPs with CS via LP} As in the dense case, we can verify CTP for a POP with CS via a series of LPs.
For every $k\in{\mathbb N}^{\ge k_{\min}}$ and for every $j\in[p]$, let $\hat{\mathcal S}_{k,j}$ be the set of real diagonal matrices of size $\s(k,n_j)$ and consider the following LP: \begin{equation}\label{eq:find.CTP.cliq} \inf \limits_{\xi,\mathbf G_i,\mathbf u_i} \left\{ \xi\ \left|\begin{array}{rl} &\mathbf G_0-\mathbf I_0 \in \hat{\mathcal S}_{k,j}^+\,,\,\mathbf G_i-\mathbf I_i \in \hat{\mathcal S}_{k-\lceil g_i\rceil,j}^+\,,\,i\in J_j\,,\\ &\xi=(\mathbf v_k^{I_j})^\top {\mathbf G}_0 \mathbf v_k^{I_j}+\sum_{i\in J_j} g_i (\mathbf v_{k-\lceil g_i\rceil}^{I_j})^\top {\mathbf G}_i \mathbf v_{k-\lceil g_i\rceil}^{I_j}\\ &\qquad\qquad\qquad\qquad\qquad+\sum_{i\in W_j} h_i (\mathbf v_{2(k-\lceil h_i\rceil)}^{I_j})^\top {\mathbf u}_i \end{array} \right. \right\}\,, \end{equation} where $\mathbf I_i$ is the identity matrix, for every $i\in\{0\}\cup J_j$. \begin{lemma}\label{lem:feas.LP.cs} Let POP \eqref{eq:POP.def} with CS be described in Section \ref{sec:sparse.POP}. If LP \eqref{eq:find.CTP.cliq} has a feasible solution $(\xi_{k}^{(j)},\mathbf G_{i,k}^{(j)},\mathbf u_{i,k}^{(j)})$, for every $k\in{\mathbb N}^{\ge k_{\min}}$ and for every $j\in[p]$, then POP \eqref{eq:POP.def} has CTP with $\mathbf P_k^{(j)}=\diag((\mathbf G_{0,k}^{(j)})^{1/2},((\mathbf G_{i,k}^{(j)})^{1/2})_{i\in J_j})$ and $a_k^{(j)}=\xi_k^{(j)}$, for $k\in{\mathbb N}^{\ge k_{\min}}$ and for $j\in[p]$. \end{lemma} The proof of Lemma \ref{lem:feas.LP.cs} is similar to that of Lemma \ref{lem:feas.LP}. In particular, for POPs with ball or annulus constraints on subsets of each clique of variables, CTP can be verified by LP. \begin{proposition}\label{prop:suff.cond.feas.cs} Let POP \eqref{eq:POP.def} with CS be described in Section \ref{sec:sparse.POP}. Let $(T_i)_{i\in[r]\cup([m]\backslash [2r])}$ be as in Assumption \ref{ass:bound.cons} and further assume that for every $j\in[p]$, $(\cup_{q\in J_j\cap [r]} T_q)\cup (\cup_{q\in J_j\backslash [2r]} T_q)=I_j$. Then LP \eqref{eq:find.CTP.cliq} has a feasible solution for every $k\in{\mathbb N}^{\ge k_{\min}}$, and therefore POP \eqref{eq:POP.def} has CTP. \end{proposition} The proof of Proposition \ref{prop:suff.cond.feas.cs} is postponed to Appendix \ref{sec:proof.prop:suff.cond.feas.cs}. \subsection{Main algorithm} Algorithm \ref{alg:sol.nonsmooth.hier.B.cs} below solves POP \eqref{eq:POP.def} with CS whose CTP can be verified by LP. \begin{algorithm} \caption{SpecialPOP-CTP-CS} \label{alg:sol.nonsmooth.hier.B.cs} \small \textbf{Input:} POP \eqref{eq:POP.def} with CS and a relaxation order $k\in{\mathbb N}^{\ge k_{\min}}$\\ \textbf{Output:} The optimal value $\tau_k^{\text{cs}}$ of SDP \eqref{eq:SDP.form.sparse} \begin{algorithmic}[1] \For{$j\in[p]$} \State Solve LP \eqref{eq:find.CTP.cliq} to obtain an optimal solution $(\xi_k^{(j)},\mathbf G_{i,k}^{(j)},\mathbf u_{i,k}^{(j)})$; \State Let $a_k^{(j)}=\xi_k^{(j)}$ and $\mathbf P_k^{(j)}=\diag((\mathbf G_{0,k}^{(j)})^{1/2},((\mathbf G_{i,k}^{(j)})^{1/2})_{i\in J_j})$; \EndFor \State Compute the optimal value $\tau_k^{\text{cs}}$ of SDP \eqref{eq:SDP.form.sparse} by running an algorithm based on first-order methods that exploits CTP. \end{algorithmic} \end{algorithm} In the last step of Algorithm \ref{alg:sol.nonsmooth.hier.B.cs}, the two algorithms CGAL (Algorithm \ref{alg:CGAL.blocks} in Appendix \ref{sec:sdp.ctp.blocks}) and SM (Algorithm \ref{alg:sol.SDP.CTP.0.blocks} in Appendix \ref{app:spectral.SDP.ctp}) are good candidates.
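Before turning to the numerical experiments, we illustrate how LP \eqref{eq:find.CTP.diag} can be assembled in practice. The following Julia sketch implements it for the simplest dense case, namely a single ball constraint $g_1=R-\|\mathbf x\|_2^2$ with no equality constraints, searching over diagonal matrices with diagonal entries at least $1$ and matching the coefficients of the polynomial identity in \eqref{eq:find.CTP.diag} monomial by monomial. It is a minimal sketch, not part of the {\tt ctpPOP} package: the JuMP modeling layer with the GLPK LP solver is assumed, and all function names are ours.
\begin{verbatim}
using JuMP, GLPK

# All exponents alpha in N^n with |alpha| <= d, as integer vectors.
function monomials_upto(n::Int, d::Int)
    mons = [zeros(Int, n)]
    frontier = mons
    for _ in 1:d
        frontier = unique([begin e = copy(m); e[i] += 1; e end
                           for m in frontier for i in 1:n])
        append!(mons, frontier)
    end
    return mons
end

# LP (eq:find.CTP.diag) for m = 1, g1 = R - ||x||^2, l = 0, diagonal G0 and G1.
function constant_trace_ball(n::Int, k::Int, R::Float64)
    Vk, Vk1 = monomials_upto(n, k), monomials_upto(n, k - 1)
    model = Model(GLPK.Optimizer)
    @variable(model, G0[1:length(Vk)] >= 1)    # diagonal of G0 (G0 - I in S^+)
    @variable(model, G1[1:length(Vk1)] >= 1)   # diagonal of G1 (G1 - I in S^+)
    # Coefficient of x^(2*gamma) in v_k'*G0*v_k + (R - ||x||^2)*v_{k-1}'*G1*v_{k-1},
    # stored by the half-exponent gamma.
    coeff = Dict{Vector{Int},AffExpr}()
    addterm!(g, t) = (coeff[g] = get(coeff, g, AffExpr(0.0)) + t)
    for (a, alpha) in enumerate(Vk)
        addterm!(alpha, 1.0 * G0[a])
    end
    for (b, beta) in enumerate(Vk1)
        addterm!(beta, R * G1[b])
        for i in 1:n
            g = copy(beta); g[i] += 1
            addterm!(g, -1.0 * G1[b])
        end
    end
    for (g, c) in coeff                 # all non-constant coefficients vanish
        any(g .> 0) && @constraint(model, c == 0)
    end
    @objective(model, Min, coeff[zeros(Int, n)])   # xi = a_k
    optimize!(model)
    return objective_value(model)
end
\end{verbatim}
For instance, with $R=1$ and $k=2$ this LP returns $\xi_k=3$ for any $n$, which is exactly the constant trace $a^{\max}=3$ reported for the ball-constrained experiments of the next section, and is smaller than the generic value $(R+1)^k=4$ of Remark \ref{rem:direct.cons.trace}.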
\section{Numerical experiments} \label{sec:benchmark} In this section we report results of numerical experiments obtained by solving the second-order Moment-SOS relaxation of various randomly generated instances of QCQPs with CTP. The experiments are performed in Julia 1.3.1 with the following software packages: \begin{itemize} \item {\tt SumOfSquares} \cite{weisser2019polynomial} is a modeling library for solving the Moment-SOS relaxations of dense POPs, based on JuMP (with Mosek 9.1 used as SDP solver). \item {\tt TSSOS} \cite{wang2019tssos,wang2020chordal,wang2020cs} is a modeling library for solving Moment-SOS relaxations of sparse POPs, based on JuMP (with Mosek 9.1 used as SDP solver). \item {\tt LMBM} solves unconstrained non-smooth optimization problems with the limited-memory bundle method by Haarala et al. \cite{haarala2007globally,haarala2004new} and calls Karmitsa's Fortran implementation of the LMBM algorithm \cite{karmitsa2007lmbm}. \item {\tt Arpack} \cite{lehoucq1998arpack} is used to compute the smallest eigenvalues and the corresponding eigenvectors of real symmetric matrices of (potentially) large size, based on the implicitly restarted Arnoldi method. \end{itemize} The implementation of Algorithms \ref{alg:sol.nonsmooth.hier.B} and \ref{alg:sol.nonsmooth.hier.B.cs} is available online via the link: \begin{center} \href{https://github.com/maihoanganh/SpectralPOP}{{\bf https://github.com/maihoanganh/ctpPOP}}. \end{center} We use a desktop computer with an Intel(R) Core(TM) i7-8665U CPU @ 1.9GHz $\times$ 8 and 31.2 GB of RAM. The notation for the numerical results is given in Table \ref{tab:nontation}. \begin{table} \caption{\small The notation} \label{tab:nontation} \small \begin{center} \begin{tabular}{|m{1.0cm}|m{10.3cm}|} \hline $n$&the number of variables of a POP\\ \hline $m$&the number of inequality constraints of a POP\\ \hline $l$&the number of equality constraints of a POP\\ \hline $u^{\max}$& the largest size of variable cliques of a sparse POP\\ \hline $p$& the number of variable cliques of a sparse POP\\ \hline $k$&the relaxation order of the Moment-SOS hierarchy\\ \hline $t$&the sparse order of the sparsity-adapted Moment-SOS hierarchy (for TS and CS-TS)\\ \hline $\omega$& the number of psd blocks in an SDP\\ \hline $s^{\max}$&the largest size of psd blocks in an SDP\\ \hline $\zeta$&the number of affine equality constraints in an SDP\\ \hline $a^{\max}$& the largest constant trace\\ \hline Mosek & the SDP relaxation modeled by {\tt SumOfSquares} (for dense POPs) or {\tt TSSOS} (for sparse POPs) and solved by Mosek 9.1\\ \hline CGAL & the SDP relaxation modeled by our CTP-exploiting method and solved by the CGAL algorithm\\ \hline LMBM & the SDP relaxation modeled by our CTP-exploiting method and solved by the SM algorithm with the {\tt LMBM} solver\\ \hline val& the optimal value of the SDP relaxation\\ \hline gap& the relative optimality gap w.r.t. the value returned by Mosek, i.e., $\text{gap}=|\text{val}-\text{val(Mosek)}|/{|\text{val(Mosek)}|}$\\ \hline time & the running time in seconds (including modeling and solving time)\\ \hline $-$& the computation runs out of memory\\ \hline \end{tabular} \end{center} \end{table} For the examples tested in this paper, the modeling time of {\tt SumOfSquares}, {\tt TSSOS} and {\tt ctpPOP} is typically negligible compared to the solving time of the packages Mosek, CGAL, and LMBM. Hence the total running time mainly depends on the solvers, whose performances we compare below.
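To give an idea of the eigenvalue oracle mentioned above, the snippet below is a minimal Julia sketch of how {\tt Arpack} is called on a random sparse symmetric test matrix (a stand-in, not an actual SDP iterate): the spectral method only needs the algebraically smallest eigenvalue and a corresponding eigenvector at each iteration.
\begin{verbatim}
using Arpack, LinearAlgebra, SparseArrays

# Random sparse symmetric stand-in for a (shifted) dual matrix of the SDP.
A = sprandn(5000, 5000, 1e-3)
A = (A + A') / 2
# Smallest algebraic eigenpair via implicitly restarted Arnoldi/Lanczos
# iterations; each iteration only needs matrix-vector products with A.
vals, vecs = eigs(A; nev = 1, which = :SA)
println("smallest eigenvalue = ", vals[1])
\end{verbatim}
This matrix-vector oracle is what allows first-order and spectral approaches to handle psd blocks far larger than those amenable to the matrix factorizations performed at each iteration of an interior-point method.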
As mentioned in the introduction, the current framework differs from our previous work \cite{mai2020hierarchy}, where we exploited CTP for equality constrained POPs on a sphere, which could be solved efficiently by {\tt LMBM}. The reason is that the SDP relaxations of such equality constrained POPs involve a single psd matrix. For the benchmarks of this section, we consider POPs involving ball/annulus constraints, so the resulting relaxations include several psd matrices. Our numerical experiments confirm that for such SDPs LMBM returns inaccurate values, while CGAL (without sketching) performs better in terms of both accuracy and efficiency. \subsection{Randomly generated dense QCQPs with a ball constraint} \label{sec:experiment.single.ball} \paragraph{Test problems:} We construct randomly generated dense QCQPs with a ball constraint as follows: \begin{enumerate} \item Generate a dense quadratic polynomial objective function $f$ with random coefficients following the uniform probability distribution on $(-1,1)$; \item Let $m=1$ and $g_1:=1-\|\mathbf x\|_2^2$; \item Take a random point $\mathbf a$ in $S(g)$ w.r.t. the uniform distribution; \item For every $j\in[l]$, generate a dense quadratic polynomial $h_j$ by \begin{enumerate}[(i)] \item for each $\alpha\in{\mathbb N}^n_2\backslash \{\mathbf 0\}$, taking a random coefficient $h_{j,\alpha}$ for $h_j$ in $(-1,1)$ w.r.t. the uniform distribution; \item setting $h_{j,\mathbf 0}:=-\sum_{\alpha\in{\mathbb N}^n_2\backslash \{\mathbf 0\}} h_{j,\alpha} \mathbf a^\alpha$. \end{enumerate} Then $\mathbf a$ is a feasible solution of POP \eqref{eq:POP.def}. \end{enumerate} The numerical results are displayed in Table \ref{tab:quadratics.on.unit.ball} and \ref{tab:QCQP.on.unit.ball}. \begin{table} \caption{\small Numerical results for minimizing a dense quadratic polynomial on a unit ball} \label{tab:quadratics.on.unit.ball} {\small\begin{itemize} \item POP size: $m=1$, $l=0$; Relaxation order: $k=2$; SDP size: $\omega=2$, $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 &66 &1277& -2.2181 & 0.3 &-2.2170 &0.2 & -2.2187 & 0.3\\ \hline 20 &231 &16402 & -3.7973 & 4 &-3.7947 &0.6 & -3.7096 & 7\\ \hline 30 &496 &77377 & -3.6876 & 3474 &-3.6858 &104 & -3.8530 & 59\\ \hline 40 &861 &236202 & $-$ & $-$ &-4.1718 &33 & -4.7730 & 179\\ \hline 50 &1326 &564877 & $-$ & $-$ &-6.3107 &1007 & -7.3874 & 139\\ \hline 60 &1891 &1155402 & $-$ & $-$ &-6.5326 &1085 & -7.4733 & 674\\ \hline 70 &2556 &2119777 & $-$ & $-$ &-7.3379 &1262 & -9.5223 & 1486\\ \hline 80 &3321 &3590002 & $-$ & $-$ &-7.9559 &4988 & -10.0260 & 1241\\ \hline 90 &4186 &5718077 & $-$ & $-$ &-7.3425 &5187 & -9.4477 & 5313\\ \hline 100 &5151 &8676002 & $-$ & $-$ &-7.7374 &22451 & -10.684 & 5355\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated dense QCQPs with a ball constraint} \label{tab:QCQP.on.unit.ball} {\small\begin{itemize} \item POP size: $m=1$, $l=\lceil n/4\rceil $; Relaxation order: $k=2$; SDP size: $\omega=2$, $a^{\max}=3$.
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$&$l$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10&3 & 66 & 1475 & -2.0686 & 1.7 &-2.0674 & 0.8 & -2.0874 & 0.3\\ \hline 20&5 & 231 & 17557 & -3.0103 & 61 &-3.0075 & 7 & -3.0750 & 18\\ \hline 30&8 & 496 & 81345 & -3.3293 & 4573 &-3.3249 &80 & -3.6863 & 123\\ \hline 40&10 & 861 & 244812 & $-$ & $-$ & -4.6977 & 194 & -5.3488 & 488 \\ \hline 50&13 & 1326 & 582115 & $-$ & $-$ & -4.2394 &951 & -6.1325 & 837\\ \hline 60&15 & 1891 & 1183767 & $-$ & $-$ & -5.7793 & 1387 & -7.5718 & 3781\\ \hline 70&18 & 2556 & 2165785 & $-$ & $-$ & -6.1278 & 4335 & -8.1181 & 15854\\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} As one can see from Table \ref{tab:quadratics.on.unit.ball} and \ref{tab:QCQP.on.unit.ball}, CGAL is typically the fastest solver and returns an optimal value whose gap w.r.t. the one returned by Mosek is within 1\% when $n\le 30$. Mosek runs out of memory when $n\ge 40$, while CGAL works well up to $n=100$. We should point out that LMBM is less accurate or even fails to converge to the optimal value when $n\ge 20$. The reason might be that LMBM only solves the dual problem and hence loses information about the primal problem. \subsection{Randomly generated dense QCQPs with annulus constraints} \label{sec:experiment.single.annulus} \paragraph{Test problems:} We construct randomly generated dense QCQPs as in Section \ref{sec:experiment.single.ball}, where the ball constraint is now replaced by annulus constraints. Namely, in Step 2 we take $m=2$, $g_1:=\|\mathbf x\|_2^2-1/2$ and $g_2:=1-\|\mathbf x\|_2^2$. The numerical results are displayed in Table \ref{tab:quadratics.on.annulus} and \ref{tab:QCQP.on.annulus}. \begin{table} \caption{\small Numerical results for minimizing a dense quadratic polynomial on an annulus} \label{tab:quadratics.on.annulus} {\small\begin{itemize} \item POP size: $m=2$, $l=0$; Relaxation order: $k=2$; SDP size: $\omega=3$, $a^{\max}=4$.
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 &66 &1343 & -3.0295 & 0.5 &-3.0278 &1 &-3.0311&0.8 \\ \hline 20 &231 &16633 & -3.6468 & 69 &-3.6458 &5 &-3.7814&16 \\ \hline 30 &496 &77873 & -3.9108 & 2546 &-3.9079 &9 &-3.8941&51 \\ \hline 40 &861 &237063 & $-$ & $-$ &-4.7469 &28 &-6.9780&119 \\ \hline 50 &1326 &566203 & $-$ & $-$ &-6.4170 &112 &-11.1028&258 \\ \hline 60 &1891 &1157293 & $-$ & $-$ &-5.5841 &226 &-9.2142&473 \\ \hline 70 &2556 &2122333 & $-$ & $-$ &-7.9325 &730 & -12.7862& 1669 \\ \hline 80 &3321 &3593323 & $-$ & $-$ &-7.6164 &1355 & -10.068 & 317 \\ \hline 90 &4186 &5722263 & $-$ & $-$ &-8.1900 &3563 & -12.439 & 8751 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated dense QCQPs with annulus constraints} \label{tab:QCQP.on.annulus} {\small\begin{itemize} \item POP size: $m=2$, $l=\lceil n/4\rceil$; Relaxation order: $k=2$; SDP size: $\omega=3$, $a^{\max}=4$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size} & \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$&$l$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10&3 & 66 & 1541 & -2.7950 & 0.5 &-2.7934 & 2&-2.7829&7\\ \hline 20&5 & 231 & 17788 & -3.5048 & 95 &-3.5027 & 10& -4.4491 & 46\\ \hline 30&8 & 496 & 81841 & -3.3964 & 4237 &-3.3937 & 45& -4.9592 & 111\\ \hline 40&10 & 861 & 245673 & $-$ & $-$ &-4.6573 & 140 & -6.7683 & 648\\ \hline 50&13 & 1326 & 583441 & $-$ & $-$ & -3.8236 & 437 & -6.9930 & 519\\ \hline 60&15 & 1891 & 1185658 & $-$ & $-$ & -4.5246 & 1076 & -7.5845 &2917 \\ \hline 70&18 & 2556 & 2168341 & $-$ & $-$ & -6.2924 & 4783 & -9.6145 & 2644 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} Same remarks as in Section \ref{sec:experiment.single.ball}. \subsection{Randomly generated dense QCQPs with box constraints} \label{sec:experiment.box} \paragraph{Test problems:} We construct randomly generated dense QCQPs as in Section \ref{sec:experiment.single.ball}, where the ball constraint is now replaced by box constraints. Namely, in Step 2 we take $m=n$, $g_j:=-x_j^2+1/n$, $j\in [n]$. The numerical results are displayed in Table \ref{tab:quadratics.on.box} and \ref{tab:QCQP.on.box}. \begin{table} \caption{\small Numerical results for minimizing a dense quadratic polynomial on a box} \label{tab:quadratics.on.box} {\small\begin{itemize} \item POP size: $m=n$, $l=0$; Relaxation order: $k=2$; SDP size: $\omega=n+1$, $a^{\max}=3$. 
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{POP size} & \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10& 66 &1871 & -2.7197 & 0.5 &-2.7189 & 1 &-2.7327 &0.7 \\ \hline 20& 231 & 20791 & -3.3560 & 98 & -3.3501 & 57 & -4.2987 & 18 \\ \hline 30& 496 & 91761 & -4.6372 & 5150 & -4.6242 & 285 & -5.8805 & 156 \\ \hline 40& 861 &269781 & $-$ & $-$ & -4.5788 & 409 & -6.5857 & 188 \\ \hline 50& 1326 &629851 & $-$ & $-$ & -4.2313 & 2083 & -6.6163 & 323 \\ \hline 60& 1891 &1266971 & $-$ & $-$ & -4.0135 & 5525 & -6.5792 & 814 \\ \hline 70& 2556 &2296141 & $-$ & $-$ & -5.4019 & 15172 & -8.7669 & 1434 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated dense QCQPs with box constraints} \label{tab:QCQP.on.box} {\small\begin{itemize} \item POP size: $m=n$, $l=\lceil n/7\rceil $; Relaxation order: $k=2$; SDP size: $\omega=n+1$, $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$&$l$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10& 2 & 66 & 2003 & -1.8320 & 0.6 & -1.8321 & 3 & -1.9692 & 4\\ \hline 20& 3 & 231 & 21484 & -3.1797 & 175 & -3.1781 & 106 & -4.0216 & 29\\ \hline 30& 5 & 496 & 94241 & -2.2949 & 6850 & -2.2982 & 528 & -3.9900 & 152\\ \hline 40& 6 & 861 & 274947 & $-$ & $-$ & -3.8651 & 933 & -6.1379 & 298\\ \hline 50& 8 & 1326 & 640459 & $-$ & $-$ & -3.6267 & 6159 & -6.3651 & 1494\\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} We observe behaviors of the solvers similar to those in Section \ref{sec:experiment.single.ball}. The important point to note here is that solving a QCQP with box constraints is less efficient than solving the same one with a ball constraint. This is because the efficiency of CGAL depends on the number of psd blocks involved in the SDP. For instance, when $n=50$, CGAL takes around 1000 seconds to solve the second-order moment relaxation of a QCQP with a ball constraint while it takes around 2100 seconds to solve this relaxation for a QCQP with box constraints. \subsection{Randomly generated dense QCQPs with simplex constraints} \label{sec:experiment.simplex} \paragraph{Test problems:} We construct randomly generated dense QCQPs as in Section \ref{sec:experiment.single.ball}, where the ball constraint is now replaced by simplex constraints. Namely, in Step 2 we take $g$ such that \eqref{eq:stand.simplex} holds with $L=R=1$. The numerical results are displayed in Table \ref{tab:quadratics.on.simplex} and \ref{tab:QCQP.on.simplex}. \begin{table} \caption{\small Numerical results for minimizing a dense quadratic polynomial on a simplex} \label{tab:quadratics.on.simplex} {\small\begin{itemize} \item POP size: $m=n+2$, $l=0$; Relaxation order: $k=2$; SDP size: $\omega=n+3$, $a^{\max}=5$.
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 &66 &2003 & -1.9954 & 0.3 &-1.9950 &7 &-2.2800& 27 \\ \hline 20 &231 &21253 & -1.5078 & 58 &-1.5055 &116 &-2.7237& 32 \\ \hline 30 &496 &92753 & -2.0537 & 2804 &-2.0480 &377 & -3.3114 & 917 \\ \hline 40 &861 &271503 & $-$ & $-$ &-2.3034 &950 & -4.0971 & 577 \\ \hline 50 &1326 &632503 & $-$ & $-$ & -1.8366 & 9539 & -4.0541 & 13700 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated dense QCQPs with simplex constraints} \label{tab:QCQP.on.simplex} {\small\begin{itemize} \item POP size: $m=n+2$, $l=\lceil n/7\rceil $; Relaxation order: $k=2$; SDP size: $\omega=n+3$, $a^{\max}=5$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$&$l$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 & 2 & 66 & 2135 & -1.0605 & 0.4 & -1.0606 & 176 & -2.2338 & 2 \\ \hline 20 & 3 & 231 & 21946 & -1.6629 & 72 & -1.6628 & 512 & -3.3538 & 93 \\ \hline 30 & 5 & 496 & 95233 & -1.0091 & 6206 & -1.0249 & 1089 & -2.9425 & 100 \\ \hline 40 & 6 & 861 & 276669 & $-$ & $-$ & -0.3256 & 2314 & -2.9564 & 4431\\ \hline 50 & 8 & 1326 & 643111 & $-$ & $-$ & -1.4200 & 10035 & -5.4284 & 1310\\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} Again we observe a behavior of the solvers similar to that in Section \ref{sec:experiment.single.ball}. One can also see that solving a QCQP with simplex constraints by CGAL is significantly slower than solving the same one with box constraints. For instance, when $n=50$, CGAL takes 2100 seconds to solve the second-order moment relaxation for a QCQP with box constraints while it takes 9500 seconds with simplex constraints. \subsection{Randomly generated QCQPs with TS and ball constraints} \label{sec:experiment.single.ball.term} \paragraph{Test problems:} We construct randomly generated QCQPs with TS and a ball constraint as follows: \begin{enumerate} \item Generate a quadratic polynomial objective function $f$ such that for $\alpha\in{\mathbb N}^n_2$ with $|\alpha|\ne 2$, $f_\alpha=0$ and for $\alpha\in{\mathbb N}^n_2$ with $|\alpha|=2$, the coefficient $f_\alpha$ is randomly generated in $(-1,1)$ w.r.t. the uniform distribution; \item Take $m=1$ and $g_1:=1-\|\mathbf x\|_2^2$; \item Take a random point $\mathbf a$ in $S(g)$ w.r.t. the uniform distribution; \item For every $j\in[l]$, generate a quadratic polynomial $h_j$ by \begin{enumerate}[(i)] \item setting $h_{j,\alpha}=0$ for each $\alpha\in{\mathbb N}^n_2\backslash \{\mathbf 0\}$ with $|\alpha|\ne 2$ \item for each $\alpha\in{\mathbb N}^n_2\backslash \{\mathbf 0\}$ with $|\alpha|=2$, taking a random coefficient $h_{j,\alpha}$ for $h_j$ in $(-1,1)$ w.r.t. 
the uniform distribution; \item setting $h_{j,\mathbf 0}:=-\sum_{\alpha\in{\mathbb N}^n_2\backslash \{\mathbf 0\}} h_{j,\alpha} \mathbf a^\alpha$. \end{enumerate} Then $\mathbf a$ is a feasible solution of POP \eqref{eq:POP.def}. \end{enumerate} The numerical results are displayed in Table \ref{tab:quadratics.on.unit.ball.term} and \ref{tab:QCQP.on.unit.ball.term}. \begin{table} \caption{\small Numerical results for minimizing a random quadratic polynomial with TS on the unit ball} \label{tab:quadratics.on.unit.ball.term} {\small\begin{itemize} \item POP size: $m=1$, $l=0$; Relaxation order: $k=2$; Sparse order: $t=1$; SDP size: $\omega=4$, $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM}\\ \hline $n$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 & 56 & 937 & -1.5681 & 4 & -1.5527 & 0.7 & -1.5711 & 0.07\\ \hline 20 & 211 & 13722 & -2.4275 & 36 & -2.3996 & 1 & -2.7301 & 0.6 \\ \hline 30 & 466 & 68357 & -3.0748 & 1930 & -3.0577 & 8 & -3.5188 & 8 \\ \hline 40 & 821 & 214842 & $-$ & $-$ & -3.6999 & 20 & -4.9033 & 40 \\ \hline 50 & 1276 & 523177 & $-$ & $-$ & -4.1603 & 128 & -5.3416 & 59 \\ \hline 60 & 1831 & 1083362 & $-$ & $-$ & -4.1914 & 655 & -5.6983 & 303 \\ \hline 70 & 2486 & 2005397 & $-$ & $-$ & -4.9578 & 1461 & -7.1968 & 1040 \\ \hline 80 & 3241 & 3419282 & $-$ & $-$ & -5.6452 & 7253 & -7.9133 & 5759 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated QCQPs with TS and a ball constraint} \label{tab:QCQP.on.unit.ball.term} {\small\begin{itemize} \item POP size: $m=1$, $l=\lceil n/4\rceil $; Relaxation order: $k=2$; Sparse order: $t=1$; SDP size: $\omega=4$, $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{2}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$&$l$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 &3 & 56 & 1105 & -0.60612 & 0.7 & -0.60550 & 2 & -0.60611 & 0.8\\ \hline 20 &5 & 211 & 14777 & -2.3115 & 47 & -2.3097 & 17& -2.3952 & 3 \\ \hline 30 &8 & 466 & 72085 & -2.8344 & 3102 & -2.8321 & 112 & -3.7588 & 128 \\ \hline 40 &10 & 821 & 223052 & $-$ & $-$ & -3.4081 & 476 & -4.4239 & 673 \\ \hline 50 &13 & 1276 & 539765 & $-$ & $-$ & -3.3552 & 1845 & -5.2568 & 729 \\ \hline 60 &15 & 1831 & 1110827 & $-$ & $-$ & -3.5620 & 2992 & -5.9898 & 1702 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} The behavior of solvers is similar to that in the dense case. \subsection{Randomly generated QCQPs with TS and box constraints} \label{sec:experiment.box.term} \paragraph{Test problems:} We construct randomly generated QCQPs with TS as in Section \ref{sec:experiment.single.ball.term}, where the ball constraint is now replaced by box constraints. The numerical results are displayed in Table \ref{tab:quadratics.on.box.term} and \ref{tab:QCQP.on.box.term}. 
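For reproducibility, the random construction used in this and the previous section (Steps 1--4 of Section \ref{sec:experiment.single.ball.term}) can be sketched in a few lines of Julia. The sketch below assumes the {\tt DynamicPolynomials} package; the function name and keyword argument are illustrative, and the constant coefficient of each $h_j$ is shifted so that the random point $\mathbf a$ is feasible.
\begin{verbatim}
using DynamicPolynomials, LinearAlgebra, Random

function random_ts_qcqp(n::Int, l::Int; rng = Random.default_rng())
    @polyvar x[1:n]
    deg2 = monomials(x, 2)                 # monomials of total degree exactly 2
    randpoly() = sum((2rand(rng) - 1) * m for m in deg2)
    f = randpoly()                         # Step 1: purely quadratic objective
    g1 = 1 - sum(xi^2 for xi in x)         # Step 2: unit-ball constraint
    a = randn(rng, n)                      # Step 3: uniform random point in the ball
    a .*= rand(rng)^(1 / n) / norm(a)
    hs = [(h = randpoly(); h - h(x => a)) for _ in 1:l]   # Step 4: h_j(a) = 0
    return f, g1, hs, a
end
\end{verbatim}
Since each $h_j$ has no constant term before the shift, subtracting $h_j(\mathbf a)$ is exactly the prescription $h_{j,\mathbf 0}:=-\sum_{\alpha\ne\mathbf 0}h_{j,\alpha}\mathbf a^\alpha$ of Step 4.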
\begin{table} \caption{\small Numerical results for minimizing a random quadratic polynomial with TS on a box} \label{tab:quadratics.on.box.term} {\small\begin{itemize} \item POP size: $m=n$, $l=0$; Relaxation order: $k=2$; Sparse order: $t=1$; SDP size: $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{POP size} & \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10 & 22 & 56 & 1441 & -1.0539 & 3 & -1.0519 & 14 & -1.11671 & 1 \\ \hline 20 & 42 & 211 & 17731 & -1.3925 & 93 & -1.3802 & 161 & -2.2978 & 2 \\ \hline 30 & 62 & 466 & 81871 & -2.2301 & 4392 & -2.2128 & 567 & -2.4544 & 533 \\ \hline 40 & 82 & 821 & 246861 & $-$ & $-$ & -2.5209 & 1602 & -4.6159 & 1036\\ \hline 50 & 102 & 1276 & 585701 & $-$ & $-$ & -3.0282 & 2583 & -4.9146 & 376\\ \hline 60 & 122 & 1831 & 1191391 & $-$ & $-$ & -3.0470 & 10858 & -5.7882 & 353\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated QCQPs with TS and box constraints} \label{tab:QCQP.on.box.term} {\small\begin{itemize} \item POP size: $m=n$, $l=\lceil n/7\rceil $; Relaxation order: $k=2$; Sparse order: $t=1$; SDP size: $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $n$&$l$&$\omega$ &$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 10&2 & 22&56 & 1553 & -0.77189 & 0.2 & -0.77214& 9 & -0.78092 & 1\\ \hline 20&3 & 42& 211 & 18364 & -1.7962 & 71 & -1.8009 & 150 & -2.7771 & 4\\ \hline 30&5 & 62& 466 & 84201 & -1.8529 & 5814 & -1.8625 & 650 & -3.5891 & 268 \\ \hline 40&6 & 82& 821 & 251787 & $-$ & $-$ & -2.1930 & 2994 & -4.5890 & 317\\ \hline 50&8 & 102& 1276 & 595909 & $-$ & $-$ & -2.4655 & 8397 & -5.1811 & 883 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} Again the behavior of solvers is similar to that in the dense case. \subsection{Randomly generated QCQPs with CS and ball constraints on each clique of variables} \label{sec:experiment.single.ball.sparse} \paragraph{Test problems:} We construct randomly generated QCQPs with CS and ball constraints on each clique of variables as follows: \begin{enumerate} \item Take a positive integer $u$, $p:=\lfloor n/u\rfloor +1$ and let \begin{equation}\label{eq:index.vars} I_j=\begin{cases} [u],&\text{if }j=1\,,\\ \{u(j-1),\dots,uj\},&\text{if }j\in\{2,\dots,p-1\}\,,\\ \{u(p-1),\dots,n\},&\text{if }j=p\,; \end{cases} \end{equation} \item Generate a quadratic polynomial objective function $f=\sum_{j\in[p]}f_j$ such that for each $j\in[p]$, $f_j\in{\mathbb R}[\mathbf x(I_j)]_2$, and the coefficients $f_{j,\alpha}$, $\alpha\in{\mathbb N}^{I_j}_2$, of $f_j$ are randomly generated in $(-1,1)$ w.r.t. the uniform distribution; \item Take $m=p$ and $g_j:=-\|\mathbf x(I_j)\|_2^2+1$, $j\in[m]$; \item Take a random point $\mathbf a$ in $S(g)$ w.r.t.
the uniform distribution; \item Let $r:=\lfloor l/p\rfloor$ and \begin{equation}\label{eq:assign.eqcons} W_j:=\begin{cases} \{(j-1)r+1,\dots,jr\},&\text{if }j\in[p-1]\,,\\ \{(p-1)r+1,\dots,l\},&\text{if } j=p\,. \end{cases} \end{equation} For every $j\in[p]$ and every $i\in W_j$, generate a quadratic polynomial $h_i\in{\mathbb R}[\mathbf x(I_j)]_2$ by \begin{enumerate} \item for each $\alpha\in{\mathbb N}^{I_j}_2\backslash \{\mathbf 0\}$, taking a random coefficient $h_{i,\alpha}$ of $h_i$ in $(-1,1)$ w.r.t. the uniform distribution; \item setting $h_{i,\mathbf 0}:=-\sum_{\alpha\in{\mathbb N}^{I_j}_2\backslash \{\mathbf 0\}} h_{i,\alpha} \mathbf a^\alpha$. \end{enumerate} Then $\mathbf a$ is a feasible solution of POP \eqref{eq:POP.def}. \end{enumerate} The numerical results are displayed in Table \ref{tab:experiment.single.ball.sparse} and \ref{tab:single.ball.sparse.QCQP}. \begin{table} \caption{\small Numerical results for minimizing a random quadratic polynomial with CS and ball constraints on each clique of variables} \label{tab:experiment.single.ball.sparse} {\small\begin{itemize} \item POP size: $n=1000$, $m=p$, $l=0$, $u^{\max}=u+1$; Relaxation order: $k=2$; SDP size: $\omega=2p$, $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91& 182 & 91 & 222712 & -240.54 & 124 & -240.37 & 98 & -508.35 & 15\\ \hline 16 & 63& 126 & 171 & 550692 & -205.45 & 1389 & -205.19 & 280 & -429.93 & 83\\ \hline 21 & 48 & 96 & 276 & 1107682 & $-$ & $-$ & -175.60 & 321 & -365.91 & 269\\ \hline 26 & 39 & 78 & 406 & 1955879 & $-$ & $-$ & -165.65 & 559 & -338.00 & 225 \\ \hline 31 & 33 & 66 & 561 & 3167072 & $-$ & $-$ & -149.10 & 973 & -305.33 & 5280 \\ \hline 36 & 28 & 56 & 741 & 4758727 & $-$ & $-$ & -140.21 & 1315 & -285.69 & 737 \\ \hline 41 & 25 & 50 & 946 & 6839993 & $-$ & $-$ & -126.55 & 1926 & -265.28 & 622 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for randomly generated QCQPs with CS and ball constraints on each clique of variables} \label{tab:single.ball.sparse.QCQP} {\small\begin{itemize} \item POP size: $n=1000$, $m=p$, $l=143$, $u^{\max}=u+1$; Relaxation order: $k=2$; SDP size: $\omega=2p$, $a^{\max}=3$.
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91 & 182 & 91 & 235023 & -224.15 & 163 & -224.09 & 204 & -500.34 & 13 \\ \hline 16 & 63 & 126 & 171 & 572905 & -192.45 & 1830 & -192.30 & 335 & -420.87 & 50 \\ \hline 21 & 48 & 96 & 276 & 1139460 & $-$ & $-$ & -162.79 & 537 & -363.28 & 103 \\ \hline 26 & 39 & 78 & 406 & 2005124 & $-$ & $-$ & -148.77 & 1014 & -336.42 & 263 \\ \hline 31 & 33 & 66 & 561 & 3239573 & $-$ & $-$ & -142.38 & 2115 & -313.80 & 3679 \\ \hline 36 & 28 & 56 & 741 & 4862292 & $-$ & $-$ & -124.97 & 5304 & -263.77 & 6598 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} The number of variables is fixed as $n=1000$. We increase the clique size $u$ so that the number of variable cliques $p$ decreases accordingly. Again, the results in Table \ref{tab:experiment.single.ball.sparse} and \ref{tab:single.ball.sparse.QCQP} show that CGAL is the fastest solver and returns an optimal value whose gap w.r.t. the one returned by Mosek is within 1\% (for $u\le 16$). Moreover, Mosek runs out of memory when $u\ge 21$, and LMBM fails to converge to the optimal value. \subsection{Randomly generated QCQPs with CS and box constraints on each clique of variables} \label{sec:experiment.single.box.sparse} \paragraph{Test problems:} We construct randomly generated QCQPs with CS as in Section \ref{sec:experiment.single.ball.sparse}, where ball constraints are now replaced by box constraints. Namely, in Step 3 we take $m=n$, $g_j:=-x_j^2+1/u$, $j\in [n]$. The numerical results are displayed in Table \ref{tab:experiment.single.box.sparse} and \ref{tab:single.box.sparse.QCQP}. \begin{table} \caption{\small Numerical results for minimizing a random quadratic polynomial with CS and box constraints on each clique of variables} \label{tab:experiment.single.box.sparse} {\small\begin{itemize} \item POP size: $n=m=1000$, $l=0$, $u^{\max}=u+1$; Relaxation order: $k=2$; Constant trace: $a^{\max}\in [3,4]$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91 & 1181 & 91 & 313361 & -204.89 & 443 & -204.69 & 753 & -555.51 & 223 \\ \hline 16 & 63 & 1125 & 171 & 720323 & -163.11 & 3082 & -162.88 & 3059 & -438.22 & 119 \\ \hline 21 & 48 & 1095 & 276 & 1380918 & $-$ & $-$ & -147.92 & 5655 & -387.42 & 2213 \\ \hline 26 & 39 & 1077 & 406 & 2357161 & $-$ & $-$ & -131.00 & 8889 & -340.04 & 5346 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for QCQPs with CS and box constraints on each clique of variables} \label{tab:single.box.sparse.QCQP} {\small\begin{itemize} \item POP size: $n=m=1000$, $l=143$, $u^{\max}=u+1$; Relaxation order: $k=2$; Constant trace: $a^{\max}\in [3,4]$.
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size} & \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$&$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91 & 1181 & 91 & 325672 & -187.01 & 402 &-186.98 & 1915 & -570.68 & 184 \\ \hline 16 & 63 & 1125 & 171 & 742536 & -142.16 & 4323 & -142.27 & 4126 & -442.51 &57 \\ \hline 21 & 48 & 1095 & 276 & 1412696 & $-$ & $-$ & -131.14 & 5334 & -382.89 & 618 \\ \hline 26 & 39 & 1077 & 406 & 2406406 & $-$ & $-$ & -113.44 & 8037 & -336.11 & 901 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} The number of variables is fixed as $n=1000$. We increase the clique size $u$ so that the number of variable cliques $p$ decreases accordingly. From the results in Table \ref{tab:experiment.single.box.sparse} and \ref{tab:single.box.sparse.QCQP}, one observes that when the largest size of variable cliques is relatively small (say $u\le 11$), Mosek is the fastest solver. However, when the largest size of variable cliques is relatively large (say $u\ge 21$), Mosek runs out of memory while CGAL still works well. \subsection{Randomly generated QCQPs with CS-TS and ball constraints on each clique of variables} \label{sec:experiment.single.ball.sparse.mix} \paragraph{Test problems:} We construct randomly generated QCQPs with CS-TS and ball constraints on each clique of variables as follows: \begin{enumerate} \item Take a positive integer $u$, $p:=\lfloor n/u\rfloor +1$ and let $(I_j)_{j\in[p]}$ be defined as in \eqref{eq:index.vars}. \item Generate a quadratic polynomial objective function $f=\sum_{j\in[p]}f_j$ such that for each $j\in[p]$, $f_j\in{\mathbb R}[\mathbf x(I_j)]_2$ and the nonzero coefficients $f_{j,\alpha}$, with $\alpha\in{\mathbb N}^{I_j}_2$ and $|\alpha|= 2$, are randomly generated in $(-1,1)$ w.r.t. the uniform distribution; \item Take $m=p$ and $g_j:=-\|\mathbf x(I_j)\|_2^2+1$, $j\in[m]$; \item Take a random point $\mathbf a$ in $S(g)$ w.r.t. the uniform distribution; \item Let $r:=\lfloor l/p\rfloor$ and $(W_j)_{j\in[p]}$ be as in \eqref{eq:assign.eqcons}. For every $j\in[p]$ and every $i\in W_j$, generate a quadratic polynomial $h_i\in{\mathbb R}[\mathbf x(I_j)]_2$ by \begin{enumerate} \item for each $\alpha\in{\mathbb N}^{I_j}_2\backslash\{\mathbf 0\}$ with $|\alpha|\ne2$, taking $h_{i,\alpha}=0$; \item for each $\alpha\in{\mathbb N}^{I_j}_2$ with $|\alpha|=2$, taking a random coefficient $h_{i,\alpha}$ of $h_i$ in $(-1,1)$ w.r.t. the uniform distribution; \item setting $h_{i,\mathbf 0}:=-\sum_{\alpha\in{\mathbb N}^{I_j}_2\backslash \{\mathbf 0\}} h_{i,\alpha} \mathbf a^\alpha$. \end{enumerate} Then $\mathbf a$ is a feasible solution of POP \eqref{eq:POP.def}. \end{enumerate} The numerical results are displayed in Table \ref{tab:experiment.single.ball.sparse.mix} and \ref{tab:single.ball.sparse.QCQP.mix}. \begin{table} \caption{\small Numerical results for minimizing a random quadratic polynomial with CS-TS and ball constraints on each clique of variables} \label{tab:experiment.single.ball.sparse.mix} {\small\begin{itemize} \item POP size: $n=1000$, $m=p$, $l=0$, $u^{\max}=u+1$; Relaxation order: $k=2$; Sparse order: $t=1$; SDP size: $a^{\max}=3$.
\end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 &91 & 364 & 79 & 169654 & -160.05 & 163 & -160.01 & 498 & -489.87 & 98 \\ \hline 16 & 63 & 252 & 154 & 448354 & -135.78 & 1422 & -135.74 & 768 & -413.24 & 186 \\ \hline 21 &48 & 192 & 254 & 939619 & $-$ & $-$ & -117.17 & 1605 & -351.65 & 299 \\ \hline 26 &39 & 156 & 379 & 1705763 & $-$ & $-$ & -106.26& 3150 & -318.15 & 347 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for QCQPs with CS-TS and ball constraints on each clique of variables} \label{tab:single.ball.sparse.QCQP.mix} {\small\begin{itemize} \item POP size: $n=1000$, $m=p$, $l=143$, $u^{\max}=u+1$; Relaxation order: $k=2$; Sparse order: $t=1$; SDP size: $a^{\max}=3$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL} & \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91 & 364 & 79 & 180303 & -155.91 & 158 & -155.87 & 604 & -500.47 & 83\\ \hline 16 & 63 & 252 & 154 & 468290 & -127.42 & 1707 & -127.36 & 1053 & -412.42 & 236 \\ \hline 21 & 48 & 192 & 254 & 939619 & $-$ & $-$ & -114.85 & 2877 & -363.23 & 128\\ \hline 26 & 39 & 156 & 379 & 1751556 & $-$ & $-$ & -102.30 & 6878 & -329.16 & 749 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} The behavior of solvers is similar to that in Section \ref{sec:experiment.single.box.sparse}. Here, we also emphasize that our framework is less efficient than interior-point methods for most benchmarks presented in \cite{cstssos}. The two underlying reasons are that (1) the block size of the resulting SDP relaxations is small, in which case Mosek performs more efficiently, e.g., for the benchmarks from \cite[Section 5.2]{cstssos}, and (2) it is harder to find the constant trace, e.g., for the benchmarks from \cite[Section 5.4]{cstssos}. Thus our proposed method complements that in \cite{cstssos} when the block size of the SDP relaxations is large and/or when CTP can be efficiently verified. \subsection{Randomly generated QCQPs with CS-TS and box constraints on each clique of variables} \label{sec:experiment.single.box.sparse.mix} \paragraph{Test problems:} We construct randomly generated QCQPs with CS-TS as in Section \ref{sec:experiment.single.ball.sparse.mix}, where ball constraints are now replaced by box constraints. Namely, in Step 3 we take $m=n$, $g_j:=-x_j^2+1/u$, $j\in [n]$. The numerical results are displayed in Table \ref{tab:experiment.single.box.sparse.mix} and \ref{tab:single.box.sparse.QCQP.mix}.
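The overlapping chain of cliques \eqref{eq:index.vars} that underlies all sparse test problems of this section is easy to generate explicitly; the small self-contained Julia helper below (the name is ours) does so and checks that the largest clique size is $u^{\max}=u+1$.
\begin{verbatim}
# Cliques I_1,...,I_p from (eq:index.vars): consecutive cliques share one
# variable, yielding a chordal chain with largest clique size u + 1.
function chain_cliques(n::Int, u::Int)
    p = div(n, u) + 1
    [j == 1 ? collect(1:u) :
     j == p ? collect(u * (p - 1):n) :
              collect(u * (j - 1):u * j)
     for j in 1:p]
end

cliques = chain_cliques(1000, 11)       # p = 91 cliques, as in the table rows above
@assert maximum(length, cliques) == 12  # u_max = u + 1
\end{verbatim}
Because consecutive cliques intersect in a single variable, the running intersection property holds, so these index sets can be used directly as the maximal cliques of the csp graph in the sparse relaxation \eqref{eq:primal.cs.SDP}.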
\begin{table} \caption{\small Numerical results for minimizing a random quadratic polynomial with CS-TS and box constraints on each clique of variables} \label{tab:experiment.single.box.sparse.mix} {\small\begin{itemize} \item POP size: $n=m=1000$, $l=0$, $u^{\max}=u+1$; Relaxation order: $k=2$; Sparse order: $t=1$; Constant trace: $a^{\max}\in [3,4]$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91 & 2362 & 79 & 248335 & -126.15 & 151 &-126.04 & 1982 & -541.78 & 140 \\ \hline 16 & 63 & 2250 & 154 & 601081 & -100.75 & 2225 &-100.64 & 7323 & -429.78 & 88 \\ \hline 21 & 48 & 2190 & 254 & 1191001 & $-$ & $-$ & -87.804 & 10734 & -363.88 & 157 \\ \hline 26 & 39 & 2154 & 379 & 2080265 & $-$ & $-$ & -81.908 & 20294 & -338.00 & 1129 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{\small Numerical results for QCQPs with CS-TS and box constraints on each clique of variables} \label{tab:single.box.sparse.QCQP.mix} {\small\begin{itemize} \item POP size: $n=m=1000$, $l=143$, $u^{\max}=u+1$; Relaxation order: $k=2$; Sparse order: $t=1$; Constant trace: $a^{\max}\in [3,4]$. \end{itemize}} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{POP size}& \multicolumn{3}{c|}{SDP size} & \multicolumn{2}{c|}{Mosek}& \multicolumn{2}{c|}{CGAL}& \multicolumn{2}{c|}{LMBM} \\ \hline $u$&$p$ &$\omega$&$s^{\max}$ & $\zeta$& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}& \multicolumn{1}{c|}{val}& \multicolumn{1}{c|}{time}\\ \hline 11 & 91 & 2362 & 79 & 258984 & -114.53 & 325 & -114.27 & 482 & -529.32 & 226 \\ \hline 16 & 63 & 2250 & 154 & 621017 & -96.199 & 4450 & -96.079 & 1245 & -433.34 & 519 \\ \hline 21 & 48 & 2190 & 254 &1220027 & $-$ & $-$& -83.013 & 8204 & -372.97 & 554 \\ \hline 26 & 39 & 2154 & 379 &2126058 & $-$ & $-$& -74.532 & 27600 & -258.90 & 764 \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Discussion:} The behavior of solvers is similar to that in Section \ref{sec:experiment.single.box.sparse}. \section{Conclusion} In this paper, we have proposed a general framework for exploiting the constant trace property (CTP) when solving large-scale SDPs, typically SDP relaxations arising from the Moment-SOS hierarchy applied to POPs. Extensive numerical experiments strongly suggest that with this CTP formulation, the first-order CGAL solver is more efficient and more scalable than the interior-point solver Mosek (which does not exploit CTP), especially when the block size is large. In addition, the optimal value returned by CGAL is typically within 1\% of the one returned by Mosek. We have also integrated sparsity-exploiting techniques into the CTP framework in order to handle POPs of larger size. For SDP relaxations of large-scale POPs with a term- and/or correlative-sparsity pattern, and in applications for which medium accuracy of optimal solutions is sufficient, we believe that our framework should be very useful.
As a topic of further investigation, we would like to improve the LP-based formulation for verifying CTP, for instance by relying on more general second-order cone programming. We would also like to generalize the CTP-exploiting framework to noncommutative POPs \cite{burgdorf16,klep2019sparse,wang2020exploiting}, which have attracted a lot of attention in the quantum information community. Another line of research would be to investigate whether CTP could be efficiently exploited by interior-point solvers. \paragraph{\textbf{Acknowledgements}.} The first author was supported by the MESRI funding from EDMITT. The second author was supported by the FMJH Program PGMO (EPICS project) and EDF, Thales, Orange, and Criteo, as well as by the Tremplin ERC Stg Grant ANR-18-ERC2-0004-01 (T-COPS project). This work has benefited from the Tremplin ERC Stg Grant ANR-18-ERC2-0004-01 (T-COPS project), the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Actions, grant agreement 813211 (POEMA), as well as from the AI Interdisciplinary Institute ANITI funding, through the French ``Investing for the Future PIA3'' program under the Grant agreement n$^{\circ}$ANR-19-PI3A-0004. The third and fourth authors were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement 666981 TAMING).
{ "timestamp": "2020-12-17T02:15:38", "yymm": "2012", "arxiv_id": "2012.08873", "language": "en", "url": "https://arxiv.org/abs/2012.08873" }
\section{Introduction} Spontaneous Rayleigh-Brillouin (RB) scattering spectra of gaseous/fluidic media contain information on the thermodynamics and the transport properties of the gases/fluids, such as thermal diffusivity, speed of sound, relaxation times of various dynamical processes, etc.~\cite{Miles2001,Mcmanus2019,Bruno2019}. In addition, such spectra provide a way to test a given theory of the energy relaxation dynamics in gases, and in gaseous mixtures. Theories of RB-scattering for fluids in the kinetic and hydrodynamic regimes have been devised based on the linearized Boltzmann equation and on linearized hydrodynamic equations, respectively. Several models have successfully explained the scattered light spectra~\cite{Yip1964,Mountain1966,Marques1998}. Some reproduce the spectra very well, even though some discrepancies with experiments remain, while their validity is often restricted to certain pressure regimes. A celebrated example of a model representation of RB-scattering profiles in the kinetic regime is the Tenti-model~\cite{Boley1972,Tenti1974}, which was applied to describe the spectral profiles of a variety of gases, such as N$_2$~\cite{GU2013b}, CO$_2$~\cite{Gu2014a,Wang2019}, and N$_2$O~\cite{Wang2018}. However, the Tenti-model was not developed for modeling the RB-spectra of gas mixtures. Moreover, it requires as input the values of the macroscopic transport coefficients, such as heat capacity, thermal conductivity, specific heat ratio, and shear viscosity, which are in general not known for composite mixtures. Ultimately, the value of the bulk viscosity, the parameter determining internal relaxation, is typically obtained via a fitting procedure when applying the Tenti-model. This means that the Tenti-model is not immediately applicable to the present case of binary mixtures. The hydrodynamic theory of light scattering in binary fluid mixtures was first developed by~\citet{Mountain1969}, who described the local dielectric constant fluctuations by several linear hydrodynamic equations including the continuity equation for mass conservation, the Navier-Stokes equation for momentum conservation, the diffusion equation and the energy transport equation. Later, \citet{Cohen1971} corrected some correlation functions by adding the `non-Lorentzian' term to the original formulation of~\citet{Mountain1969}, therewith improving the light scattering models. On the experimental side, Rayleigh-Brillouin scattering spectra of helium-xenon atomic gas mixtures were measured by \citet{Letamendia1981} at different pressures, compositions, and scattering angles. The data were compared with a complete two-component hydrodynamic theory, and good agreement was found at low molar fractions of He and at molar fractions of He higher than a `critical' value which depends on the partial pressure of Xe. In addition, since the spectral shape in He$-$Xe mixtures is very sensitive to the presence and magnitude of thermal-diffusion effects, the thermal diffusion coefficient could be derived. A kinetic model was formulated by \citet{Letamendia1985} based on the generalized Enskog equations for a binary mixture of hard-sphere fluids.
This model gives an improvement over an existing model derived by \citet{Boley1972b}, which is based on the linearized Boltzmann equations for Maxwell molecules and which was successful in explaining light scattering spectra of He$-$Xe mixtures at very low Xe pressure and small Xe molar fraction, conditions under which imperfect-gas effects and thermal diffusion can be ignored. \citet{Bonatto2005} proposed a model to describe the spontaneous density fluctuations in a binary mixture of monatomic ideal gases based on the Boltzmann equation, the collision operators of which are replaced by simple relaxation-time terms. In this model, the kinetic equations for a mixture of monatomic ideal gases are characterized by the fields of partial number density, partial flow velocity and partial temperature, under the assumption that the particles of the mixture interact according to the Lennard-Jones (6$-$12) potential. This model was applied to the light scattering spectrum of a binary gas mixture passing over from a hydrodynamic to a kinetic regime. The measurements by \citet{Gu2015} of RB-spectra on mixtures of Ar$-$He and Kr$-$He were found to be in excellent agreement with this model. Clearly, since noble gas atoms do not have the internal degrees of motion that molecules have, this model is not suited for mixtures including molecular gases. In order to extend these studies into the molecular regime by including intra-molecular as well as inter-molecular relaxation, RB-scattering of mixtures of SF$_6-$He, SF$_6-$D$_2$, and SF$_6-$H$_2$ is measured under different conditions. A relaxation hydrodynamic model for these mixtures of specific disparate masses is developed, based on a generalized hydrodynamic description, and a comparison is made between the experimental data and the model developed, as well as with a classical hydrodynamics model. \section{Relaxation hydrodynamic model for binary mixtures} \label{sec:MixtureModel} In a fluid in thermal equilibrium, the intensity of the Rayleigh-Brillouin scattered light is related to the fluctuations of the dielectric constant $\delta \epsilon$ caused by the random thermal motion of molecules~\cite{Fabelinskii2012}. For a binary gas mixture these fluctuations are related to the fluctuations of thermodynamic variables such as pressure $p$, temperature $T$ and the mass concentration of one constituent $c$: \begin{equation}\label{Eq:DielectriFlucMixture} \delta \epsilon(\boldsymbol r, t) = \left(\frac{\partial \epsilon}{\partial p}\right)_{\!\!\hskip -0.25mm {\scriptscriptstyle T},c}\delta p(\boldsymbol r, t) + \left(\frac{\partial \epsilon}{\partial T}\right)_{\!\!\hskip -0.25mm p,c}\delta T(\boldsymbol r, t) + \left(\frac{\partial \epsilon}{\partial c}\right)_{\!\!\hskip -0.25mm p,{\scriptscriptstyle T}}\delta c(\boldsymbol r, t) \end{equation} and the dynamic structure factor provides an expression from which the scattering spectrum can be computed: \begin{equation}\label{Eq:StructureMixture} S(\boldsymbol q, \omega) = 2\hskip 0.25mm {\rm Re}\hskip 0.25mm [ \langle \delta \epsilon(\boldsymbol q, i\omega) \delta \epsilon(- \boldsymbol q, 0) \rangle ] \end{equation} where $\boldsymbol q$ represents the scattering vector with magnitude: \begin{equation} q = 2k_i \sin{\frac{\theta}{2}} = \frac{4\pi n}{\lambda_i} \sin{\frac{\theta}{2}} \label{q-vec} \end{equation} with $k_i$ and $\lambda_i$ the wave vector and wavelength of the incident light, $n$ the refractive index and $\theta$ the scattering angle~\cite{Wang2020}.
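As a concrete illustration of Eq.~\eqref{q-vec}, the scattering vector magnitude follows directly from the experimental settings; the short Python sketch below (illustrative only; the function name is ours, and the values mimic the conditions used later in this paper) evaluates $q$ for 532.22 nm light scattered at $\theta=55.7^\circ$ in a dilute gas with $n\approx 1$.

\begin{verbatim}
import numpy as np

def scattering_vector(wavelength_m, theta_deg, n_refr=1.0):
    """Magnitude of the scattering vector, q = (4*pi*n/lambda) sin(theta/2)."""
    return 4.0 * np.pi * n_refr / wavelength_m \
        * np.sin(np.radians(theta_deg) / 2.0)

# Illustrative values: 532.22 nm incident light, 55.7 degree scattering angle.
q = scattering_vector(532.22e-9, 55.7)
print(f"q = {q:.3e} m^-1")   # ~1.1e7 m^-1 for these settings
\end{verbatim}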
This sets the dependence of the dynamic structure factor on the experimental conditions. In classical mixture theory \cite{Mountain1969,Cohen1971}, the macroscopic state of a binary gas mixture is characterized by the six scalar fields of mass density $\rho=\rho_{1}+\rho_{2}$, flow velocity $\boldsymbol v$ (contributing a field for each direction), temperature $T$ and mass concentration $c=\rho_{1}/\rho$, where the index 1 refers to the light constituent, while the index 2 refers to the heavy one. The balance equations governing the dynamical behavior of these fields are:\\ (1) the continuity equation: \begin{equation}\label{Eq:Continuity} \frac{{\rm d} \rho}{{\rm d} t} + \rho \nabla \cdot \boldsymbol v = 0, \end{equation}\\ (2) the momentum equation: \begin{equation}\label{Eq:NavierStokes} \rho \frac{{\rm d}\boldsymbol v}{{\rm d} t} + \nabla \cdot \boldsymbol \sigma = 0, \end{equation}\\ (3) the diffusion equation: \begin{equation}\label{Eq:Diffusion} \rho \frac{{\rm d}c}{{\rm d} t} + \nabla \cdot \boldsymbol{\mathcal{J}} = 0, \end{equation}\\ (4) the energy transport equation: \begin{equation}\label{Eq:EnergyTrans} \rho \frac{{\rm d}\varepsilon}{{\rm d} t} + \nabla \cdot \boldsymbol \kappa + \boldsymbol \sigma:\nabla \boldsymbol{v} = 0, \end{equation}\\ where $\boldsymbol{\sigma}$ is the pressure tensor, $\varepsilon$ is the mixture specific internal energy, $\boldsymbol{\kappa}$ is the heat flux vector and $\boldsymbol{\mathcal{J}}$ is the diffusion flux of the light constituent in the mixture, while ${\rm d}/{\rm d}t=\partial/\partial t+\boldsymbol{v}\cdot \nabla$ denotes the material time derivative. The balance equations~\eqref{Eq:Continuity}-\eqref{Eq:EnergyTrans} become a closed set of field equations for the determination of the basic fields if we provide constitutive relations for the pressure tensor, the heat flux vector and the diffusion flux.
In the Navier-Stokes-Fourier approximation, these constitutive relations are~\cite{deGrootMazur1969}:\\ (i) the Navier-Stokes law \begin{equation}\label{Eq:PressureTensor} \boldsymbol \sigma = (p - \eta_{\rm b} \nabla \cdot \boldsymbol v)\boldsymbol I - 2\ \eta_{\rm s}\substack{\sc{\circ} \\[-0.2mm] \displaystyle \overline{\nabla \spm \boldsymbol{v}}\\[6pt] }, \end{equation} where $p$ is the mixture pressure, $\eta_{\rm b}$ is the volume (or bulk) viscosity, $\eta_{\rm s}$ is the shear viscosity and $\substack{\sc{\circ} \\[-0.2mm] \displaystyle \overline{\nabla \spm \boldsymbol{v}}\\[6pt] }$ is the rate-of-shear tensor;\\ (ii) the Fourier law of heat conduction: \begin{equation}\label{Eq:HeatFlux} \boldsymbol \kappa = -\lambda_{\scriptscriptstyle\rm th} \nabla T + \left[\mu-T\left(\frac{\partial \mu}{\partial T}\right)_{p,c}+ k_{\scriptscriptstyle T}\left(\frac{\partial \mu}{\partial c}\right)_{p,\scriptscriptstyle T} \right] \boldsymbol{\mathcal{J}}, \end{equation} where $\lambda_{\rm th}$ is the thermal conductivity of the mixture, $k_{\scriptscriptstyle T}$ is the thermal diffusion ratio \cite{Landau1987a} and $\mu=\mu_{\scriptscriptstyle 1}/m_{\scriptscriptstyle 1}-\mu_{\scriptscriptstyle 2}/m_{\scriptscriptstyle 2}$ is the mixture chemical potential (i.e., the difference in the chemical potential per unit mass of the two constituents); \\ (iii) the Fick law of diffusion \begin{equation}\label{Eq:DiffusionFluxFick} \boldsymbol{\mathcal{J}} = -\rho D_{\scriptscriptstyle 12}\left[ \nabla c + \frac{k_{p}}{p} \nabla p+\frac{k_{\scriptscriptstyle T}}{T} \nabla T\right], \end{equation} where $D_{\scriptscriptstyle 12}$ is the diffusion coefficient and $k_{p}\!=\!-(p/\hskip -0.25mm\spm \rho^{\hskip 0.25mm \scriptscriptstyle 2})(\partial\rho/\partial c)_{\scriptscriptstyle p,T}/(\partial\mu/\partial c)_{\scriptscriptstyle p,T}$ is the so-called barodiffusion factor. As pointed out in the literature~\cite{Weiss1995}, the description of time-dependent processes based on the Navier-Stokes-Fourier theory starts to deviate from experimental data at high frequencies, such as in the case of Rayleigh-Brillouin scattering, where typically hypersound frequencies come into play. To overcome this limitation, we may consider a generalization of the Navier-Stokes-Fourier constitutive relations by assuming that the pressure tensor, the heat flux vector and the diffusion flux respond to gradients only after a relaxation time has elapsed. This type of approach was first introduced by \citet{Cattaneo1958} to address the paradox of heat conduction in a single fluid that follows from Fourier's law; it leads to a constitutive equation for the heat flux vector with an exponential memory kernel. In the present paper, our generalized constitutive equations follow from the kinetic theory proposed by \citet{Alievskii1969} for a mixture of polyatomic gases. Based on Grad's moment method~\cite{Grad1949}, the macroscopic state of a mixture is characterized by the basic fields of partial mass densities, partial diffusion fluxes, partial stress tensors, partial specific energies and partial heat fluxes associated with translational and internal molecular degrees of freedom. A closed system of linear field equations for the determination of the basic fields can be obtained if we multiply the Boltzmann equation for a polyatomic gas mixture by an appropriate set of Hermite and internal energy polynomials, integrate over peculiar velocities and sum over internal states.
The collision integrals appearing in these equations can be expressed in terms of the basic fields by considering the so-called 17-moment approximation to the distribution function. In the case of a binary gas mixture, where the molecular mass ratio $m_{\scriptscriptstyle 1}/m_{\scriptscriptstyle 2}$ is a small parameter and the exchange of energy between translational and internal degrees of freedom of the molecules is a slow process, this system of field equations becomes fully decoupled and its solution can be used to derive the following constitutive relations: \begin{gather} \label{Eq:ExpMemoryPressureTensor} \boldsymbol{\sigma}=(\hskip 0.25mm p-\!\!\int_{\hskip -0.25mm \scriptscriptstyle 0}^{\hskip 0.25mm t} \!\!\eta_{\rm b} (t-t^\prime)\hskip 0.25mm \nabla\!\spm\spm \cdot\spm\spm \boldsymbol{v} (\boldsymbol{r},t^\prime)\hskip 0.25mm dt^\prime\hskip 0.25mm )\hskip 0.25mm \boldsymbol{I} -2\hskip 0.25mm [\eta_{\rm s}]_{_{\scriptscriptstyle 1}}\substack{\sc{\circ} \\[-0.2mm] \displaystyle \overline{\nabla \spm \boldsymbol{v}}\\[6pt] } -2\hskip 0.25mm \int_{\hskip -0.25mm \scriptscriptstyle 0}^{\hskip 0.25mm t} \!\![\eta_{\rm s}]_{_{\scriptscriptstyle 2}} (t-t^\prime)\hskip 0.25mm \substack{\sc{\circ} \\[-0.2mm] \displaystyle \overline{\nabla \spm \boldsymbol{v}}\\[6pt] } (\boldsymbol{r},t^\prime)\hskip 0.25mm dt^\prime, \end{gather} \begin{gather} \label{Eq:ExpMemoryHeatFlux} \boldsymbol \kappa=-[\lambda_{\rm th}]_{_{\scriptscriptstyle 1}} \hskip -0.25mm \nabla T -\int_{\hskip -0.25mm \scriptscriptstyle 0}^{\hskip 0.25mm t}\! [\lambda_{\rm th}]_{_{\scriptscriptstyle 2}} (t-t^\prime) \nabla T(\boldsymbol{r},t^\prime)\hskip 0.25mm dt^\prime +\Bigl[\mu-T \left (\frac{\partial\mu}{\partial T}\right)_{p,c}+k_{\scriptscriptstyle T} \left(\frac{\partial \mu}{\partial c}\right)_{\scriptscriptstyle p,T}\Bigr]\hskip 0.25mm \boldsymbol{\mathcal{J}}, \end{gather} \begin{gather} \label{Eq:ExpMemoryDiffusionFluxFick} \boldsymbol{\mathcal{J}}=-\rho\!\int_{\hskip -0.25mm \scriptscriptstyle 0}^{\hskip 0.25mm t}\! 
D_{\hskip -0.25mm \scriptscriptstyle 12} (t-t^\prime) \Bigl[ \nabla\hskip -0.25mm c\hskip 0.25mm (\boldsymbol{r},t^\prime) +\frac{k_{p}}{p}\hskip 0.25mm \nabla\hskip -0.25mm p\hskip 0.25mm (\boldsymbol{r},t^\prime) +\frac{k_{\scriptscriptstyle T}}{T}\hskip 0.25mm \nabla T(\boldsymbol{r},t^\prime)\Bigr]\hskip 0.25mm dt^\prime, \end{gather} where the generalized transport coefficients are defined as \begin{gather} \eta_{\rm b} (t-t')=\eta_{\rm b} \hskip 0.25mm \frac{ e^{\displaystyle -(t-t')/\tau_{\hskip -0.25mm \scriptscriptstyle v}}}{\tau_{\hskip -0.25mm \scriptscriptstyle v}}, \end{gather} \begin{gather} [\eta_{\rm s}]_{_{\scriptscriptstyle 2}} (t-t')=[\eta_{\rm s}]_{_{\scriptscriptstyle 2}}\hskip 0.25mm \frac{e^{\displaystyle -(t-t')/\tau_{\hskip -0.25mm \scriptscriptstyle 2}}}{\tau_{\hskip -0.25mm \scriptscriptstyle 2}}, \end{gather} \begin{gather} [\lambda_{\scriptscriptstyle\rm th}]_{_{\scriptscriptstyle 2}} (t-t')= [\lambda_{\scriptscriptstyle\rm th}^{\scriptscriptstyle\rm {tr}}]_{_{\scriptscriptstyle 2}} \hskip 0.25mm \frac{e^{\displaystyle-(t-t')/\tau_{\hskip -0.25mm \scriptscriptstyle 2}^\prime}}{\tau_{\hskip -0.25mm \scriptscriptstyle 2}^\prime} +[\lambda_{\scriptscriptstyle\rm th}^{\scriptscriptstyle\rm {int}}]_{_{\scriptscriptstyle 2}} \hskip 0.25mm \frac{e^{\displaystyle -(t-t')/\tau_{\hskip -0.25mm \scriptscriptstyle 2}^{\prime\prime}}}{\tau_{\hskip -0.25mm \scriptscriptstyle 2}^{\prime\prime}}, \end{gather} \begin{gather} D_{\scriptscriptstyle 12}(t-t')=D_{\scriptscriptstyle 12}\hskip 0.25mm \frac{e^{\displaystyle -(t-t')/\tau_{\hskip -0.25mm \scriptscriptstyle w}}}{\tau_{\hskip -0.25mm \scriptscriptstyle w}}. \end{gather} It is clear that our generalized constitutive relations depend on exponential memory kernels which are connected with the characteristic relaxation times $\tau_{\hskip -0.25mm \scriptscriptstyle v}$, $\tau_{\hskip -0.25mm \scriptscriptstyle 2}$, $\tau_{\hskip -0.25mm \scriptscriptstyle 2}^\prime$, $\tau_{\hskip -0.25mm \scriptscriptstyle 2}^{\prime\prime}$ and $\tau_{\hskip -0.25mm \scriptscriptstyle w}$, which give, respectively, a measure of the time needed for the dynamic pressure, the partial stress tensors, the partial heat fluxes and the diffusion flux to reach a stationary value. Insertion of the constitutive relations~\eqref{Eq:ExpMemoryPressureTensor}-\eqref{Eq:ExpMemoryDiffusionFluxFick} into the conservation equations~\eqref{Eq:Continuity}-\eqref{Eq:EnergyTrans} leads to a linear system of field equations. As ($p$, $c$, $T$) is not a set of statistically independent variables, here it is replaced by ($\phi$, $p$, $c$) with: \begin{equation*} \phi = T - \frac{T_{\scriptscriptstyle 0} \beta_{\scriptscriptstyle T}}{\rho_{\scriptscriptstyle 0} c_p}p \end{equation*} where $\beta_{\scriptscriptstyle T}=-\rho_{\scriptscriptstyle 0}^{ -1} (\partial \rho/\partial T)_{\scriptscriptstyle p,c}= T_{\scriptscriptstyle 0}^{-1}$ and $c_p=(\partial \varepsilon/\partial T)_{\rho,c}+T_{\scriptscriptstyle 0}\hskip 0.25mm (\partial p/\partial T)_{\!\rho,c}^{2} /\rho_{\scriptscriptstyle 0}^{2}(\partial p/\partial \rho)_{{\scriptscriptstyle T},c}$ are the thermal expansion coefficient and the specific heat capacity at constant pressure, respectively. Equilibrium values are denoted by the subscript zero, while thermodynamic derivatives are understood to be evaluated at equilibrium.
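For clarity, we add the short observation that under a Laplace transform in time each exponential memory kernel reduces to an algebraic prefactor of the transformed gradient term: \begin{equation*} \mathcal{L}\!\left[\,\int_{0}^{t}\frac{e^{-(t-t')/\tau}}{\tau}\,h(t')\,{\rm d}t'\right]\!(s)=\frac{1/\tau}{s+1/\tau}\,\hat{h}(s), \end{equation*} which, once time is measured in units of $\tau_s$ (so that $s$ is dimensionless), becomes $\dfrac{\tau_s/\tau}{s+\tau_s/\tau}\,\hat{h}(s)$. This is precisely the origin of the frequency-dependent factors appearing in the functions $\textsl{g}(s)$, $f(s)$ and $b(s)$ introduced below.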
After Fourier-Laplace transformations, this linear system for macroscopic fluctuations can be rewritten as: \begin{equation}\label{Eq:FLtransform} \boldsymbol{ A \psi}(\boldsymbol q, s) = \boldsymbol{ B \psi}(\boldsymbol q, 0) \end{equation} where \begin{gather} \boldsymbol{\psi}= \begin{pmatrix} \psi_{\scriptscriptstyle 1}\\[4mm] \psi_{\scriptscriptstyle 2}\\[4mm] \psi_{\scriptscriptstyle 3}\\[4mm] \psi_{\scriptscriptstyle 4} \end{pmatrix}= \begin{pmatrix} \beta_{\scriptscriptstyle T}\hskip 0.25mm \bar{\phi}\\[4mm] \bar{c}\\[4mm] \bar{p}/p_{\scriptscriptstyle 0}\\[4mm] \tau_{\hskip -0.25mm\spm \scriptscriptstyle s}\nabla\!\hskip -0.25mm\spm \cdot\hskip -0.25mm\spm \bar{\boldsymbol{v}} \end{pmatrix} \end{gather} and the $4\hskip -0.25mm\times\hskip -0.25mm 4$ matrices $\boldsymbol{A}$ and $\boldsymbol{B}$ have the form \begin{equation}\label{Eq:MatrixA} \boldsymbol A = \left( \begin{array}{cccc} s + f(s) q^2& -s \dfrac{\gamma-1}{\gamma} \rho_{\scriptscriptstyle 0} \dfrac{k_{\scriptscriptstyle T}}{p_{\scriptscriptstyle 0}} \left(\dfrac{\partial \mu}{\partial c}\right)_{\!\!\hskip -0.25mm p,\scriptscriptstyle T}& \dfrac{\gamma-1}{\gamma} f(s) q^2 &0 \\[2mm] k_{\scriptscriptstyle T} \textsl{g}(s) q^2&s + \textsl{g}(s) q^2& \mathcal{P} k_{p} \textsl{g}(s) q^2& 0\\[2mm] -s \gamma &-s \gamma \rho_{\scriptscriptstyle 0} \dfrac{k_{p}}{p_{\scriptscriptstyle 0}} \left(\dfrac{\partial \mu}{\partial c}\right)_{\!\!\hskip -0.25mm p,\scriptscriptstyle T} &s& \gamma\\[2mm] 0&0&-q^2& s+b(s)q^2 \end{array} \right) \end{equation} \begin{equation}\label{Eq:MatrixB} \boldsymbol B = \tau_s \begin{pmatrix} 1& -\dfrac{(\gamma-1)}{\gamma}\rho_{\scriptscriptstyle 0} \dfrac{k_{\scriptscriptstyle T}}{p_{\scriptscriptstyle 0}} \left(\dfrac{\partial \mu}{\partial c}\right)_{\!\!\hskip -0.25mm p,\scriptscriptstyle T}&0&0 \\[2mm] 0 &1&0& 0\\[2mm] -\gamma & -\gamma \rho_{\scriptscriptstyle 0}\dfrac{k_p}{p_{\scriptscriptstyle 0}} \left(\dfrac{\partial \mu}{\partial c}\right)_{\!\!\hskip -0.25mm p,\scriptscriptstyle T}&1&0 \\[2mm] 0&0&0&1 \end{pmatrix} \end{equation} In the above expressions, time is given in units of the stress relaxation time: \begin{equation} \tau_{\hskip -0.25mm s}=(\hskip 0.25mm [\eta_{\rm s}]_{_{\scriptscriptstyle 1}}+[\eta_{\rm s}]_{_{\scriptscriptstyle 2}})/p=\eta_{\rm s}/p \label{eq-tau_s} \end{equation} and length is given in units of the mixture mean free path $\tau_{s}\sqrt{p_{\scriptscriptstyle 0}/\hskip -0.25mm\rho_{\scriptscriptstyle 0}}$. The specific heat capacity ratio of the mixture: \begin{equation} \label{gammix} \gamma=1+[\hskip 0.25mm x_{\scriptscriptstyle 1}/(\gamma_{\scriptscriptstyle 1}-1)+x_{\scriptscriptstyle 2}/(\gamma_{\scriptscriptstyle 2}-1)\hskip 0.25mm ]^{-1} \end{equation} can be calculated from the specific heat ratios $\gamma_{\scriptscriptstyle i}$ (as listed in Table~\ref{Tab:PolarizabilityandGamma}) and the molar fractions $x_{\scriptscriptstyle i}$ of the constituents. 
In the above matrices the following functional forms are represented: \begin{equation} \mathcal{P} = 1+ \frac{\gamma -1}{\gamma}\frac{k_{\scriptscriptstyle T}}{k_{p}} \end{equation} \begin{equation} \textsl{g}(s) = \frac{\rho_{\scriptscriptstyle 0} D_{\scriptscriptstyle 12}}{\eta_{\rm s}}\frac{\tau_{\hskip -0.25mm s}/\tau_{\hskip -0.25mm \scriptscriptstyle w}}{s+\tau_{\hskip -0.25mm s}/\tau_{\hskip -0.25mm \scriptscriptstyle w}} \label{g-rhm} \end{equation} \begin{equation} f(s) = \frac{[\lambda_{\rm th}]_{_{\scriptscriptstyle 1}}}{\eta_{\rm s}c_p} + \frac{[\lambda_{\rm th}^{\rm tr}]_{_{\scriptscriptstyle 2}}}{\eta_{\rm s}c_p}\frac{\tau_{\hskip -0.25mm s}/\tau_{\scriptscriptstyle 2}^\prime}{s+\tau_{\hskip -0.25mm s}/\tau_{\scriptscriptstyle 2}^\prime}+ \frac{[\lambda_{\rm th}^{\rm int}]_{_{\scriptscriptstyle 2}}}{\eta_{\rm s}c_p}\frac{\tau_{\hskip -0.25mm s}/\tau_{\scriptscriptstyle 2}^{\prime\prime}}{s+\tau_{\hskip -0.25mm s}/\tau_{\scriptscriptstyle 2}^{\prime\prime}} \label{f-rhm} \end{equation} \begin{equation} b(s) = \frac{4}{3}\frac{[\eta_{\rm s}]_{_{\scriptscriptstyle 1}}}{\eta_{\rm s}} + \frac{4}{3}\frac{[\eta_{\rm s}]_{_{\scriptscriptstyle 2}}}{\eta_{\rm s}}\frac{\tau_{\hskip -0.25mm s}/\tau_{\scriptscriptstyle 2}}{s+\tau_{\hskip -0.25mm s}/\tau_{\scriptscriptstyle 2}} + \frac{\eta_{\rm b}}{\eta_{\rm s}}\frac{\tau_s/\tau_{\scriptscriptstyle v}}{s+\tau_s/\tau_{\scriptscriptstyle v}} \label{b-rhm} \end{equation} Note that the light scattering predictions of the classical mixture theory derived by Mountain and Deutch~\cite{Mountain1969} follow from our generalized hydrodynamic model if we set the functions appearing in matrix $\boldsymbol{A}$ as \begin{equation} \textsl{g}(s) = \frac{\rho_{\scriptscriptstyle 0} D_{\scriptscriptstyle 12}}{\eta_{\rm s}} \label{g-clas} \end{equation} \begin{equation} f(s) = \frac{[\lambda_{\rm th}]_{_{\scriptscriptstyle 1}}+ [\lambda_{\rm th}^{\rm tr}]_{_{\scriptscriptstyle 2}} + [\lambda_{\rm th}^{\rm int}]_{_{\scriptscriptstyle 2}}}{\eta_{\rm s}c_p} \label{f-clas} \end{equation} \begin{equation} b(s) = \frac{4}{3} + \frac{\eta_{\rm b}}{\eta_{\rm s}} \label{b-clas} \end{equation} For the computation of the RB-spectral profiles, the matrix equation can be further evaluated. This can be done both for the relaxation hydrodynamics model, taking the functional forms for $\textsl{g}(s)$, $f(s)$ and $b(s)$ as defined in Eqs.~\eqref{g-rhm}-\eqref{b-rhm}, and for the classical hydrodynamics model, taking the forms defined in Eqs.~\eqref{g-clas}-\eqref{b-clas}. Here we proceed by evaluating the more complex case of the relaxation hydrodynamics model.
Since the solution of Eq.~\eqref{Eq:FLtransform} can be cast in the form: \begin{equation} \label{Eq:SoutionofFLtransform} \psi_i(\boldsymbol q, s) = \sum_r\mathcal{Q}_{ir}\psi_r(\boldsymbol q, 0), \end{equation} where $\boldsymbol{\mathcal{Q}} = \boldsymbol A^{-1}\boldsymbol B$, the correlation functions of the form $\langle \psi_i({\boldsymbol q}, s)\psi_j(-{\boldsymbol q}, 0)\rangle$ follow as: \begin{equation}\label{Eq:CorrelationFunc} \langle \psi_i(\boldsymbol q, s)\psi_j(-\boldsymbol q, 0)\rangle = \sum_r \mathcal{Q}_{ir}\langle \hskip 0.25mm \psi_r(\boldsymbol q, 0) \psi_j (-\boldsymbol{q},0)\hskip 0.25mm \rangle \end{equation} where the equal-time correlation functions, which follow from the thermodynamic theory of fluctuations, read: \begin{eqnarray} \langle \hskip 0.25mm |\psi_{\scriptscriptstyle 1}(\boldsymbol q, 0)|^2 \rangle &=& \frac{V^2}{N_{\scriptscriptstyle 0}} \frac{(\gamma-1)}{\gamma}\\ \langle |\psi_{\scriptscriptstyle 2}(\boldsymbol q, 0)|^2 \rangle &=& \frac{V^2}{N_{\scriptscriptstyle 0}} x_{\scriptscriptstyle 1} x_{\scriptscriptstyle 2} \frac{(m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2})^2}{(m_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1} +m_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2})^4}\\ \langle \hskip 0.25mm |\psi_{\scriptscriptstyle 3} (\boldsymbol q, 0)|^2 \rangle &=& \frac{V^2}{N_{\scriptscriptstyle 0}} \gamma \end{eqnarray} where $N_{\scriptscriptstyle 0}$ is the total number of molecules in the volume $V$ of the scattering region. In terms of the set of dimensionless variables ($\psi_{\scriptscriptstyle 1}$, $\psi_{\scriptscriptstyle 2}$, $\psi_{\scriptscriptstyle 3}$) we can now write the dynamic structure factor of~Eq.~\eqref{Eq:StructureMixture} as: \begin{equation}\label{Eq:FinalStructureMixture} S(\boldsymbol q, \omega) = 2 \sum_{ij} \left( \frac{\partial \epsilon} {\partial \psi_i}\right) \left( \frac{\partial \epsilon}{\partial \psi_j}\right) \langle |\psi_j(\boldsymbol q, 0)|^2 \rangle {\rm Re}\hskip 0.25mm [\mathcal{Q}_{ij}(s=i\omega)]. \end{equation} Note that the matrix element containing $\psi_4$ need not be evaluated since it does not appear in the structure factor $S(\boldsymbol q, \omega)$. For a binary mixture obeying the Clausius-Mossotti relation (see for example the textbook of Born~\cite{Born2013}) we can obtain the following relations: \begin{eqnarray} \left(\frac{\partial \epsilon}{\partial \psi_{\scriptscriptstyle 1}}\right) &=& -\frac{N_{\scriptscriptstyle 0}}{V}\hskip 0.25mm (\alpha_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1} + \alpha_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2}) \\ \left(\frac{\partial \epsilon}{\partial \psi_{\scriptscriptstyle 2}}\right) &=& \frac{N_{\scriptscriptstyle 0}}{V}\hskip 0.25mm \frac{(m_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1}+m_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2})^2}{m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2}}\hskip 0.25mm (\alpha_{\scriptscriptstyle 1}- \alpha_{\scriptscriptstyle 2}) \\ \left(\frac{\partial \epsilon}{\partial \psi_{\scriptscriptstyle 3}}\right)&=&\frac{N_{\scriptscriptstyle 0}}{V}\hskip 0.25mm \frac{1}{\gamma}\hskip 0.25mm (\alpha_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1} + \alpha_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2}) \end{eqnarray} in which $\alpha_{\scriptscriptstyle 1}$, $\alpha_{\scriptscriptstyle 2}$ are the dynamic polarizabilities of the two molecular species at the frequency of the incident light. The molecular polarizabilities at 532.22 nm can be found in Table~\ref{Tab:PolarizabilityandGamma}.
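The numerical evaluation of Eq.~\eqref{Eq:FinalStructureMixture} is straightforward once the matrices are assembled. The Python sketch below is an illustrative outline only (not our production code; the function and argument names are ours, and the transport inputs are lumped placeholders): it builds $\boldsymbol A(s)$ and $\boldsymbol B$ according to Eqs.~\eqref{Eq:MatrixA} and \eqref{Eq:MatrixB}, forms $\boldsymbol{\mathcal{Q}}=\boldsymbol A^{-1}\boldsymbol B$ at $s=i\omega$, and sums the weighted real parts over the three optically active variables.

\begin{verbatim}
import numpy as np

def structure_factor(omegas, q, gamma, kT, kp, D, Pfac, g, f, b,
                     tau_s, w, psi2):
    """S(q, omega) from the 4x4 hydrodynamic matrix equation (sketch).

    Lumped inputs: D = rho0*(dmu/dc)_{p,T}/p0; g, f, b are callables
    g(s), f(s), b(s); w[i] = (d eps / d psi_i) and psi2[j] =
    <|psi_j(q,0)|^2> for psi_1..psi_3 (0-based arrays; psi_4 never enters).
    """
    gm1 = (gamma - 1.0) / gamma
    B = tau_s * np.array(
        [[1.0, -gm1 * kT * D, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [-gamma, -gamma * kp * D, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]], dtype=complex)
    S = np.empty_like(omegas, dtype=float)
    for n, om in enumerate(omegas):
        s = 1j * om
        A = np.array(
            [[s + f(s) * q**2, -s * gm1 * kT * D, gm1 * f(s) * q**2, 0.0],
             [kT * g(s) * q**2, s + g(s) * q**2, Pfac * kp * g(s) * q**2, 0.0],
             [-s * gamma, -s * gamma * kp * D, s, gamma],
             [0.0, 0.0, -q**2, s + b(s) * q**2]], dtype=complex)
        Q = np.linalg.solve(A, B)          # Q = A^{-1} B at s = i*omega
        S[n] = 2.0 * sum(w[i] * w[j] * psi2[j] * Q[i, j].real
                         for i in range(3) for j in range(3))
    return S
\end{verbatim}

Swapping in the constant forms of Eqs.~\eqref{g-clas}-\eqref{b-clas} for the callables g, f, b reproduces the classical hydrodynamics spectra with the same routine.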
\begin{table} \centering \renewcommand\tabcolsep{20.0pt} \caption{The dynamic polarizabilities $\alpha$ (given in units of $10^{-40}$ C$\cdot$ m$^2$/V) at a wavelength of 532.22 nm, calculated from the static polarizabilities and the dynamic polarizability function~\cite{Lide2004}, using the values cited in the footnotes. Heat capacity ratios $\gamma$ for the molecular species as obtained from experiment.}\label{Tab:PolarizabilityandGamma} \begin{threeparttable} \begin{tabular}{c c c} \hline molecule & $\alpha$ & $\gamma$ \\ \hline SF$_6$& 5.029\tnote{a} & 1.10\tnote{d}\\ He & 0.231\tnote{b} & 1.66\tnote{e} \\ D$_2$ & 0.900\tnote{c} & 1.40\tnote{e}\\ H$_2$ & 0.911\tnote{c} & 1.41\tnote{e}\\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[a] Ref.~\cite{Bridge1964}. \item[b] Ref.~\cite{Chung1968}. \item[c] Ref.~\cite{Bridge1997}. \item[d] Ref.~\cite{Yokomizu2015}. \item[e] Ref.~\cite{Koehler1950}. \end{tablenotes} \end{threeparttable} \end{table} This model is tightly connected to the transport coefficients of the binary mixture: the bulk viscosity $\eta_{\rm b}$, the shear viscosity $\eta_{\rm s}= [\eta_{\rm s}]_{_{\scriptscriptstyle 1}}+[\eta_{\rm s}]_{_{\scriptscriptstyle 2}}$, the thermal conductivity $\lambda_{\rm th}$, the diffusion coefficient $D_{\hskip 0.25mm \scriptscriptstyle 12}$ and the thermal diffusion ratio $k_T$. The characteristic relaxation times and the usual transport coefficients appearing in the generalized constitutive relations of Eqs.~\eqref{Eq:ExpMemoryPressureTensor}-\eqref{Eq:ExpMemoryDiffusionFluxFick} are given by: \begin{gather} \label{tau-v} \eta_{\rm b}=\frac{3}{2}\hskip 0.25mm (\gamma-1)\hskip 0.25mm (5/3-\gamma)\hskip 0.25mm p \hskip 0.25mm \tau_{\hskip -0.25mm\spm \scriptscriptstyle v}, \end{gather} \begin{gather} \label{eta-s1} [\eta_{\rm s}]_{_{\scriptscriptstyle 1}}=\frac{5}{8}\hskip 0.25mm \frac{x_{\scriptscriptstyle 1} k_{\hskip -0.25mm \scriptscriptstyle \rm B} T}{x_{\scriptscriptstyle 1} \mathit \Omega_{\scriptscriptstyle 11}^{\scriptscriptstyle (2,2)}+2\hskip 0.25mm x_{\scriptscriptstyle 2} \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (2,2)}} \end{gather} \begin{gather} [\eta_{\rm s}]_{_{\scriptscriptstyle 2}}=p\hskip 0.25mm x_{\scriptscriptstyle 2} \tau_{\scriptscriptstyle 2}=\frac{5}{8}\hskip 0.25mm \frac{k_{\hskip -0.25mm \scriptscriptstyle \rm B} T}{\mathit \Omega_{\scriptscriptstyle 22}^{\scriptscriptstyle (2,2)}}, \end{gather} \begin{gather} D_{\hskip -0.25mm \scriptscriptstyle 12}=k_{\hskip -0.25mm \scriptscriptstyle \rm B} T\hskip 0.25mm \frac{m_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1}+m_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2}}{m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2}}\hskip 0.25mm \tau_{\hskip -0.25mm \scriptscriptstyle w} =\frac{3}{16}\hskip 0.25mm \frac{(k_{\hskip -0.25mm\scriptscriptstyle\rm B} T)^{2} (m_{\scriptscriptstyle 1}+m_{\scriptscriptstyle 2})}{p\hskip 0.25mm m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2}\mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,1)}}, \end{gather} \begin{gather} D_{\hskip -0.25mm \scriptscriptstyle 11}=\frac{3}{8}\hskip 0.25mm \frac{(k_{\hskip -0.25mm\scriptscriptstyle\rm B} T)^{2}}{p\hskip 0.25mm m_{\scriptscriptstyle 1} \mathit \Omega_{\scriptscriptstyle 11}^{\scriptscriptstyle (1,1)}}, \end{gather} \begin{gather} D_{\hskip -0.25mm \scriptscriptstyle 22}=\frac{3}{8}\hskip 0.25mm \frac{(k_{\hskip -0.25mm\scriptscriptstyle\rm B} T)^{2}}{p\hskip 0.25mm m_{\scriptscriptstyle 2} \mathit \Omega_{\scriptscriptstyle 22}^{\scriptscriptstyle
(1,1)}}, \end{gather} \begin{gather} [\lambda_{\rm th}]_{_{\scriptscriptstyle 1}}=\frac{75}{32}\hskip 0.25mm \frac{k_{\hskip -0.25mm \scriptscriptstyle \rm B}}{m_{\scriptscriptstyle 1}} \hskip 0.25mm \dfrac{x_{\scriptscriptstyle 1} k_{\hskip -0.25mm\scriptscriptstyle \rm B} T}{x_{\scriptscriptstyle 1} \mathit \Omega_{\scriptscriptstyle 11}^{\scriptscriptstyle (2,2)}+x_{\scriptscriptstyle 2}\hskip 0.25mm \left(\dfrac{25}{2}\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,1)} -10\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,2)}+2\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,3)}\right)} +\frac{\rho_{1}[c_{v}^{\rm{int}}]_{_{1}}}{x_{\scriptscriptstyle 1}/D_{\scriptscriptstyle 11}+x_{\scriptscriptstyle 2}/D_{\scriptscriptstyle 12}}, \end{gather} \begin{gather} [\lambda_{\rm th}^{\scriptscriptstyle \rm{tr}}]_{_{\scriptscriptstyle 2}}=\frac{5}{2}\hskip 0.25mm\frac{k_{\hskip -0.25mm \scriptscriptstyle \rm B}}{m_{\scriptscriptstyle 2}} \hskip 0.25mm p\hskip 0.25mm x_{\scriptscriptstyle 2}\hskip 0.25mm \tau_{\hskip -0.25mm \scriptscriptstyle 2}^\prime =\frac{75}{32}\hskip 0.25mm \frac{k_{\hskip -0.25mm \scriptscriptstyle \rm B}}{m_{\scriptscriptstyle 2}}\hskip 0.25mm \frac{k_{\hskip -0.25mm\scriptscriptstyle \rm B} T}{\mathit \Omega_{\scriptscriptstyle 22}^{\scriptscriptstyle (2,2)}}, \end{gather} \begin{gather} [\lambda_{\rm th}^{ \rm{int}}]_{_{\scriptscriptstyle 2}}=p\hskip 0.25mm [c_{v}^{ \rm{int}}]_{_{2}}x_{\scriptscriptstyle 2}\hskip 0.25mm \tau_{\hskip -0.25mm \scriptscriptstyle 2}^{\prime\prime} =\frac{\rho_{2}[c_{v}^{\rm{int}}]_{_{2}}}{x_{\scriptscriptstyle 1}/D_{\scriptscriptstyle 12}+x_{\scriptscriptstyle 2}/D_{\scriptscriptstyle 22}}, \end{gather} \begin{gather} k_{\scriptscriptstyle T}=\frac{5}{2}\hskip 0.25mm x_{\scriptscriptstyle 1}x_{\scriptscriptstyle 2}\hskip 0.25mm \frac{m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2}}{\displaystyle (m_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1}+m_{\scriptscriptstyle 2}x_{\scriptscriptstyle 2})^{2}} \dfrac{(5\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,1)}-2\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,2)})} {x_{\scriptscriptstyle 1} \mathit \Omega_{\scriptscriptstyle 11}^{\scriptscriptstyle (2,2)}+x_{\scriptscriptstyle 2} \left(\dfrac{25}{2}\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,1)} -10\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,2)}+2\hskip 0.25mm \mathit \Omega_{\scriptscriptstyle 12}^{\scriptscriptstyle (1,3)}\right)}, \end{gather} \begin{equation} k_{p} = p\frac{(\partial \mu /\partial p)_{c,T}}{(\partial \mu /\partial c)_{p,T}}= x_{\scriptscriptstyle 1} x_{\scriptscriptstyle 2} (m_{\scriptscriptstyle 2} - m_{\scriptscriptstyle 1})\frac{m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2}}{(m_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1} + m_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2})^3} \end{equation} \begin{equation} \label{dmu/dc} \left(\frac{\partial \mu }{\partial c}\right)_{\!\!\hskip -0.25mm p,T} = \frac{k_{\hskip -0.25mm \scriptscriptstyle \rm B}T}{x_{\scriptscriptstyle 1} x_{\scriptscriptstyle 2}}\frac{(m_{\scriptscriptstyle 1} x_{\scriptscriptstyle 1} + m_{\scriptscriptstyle 2} x_{\scriptscriptstyle 2})^3}{(m_{\scriptscriptstyle 1} m_{\scriptscriptstyle 2})^2} \end{equation} where \begin{gather} [c_{v}^{ \rm{int}}]_{_{i}}=\frac{3}{2}\hskip 0.25mm \frac{(5/3-\gamma_i)}{(\gamma_i-1)}\hskip 0.25mm \frac{k_{\hskip 
-0.25mm \scriptscriptstyle \rm B}}{m_i} \end{gather} is the isochoric specific heat capacity associated with the internal degrees of freedom of molecules of the $i$-component and $\mathit \Omega^{\scriptscriptstyle (l,r)}_{\scriptscriptstyle ij}$ denotes the elastic Chapman-Cowling collision integrals \cite{Chapman1990}. Since collision integrals can only be evaluated for a specific interaction between molecules, we shall consider in this paper the Lennard-Jones (6-12) potential function: \begin{equation}\label{Eq:LJPotential} U_{ij}(r) = 4\epsilon_{ij}\left[ \left(\frac{\sigma_{ij}}{r}\right)^{12} -\left(\frac{\sigma_{ij}}{r}\right)^6 \right] \end{equation} where $r$ is the distance between the centres of mass of the two molecules, $\epsilon_{ij}$ the maximum depth of the potential well and $\sigma_{ij}$ is the distance at which the potential function vanishes. The values of the Lennard-Jones potential parameters ($\sigma_{ij}$ and $\epsilon_{ij}$) adopted in this model are extracted from literature and listed in Table~\ref{Tab:SphericalParameter}. \begin{table} \centering \caption{Parameters for the Lennard-Jones potentials $\sigma_{ij}$ and $\epsilon_{ij}$ for binary gases.} \label{Tab:SphericalParameter} \begin{threeparttable} \begin{tabular}{l l l l l} \multicolumn{5}{c}{$\sigma_{ij}$ (nm)}\\ \hline & SF$_6$ & He & D$_2$ & H$_2$\\ SF$_6$& 0.5252\tnote{a}& 0.4298\tnote{a}& 0.4420\tnote{c}& 0.4396\tnote{c}\\ He & & 0.2576\tnote{b}& & \\ D$_2$& & & 0.2948\tnote{b}& \\ H$_2$& & & & 0.2968\tnote{b}\\ \\ \multicolumn{5}{c}{$\epsilon_{ij}/k_{\scriptscriptstyle \rm B}$ (K)}\\ \hline & SF$_6$ & He & D$_2$ & H$_2$ \\ SF$_6$& 207.7\tnote{a}& 19.24\tnote{a}& 42.65\tnote{c}&39.77\tnote{c}\\ He & & 10.12\tnote{b}& & \\ D$_2$ & & & 39.3\tnote{b} & \\ H$_2$& & & &33\tnote{b} \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item[a] Ref.~\cite{Bzowski1990}. \item[b] Ref.~\cite{Letamendia1981}. \item[c] Calculated based on the Kong rule~\cite{Kong1973}. \end{tablenotes} \end{threeparttable} \end{table} The elastic collision integrals $\mathit \Omega^{\scriptscriptstyle (l,r)}_{\scriptscriptstyle ij}$ are expressed as: \begin{equation}\label{Eq:ColliIntergal} \mathit \Omega^{\scriptscriptstyle (l,r)}_{\scriptscriptstyle ij} = \sigma^2_{ij}\sqrt{\frac{2\pi k_{\scriptscriptstyle\rm B}T}{m_{ij}}}\frac{(r+1)!}{4}\left[ 1 - \frac{1}{2}\frac{\left(1+(-1)^l\right)}{1+l}\right]\mathit \Omega^{\scriptscriptstyle \ast(l,r)}_{\scriptscriptstyle ij} \end{equation} where $m_{i j} = m_{i}m_{j}/(m_{i}+m_{j})$ is the reduced mass, and $\mathit \Omega^{\scriptscriptstyle \ast(l,r)}_{\scriptscriptstyle ij}$ are the reduced collision integrals~\cite{Kim2014}, which are functions of the reduced temperature $T^\ast = k_{B}T/\epsilon_{ij}$. A substantial simplification of our model can be achieved if we consider a mixture of Maxwellian molecules, for which thermal diffusion is automatically absent. However, Letamendia and co-workers~\cite{Letamendia1981} have shown that the light scattering spectrum in disparate-mass gas mixtures is very sensitive to the presence and magnitude of thermal-diffusion effects. An estimate of the contribution of thermal-diffusion effects to the spectral shape was presented by Johnson~\cite{Johnson1983}, who showed that thermal-diffusion effects are comparable in magnitude to other first-order dissipative contributions (heavy-species heat flux and viscosity) in disparate-mass gas mixtures. 
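Returning to Eq.~\eqref{Eq:ColliIntergal}: its numerical evaluation is straightforward. The sketch below is illustrative only; instead of the tabulated reduced integrals of \cite{Kim2014} used in this work, it substitutes the widely quoted empirical Lennard-Jones fits of Neufeld and co-workers for $\Omega^{\ast(1,1)}$ and $\Omega^{\ast(2,2)}$ (an assumption, stated here and in the comments), and evaluates the dimensional integrals for the SF$_6$-He pair from the parameters of Table~\ref{Tab:SphericalParameter}.

\begin{verbatim}
import math
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def omega_star_11(t):
    """Reduced integral (1,1); empirical Neufeld-type LJ fit (assumption)."""
    return (1.06036 / t**0.15610 + 0.19300 * np.exp(-0.47635 * t)
            + 1.03587 * np.exp(-1.52996 * t) + 1.76474 * np.exp(-3.89411 * t))

def omega_star_22(t):
    """Reduced integral (2,2); same family of empirical fits (assumption)."""
    return (1.16145 / t**0.14874 + 0.52487 * np.exp(-0.77320 * t)
            + 2.16178 * np.exp(-2.43787 * t))

def omega_lr(l, r, sigma, eps_over_kB, m_red, T, omega_star):
    """Dimensional collision integral, prefactor as in the text."""
    fact = math.factorial(r + 1) / 4.0
    parity = 1.0 - 0.5 * (1.0 + (-1.0)**l) / (1.0 + l)
    t_star = T / eps_over_kB                      # reduced temperature
    return (sigma**2 * np.sqrt(2.0 * np.pi * KB * T / m_red)
            * fact * parity * omega_star(t_star))

# SF6-He pair at 295 K: sigma = 0.4298 nm, eps/kB = 19.24 K (Table 2).
amu = 1.66053906660e-27
m_red = (146.06 * 4.0026) / (146.06 + 4.0026) * amu
print(omega_lr(1, 1, 0.4298e-9, 19.24, m_red, 295.0, omega_star_11))
print(omega_lr(2, 2, 0.4298e-9, 19.24, m_red, 295.0, omega_star_22))
\end{verbatim}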
Lastly, we close this section by remarking that the spectral distribution of scattered light for a disparate-mass gas mixture can be calculated from the dynamic structure factor $S(\boldsymbol q, \omega)$, which was defined in Eq.~\eqref{Eq:StructureMixture} and further evaluated in the present framework to~Eq.~\eqref{Eq:FinalStructureMixture} under the crucial assumption that $m_{\scriptscriptstyle 1}/m_{\scriptscriptstyle 2}$ is small. It requires the specification of the molecular masses, polarizabilities, specific heat capacity ratios for all constituents, and Lennard-Jones potential parameters for all combinations of species. Based on these quantities we can determine: (i) the transport coefficients \eqref{eta-s1} - \eqref{dmu/dc} and (ii) the relaxation times $\tau_{\scriptscriptstyle 2}$, $\tau_{\scriptscriptstyle 2}^{\prime}$, $\tau_{\scriptscriptstyle 2}^{\prime\prime}$, $\tau_{\hskip -0.25mm w}$ and $\tau_{\hskip -0.25mm s}$. Thus, the only free adjustable parameter of our generalized hydrodynamic description is the relaxation time $\tau_{\hskip -0.25mm \hskip -0.25mm \scriptscriptstyle v}$, which is connected to the bulk viscosity of the binary mixture via Eq.~\eqref{tau-v}. Since this is a quantity which cannot be computed within the current framework, we define the so-called internal relaxation number $z=\tau_{\hskip -0.25mm \hskip -0.25mm \scriptscriptstyle v}/\tau_{\hskip -0.25mm s}$, representing the ratio between the mean elastic and inelastic molecular collision frequencies. The value of $z$ can then be determined for each binary mixture from the experimental input. Through the known value of $\tau_{\hskip -0.25mm s}=\eta_{\hskip -0.25mm s}/p$, the stress relaxation time setting the unit of time, the $z$-parameter is equivalent to: \begin{equation} \label{z-par} z= \frac{\eta_{\rm b}}{\eta_{\hskip -0.25mm s}} \frac{2}{(\gamma -1)(5-3\gamma)}. \end{equation} A similar, but simplified, derivation of the dynamic structure factor $S(\boldsymbol q, \omega)$ can be performed within the framework of classical hydrodynamics, starting from the same matrix equation Eq.~\eqref{Eq:FLtransform}. In this case the matrix elements containing the functional forms $\textsl{g}(s)$, $f(s)$ and $b(s)$ are replaced by the expressions in Eqs.~\eqref{g-clas}-\eqref{b-clas}. Rayleigh-Brillouin spectra are computed and a comparison is made with the spectra computed from the full relaxation hydrodynamics model. A comparison is made for a collisional number of $z=10$ in both cases, but for a variety of scattering angles $\theta$. This value of $z=10$ approximately corresponds to the condition of $p= 1$ bar of SF$_6$ combined with $p= 1$ bar of helium, at $T$ = 295 K and $\lambda_{i}$= 532 nm (see below). The variation of $\theta$ is included in the computations to illustrate the transition from the kinetic to the hydrodynamic regime. The uniformity parameter for the mixture equals: \begin{equation} \label{Eq:Parameter-y} y= \frac{\lambda_i/n}{4\pi \sin{\theta/2}} \frac{p}{\eta_{\rm s}\sqrt{2k_BT/m}} \ . \end{equation} where $p$ is the total pressure and the mass $m$ is to be taken as the mole-fraction-weighted mean in the mixture, $m=x_1m_1+x_2m_2$. This representation of $y$ involves an explicit dependence on the scattering angle $\theta$, while all other parameters are kept constant in the comparisons shown in Fig.~\ref{Fig:Comparison}. As indicated, for small scattering angles, hence in the hydrodynamic regime, both theories match closely.
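Before quoting the numerical values of $y$ for the angles compared next, we note that both dimensionless numbers are directly computable from measurable quantities. The snippet below is an illustrative sketch (the function names are ours, and the mixture shear viscosity is an assumed placeholder, since in the paper $\eta_s$ itself follows from the collision integrals above) evaluating Eqs.~\eqref{z-par} and \eqref{Eq:Parameter-y}:

\begin{verbatim}
import numpy as np

KB = 1.380649e-23   # J/K
AMU = 1.66053906660e-27  # kg

def z_parameter(eta_b, eta_s, gamma):
    """Internal relaxation number z = tau_v / tau_s, Eq. (z-par)."""
    return (eta_b / eta_s) * 2.0 / ((gamma - 1.0) * (5.0 - 3.0 * gamma))

def uniformity_y(wavelength, theta_deg, p, eta_s, T, m, n_refr=1.0):
    """Uniformity parameter y; m is the mole-fraction-weighted mass."""
    th = np.radians(theta_deg)
    return (wavelength / n_refr) / (4.0 * np.pi * np.sin(th / 2.0)) \
        * p / (eta_s * np.sqrt(2.0 * KB * T / m))

# 1:1 SF6-He mixture at 2 bar total pressure and 295 K; eta_s = 1.9e-5 Pa s
# is a placeholder value assumed here for illustration only.
m_mean = 0.5 * 146.06 * AMU + 0.5 * 4.0026 * AMU
print(uniformity_y(532.22e-9, 55.7, 2.0e5, 1.9e-5, 295.0, m_mean))
\end{verbatim}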
Note that $\theta=15^\circ$ corresponds to $y= 13.68$. However, in the kinetic regime of lower values of the uniformity parameter, at $\theta=60^\circ$ corresponding to $y=3.57$, strong deviations are found. \begin{figure} \centering \includegraphics[scale=0.40]{FigTheoryComparision.pdf} \caption{Comparison of computations of the RB-scattering profiles between the generalized macroscopic theory and the classical hydrodynamics approach for mixtures with internal relaxation number $z=10$, for varying scattering angles $\theta$ and the case of $p=1$ bar SF$_6$ in a mixture with $p=1$ bar of helium.} \label{Fig:Comparison} \end{figure} \section{Experimental setup} The experimental setup used for measuring spontaneous Rayleigh-Brillouin scattering at a wavelength of 532.22 nm, shown in Fig.~\ref{Fig:GreenSetupSimple}, has been described previously~\cite{Wang2019}. The light from a frequency-doubled Nd:VO$_4$ laser (Coherent, Verdi-V5), at a power of 5 W and a bandwidth of less than 5 MHz, travels through the binary gas medium. For the scattering cell, two Brewster-angled windows are mounted at entrance and exit ports to reduce stray light. A pressure gauge is connected to the cell to monitor the pressure. A temperature control system involving PT-100 sensors, Peltier elements, and water cooling is used to keep the cell at a constant temperature with an uncertainty of less than $0.1 \: ^\circ{\rm C}$. The laser wavelength is monitored by a wavelength meter (Toptica HighFinesse WSU-30). \begin{figure}[ht] \centering \includegraphics[scale=0.40]{FigSetupGreen.pdf} \caption{Schematic diagram of the experimental setup for spontaneous Rayleigh-Brillouin scattering at 532 nm. A Verdi-V5 laser provides continuous wave light at 532.22 nm, at a power of 5 W and a bandwidth of less than 5 MHz. The laser light is split into two beams. The pump beam crosses the RB-scattering gas cell producing scattered light that is captured under an angle $\theta =(55.7 \pm 0.3)^{\circ}$. A small fraction of the power, retained in a reference beam transmitted through M$_4$, is used to align the beam path after the gas cell towards the detector. The scattered light after a bandpass filter (F$_{\rm BW}$) is analyzed in a Fabry-Perot interferometer (FPI), with a free spectral range of 2.9964 GHz and an instrument linewidth of ($58 \pm 3$) MHz, and is collected on a photo-multiplier tube (PMT). Mirrors, lenses and diaphragm pinholes are indicated as M$_i$, L$_i$ and D$_i$. A slit of 500 $\mu$m is inserted to limit the opening angle for collecting scattered light, therewith optimizing the resolution.} \label{Fig:GreenSetupSimple} \end{figure} The scattered light propagates through a bandpass filter (Materion, T$>$ 90$\%$ at $\lambda_i$ = 532 nm, bandwidth $\Delta\lambda$ = 2.0 nm) onto a Fabry-Perot interferometer (FPI) with an effective free spectral range (FSR) of $2.9964 (5)$ GHz and an instrument width of $\sigma_{\nu_{\rm instr}}$ = 58.0 $\pm$ 3.0 MHz (FWHM). The calibration methods were discussed by \citet{Gu2012rsi}. The instrument function is verified to exhibit the functional form of an Airy function, which may be well approximated by a Lorentzian function during data analysis. For the present experiments a scattering angle $\theta = 55.7 \pm 0.3^\circ$ is adopted, because at angles smaller than the usual setting of $\theta = 90^\circ$ the Brillouin side peaks become more pronounced~\cite{Wang2020,Fabelinskii2012}.
The angle was determined by a homemade rotation goniometer stage, while the opening angle is less than $0.5^\circ$, calculated from the geometry of a slit set behind the gas cell at a certain distance from the scattering center. RB-scattering spectral profiles were recorded by piezo-scanning the FPI at integration times of 1 s for each step, usually over 18 MHz, with detection of the scattered light on a photomultiplier (PMT) after the FPI analyzer. A full spectrum covering many consecutive RB-peaks and 10,000 data points was obtained in about 3 h. The piezo-voltage scans were linearized and converted to a frequency scale by fitting the RB-peak separations to the calibrated FSR-value. The methods for producing such concatenated RB-spectra have been detailed before~\cite{Gu2012rsi,Wang2019PhD}. \section{Results and comparison} Rayleigh-Brillouin light scattering spectra were measured for gas mixtures consisting of SF$_6$, the heaviest molecular species that can be put in the gas phase at high pressures, combined with gases of the lightest molecules available. Molecular hydrogen (H$_2$) and its isotopologue deuterium (D$_2$) exhibit the same physical and chemical properties; only the masses differ, at 2 amu and 4 amu. As for a comparison with helium, its mass is the same as that of D$_2$, while the attractive potential depth for interactions with SF$_6$ is much smaller. These combinations provide an interesting test ground for verifying the relaxation hydrodynamic model put forward in section \ref{sec:MixtureModel}. Foremost, the combination of SF$_6$ with a low-mass admixed gas fulfills the condition of disparate masses, $m_1\ll m_2$. The experimental conditions are chosen keeping a standard pressure of 1 bar SF$_6$, mixed with gases of the lighter species, at pressures stepwise increasing from 0.5 bar to 4 bar. A list of all pressure combinations experimentally investigated is provided in Table~\ref{Tab:MixtureWithSF6}. All measurements were performed at room temperature, at $\lambda=532.22$ nm and $\theta=55.7^\circ$. \begin{table}[hb] \centering \caption{The conditions of the gas mixtures experimentally investigated and the fitted values of the internal relaxation number $z$.}\label{Tab:MixtureWithSF6} \begin{tabular}{c| c c c||c| c c c || c| c c c} \hline SF$_6$ &He&&&SF$_6$ &D$_2$ &&&SF$_6$ &H$_2$ &\\ \cline{1-2} \cline{5-6} \cline{9-10} \multicolumn{2}{c}{$p$ (bar)} & $T$ (K) & $z$ &\multicolumn{2}{c}{$p$ (bar)} & $T$ (K)& $z$& \multicolumn{2}{c}{$p$ (bar)} & $T$ (K) &$z$\\ \hline 1.032 & & 295.0 & &1.007 & & 293.2&&1.002 & & 293.2 &\\ 1.033 & 0.512& 295.1 & 18.73(0.83)&1.001 & 0.506& 293.2& 16.68(0.38)& 1.002 & 0.510& 293.2 &20.75(0.56)\\ 1.037 & 1.029& 295.1 & 9.74(0.36)&1.002 & 1.001& 293.2& 9.91(0.13) &1.002 & 1.004& 293.2 &13.06(0.20)\\ 1.037 & 2.146& 295.1 & 2.44(0.49)&1.002 & 2.003& 293.2& 2.67(0.13) &1.002 & 2.007& 293.2 &5.28(0.06)\\ 1.035 & 2.993& 295.1 & 0.30(0.08)&1.002 & 3.002& 293.2& 1.40(0.07)&1.002 & 3.002& 293.2 &2.51(0.33)\\ 1.037 & 4.084& 295.1 & $<$ 0.1 &1.002 & 4.002& 293.2& 1.64(0.12)&1.004 & 4.001& 293.2 &1.83(0.12)\\ \hline \end{tabular} \end{table} The measured RB light scattering spectra, shown in Fig.~\ref{Fig:SF6andHeExpRes} for mixtures with He, in Fig.~\ref{Fig:SF6andD2ExpRes} for mixtures with D$_2$, and in Fig.~\ref{Fig:SF6andH2ExpRes} for mixtures with H$_2$, all show the same qualitative behavior. In general terms, the light scattering of the binary mixtures under investigation is fully dominated by the SF$_6$ molecules.
The polarizability of SF$_6$ is extremely large, causing this molecule to exhibit a very large Rayleigh cross section~\cite{Sneep2005}. In relative terms, the polarizability, and therewith the cross section, of the light collisional partners is very small, such that these in fact only behave as `spectators', an effect that was also observed for Ar/He and Kr/He mixtures~\cite{Gu2015}, although not as pronounced as in the present case. \begin{figure} \centering \includegraphics[scale=0.5]{FigSF6andHeExpRes.pdf} \caption{Measured Rayleigh-Brillouin scattering profiles (black) of binary mixtures of SF$_6$-He for the various conditions indicated, and a comparison with the binary mixture model (red). Also an RBS-spectrum of pure SF$_6$, at 1 bar, is shown for comparison. Bottom graphs display the corresponding residuals. The experimental data were measured at a wavelength of $\lambda_i$ = 532.22 nm and a scattering angle of $\theta = 55.7^\circ$, and these spectra are on a scale of normalized integrated intensity over one FSR. \label{Fig:SF6andHeExpRes}} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{FigSF6andD2ExpRes.pdf} \caption{Same as Fig.~\ref{Fig:SF6andHeExpRes} now for SF$_6$/D$_2$ mixtures.} \label{Fig:SF6andD2ExpRes} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{FigSF6andH2ExpRes.pdf} \caption{Same as Fig.~\ref{Fig:SF6andHeExpRes} now for SF$_6$/H$_2$ mixtures. }\label{Fig:SF6andH2ExpRes} \end{figure} While for single-component gases the Brillouin side peaks become more pronounced when increasing the gas pressure, such as was observed for pure SF$_6$ gas~\cite{Wang2017}, for CO$_2$~\cite{Wang2019}, and for N$_2$O~\cite{Wang2018}, in the present case, with increasing pressure of the collisional partner in a mixture, the reverse is true. The addition of light-mass constituents to the gas causes the RBS-profile to exhibit less pronounced Brillouin side peaks. A comparison with the profile measured for 1 bar of pure SF$_6$, as shown in the figures, demonstrates this; for the pure SF$_6$ gas the side peaks are most pronounced. A further effect of the addition of light-mass collision partners is the narrowing of the composite RBS profile for increased pressures, and for all three collision partners alike. So, even though the light collision partners do not contribute to the light scattering themselves, their influence as collision partners is decisive in shaping the light scattering spectrum. The experimentally measured RB-spectra for the various gas mixtures are compared to spectra computed with the theoretical model for binary gas mixtures as described in section \ref{sec:MixtureModel}. The dynamic structure function $S(\boldsymbol q, \omega)$ is calculated with input from known properties of the SF$_6$ molecule and the light collision partner, namely the dynamic polarizability $\alpha$, the heat capacity ratio $\gamma$, and the coefficients $\epsilon_{ij}$ and $\sigma_{ij}$ that define the intermolecular interactions via the Lennard-Jones potential. All gas-transport coefficients needed to evaluate the $S(\boldsymbol q, \omega)$ function are then defined, with the exception of the bulk viscosity of the system, which was parametrized via a single value of $z$ in Eq.~\eqref{z-par}. Extensive computations were performed in which $z$ was treated as a fitting parameter for each individual binary-mixture case in a least-squares analysis.
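Schematically, this least-squares optimization convolves each model spectrum with the Lorentzian approximation of the instrument function and minimizes the residual to the measured profile. The following minimal sketch assumes a function model\_spectrum(omega, z) along the lines of Section~\ref{sec:MixtureModel} and hypothetical data arrays; it is an outline of the procedure, not our analysis code.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

FWHM_INSTR = 2.0 * np.pi * 58.0e6  # instrument width in rad/s (58 MHz FWHM)

def lorentzian(omega, fwhm):
    """Area-normalized Lorentzian approximating the FPI Airy function."""
    hw = fwhm / 2.0
    return hw / np.pi / (omega**2 + hw**2)

def convolved_model(omega, z):
    """Model spectrum convolved with the instrument function (sketch)."""
    S = model_spectrum(omega, z)        # assumed helper, cf. Section 2
    kern = lorentzian(omega - omega.mean(), FWHM_INSTR)
    dw = omega[1] - omega[0]
    S_c = np.convolve(S, kern, mode="same") * dw
    return S_c / np.trapz(S_c, omega)   # normalize over one FSR

def fit_z(omega, measured):
    """Least-squares fit of the internal relaxation number z."""
    cost = lambda z: np.sum((convolved_model(omega, z) - measured) ** 2)
    return minimize_scalar(cost, bounds=(0.0, 30.0), method="bounded").x
\end{verbatim}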
The computed spectra for the optimized values of $z$ are plotted in Figs.~\ref{Fig:SF6andHeExpRes} - \ref{Fig:SF6andH2ExpRes}, where also residuals between theory and experiment are presented. A general trend can be discerned from the comparison between experimental and theoretical spectra. The largest discrepancies occur for the lowest admixture of 0.5 bar of the lighter component, with residuals exhibiting extrema of some 10\%, and of 15\% for H$_2$. The agreement improves overall for the largest additions of the low-mass scattering partners, where deviations decrease to 2-3\%. \begin{figure}[hb] \centering \includegraphics[scale=0.35]{FigzChangesofSF6mixture.pdf} \caption{The values of $z$ derived from the comparison between experimental spectra and spectra computed with generalized hydrodynamics, for all pressure combinations and for the three mixtures of SF$_6-$He, SF$_6-$D$_2$ and SF$_6-$H$_2$. The derived uncertainties are also indicated. }\label{Fig:zChangesSF6_3} \end{figure} The computations and fitting procedures lead to a set of values for the $z$ parameter, the collisional number for reaching thermal equilibrium between translational and internal energies in the binary mixtures. Results for all combinations of heavy and light species are displayed in Fig.~\ref{Fig:zChangesSF6_3}, while $z$-values and uncertainties are also listed in Table~\ref{Tab:MixtureWithSF6}. During the fitting optimization of the $z$-parameter, the computed spectra were found to depend sensitively on the value of the heat capacity ratio $\gamma$(SF$_6$). In these procedures we adopted the experimental value from the literature and kept it fixed at $\gamma = 1.10$~\cite{Yokomizu2015}. Also the $\gamma$-values for the light collision partners were set to the literature values as listed in Table~\ref{Tab:PolarizabilityandGamma}. It is noted that for an optimized value of $\gamma$(SF$_6$) $= 1.13$, with some differences among the various pressures and low-mass mixing partners, a near-perfect match is found between experiment and the generalized RB-scattering theory. This may indicate that the latter is a better value for the heat capacity ratio, and that the present approach constitutes a means of determining heat capacity ratios. However, the strong correlations between the parameters $z$ and $\gamma$(SF$_6$) in the fitting procedures would require further investigation. The so-called internal relaxation number $z=\tau_{\hskip -0.25mm v}/\tau_{\hskip -0.25mm s}$ characterizes the ratio between elastic and inelastic molecular collision frequencies. As a rule, inelastic collisions, which describe the transfer of energy between translational and internal degrees of freedom, occur less frequently than elastic collisions. In particular, if the molar fraction of the light constituent increases in a disparate-mass gas mixture, the mean number of inelastic collisions per unit of time suffered by both constituents increases and leads to a decrease of the internal relaxation number. Based on kinetic gas theory~\cite{Rodbard1990}, it may be verified that in a binary mixture of polyatomic gases the bulk viscosity $\eta_{\rm b}$ of the mixture decreases as the molar fraction of the light constituent increases. Since the parameter $z$ is proportional to the bulk viscosity of the gas mixture, this fact could also explain the decrease of the internal relaxation number as we add light constituents to the mixture.
\begin{figure} \centering \includegraphics[scale=0.35]{FigCompareMixture2bar.pdf} \caption{Direct comparison of experimental Rayleigh-Brillouin scattering profiles for 1 bar SF$_6$, admixed with 1 bar of the three light collision partners He, H$_2$ and D$_2$.} \label{Fig:CompareExp-3} \end{figure} The trends for the $z$-parameter show only slight differences for the three different collision partners of low mass, but in view of the small uncertainties resulting from the fitting procedures (cf. Fig.~\ref{Fig:zChangesSF6_3}), these small differences are significant. In the case of H$_2$ as the collision partner, the value of $z$ is somewhat larger at a given pressure, which indicates that the heavier species He and D$_2$ lead to more efficient relaxation. It should be noted that the generalized hydrodynamic model approach presented here makes the assumption that the masses of heavy and light scatterers are strongly disparate, but the mass of the light collision partners does not enter the model description. So, in view of the fact that the Lennard-Jones coefficients for collisions of SF$_6$ with H$_2$ or D$_2$ are very similar (cf. Table~\ref{Tab:SphericalParameter}), no difference between H$_2$ and D$_2$ as collision partner would be expected. While Fig.~\ref{Fig:zChangesSF6_3} compares the resulting values of $z$, in Fig.~\ref{Fig:CompareExp-3} the observed spectra for collision with 1 bar of the light species are compared. This shows that there is an observable difference for H$_2$ and D$_2$ as collision partners, an effect that goes beyond the current model description. The fact that the RB-spectra with colliding He and D$_2$ (at 1 bar admixture) are overlapping leads to the same $z$-parameter for these conditions, as shown in Fig.~\ref{Fig:zChangesSF6_3}. The latter correspondence of the $z$-value for He and D$_2$ is indicative of the fact that the specific characteristics of the molecular interaction, as encoded in the $\epsilon_{ij}$ and $\sigma_{ij}$ Lennard-Jones parameters, only play a marginal role. Only at the highest pressures does a deviation arise between He and D$_2$ as collision partners. Finally, for an explicit comparison between the observed RBS-profiles and the results from the two theories, generalized relaxation hydrodynamics and classical hydrodynamics, results are plotted in Fig.~\ref{Fig:TwoTheorywithExpData} for the specific case of a 1:1 mixture of SF$_6$/He at $p=2$ bar. This example shows that the generalized theory gives a superior description of the obtained experimental results, in particular where the kinetic regime is entered, at smaller values of $y$. \begin{figure} \centering \includegraphics[scale=0.35]{FigSF6He2barTwotheroywitExpData.pdf} \caption{Comparisons between the measured Rayleigh-Brillouin scattering profiles of binary mixtures of SF$_6$-He and the generalized relaxation hydrodynamics theory, as well as the classical hydrodynamics approach for mixtures. The experimental data were measured with 1 bar SF$_6$ and 1 bar He at a wavelength of $\lambda_i$ = 532.22 nm and a scattering angle of $\theta = 55.7^\circ$. The theoretical spectra are, for the purpose of comparison with experiment, convolved with the instrumental width of 58 MHz. The spectra are plotted on a scale of normalized integrated intensity over one FSR.
\label{Fig:TwoTheorywithExpData}} \end{figure} \section{Conclusion} A relaxation hydrodynamic model for binary gas mixtures is developed, based on the assumption that the masses of the collision partners are disparate. In the model description all macroscopic transport coefficients are computed from the molecular interactions between heavy-heavy, heavy-light and light-light species from a Lennard-Jones potential by invoking the well-tested two-parameter components of the potentials: well depth and characteristic range. Further, the heat capacity ratios $\gamma=c_p/c_v$ and the dynamic polarizabilities $\alpha$ of the gas components are used. From these inputs, the entire collisional model is produced, which allows for a computation of the dynamic structure factor $S(\boldsymbol q, \omega)$ that is representative of the Rayleigh-Brillouin light scattering spectrum. However, a single ingredient is lacking to complete the calculations: an overall relaxation parameter which can be associated with the bulk viscosity of the binary mixture. This is subsequently set as a fitting parameter in the description of light scattering of binary mixtures. The model is experimentally tested by performing measurements on binary gas mixtures that fulfill the assumption of disparate masses nearly perfectly. For the heavy component, the heaviest molecular species that can be brought into the gas phase at high pressure is chosen: sulfur hexafluoride (SF$_6$). This is combined with the lightest collision partners available, helium gas and hydrogen gas, the latter in the form of two isotopologues, H$_2$ and D$_2$. Incidentally, the polarizability of the light scattering partners is so small with respect to that of SF$_6$ that they behave only as spectators in the light scattering process; the light scattered by the light species is negligible for the spectrum. Nevertheless, the activity of the light species as collision partners decisively alters the spectral profiles. The Brillouin side peaks in the RB-profiles, which are known to become more pronounced at increasing pressure for single-species gases, become strongly damped and gradually disappear as higher mole fractions of helium and hydrogen are added. In conclusion, the presently developed generalized relaxation hydrodynamics model for Rayleigh-Brillouin scattering in binary gases, based on an assumption of mass-disparate constituents in the gas, provides a good representation of experimentally observed spectral profiles for heavy SF$_6$ in mixtures with light He, D$_2$ and H$_2$. The model is also shown to be superior to a description in terms of classical hydrodynamics. \section*{Acknowledgements} YW acknowledges support from the Chinese Scholarship Council (CSC) for his stay at VU Amsterdam.
{ "timestamp": "2020-12-17T02:19:27", "yymm": "2012", "arxiv_id": "2012.08982", "language": "en", "url": "https://arxiv.org/abs/2012.08982" }
\section{INTRODUCTION} \label{sec:intro} The Atacama Large Millimeter/submillimeter Array (ALMA) is one of the largest astronomical projects today. In operation since the end of 2011 and continuously being updated and extended, ALMA is now in observation Cycle 7. The observatory presently consists of an array of fifty 12-m antennas and an additional compact array (ACA) of twelve 7-m and four 12-m "total power" (TP) antennas to enhance ALMA's ability to image extended objects. Baseline lengths range from 7 m (the minimum possible length for the ACA) up to ca. 16~km for the 12-m array. The ALMA project is an international collaboration between Europe, East Asia, and North America in cooperation with the Republic of Chile. The official project website for scientists is the {\it ALMA Science Portal} {\tt http://www.almascience.org}. A detailed description of ALMA and many of the standard procedures mentioned here can be found in the Cycle 7 ALMA Technical Handbook \cite{remijan2019}. Scientists proposing to use ALMA do not request a particular amount of observing time but define {\it science goals} which consist of sets of observation parameters to be achieved by ALMA. The main parameters are \begin{enumerate} \item The coordinates of the astronomical target(s) to be observed (one of them is used as the representative target if there is more than one) or the raster pointings (in case of mosaics), and the spectral range and spectral resolution to use. \item The sensitivity of the observation expressed as the noise RMS to be achieved in a given frequency bandwidth centered on a given representative frequency. \item The angular resolution (AR) of the final image expressed as the size (diameter) of the synthesized beam of the interferometer to be achieved at the representative frequency. \item (for an extended, i.e. resolved, target) the largest angular scale (LAS) for which the interferometer should still be sensitive. \end{enumerate} If the proposal is accepted, these science goals are translated into individual {\it scheduling blocks} (SBs). An SB is a description of an observation scan sequence complete with the necessary plan of calibrator observations. The typical observation time foreseen in an SB is 20 to 90 minutes (including all overhead for calibrations). The execution of an SB results in a so-called execution block (EB), the smallest unit of processable data for ALMA. In order to achieve a high sensitivity or meet time constraints, an SB may have to be executed more than once, which results in several EBs. The set of all EBs of an SB then forms the final dataset for a given science goal, the so-called Member Observation Unit Set (MOUS). All EBs undergo a quality assurance (QA) procedure immediately after observation. This is called level zero QA or QA0. Rejected EBs are discarded and reobserved if possible. When all necessary EBs of an MOUS have been observed or no further observations for the given SB are possible, the MOUS is sent for level 2 QA or QA2. This final step of QA includes a complete, science-ready calibration of the data and the generation of image products for the ALMA archive. If the MOUS does not pass QA2, it is sent back for reobservation if that is still possible. Otherwise, it is delivered to the principal investigator (PI) of the proposal. More details of the QA process can be found in \cite{remijan2019,nakos2020} and references therein. 
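
As a compact summary, the science goal parameters listed above can be represented as a small record; the field names in the following sketch are illustrative and do not reflect the actual ALMA Observing Tool schema.

\begin{verbatim}
# Illustrative record of the main science goal parameters (names assumed).
from dataclasses import dataclass

@dataclass
class ScienceGoal:
    target_ra_deg: float                 # representative target coordinates
    target_dec_deg: float
    rep_frequency_ghz: float             # representative frequency
    bandwidth_ghz: float                 # bandwidth for the noise requirement
    rms_noise_mjy: float                 # requested noise RMS
    angular_resolution_arcsec: float     # AR: synthesized beam size
    largest_angular_scale_arcsec: float  # LAS
\end{verbatim}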
Important in the context of this paper is that the decision on QA2-pass or -fail is based on measuring whether the science goal parameters have been achieved. Here, the procedure adopted so far did properly measure the noise RMS and the AR, but the LAS was only assessed indirectly if at all. It was assumed that the general design of the ALMA array configurations would ensure the fulfillment of the LAS condition. The same is true for all intermediate angular scales. In cases of a particularly large range between AR and LAS, the SB design foresees a splitting of the observation into up to four SBs: up to three for the larger angular scales (using a more compact 12-m array configuration with shorter baselines and/or the ACA and the TP antennas) and one for the smaller angular scales down to the AR (using a more extended array configuration). The MOUSs resulting from the up to four SBs would then be combined offline after QA2 in the science analysis by the PI. Such a set of MOUSs observing the same target at different spatial scale ranges is called a Group Observation Unit Set (GOUS). In this paper we present the intermediate results of an ongoing ESO ALMA internal development study on improving our methods of assuring the achievement of the science goal parameters both in observation scheduling and in subsequent QA and final imaging. Having realised that determining the synthesized beam size essentially only measures the achieved sensitivity for the longest baselines, we explore a more complete approach where we separately measure the achieved sensitivity in all ranges of baseline lengths, i.e. all observed angular scales, and then compare these to an expectation which we derive from the AR and LAS specified in the science goal. We propose to perform this comparison between observation and expectation not only at the end of the observation procedure in the QA stage but already during the scheduling of multi-EB MOUSs, dynamically choosing the array configuration for the next execution based on the baseline length distribution achieved so far. \section{THE BASELINE LENGTH DISTRIBUTION - EXPECTATION AND OBSERVATION} \label{sec:bldist} One of the main concepts we need to introduce in our new approach is the Baseline Length Distribution (BLD). This is a histogram of the projected length (in meters) of the baselines formed by the antennas of the interferometer array. For each integration (visibility) recorded in the interferometric dataset (after discarding invalid data, so-called "flagging"), one entry is made in the BLD bin corresponding to the length of the baseline used for that integration. We distinguish between \begin{description} \item[unweighted baseline length distribution] - the histogram entry for one integration is given a weight equal to the integration time. \item[absolutely weighted baseline length distribution] - the histogram entry for one integration is given a weight equal to the inverse per-channel noise squared (which is what is recorded in the WEIGHT column of a MeasurementSet). The WEIGHT value is proportional to the integration time, the channel bandwidth and inversely proportional to the system temperature. \item[relatively weighted baseline length distribution] - here the histogram entry is given a weight equal to the WEIGHT value as above but normalised to the average WEIGHT of all baselines. \end{description} The y-axis of the histogram has units of "visibility hours" in all three cases. 
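
The three variants can be summarised in a short sketch, assuming flat arrays of per-visibility projected baseline lengths, integration times and WEIGHT values extracted from a MeasurementSet; the function itself is illustrative, not ALMA software.

\begin{verbatim}
# Minimal sketch of the three BLD variants defined above.
import numpy as np

def bld(lengths_m, bin_edges_m, t_int_s, weight, mode="unweighted"):
    """Histogram projected baseline lengths into 'visibility hours'."""
    if mode == "unweighted":
        w = t_int_s / 3600.0           # weight = integration time (hours)
    elif mode == "absolute":
        w = weight                     # WEIGHT = 1 / per-channel noise^2
    elif mode == "relative":
        w = weight / np.mean(weight)   # WEIGHT normalised to its average
    else:
        raise ValueError(mode)
    hist, _ = np.histogram(lengths_m, bins=bin_edges_m, weights=w)
    return hist
\end{verbatim}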
Commonly, the "sensitivity" of an observation is regarded as inversely proportional to the achieved RMS image noise. The sensitivity of an interferometric observation is thus proportional to the square root of the product of observation time and number of baselines used, and the value of the individual histogram column of the BLD is proportional to the square of the sensitivity achieved for that baseline length range. Each baseline length range corresponds to a range of angular scales in the interferometric image derived from the dataset via the equation $a = \lambda / d$ where $a$ is the angular scale, $\lambda$ is the observing wavelength, and $d$ is the baseline length. The BLD is thus equivalent to a sensitivity vs. angular scale plot. Histogramming the baseline length rather than the angular scale itself, however, is more convenient in the scheduling context because the BLD of a given array configuration does not depend on the observing frequency. Figure \ref{fig:bldistexamples} shows examples of 1D and 2D BLDs. \begin{figure} [htb] \begin{center} \begin{tabular}{c} \includegraphics[height=12cm]{bld-examples-v2b.png} \end{tabular} \end{center} \caption[] { \label{fig:bldistexamples} Examples of 1D and 2D baseline length distributions (with relative weighting) for (top: (a) and (b)) an ALMA 8 minute observation using 44 antennas in a compact configuration and (bottom: (c) and (d)) a 39 minute exposure using 47 antennas in an extended (hybrid) configuration which is unusually elongated. Histograms (a) and (c) both use 7 m bin width while the 2D histograms (b) and (d) use a variable radial bin width linearly growing with baseline length. See text.} \end{figure} For both forms of representing the sensitivity to different angular scales, there is a problem of low number of counts at the upper end of the histogram (see Fig.~\ref{fig:bldistexamples}c) and a problem of resolution at the lower end. Also logarithmically increasing the bin width does not result in a flat distribution and is less intuitive to read. Furthermore the inclusion of the important "zero spacings" from single dish observations in the first bin is awkward in a log scale. We have therefore chosen to exclusively work with the BLD representation and remedy the histogramming problems by introducing a special bin width scheme adapted to the ALMA arrays: The first two bins are fixed to 7 m width. Starting with the third bin, the bin width is increasing linearly such that it reaches 700 m at a baseline length of 16000~m. This scheme permits to see the distributions in sufficient resolution and with sufficient counts per bin for all ALMA configurations. And it has the additional useful property that the first bin can exclusively be filled by single-dish (TP) observations and the second bin exclusively by ACA observations. Underperformance in these bins can therefore be used as an indicator that TP and/or ACA data is missing. A 2D version of the BLD is essentially the familiar "uv coverage" plot which is commonly used in interferometry. The 2D version has the advantage that it also captures the baseline {\it orientation} and thus the elongation and orientation of the synthesized beam ellipse (see the example in Fig. \ref{fig:bldistexamples}d) while the 1D BLD only captures its average diameter. On the other hand, the 1D BLD permits to better visually compare different BLDs (e.g. observed and expected) since one can simply plot one on top of the other. 
\subsection{The expected baseline length distribution} \label{sec:expect} In order to decide on the array configuration during scheduling and to judge the quality of the observed BLD, we need to derive an expected shape, the Expected Baseline Length Distribution (EBLD), to serve as a reference. This is done by determining the expectation function for the uv coverage from the science goals and filling a histogram with it. \begin{figure} [htb] \begin{center} \begin{tabular}{c} \includegraphics[height=6cm]{expected-example_a.png} \end{tabular} \end{center} \caption[] { \label{fig:expbldexamples} Examples of 1D and 2D analytically calculated expected baseline length distributions for a 30 minute observation with 43 antennas in an extended configuration.} \end{figure} \noindent Here we explore two approaches: \begin{description} \item[a) an analytical approach] which calculates the expected shape from an expected beam shape (derived from the AR) which is Fourier-transformed and then modified by an "inner taper" to account for the LAS requirement. \item[b) a simulation approach] which assumes a perfect interferometer with a maximally filled aperture up to the radius of half the maximum BL length, i.e. a disk of the diameter of the maximum BL length (derived from the AR) filled with antennas at a minimum distance (derived from the LAS). \end{description} \subsubsection{Analytical approach} \label{sec:analytical} As described above, the science goals of the PI project w.r.t. image properties are captured by the ALMA Observing Tool (OT) in the proposal submission as the angular resolution (AR) and the largest angular scale (LAS). For an interferometer this translates into the size of the synthesized beam (FWHM) and the maximum recoverable scale (MRS) given the angular modes it is sensitive to, where the latter is defined for ALMA as recovering 10\% of the total flux density of a uniform disk of the respective size; the primary beam size, i.e. antenna aperture or mosaic pattern, has to be at least three times larger. As detailed in the ALMA Technical Handbook \cite{remijan2019}, the scales for a given {\it single} configuration are also defined via the $5^{\rm th}$ and $80^{\rm th}$ percentiles of the BLD. In our approach we effectively generalise this latter definition to a match over the whole BLD and for the case of multi-configuration executions. In order to formalise the PI imaging request we assume a Gaussian target function for the AR for the following reasons: \begin{enumerate} \item such an aperture is often adopted in optical applications for reasons of imaging quality \item the functional form is invariant under Fourier transformation \item it is the form of the “clean” beam adopted in the interferometric image reconstruction algorithm CLEAN, i.e. in the ideal case the “dirty” beam equals the “clean” beam \item for the aforementioned reasons, the ALMA configurations have been designed with a visibility space density optimization against this functional form for the radial coordinate. \end{enumerate} In our present best version of the analytical BLD creation, the expectation function is computed as the Fourier transform of the Gaussian beam with the size given by the AR, and apodized after transformation with an inner uv taper of an inverted Gaussian (1/r) of the transformed LAS size. Numerically, it is implemented with a regular rectangular map filling the polar visibility density histogram, directly filling the histogram from the analytical expression while taking the pixel sizes into account. 
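
A minimal numerical sketch of this construction is given below; the Gaussian envelope and the inner taper are one plausible reading of the above description, with constants of order unity omitted, and it is not the implementation used in the study.

\begin{verbatim}
# Sketch of the analytical expected BLD: Gaussian outer envelope from the
# AR, inner taper from the LAS, filled into a radial histogram.
import numpy as np

def expected_bld(ar_rad, las_rad, wavelength_m, edges_m, total_weight):
    d_max = wavelength_m / ar_rad    # longest baseline scale, from the AR
    d_min = wavelength_m / las_rad   # shortest baseline scale, from the LAS
    edges = np.asarray(edges_m)
    centers = 0.5 * (edges[:-1] + edges[1:])
    density = np.exp(-0.5 * (centers / d_max) ** 2) \
              * (1.0 - np.exp(-0.5 * (centers / d_min) ** 2))
    # annulus area 2*pi*r*dr converts the 2D uv density to a 1D histogram
    hist = density * 2.0 * np.pi * centers * np.diff(edges)
    return hist * (total_weight / hist.sum())  # scale to the observation
\end{verbatim}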
Figure \ref{fig:expbldexamples} shows an example of a BLD computed with this method. Note that for angular scale range requests of less than a factor of five, i.e. LAS~$< 5 \times$AR, we set LAS~$:=5 \times$AR, aligned with the baseline distributions of the ALMA 12-m configurations. The expectation function is idealised in terms of the exact radial shape, azimuthal uniformity and overall smoothness. In particular, the designed and actually realised baselines of the antenna array and projection effects during observations inevitably cause deviations, which have to be captured by a metric and evaluated in order to provide guidance for the scheduling task (see Section~\ref{sec:schedPlan}). It is an important point (e.g. references \citenum{briggs1995} and \citenum{boone2013}) that ample sensitivity can be used to improve imaging with a suitable re-weighting of the uv pixels to better resemble the target function and hence improve resolution match, beam circularity and sidelobe suppression (see section \ref{sec:reweight}). \subsubsection{Simulation approach} \label{sec:simulation} As an alternative to the analytical approach, we are also exploring the potential benefits of creating the BLD directly by placing antenna positions onto an aperture with uniform spatial density and histogramming their BL lengths. In order to extract a smooth general shape as in the analytical case, the antennas are placed randomly and the process is repeated a large number of times such that the resulting BLD is an {\it average} of all possible uniform-density array configurations which obey the constraint that all BL lengths are between a given minimum and maximum. The minimum BL length is derived from the LAS, the maximum from the AR. The overall scaling of the BLD is determined by the number of visibilities in the real observation for which the expected BLD is to be derived. Due to the large-number statistics, the simulation approach naturally results in overall Gaussian-like BLD shapes, quite similar to those of the analytical approach, without assuming this functional form anywhere. See Fig. \ref{fig:bldobsandexp-example} for an example of a BLD derived with the simulation approach. As the LAS is increased, the BLD of the hypothetical instrument constructed in the above process approaches that of an equivalent optical telescope with similar image fidelity. This way of determining the ideal BLD shape naturally arrives at a sensitivity requirement for very short spacings and thus permits determining how to include single-dish observations. \subsection{Comparing the expectation with simulation or observation} \label{sec:fillfrac} As described above, a comparison of BLDs with their analytically calculated expectation is needed both in the context of scheduling and in quality assurance. To formalise this process, we introduce the "filling fraction" $f$ which is defined for each BLD bin $i$ as $$ f_i = \begin{cases} o_i / e_i & (e_i > 0)\\ 1 & (e_i = 0) \end{cases} $$ where $o_i$ is the entry in the $i$th bin of the observed BLD while $e_i$ is the corresponding entry in the expected BLD. A filling fraction of 1 means that the observation has fulfilled the expectation; for bins where $e_i = 0$ there is no expectation to fulfill, hence $f_i$ is defined to be 1. 
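
A short sketch of these quantities, including the total and average filling fractions introduced in the next paragraph:

\begin{verbatim}
# Per-bin filling fractions f_i, plus the total (f_t) and average (f_a)
# filling fractions over the expectation range (bins with e_i > 0).
import numpy as np

def filling_fractions(observed, expected):
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    f = np.where(e > 0, o / np.where(e > 0, e, 1.0), 1.0)  # f_i = 1 if e_i = 0
    rng = e > 0                  # the expectation range
    f_t = o[rng].sum() / e[rng].sum()
    f_a = f[rng].mean()
    return f, f_t, f_a
\end{verbatim}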
We also introduce the total filling fraction $f_t = \sum{o_i}/\sum{e_i}$ and the average filling fraction $f_a = \sum{f_i}/n$ where the sums in the expressions for $f_t$ and $f_a$ are computed in the histogram range where $e_i > 0$ and $n$ is the number of bins in that range (which we call {\it expectation range}). Using these filling fraction definitions, one can construct metrics for the assessment of observed BLDs. Here, one has to {\it distinguish between science goals for which image fidelity is more relevant and those where it is of minor importance}. If image fidelity is of minor importance (e.g. for detection experiments or observations of point sources), the resolution at which the observed and expected BLD need to be compared is much reduced. We have tested a number of possible metrics and have so far arrived at a preliminary best choice, which is defined as follows: \begin{enumerate} \item Determine observed and expected BLDs with ten equidistant bins over the expectation range. \item Compute $f_i$ for each of the ten bins. \item If image fidelity is of high importance for the science goal, then, for "pass", require $f_a\geq0.9$ and $f_i > 0.85$ for $i = 0,...,8$ and $f_9\geq1$, i.e. on average the filling fraction should be at least 90\%, at least 85\% in each bin, and at least 100\% in the uppermost bin which decides on the achievement of the AR. Alternatively, if image fidelity is of minor importance, relax the requirements on bins {0,...,8} and only require $\sum_0^3{o_i}/\sum_0^3{e_i} > 0.85$ and $\sum_4^8{o_i}/\sum_4^8{e_i} > 0.85$, i.e. widen the bins in the lower 90\% of the expectation range such that the shape of the BLD is only roughly measured. \end{enumerate} \section{SCHEDULING BASED ON BASELINE LENGTH DISTRIBUTIONS} \label{sec:schedPlan} \begin{figure} [ht] \begin{center} \begin{tabular}{c} \includegraphics[height=7.3cm]{Sched_fill_exec1a.png} \includegraphics[height=7.3cm]{Sched_fill_exec2a.png} \end{tabular} \end{center} \caption[] { \label{fig:SchedFill1} An ALMA observation requiring two scheduling block executions in Band 7 with an angular resolution of 0.3 arcseconds can be scheduled optimally in configuration C-4 at hour angles between -2h and +2h, with C-5 also allowed in the second execution. The scheduling options are given as the achieved total filling fractions on the z-axis as a function of configuration on the x-axis and hour angle on the y-axis. This scheduling projection assumes for the second execution that the first execution had been scheduled with maximal total filling fraction. The filling fraction of the second execution refers to the remainder of the first, resulting in over 90\% completion of the observation.} \end{figure} In ALMA, long-term and short-term scheduling tasks are distinguished: the former relate to project pressure over seasonal conditions (e.g. weather patterns, day/night conditions, the configuration schedule), the latter to daily aspects (e.g. the available antenna array, actual weather conditions). In the following we describe our new scheduling approach: \begin{description} \item[1 - Determine expected BLD:] Given the science goal parameters of target Declination (DEC), requested sensitivity, LAS, and AR range (i.e. minimum and maximum acceptable AR), the expected BLD of the MOUS is calculated. 
\item[2 - Find best-matching ALMA configuration and observation hour angle:] Based on a library of BLDs created from detailed simulations of observations with different ALMA array configurations (see the description below; this is different from what is described in section \ref{sec:simulation}), find the optimal ALMA configuration C$_{\rm opt}$ and observation hour angle HA$_{\rm opt}$ which achieve the highest total filling fraction $f_t$. This is illustrated by Fig. \ref{fig:SchedFill1}. \item[3 - Observe first/next EB:] When the array is moved into C$_{\rm opt}$ and weather conditions are appropriate, schedule the SB for execution at HA$_{\rm opt}$. \item[4 - Compare observed BLD with expected BLD:] Using, e.g., the metric described in section \ref{sec:fillfrac}, determine whether the latest EB already fulfills the expectation for the MOUS. If it is "pass", declare the observation complete. When applying the metric, iterate over the whole range of expected BLDs, from the smallest AR value up to the largest. \item[5 - Determine the remaining necessary observations, i.e. new expected BLD:] In case step (4) results in a fail, i.e. the latest EB did not complete the observation, {\it subtract} the BLD of the latest EB from the expected BLD for the MOUS and declare the remainder the new expected BLD. \item[6 - Re-iterate from step 2:] With the new expected BLD go back to step (2) and determine a new best configuration and HA. Repeat the loop until the observation is declared complete in step (4) or the observing period for the given SB has ended. \end{description} \noindent The library of simulated BLDs needed in step (2) of the algorithm is obtained by running a simulator for the generation of ALMA visibility data for a complete range of observatory configurations (presently ALMA has ten nominal configurations numbered 1 to 10), target declinations (DECs in steps of a few degrees), and possible observation hour angles (HAs). In our study, a prototype of this library was created from a suite of 900 simulations with the CASA\cite{2019arXiv191209437E} task {\tt simobserve} for each of the 10 ALMA configurations on a grid of Declination (in steps of 10 degrees) and hour angle (steps of 1 h above elevations of 30 degrees centered on culmination). Figure \ref{fig:SchedFill1} shows the result of the process in step (2) of the above algorithm for a particular example. Even more accurate scheduling could be achieved by complementing the library of simulated BLDs with simulations of the {\it hybrid} configurations which the array assumes {\it in between} the nominal configurations (since only a few antennas can be moved per day). During real-time scheduling, the actually available antenna array could be considered. \begin{figure} [htb] \begin{center} \includegraphics[height=5cm]{bld-generic.png} \end{center} \caption[] { \label{fig:schedadvantage} A generic example of one of the advantages of our new scheduling approach: When a first observation does not fully achieve the expected sensitivity at all baseline lengths, standard scheduling of a repeat observation would drastically over-observe those baselines which are already well-observed. The new approach determines more accurately the missing observation. } \end{figure} Figure \ref{fig:schedadvantage} summarizes the main advantage of the iterative scheduling approach w.r.t. observing efficiency. 
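
The loop of steps (1) to (6) can be summarised in a structural sketch; the library interface and the mocked EB execution are assumptions, and \texttt{passes\_metric} stands for a criterion such as the one of section \ref{sec:fillfrac}.

\begin{verbatim}
# Structural sketch of the iterative, BLD-driven scheduling loop.
import numpy as np

def schedule_mous(expected, bld_library, passes_metric, max_ebs=10):
    remaining = np.asarray(expected, dtype=float).copy()
    observed_total = np.zeros_like(remaining)
    for _ in range(max_ebs):
        if remaining.sum() == 0:
            break
        # step 2: configuration/HA with the highest total filling fraction
        # against the remaining expectation (capped so that over-observing
        # one bin is not rewarded; a design choice of this sketch)
        key = max(bld_library, key=lambda k: np.minimum(
            bld_library[k], remaining).sum() / remaining.sum())
        observed = bld_library[key]   # step 3: observe one EB (mocked)
        observed_total += observed
        if passes_metric(observed_total, expected):        # step 4
            return observed_total, True
        remaining = np.maximum(expected - observed_total, 0.0)  # step 5
    return observed_total, False     # step 6: loop exhausted
\end{verbatim}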
Note that in reality, the image quality aspects implied by the condition on the BLD can be only one factor in the scheduling, together with others imposed by the rules of the observatory: \begin{itemize} \item If the observation of the SB has to be completed by a certain date, e.g. by the end of the observing cycle, the number of available configurations is steadily reduced as the cycle progresses until, near the end, there is only one left. In order to still pass the condition of the BLD metric, it may then be necessary to use a sub-optimal configuration and make up for this by increasing the observation time. This will result in an "over-exposure" at certain BL lengths (which can, however, be compensated for by offline reweighting, see section \ref{sec:reweight}). \item If one or more EBs have already been observed and the algorithm determines a new C$_{\rm opt}$ for the next EB which will only be arrived at much later in the cycle, it may be advisable to give the PI the choice of whether to indeed wait or to relax the requirements. Alternatively, the observatory can decide, based on quantitative assessment, to achieve the completion by investing additional observing time in a sub-optimal configuration. \end{itemize} \section{USING BASELINE LENGTH DISTRIBUTIONS IN QUALITY ASSURANCE} After the BLD-aware scheduling described in the previous section has declared an MOUS fully observed, it is the role of QA in ALMA to verify that the MOUS indeed achieves the science goals. The achieved RMS noise is directly measured in the image of the representative target in the representative spectral window. This process needs no further improvement. The achievement of the AR and LAS, however, can for the first time be assessed properly by comparing the observed BLD ({\it after} the complete flagging and calibration procedure) with the expectation, again using a metric like the one described in section \ref{sec:fillfrac}: By requiring the filling fraction to be at least 100\% in the uppermost part of the expectation range, we enforce the AR. And by requiring the filling fraction to be close to 100\% in the lowest part of the expectation range, we enforce the LAS. Requiring in addition that there be no BLD bins in the expectation range with low filling fraction (e.g. requiring $f_i > 0.85$) ensures that all intermediate angular scales have also been observed with adequate sensitivity. For MOUSs where the science goals do not require good image fidelity, we only assess the AR criterion via the upper 10\% of the BLD and treat the rest of the BLD with coarser binning, mostly verifying the overall sensitivity. In our development study we have started to test this approach on several hundred MOUSs from ALMA Cycles 6 and 7 (see examples of the diagnostic plots in Figures \ref{fig:fillfrac1example} and \ref{fig:bldobsandexp-example}). What we have found so far is that a large fraction of MOUSs do pass the new BLD-based condition. In fact, the intermediate to short BL lengths often seem to be over-observed. This may indicate a possibility to further optimise the ALMA scheduling. However, we are not yet ready to draw any final conclusions from the ongoing study. In any case, we can already design a new QA2 process for MOUSs which includes the BLD condition: \begin{enumerate} \item After calibration is completed, obtain the observed BLD for the representative target and spectral window from the calibrated data, using the corresponding WEIGHT values. 
\item Compare with the expected MOUS BLD using the agreed metric (e.g. the one described in section \ref{sec:fillfrac}). \item If the metric results in a "pass", the MOUS has passed the AR and LAS criteria. If it also passes all other QA criteria, it is archived and delivered to the PI. \item If the metric results in a fail, i.e. some BL length range is underexposed, we can use the same approach as in scheduling to determine the expected BLD of the {\it missing observation} to complete the MOUS. Essentially, the observed BLD after calibration is subtracted from the original expected BLD of the MOUS, and the difference histogram is the new expected BLD for a re-observation of the SB to be scheduled. \end{enumerate} \begin{figure} [ht] \begin{center} \includegraphics[height=5.5cm]{fillfrac1-example.png} \end{center} \caption[] {\label{fig:fillfrac1example} Example plots from our prototype QA application for an ALMA observation of 19 min duration using 43 antennas (which is equal to the expectation) in a compact configuration. Left: The observed BLD (blue) plotted on top of the two edge cases of the expected BLD, the one for the minimum AR value (2.4 arcsec in this case, green) and the one for the maximum AR value (3.6 arcsec, black). Right: The filling fraction $f$ of this observation w.r.t. the expectation for the maximum AR value. Since $f_i > 1$ in all bins, this observation would pass QA. The high values at both ends show that both the AR and the LAS condition are comfortably fulfilled. } \end{figure} \begin{figure} [ht] \begin{center} \includegraphics[height=5.5cm]{fig-obsandexp.png} \end{center} \caption[] {\label{fig:bldobsandexp-example} Another example of plots from our prototype QA application, here for an ALMA observation of 47 minutes duration using 44 antennas in a quite extended configuration at an unusually sub-optimal HA (elevation was around 37$^\circ$ when it could have been around 70$^\circ$ if the observation had been made around HA=0). Left: with the expectation BLDs computed using the simulation approach (section \ref{sec:simulation}). Right: with the expectation BLDs computed using the analytical approach (section \ref{sec:analytical}). The requested AR was between 0.019 arcsec and 0.030 arcsec, the LAS was 0.15 arcsec. In both cases, the AR and LAS requirement is met (for the maximum AR value) while there is some under-exposure in the intermediate BL length range, which would go unnoticed without inspecting the BLD in this way since the under-exposure is at least partially made up for by the over-exposure at the shortest baselines. } \end{figure} \section{REWEIGHTING VISIBILITIES TO THE IDEAL BASELINE LENGTH DISTRIBUTION SHAPE} \label{sec:reweight} It will never be possible to refine scheduling to the point where for each MOUS the observed BLD agrees exactly with the expectation. This is because (a) the limited number of antennas does not come close to filling the aperture of the array, and (b) considering antenna maintenance and repair tasks, and practicalities during relocation, optimal arrays cannot be provided at all times. In order to ensure a high overall efficiency of the observatory, individual MOUSs can only be made to meet the science goals to a good {\it approximation}. However, as long as the observatory errs on the side of exceeding the requested sensitivity, it is possible to correct the {\it shape} of the observed BLD in offline processing. 
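
As a concrete illustration of this correction, formalized in the following paragraphs, a minimal sketch of the per-bin downweighting is given below; it covers both the strict shape-matching rule and the compromise variant that only downweights over-observed bins.

\begin{verbatim}
# Per-bin reweighting factors from the filling fractions f_i: either match
# the expected shape exactly (w_i = f_min / f_i) or, as a compromise, only
# downweight over-observed bins (f_i > 1). The factors would then be
# applied per baseline length bin to the WEIGHT column before imaging.
import numpy as np

def reweight_factors(f, compromise=True):
    f = np.asarray(f, dtype=float)
    if compromise:
        return np.where(f > 1.0, 1.0 / f, 1.0)
    return f.min() / f
\end{verbatim}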
\begin{figure} [b] \begin{center} \includegraphics[height=5.5cm]{reweight-example.png} \end{center} \caption[] { \label{fig:reweightexample} Two examples of the effect of the reweighting of visibilities according to the procedure described in section \ref{sec:reweight}. An observed BLD (blue) is compared to its expectation (green) and for each bin a reweighting factor $\leq 1$ is determined and applied to the weights in the corresponding MeasurementSet. The BLD of the observation after reweighting is shown in red. Left: Here the BLD is partially over-exposed and partially under-exposed. The reweighting brings the formerly over-exposed bins down to the level of the expectation while the under-exposed bins are left unchanged. This is a compromise between ideal BLD shape and overall sensitivity. Right: Here the expectation is met in essentially all bins. The reweighting can achieve a good match with the expectation.} \end{figure} The ideal shape of the observed BLD is the expected BLD as derived according to the methods described in section \ref{sec:expect}. To give the final images the optimal fidelity on all accessible angular scales, the visibilities need to be re-weighted per baseline length bin such that the reweighted observed BLD has the same shape as the expected BLD. Only reweighting factors $\leq 1$ are permitted (we cannot artificially increase sensitivity in individual BL length bins). So the reweighting procedure can only downweight, and the improved image fidelity comes at the price of a loss of sensitivity in the downweighted bins; it therefore only makes sense where image fidelity is regarded as most important (see also reference \citenum{boone2013}). The reweighting factor $w_i$ to be applied for BL length bin $i$ is computed as follows: $$ w_i = f_{\rm min}/f_i $$ where $f_i$ is again the filling fraction and $f_{\rm min}$ is its minimum value across all bins. As a compromise between optimal sensitivity and optimal image fidelity, one can modify the above definition of the $w_i$ by only downweighting those bins where $f_i > 1$, i.e. those bins which have been over-observed. Then one has to compute the $w_i$ as $$ w_i = \begin{cases} 1/f_i & (f_i > 1)\\ 1 & (f_i \leq 1) \end{cases} $$ Figure \ref{fig:reweightexample} shows an example of both cases. \section{CONCLUSIONS AND OUTLOOK} The advantages of our new approach to scheduling and QA will mostly come to bear where the relative sensitivity at different angular scales matters: these are mostly deeper exposures on extended objects which aim to properly measure the power spectrum of the image in the range accessible to the instrument. While single-EB observations will also get the best possible BLD (and thus the best image quality), the real advantage is realised for observations with multiple EBs, where we can better ensure that the uv coverage is close to optimal. In particular, the extension of our approach to 2D BLDs will permit QA to address beam ellipticity and holes in visibility space which cause beam sidelobes. But already working with 1D BLDs will improve image fidelity and minimise over-observation of individual angular scale ranges, thereby freeing the instrument for more projects. Figure~\ref{fig:schedadvantage} illustrates the concept. Standard scheduling, which just repeats the same observation until all requirements are met, can be quite wasteful. 
Being able to determine more exactly which BL lengths still lack exposure opens the door to a range of scheduling options, also including more exotic ideas like the use of {\it sub-arrays}: when, e.g., only exposure in a small range of BL lengths is missing, the observatory may construct an appropriate array from a smaller number of antennas during engineering time (when many antennas undergo maintenance), and use this array to execute the missing EB during a time period which otherwise would not have been available for science observations at all. Our study has now reached the mid-point and we are turning from the development of concepts, processes, and prototypes to their detailed evaluation based on datasets from the ALMA archive and on simulations. We hope to report our final results at the next SPIE Telescopes and Instrumentation conference after this one. For ALMA scheduling, as far as we can see now, the new approach would mean a modification in how to select the configuration of the first EB of an MOUS. For single-EB observations, that would already be all. Also, for SBs for which only image quality (i.e. low beam sidelobes) is relevant but not fidelity, the changes in scheduling procedure would end here. Only for multi-EB MOUSs which need good image fidelity over a larger range of scales would there be a change, in that we may switch to a different preferred configuration after the first EB(s) have been observed. Our study still needs to determine how often this would happen and how different the first selected configuration would typically be from the subsequent one. The point in these cases is not to make scheduling simpler but more efficient, i.e. to use less over-observation. The idea of using sub-arrays is only envisaged for exceptional cases where an MOUS needs a small number of baselines for completion. Here ALMA could indeed gain overall observing time by using a smaller array during engineering time when conditions permit this. The proposal is not to split up ALMA into sub-arrays but to use a subset of it while the rest is in maintenance or used for engineering tests. For the QA process, our approach means that for the first time the LAS condition is explicitly verified and the coverage of intermediate angular scales is assessed at all. The procedure used so far only verified the overall sensitivity and the sensitivity at the smallest angular scales. Furthermore, our approach will permit QA to be performed conveniently on {\it groups} of MOUSs (the so-called GOUSs in the ALMA scheduling system already mentioned in section \ref{sec:intro}). For the set of MOUSs in a given GOUS, a single expected BLD can be calculated and thus the combined data from the MOUSs assessed together as one observation. Finally, the reweighting scheme described in section \ref{sec:reweight} promises to help ALMA users achieve the optimal image fidelity both for individual MOUSs and for groups. As more large radio interferometer arrays start operations around the globe, we note again that our results are generally applicable to all arrays with more than a few antennas. \acknowledgments We would like to thank Carlos de Breuck (ESO) for his support as the contact person for this ESO ALMA internal development study.
{ "timestamp": "2020-12-17T02:19:58", "yymm": "2012", "arxiv_id": "2012.08993", "language": "en", "url": "https://arxiv.org/abs/2012.08993" }
\section*{Acknowledgements} \epigraph{``Perhaps we’ll have some answers, at least, before the end. I always dreamed of dying well-informed.''}{---Joe Abercrombie, \textit{Last Argument of Kings}} I would like to express my gratitude to my advisors Prof. \nolinebreak Ron M. Adin and Prof. Yuval Roichman for their continuous support of my Ph.D. study, for their patience and immense knowledge. Their encouragement and guidance helped me build the confidence and courage needed for the fail and retry iterations I went through during the research and writing of this thesis. I would also like to thank my friends and fellow researchers Dr. Menachem (Meny) Shlossberg and Dr. Arnon Netzer for their support and encouragement and for many helpful discussions and suggestions. Special thanks go to my parents Roger and Hava Ben-Ari for supporting me throughout the writing of this thesis. Last, but by no means least, I would like to thank Luie Jennings, my best friend and partner for life, for being a source of energy, motivation and inspiration; for being so supportive and understanding; but most of all, for always putting things in proportion, for her lifesaving sense of humor and her love. \newpage \linespread{1.4} \setcounter{tocdepth}{3} \tableofcontents \newpage \linespread{1.4} \setcounter{secnumdepth}{0} \section{Abstract} Flip graphs are graphs on combinatorial objects in which the adjacency relation reflects a local change in the underlying objects. In this thesis we introduce \textit{Yoke graphs}, a family of flip graphs that generalizes previously studied families of flip graphs on colored triangle-free triangulations \nolinebreak \cite{TFT1}, arc permutations \cite{elizalde} and geometric caterpillars \cite{yuval}. Our main results are the computation of the diameter of an arbitrary Yoke graph and a full characterization of the automorphism group of this family of graphs. We also show that Yoke graphs are Schreier graphs of the affine Weyl group of type $\tilde{C}_m$. The approach we take in the computation of the diameter is different from the ones in \cite{TFT1} and \cite{elizalde}. We show that the approach in \cite{elizalde} (for arc permutation graphs) does not extend to Yoke graphs. At the heart of our proof lies the idea of transforming a diameter evaluation into an eccentricity problem. The characterization of the automorphism group is a new result even for the above-mentioned three special families of Yoke graphs. \newpage \pagenumbering{arabic} \setcounter{secnumdepth}{3} \chapter{Introduction} Over recent decades, there has been an increasing interest in flip graphs, namely, graphs on combinatorial objects in which the adjacency relation reflects a local change. For example, see \cite{bose, fabila, parlier, pournin, tarjan}. In this work we introduce \textit{Yoke graphs}, a family of flip graphs that generalizes previously studied families of flip graphs on colored triangle-free triangulations (CTFT) \cite{TFT1}, arc permutations \cite{elizalde} and geometric caterpillars \cite{yuval}. Typical problems related to flip graphs include metric properties, such as distance, diameter, finding antipodes and counting geodesics between them; and algebraic properties, such as presentations using Cayley/Schreier graphs, automorphism groups and eigenvalues. The generalization to Yoke graphs of the three families of flip graphs mentioned above is motivated by their surprising similarities in terms of algebraic, combinatorial and metric properties. 
In particular, they carry similar group actions, are intimately related to posets and have similar diameter formulas. \section{Main Results} In this thesis, as mentioned above, we introduce a family of flip graphs, namely, Yoke graphs $\Ynm$. We compute the diameter of Yoke graphs and prove (see Theorems \ref{thm:ynm_ecc}, \ref{thm:ecc_eq_diam} and \ref{thm:ecc_znm}): \begin{theorem*} \begin{enumerate} \item[] \item If $n=1$, then $\diam(\Ynm)=\binom{\lceil \frac{m}{2}\rceil + 1}{2} + \binom{\lfloor \frac{m}{2}\rfloor + 1}{2}$. \item If $0\leq m\leq n$, then $\diam(\Ynm) = \lfloor\frac{n(m+1)}{2}\rfloor$. \item If $2\leq n\leq m$, let $\dz = \binom{\lfloor\frac{m+n}{2}\rfloor+1}{2} + \binom{\lceil\frac{m-n}{2}\rceil+1}{2}$. Then \begin{enumerate} \item if either $2\divides (m-n)$ or $n\leq\lceil\frac{m+1}{2}\rceil$, then $\diam(\Ynm) = \dz$; \item otherwise, $\diam(\Ynm) = \dz + n-\lceil\frac{m+1}{2}\rceil$. \end{enumerate} \end{enumerate} \end{theorem*} We also give a full characterization of the automorphism group of this family of graphs (for all values of $n$ and $m\neq 2$). For example (see Theorems \ref{thm:anm_structure} and \ref{thm:aut_structure} for the complete result): \begin{theorem*} Let $n\geq 1$ and $m\geq 3$ be two integers such that $(n,m)\neq(1, 3)$. \begin{enumerate} \item $\aut(\Ynm[1][m]) \cong C_2\times C_2$ ($\forall m>3$). \item If $n>1$ and at least one of $n$ and $m$ is odd, then $\aut(\Ynm) \cong D_{2n}$. \item If $n>1$ and both $n$ and $m$ are even, then $\aut(\Ynm) \cong D_{n}\times C_2$. \end{enumerate} \end{theorem*} Since Yoke graphs generalize the three families of flip graphs mentioned above, these results provide a unified proof for known results regarding \linebreak diameter. Our approach in this proof is different from the ones in \cite{TFT1} and \cite{elizalde}. At the heart of the proof lies the idea of transforming a diameter evaluation into an eccentricity problem. The characterization of the automorphism group is a new result for the mentioned three cases of Yoke graphs. \section{The Structure of the Thesis} In Chapter \ref{chp:background}, we recall and develop the theory that will be necessary for the presentation of our main results. Chapter \ref{chp:YokeGraphs} introduces Yoke graphs, the main object in this thesis. As noted in the introduction, the definition of Yoke graphs was motivated by similarities between CTFT graphs, arc permutation graphs and caterpillar graphs, in terms of their algebraic, combinatorial and metric properties. In Section \ref{sec:special_cases}, we show that these graphs are indeed special cases of Yoke graphs. In Section \ref{sec:group_actions}, the fact that CTFT graphs and arc permutation graphs are Schreier graphs of the affine Weyl group of type $\tilde{C}_{n}$ is extended to Yoke graphs. In Chapter \ref{chp:ecc_zero_Ynm}, we compute the eccentricity of the vertex $0$ in Yoke graphs. Chapter \ref{chp:diameter_of_Ynm} deals with the diameter of Yoke graphs. In Section \ref{sec:diameter_dominance}, we develop techniques and obtain partial results on the diameter, based on similarities between Yoke graphs and the dominance order on $\mathbb{Z}^n$. In Sections \ref{sec:dYokeGraphs}, \ref{sec:diam_ecc} and \ref{sec:ecc_zero_Znm}, we show that the problem of the diameter in Yoke graphs can be converted into a problem of eccentricity in larger graphs, namely, \textit{dYoke graphs}. 
The terminology and techniques that were introduced in Chapter \ref{chp:ecc_zero_Ynm} are extended and used in order to compute the eccentricity of $0$ in dYoke graphs. Chapter \ref{chp:automorphisms} deals with the automorphism group of Yoke graphs. In Section \ref{sec:fundamental_auto}, three fundamental automorphisms of $\Ynm$ are introduced and we compute the group they generate. In Section \ref{sec:complete_auto_group}, we show that this group is in fact the entire automorphism group of $\Ynm$ whenever $m\neq 2$ and $(n,m)\neq (1,3)$. We cover the case $(n,m)=(1,3)$ separately. \chapter{Mathematical Background}\label{chp:background} In this chapter, we recall and develop the theory that will be necessary for the presentation of our main results. \section{Graphs} A \textbf{graph} $G$ on a \textbf{vertex} set $V$ is an ordered pair $(V, E)$ where $E\subseteq V\times V$ is a binary relation on $V$. $E$ is called the \textbf{adjacency} relation of $G$ and its elements are called \textbf{edges}. Accordingly, if $(u, v)\in E$, then $u$ and $v$ are said to be \textbf{adjacent} vertices or \textbf{neighbors} in $G$. An edge $(u,u)\in E$ is called a \textbf{loop}. If the relation $E$ is symmetric, then we say that $G$ is an \textbf{undirected graph} and denote adjacent vertices $u$ and $v$ by $u\sim v$. Otherwise, we say that $G$ is a \textbf{directed graph} or a \textbf{digraph}. A \textbf{simple graph} $G$ is an undirected graph without loops. In the rest of this work, the term \textit{graph} denotes a simple graph on a finite set of vertices (directed and infinite graphs will be explicitly stated as such). A graph $G'=(V',E')$ is a \textbf{subgraph} of $G=(V,E)$, denoted $G'\leq G$, if $V'\subseteq V$ and $E'\subseteq E$. A subgraph $G'$ is said to be an \textbf{induced subgraph} of $G$ if every edge $e$ in $G$ such that $e\in V'\times V'$ is an edge in $G'$. A \textbf{complete graph} is one in which all of the vertices are adjacent. The complete graph on $n$ vertices is denoted by $K_n$. A \textbf{path} $P=(v^0\sim\dots \sim v^d)$ between $v^0$ and $v^d$ in a (simple) graph $G$ is a sequence $v^0, \dots, v^d$ of vertices of $G$ such that $v^{i-1}\sim v^i$ for every $1\leq i\leq d$. The path is a \textbf{cycle} if $v^0=v^d$. Here, $d$ is called the \textbf{length} of $P$. $G$ is said to be \textbf{connected} if every two vertices in $G$ are connected by a path. A \textbf{geodesic} (or \textbf{shortest path}) between two vertices $u$ and $v$ in $G$ is a path with minimum length between them. The \textbf{distance} $d_G(v,u)$ between $v$ and $u$ is the length of a geodesic between them. We write $d(v,u)$ when $G$ is evident. The \textbf{diameter} $\diam(G)$ of $G$ is the maximal distance between any two vertices in $G$. Two vertices in $G$ are said to be \textbf{antipodes} if the distance between them is equal to the diameter of the graph. The \textbf{eccentricity} $\ecc_G(v)$ of a vertex $v$ in a graph $G$ is the maximal distance of the vertex from any other vertex in $G$. We write $\ecc(v)$ when $G$ is evident. A vertex $u$ is said to be \textbf{eccentric} to the vertex $v$ if $d(v,u)=\ecc(v)$. The \textbf{valency} $\delta_G(v)$ of a vertex $v$ in a graph $G$ is the number of neighbors of $v$ in $G$. An \textbf{isomorphism} of graphs $G$ and $H$ is a bijection $\varphi$ between the vertex sets of $G$ and $H$ such that any two vertices $u$ and $v$ of $G$ are adjacent in $G$ if and only if $\varphi(u)$ and $\varphi(v)$ are adjacent in $H$. 
An isomorphism of a graph with itself is called an \textbf{automorphism}. The set of automorphisms of a graph $G$, with respect to composition, forms a subgroup $\aut(G)$ of the symmetric group on $n$ elements, where $n$ is the number of vertices in the graph. We call this group the \textbf{automorphism group} of the graph. Throughout this work, composition of functions is computed right-to-left: $(gf)(x)=g(f(x))$. \subsection{Cayley and Schreier Graphs} Let $G$ be a group and let $S$ be a symmetric ($S^{-1}=S$) generating set of $G$. The (right) {\bf Cayley graph} $X(G,S)$ is the graph on the set of elements of $G$ in which $g_1\sim g_2$ if $g_2 = g_1 s$ for some $s$ in $S$. Let $H$ be a subgroup of $G$. The (right) {\bf Schreier coset graph} (also \textbf{Schreier graph} or \textbf{coset graph}) $X(G/H,S)$ is the graph on the set of (right) cosets $\{Hg:g\in G\}$ of $H$, in which $Hg_1 \sim Hg_2$ if $Hg_2=(Hg_1)s$ for some $s$ in $S$. Note that the Cayley graph $X(G,S)$ is a special case of a Schreier graph $X(G/H,S)$ for the trivial subgroup $H=\{e\}$. \begin{remark}\label{rem:shcreier_graph_iso} Assume that a group $G$ acts transitively on the vertices of a graph $Y$ such that the edges of $Y$ correspond to the action of a symmetric generating set $S$ of $G$. In this case, $Y$ is isomorphic to the Schreier graph $X(G/\st(v),S)$ for every $v$ in $Y$ ($\st(v)$ denotes the stabilizer of a vertex $v$ in $Y$). \end{remark} \subsection{Flip Graphs}\label{subsec:background_flip_graphs} This thesis was motivated by three previously studied families of flip graphs on colored triangle-free triangulations (CTFT) \cite{TFT1}, arc permutations \cite{elizalde} and geometric caterpillars \cite{yuval}. In this subsection we introduce these three families. \noindent {\large Colored Triangle-Free Triangulation Graphs\par} All definitions and claims in this subsection, related to CTFT graphs, can be found in \cite{TFT1}. The flip graph of triangulations of a convex polygon \cite{tarjan} inspired the definition of a few flip graphs on subsets of triangulations. One such graph is the CTFT graph. A {\bf triangulation} of a convex polygon in the plane is its subdivision into triangles using non-intersecting diagonals. An edge of a triangulation that belongs to the polygon is called an {\bf external edge}. Every other edge is called a {\bf chord}. Every triangulation of a convex $n$-gon ($n\geq 3$) has exactly $n-2$ triangles and $n-3$ chords. We denote by $P_{n}$ a convex polygon with $n\geq 5$ vertices labeled (counterclockwise) by the elements of the additive cyclic group $\mathbb{Z}_n$. A triangulation is called {\bf inner-triangle-free}, or simply {\bf triangle-free}, if it contains no triangle with $3$ chords. A chord which forms a triangle with two external edges is called a {\bf short chord} and the triangle formed by these three edges is called an {\bf outer triangle}. A triangulation of $P_{n}$ is triangle-free if and only if it contains exactly two outer triangles. Moreover, every triangle-free triangulation $T$ of $P_{n}$ induces two opposite linear orders on the chords of $T$ in which the two short chords are the minimum and maximum elements of the order (see \cite{TFT1} for more details). A {\bf coloring}, or {\bf orientation}, of a triangle-free triangulation $T$ of $P_n$ is a labeling of its chords by $\{0,...,n-4\}$ in one of the two linear orders induced by $T$ on its chords. Denote the set of all colored triangle-free triangulations of $P_n$ by $CTFT(n)$. 
Each chord in a triangulation of $P_n$ is a diagonal of a unique quadrilateral (the union of two adjacent triangles). Replacing a chord in a triangulation by the other diagonal of its corresponding quadrilateral is called a {\bf flip} of the chord (see Figure \ref{fig:ex_triang_flip}). \begin{figure}[hbt] \[ \begin{aligned} \begin{tikzpicture}[scale=0.5] \fill (2.4,3.4) circle (0.1) node[above]{\tiny 6}; \fill (3.4,2.4) circle (0.1) node[right]{\tiny 5}; \fill (3.4,1) circle (0.1) node[right]{\tiny 4}; \fill (2.4,0) circle (0.1) node[below]{\tiny 3}; \fill (1,0) circle (0.1) node[below]{\tiny 2}; \fill (0,1) circle (0.1) node[left]{\tiny 1}; \fill (0,2.4) circle (0.1) node[left]{\tiny 0}; \fill (1,3.4) circle (0.1) node[above]{\tiny 7}; \draw (1,3.4)--(2.4,3.4)--(3.4,2.4)--(3.4,1)--(2.4,0)--(1,0)--(0,1)--(0,2.4)--(1,3.4); \draw (3.4,2.4)--(1,0); \draw (0,1)--(2.4,3.4); \draw[red] (2.4,3.4)--(1,0); \draw (3.4,2.4)--(2.4,0); \draw (0,2.4)--(2.4,3.4); \draw[blue,dotted] (3.4,2.4)--(0,1); \end{tikzpicture} \end{aligned} \ \ \ \longleftrightarrow \ \ \ \begin{aligned} \begin{tikzpicture}[scale=0.5] \fill (2.4,3.4) circle (0.1) node[above]{\tiny 6}; \fill (3.4,2.4) circle (0.1) node[right]{\tiny 5}; \fill (3.4,1) circle (0.1) node[right]{\tiny 4}; \fill (2.4,0) circle (0.1) node[below]{\tiny 3}; \fill (1,0) circle (0.1) node[below]{\tiny 2}; \fill (0,1) circle (0.1) node[left]{\tiny 1}; \fill (0,2.4) circle (0.1) node[left]{\tiny 0}; \fill (1,3.4) circle (0.1) node[above]{\tiny 7}; \draw (1,3.4)--(2.4,3.4)--(3.4,2.4)--(3.4,1)--(2.4,0)--(1,0)--(0,1)--(0,2.4)--(1,3.4); \draw (3.4,2.4)--(1,0); \draw (0,1)--(2.4,3.4); \draw (3.4,2.4)--(2.4,0); \draw (0,2.4)--(2.4,3.4); \draw[red,dotted] (2.4,3.4)--(1,0); \draw[blue] (3.4,2.4)--(0,1); \end{tikzpicture} \end{aligned} \] \caption{Flip of a chord} \label{fig:ex_triang_flip} \end{figure} The {\bf flip graph of colored triangle-free triangulations} $\Gamma_n$ is the graph on $CTFT(n)$ where two triangulations are connected by an edge if one is obtained from the other by a flip of a chord. For example, see $\Gamma_6$ in Figure \ref{fig:ex_gamma6}. This graph is closely related to a distinguished lower interval in the weak order on the affine Weyl group $\widetilde{C}_n$. The diameter of this flip graph was calculated using lattice properties of the order, see \cite[Theorem 5.1]{TFT1}. \begin{figure}[hbt] \[ \begin{aligned} \includegraphics[width=35mm]{img/tft} \end{aligned} \] \caption{$\Gamma_6$} \label{fig:ex_gamma6} \end{figure} \noindent {\large Arc Permutation Graphs\par} All definitions and claims relating to arc permutation graphs can be found in \cite{elizalde}. A permutation $\pi$ in the symmetric group $S_n$ is called an {\bf arc permutation} if every prefix (and suffix) in $\pi$ forms an interval in $\mathbb{Z}_n$, where the element $0$ in $\mathbb{Z}_n$ is represented by $n$ for convenience. For example, the permutation $3421576$ is an arc permutation in $S_7$ (see Figure \nolinebreak \ref{fig:ex_arc_permutation}) but $5643127$ is not. Denote the set of arc permutations in $S_n$ by $\mathcal{A}_n$. 
\newcommand{\circRad} {20pt} \newcommand{\circleScale} {0.7} \newcommand{\createNodes}[7] { \draw (0,0) circle (\circRad); \node[draw, fill=#1, minimum size=1pt,shape=circle, scale=\circleScale] (v0) at ({-(360/7)*0 + 90}:\circRad) {\tiny 1}; \node[draw, fill=#2, minimum size=1pt,shape=circle, scale=\circleScale] (v1) at ({-(360/7)*1 + 90}:\circRad) {\tiny 2}; \node[draw, fill=#3, minimum size=1pt,shape=circle, scale=\circleScale] (v2) at ({-(360/7)*2 + 90}:\circRad) {\tiny 3}; \node[draw, fill=#4, minimum size=1pt,shape=circle, scale=\circleScale] (v3) at ({-(360/7)*3 + 90}:\circRad) {\tiny 4}; \node[draw, fill=#5, minimum size=1pt,shape=circle, scale=\circleScale] (v4) at ({-(360/7)*4 + 90}:\circRad) {\tiny 5}; \node[draw, fill=#6, minimum size=1pt,shape=circle, scale=\circleScale] (v5) at ({-(360/7)*5 + 90}:\circRad) {\tiny 6}; \node[draw, fill=#7, minimum size=1pt,shape=circle, scale=\circleScale] (v6) at ({-(360/7)*6 + 90}:\circRad) {\tiny 7}; } \begin{figure}[hbt] \[ \begin{aligned} \begin{tabular}{|c|c|c|} \hline \begin{tikzpicture} \createNodes{white}{white}{cyan}{white}{white}{white}{white} \end{tikzpicture} & \begin{tikzpicture} \createNodes{white}{white}{red}{cyan}{white}{white}{white} \end{tikzpicture} & \begin{tikzpicture} \createNodes{white}{cyan}{red}{red}{white}{white}{white} \end{tikzpicture}\\ \hline \begin{tikzpicture} \createNodes{cyan}{red}{red}{red}{white}{white}{white} \end{tikzpicture} & \begin{tikzpicture} \createNodes{red}{red}{red}{red}{cyan}{white}{white} \end{tikzpicture} & \begin{tikzpicture} \createNodes{red}{red}{red}{red}{red}{white}{cyan} \end{tikzpicture} \\ \hline \end{tabular} \end{aligned} \] \caption{$\pi=3421576$ is an arc permutation in $\mathcal{A}_7$} \label{fig:ex_arc_permutation} \end{figure} The {\bf flip graph of arc permutations} $X_n$ is the graph on $\mathcal{A}_n$ in which two arc permutations $\pi$ and $\sigma$ in $\mathcal{A}_n$ are connected by an edge if \linebreak \mbox{$\pi = \sigma\circ(i,i+1)$} for some $1\leq i \leq n-1$, where $(i,i+1)$ is a transposition written in cycle notation. For example, see $X_4$ in Figure \ref{fig:ex_arc_permutation_graph}. A transposition $(i,i+1)$ is also called a simple reflection. Recall that the set of all simple reflections in $S_n$ is a generating set of $S_n$. An equivalent definition of $X_n$ is as follows: let $G$ be the Cayley graph associated with $(S_n,S)$, where $S$ is the set of simple reflections in $S_n$. Then $X_n$ is the subgraph of $G$ that is induced by \nolinebreak $\mathcal{A}_n$. 
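Continuing in the same spirit, the equivalent definition can be taken literally: the following sketch (again ours and purely illustrative; it repeats the prefix test from the previous sketch in compact form) induces the Cayley graph of $(S_n,S)$ on $\mathcal{A}_n$ and recovers the vertex and edge counts of $X_4$ visible in Figure \ref{fig:ex_arc_permutation_graph}.

\begin{verbatim}
from itertools import permutations

def is_arc(pi):
    """Compact form of the prefix test from the previous sketch."""
    n, pref = len(pi), set()
    for v in pi:
        pref.add(v % n)
        if sum((x + 1) % n not in pref for x in pref) > 1:
            return False
    return True

def arc_permutation_graph(n):
    """X_n as the subgraph of the Cayley graph X(S_n, S) induced
    by A_n; right multiplication by the simple reflection (i, i+1)
    swaps the entries in positions i and i+1."""
    vertices = [p for p in permutations(range(1, n + 1)) if is_arc(p)]
    swap = lambda p, i: p[:i] + (p[i + 1], p[i]) + p[i + 2:]
    edges = {frozenset({p, swap(p, i)})
             for p in vertices for i in range(n - 1)
             if is_arc(swap(p, i))}
    return vertices, edges

V, E = arc_permutation_graph(4)
print(len(V), len(E))  # 16 vertices (= 4 * 2**2) and 20 edges
\end{verbatim}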
\newcommand{\addVertex}[7]{ \filldraw[black] (#1:#2) circle(0.4pt); \draw (#1:#3) node[fill=white] {\scriptsize #4}; \draw (#7:#5) node {\tiny #6}; } \begin{figure}[hbt] \[ \begin{aligned} \begin{tikzpicture}[scale=1.5, cap=round, >=latex] \addVertex{{90+(360*0)/12}}{1cm}{1.11cm}{4321}{1.27cm}{$\psi=(3,0,0)$}{{90+(360*0)/12}} \addVertex{{90+(360*1)/12}}{1cm}{1.19cm}{3421}{1.39cm}{$\psi=(2,1,0)$}{{90+(360*1)/12+4}} \addVertex{{90+(360*2)/12}}{1cm}{1.20cm}{3241}{1.42cm}{$\psi=(2,0,1)$}{{90+(360*2)/12-2}} \addVertex{{(90+(360*3)/12)}}{1cm}{1.21cm}{3214}{1.39cm}{$\psi=(2,0,0)$}{{(90+(360*3)/12)+6}} \addVertex{{90+(360*4)/12}}{1cm}{1.21cm}{2314}{1.37cm}{$\psi=(1,1,0)$}{{90+(360*4)/12+4}} \addVertex{{90+(360*5)/12}}{1cm}{1.2cm}{2134}{1.38cm}{$\psi=(1,0,1)$}{{(90+(360*5)/12)}} \addVertex{{90+(360*6)/12}}{1cm}{1.2cm}{1234}{1.35cm}{$\psi=(0,1,1)$}{{90+(360*6)/12}} \addVertex{{90+(360*7)/12}}{1cm}{1.2cm}{1243}{1.38cm}{$\psi=(0,1,0)$}{{90+(360*7)/12}} \addVertex{{90+(360*8)/12}}{1cm}{1.21cm}{1423}{1.37cm}{$\psi=(0,0,1)$}{{90+(360*8)/12-4}} \addVertex{{90+(360*9)/12}}{1cm}{1.21cm}{1432}{1.39cm}{$\psi=(0,0,0)$}{{(90+(360*9)/12)-6}} \addVertex{{90+(360*10)/12}}{1cm}{1.21cm}{4132}{1.37cm}{$\psi=(3,1,0)$}{{90+(360*10)/12+4}} \addVertex{{90+(360*11)/12}}{1cm}{1.2cm}{4312}{1.43cm}{$\psi=(3,0,1)$}{{90+(360*11)/12-4}} \addVertex{{90+(360*0)/12}}{0.76cm}{0.66cm}{3412}{0.51cm}{$\psi=(2,1,1)$}{{90+(360*0)/12}} \addVertex{{(90+(360*3)/12)}}{0.76cm}{0.55cm}{2341}{0.46cm}{$\psi=(1,1,1)$}{{(90+(360*3)/12)+20}} \addVertex{{90+(360*6)/12}}{0.76cm}{0.66cm}{2143}{0.51cm}{$\psi=(1,0,0)$}{{90+(360*6)/12}} \addVertex{{90+(360*9)/12}}{0.76cm}{0.45cm}{4123}{0.46cm}{$\psi=(3,1,1)$}{{(90+(360*9)/12)-20}} \draw[thick] ({90+(360*0)/12}:1cm) -- ({90+(360*1)/12}:1cm); \draw[thick] ({90+(360*1)/12}:1cm) -- ({90+(360*2)/12}:1cm); \draw[thick] ({90+(360*2)/12}:1cm) -- ({90+(360*3)/12}:1cm); \draw[thick] ({90+(360*3)/12}:1cm) -- ({90+(360*4)/12}:1cm); \draw[thick] ({90+(360*4)/12}:1cm) -- ({90+(360*5)/12}:1cm); \draw[thick] ({90+(360*5)/12}:1cm) -- ({90+(360*6)/12}:1cm); \draw[thick] ({90+(360*6)/12}:1cm) -- ({90+(360*7)/12}:1cm); \draw[thick] ({90+(360*7)/12}:1cm) -- ({90+(360*8)/12}:1cm); \draw[thick] ({90+(360*8)/12}:1cm) -- ({90+(360*9)/12}:1cm); \draw[thick] ({90+(360*9)/12}:1cm) -- ({90+(360*10)/12}:1cm); \draw[thick] ({90+(360*10)/12}:1cm) -- ({90+(360*11)/12}:1cm); \draw[thick] ({90+(360*11)/12}:1cm) -- ({90+(360*0)/12}:1cm); \draw[thick] ({90+(360*1)/12}:1cm) -- ({90+(360*0)/12}:0.76cm); \draw[thick] ({90+(360*11)/12}:1cm) -- ({90+(360*0)/12}:0.76cm); \draw[thick] ({90+(360*2)/12}:1cm) -- ({(90+(360*3)/12)}:0.76cm); \draw[thick] ({90+(360*4)/12}:1cm) -- ({(90+(360*3)/12)}:0.76cm); \draw[thick] ({90+(360*5)/12}:1cm) -- ({90+(360*6)/12}:0.76cm); \draw[thick] ({90+(360*7)/12}:1cm) -- ({90+(360*6)/12}:0.76cm); \draw[thick] ({90+(360*8)/12}:1cm) -- ({90+(360*9)/12}:0.76cm); \draw[thick] ({90+(360*10)/12}:1cm) -- ({90+(360*9)/12}:0.76cm); \end{tikzpicture} \end{aligned} \] \caption{$X_4$} \label{fig:ex_arc_permutation_graph} \end{figure} The diameter of the graph of arc permutations was computed using similarities between the graph and the dominance order on $\mathbb{Z}^n$, see \cite[Subsection \nolinebreak 6.2]{elizalde}. \noindent {\large Geometric Caterpillar Graphs\par} To define a geometric caterpillar, start with the complete graph $K_n$, for $n\geq 3$, whose vertices are labeled by $\mathbb{Z}_n$. Embed $K_n$ in the plane such that its vertices are the vertices of a convex polygon (the labels increasing clockwise) and its edges are straight line segments.
Denote this geometric graph by $\hat{K}_n$. A \textbf{geometric caterpillar} (or caterpillar) of order $n$ is a non-crossing spanning tree of $\hat{K}_n$, such that the vertices that are not leaves in the tree form an interval in $\mathbb{Z}_n$. The \textbf{spine} of a caterpillar is its longest (simple) path whose edges are on the boundary of the polygon. Caterpillars are also called fishbones or combs and were studied by Keller, Khachatryan, Perles, Sagan, Wachs and others in various contexts, see, e.g., \cite{keller, perles, yuval2, yuval, martin}. The {\bf flip graph of caterpillars} $\mathcal{G}_n$ is the graph on the set of caterpillars of order $n$ in which two caterpillars are connected by an edge if one is obtained from the other by shifting an endpoint of an edge along the spine (for an example of two adjacent caterpillars in $\mathcal{G}_8$, see Figure \ref{fig:ex_cat_flip}). \begin{figure}[hbt] \[ \begin{aligned} \begin{tikzpicture}[scale=0.45] \fill (2.4,3.4) circle (0.1) node[above]{\tiny 0}; \fill (3.4,2.4) circle (0.1) node[right]{\tiny 1}; \fill (3.4,1) circle (0.1) node[right]{\tiny 2}; \fill (2.4,0) circle (0.1) node[below]{\tiny 3}; \fill (1,0) circle (0.1) node[below]{\tiny 4}; \fill (0,1) circle (0.1) node[left]{\tiny 5}; \fill (0,2.4) circle (0.1) node[left]{\tiny 6}; \fill (1,3.4) circle (0.1) node[above]{\tiny 7}; \draw (1,3.4)--(2.4,3.4)--(3.4,2.4)--(3.4,1); \draw (0,1)--(2.4,3.4); \draw[red] (2.4,3.4)--(1,0); \draw (3.4,2.4)--(2.4,0); \draw (0,2.4)--(2.4,3.4); \end{tikzpicture} \end{aligned} \ \ \ \longleftrightarrow \ \ \ \begin{aligned} \begin{tikzpicture}[scale=0.45] \fill (2.4,3.4) circle (0.1) node[above]{\tiny 0}; \fill (3.4,2.4) circle (0.1) node[right]{\tiny 1}; \fill (3.4,1) circle (0.1) node[right]{\tiny 2}; \fill (2.4,0) circle (0.1) node[below]{\tiny 3}; \fill (1,0) circle (0.1) node[below]{\tiny 4}; \fill (0,1) circle (0.1) node[left]{\tiny 5}; \fill (0,2.4) circle (0.1) node[left]{\tiny 6}; \fill (1,3.4) circle (0.1) node[above]{\tiny 7}; \draw (1,3.4)--(2.4,3.4)--(3.4,2.4)--(3.4,1); \draw[red,dotted] (2.4,3.4)--(1,0); \draw (3.4,2.4)--(2.4,0); \draw (0,2.4)--(2.4,3.4); \draw[blue,dotted] (3.4,2.4)--(1,0); \draw (0,1)--(2.4,3.4); \end{tikzpicture} \end{aligned} \ \ \ \longleftrightarrow \ \ \ \begin{aligned} \begin{tikzpicture}[scale=0.45] \fill (2.4,3.4) circle (0.1) node[above]{\tiny 0}; \fill (3.4,2.4) circle (0.1) node[right]{\tiny 1}; \fill (3.4,1) circle (0.1) node[right]{\tiny 2}; \fill (2.4,0) circle (0.1) node[below]{\tiny 3}; \fill (1,0) circle (0.1) node[below]{\tiny 4}; \fill (0,1) circle (0.1) node[left]{\tiny 5}; \fill (0,2.4) circle (0.1) node[left]{\tiny 6}; \fill (1,3.4) circle (0.1) node[above]{\tiny 7}; \draw (1,3.4)--(2.4,3.4)--(3.4,2.4)--(3.4,1); \draw (3.4,2.4)--(2.4,0); \draw (0,2.4)--(2.4,3.4); \draw[blue] (3.4,2.4)--(1,0); \draw (0,1)--(2.4,3.4); \end{tikzpicture} \end{aligned} \] \caption{Adjacent caterpillars in $\mathcal{G}_8$} \label{fig:ex_cat_flip} \end{figure} \section{Posets} A {\bf partial order} $\leq$ on a set $P$ is a binary relation on $P$ which is reflexive, transitive and antisymmetric. A set $P$ equipped with a partial order $\leq$ is called a {\bf partially ordered set} (or a {\bf poset}) and is denoted by $(P,\leq)$. If we refer to a set $P$ as a poset, then either the partial order is evident, or we assume that the partial order on $P$ is denoted by $\leq_P$. $Q$ is called an {\bf induced subposet} of $P$ if $Q\subseteq P$ and $x\leq_Q y$ if and only if $x\leq_P y$ for every $x,y\in Q$.
In this case, the order on $Q$ is called the {\bf induced order}. In the rest of this work, a {\bf subposet} means an induced subposet. A map $\varphi:P \rightarrow Q$ between two posets is called {\bf order preserving} if $x\leq_P y$ implies that $\varphi(x)\leq_Q \varphi(y)$. $\varphi$ is an \textbf{isomorphism} if it is surjective and $\varphi(x)\leq_Q \varphi(y)$ if and only if $x\leq_P y$ for every $x$ and $y$ in $P$. (Note that $\varphi(x)=\varphi(y)$ implies both $x\leq y$ and $y\leq x$, therefore $\varphi$ is injective.) In this case, $P$ and $Q$ are said to be \textbf{isomorphic} and we denote it by $P\cong Q$. Let $x$ and $y$ be two elements in $(P,\leq)$. If $x\leq y$ or $y\leq x$, then we say that $x$ and $y$ are {\bf comparable}. Otherwise, $x$ and $y$ are {\bf incomparable}. We denote by $x<y$ the case $x\leq y$ and $x\neq y$. A poset in which every two elements are comparable is called a \textbf{chain} or a \textbf{linearly ordered set}. A poset in which no two elements are comparable is called an {\bf antichain}. For convenience, we say that $C\subseteq P$ is a chain in $P$, if $C$ is a chain as a subposet of $P$. For every $x\leq y$ in a poset $P$, the {\bf interval} $[x,y]$ is the subposet \linebreak $\{z\in P:x\leq z \leq y\}$. If every interval in $P$ is finite, then $P$ is called a {\bf locally finite} poset. In this thesis, every poset is locally finite. A subposet $Q$ of $P$ is called {\bf convex} if for every $x,y\in Q$ the interval $[x,y]$ in $P$ is contained in $Q$. Let $x<y$ be two elements in a poset $P$. $y$ is said to {\bf cover} $x$, denoted by $x\lessdot y$, if the interval $[x,y]$ is the two-element set $\{x,y\}$. The {\bf covering relation} of $P$ is the relation $\lessdot$ on $P$, consisting of all such pairs in $P$. The \textbf{Hasse diagram} of the poset $(P,\leq)$ is the digraph $\mathscr{H}(P,\leq)$ on $P$ in which $(x,y)$ is an edge if and only if $x\lessdot y$. Throughout this thesis, by the Hasse diagram of a poset, we mean the underlying undirected graph of the Hasse diagram. A {\bf maximal} chain in $P$ is one not contained in any other chain in $P$. A {\bf maximum} chain in $P$ is one of maximum length among all chains in $P$. A chain $C$ is called \textbf{saturated} if there is no element $z\notin C$ with $x<z<y$ for some $x,y\in C$ such that $C\cup\{z\}$ is still a chain. The {\bf length} $\len(C)$ of a finite chain $C$ is equal to $|C|-1$. The length $\len(P)$ of a finite poset $P$ is the length of its maximum chains. The length of an interval $[x,y]$ is denoted by $\len(x,y)$. We say that a poset $P$ is a {\bf graded poset} if there exists an order preserving {\bf rank} function $\rho:P\rightarrow \mathbb{N}$ such that $x\lessdot y$ implies that $\rho(x)=\rho(y)-1$. Equivalently, a poset $P$ is graded if and only if for every $x\leq y$ in $P$ all the maximal chains in $[x,y]$ have the same length. In a graded poset with a rank function $\rho$, $\len(x,y)=\rho(y)-\rho(x)$ for every two elements $x\leq y$. Let $P_1,\dots, P_n$ be $n$ posets. The {\bf direct product} (or {\bf product}) order $\leq$ on $P_1\times\dots\times P_n$ is the order in which $(v_1,\dots,v_n) \leq (u_1,\dots,u_n)$ if $v_i\leq_{P_i} u_i$ for every $1\leq i\leq n$. When we write $P_1\times\dots\times P_n$, we mean the poset on $P_1\times\dots\times P_n$ equipped with the product order, unless stated otherwise.
Note that $(v_1,\dots,v_n) \lessdot (u_1,\dots,u_n)$ in $P_1\times\dots\times P_n$ if and only if $u_i$ covers $v_i$ in $P_i$ for some $1\leq i\leq n$ and $u_j=v_j$ for every other $1\leq j\leq n$. The poset obtained by a direct product of $(P,\leq)$ with itself $n$ times is denoted by $(P^n,\leq)$ or $P^n$. In this case, the meaning of $\leq$ should be clear depending on the elements being compared. \begin{example}\label{example:Z^n_is_graded} $(\mathbb{Z}^n,\leq)$ is a graded poset with a rank function $\rank_\leq(v)=\sum_{i=1}^{n}v_i$. The distance between any two elements $u=(u_1,\dots,u_n)$ and $v=(v_1,\dots,v_n)$ in its Hasse diagram $\mathscr{H}(\mathbb{Z}^n,\leq)$ is $\sum_{i=1}^{n}|u_i-v_i|$. \end{example} \subsection{Lattices} Let $P$ be a poset and let $S\subseteq P$. The {\bf join} of $S$ in $P$ is the least upper bound of $S$ (if it exists). The join $s\join t$ of two elements $s$ and $t$ in $P$ is the join of $\{s,t\}$ in $P$. The {\bf meet} of $S$ in $P$ is the greatest lower bound of $S$ (if it exists). The meet $s\meet t$ of two elements $s$ and $t$ in $P$ is the meet of $\{s,t\}$ in \nolinebreak $P$. A poset $L$ is called a {\bf lattice} if every two elements in $L$ have both a meet and a join. A subposet $S$ of a lattice $L$ is called a \textbf{sublattice} if $S$ is closed under the meet and join operations of $L$. For example, every interval $[u,v]$ in a lattice $L$ is a sublattice of $L$. If $L$ and $M$ are lattices, then so is $L \times M$ equipped with the product order. \begin{example}\label{example:Z^n_is_a_lattice} $(\mathbb{Z}^n,\leq)$ is a lattice with meet and join operations being the point-wise $\min$ and $\max$, respectively. \end{example} If $P$ is a poset that has a least element $\hat{0}$ and a greatest element $\hat{1}$ ($\hat{0}\leq p\leq \hat{1}$ for every $p$ in $P$), then we say that $P$ is {\bf bounded}. Note that every non-empty finite lattice is bounded (by taking the meet and join of all the elements). A lattice $L$ is said to be a \textbf{modular lattice} if $x\leq b$ implies that \mbox{$x\join(a\meet b)=(x\join a)\meet b$} for every $x,a,b\in L$. Clearly, every sublattice of a modular lattice is modular. \begin{example} $(\mathbb{Z}^n,\leq)$ is a modular lattice. \end{example} \begin{theorem}(\cite[Lemma 1.2]{TFT1})\label{thm:dist_modular} Let $L$ be a locally finite, modular lattice. Let $\rho$ be its rank function. Denote the Hasse diagram of $L$ by $H$. Then for every $s,t\in L$: $ d_H(s,t) = \rho(s\join t) - \rho(s\meet t).$ \end{theorem} \subsection{The Dominance Order on $\mathbb{Z}^n$} \begin{definition}\label{def:dom_order} The {\bf dominance order} $\trianglelefteq$ on $\mathbb{Z}^n$ is the order in which \linebreak $(v_1,\dots,v_n) \tri (u_1,\dots,u_n)$ if $\sum_{j=1}^i v_j\leq \sum_{j=1}^i u_j$ for every $1\leq i \leq n$. \end{definition} \begin{observation}\label{obs:domprodiso} $(\mathbb{Z}^n,\trianglelefteq)$ and $(\mathbb{Z}^n,\leq)$ are isomorphic. The following maps, $\chi$ and its inverse $\chi^{-1}$, are poset isomorphisms. $$ \begin{array}{llll} \chi: & (\mathbb{Z}^n,\trianglelefteq) & \longrightarrow & (\mathbb{Z}^n,\leq) \\ & (v_1,\dots,v_{n}) & \longmapsto & (v_1,v_1+v_2,\dots,v_1+\dots+v_{n}) \end{array} $$ $$ \begin{array}{llll} \chi^{-1}: & (\mathbb{Z}^n,\leq) & \longrightarrow & (\mathbb{Z}^n,\trianglelefteq)\\ & (v_1,\dots,v_{n}) & \longmapsto & (v_1,v_2-v_1,\dots,v_{n}-v_{n-1}). \end{array} $$ \end{observation} Let $E$ be the standard basis of $\mathbb{R}^n$ as a vector space and let $S=E\cup E^{-1}$, where $E^{-1}=\{-e:e\in E\}$.
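The maps $\chi$ and $\chi^{-1}$ are straightforward to sanity-check numerically. The following Python sketch (ours, purely illustrative) verifies on random vectors that they are mutually inverse, and computes the images $\chi^{-1}(E)$ of the standard basis, which reappear in Observation \ref{obs:domcover} below:

\begin{verbatim}
import random

def chi(v):
    """chi(v): the sequence of partial sums of v."""
    out, s = [], 0
    for x in v:
        s += x
        out.append(s)
    return tuple(out)

def chi_inv(u):
    """chi_inv(u): the sequence of consecutive differences of u."""
    return tuple(b - a for a, b in zip((0,) + u, u))

random.seed(0)
for _ in range(1000):
    v = tuple(random.randint(-9, 9) for _ in range(5))
    assert chi_inv(chi(v)) == v and chi(chi_inv(v)) == v

# Images of the standard basis vectors of Z^4 under chi_inv:
E = [tuple(int(i == j) for j in range(4)) for i in range(4)]
print([chi_inv(e) for e in E])
# [(1, -1, 0, 0), (0, 1, -1, 0), (0, 0, 1, -1), (0, 0, 0, 1)]
\end{verbatim}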
Note that the isomorphisms $\chi$ and $\chi^{-1}$ are also automorphisms of $\mathbb{Z}^n$ as an additive group. Therefore, they are determined by their action on the standard basis $E$. For example, for $n=3$ $$\chi(u) = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{array} \right) u $$ $$\chi^{-1}(u) = \left( \begin{array}{ccc} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{array} \right) u. $$ This implies that the generating set $\chi^{-1}(S)$ of $\mathbb{Z}^n$ describes the covering relation of $(\mathbb{Z}^n,\tri)$, since the generating set $S$ describes the covering relation of $(\mathbb{Z}^n,\leq)$. In other words, $\mathscr{H}(\mathbb{Z}^n,\tri)$ is the Cayley graph $X(\mathbb{Z}^n, \chi^{-1}(S))$, since the Hasse diagram $\mathscr{H}(\mathbb{Z}^n,\leq)$ is the Cayley graph $X(\mathbb{Z}^n, S)$. \begin{observation}\label{obs:domcover} The Hasse diagram $\mathscr{H}(\mathbb{Z}^n,\tri)$ is the Cayley graph $X(\mathbb{Z}^n, A)$ where $A=\{\pm(1,-1,0,\dots),\pm(0,1,-1,0,\dots),\dots,\pm(0,\dots,0,1)\}$. \end{observation} \begin{observation}\label{obs:domprop} Let $v=(v_1,\dots, v_n)$ and $u=(u_1,\dots, u_n)$ be two elements in $\mathbb{Z}^n$. Examples \ref{example:Z^n_is_graded}, \ref{example:Z^n_is_a_lattice} and Observation \ref{obs:domprodiso} imply the following properties. \begin{enumerate} \item $(\mathbb{Z}^n,\tri)$ is a graded poset with the following rank function: \begin{equation*} \rank_\tri(v) = \rank_{\leq}(\chi(v)) = \sum_{i=1}^{n}\sum_{j=1}^{i}v_j . \end{equation*} \item The distance between $u$ and $v$ in the Hasse diagram of $(\mathbb{Z}^n,\tri)$ is as follows: \begin{equation*} \begin{split} d_{\mathscr{H}(\mathbb{Z}^n,\tri)}(v,u) & = d_{\mathscr{H}(\mathbb{Z}^n,\leq)}(\chi(v),\chi(u)) \\ & = \sum_{i=1}^{n}|\chi(v)_{i}-\chi(u)_{i}| \\ & = \sum_{i=1}^{n}|\sum_{k=1}^{i}(v_k - u_k)|. \end{split} \end{equation*} \item $(\mathbb{Z}^n,\tri)$ is a lattice with the following meet and join operations: \begin{equation*} v\meet u =(\alpha_1,\dots,\alpha_{n}) \text{ where } \alpha_k=\min\{\sum\limits_{i=1}^{k}v_i,\sum\limits_{i=1}^{k}u_i\}-\min\{\sum\limits_{i=1}^{k-1}v_i,\sum\limits_{i=1}^{k-1}u_i\}; \end{equation*} \begin{equation*} v\join u =(\beta_1,\dots,\beta_{n}) \text{ where } \beta_k=\max\{\sum\limits_{i=1}^{k}v_i,\sum\limits_{i=1}^{k}u_i\}-\max\{\sum\limits_{i=1}^{k-1}v_i,\sum\limits_{i=1}^{k-1}u_i\}. \end{equation*} \end{enumerate} \end{observation} \chapter{The Yoke Graphs $\Ynm$}\label{chp:YokeGraphs} In this chapter, we introduce Yoke graphs, the main object of this thesis. After providing basic definitions in Section \ref{sec:yoke_graphs_defs}, we review three important instances in Section \ref{sec:special_cases} and discuss some group actions on these graphs in Section \ref{sec:group_actions}. \section{Basic Definitions}\label{sec:yoke_graphs_defs} For every non-negative integer $n$, denote the set $\{0,\dots, n-1\}$ by $P_n$. \begin{definition}\label{def:yoke_graph} Let $n\geq 1$ and $m\geq 0$ be two integers. The \textbf{Yoke graph} $\Ynm$ is a graph with vertices corresponding to all $v=(v_0,\dots,v_{m+1})$ in \mbox{$\mathbb{Z}_n\times P_2^m\times \mathbb{Z}_n$} such that $\sum_{i=0}^{m+1}v_i\equiv 0(\bmod n).$ Two vertices $u$ and $v$ are adjacent in $\Ynm$, denoted $u\sim v$, if there exists $0\leq i\leq m$ such that $u_j=v_j$ for every $j\notin\{i,i+1\}$ and one of the following two cases holds: either $u_i=v_i+1$ and $u_{i+1}=v_{i+1}-1$, or $u_i=v_i-1$ and $u_{i+1}=v_{i+1}+1$.
\end{definition} The name ``Yoke graph'' is derived from the shoulder yoke, a tool that can be used to carry two buckets. Based on this analogy, we refer to the entries $v_0$ and $v_{m+1}$ of a vertex $v$ in $\Ynm$ as the \textbf{left bucket} and the \textbf{right bucket} of $v$, respectively. \begin{convention}\label{conv:cosets_are_integers} By Definition \ref{def:yoke_graph}, the buckets of a vertex $v\in\Ynm$ are elements (cosets) in the quotient group $\mathbb{Z}_n$ of $\mathbb{Z}$. Throughout this thesis, a bucket is identified with its smallest non-negative representative in $\mathbb{Z}$. \end{convention} According to Convention \ref{conv:cosets_are_integers}, the sum $\sum_{i=0}^{m+1}v_i$ in Definition \ref{def:yoke_graph} is a non-negative integer. Note that a vertex $u$ in $\Ynm$ is determined by its first (or last) $m+1$ entries, since $\sum_{i=0}^{m+1}u_i\equiv 0(\bmod n)$. When convenient, we identify a vertex in $\Ynm$ by specifying its first (or last) $m+1$ entries. We denote the vertex $(0,\dots,0)\in\Ynm$ by $0$. In the first case of the adjacency relation in Definition \ref{def:yoke_graph}, where $u_i=v_i+1$ and $u_{i+1}=v_{i+1}-1$ for some $0\leq i\leq m$, we say that $u$ is obtained from $v$ by \textbf{shifting} a unit from entry $i+1$ to the \textbf{left}, and write $u=\overleftarrow{s}_{i}(v)$. In the second case, where $u_i=v_i-1$ and $u_{i+1}=v_{i+1}+1$, we say that $u$ is obtained from $v$ by \textbf{shifting} a unit from entry $i$ to the \textbf{right}, and write $u=\overrightarrow{s}_{i}(v)$. For an example of a Yoke graph, see $\Ynm[3][3]$ in Figure \ref{fig:Y33}. Observe that $\Ynm$ is a connected simple graph. \begin{figure}[hbt] \centering \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \draw (0,0) node (01110) [label=left:\scriptsize(01110)] {} -- ++(-120:2.0cm) node (01101) [label=left:\scriptsize(01101)] {} -- ++(-120:4.0cm) node (01011) [label=right:\scriptsize(01011)] {} -- ++(-120:4.0cm) node (00111) [label=left:\scriptsize(00111)] {} -- ++(-120:2.0cm) node (21111) [label=left:\scriptsize(21111)] {} -- ++(0:2.0cm) node (21102) [label={[shift={(-90:0.8)}]:\scriptsize(21102)}] {} -- ++(0:4.0cm) node (21012) [label={[shift={(90:-0.3)}]:\scriptsize(21012)}] {} -- ++(0:4.0cm) node (20112) [label={[shift={(-90:0.8)}]:\scriptsize(20112)}] {} -- ++(0:2.0cm) node (11112) [label=right:\scriptsize(11112)] {} -- ++(120:2.0cm) node (11100) [label=right:\scriptsize(11100)] {} -- ++(120:4.0cm) node (11010) [label=left:\scriptsize(11010)] {} -- ++(120:4.0cm) node (10110) [label=right:\scriptsize(10110)] {} -- (01110) {}; \draw (01101) -- ++(-60:2.0cm) node (10101) [label={[shift={(-90:0.8)}]:\scriptsize(10101)}] {} -- ++(0:4.0cm) node (11001) [label={[shift={(90:-0.3)}]:\scriptsize(11001)}] {} -- ++(0:2.0cm) node (20001) [label=right:\scriptsize(20001)] {} -- ++(-120:2.0cm) node (20010) [label=right:\scriptsize(20010)] {} -- ++(-120:4.0cm) node (20100) [label=left:\scriptsize(20100)] {} -- ++(-120:4.0cm) node (21000) [label=right:\scriptsize(21000)] {} -- ++(-120:2.0cm) node (00000) [label=right:\scriptsize 0] {} -- ++(120:2.0cm) node (00012) [label=left:\scriptsize(00012)] {} -- ++(120:4.0cm) node (00102) [label=right:\scriptsize(00102)] {} -- ++(120:4.0cm) node (01002) [label=left:\scriptsize(01002)] {} -- ++(120:2.0cm) node (10002) [label=left:\scriptsize(10002)] {} -- ++(0:2.0cm) node (10011) [label={[shift={(90:-0.3)}]:\scriptsize(10011)}] {} -- (10101) {}; \draw (10101) -- (10110); \draw (11001) --
(11010) -- (20010); \draw (11100) -- (20100) -- (20112); \draw (21000) -- (21012) -- (00012); \draw (21102) -- (00102) -- (00111); \draw (01002) -- (01011) -- (10011); \end{tikzpicture} \caption{$\Ynm[3][3]$} \label{fig:Y33} \end{figure} \pagebreak \begin{observation}[The case $m\leq 1$]\label{obs:small_m_is_trivial} The cases $\Ynm[1][0]$, $\Ynm[2][0]$ and $\Ynm[1][1]$ are graphs on at most two vertices (see Figure \ref{fig:small_yoke_graphs}). If $m=0$ and $n>2$, then $\Ynm$ is the cycle graph on $n$ vertices and if $m=1$ and $n>1$, then $\Ynm$ is the cycle graph on $2n$ vertices (see Figure \ref{fig:cycle_yoke_graphs}). \end{observation} \begin{figure}[hbt] \begin{center} \begin{minipage}{.3\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \draw (0,0) node (00) [label=left:\scriptsize(00)] {}; \end{tikzpicture} \end{minipage} \begin{minipage}{.3\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \draw (0,0) node (00) [label=left:\scriptsize(00)] {} -- ++(+90:2.0cm) node (11) [label=left:\scriptsize(11)] {}; \end{tikzpicture} \end{minipage} \begin{minipage}{.3\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \draw (0,0) node (000) [label=left:\scriptsize(000)] {} -- ++(+90:2.0cm) node (010) [label=left:\scriptsize(010)] {}; \end{tikzpicture} \end{minipage} \end{center} \caption{$\Ynm[1][0]$, $\Ynm[2][0]$ and $\Ynm[1][1]$} \label{fig:small_yoke_graphs} \end{figure} \begin{figure}[hbt] \begin{center} \begin{minipage}{.4\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \draw (0,0) node (00) [label=right:\scriptsize(00)] {} -- ++(60:2.0cm) node (51) [label=right:\scriptsize(51)] {} -- ++(120:2.0cm) node (42) [label=right:\scriptsize(42)] {} -- ++(180:2.0cm) node (33) [label=left:\scriptsize(33)] {} -- ++(240:2.0cm) node (24) [label=left:\scriptsize(24)] {} -- ++(300:2.0cm) node (15) [label=left:\scriptsize(15)] {}; \draw (15) -- (00); \end{tikzpicture} \end{minipage} \begin{minipage}{.4\textwidth} \centering \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \draw (0,0) node (00) [label=right:\scriptsize(000)] {} -- ++(60:2.0cm) node (51) [label=right:\scriptsize(210)] {} -- ++(120:2.0cm) node (42) [label=right:\scriptsize(201)] {} -- ++(180:2.0cm) node (33) [label=left:\scriptsize(111)] {} -- ++(240:2.0cm) node (24) [label=left:\scriptsize(102)] {} -- ++(300:2.0cm) node (15) [label=left:\scriptsize(012)] {}; \draw (15) -- (00); \end{tikzpicture} \end{minipage} \end{center} \caption{$\Ynm[6][0]$ and $\Ynm[3][1]$} \label{fig:cycle_yoke_graphs} \end{figure} \section{Important Examples of Yoke Graphs}\label{sec:special_cases} In this section, we show that the three flip graphs from Subsection \ref{subsec:background_flip_graphs} are in fact instances of Yoke graphs. For each of these graphs, we give an isomorphism with the appropriate instance of a Yoke graph. \noindent {\large Colored Triangle-Free Triangulation Graphs\par} Recall the flip graph of colored triangle-free triangulations $\Gamma_n$ from Subsection \ref{subsec:background_flip_graphs}.
In \cite[Definition 2.8]{TFT1}, a bijection between its vertex set $CTFT(n)$ and $\mathbb{Z}_n\times\{0,1\}^{n-4}$ is defined in order to calculate the cardinality of $CTFT(n)$. This map is now used to prove that the graphs $\Gamma_n$ and $\Ynm[n][n-4]$ are isomorphic. \begin{definition} Define the map $g: \Gamma_n \rightarrow \Ynm[n][n-4]$ as follows. Let $T\in \Gamma_n$. If the (short) chord labeled by $0$ in $T$ is $(a-1, a+1)$ for $a\in\mathbb{Z}_n$, then let $g(T)_0 = a$. Extend the definition of $g(T)_i$ iteratively for every $1\leq i\leq n-4$ as follows. First, note that if the chord labeled by $i-1$ in $T$ is $(k,t)$, then the chord labeled by $i$ is either $(k-1,t)$ or $(k,t+1)$. Now, let $g(T)_i$ be $0$ in the former case and $1$ in the latter. \end{definition} For example, assume that the edge $(0,6)$ in Figure \ref{fig:ex_triang_flip} is labeled by $0$ in $\Gamma_8$ (in both triangulations). Then these triangulations are mapped by $g$ from $\Gamma_8$ to $\Ynm[8][4]$. The left triangulation in the figure is mapped to $(7,1,1,0,1,6)$ and the right is mapped to $(7,1,0,1,1,6)$. \begin{observation} It is straightforward to verify that $g$ is a bijection and that a flip of the chord labeled by $i\in\{0,\dots, n-4\}$ in $T\in \Gamma_n$ corresponds to the case in the adjacency relation in $\Ynm[n][n-4]$, where a unit is shifted between the entries indexed by $i$ and $i+1$ of $g(T)$. This implies that $g$ is a graph isomorphism. \end{observation} Note that if the definition of $P_n$ were extended similarly to the case $n=4$, then the resulting graph $\Gamma_4$ would be a graph on $2$ vertices, whereas $\Ynm[4][0]$ is a graph on $4$ vertices. The definition of a coloring of a triangle-free triangulation can be modified (from how it appears in Subsection \ref{subsec:background_flip_graphs} and in \cite{TFT1}) such that the linear order is applied to the triangles instead of the chords. This would extend the correspondence between $\Gamma_n$ and $\Ynm[n][n-4]$ to the case of $\Gamma_4$. In this case, however, the flip action of the chord in $\Gamma_4$ is not involutive (in contrast to the case $n\geq 5$) and its definition should be refined to include ``direction'' of the flip. \noindent {\large Arc Permutation Graphs\par} Recall the flip graph of arc permutations $X_n$ from Subsection \ref{subsec:background_flip_graphs}. In \cite[Subsection 6.2]{elizalde}, a bijection between its vertex set $\mathcal{A}_n$ and $\mathbb{Z}_n\times\{0,1\}^{n-2}$ is defined in order to embed $X_n$ in the Hasse diagram of the dominance order on $\{0,\dots, n-1\}\times\{0,1\}^{n-2}$. This map is now used to prove that the graphs $X_n$ and $\Ynm[n][n-2]$ are isomorphic. \begin{definition} Define the map $f: X_n \rightarrow \Ynm[n][n-2]$ as follows. Let $\pi\in X_n$ and set $f(\pi)_0 = \pi(1)-1$. Extend the definition of $f(\pi)_i$ iteratively for every $1\leq i\leq n-2$ as follows. Let $I_i$ be the underlying set $\{\pi(1), \dots, \pi(i)\}$ of the interval formed by the prefix of $\pi$ up to $i$. First, note that either $(\pi(i+1)-1)\bmod n\in I_i$ or $(\pi(i+1)+1)\bmod n\in I_i$, since the interval $I_i$ must be extended by $\pi(i+1)$. Now, let $f(\pi)_i$ be $1$ in the former case and $0$ in the latter.
\end{definition} \begin{figure}[hbt] \begin{center} \begin{minipage}{.45\textwidth} \centering \begin{tikzpicture}[scale=1.5, cap=round, >=latex] \addVertex{{90+(360*0)/12}}{1cm}{1.11cm}{4321}{1.27cm}{$\psi=(3,0,0)$}{{90+(360*0)/12}} \addVertex{{90+(360*1)/12}}{1cm}{1.19cm}{3421}{1.39cm}{$\psi=(2,1,0)$}{{90+(360*1)/12+4}} \addVertex{{90+(360*2)/12}}{1cm}{1.20cm}{3241}{1.42cm}{$\psi=(2,0,1)$}{{90+(360*2)/12-2}} \addVertex{{(90+(360*3)/12)}}{1cm}{1.21cm}{3214}{1.39cm}{$\psi=(2,0,0)$}{{(90+(360*3)/12)+6}} \addVertex{{90+(360*4)/12}}{1cm}{1.21cm}{2314}{1.37cm}{$\psi=(1,1,0)$}{{90+(360*4)/12+4}} \addVertex{{90+(360*5)/12}}{1cm}{1.2cm}{2134}{1.38cm}{$\psi=(1,0,1)$}{{(90+(360*5)/12)}} \addVertex{{90+(360*6)/12}}{1cm}{1.2cm}{1234}{1.35cm}{$\psi=(0,1,1)$}{{90+(360*6)/12}} \addVertex{{90+(360*7)/12}}{1cm}{1.2cm}{1243}{1.38cm}{$\psi=(0,1,0)$}{{90+(360*7)/12}} \addVertex{{90+(360*8)/12}}{1cm}{1.21cm}{1423}{1.37cm}{$\psi=(0,0,1)$}{{90+(360*8)/12-4}} \addVertex{{90+(360*9)/12}}{1cm}{1.21cm}{1432}{1.39cm}{$\psi=(0,0,0)$}{{(90+(360*9)/12)-6}} \addVertex{{90+(360*10)/12}}{1cm}{1.21cm}{4132}{1.37cm}{$\psi=(3,1,0)$}{{90+(360*10)/12+4}} \addVertex{{90+(360*11)/12}}{1cm}{1.2cm}{4312}{1.43cm}{$\psi=(3,0,1)$}{{90+(360*11)/12-4}} \addVertex{{90+(360*0)/12}}{0.76cm}{0.66cm}{3412}{0.51cm}{$\psi=(2,1,1)$}{{90+(360*0)/12}} \addVertex{{(90+(360*3)/12)}}{0.76cm}{0.55cm}{2341}{0.46cm}{$\psi=(1,1,1)$}{{(90+(360*3)/12)+20}} \addVertex{{90+(360*6)/12}}{0.76cm}{0.66cm}{2143}{0.51cm}{$\psi=(1,0,0)$}{{90+(360*6)/12}} \addVertex{{90+(360*9)/12}}{0.76cm}{0.45cm}{4123}{0.46cm}{$\psi=(3,1,1)$}{{(90+(360*9)/12)-20}} \draw[thick] ({90+(360*0)/12}:1cm) -- ({90+(360*1)/12}:1cm); \draw[thick] ({90+(360*1)/12}:1cm) -- ({90+(360*2)/12}:1cm); \draw[thick] ({90+(360*2)/12}:1cm) -- ({90+(360*3)/12}:1cm); \draw[thick] ({90+(360*3)/12}:1cm) -- ({90+(360*4)/12}:1cm); \draw[thick] ({90+(360*4)/12}:1cm) -- ({90+(360*5)/12}:1cm); \draw[thick] ({90+(360*5)/12}:1cm) -- ({90+(360*6)/12}:1cm); \draw[thick] ({90+(360*6)/12}:1cm) -- ({90+(360*7)/12}:1cm); \draw[thick] ({90+(360*7)/12}:1cm) -- ({90+(360*8)/12}:1cm); \draw[thick] ({90+(360*8)/12}:1cm) -- ({90+(360*9)/12}:1cm); \draw[thick] ({90+(360*9)/12}:1cm) -- ({90+(360*10)/12}:1cm); \draw[thick] ({90+(360*10)/12}:1cm) -- ({90+(360*11)/12}:1cm); \draw[thick] ({90+(360*11)/12}:1cm) -- ({90+(360*0)/12}:1cm); \draw[thick] ({90+(360*1)/12}:1cm) -- ({90+(360*0)/12}:0.76cm); \draw[thick] ({90+(360*11)/12}:1cm) -- ({90+(360*0)/12}:0.76cm); \draw[thick] ({90+(360*2)/12}:1cm) -- ({(90+(360*3)/12)}:0.76cm); \draw[thick] ({90+(360*4)/12}:1cm) -- ({(90+(360*3)/12)}:0.76cm); \draw[thick] ({90+(360*5)/12}:1cm) -- ({90+(360*6)/12}:0.76cm); \draw[thick] ({90+(360*7)/12}:1cm) -- ({90+(360*6)/12}:0.76cm); \draw[thick] ({90+(360*8)/12}:1cm) -- ({90+(360*9)/12}:0.76cm); \draw[thick] ({90+(360*10)/12}:1cm) -- ({90+(360*9)/12}:0.76cm); \end{tikzpicture} \end{minipage}$\rightarrow$ \begin{minipage}{.45\textwidth} \centering \begin{tikzpicture}[scale=1.5, cap=round, >=latex] \addVertex{{90+(360*0)/12}}{1cm}{1.26cm}{(3,0,0,1)}{1.27cm}{$\psi=(3,0,0)$}{{90+(360*0)/12}} \addVertex{{90+(360*1)/12}}{1cm}{1.19cm}{(2,1,0,1)}{1.39cm}{$\psi=(2,1,0)$}{{90+(360*1)/12+4}} \addVertex{{90+(360*2)/12}}{1cm}{1.29cm}{(2,0,1,1)}{1.42cm}{$\psi=(2,0,1)$}{{90+(360*2)/12-2}} \addVertex{{(90+(360*3)/12)}}{1cm}{1.35cm}{(2,0,0,2)}{1.39cm}{$\psi=(2,0,0)$}{{(90+(360*3)/12)+6}} \addVertex{{90+(360*4)/12}}{1cm}{1.3cm}{(1,1,0,2)}{1.37cm}{$\psi=(1,1,0)$}{{90+(360*4)/12+4}} 
\addVertex{{90+(360*5)/12}}{1cm}{1.2cm}{(1,0,1,2)}{1.38cm}{$\psi=(1,0,1)$}{{(90+(360*5)/12)}} \addVertex{{90+(360*6)/12}}{1cm}{1.27cm}{(0,1,1,2)}{1.35cm}{$\psi=(0,1,1)$}{{90+(360*6)/12}} \addVertex{{90+(360*7)/12}}{1cm}{1.2cm}{(0,1,0,3)}{1.38cm}{$\psi=(0,1,0)$}{{90+(360*7)/12}} \addVertex{{90+(360*8)/12}}{1cm}{1.3cm}{(0,0,1,3)}{1.37cm}{$\psi=(0,0,1)$}{{90+(360*8)/12-4}} \addVertex{{90+(360*9)/12}}{1cm}{1.35cm}{(0,0,0,0)}{1.39cm}{$\psi=(0,0,0)$}{{(90+(360*9)/12)-6}} \addVertex{{90+(360*10)/12}}{1cm}{1.3cm}{(3,1,0,0)}{1.37cm}{$\psi=(3,1,0)$}{{90+(360*10)/12+4}} \addVertex{{90+(360*11)/12}}{1cm}{1.2cm}{(3,0,1,0)}{1.43cm}{$\psi=(3,0,1)$}{{90+(360*11)/12-4}} \addVertex{{90+(360*0)/12}}{0.76cm}{0.66cm}{(2,1,1,0)}{0.51cm}{$\psi=(2,1,1)$}{{90+(360*0)/12}} \addVertex{{(90+(360*3)/12)}}{0.76cm}{0.40cm}{(1,1,1,1)}{0.46cm}{$\psi=(1,1,1)$}{{(90+(360*3)/12)+20}} \addVertex{{90+(360*6)/12}}{0.76cm}{0.66cm}{(1,0,0,3)}{0.51cm}{$\psi=(1,0,0)$}{{90+(360*6)/12}} \addVertex{{90+(360*9)/12}}{0.76cm}{0.40cm}{(3,1,1,3)}{0.46cm}{$\psi=(3,1,1)$}{{(90+(360*9)/12)-20}} \draw[thick] ({90+(360*0)/12}:1cm) -- ({90+(360*1)/12}:1cm); \draw[thick] ({90+(360*1)/12}:1cm) -- ({90+(360*2)/12}:1cm); \draw[thick] ({90+(360*2)/12}:1cm) -- ({90+(360*3)/12}:1cm); \draw[thick] ({90+(360*3)/12}:1cm) -- ({90+(360*4)/12}:1cm); \draw[thick] ({90+(360*4)/12}:1cm) -- ({90+(360*5)/12}:1cm); \draw[thick] ({90+(360*5)/12}:1cm) -- ({90+(360*6)/12}:1cm); \draw[thick] ({90+(360*6)/12}:1cm) -- ({90+(360*7)/12}:1cm); \draw[thick] ({90+(360*7)/12}:1cm) -- ({90+(360*8)/12}:1cm); \draw[thick] ({90+(360*8)/12}:1cm) -- ({90+(360*9)/12}:1cm); \draw[thick] ({90+(360*9)/12}:1cm) -- ({90+(360*10)/12}:1cm); \draw[thick] ({90+(360*10)/12}:1cm) -- ({90+(360*11)/12}:1cm); \draw[thick] ({90+(360*11)/12}:1cm) -- ({90+(360*0)/12}:1cm); \draw[thick] ({90+(360*1)/12}:1cm) -- ({90+(360*0)/12}:0.76cm); \draw[thick] ({90+(360*11)/12}:1cm) -- ({90+(360*0)/12}:0.76cm); \draw[thick] ({90+(360*2)/12}:1cm) -- ({(90+(360*3)/12)}:0.76cm); \draw[thick] ({90+(360*4)/12}:1cm) -- ({(90+(360*3)/12)}:0.76cm); \draw[thick] ({90+(360*5)/12}:1cm) -- ({90+(360*6)/12}:0.76cm); \draw[thick] ({90+(360*7)/12}:1cm) -- ({90+(360*6)/12}:0.76cm); \draw[thick] ({90+(360*8)/12}:1cm) -- ({90+(360*9)/12}:0.76cm); \draw[thick] ({90+(360*10)/12}:1cm) -- ({90+(360*9)/12}:0.76cm); \end{tikzpicture} \end{minipage} \end{center} \caption{$f:X_4\rightarrow\Ynm[4][2]$} \label{fig:ex_arc_permutation_isomorphism} \end{figure} \begin{observation} It is straightforward to verify that $f$ is a bijection and that right multiplication of $\pi\in X_n$ by the transposition $(i,i+1)$ (for \mbox{$1\leq i\leq n-1$}) corresponds to the case in the adjacency relation in $\Ynm[n][n-2]$, where a unit is shifted between the entries indexed by $i-1$ and $i$ of $f(\pi)$. This implies that $f$ is a graph isomorphism. \end{observation} \noindent {\large Geometric Caterpillar Graphs\par} Recall the flip graph of caterpillars $\mathcal{G}_n$ from Subsection \ref{subsec:background_flip_graphs}. \begin{definition}\label{def:order_caterpillar} Let $C\in \mathcal{G}_n$ and let $I(C)=[a,a+1,\dots,a+k]\subseteq \mathbb{Z}_n$ be the interval induced by the spine of $C$ ($2\leq k$). Note that $a$ and $a+k$ are leaves of $C$. Define a complete ordering $S(C) = (s_0,s_1,\dots, s_{n-3})$ on $n-2$ vertices of $C$ as follows. Set $s_0=a$. Define the interval $I_0(C)$ to be $[a,a+1]\subset\mathbb{Z}_n$. Define $I_i(C)$ and $s_{i}$ iteratively for every $1\leq i\leq n-3$ as follows. Assume that $I_{i-1}(C)=[l,\dots,r]$. 
If $(l-1,r)$ is an edge in $C$, then $I_i(C)=[l-1,l,\dots,r]$ and $s_{i}=l-1$. Otherwise, $I_i(C)=[l,\dots,r,r+1]$ and $s_{i}=r+1$. \end{definition} Note that in the $i$-th iteration in Definition \ref{def:order_caterpillar}, if $I_{i-1}=[l,r]$ and $(l-1,r)$ is not an edge in $C$, then $C$ has no leaf connected to $r$ that is not in $I_{i-1}$. This implies that the iterations in the definition induce a sequence $S(C)$ of $n-2$ distinct vertices and $n-2$ intervals that monotonically increase in size. For example, let $C$ be the leftmost caterpillar in Figure \ref{fig:ex_cat_flip}. Then $S(C)=(7, 6, 5, 4, 1, 3)$, $I_0(C)=[7, 0]$, $I_1(C)=[6, 7, 0]$, $I_2(C)=[5, 6, 7, 0]$, $I_3(C)=[4, 5, 6, 7, 0]$, $I_4(C)=[4, 5, 6, 7, 0, 1]$ and $I_{5}(C)=[3, 4, 5, 6, 7, 0, 1]$. \begin{definition}\label{def:caterpillar_iso} Let $C\in \mathcal{G}_n$. Let $I_i$ (for $0\leq i\leq n-3$) and $S(C) = (s_0,s_1,\dots, s_{n-3})$ be as in Definition \ref{def:order_caterpillar}. Define the map $h: \mathcal{G}_n \rightarrow \Ynm[n][n-3]$ as follows. Set $h(C)_0 = s_0$. Extend the definition of $h(C)_i$ iteratively for every $1\leq i\leq n-3$ as follows. Assume that $I_{i-1}=[l,r]$. Note that either $s_{i}=l-1$ or $s_{i}=r+1$. Let $h(C)_i$ be $1$ in the former case and $0$ in the latter. \end{definition} For example, let $C\in \mathcal{G}_8$ be the leftmost caterpillar in Figure \ref{fig:ex_cat_flip}. Then $h$ maps $C$ to $\Ynm[8][5]$ and $h(C)=(7, 1, 1, 1, 0, 1, 5)$. \begin{observation} It is straightforward to verify that $h$ is a bijection and that shifting an edge incident with a leaf along the spine in $C\in \mathcal{G}_n$ corresponds to a unit shift between two entries indexed by $i$ and $i+1$ of $h(C)$ for some $0\leq i\leq n-3$. This implies that $h$ is a graph isomorphism. \end{observation} \section{Group Actions on $\Ynm$}\label{sec:group_actions} \begin{definition}\label{def:flip_action} Let $n\geq 1$ and $m\geq 1$. Define the set $\Snm=\{s_i:0\leq i\leq m\}$ where $s_i$ is the map $s_i:\Ynm\rightarrow\Ynm$ defined as follows: \[ s_0(v)= \begin{cases} \overleftarrow{s}_{0}(v),& \text{if } v_{1}=1\\ \overrightarrow{s}_{0}(v),& \text{if } v_{1}=0 \end{cases}, \; \; \; \; \; s_m(v)= \begin{cases} \overleftarrow{s}_{m}(v),& \text{if } v_{m}=0\\ \overrightarrow{s}_{m}(v),& \text{if } v_{m}=1 \end{cases} \] and for every $1\leq i\leq m-1$ \[ s_i(v)=(v_0,\dots,v_{i+1},v_{i},\dots,v_{m+1})= \begin{cases} \overleftarrow{s}_{i}(v),& \text{if } (v_i, v_{i+1})=(0,1)\\ \overrightarrow{s}_{i}(v),& \text{if } (v_i, v_{i+1})=(1,0)\\ v,& \text{if } v_i=v_{i+1} \end{cases}. \] \end{definition} \begin{observation} Every $s_i$ in $\Snm$ is an involutive permutation of the vertex set of $\Ynm$, and the maps $s_i$ are pairwise distinct. \end{observation} \begin{definition} Denote the group generated by $\Snm$ by $\Gnm$. \end{definition} $\Gnm$ is a subgroup of the symmetric group on the $n2^{m}$ vertices of $\Ynm$. Clearly, every edge $e$ in $\Ynm$ corresponds to the action of a unique generator $s_i\in \Snm$ on the endpoints of $e$. Therefore, $\Gnm$ defines a faithful group action on the vertices of $\Ynm$. Note that this action is transitive, since $\Ynm$ is connected. This, combined with Remark \ref{rem:shcreier_graph_iso}, implies the following observation. \begin{observation} Let $n\geq 1$ and $m\geq 1$. Then $\Ynm$ is the Schreier graph of $\Gnm$ with respect to the generating set $\Snm$ and the action of $\Gnm$ on $\Ynm$. \end{observation}
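Definition \ref{def:flip_action} is concrete enough to be tested by machine. The following Python sketch (ours, purely illustrative; vertices are encoded as integer tuples with the buckets reduced modulo $n$) implements the generators $s_i$ and verifies, on all vertices of $\Ynm[3][4]$, the relations that appear in the next proposition:

\begin{verbatim}
from itertools import product

def s(i, v, n, m):
    """The generator s_i of Definition def:flip_action."""
    v = list(v)
    if i == 0:
        d = -1 if v[1] == 1 else 1        # left shift iff v_1 = 1
        v[0] = (v[0] - d) % n
        v[1] += d
    elif i == m:
        d = 1 if v[m] == 0 else -1        # left shift iff v_m = 0
        v[m] += d
        v[m + 1] = (v[m + 1] - d) % n
    else:
        v[i], v[i + 1] = v[i + 1], v[i]   # swap the middle entries
    return tuple(v)

n, m = 3, 4
vertices = [(l,) + mid + ((-l - sum(mid)) % n,)
            for mid in product((0, 1), repeat=m) for l in range(n)]

def word(letters, v):
    for i in reversed(letters):           # compose right-to-left
        v = s(i, v, n, m)
    return v

for v in vertices:
    for i in range(m + 1):
        assert word([i, i], v) == v                  # involutions
        for j in range(i + 2, m + 1):
            assert word([i, j] * 2, v) == v          # commutation
    for i in range(1, m - 1):
        assert word([i, i + 1] * 3, v) == v          # braid relation
    assert word([0, 1] * 4, v) == v                  # order 4 at the ends
    assert word([m - 1, m] * 4, v) == v
\end{verbatim}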
Recall that the affine Weyl group of type $\tilde{C}_m$ is the group generated by $S=\{\sigma_0,\dots,\sigma_{m}\}$ ($m\geq 2$) whose Coxeter relations are the following (see e.g. \nolinebreak \cite{humphreys}): \begin{align*} \sigma_i^2=Id & \quad \mbox{for all } 0\leq i\leq m; \\ (\sigma_i\sigma_j)^2=Id & \quad \mbox{for all } |j-i|>1; \\ (\sigma_i\sigma_{i+1})^3=Id & \quad \mbox{for all } 1\leq i\leq m-2; \\ (\sigma_0\sigma_{1})^4=(\sigma_{m-1}\sigma_{m})^4=Id. & \quad \end{align*} \pagebreak \begin{proposition}\label{prop:coxeter_relations_snm} Let $n\geq 1$ and $m\geq 2$. The generators $\Snm$ of $\Gnm$ satisfy the Coxeter relations of $\tilde{C}_m$. \end{proposition} \begin{proof} The generators in $\Snm$ are involutive and $s_i$ and $s_j$ clearly commute whenever $|j-i|>1$. Therefore the first two relations, $s_i^2=Id$ for all \mbox{$0\leq i\leq m$} and $(s_is_j)^2=Id$ for all $|j-i|>1$, hold. It can be verified that $s_is_{i+1}s_i=s_{i+1}s_is_{i+1}$ for every $1\leq i\leq m-2$. This proves the third relation: $(s_is_{i+1})^3=Id$ for all $1\leq i\leq m-2$. Note that the orbit of every vertex in $\Ynm$ under the action of $\left< s_0s_1 \right>$ (and $\left< s_{m-1}s_m \right>$) is of size $4$. This proves the final relation: $(s_0s_{1})^4=(s_{m-1}s_{m})^4=Id$. \end{proof} \begin{corollary}\label{cor:quotient_of_weyl} Let $n\geq 1$ and $m\geq 2$. $\Gnm$ is a quotient of the affine Weyl group $\tilde{C}_m$. \end{corollary} \begin{corollary}\label{cor:schreier_of_weyl_group} Let $n\geq 1$, $m\geq 2$ and let $S=\{\sigma_0,\dots,\sigma_{m}\}$ be the generators of the affine Weyl group of type $\tilde{C}_m$. \begin{enumerate} \item For every $0\leq i\leq m$, define the action of $\sigma_i$ on $\Ynm$ by $\sigma_i(v)=s_i(v)$ as in Definition \ref{def:flip_action}. Then, by Proposition \ref{prop:coxeter_relations_snm}, the action of $\tilde{C}_m$ on $\Ynm$, induced by extending the actions of $\sigma_i$ multiplicatively, is well defined. \item $\Ynm$ is the Schreier graph of $\tilde{C}_m$ with respect to the generating set $S=\{\sigma_0,\dots,\sigma_{m}\}$ and the above action of $\tilde{C}_m$ on $\Ynm$. \end{enumerate} \end{corollary} Corollary \ref{cor:schreier_of_weyl_group} generalizes \cite[Proposition 3.2]{TFT1} and \cite[Corollary 10.4]{elizalde}, by which the colored triangle-free triangulation graphs $\Gamma_n$ and the arc permutation graphs $X_n$ are Schreier graphs of the affine Weyl group of type $\tilde{C}_{n-4}$ for $n>5$ and $\tilde{C}_{n-2}$ for $n>3$, respectively. \chapter{The Eccentricity of $0$ in $\Ynm$}\label{chp:ecc_zero_Ynm} The main purpose of this chapter is to develop the theory that will allow us to state and prove the following theorem. \begin{theorem}\label{thm:ynm_ecc} Let $n\geq 1$ and $m\geq 0$. \begin{enumerate} \item If $n=1$, then $\ecc_{\Ynm}(0)=\binom{\lceil \frac{m}{2}\rceil + 1}{2} + \binom{\lfloor \frac{m}{2}\rfloor + 1}{2}$. \item If $0\leq m\leq n$, then $\ecc_{\Ynm}(0) = \lfloor\frac{n(m+1)}{2}\rfloor$. \item If $2\leq n\leq m$, let $\dz = \binom{\lfloor\frac{m+n}{2}\rfloor+1}{2} + \binom{\lceil\frac{m-n}{2}\rceil+1}{2}$. Then \begin{enumerate} \item if either $2\divides (m-n)$ or $n\leq\lceil\frac{m+1}{2}\rceil$, then $\ecc_{\Ynm}(0) = \dz$; \item otherwise, $\ecc_{\Ynm}(0) = \dz + n-\lceil\frac{m+1}{2}\rceil$.
\end{enumerate} \end{enumerate} \end{theorem} \begin{proof} The theorem is the combined result of Observation \ref{obs:m_leq_2} and \linebreak Lemmas \ref{lem:corner_case_neqo}, \ref{lem:eccentricity_in_I0}, \ref{lem:dist_of_uz}, \ref{lem:close_to_middle_pivot} and \ref{lem:middle_interval_is_n} that appear in the following sections. \end{proof} \begin{observation}[The case $m\leq 1$]\label{obs:m_leq_2} By Observation \ref{obs:small_m_is_trivial}, Theorem \ref{thm:ynm_ecc} clearly holds for $m\leq 1$. \end{observation} In the rest of this chapter, unless explicitly stated otherwise, we assume that $m\geq 2$. \section{The Shift Direction Lemma} Let $\Snm$ be the set of generators of $\Gnm$ as in Definition \ref{def:flip_action}. A \textbf{word $w$ in the letters} $\Snm$ is a sequence $f_d\cdots f_1$ where $f_1,\dots,f_d\in\Snm$. Let $P$ be a path $(v=v^0\sim\dots \sim v^d=u)$ in $\Ynm$. Clearly, there is a unique word $f_d\cdots f_1$ such that $f_t(v^{t-1})=v^t$ for every $1\leq t\leq d$ or, equivalently, $f_t\cdots f_1(v)=v^t$ for every $1\leq t\leq d$. We say that $f_d\cdots f_1$ is the \textbf{word corresponding to the path $P$ from $v$ to $u$} (or corresponding to the path $P$ starting at $v$). Note that a word $f_d\cdots f_1$ does not correspond to a path starting at $v$ if $f_{t+1}$ fixes the vertex $f_{t}\cdots f_1(v)$ for some $0\leq t\leq d-1$, where for $t=0$ this means that $f_1$ fixes $v$. For example, the word $s_1s_0$ corresponds to the path $(v=(0, 1, 1, 1)\sim (1, 0, 1, 1)\sim (1, 1, 0, 1))$ in $\Ynm[3][2]$ starting at $v$. However, $s_0s_1$ does not correspond to a path starting at $v$, since $v$ is a fixed point of $s_1$. Let $f_d\cdots f_1$ be the word corresponding to a path $(v=v^0\sim\dots \sim v^d)$ in $\Ynm$ starting at $v$. For convenience, if $v^{t}=\overleftarrow{s}_i(v^{t-1})$ for some $0\leq i\leq m$ and $1\leq t\leq d$, then we write $f_t=\overleftarrow{s}_i$. Similarly, if $v^{t}=\overrightarrow{s}_i(v^{t-1})$, then we write $f_t=\overrightarrow{s}_i$. \begin{lemma}[\sc Shift Direction Lemma] \label{lem:shift_direction_lemma_Ynm} Let $f_d\cdots f_1$ be the word corresponding to a geodesic in $\Ynm$. For every $0\leq i\leq m$, all of the instances of $s_i\in \Snm$ in $f_d\cdots f_1$ shift in the same direction. \end{lemma} \begin{proof} Let $v$ and $u$ be two vertices in $\Ynm$. Assume to the contrary that there exist words corresponding to geodesics from $v$ to $u$ not satisfying the lemma. Denote the set of such words by $\mathcal{W}$. For every word $f_d\cdots f_1$ in $\mathcal{W}$, there exist $0\leq i\leq m$ and $1\leq t_1, t_2\leq d$ such that $f_{t_1}=\overrightarrow{s}_{i}$ and $f_{t_2}=\overleftarrow{s}_{i}$. Let $w=f_d\cdots f_1$ be a word in $\mathcal{W}$ for which $\min\{|t_2-t_1|: f_{t_1}=\overrightarrow{s}_{i} \text{ and } f_{t_2}=\overleftarrow{s}_{i} \text{ for some } 0\leq i\leq m\}$ is minimal. Assume without loss of generality that $t_1<t_2$. If $t_2=t_1+1$, then the word obtained by deleting both $f_{t_1}$ and $f_{t_2}$ from $f_d\cdots f_1$ is a word corresponding to a shorter path from $v$ to $u$, contradicting the minimality of $d$. Therefore, we can assume that $t_2-t_1>1$. If $f_{t_1+1}=s_j$ for some $0\leq j\leq m$ such that $|i-j|>1$, then $f_{t_1}$ and $f_{t_1+1}$ commute as functions, and the word obtained by interchanging $f_{t_1}$ and $f_{t_1+1}$ in $w$ corresponds to a path from $v$ to $u$. This contradicts the minimality of $t_2-t_1$ in the choice of $w$. Therefore, $f_{t_1+1}\in\{\overrightarrow{s}_{i-1}, \overrightarrow{s}_{i+1}\}$. Let $v'=f_{t_1+1}\cdots f_1(v)$. If $f_{t_1+1}=\overrightarrow{s}_{i-1}$, then $v'_i=1$.
Since $f_{t_2}=\overleftarrow{s}_i$, there must exist some $t_1+1<k<t_2$ such that $f_k=\overleftarrow{s}_{i-1}$ or $f_k=\overrightarrow{s}_{i}$. Similarly, if $f_{t_1+1}=\overrightarrow{s}_{i+1}$, then $v'_{i+1}=0$ and there must exist some $t_1+1<k<t_2$ such that $f_k=\overleftarrow{s}_{i+1}$ or $f_k=\overrightarrow{s}_{i}$. All of the cases contradict the minimality of $t_2-t_1$. \end{proof} \section{Pivots and Walls}\label{sec:pivot_paths_Ynm} \begin{definition}[\sc Pivot] A \textbf{pivot} of a vertex $v\in\Ynm$ is an integer $-1\leq p\leq m+1$ such that $\sum_{i=0}^{p}v_i$ is divisible by $n$. We call $-1$ and $m+1$ the \textbf{outer pivots} of $v$; they are pivots of every $v\in\Ynm$. Every other pivot, if it exists, is called an \textbf{inner pivot}. Denote the set of pivots of $v$ by $\piv(v)$. \end{definition} For example, in $\Ynm[3][5]$, $\piv((0,1,1,0,1,1,2))=\{-1,0,4,6\}$. \begin{definition}[\sc Wall] Let $P$ be a path from $v$ to $0$ in $\Ynm$ and let $f_d\cdots f_1$ be the word corresponding to $P$. An \textbf{inner wall} of $P$ is an integer $0\leq p\leq m$ such that $s_p$ does not appear in $f_d\cdots f_1$. $-1$ is a \textbf{left outer wall} of $P$ if $\overleftarrow{s}_0$ does not appear in $f_d\cdots f_1$. Similarly, $m+1$ is a \textbf{right outer wall} of $P$ if $\overrightarrow{s}_m$ does not appear in $f_d\cdots f_1$. Finally, we say that $-1\leq p\leq m+1$ is a \textbf{wall} of $P$ if it is either an inner wall or an outer wall of $P$. \end{definition} Clearly, paths from $v$ to $0$ with a wall $p$ exist if and only if $p$ is a pivot of $v$. \begin{definition}[\sc Pivot Path] Let $p$ be a pivot of a vertex $v$ in $\Ynm$. We say that a path $P$ from $v$ to $0$ is a \textbf{$p$-pivot path} of $v$, if $P$ is shortest among the paths from $v$ to $0$ with a wall $p$. Denote the length of a $p$-pivot path of $v$ by $\ps_p(v)$. We say that $P$ is a \textbf{pivot path}, if it is a $p$-pivot path for some $p$. \end{definition} For example, let $v=(2,0,1,1,2)$ in $\Ynm[3][3]$. $\overrightarrow{s}_3\overleftarrow{s}_0\overleftarrow{s}_1$ is a word corresponding to a $2$-pivot path $P$ of $v$: $P=(v=(2,0,1\|1,2)\sim (2,1,0\|1,2)\sim (0,0,0\|1,2)\sim (0,0,0\|0,0)=0)$ where we added the symbol $\|$ to visualize the wall $2$ of $P$. For an example of an outer pivot path with a wall $-1$, let $v=(1,0,1,0)$ in $\Ynm[2][2]$. $\overrightarrow{s}_2\overrightarrow{s}_1\overrightarrow{s}_0\overrightarrow{s}_2$ is a word corresponding to a $(-1)$-pivot path $P$ of $v$: $P=(v=(\|1,0,1,0)\sim (\|1,0,0,1)\sim (\|0,1,0,1)\sim (\|0,0,1,1)\sim (\|0,0,0,0)=0)$. \begin{lemma}\label{lem:geodesic_is_a_pivot_path_ynm} Every geodesic in $\Ynm$ ending at $0$ is a pivot path (namely, has a wall). \end{lemma} \begin{proof} Let $w=f_d\cdots f_1$ be the word corresponding to a geodesic $P$ in $\Ynm$ from $v$ to $0$. Assume to the contrary that $P$ has no walls. Therefore, both $\overleftarrow{s}_0$ and $\overrightarrow{s}_m$ appear in $w$ (since $P$ has no outer walls); and by the Shift Direction Lemma \ref{lem:shift_direction_lemma_Ynm}, exactly one of $\{\overleftarrow{s}_i, \overrightarrow{s}_i\}$ appears in $w$ (at least once) for every $0\leq i\leq m$. Let $1\leq i\leq m$ be minimal such that $\overrightarrow{s}_i$ appears in $w$. Therefore, both $\overleftarrow{s}_{i-1}$ and $\overrightarrow{s}_{i}$ appear in $w$. This is a contradiction, since it implies that, throughout $P$, at least two units are shifted out of entry $i$ while no unit is shifted into it, whereas $1\leq i\leq m$ implies $v_i\leq 1$.
\end{proof} \begin{observation} By Lemma \ref{lem:geodesic_is_a_pivot_path_ynm}, $$\ecc_{\Ynm}(0)=\max_v d(v,0)=\max_v\min_p\ps_p(v).$$ \end{observation} \begin{definition} Let $v\in\Ynm$ and $p\in\piv(v)$. If (one, equivalently all) $p$-pivot paths of $v$ are also geodesics between $v$ and $0$, then we say that $p$ is a \textbf{geodesic pivot} of $v$. \end{definition} The Shift Direction Lemma \ref{lem:shift_direction_lemma_Ynm} deals with geodesics, or equivalently (by Lemma \ref{lem:geodesic_is_a_pivot_path_ynm}) with $p$-pivot paths for geodesic pivots $p$. It can be extended to arbitrary pivot paths, with essentially the same proof. \begin{lemma}[\sc Pivot Shift Direction Lemma]\label{lem:pivot_shift_direction_lemma_Ynm} Let $f_d\cdots f_1$ be the word corresponding to a pivot path in $\Ynm$. For every $0\leq i\leq m$, all of the instances of $s_i$ in $f_d\cdots f_1$ shift in the same direction. \end{lemma} \begin{observation}\label{obs:shift_direction_pivot_sides} Let $P$ be a $p$-pivot path of $v$, and let $w$ be the word corresponding to $P$. Then, for each $i<p$, if $s_i$ appears in $w$, then it appears as $\overleftarrow{s}_i$; and, for each $i>p$, if $s_i$ appears in $w$, then it appears as $\overrightarrow{s}_i$. \end{observation} \begin{corollary}\label{cor:pivot_path_len} Let $v\in\Ynm$ and let $p\in\piv(v)$. Then $$\ps_p(v)=\sum_{i=1}^{p}iv_i+\sum_{i=p+1}^{m}(m+1-i)v_i.$$ In particular, if $p$ is an inner pivot, then $$\ps_p(v)\leq \binom{p+1}{2}+\binom{m-p+1}{2}.$$ \end{corollary} \begin{definition} For $v\in \Ynm$ define: \begin{enumerate} \item $p_l(v)=\max\{p\in \piv(v):p\leq\frac{m}{2}\}$; \item $p_r(v)=\min\{p\in \piv(v):p>\frac{m}{2}\}$. \end{enumerate} We abbreviate $p_l$, $p_r$ when $v$ is evident. \end{definition} Note that if $p_1$ and $p_2$ are two pivots of $v$ such that $-1\leq p_1 < p_2 \leq \frac{m}{2}$, then $\ps_{p_2}(v) \leq \ps_{p_1}(v)$. Similarly, if $\frac{m}{2}< p_1 < p_2\leq m+1$, then $\ps_{p_1}(v) \leq \ps_{p_2}(v)$. This implies the following observation. \begin{observation}\label{obs:closer_pivot_is_better} Let $v\in \Ynm$. Then $$d(v,0)=\min_p\ps_p(v)=\min\{\ps_{p_l}(v), \ps_{p_r}(v)\}.$$ \end{observation} \begin{fact}\label{fac:split_sum_center} Let $m,p,q$ be integers such that $0\leq p\leq m$, $0\leq q\leq m$ and $|p-\frac{m}{2}| < |q-\frac{m}{2}|$. Then $\sum_{i=1}^{p}i + \sum_{i=1}^{m-p}i < \sum_{i=1}^{q}i + \sum_{i=1}^{m-q}i$. \end{fact} \begin{lemma}\label{lem:ecc_lower_bound_ynm} For any $n\geq 1$ and $m\geq 0$, $\ecc_{\Ynm}(0)\geq\lfloor\frac{n(m+1)}{2} \rfloor$. \end{lemma} \begin{proof} If $m\leq 1$, this lemma follows from Observation \ref{obs:small_m_is_trivial}. If $n=1$, it follows from Lemma \ref{lem:corner_case_neqo}. Let $n,m\geq 2$, and let $u\in\Ynm$ be defined as follows: $u_0=u_{m+1}=\lfloor\frac{n}{2}\rfloor$. If $n$ is even, then $u_i=0$ for all $1\leq i\leq m$, and if $n$ is odd, then $u_{\lfloor\frac{m+1}{2}\rfloor}=1$ and $u_i=0$ for all other $1\leq i\leq m$. For example: \begin{center} \begin{tabular}{ l l l } \hline $n$ & $m$ & $u$ \\ \hline 4 & 2 & $(2,0,0,2)$ \\ 4 & 3 & $(2,0,0,0,2)$ \\ 5 & 4 & $(2,0,1,0,0,2)$ \\ 5 & 5 & $(2,0,0,1,0,0,2)$ \\ \hline \end{tabular} \end{center} In general, $u$ has no inner pivots, that is, $\piv(u)=\{-1,m+1\}$: for every $0\leq p\leq m$, the partial sum $\sum_{i=0}^{p}u_i$ lies strictly between $0$ and $n$. Therefore, by Observation \ref{obs:closer_pivot_is_better}, $d(u,0)=\min\{\ps_{-1}(u), \ps_{m+1}(u)\}$. Note that $\sum_{i=0}^{m+1}u_i=n$.
Therefore, by Corollary \ref{cor:pivot_path_len}, $$\ps_{-1}(u)+\ps_{m+1}(u)=\sum_{i=0}^{m}(m+1-i)u_i + \sum_{i=1}^{m+1}iu_i=n(m+1).$$ It now suffices to show that $|\ps_{-1}(u)-\ps_{m+1}(u)|\leq 1$, since this implies that $\ecc_{\Ynm}(0)\geq d(u,0)=\lfloor\frac{n(m+1)}{2} \rfloor$. Clearly, if either $2 \divides n$ or $2\divides (m-n)$, then $\ps_{-1}(u)=\ps_{m+1}(u)$. If $2 \ndivides n$ and $2\divides m$, then $|\ps_{-1}(u)-\ps_{m+1}(u)|=1$. This implies that $|\ps_{-1}(u)-\ps_{m+1}(u)|\leq 1$, as required. \end{proof} \begin{lemma}\label{lem:no_pivots_ynm} Let $v\in\Ynm$ such that $v$ has no inner pivots. Then $d(v,0)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. \end{lemma} \begin{proof} Note that $v\neq 0$ and $\sum_{i=0}^{m+1}v_i = n$, since $v$ has no inner pivots. Therefore, by Corollary \ref{cor:pivot_path_len} and Observation \ref{obs:closer_pivot_is_better}, \begin{align*} d(v,0) & = \min\{\ps_{-1}(v),\ps_{m+1}(v)\} \\ & \leq \lfloor\frac{\ps_{-1}(v) + \ps_{m+1}(v)}{2}\rfloor = \lfloor\frac{\sum_{i=0}^{m+1}iv_i + \sum_{i=0}^{m+1}(m+1-i)v_i}{2}\rfloor \\ & = \lfloor \frac{n(m+1)}{2} \rfloor. \end{align*} \end{proof} \section{Computation of the Eccentricity}\label{sec:n_leq_m_Ynm} \begin{lemma}[$n=1$]\label{lem:corner_case_neqo} Let $m\geq 0$. Then $$\ecc_{\Ynm[1][m]}(0)= \binom{\lceil \frac{m}{2}\rceil + 1}{2} + \binom{\lfloor \frac{m}{2}\rfloor + 1}{2}.$$ \end{lemma} \begin{proof} The case $m=0$ is trivial. Assume that $m>0$. Note that since $n=1$, $\piv(v)=\{-1,\ldots,m+1\}$ for every $v\in\Ynm$. Therefore, by Corollary \ref{cor:pivot_path_len}, Observation \ref{obs:closer_pivot_is_better} and Fact \ref{fac:split_sum_center}, $d(v,0)\leq \ps_{\lceil\frac{m}{2}\rceil}(v)\leq\sum_{i=1}^{\lceil \frac{m}{2}\rceil}i + \sum_{i=1}^{\lfloor \frac{m}{2}\rfloor}i = \binom{\lceil \frac{m}{2}\rceil + 1}{2} + \binom{\lfloor \frac{m}{2}\rfloor + 1}{2}$. This upper bound is attained by the vertex $v\in\Ynm$ with $v_i=1$ for every $1\leq i\leq m$. \end{proof} \begin{lemma}\label{lem:eccentricity_in_I0} If $0\leq m \leq n$, then $\ecc_{\Ynm}(0) = \lfloor\frac{n(m+1)}{2} \rfloor$. \end{lemma} \begin{proof} The case $m\leq 1$ holds by Observation \ref{obs:small_m_is_trivial}. Assume that $m\geq 2$. By Lemma \ref{lem:ecc_lower_bound_ynm}, it is sufficient to show that $\ecc_{\Ynm}(0) \leq \lfloor\frac{n(m+1)}{2} \rfloor$. Let $v\in\Ynm$. By Lemma \ref{lem:no_pivots_ynm}, we can assume that $v$ has some inner pivot $p\in\piv(v)$. By Corollary \ref{cor:pivot_path_len} and Fact \ref{fac:split_sum_center} (with $q=m$), $d(v,0) \leq \ps_p(v) \leq \binom{p+1}{2}+\binom{m-p+1}{2} \leq \binom{m+1}{2}$. Since $m \le n$, $\binom{m+1}{2} \le \frac{n(m+1)}{2}$ and therefore $d(v,0)\leq\lfloor\frac{n(m+1)}{2}\rfloor$. \end{proof} Throughout the rest of this section, we assume that $2\leq n\leq m$. \pagebreak \begin{definition}\label{def:ynm_pivot_notations} Let $v\in \Ynm$. \begin{enumerate} \item $I_c(v) = [p_l(v)+1,p_r(v)]$. If $v_{p_l+1}=v_{p_r}=0$ (and $p_l+1=p_r$), then we say that $I_c(v)$ is \textbf{trivial}. We abbreviate $I_c$ when $v$ is evident. \item $h(v)=\min\{|p-\frac{m}{2}|:p\in \piv(v)\}$. \item $\hnm = \begin{cases} \frac{n}{2} & \mbox{if } 2\divides (m-n) \\ \frac{n+1}{2} & \mbox{if } 2\ndivides (m-n) \end{cases}.$ \end{enumerate} \end{definition} The outline of the rest of this section is as follows. We first construct two candidates for a vertex eccentric to $0$ in Definitions \ref{def:uz} and \ref{def:uo} and compute their distances from $0$ in Lemma \ref{lem:dist_of_uz}.
We then prove that the maximum of the two distances is an upper bound on the distance of an arbitrary vertex $v$ from $0$. We split this proof into two cases. \begin{enumerate} \item $h(v)<\hnm$ (Lemma \ref{lem:close_to_middle_pivot}). \item $h(v)\geq\hnm$ (Lemma \ref{lem:middle_interval_is_n}). \end{enumerate} \begin{definition}[\sc $\uz$]\label{def:uz} Let $u\in\Ynm$ such that $u_i=1$ for all $1\leq i\leq m$ and $u_0\equiv -\lfloor\frac{m-n}{2} \rfloor \bmod n$. Denote this vertex by $\uz$ and denote $\dz=d(\uz,0)$. \end{definition} For example, $\uz[3][5]=(2,1,1,1,1,1,2)$ and $\uz[3][6]=(2,1,1,1,1,1,1,1)$. \begin{definition}[\sc $\uo$]\label{def:uo} Assume that $2\ndivides (m-n)$. Let $u\in\Ynm$ such that $u_{i}=0$ for $i=\lceil\frac{m+1}{2}\rceil$, $u_i=1$ for all other $1\leq i\leq m$ and $u_0\equiv -\lfloor\frac{m-n}{2} \rfloor \bmod n$. Denote this vertex by $\uo$ and denote $\doo=d(\uo,0)$. \end{definition} For example, $\uo[2][5]=(1,1,1,0,1,1,1)$ and $\uo[3][6]=(2,1,1,1,0,1,1,2)$. \pagebreak \begin{observation}\label{obs:h_of_candidates}\ \begin{enumerate} \item If $2\divides (m-n)$, then $h(\uz)=\frac{n}{2}=\hnm$. \item If $2\ndivides (m-n)$, then $h(\uz)=\frac{n-1}{2}=\hnm-1$ and $h(\uo)=\frac{n+1}{2}=\hnm$. \end{enumerate} \end{observation} \begin{lemma}\label{lem:dist_of_uz}\ \begin{enumerate} \item $\dz = \binom{\lfloor\frac{m+n}{2}\rfloor+1}{2} + \binom{\lceil\frac{m-n}{2}\rceil+1}{2}$. \item If $2\ndivides (m-n)$, then $\doo= \dz +n - \lceil\frac{m+1}{2}\rceil$. \end{enumerate} \end{lemma} \begin{proof} Note that by definition, $p_l(\uz) = p_l(\uo) = \lfloor\frac{m-n}{2} \rfloor$ and $p_r(\uz)=\lfloor \frac{m+n}{2}\rfloor$. Regarding $\dz$: if $2\divides (m-n)$, then $|p_l(\uz)-\frac{m}{2}|=|p_r(\uz)-\frac{m}{2}|=\frac{n}{2}$. It follows, by Corollary \ref{cor:pivot_path_len} and Observation \ref{obs:closer_pivot_is_better}, that $\dz=d(\uz,0)=\ps_{p_l}(\uz)=\ps_{p_r}(\uz)=\binom{\frac{m+n}{2}+1}{2} + \binom{\frac{m-n}{2}+1}{2}$. If $2\ndivides (m-n)$, then $|p_l(\uz)-\frac{m}{2}|=|p_r(\uz)-\frac{m}{2}|+1=\frac{n+1}{2}$. Therefore $\dz= d(\uz,0)=\ps_{p_r}(\uz)=\binom{\lfloor\frac{m+n}{2}\rfloor+1}{2} + \binom{\lceil\frac{m-n}{2}\rceil+1}{2}$. Regarding $\doo$: if $2\divides n$ and $2\ndivides m$, then $|p_l(\uo)-\frac{m}{2}|=|p_r(\uo)-\frac{m}{2}|=\frac{n+1}{2}$ and computation gives $\doo= d(\uo,0)=\ps_{p_l}(\uo)=\ps_{p_r}(\uo)= \dz +n - \frac{m+1}{2}$. If $2\ndivides n$ and $2\divides m$, then again $|p_l(\uo)-\frac{m}{2}|=|p_r(\uo)-\frac{m}{2}|=\frac{n+1}{2}$, but (since the entry at $\lceil\frac{m+1}{2}\rceil$ of $\uo$ is equal to zero) $\ps_{p_r}(\uo)<\ps_{p_l}(\uo)$ and $\doo=d(\uo,0)=\ps_{p_r}(\uo)=\dz +n - \lceil\frac{m+1}{2}\rceil$. \end{proof} \begin{definition}[\sc $\dnm$]\label{def:dnm} Denote $$ \dnm = \begin{cases} \dz & \mbox{if } 2\divides (m-n); \\ \max\{\dz, \doo\} & \mbox{if } 2\ndivides (m-n). \end{cases} $$ \end{definition} \begin{observation} $\ecc_{\Ynm}(0)\geq \dnm$. \end{observation} It now suffices to show that $d(v,0)\leq \dnm$ for each $v\in\Ynm$. As noted above we distinguish two cases. \pagebreak \begin{lemma}\label{lem:close_to_middle_pivot} Let $v\in\Ynm$ such that $h(v)<\hnm$. Then $d(v,0)\leq \dnm$. \end{lemma} \begin{proof} We show that actually $d(v,0)\leq \dz$. Let $p'$ be an inner pivot of $v$ for which $|p'-\frac{m}{2}|=h(v)$. By Observation \ref{obs:h_of_candidates}, if $2\divides(m-n)$, then $h(\uz)=\frac{n}{2}=\hnm$, and if $2\ndivides(m-n)$, then $h(\uz)=\hnm-1$. By assumption, $h(v)\leq \hnm-1$ and therefore $h(v)\leq h(\uz)$. 
Let $p=p_r(\uz)=\lfloor\frac{m+n}{2}\rfloor$ and note that $|p-\frac{m}{2}|=h(\uz)$ and that, by Lemma \ref{lem:dist_of_uz}, $\dz=\sum_{i=1}^{p}i+\sum_{i=1}^{m-p}i$. By Corollary \ref{cor:pivot_path_len} and Fact \ref{fac:split_sum_center}, $d(v,0)\leq \sum_{i=1}^{p'}i+\sum_{i=1}^{m-p'}i\leq \sum_{i=1}^{p}i+\sum_{i=1}^{m-p}i=\dz$. \end{proof} The following lemma generalizes Lemma \ref{lem:no_pivots_ynm}. \begin{lemma}\label{lem:v_hat_dist_from_zero_center_ynm} Let $v\in\Ynm$. Then $d(v,0)\leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \lfloor\frac{n(m+1)}{2}\rfloor.$ \end{lemma} \begin{proof} Recall the definition of $I_c=I_c(v)$ (Definition \ref{def:ynm_pivot_notations}). The sum $\sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i$ is an upper bound on the number of steps needed to set to $0$ every $v_i$ with $i\in[1,m]\setminus I_c$, without shifting units into $I_c$. Note that, following these steps, each bucket of $v$ not in $I_c$ is necessarily also equal to $0$: its value is the sum of the entries of $v$ over a prefix (or a suffix) ending at a pivot, and is therefore divisible by $n$. We can therefore assume that $v_i=0$ for every $i\notin I_c$ and prove that $d(v,0)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. Note that $\sum_{i\in I_c}v_i$ is either $0$ or $n$. Therefore, as in the proof of Lemma \ref{lem:no_pivots_ynm}, $d(v,0)\leq\min\{\ps_{p_l}(v),\ps_{p_r}(v)\}\leq \lfloor\frac{\ps_{p_l}(v) + \ps_{p_r}(v)}{2}\rfloor \leq \lfloor\frac{n(m+1)}{2}\rfloor.$ \end{proof} \begin{lemma}\label{lem:middle_interval_is_n} Let $v\in\Ynm$ such that $h(v)\geq\hnm$. Then $d(v,0) \leq \dnm$. \end{lemma} \begin{proof} Assume that $2\divides(m-n)$. By Lemma \ref{lem:v_hat_dist_from_zero_center_ynm}, $d({v},0)\leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \frac{n(m+1)}{2}$. Since $h(v)\geq \hnm =\frac{n}{2}$, $p_l\leq\frac{m-n}{2}$ and $p_r\geq\frac{m+n}{2}$. Therefore $ d(v,0)\leq \sum_{i=1}^{\frac{m-n}{2}}i + \sum_{i=1}^{m-\frac{m+n}{2}}i +\frac{n(m+1)}{2}=\dz$. Assume that $2\ndivides(m-n)$. By Lemma \ref{lem:v_hat_dist_from_zero_center_ynm}, $d({v},0)\leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \lfloor\frac{n(m+1)}{2}\rfloor.$ Since $h(v)\geq \hnm =\frac{n+1}{2}$, $p_l\leq \frac{m-n-1}{2}=\lfloor\frac{m-n}{2}\rfloor$ and $p_r\geq \frac{m+n+1}{2}=\lceil\frac{m+n}{2}\rceil$. Therefore $ d(v,0)\leq \sum_{i=1}^{\lfloor\frac{m-n}{2}\rfloor}i + \sum_{i=1}^{m-\lceil\frac{m+n}{2}\rceil}i +\lfloor\frac{n(m+1)}{2}\rfloor=\doo\leq \dnm$. \end{proof} This concludes the proof of Theorem \ref{thm:ynm_ecc}. \chapter{The Diameter of $\Ynm$}\label{chp:diameter_of_Ynm} In this chapter, we study the diameter of Yoke graphs. Since Yoke graphs generalize several previously studied graphs, we first build upon a method that was used to calculate the diameter of one of these graphs. Specifically, in \cite{elizalde}, similarities between arc permutation graphs and the Hasse diagram of the dominance order on $\Pnm[n][n-2]$ were used to calculate the diameter of arc permutation graphs. In Section \ref{sec:diameter_dominance}, we generalize this approach to calculate the diameter of $\Ynm$ in the case $m\leq n$. This approach fails for $n<m$. In Sections \ref{sec:dYokeGraphs}, \ref{sec:diam_ecc} and \ref{sec:ecc_zero_Znm}, we present a new approach that works for all instances of Yoke graphs. At the core of this approach lies the idea of converting the computation of a diameter into the computation of an eccentricity. Specifically, in Section \ref{sec:dYokeGraphs}, we introduce dYoke graphs $\Znm$. In Section \ref{sec:diam_ecc}, we show that the diameter of $\Ynm$ is equal to the eccentricity of $0$ in $\Znm$.
In Section \ref{sec:ecc_zero_Znm}, we compute the eccentricity of $0$ in dYoke graphs and show that $\ecc_{\Znm}(0)=\ecc_{\Ynm}(0)$ and therefore $\diam(\Ynm)=\ecc_{\Ynm}(0)$ with an explicit formula for all values of $n$ and $m$. \section{Proof for $m\leq n$ via the Dominance Order}\label{sec:diameter_dominance} Recall the dominance order $\trianglelefteq$ in Definition \ref{def:dom_order} and recall that we denote the set $\{0,\dots, n-1\}$ by $P_n$. \begin{observation}\label{obs:domPnm_is_sublattice} $\domPnm$ is a sublattice of $(\mathbb{Z}^{m+1},\tri)$. \end{observation} \begin{proof} Let $\chi$ be the isomorphism in Observation \ref{obs:domprodiso} and let \linebreak $L=\chi(\Pnm)\subseteq (\mathbb{Z}^{m+1},\leq)$. Note that $x=(x_0,\dots,x_m) \in L$ if and only if $0\leq x_0\leq n-1$ and $x_i - x_{i-1}\in \{0, 1\}$ for every $1\leq i\leq m$. $L$ is clearly closed under the meet and join operations of $(\mathbb{Z}^{m+1},\leq)$. Therefore, $L$ is a sublattice of $(\mathbb{Z}^{m+1},\leq)$. This proves that $\domPnm$ is a sublattice of $(\mathbb{Z}^{m+1},\tri)$. \end{proof} \begin{definition} Denote $\mathscr{H}\domPnm$ (the Hasse diagram of \linebreak $\domPnm$) by $\Hnm$. Denote the elements $(0,\dots,0)$ and $(n-1,1,\dots, 1)$ in $\Pnm$ by $\hat{0}$ and $\hat{1}$, respectively. \end{definition} Note that $\hat{0}$ and $\hat{1}$ are the minimum and maximum elements of \linebreak \mbox{$\domPnm$}, respectively. Recall that vertices in $\Ynm$ are determined by their first $m+1$ entries. Accordingly (and in accordance with Convention \ref{conv:cosets_are_integers}), we identify vertices in $\Ynm$ with vertices in $\Hnm$ by ignoring the last entry, namely ``forgetting'' the right bucket of every vertex in $\Ynm$. If $m\leq 1$, then the diameter of $\Ynm$ is trivial by Observation \ref{obs:small_m_is_trivial}. Unless explicitly stated otherwise, we assume that $2\leq m\leq n$ (in $\Ynm$, $\Hnm$ and $\Pnm$) throughout this section. \begin{observation}\label{obs:cover_in_domPnm} Let $x$ and $y$ be two elements in $\Pnm$. Then $x \lessdot y$ in $\domPnm$ if and only if $x \lessdot y$ in $(\mathbb{Z}^{m+1},\tri)$. \end{observation} By Observations \ref{obs:domcover} and \ref{obs:cover_in_domPnm}, the covering relation of $(\Pnm,\trianglelefteq)$ is precisely the adjacency relation of $\Ynm$, except for $2^{m-1}$ additional edges in $\Ynm$, one between each pair of vertices of the form $(n-1,1,v_2,\dots,v_{m})$ and $(0,0,v_2,\dots,v_{m})$. \begin{observation}\label{obs:Ynm_as_dominance} $\Ynm$ is isomorphic to the graph obtained by taking $\Hnm$ and adding to it $2^{m-1}$ edges, one between each pair of elements of the form $(n-1,1,v_2,\dots,v_{m})$ and $(0,0,v_2,\dots,v_{m})$. \end{observation} Recall the rank function $\rank_\tri$ on $(\mathbb{Z}^n,\tri)$ in Observation \ref{obs:domprop}. By Observation \ref{obs:cover_in_domPnm}, it induces a rank function on $\domPnm$: for every $v=(v_0,\dots,v_m)\in\Pnm$, $$\rank_\tri(v) = \sum_{i=0}^{m}\sum_{j=0}^{i}v_j.$$ \begin{observation}\label{obs:dist_Hnm_eq_domZm} If $u$ and $v$ are two elements in $\Pnm$, then $$ d_{\Hnm}(u,v) = d_{\mathscr{H}(\mathbb{Z}^{m+1},\tri)}(u,v).$$ \end{observation} \begin{proof} The statement in this observation is implied by Theorem \ref{thm:dist_modular} and Observation \ref{obs:domPnm_is_sublattice}.
Specifically: $$ d_{\Hnm}(u,v) = d_{\mathscr{H}(\mathbb{Z}^{m+1},\tri)}(u,v) = \rank_\tri(u\join v)-\rank_\tri(u\meet v).$$ \end{proof} \begin{lemma}\label{lemma:incomparable_in_pnm} If $u$ and $v$ are a pair of incomparable elements in \linebreak $\domPnm$, then $$d_{\Hnm}(u,v)\leq 1+\binom{m}{2}.$$ \end{lemma} \begin{proof} Let $\chi$ be the isomorphism in Observation \ref{obs:domprodiso} and let $x=(x_0,\dots,x_m)=\chi(u)$ and $y=(y_0,\dots,y_m)=\chi(v)$. Note that $x$ and $y$ are incomparable in $(\mathbb{Z}^{m+1},\leq)$, since $u$ and $v$ are incomparable in $\domPnm$. By Observation \ref{obs:dist_Hnm_eq_domZm}, $d_{\Hnm}(u,v)=d_{\mathscr{H}(\mathbb{Z}^{m+1},\leq)}(x,y)$. We prove that $d_{\mathscr{H}(\mathbb{Z}^{m+1},\leq)}(x,y)\leq 1+\binom{m}{2}$. Note that $z=(z_0,\dots,z_m) \in \chi(\Pnm)$ if and only if $0\leq z_0\leq n-1$ and $z_i - z_{i-1}\in \{0, 1\}$ for every $1\leq i\leq m$. Therefore $(x_{i+1}-y_{i+1})-(x_{i}-y_{i})=(x_{i+1}-x_{i})-(y_{i+1}-y_{i})\in\{-1,0,1\}$ for every $0\leq i\leq m-1$. Since $x$ and $y$ are incomparable, there exist $0\leq j_1, j_2 \leq m$ such that $x_{j_1}-y_{j_1}<0$ and $x_{j_2}-y_{j_2}>0$. This implies that there exists some $1\leq k\leq m-1$ such that $|x_k-y_k|=0$. Finally, $d_{\mathscr{H}(\mathbb{Z}^{m+1},\leq)}(x,y)=\sum_{i=0}^m|x_i-y_i|\leq k + \dots + 1 + 0 + 1 + \dots + (m-k)=\binom{k+1}{2}+\binom{m-k+1}{2}\leq 1+\binom{m}{2}$. The maximum is obtained for $k=1$ (or $k=m-1$). \end{proof} \begin{lemma}\label{lem:modulo_along_edges} If $u, v$ are two elements in $\Pnm$ such that $|\rank_\tri(v)-\rank_\tri(u)|\leq \lfloor\frac{n(m+1)}{2}\rfloor$, then $d_{\Ynm}(u,v)\geq |\rank_\tri(v)-\rank_\tri(u)|$. \end{lemma} \begin{proof} If an edge $e=(x,y)$ in $\Ynm$ is also an edge in $\Hnm$, then, by Observation \ref{obs:cover_in_domPnm}, $|\rank_\tri(x)-\rank_\tri(y)|=1$. Otherwise, by \mbox{Observation \ref{obs:Ynm_as_dominance}}, $e$ is one of the additional $2^{m-1}$ edges in $\Ynm$ for which $|\rank_\tri(x)-\rank_\tri(y)|$ is constant and is equal to $n(m+1)-1$. Therefore, $\rank_\tri(x)$ changes by $\pm 1$ modulo $n(m+1)$ along every edge in $\Ynm$. This implies the claim of this lemma. \end{proof} \begin{corollary}\label{cor:saturated_chains_are_deodesics} Every saturated chain of length at most $\lfloor\frac{n(m+1)}{2}\rfloor$ in $(P_n\times P_2^m,\trianglelefteq)$ is a geodesic in $\Ynm$. \end{corollary} Saturated chains of length $\lfloor\frac{n(m+1)}{2}\rfloor$ exist in $\Ynm$, since $\rank_\tri(\hat{1}) \geq \lfloor\frac{n(m+1)}{2}\rfloor$ for any $n$ and $m$. \begin{corollary}\label{cor:Ynm_diam_lower_by_dominance} $\diam(\Ynm)\geq \lfloor\frac{n(m+1)}{2}\rfloor$. \end{corollary} In the rest of this section, we show that $\lfloor\frac{n(m+1)}{2}\rfloor$ is also an upper bound on the diameter of $\Ynm$, for $n\geq m$. \begin{definition} Let $u_0=(n-1,1,0,\dots,0)$ and $u_1=(0,0,1,\dots,1)$ in $\Pnm$. \begin{enumerate} \item Denote the interval $[\hat{0},u_0]\subseteq \domPnm$ by $I_0$. Denote by $\Ynm^0$ the subgraph induced by $I_0$ (in $\Ynm$). \item Denote the interval $[u_1,\hat{1}]\subseteq \domPnm$ by $I_1$. Denote by $\Ynm^1$ the subgraph induced by $I_1$ (in $\Ynm$). \end{enumerate} \end{definition} \begin{observation}\label{obs:intervals_by_sum} The elements in $I_0$ and $I_1$ can be characterized as follows. \begin{enumerate} \item $I_0=\{v\in\Pnm: \sum_{i=0}^{m}v_i \leq n\}$. \item $I_1=\{v\in\Pnm: \sum_{i=0}^{m}v_i \geq m-1\}$. \end{enumerate} Therefore, the following is true (since $m\leq n$).
\begin{enumerate} \item $I_0\cup I_1=\Pnm$. \item $I_0\cap I_1=\{v\in\Pnm:m-1\leq \sum_{i=0}^{m}v_i \leq n\} = [u_1,u_0]$. \end{enumerate} \end{observation} \begin{definition}\label{def:two_automorphisms} Let $\varphi$ and $\tau$ be the following two maps from $\Ynm$ to $\Ynm$: $$\begin{array}{c c c l} & \varphi(v_0,v_1,\dots,v_m) & = & (v_0+1,v_1,\dots,v_m) \\ & \tau(v_0,v_1,\dots,v_m) & = & (-v_0,1-v_1,\dots,1-v_{m}) \\ \end{array}$$ \end{definition} It is straightforward to verify that $\varphi$ and $\tau$ are automorphisms of $\Ynm$. \begin{lemma}\label{lem:interval_iso} $\Ynm^0$ and $\Ynm^1$ are isomorphic as graphs. \end{lemma} \begin{proof} Let $\tau$ and $\varphi$ be the automorphisms of $\Ynm$ in Definition \ref{def:two_automorphisms} and let $f=\tau\varphi$. Note that $f$ is involutive. We show that both $f(\Ynm^0) \subseteq \Ynm^1$ and $f(\Ynm^1) \subseteq \Ynm^0$. This implies that $f$ is a graph isomorphism between $\Ynm^0$ and $\Ynm^1$. Let $x\in\Ynm$ and let $y=f(x)$. Assume that $x_0=t$ and $\sum_{i=1}^m x_i = k$. Then $y_0=n-t-1$ and $\sum_{i=1}^m y_i = m-k$. Therefore $\sum_{i=0}^m y_i = n+m-1-(t+k)$. By Observation \ref{obs:intervals_by_sum}, if $x\in\Ynm^0$, then $t+k\leq n$. Therefore, $\sum_{i=0}^m y_i = n+m-1-(t+k) \geq m-1$ and $y\in \Ynm^1$. Similarly, if $x\in\Ynm^1$, then $t+k\geq m-1$. Therefore, $\sum_{i=0}^m y_i = n+m-1-(t+k) \leq n$ and $y\in \Ynm^0$. \end{proof} \pagebreak \begin{lemma}\label{lem:I0_diam} $\diam(\Ynm^0) = \diam(\Ynm^1) = \big\lfloor\frac{n(m+1)}{2}\big\rfloor.$ \end{lemma} \begin{proof} By Lemma \ref{lem:interval_iso}, it is sufficient to prove that $\diam(\Ynm^0) = \lfloor\frac{n(m+1)}{2}\rfloor.$ Let $u$ and $v$ be a pair of vertices in $\Ynm^0$. Since $\rank_\tri(u_0)=n(m+1)-1\geq \big\lfloor\frac{n(m+1)}{2}\big\rfloor$, $I_0$ contains saturated chains of length $\big\lfloor\frac{n(m+1)}{2}\big\rfloor$. Therefore, by Corollary \ref{cor:saturated_chains_are_deodesics}, it is sufficient to show that $d_{\Ynm^0}(u,v)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. By Lemma \ref{lemma:incomparable_in_pnm}, we can assume that $u$ and $v$ are comparable: otherwise, $d_{\Hnm}(u,v)\leq 1+\binom{m}{2}\leq\lfloor\frac{n(m+1)}{2}\rfloor$ (since $2\leq m\leq n$), and a geodesic realizing this distance, through $u\meet v$, stays in $I_0$. Therefore, there exists a saturated chain $A$ in $I_0$ between $\hat{0}$ and $u_0$ containing both $u$ and $v$. Its length is $\len(A)=\rank_\tri(u_0)=n(m+1)-1$. There is an edge in $\Ynm^0$ connecting $\hat{0}$ and $u_0$ (one of the additional edges in Observation \ref{obs:Ynm_as_dominance}). This edge completes $A$ to a cycle $C$ of length $n(m+1)$. This proves that $d_{\Ynm^0}(u,v)\leq d_C(u,v)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. \end{proof} \begin{theorem}\label{thm:diam_by_dominance} If $0\leq m\leq n$, then $$\diam(\Ynm) = \big\lfloor\frac{n(m+1)}{2}\big\rfloor.$$ \end{theorem} \begin{proof} If $m\leq 1$, then this theorem is trivial by Observation \ref{obs:small_m_is_trivial}. Assume that $2\leq m\leq n$. By Corollary \ref{cor:Ynm_diam_lower_by_dominance}, it is sufficient to prove that $\diam(\Ynm)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. Let $u,v\in\Ynm$. By Observation \ref{obs:intervals_by_sum}, $I_0\cup I_1=\Pnm$. If both $u$ and $v$ are in the same interval, say $I_0$, then by Lemma \ref{lem:I0_diam}, $d_{\Ynm}(u,v)\leq d_{\Ynm^0}(u,v)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. Otherwise, assume without loss of generality that $u\in I_1$ and $v\in I_0\setminus I_1$. By Observation \ref{obs:intervals_by_sum}, $I_0\setminus I_1 = \{v\in\Pnm:\sum_{i=0}^{m}v_i\leq m-2\}$. It is sufficient to show that $v$ can be mapped into $I_0\cap I_1$ via an automorphism of $\Ynm$: if $g$ is such an automorphism, then $g(v)\in I_0\cap I_1$ and $g(u)\in I_0\cup I_1$, so $g(u)$ and $g(v)$ lie in a common interval, whence, by Lemma \ref{lem:I0_diam}, $d_{\Ynm}(u,v)=d_{\Ynm}(g(u),g(v))\leq\lfloor\frac{n(m+1)}{2}\rfloor$. Let $\varphi$ be the automorphism of $\Ynm$ in Definition \ref{def:two_automorphisms}.
Note that if $v_0\neq 0$, then $\varphi^{-1}(v)$ is still in $I_0\setminus I_1$. Therefore, we can assume that $v_0=0$. Let $c=(m-1)-\sum_{i=0}^{m}v_i$ and let $v'=\varphi^c(v)$. Note that $1\leq c \leq m-1\leq n-1$, since $\sum_{i=0}^{m}v_i\leq m-2$. Therefore, $v'_0=c$ and $\sum_{i=0}^{m} v'_i=m-1$. This proves, by Observation \ref{obs:intervals_by_sum}, that $\varphi^c(v)\in I_0\cap I_1$ as required. \end{proof} \pagebreak The above line of proof fails for $n<m$. We therefore pursue a new and entirely different approach, which works for all values of $n$ and $m$. \section{The dYoke Graphs $\Znm$}\label{sec:dYokeGraphs} As noted in the introduction of this thesis, at the heart of our approach to the calculation of the diameter of $\Ynm$ lies the idea of converting the diameter problem of one graph into an eccentricity problem of another graph. To this end, we introduce a new family of graphs. \begin{definition}\label{def:dyoke_graph} Let $n\geq 1$ and $m\geq 0$ be two integers. Denote the subset $\{-1,0,1\}\subseteq \mathbb{Z}$ by $Q_3$. The \textbf{dYoke graph} $\Znm$ is a graph with vertices corresponding to all $u=(u_0,\dots,u_{m+1})\in\mathbb{Z}_n\times Q_3^m\times \mathbb{Z}_n$ such that $\sum_{i=0}^{m+1}u_i\equiv 0(\bmod n).$ Two vertices $u$ and $v$ are adjacent in $\Znm$ if there exists $0\leq i\leq m$ such that $u_j=v_j$ for every $j\notin\{i,i+1\}$ and one of the following two cases holds: either $u_i=v_i+1$ and $u_{i+1}=v_{i+1}-1$, or $u_i=v_i-1$ and $u_{i+1}=v_{i+1}+1$. \end{definition} As with Yoke graphs, we refer to the entries $u_0$ and $u_{m+1}$ of a vertex $u$ in $\Znm$ as \textbf{buckets}; and Convention \ref{conv:cosets_are_integers} applies to dYoke graphs as well, so that the sum $\sum_{i=0}^{m+1}u_i$ in Definition \ref{def:dyoke_graph} is an integer. Note that, as with Yoke graphs, a vertex $u$ in $\Znm$ is determined by its first (or last) $m+1$ entries, since $\sum_{i=0}^{m+1}u_i\equiv 0(\bmod n)$. We denote the vertex $(0,\dots,0)\in\Znm$ by $0$. In the first case of the adjacency relation, where $u_i=v_i+1$ and $u_{i+1}=v_{i+1}-1$ for some $0\leq i\leq m$, we say that $u$ is obtained from $v$ by \textbf{shifting} a unit from entry $i+1$ to the \textbf{left}, and write $u=\overleftarrow{s}_{i}(v)$. In the second case, where $u_i=v_i-1$ and $u_{i+1}=v_{i+1}+1$, we say that $u$ is obtained from $v$ by \textbf{shifting} a unit from entry $i$ to the \textbf{right}, and write $u=\overrightarrow{s}_{i}(v)$. When $v_0=0$ ($v_{m+1}=0$), we say that the left (right) bucket is \textbf{empty}. For example, if $v=(3,0,-1,1,2)\in\Znm[5][3]$, then $\overleftarrow{s}_2(v)=(3,0,0,0,2)$ and $\overrightarrow{s}_1(v)=(3,-1,0,1,2)$. If $i<j$, we say that $v_j$ is an \textbf{entry to the right} of $v_i$ and that $v_i$ is an \textbf{entry to the left} of $v_j$. For convenience, we write $s_i$ to indicate a unit shift between the entries indexed by $i$ and $i+1$ without specifying its direction. Note that if $m>0$, $1\leq k\leq m$ and $v_k=-1$, then a unit shift from entry $k$ of $v$ is not possible. Similarly, if $v_k=1$, then a unit shift to entry $k$ of $v$ is not possible. Nevertheless, for every $0\leq i\leq m$, $\overleftarrow{s}_i$ and $\overrightarrow{s}_i$ induce functions from $\Znm$ to $\Znm$: if a unit shift $s_i(v)$ is not possible for some $v\in\Znm$, then $v$ is a fixed point of $s_i$. Denote the set $\{\overleftarrow{s}_i,\overrightarrow{s}_i:0\leq i\leq m\}$ by $\Fnm$.
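The shift maps are easy to experiment with computationally. The following Python sketch (illustrative only, and not part of the formal development; all function names are ad hoc) realizes the unit shifts of $\Fnm$ on vertices of $\Znm$, encoded as integer tuples, and computes $\ecc_{\Znm}(0)$ by breadth-first search; for $m\leq n$, the output can be checked against $\lfloor\frac{n(m+1)}{2}\rfloor$ (cf.\ Lemma \ref{lem:ecc_n_geq_m} below).
\begin{verbatim}
from collections import deque

def neighbours(v, n, m):
    """All vertices obtained from v by one legal unit shift s_i."""
    res = []
    for i in range(m + 1):
        for d in (1, -1):   # d = 1: shift left into entry i; d = -1: right
            w = list(v)
            w[i] += d
            w[i + 1] -= d
            w[0] %= n       # the buckets live in Z_n
            w[m + 1] %= n
            # the middle entries must stay in {-1, 0, 1}
            if all(-1 <= w[j] <= 1 for j in range(1, m + 1)):
                res.append(tuple(w))
    return res

def ecc_zero(n, m):
    """The eccentricity of the vertex 0 in Z_{n,m}, via BFS."""
    zero = (0,) * (m + 2)
    dist = {zero: 0}
    queue = deque([zero])
    while queue:
        v = queue.popleft()
        for w in neighbours(v, n, m):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    assert len(dist) == n * 3 ** m   # sanity check: Z_{n,m} is connected
    return max(dist.values())

# For m <= n the eccentricity of 0 should equal floor(n*(m+1)/2):
for (n, m) in [(3, 2), (4, 3), (5, 2)]:
    print((n, m), ecc_zero(n, m), n * (m + 1) // 2)
\end{verbatim}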
If $m=0$, then $\overleftarrow{s}_0$ and $\overrightarrow{s}_0$ are inverses of each other and $\Fnm[n][0]$ generates the cyclic group $\mathbb{Z}_n$. If $m>0$, then none of the functions in $\Fnm$ are bijective and $\Fnm$ generates a semigroup. A \textbf{word in the letters} $\Fnm$ is a sequence $f_d\cdots f_1$ where \mbox{$f_1,\dots,f_d\in\Fnm$}. Let $P$ be a path $(v=v^0\sim\dots \sim v^d=u)$ in $\Znm$. Clearly, there is a unique word $f_d\cdots f_1$ such that $f_t(v^{t-1})=v^t$ for every $1\leq t\leq d$ or, equivalently, \mbox{$f_t\cdots f_1(v)=v^t$} for every $1\leq t\leq d$. We say that $f_d\cdots f_1$ is the \textbf{word corresponding to the path $P$ from $v$ to $u$} (or the path $P$ starting at $v$). Note that a word $f_d\cdots f_1$ does not correspond to a path starting at $v$ precisely when some letter fixes the vertex to which it is applied, that is, when $v$ is a fixed point of $f_1$, or $f_{t}\cdots f_1(v) = f_{t+1}f_t\cdots f_1(v)$ for some $1\leq t\leq d-1$. For example, the word $\overleftarrow{s}_1\overleftarrow{s}_0$ corresponds to the path $(v=(0, 1, 1, 1)\sim (1, 0, 1, 1)\sim (1, 1, 0, 1))$ in $\Znm[3][2]$ starting at $v$. However, $\overleftarrow{s}_0\overleftarrow{s}_1$ does not correspond to a path starting at $v$, since $v$ is a fixed point of $\overleftarrow{s}_1$. Let $f_d\cdots f_1$ be the word corresponding to a path from $v$ to $u$ and let $1\leq t\leq d-1$. If $f_d\cdots f_{t}f_{t+1}\cdots f_1$ (changing the order of $f_t$ and $f_{t+1}$) is also a word corresponding to a path from $v$ to $u$, then we say that $f_t$ and $f_{t+1}$ \textbf{relatively commute} in (the path) $f_d\cdots f_1(v)$. Note that the fact that $f_t$ and $f_{t+1}$ relatively commute in $f_d\cdots f_1(v)$ does not imply that $f_t$ and $f_{t+1}$ commute (as functions from the set of vertices of $\Znm$ to itself). For example, if $f_d\cdots f_1$ corresponds to a path from $v$ to $u$ such that $f_t=\overrightarrow{s}_i$ and $f_{t+1}=\overleftarrow{s}_{i+1}$ for some $1\leq t \leq d-1$ and $0\leq i\leq m-1$, then $f_t$ and $f_{t+1}$ relatively commute in $f_d\cdots f_1(v)$ but do not commute (as functions). If, however, $f_t$ and $f_{t+1}$ commute, then they also relatively commute in $f_d\cdots f_1(v)$. Clearly, two distinct $s_i$ and $s_j$ in $\Fnm$ commute if and only if $|i-j|>1$. \begin{lemma}[\sc Shift Direction Lemma] \label{lem:shift_direction_lemma_Znm} Let $f_d\cdots f_1$ be the word corresponding to a geodesic in $\Znm$. For every $0\leq i\leq m$, all of the instances of $s_i$ in $f_d\cdots f_1$ shift in the same direction. \end{lemma} \begin{proof} Let $v$ and $u$ be two vertices in $\Znm$. If $m=0$, then a word corresponding to a geodesic from $v$ to $u$ is of the form $f^k$ where $f\in\Fnm[n][0]$ and $0\leq k\leq \lfloor\frac{n}{2}\rfloor$. Assume that $m>0$ and assume to the contrary that there exist words corresponding to geodesics from $v$ to $u$ not satisfying the lemma. Denote the set of such words by $\mathcal{W}$. For every word $f_d\cdots f_1$ in $\mathcal{W}$, there exist $0\leq i\leq m$ and $1\leq t_1, t_2\leq d$ such that $f_{t_1}=\overrightarrow{s}_{i}$ and $f_{t_2}=\overleftarrow{s}_{i}$. Let $w=f_d\cdots f_1$ be a word in $\mathcal{W}$ for which $\min\{|t_2-t_1|: f_{t_1}=\overrightarrow{s}_{i},\ f_{t_2}=\overleftarrow{s}_{i},\ 0\leq i\leq m\}$ is minimal, and let $i$, $t_1$ and $t_2$ attain this minimum. Assume without loss of generality that $t_1<t_2$. If $t_2=t_1+1$, then the word obtained by deleting both $f_{t_1}$ and $f_{t_2}$ from $f_d\cdots f_1$ is a word corresponding to a shorter path from $v$ to $u$, contradicting the minimality of $d$. Therefore, we can assume that $t_2-t_1>1$.
If $f_{t_1+1}=s_j$ for some $0\leq j\leq m$ such that $|i-j|>1$, then $f_{t_1}$ and $f_{t_1+1}$ commute as functions. If $f_{t_1+1}\in\{\overleftarrow{s}_{i-1}, \overleftarrow{s}_{i+1}\}$, then $f_{t_1}$ and $f_{t_1+1}$ relatively commute in $f_d\cdots f_1(v)$. In both cases, the word obtained by interchanging $f_{t_1}$ and $f_{t_1+1}$ in $w$ corresponds to a path from $v$ to $u$. This contradicts the minimality of $t_2-t_1$ in the choice of $w$. Therefore, $f_{t_1+1}\in\{\overrightarrow{s}_{i-1}, \overrightarrow{s}_{i+1}\}$. Let $v'=f_{t_1+1}\cdots f_1(v)$. If $f_{t_1+1}=\overrightarrow{s}_{i-1}$, then $v'_i=1$ (otherwise $f_{t_1}$ and $f_{t_1+1}$ relatively commute in $f_d\cdots f_1(v)$). Since $f_{t_2}=\overleftarrow{s}_i$, there must exist some $t_1+1<k<t_2$ such that $f_k=\overleftarrow{s}_{i-1}$ or $f_k=\overrightarrow{s}_{i}$. Similarly, if $f_{t_1+1}=\overrightarrow{s}_{i+1}$, then $v'_{i+1}=-1$ and there must exist some $t_1+1<k<t_2$ such that $f_k=\overleftarrow{s}_{i+1}$ or $f_k=\overrightarrow{s}_{i}$. All of the cases contradict the minimality of $t_2-t_1$. \end{proof} \section{From Diameter to Eccentricity}\label{sec:diam_ecc} In this section, we show (Theorem \ref{thm:ecc_eq_diam}) that the diameter of $\Ynm$ is equal to the eccentricity of $0$ in $\Znm$. Every Yoke graph $\Ynm$ is naturally embedded in the dYoke graph $\Znm$ as an induced subgraph on the vertices with no negative entries. Note that for every two vertices $v,u\in\Ynm$, the difference $v-u=(v_0-u_0,v_1-u_1,\dots,v_{m+1}-u_{m+1})$ is in $\Znm$. \begin{definition} For every $u\in\Ynm$ let $\varphi_u:\Ynm\rightarrow \Znm$ be the map (on vertices) defined by $\varphi_u(v)=v-u$. \end{definition} \begin{observation}\label{obs:embedding_Ynm_in_Znm} For every $u\in\Ynm$, $\varphi_u$ is a graph isomorphism between $\Ynm$ and the subgraph induced by $\varphi_u(\Ynm)$ (in $\Znm$). \end{observation} The following lemma is essential for the proof of Lemma \ref{lem:dist_preserved_under_phi}. We start with a given $z\in\Znm$ such that $z_i=0$ for some $1\leq i\leq m$, and a geodesic $P$ between $z$ and $0$. We show that $P$ can be transformed into a geodesic $P'$ in which the $i$th entry is either non-negative or non-positive along the path. We can do this without changing the sets of values of other entries. \begin{lemma}\label{lem:handle_zeros_in_geodesics_znm} Let $z\in\Znm$ such that $z_i=0$ for some $1\leq i\leq m$ and let $P=(z=x^0\sim x^1\sim\dots\sim x^d=0)$ be a geodesic between $z$ and $0$. Then a geodesic $P'=(z=y^0\sim y^1\sim\dots\sim y^d=0)$ exists such that: { \begin{enumerate} \item either $y^t_i\leq 0$ for every $0\leq t\leq d$ or $y^t_i\geq 0$ for every $0\leq t\leq d$ ($P'$ can be constructed either way); \item \label{asdf}$\{y^t_j:0\leq t\leq d\}=\{x^t_j:0\leq t\leq d\}$ for every $j\neq i$ where $0\leq j\leq m+1$. \end{enumerate} } \end{lemma} \begin{proof} We prove the existence of $P'$ such that $y^t_i\leq 0$ for every $0\leq t\leq d$. The proof for the case $y^t_i\geq 0$ follows by symmetric arguments. Let $Q=(w^0\sim w^1\sim\dots\sim w^l)$ be a path in $\Znm$. Define $O_i(Q)=|\{0\leq t\leq l:w^t_i=1\}|$. If $O_i(P)=0$, then set $P'=P$. Otherwise, assume that $O_i(P)>0$. We show that there exists a geodesic $Q=(z=w^0\sim w^1\sim\dots\sim w^d=0)$ with the following properties: { \renewcommand\labelenumi{(\theenumi)} \begin{enumerate} \item $O_i(Q)<O_i(P)$. \item $\{w^t_j:0\leq t\leq d\}=\{x^t_j:0\leq t\leq d\}$ for every $j\neq i$ where $0\leq j\leq m+1$. 
\end{enumerate} } Note that if $O_i(Q)>0$, then the process can be repeated implying the existence of $P'$, as required. Let $f_d\cdots f_1$ be the word corresponding to $P$. Note that by the Shift Direction Lemma \ref{lem:shift_direction_lemma_Znm}, either both $\overrightarrow{s}_{i-1}$ and $\overrightarrow{s}_i$ or both $\overleftarrow{s}_{i-1}$ and $\overleftarrow{s}_i$ appear in $f_d\cdots f_1$, since $x^0_i=0$, $x^d_i=0$ and $O_i(P)>0$. Assume without loss of generality that both $\overrightarrow{s}_{i-1}$ and $\overrightarrow{s}_i$ appear in $f_d\cdots f_1$. Let $t_1$ be the minimal index such that $x^{t_1}_i=1$ and therefore $f_{t_1}=\overrightarrow{s}_{i-1}$. Let $t_2$ be the minimal index such that $t_1<t_2$ and $f_{t_2}=\overrightarrow{s}_{i}$ (such $t_2$ exists since $x^d_i=0$). If $t_2-t_1=1$, then $f_{t_1}$ and $f_{t_2}$ relatively commute in $f_d\cdots f_1(z)$, and the path $Q$ obtained by interchanging $f_{t_1}$ and $f_{t_2}$ satisfies properties (1) and (2) as required. In the rest of this proof, we assume that $t_2-t_1>1$. We can assume that $f_d\cdots f_1$ is in the form $$(*)\;\; f_d\dots f_{t_2}w_2w_1f_{t_1}\dots f_1$$ where $w_1$ and $w_2$ are words (at least one of which is nonempty) such that every letter in $w_1$ shifts a unit between entries to the right of $i$, and every letter in $w_2$ shifts a unit between entries to the left of $i$. Indeed, if there exist $t_1<t<t+1<t_2$ such that $f_{t}$ shifts a unit between entries to the left of $i$ and $f_{t+1}$ shifts a unit between entries to the right of $i$, then $f_t$ and $f_{t+1}$ commute and the word obtained by interchanging $f_t$ and $f_{t+1}$ corresponds to a path between $z$ and $0$ which satisfies property (2), and which does not change $O_i(P)$. By repeatedly interchanging such pairs, we obtain a word in the form $(*)$, since $f_{t}\notin \{\overrightarrow{s}_{i-1}, \overrightarrow{s}_{i}\}$ for every $t_1<t<t_2$. Note that $f_{t_1}$ commutes with every letter in $w_1$ and $f_{t_2}$ commutes with every letter in $w_2$. Therefore, $f_d\dots w_2f_{t_1}f_{t_2}w_1\dots f_1$ corresponds to a path $Q$ between $z$ and $0$ satisfying properties (1) and (2), similarly to the case $t_2 - t_1 = 1$. \end{proof} \begin{lemma}\label{lem:dist_preserved_under_phi} Let $v,u\in\Ynm$. Then $$d_{\Ynm}(v,u)=d_{\Znm}(v-u,0).$$ \end{lemma} \begin{proof} By Observation \ref{obs:embedding_Ynm_in_Znm}, $\Ynm$ is isomorphic to the subgraph induced by $\varphi_u(\Ynm)$ in $\Znm$, implying that $d_{\Ynm}(v,u)\geq d_{\Znm}(\varphi_u(v),\varphi_u(u))=d_{\Znm}(v-u,0)$. Conversely, let $P=(v-u=x^0\sim x^1\sim\dots\sim x^d=0)$ be a geodesic from $v-u$ to $0$ in $\Znm$. Note that if $x^0_i\leq 0$ for some $1\leq i\leq m$, then, by Lemma \ref{lem:handle_zeros_in_geodesics_znm}, there exists a geodesic $P'=(v-u=y^0\sim y^1\sim\dots\sim y^d=0)$ such that $y^t_i\leq 0$ for every $0\leq t\leq d$ (the lemma is applied to $(x^k\sim x^{k+1}\sim\dots\sim x^d=0)$, where $k$ is the first index such that $x^k_i=0$). Similarly, if $x^0_i\geq 0$ for some $1\leq i\leq m$, then there exists a geodesic $P'=(v-u=y^0\sim y^1\sim\dots\sim y^d=0)$ such that $y^t_i\geq 0$ for every $0\leq t\leq d$. 
Therefore, by its second property, Lemma \ref{lem:handle_zeros_in_geodesics_znm} can be applied iteratively to all $1\leq i\leq m$, to construct a geodesic $P'=(v-u=y^0\sim y^1\sim\dots\sim y^d=0)$ from the geodesic $P$ that satisfies the following conditions for every $1\leq i\leq m$: \begin{enumerate} \item If $u_i=1$ (implying that $(v-u)_i\leq 0$), then $y^t_i\leq 0$ for every $0\leq t\leq d$. \item If $u_i=0$ (implying that $(v-u)_i\geq 0$), then $y^t_i\geq 0$ for every $0\leq t\leq d$. \end{enumerate} Clearly, $(v=y^0+u\sim y^1+u\sim\dots\sim y^d+u=u)$ is a path between $v$ and $u$ in $\Ynm$ (the two conditions guarantee that every middle entry of $y^t+u$ lies in $\{0,1\}$). Therefore $d_{\Ynm}(v,u)\leq d_{\Znm}(v-u,0)$. \end{proof} \begin{theorem}\label{thm:ecc_eq_diam} $\diam(\Ynm) = \ecc_{\Znm}(0)$. \end{theorem} \begin{proof} By Lemma \ref{lem:dist_preserved_under_phi}, $d_{\Ynm}(v,u)=d_{\Znm}(v-u,0)\leq \ecc_{\Znm}(0)$ for each pair of vertices $v$ and $u$ in $\Ynm$. Therefore, $\diam(\Ynm) \leq \ecc_{\Znm}(0)$. On the other hand, for every $z\in\Znm$ there exist $v, u\in\Ynm$ such that $v-u=z$ (for example, for $1\leq i\leq m$ take $u_i=1$ if $z_i=-1$ and $u_i=0$ otherwise, take $u_0=0$, so that $u\in\Ynm$ is determined, and let $v=z+u$). By Lemma \ref{lem:dist_preserved_under_phi}, $d_{\Znm}(z,0) = d_{\Znm}(v-u,0) = d_{\Ynm}(v,u) \leq \diam(\Ynm)$, and therefore $\ecc_{\Znm}(0) \leq \diam(\Ynm)$. This proves equality. \end{proof} \section{The Eccentricity of $0$ in $\Znm$}\label{sec:ecc_zero_Znm} In this section, we compute the eccentricity of $0$ in $\Znm$ (Theorem \ref{thm:ecc_znm}). It turns out to be equal to the eccentricity of $0$ in $\Ynm$ as it appears in Theorem \ref{thm:ynm_ecc}. By Theorem \ref{thm:ecc_eq_diam}, it is also equal to the diameter of $\Ynm$. In Subsection \ref{subsec:pivot_paths_Znm}, we study pivot paths and some related concepts. Note that many of the definitions that appear in this section are similar to the ones in Chapter \ref{chp:ecc_zero_Ynm}, where these notions were applied to compute the eccentricity of $0$ in Yoke graphs. Here, these definitions are applied to dYoke graphs and the proofs are slightly more complex due to the richer structure of dYoke graphs. In Subsection \ref{subsec:n_leq_m_Znm}, we compute the eccentricity of $0$ in $\Znm$. \begin{observation}\label{obs:corner_case_meqz} If $m=0$, then $\Znm[n][0]$ is a cycle graph on $n$ vertices. Therefore, $\ecc_{\Znm[n][0]}(0)=\lfloor\frac{n}{2}\rfloor=\lfloor\frac{n(m+1)}{2}\rfloor$. \end{observation} In the rest of this section, unless explicitly stated otherwise, we assume that $m>0$. \subsection{Pivots and Walls}\label{subsec:pivot_paths_Znm} \begin{definition}[\sc Pivot] A \textbf{pivot} of a vertex $v\in\Znm$ is an integer $-1\leq p\leq m+1$ such that $\sum_{i=0}^{p}v_i$ is divisible by $n$. We call $-1$ and $m+1$ the \textbf{outer pivots} of $v$; they are pivots of every $v\in\Znm$. Every other pivot, if it exists, is called an \textbf{inner pivot}. Denote the set of pivots of $v$ by $\piv(v)$. \end{definition} For example, in $\Znm[3][8]$, $\piv((0,1,-1,0,1,1,-1,-1,1,2))=\{-1,0,2,3,7,9\}$. \begin{definition}[\sc Wall] Let $P$ be a path from $v$ to $0$ in $\Znm$ and let $f_d\cdots f_1$ be the word corresponding to $P$. An \textbf{inner wall} of $P$ is an integer $0\leq p\leq m$ such that $s_p$ does not appear in $f_d\cdots f_1$. $-1$ is a \textbf{left outer wall} of $P$ if $\overleftarrow{s}_0$ does not appear in $f_d\cdots f_1$. Similarly, $m+1$ is a \textbf{right outer wall} of $P$ if $\overrightarrow{s}_m$ does not appear in $f_d\cdots f_1$. Finally, we say that $-1\leq p\leq m+1$ is a \textbf{wall} of $P$ if it is either an inner wall or an outer wall of $P$.
\end{definition} Clearly, paths from $v$ to $0$ with a wall $p$ exist if and only if $p$ is a pivot of $v$. \begin{definition}[\sc Pivot Path] Let $p$ be a pivot of a vertex $v$ in $\Znm$. We say that a path $P$ from $v$ to $0$ is a \textbf{$p$-pivot path} of $v$, if $P$ is shortest among the paths from $v$ to $0$ with a wall $p$. Denote the length of a $p$-pivot path of $v$ by $\ps_p(v)$. We say that $P$ is a \textbf{pivot path}, if it is a $p$-pivot path for some $p$. \end{definition} For example, let $v=(1,0,-1,-1,1)$ in $\Znm[3][3]$. Then $\overleftarrow{s}_3\overrightarrow{s}_0\overrightarrow{s}_1$ is a word corresponding to a $2$-pivot path $P$ of $v$: $P=(v=(1,0,-1\,\|-1,1)\sim (1,-1,0\,\|-1,1)\sim (0,0,0\,\|-1,1)\sim (0,0,0\,\|\,0,0)=0)$ where we added the symbol $\|$ to visualize the wall $2$ of $P$. \begin{lemma}\label{lem:geodesic_is_a_pivot_path} Every geodesic in $\Znm$ ending at $0$ is a pivot path (namely, has a wall). \end{lemma} \begin{proof} Let $w=f_d\cdots f_1$ be the word corresponding to a geodesic $P$ in $\Znm$ from $v$ to $0$. Assume to the contrary that $P$ has no walls. Therefore, both $\overleftarrow{s}_0$ and $\overrightarrow{s}_m$ appear in $w$ (since $P$ has no outer walls). By the Shift Direction Lemma \ref{lem:shift_direction_lemma_Znm}, and since $P$ has no walls, exactly one of $\{\overleftarrow{s}_i, \overrightarrow{s}_i\}$ appears in $w$ (at least once) for every $0\leq i\leq m$. Let $1\leq i\leq m$ be minimal such that $\overrightarrow{s}_i$ appears in $w$. Then both $\overleftarrow{s}_{i-1}$ and $\overrightarrow{s}_{i}$ appear in $w$. This is a contradiction, since it implies that, throughout $P$, at least two units are shifted from the entry $v_i$ and no units are shifted to $v_i$. \end{proof} \begin{observation}\label{obs:closer_pivot_is_better_znm} By Lemma \ref{lem:geodesic_is_a_pivot_path}, $$\ecc_{\Znm}(0)=\max_vd(v,0)=\max_v\min_p\ps_p(v).$$ \end{observation} \begin{definition} Let $v\in\Znm$ and $p\in\piv(v)$. If (one, equivalently all) $p$-pivot paths of $v$ are also geodesics between $v$ and $0$, then we say that $p$ is a \textbf{geodesic pivot} of $v$. \end{definition} The Shift Direction Lemma \ref{lem:shift_direction_lemma_Znm} deals with geodesics, or equivalently (by Lemma \ref{lem:geodesic_is_a_pivot_path}) with $p$-pivot paths for geodesic pivots $p$. It can be extended to arbitrary pivot paths, with essentially the same proof. \begin{lemma}[\sc Pivot Shift Direction Lemma]\label{lem:pivot_shift_direction_lemma} Let $f_d\cdots f_1$ be the word corresponding to a pivot path in $\Znm$. For every $0\leq i\leq m$, all of the instances of $s_i$ in $f_d\cdots f_1$ shift in the same direction. \end{lemma} \begin{observation}\label{obs:trivial_path} Let $v\in\Znm$ and let $p\in\piv(v)$. Then $$\ps_p(v)\leq \sum_{i=1}^{p}i|v_i| + \sum_{i=p+1}^{m}(m+1-i)|v_i|.$$ If, in addition, $v_i\geq 0$ for every $1\leq i\leq m$, then the above is an equality. \end{observation} \begin{proof} There exists a path with a wall $p$ from $v$ to $0$ of length $\sum_{i=1}^{p}i|v_i| + \sum_{i=p+1}^{m}(m+1-i)|v_i|$ (not necessarily a $p$-pivot path). It is a path in which the entries $\{v_0,\dots,v_p\}$ and $\{v_{p+1}, \dots, v_{m+1}\}$ of $v$ are handled one by one, left to right and right to left, respectively; each $v_i$, in turn, can be set to $0$ by $i$ unit shifts from (if $v_i=-1$) or to (if $v_i=1$) the bucket on the same side of $p$ as $v_i$. 
For example, if $v=(0,-1,1,0)$ in $\Znm[n][2]$ for some $n\geq 1$ and $p=2$, then the path from $v$ to $0$ that corresponds to the word $\overleftarrow{s}_0\overleftarrow{s}_1\overrightarrow{s}_0$ has length $1+2=3$, as above. Note that it is not a $2$-pivot path, since $\overleftarrow{s}_1(v)=0$. If, however, $v_i\geq 0$ for every $1\leq i\leq m$, then by the Pivot Shift Direction Lemma \ref{lem:pivot_shift_direction_lemma}, a path as above is, in fact, a $p$-pivot path. \end{proof} \begin{corollary}\label{cor:trivial_upper_bound} If $p$ is an inner pivot of $v\in\Znm$, then $$\ps_p(v)\leq \binom{p+1}{2}+\binom{m-p+1}{2}.$$ \end{corollary} Comparing Corollary \ref{cor:pivot_path_len} and Observation \ref{obs:trivial_path} leads to the following corollary: \begin{corollary}\label{cor:dist_equal_when_positive} Let $v\in \Ynm$, considered also as a vertex in $\Znm$. Then $$d_{\Znm}(v,0)=d_{\Ynm}(v,0).$$ \end{corollary} \begin{definition}[\sc $P$-interval] Let $P$ be a pivot path of $v$ and let $\{p_1,\ldots,p_t\}$ with $-1<p_1<\ldots<p_t<m+1$ be the set of inner walls of $P$. Denote $p_0 = -1$, $p_{t+1} = m+1$ and $\piv(P) = \{p_0, p_1, \ldots, p_t, p_{t+1}\}$. Note that $\piv(P)\subseteq \piv(v)$. We call every $p\in\piv(P)$ a \textbf{pivot of $P$}. $\piv(P)$ induces $t+1$ intervals $[p_k + 1,p_{k+1}]$ for $0\leq k\leq t$. Note that these intervals are a partition of $[0,m+1]$. We call every such interval a \textbf{$P$-interval}. For convenience, we say that an entry $v_i$ is in the interval $I$ if $i\in I$. \end{definition} \begin{lemma}\label{lem:same_direction_in_interval} Let $f_d\cdots f_1$ be the word corresponding to a pivot path $P$ of $v$ and let $I$ be a $P$-interval. Then for every $[i,i+1]\subseteq I$ and $[j,j+1]\subseteq I$, the instances of $s_i$ and $s_j$ in $f_d\cdots f_1$ shift in the same direction. \end{lemma} \begin{proof} By the Pivot Shift Direction Lemma \ref{lem:pivot_shift_direction_lemma} and the definition of a $P$-interval, exactly one of $\{\overleftarrow{s}_i, \overrightarrow{s}_i\}$ appears in $f_d\cdots f_1$, for every $[i,i+1]\subseteq I$. Assume to the contrary, without loss of generality, that both $\overleftarrow{s}_i$ and $\overrightarrow{s}_j$ appear in $f_d\cdots f_1$ for some $i<j$ in $I$. Let $k\in I$ such that $i<k$, $s_k$ appears shifting right in $f_d\cdots f_1$ and $k$ is minimal with these properties. Then both $\overleftarrow{s}_{k-1}$ and $\overrightarrow{s}_{k}$ appear in $f_d\cdots f_1$, so at least two units are shifted out of entry $k$ and none into it, a contradiction (as in the proof of Lemma \ref{lem:geodesic_is_a_pivot_path}). \end{proof} \begin{definition}[\sc $\sigma_I(P)$] By Lemma \ref{lem:same_direction_in_interval}, all of the unit shifts between entries in the same $P$-interval $I$ are in the same direction throughout $P$. If this direction is left (right), we say that \textbf{$P$ shifts left (right) in $I$}. Denote the number of unit shifts between entries in some $P$-interval $I\subseteq [0,m+1]$ within a pivot path $P$ by $\sigma_I(P)$. \end{definition} \begin{lemma}\label{lem:number_of_shift_in_a_p_interval} Let $I=[p_1+1,p_2]$ be a $P$-interval of some pivot path $P$ of $v\in\Znm$. If $I$ contains one of the buckets, then also assume that no step in $P$ shifts a unit from an empty bucket (i.e. $\overleftarrow{s}_m((\dots,0))=((\dots,n-1))$ and $\overrightarrow{s}_0((0,\dots))=((n-1,\dots))$ do not appear in $P$). Then { \begin{enumerate} \item If $P$ shifts left in $I$, then $\sigma_I(P) = \sum_{i\in I}iv_i$. \item If $P$ shifts right in $I$, then $\sigma_I(P) = \sum_{i\in I}(m+1-i)v_i$.
\end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:same_direction_in_interval}, $P$ shifts either right or left in $I$. Assume that $P$ shifts left in $I$ (the proof of the other case is similar). Note that every left unit shift in $P$ between entries in $I$ reduces the value of the sum $\sum_{i\in I}iv_i$ exactly by $1$, since a unit is never shifted left from an empty right bucket. Therefore $\sum_{i\in I}iv_i\geq 0$ and $\sigma_I(P) = \sum_{i\in I}iv_i$. \end{proof} \begin{lemma}\label{lem:sufficient_for_number_of_shifts} Let $I=[p_1+1,p_2]$ be a $P$-interval of some pivot path $P$ of $v\in\Znm$. \begin{enumerate} \item Assume that $p_2=m+1$ and that $P$ shifts left in $I$. If $\sum_{i=j}^{m+1}v_i\geq 0$ for every $j\in I$, then there is no left unit shift in $P$ from an empty right bucket. \item Assume that $p_1=-1$ and that $P$ shifts right in $I$. If $\sum_{i=0}^{j}v_i\geq 0$ for every $j\in I$, then there is no right unit shift in $P$ from an empty left bucket. \end{enumerate} \end{lemma} \begin{proof} We prove the first case and similar arguments apply to the second case. Note that a left unit shift from an empty right bucket increases $\sum_{i\in I}iv_i$. On the other hand, every other left unit shift between entries in $I$ (we call such unit shifts \textit{simple} left unit shifts in the rest of this proof) reduces the value of the sum $\sum_{i\in I}iv_i$ exactly by $1$. Therefore, it is sufficient to prove the following. \begin{claim} It is possible to set to $0$ every $v_i$ in $I$ using only simple left unit shifts between entries in $I$. \end{claim} \begin{claimProof} Assume first that $I$ contains no negative entries. If $p_1=-1$, then the claim is trivial. If $p_1$ is an inner pivot, then $v_i=0$ for every $i\in I$, since, in this case, a left unit shift between entries in $I$ cannot decrease the sum $\sum_{i\in I}v_i$ (implying, in fact, that $v_{m+1}=0$ and $|I|=1$). Otherwise, let $j$ be maximal in $I$ such that $v_j=-1$. Of course, $1\leq j\leq m$. The assumption that $\sum_{i=j}^{m+1}v_i\geq 0$ implies that $\sum_{i=j+1}^{m+1}v_i > 0$. Therefore, it is possible to set $v_j$ to $0$ using only simple left unit shifts between entries in $[j,m+1]$ without creating new negative entries in $I$. Note that $\sum_{i=j}^{m+1}v_i$ does not change in this process, since all of the unit shifts are simple. Therefore, this can be repeated until $I$ contains no negative entries. \end{claimProof} \end{proof} \begin{lemma}\label{lem:no_pivots} Let $v\in\Znm$ such that $v$ has no inner pivots. Then $d(v,0)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. \end{lemma} \begin{proof} $0<v_0,v_{m+1}<n$ and $\sum_{i=0}^{m+1}v_i = n$, since $v$ has no inner pivots. The assumption $\piv(v)=\{-1,m+1\}$ also implies that both $\sum_{i=0}^{j}v_i> 0$ and $\sum_{i=j}^{m+1}v_i> 0$ for every $0\leq j\leq m+1$. Note that $I=[0, m+1]$ is a $P$-interval for every $(m+1)$-pivot path $P$ and every such path shifts left in $I$. Similarly, every $(-1)$-pivot path $P$ shifts right in $I$. Therefore, by Lemmas \ref{lem:number_of_shift_in_a_p_interval} and \ref{lem:sufficient_for_number_of_shifts}, $\ps_{m+1}(v)=\sum_{i=0}^{m+1}iv_i$ and $\ps_{-1}(v)=\sum_{i=0}^{m+1}(m+1-i)v_i$. $\sum_{i=0}^{m+1}iv_i + \sum_{i=0}^{m+1}(m+1-i)v_i= (m+1) \sum_{i=0}^{m+1} v_i =n(m+1)$. 
Therefore, by Lemma \ref{lem:geodesic_is_a_pivot_path}, $d(v,0)=\min\{\ps_{-1}(v),\ps_{m+1}(v)\}\leq \lfloor\frac{\ps_{-1}(v)+\ps_{m+1}(v)}{2}\rfloor=\lfloor\frac{n(m+1)}{2}\rfloor.$ \end{proof} \begin{lemma}\label{lem:ecc_lower_bound} For every dYoke graph $\Znm$ we have $\ecc_{\Znm}(0)\geq\lfloor\frac{n(m+1)}{2} \rfloor$. \end{lemma} \begin{proof} By Lemma \ref{lem:ecc_lower_bound_ynm} and Corollary \ref{cor:dist_equal_when_positive}, $\ecc_{\Znm}(0)\geq\ecc_{\Ynm}(0)\geq\lfloor\frac{n(m+1)}{2} \rfloor$. \end{proof} \subsection{Computation of the Eccentricity}\label{subsec:n_leq_m_Znm} \begin{lemma}[$n=1$]\label{lem:corner_case_neqo_znm} Let $m\geq 0$. Then $$\ecc_{\Znm[1][m]}(0)= \binom{\lceil \frac{m}{2}\rceil + 1}{2} + \binom{\lfloor \frac{m}{2}\rfloor + 1}{2}.$$ \end{lemma} \begin{proof} The case $m=0$ is trivial. Assume that $m>0$. Then $\piv(v)=\{-1,\ldots,m+1\}$ for every $v\in\Znm[1][m]$. Therefore, by Fact \ref{fac:split_sum_center}, Observation\nolinebreak \ref{obs:closer_pivot_is_better_znm} and Corollary \ref{cor:trivial_upper_bound}, $d(v,0)\leq \ps_{\lceil\frac{m}{2}\rceil}(v)\leq \binom{\lceil \frac{m}{2}\rceil + 1}{2} + \binom{\lfloor \frac{m}{2}\rfloor + 1}{2}$. By Observation \ref{obs:trivial_path}, this upper bound is obtained by the vertex $v\in\Ynm[1][m]$ with $v_i=1$ for every $1\leq i\leq m$. \end{proof} \begin{lemma}\label{lem:ecc_n_geq_m} If $0\leq m\leq n$, then $\ecc_{\Znm}(0) = \lfloor\frac{n(m+1)}{2}\rfloor$. \end{lemma} \begin{proof} The case $m=0$ is true by Observation \ref{obs:corner_case_meqz}. Assume that $m>0$. Then by Lemma \ref{lem:ecc_lower_bound}, it is sufficient to show that $\ecc_{\Znm}(0) \leq \lfloor\frac{n(m+1)}{2}\rfloor$. Let $v\in\Znm$. By Lemma \ref{lem:no_pivots}, we can assume that $v$ has some inner pivot $p\in\piv(v)$. By Observation \ref{obs:trivial_path} and Fact \ref{fac:split_sum_center}, $d(v,0) \leq \sum_{i=1}^{p}i + \sum_{i=1}^{m-p}i \leq \binom{m+1}{2}$. Since $m \le n$, $\binom{m+1}{2} \le \frac{n(m+1)}{2}$, and since $\binom{m+1}{2}$ is an integer, $d(v,0)\leq\binom{m+1}{2}\leq\lfloor\frac{n(m+1)}{2}\rfloor$. \end{proof} Throughout the rest of this subsection we assume that $2\leq n\leq m$. \begin{definition} For $v\in \Znm$ define: \begin{enumerate} \item $p_l(v)=\max\{p\in \piv(v):p\leq \frac{m}{2}\}$. \item $p_r(v)=\min\{p\in \piv(v):p > \frac{m}{2}\}$. \item $I_c(v) = [p_l(v)+1,p_r(v)]$. \item $h(v)=\min\{|p-\frac{m}{2}|:p\in \piv(v)\}$. \item $\hnm = \begin{cases} \frac{n}{2} & \mbox{if } 2\divides (m-n) \\ \frac{n+1}{2} & \mbox{if } 2\ndivides (m-n) \end{cases}.$ \end{enumerate} We abbreviate $p_l$, $p_r$ and $I_c$ when $v$ is evident. \end{definition} Recall the definitions of $\uz$, $\dz$, $\uo$, $\doo$ and $\dnm$ (in $\Ynm$) from Definitions \ref{def:uz}, \ref{def:uo} and \ref{def:dnm}. Note that $\uz$ and $\uo$ can also be considered as vertices in $\Znm$ and by Corollary \ref{cor:dist_equal_when_positive}, $d_{\Znm}(\uz, 0)=d_{\Ynm}(\uz, 0)=\dz$ and $d_{\Znm}(\uo, 0)=d_{\Ynm}(\uo, 0)=\doo$. \begin{observation}\label{obs:ecc_lower_bound} $\ecc_{\Znm}(0) \geq \dnm$. \end{observation} In the rest of this subsection, we prove that $\dnm$ is also an upper bound on the distance of an arbitrary vertex $v$ in $\Znm$ from $0$. Note that $\sum_{i\in I_c}v_i\in\{0,\pm n\}$ for every $v$ in $\Znm$. We split this proof into three cases. \begin{enumerate} \item $h(v)<\hnm$ (Lemma \ref{lem:close_to_middle_pivot_znm}). \item $h(v)\geq\hnm$ and $\sum_{i\in I_c(v)}v_i=\pm n$ (Lemma \ref{lem:middle_interval_is_n_znm}).
\item $h(v)\geq\hnm$ and $\sum_{i\in I_c(v)}v_i=0$ (Lemma \ref{lem:zero_middle_interval}). \end{enumerate} \begin{lemma}\label{lem:close_to_middle_pivot_znm} Let $v\in\Znm$ such that $h(v)<\hnm$. Then $d(v,0)\leq\dnm$. \end{lemma} \begin{proof} The proof of this lemma is identical to the proof of Lemma \ref{lem:close_to_middle_pivot} in which Corollary \ref{cor:pivot_path_len} is replaced by Corollary \ref{cor:trivial_upper_bound}. \end{proof} We now deal with the case $h(v)\geq\hnm$, $\sum_{i\in I_c}v_i=\pm n$. We first reduce to the case $\sum_{i\in I_c}v_i=n$. \begin{definition}\label{def:aut_mu} Let $\mu:\Znm\rightarrow \Znm$ be the map defined by $\mu(v)_i=-v_i$ for every $0\leq i\leq m+1$. \end{definition} Note that if $v_0 > 0$, then $\mu(v)_0=n-v_0>0$ and similarly for $v_{m+1}$. It is straightforward to verify that $\mu$ is an automorphism of $\Znm$. \pagebreak \begin{observation}\label{obs:neg_Ic_to_positive} Let $v\in\Znm$ such that $\sum_{i\in I_c}v_i=-n$. Then $I_c\subseteq [1,m]$ and $\sum_{i\in I_c}\mu(v)_i=n$. \end{observation} \begin{proof} If $I_c\nsubseteq [1,m]$, then either $p_l=-1$ or $p_r=m+1$. Assume that $p_l=-1$. Recall that the buckets are identified with non-negative integers. Since $0\notin\piv(v)$, we have $v_0>0$. This implies that $\sum_{i\in I_c}v_i\in\{0,n\}$, a contradiction. If $p_r=m+1$, then $\sum_{i\in I_c}v_i\in\{0,n\}$ by similar arguments. Finally, if $I_c\subseteq [1,m]$ and $\sum_{i\in I_c}v_i=-n$, then $\sum_{i\in I_c}\mu(v)_i=n$, since $\piv(v)=\piv(\mu(v))$. \end{proof} The following lemma generalizes Lemma \ref{lem:no_pivots}. \begin{lemma}\label{lem:v_hat_dist_from_zero_center} Let $v\in\Znm$ such that $\sum_{i\in I_c}v_i=n$. Then $$d(v,0)\leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \lfloor\frac{n(m+1)}{2}\rfloor.$$ \end{lemma} \begin{proof} The sum $\sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i$ is an upper bound on the number of steps needed to set to $0$ every $v_i$ with $i\in [1,m]\setminus I_c$; note that, necessarily, following these steps, each bucket of $v$ not in $I_c$ is also equal to $0$ (as in the proof of Lemma \ref{lem:v_hat_dist_from_zero_center_ynm}). We can therefore assume that $v_i=0$ for every $i\notin I_c$ and prove that $d(v,0)\leq \lfloor\frac{n(m+1)}{2}\rfloor$. The assumption implies that $d(v, 0)=\min\{\ps_{p_l}(v), \ps_{p_r}(v)\}$. Since $\sum_{i\in I_c}v_i=n$, every $p_r$-pivot path of $v$ shifts left in $[0,p_r]$ and every $p_l$-pivot path of $v$ shifts right in $[p_l+1,m+1]$. Moreover, the assumption $\sum_{i\in I_c}v_i=n$ implies that $\sum_{i=j}^{p_r}v_i> 0$ for every $j\in I_c$ and $\sum_{i=p_l+1}^{j}v_i> 0$ for every $j\in I_c$. Therefore, by Lemmas \ref{lem:number_of_shift_in_a_p_interval} and \ref{lem:sufficient_for_number_of_shifts}, $d(v,0)=\min\{\ps_{p_l}(v),\ps_{p_r}(v)\}\leq \lfloor\frac{\ps_{p_l}(v)+ \ps_{p_r}(v)}{2}\rfloor= \lfloor\frac{\sum_{i\in I_c}iv_i + \sum_{i\in I_c}(m+1-i)v_i}{2}\rfloor=\lfloor\frac{n(m+1)}{2}\rfloor.$ \end{proof} \begin{observation}\label{obs:dz_doo_split} \begin{enumerate} \item[] \item If $2\divides(m-n)$, then $\dz = \sum_{i=1}^{\frac{m-n}{2}}i + \sum_{i=1}^{m-\frac{m+n}{2}}i + \frac{n(m+1)}{2}$. \item If $2\ndivides(m-n)$, then $\doo = \sum_{i=1}^{\lfloor\frac{m-n}{2}\rfloor}i + \sum_{i=1}^{m-\lceil\frac{m+n}{2}\rceil}i + \lfloor\frac{n(m+1)}{2}\rfloor$. \end{enumerate} \end{observation} \begin{proof} This follows from a direct (though tedious) comparison with Lemma \ref{lem:dist_of_uz}. Alternatively, it follows by observing that $|p_l-\frac{m}{2}|=|p_r-\frac{m}{2}|$ in both cases under consideration.
\end{proof} \begin{lemma}\label{lem:middle_interval_is_n_znm} Let $v\in\Znm$ such that $h(v)\geq\hnm$ and $\sum_{i\in I_c(v)}v_i=\pm n$. Then $d(v,0) \leq \dnm.$ \end{lemma} \begin{proof} By Observation \ref{obs:neg_Ic_to_positive}, we can assume that $\sum_{i\in I_c(v)}v_i=n$. Assume that $2\divides(m-n)$. By Observation \ref{obs:dz_doo_split}, $\dnm = \dz = \sum_{i=1}^{\frac{m-n}{2}}i + \sum_{i=1}^{m-\frac{m+n}{2}}i + \frac{n(m+1)}{2}.$ By Lemma \ref{lem:v_hat_dist_from_zero_center}, $d({v},0)\leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \frac{n(m+1)}{2}.$ Since $h(v)\geq\hnm$, $p_l\leq\frac{m-n}{2}$ and $p_r\geq\frac{m+n}{2}$. Therefore $d(v,0)\leq \dnm$. Otherwise, assume that $2\ndivides(m-n)$. By Observation \ref{obs:dz_doo_split}, $\dnm\geq \doo = \sum_{i=1}^{\lfloor\frac{m-n}{2}\rfloor}i + \sum_{i=1}^{m-\lceil\frac{m+n}{2}\rceil}i + \lfloor\frac{n(m+1)}{2}\rfloor.$ By Lemma \ref{lem:v_hat_dist_from_zero_center}, $d({v},0)\leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \lfloor\frac{n(m+1)}{2}\rfloor.$ Since $h(v)\geq\hnm$, $p_l\leq \lfloor\frac{m-n}{2}\rfloor$ and $p_r\geq\lceil\frac{m+n}{2}\rceil$. Therefore $d(v,0)\leq \dnm$. \end{proof} \begin{lemma}\label{lem:upper_middle_is_zero} Let $v\in\Znm$ such that $I_c(v)\subseteq[1,m]$ and $\sum_{i\in I_c(v)}v_i=0$. Then $$d(v,0) \leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \lfloor\frac{\Icwidth}{2}\rfloor\lceil\frac{\Icwidth}{2}\rceil$$ where $\Icwidth=|I_c(v)|$. \end{lemma} \begin{proof} As in the beginning of the proof of Lemma \ref{lem:v_hat_dist_from_zero_center}, we can assume that $v_i=0$ for every $i\notin I_c$ and prove that $d(v,0)\leq \lfloor\frac{\Icwidth}{2}\rfloor\lceil\frac{\Icwidth}{2}\rceil$. Let $P$ be any $p_l$- or $p_r$-pivot path of $v$. $I_c$ is clearly a $P$-interval (namely, both $p_l$ and $p_r$ are walls of $P$), since $\sum_{i\in I_c}v_i=0$ and $I_c(v)\subseteq[1,m]$. Moreover, since $v_i=0$ for every $i\notin I_c$, we may assume that every step of $P$ is a unit shift between entries in $I_c$, so that $\len(P)=\sigma_{I_c}(P)$. Assume without loss of generality, that $P$ shifts left in $I_c$. By Lemma \ref{lem:number_of_shift_in_a_p_interval}, $\sigma_{I_c}(P) = \sum_{i\in I_c}iv_i$. The number of entries of $v$ in $I_c$ that are equal to $1$ is equal to the number of entries that are equal to $-1$, since $\sum_{i\in I_c}v_i = 0$. This number is at most $\lfloor\frac{\Icwidth}{2}\rfloor$. The maximum value of $\sum_{i\in I_c}iv_i$ is obtained when the first $\lfloor\frac{\Icwidth}{2}\rfloor$ entries in $I_c$ are equal to $-1$ and the last $\lfloor\frac{\Icwidth}{2}\rfloor$ entries are equal to $1$. Therefore $d(v,0)\leq\len(P)=\sum_{i\in I_c}iv_i\leq \lfloor\frac{\Icwidth}{2}\rfloor\lceil\frac{\Icwidth}{2}\rceil$. \end{proof} \begin{fact}\label{fac:split_sum} Let $a$ and $b$ be two non-negative integers. Then $$\sum_{i=1}^a i + \sum_{i=1}^{b} i \geq \lfloor\frac{a+b}{2}\rfloor\lceil\frac{a+b}{2}\rceil.$$ \end{fact} \pagebreak \begin{lemma}\label{lem:zero_middle_interval} Let $v\in\Znm$ such that $h(v)\geq\hnm$ and $\sum_{i\in I_c}v_i=0$. Then $d(v,0)\leq\dnm$. \end{lemma} \begin{proof} If both $p_l$ and $p_r$ are outer pivots, then $v$ has no inner pivots, so both buckets are non-empty and, as in the proof of Lemma \ref{lem:no_pivots}, $\sum_{i\in I_c}v_i=n$, contradicting the assumption $\sum_{i\in I_c}v_i=0$; hence this case does not occur. Assume that exactly one of $p_l$ and $p_r$, say $p_l$, is an outer pivot (and that the other pivot, $p_r$, is an inner pivot). Let $\mu$ be the automorphism of $\Znm$ from Definition \ref{def:aut_mu}. Clearly, $I_c(v)=I_c(\mu(v))$. By assumption $v_0>0$ and therefore $\mu(v)_0=n-v_0>0$; hence $\sum_{i\in I_c}\mu(v)_i=n-\sum_{i\in I_c} v_i=n$.
Since $0$ is a fixed point of $\mu$, by Lemma \ref{lem:middle_interval_is_n_znm}, $d(v,0)=d(\mu(v),0)\leq \dnm$, as required. Otherwise both $p_l$ and $p_r$ are inner pivots, that is, $I_c(v)\subseteq[1,m]$. By Lemma \ref{lem:upper_middle_is_zero}, $d({v},0) \leq \sum_{i=1}^{p_l}i + \sum_{i=1}^{m-p_r}i + \lfloor\frac{\Icwidth}{2}\rfloor\lceil\frac{\Icwidth}{2}\rceil$ where $\Icwidth=|I_c(v)|$. Let $p=\lfloor\frac{m+n}{2}\rfloor$ so that, by Lemma \ref{lem:dist_of_uz}, $\dz=\sum_{i=1}^p i + \sum_{i=1}^{m-p} i$. Note that $|p-\frac{m}{2}|\leq \hnm$. The assumption $h(v)\geq\hnm$ implies that $p_l\leq p\leq p_r$. Therefore \begin{align*} \dnm - d(v,0) \geq \dz - d(v,0) \geq \sum_{i=p_l+1}^p i + \sum_{i=m-p_r+1}^{m-p} i - \lfloor\frac{\Icwidth}{2}\rfloor\lceil\frac{\Icwidth}{2}\rceil. \end{align*} Clearly, $\sum_{i=p_l+1}^p i + \sum_{i=m-p_r+1}^{m-p} i \geq \sum_{i=1}^{p-p_l} i + \sum_{i=1}^{p_r-p} i$. Let $a=p-p_l$, let $b=p_r-p$ and note that $a+b=\Icwidth$. Therefore, by Fact \ref{fac:split_sum}, $ \dnm - d(v,0) \geq \sum_{i=1}^a i + \sum_{i=1}^{b} i - \lfloor\frac{\Icwidth}{2}\rfloor\lceil\frac{\Icwidth}{2}\rceil\geq 0. $ \end{proof} \begin{lemma}\label{lem:ecc_n_leq_m} Let $2\leq n\leq m$ be two integers. Then $\ecc_{\Znm}(0) = \dnm$. \end{lemma} \begin{proof} Combine Observation \ref{obs:ecc_lower_bound} with Lemmas \ref{lem:close_to_middle_pivot_znm}, \ref{lem:middle_interval_is_n_znm} and \ref{lem:zero_middle_interval}. \end{proof} \begin{theorem}\label{thm:ecc_znm} For all values of $n$ and $m$, the eccentricity of $0$ in $\Znm$ is equal to the eccentricity of $0$ in $\Ynm$ as it appears in Theorem \ref{thm:ynm_ecc}. \end{theorem} \begin{proof} This theorem is the combined result of Lemmas \ref{lem:corner_case_neqo_znm}, \ref{lem:ecc_n_geq_m} and \ref{lem:ecc_n_leq_m}. \end{proof} \chapter{Automorphisms of $\Ynm$}\label{chp:automorphisms} This chapter deals with automorphisms of $\Ynm$. In Section \ref{sec:fundamental_auto}, we introduce three fundamental automorphisms of $\Ynm$ and find the structure of the group they generate. In Section \ref{sec:complete_auto_group}, we show that the group generated by these three fundamental automorphisms is, in fact, precisely the automorphism group of $\Ynm$, whenever $m\neq 2$ and $(n,m)\neq (1,3)$. We cover the case $(n,m)=(1,3)$ separately. \section{Fundamental Automorphisms}\label{sec:fundamental_auto} \begin{definition}\label{def:typical_automorphisms} Let $\varphi$, $\psi$ and $\tau$ be the following three maps from $\Ynm$ to $\Ynm$: $$\begin{array}{c c c l} & \varphi(v_0,v_1,\dots,v_m,v_{m+1}) & = & (v_0+1,v_1,\dots,v_m,v_{m+1}-1) \\ & \psi(v_0,v_1,\dots,v_m,v_{m+1}) & = & (v_{m+1},v_m,\dots,v_1,v_{0}) \\ & \tau(v_0,v_1,\dots,v_m,v_{m+1}) & = & (-v_0,1-v_1,\dots,1-v_{m},-(m+v_{m+1})) \\ \end{array}$$ \end{definition} Note that $\varphi$ and $\tau$ are essentially as in Definition \ref{def:two_automorphisms}. We define them here again for completeness. \begin{observation} $\varphi$, $\psi$ and $\tau$ are automorphisms of $\Ynm$. \end{observation} \begin{definition} Denote $\AGnm=\left<\varphi, \psi, \tau\right>$; the subgroup of $\aut(\Ynm)$ generated by $\{\varphi, \psi, \tau\}$. \end{definition} Recall that the dihedral group $D_n$ ($\forall n\geq 1$) is the group of order $2n$ with presentation: $$D_n=\left<g,h\; |\; g^n=h^2=1, hgh=g^{-1}\right>.$$ Denote by $o(g)$ the order of an element $g$ in a finite group. 
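Before determining the structure of $\AGnm$, the definitions above can be sanity-checked by brute force. The following Python sketch (illustrative only, and not part of the formal development; all names are ad hoc) realizes $\varphi$, $\psi$ and $\tau$ as permutations of the vertex set of $\Ynm$ and generates $\left<\varphi,\psi,\tau\right>$ by closure; for small $n>1$ and $m>0$, its order can be compared with $4n$ (cf.\ Lemma \ref{lem:automorphism_group_order} below).
\begin{verbatim}
from itertools import product

def yoke_vertices(n, m):
    """All v = (v_0, ..., v_{m+1}): buckets in Z_n, middle entries in
    {0, 1}, total sum divisible by n (the last bucket is determined)."""
    verts = []
    for mid in product((0, 1), repeat=m):
        for v0 in range(n):
            last = (-(v0 + sum(mid))) % n
            verts.append((v0,) + mid + (last,))
    return verts

def phi(v, n, m):
    return ((v[0] + 1) % n,) + v[1:m + 1] + ((v[m + 1] - 1) % n,)

def psi(v, n, m):
    return tuple(reversed(v))

def tau(v, n, m):
    return ((-v[0]) % n,) + tuple(1 - x for x in v[1:m + 1]) \
        + ((-(m + v[m + 1])) % n,)

def group_order(n, m):
    verts = yoke_vertices(n, m)
    idx = {v: i for i, v in enumerate(verts)}
    def perm(f):
        return tuple(idx[f(v, n, m)] for v in verts)
    gens = [perm(phi), perm(psi), perm(tau)]
    group = set(gens) | {tuple(range(len(verts)))}
    frontier = list(group)
    while frontier:                    # close under products with generators
        new = []
        for g in frontier:
            for h in gens:
                gh = tuple(g[k] for k in h)   # the permutation g o h
                if gh not in group:
                    group.add(gh)
                    new.append(gh)
        frontier = new
    return len(group)

# |<phi, psi, tau>| should equal 4n whenever m > 0 and (n, m) != (1, 1):
for (n, m) in [(2, 3), (3, 2), (3, 4)]:
    print((n, m), group_order(n, m), 4 * n)
\end{verbatim}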
\begin{theorem}[The structure of $\AGnm$] \label{thm:anm_structure} \begin{enumerate} \item[] \item $\AGnm[1][0]\cong\{1\}$ and $\AGnm[2][0]\cong \AGnm[1][1]\cong C_2$, the cyclic group of order $2$. \item $\AGnm[n][0] \cong D_n$ ($\forall n>2$). \item $\AGnm[1][m] \cong C_2\times C_2$ ($\forall m>1$). \item If $n>1$ and $m>0$, then \begin{enumerate} \item if at least one of $n$ and $m$ is odd, then $\AGnm \cong D_{2n}$; \item otherwise ($n$ and $m$ are even), $\AGnm \cong D_{n}\times C_2$. \end{enumerate} \end{enumerate} \end{theorem} This theorem is the combined result of forthcoming Lemmas \ref{lem:automorphism_group_special_cases} and \ref{lem:automorhism_group_structure}. \begin{observation}\label{obs:anm_relations} The following two properties hold in $\AGnm$ for any $n$ and $m$. \begin{enumerate} \item $\psi\varphi=\varphi^{-1}\psi$ and $\tau\varphi=\varphi^{-1}\tau$. \item $(\tau\psi)^2=\varphi^m$. \end{enumerate} \end{observation} \begin{proof} The first property is trivial. Let $u\in\Ynm$ and $f=\tau\psi$. Then \begin{equation*} \begin{split} (u_0,\dots,u_{m+1}) & \overset{f}{\mapsto} (-u_{m+1},1-u_m,\dots,1-u_1,-(m+u_0)) \\ & \overset{f}{\mapsto} (u_0+m,u_1,\dots,u_m,u_{m+1}-m). \end{split} \end{equation*} This proves the second property. \end{proof} \begin{fact}\label{fac:dihedral_gens} Let $n>2$. If $g$ and $h$ are two elements in a group such that $o(g)=n$, $o(h)=2$ and $gh=hg^{-1}$, then $\left<g,h\right>\cong D_n$. \end{fact} \begin{observation}\label{obs:anm_equal} \begin{enumerate} \item[] \item $\varphi = \psi$ if and only if $n=1$ and $m\leq 1$; \item $\varphi = \tau$ if and only if $n=1$ and $m=0$; \item $\psi = \tau$ if and only if $m=0$; \end{enumerate} \end{observation} \begin{observation}\label{obs:anm_order} The orders of $\varphi$, $\psi$ and $\tau$ in $\AGnm$ are as follows: \begin{enumerate} \item $\psi=Id$ if and only if $(n,m)\in\{(1,0), (1,1), (2,0)\}$, otherwise $o(\psi)=2$; \item $\tau=Id$ if and only if $(n,m)\in\{(1,0), (2,0)\}$, otherwise $o(\tau)=2$; \item $o(\varphi)=n$. \end{enumerate} \end{observation} \begin{corollary}\label{cor:anm_generators_distinct} If $n>1$ and $m>0$, then $\varphi$, $\psi$ and $\tau$ are distinct and of orders $n$, $2$ and $2$, respectively. \end{corollary} \begin{lemma}\label{lem:automorphism_group_special_cases} \begin{enumerate} \item[] \item $\AGnm[1][0]\cong\{1\}$ and $\AGnm[2][0]\cong \AGnm[1][1]\cong C_2$. \item $\AGnm[n][0] \cong D_n$ ($\forall n>2$). \item $\AGnm[1][m] \cong C_2\times C_2$ ($\forall m>1$). \end{enumerate} \end{lemma} \begin{proof} The first case is trivial. By Observations \ref{obs:anm_equal} and \ref{obs:anm_order}, if $n>2$ and $m=0$, then $\psi=\tau\neq Id$. Moreover, by Observation \ref{obs:anm_relations} (Property $1$), Observation \ref{obs:anm_order} and Fact \ref{fac:dihedral_gens}, $\left<\varphi,\psi\right>$ is isomorphic to $D_n$. This proves the second case. If $n=1$ and $m>1$, then $\varphi=Id$, $\psi\neq\tau$, $o(\psi)= o(\tau)=2$ and $\psi\tau=\tau\psi$. This proves the third case. \end{proof} \pagebreak \begin{lemma}\label{lem:automorphism_group_order} If $m>0$ and $(n,m)\neq (1,1)$, then $|\AGnm|=4n$. \end{lemma} \begin{proof} If $n=1$ (and $m>1$), then $|\AGnm[1][m]|=4$, by Lemma \ref{lem:automorphism_group_special_cases}. Assume that $n>1$ (and $m>0$). By Observation \ref{obs:anm_relations}, every element in $\AGnm$ can be written in the form $\varphi^k\tau^i\psi^j$ where $0\leq k\leq n-1$ and $i,j\in\{0,1\}$. Therefore $|\AGnm|\leq 4n$. 
Assume to the contrary that $\varphi^k\tau^i\psi^j=Id$ for some $(k, i, j)\neq (0,0,0)$. By Corollary \ref{cor:anm_generators_distinct}, $\varphi$, $\psi$ and $\tau$ are distinct and of orders $n$, $2$ and $2$, respectively. Therefore, $k\neq 0$. If $i=j=0$, then $\varphi^k=Id$, which is impossible since $o(\varphi)=n$ and $0<k\leq n-1$. If $i=0$ and $j=1$, then $\varphi^k\psi=Id$. This is impossible, since $\varphi^k\psi(0)\neq 0$. On the other hand, if $i=1$, then $\varphi^k\tau\psi^j=Id$. This is also impossible, again, since $\varphi^k\tau\psi^j(0)\neq 0$. This is a contradiction, completing the proof. \end{proof} \begin{lemma}\label{lem:automorhism_group_structure} If $n>1$ and $m>0$, then \begin{enumerate} \item if at least one of $n$ and $m$ is odd, then $\AGnm \cong D_{2n}$; \item otherwise, $\AGnm \cong D_{n}\times C_2$. \end{enumerate} \end{lemma} \begin{proof} We start with Case 1. Assume that at least one of $n$ and $m$ is odd. Therefore, there exists some integer $k$ such that $m+2k$ and $n$ are coprime. For this $k$ define $\alpha=\varphi^k\tau\psi$ and $\beta=\tau\varphi^k$. We show that $\left<\alpha, \beta\right>\cong D_{2n}$. This proves Case 1, since by Lemma \ref{lem:automorphism_group_order}, $|\AGnm|=4n$. $o(\alpha)$ is even, since $m>0$ and $(\alpha^i(0))_j=1$ for every odd integer $i$ and $1\leq j\leq m$. Therefore, $o(\alpha)=2o(\alpha^2)$. By Observation \ref{obs:anm_relations}, $$\alpha^2=\varphi^k\tau\psi\varphi^k\tau\psi=\varphi^k\tau\varphi^{-k}\psi\tau\psi=\varphi^{2k}\tau\psi\tau\psi=\varphi^{m+2k}.$$ Since $m+2k$ and $n$ are coprime, this implies that $o(\alpha^2)=n$. This proves that $o(\alpha)=2n$. Now $\beta(0)\neq 0$, so by Observation \ref{obs:anm_relations}, $o(\beta)=2$ and $\beta\alpha\beta=(\tau\varphi^k)(\varphi^k\tau\psi)(\tau\varphi^k)=\psi\tau\varphi^{-k}=\alpha^{-1}$. By Fact \ref{fac:dihedral_gens}, this proves that $\left<\alpha, \beta\right>\cong D_{2n}$, as required. We proceed to prove Case 2. To show that $\AGnm\cong D_n\times C_2$, it suffices to find two subgroups $N\cong D_n$ and $M\cong C_2$ of $\AGnm$ such that $N\cap M=\{Id\}$, $NM=\AGnm$ and the generator of $M$ commutes with the generators of $N$. Let $N=\left<\varphi, \psi\right>$ and let $M=\left<\gamma\right>$ where $\gamma=\varphi^{\frac{m}{2}}\psi\tau$ ($=\varphi^{-\frac{m}{2}}\tau\psi$). If $n=2$ (and $m>0$), then $\varphi$ and $\psi$ clearly commute. Moreover, by Observations \ref{obs:anm_equal} and \ref{obs:anm_order}, they are also distinct and of order $2$. Therefore, $N\cong C_2\times C_2\cong D_2$. For $n>2$, $N\cong D_n$ by Observations \ref{obs:anm_relations} and \ref{obs:anm_order} and Fact \ref{fac:dihedral_gens}. $M\cong C_2$, since $\gamma^2=(\varphi^{-\frac{m}{2}}\tau\psi)^2=\varphi^{-m}(\tau\psi)^2=Id$ and $\gamma(0)\neq 0$. By Observation \ref{obs:anm_relations}, $\gamma$ commutes with $\varphi$, and $\psi\gamma\psi=\psi(\varphi^{-\frac{m}{2}}\tau\psi)\psi=\varphi^{\frac{m}{2}}\psi\tau=\gamma$. Therefore, $\gamma$ commutes also with $\psi$. For every $1\leq j\leq m$, $\varphi(0)_j=\psi(0)_j=0$ but $\gamma(0)_j=1$. Therefore $\gamma\notin N$ and $N\cap M=\{Id\}$. $\AGnm\subseteq NM$, since $\tau=\psi\varphi^{-\frac{m}{2}}\gamma\in NM$. This completes the proof. \end{proof} \section{The Automorphism Group of $\Ynm$}\label{sec:complete_auto_group} The main result of this section is the following theorem. \begin{theorem}\label{thm:aut_structure} \begin{enumerate} \item[] \item $\aut(\Ynm[1][3])\cong D_4\times C_2.$ \item $\aut(\Ynm)\cong \AGnm$ for all $n\geq 1$ and $m\geq 0$ such that $m\neq 2$ and \mbox{$(n,m)\neq (1, 3)$}. 
\end{enumerate} \end{theorem} This theorem is the combined result of forthcoming Lemmas \ref{lem:y13}, \ref{lem:Anm_is_aut} and Observation \ref{obs:Anm_is_aut}. \begin{lemma}\label{lem:y13} $$\aut(\Ynm[1][3])\cong D_4\times C_2.$$ \end{lemma} \begin{proof} Since $v_0=v_4=0$ for every vertex $v\in\Ynm[1][3]$, we identify each vertex $v$ with $(v_1v_2v_3)$ (since $v_i\in\{0,1\}$ for every $1\leq i\leq 3$, the vertices are well defined without commas), see Figure \ref{fig:Y13}. Let $f_1$, $f_2$ and $f_3$ be three automorphisms of $\Ynm[1][3]$, each one interchanging a single pair of vertices and fixing the rest of the graph: $f_1$ interchanges $(100)$ and $(001)$, $f_2$ interchanges $(101)$ and $(010)$ and $f_3$ interchanges $(110)$ and $(011)$. Let $\tau$ be as in Definition \ref{def:typical_automorphisms} and note that $\tau$ interchanges $(000)$ and $(111)$. $(000)$ and $(111)$ are the only two vertices of valency $2$ in $\Ynm[1][3]$. Therefore, for every $\sigma\in\aut(\Ynm[1][3])$, either $\sigma\in \st(000)$, the stabilizer of $(000)$, or $\tau\sigma\in \st(000)$. This implies that $|\aut(\Ynm[1][3])|=2|\st(000)|$. There are precisely two vertices at each distance $1$, $2$ and $3$ from $(000)$ (and a unique vertex at distance $4$). Therefore $|\st(000)|\leq 2^3=8$. $(000)$ is fixed by $f_1$, $f_2$ and $f_3$ and $|\left<f_1, f_2, f_3\right>|= |C_2\times C_2\times C_2|=8$. This implies that $|\st(000)|=8$ and therefore, $|\aut(\Ynm[1][3])|=16$. Let $\eta = \tau f_1$. Then $o(\eta)=4$ and $\eta f_3=f_3\eta^{-1}$. Therefore, by Fact \ref{fac:dihedral_gens}, $\left<\eta, f_3\right>\cong D_4$. $f_2\notin \left<\eta, f_3\right>$ and $f_2$ commutes with both $\eta$ and $f_3$. Therefore, $\left<\eta, f_2, f_3\right>\cong D_4\times C_2$. Since $|D_4\times C_2|=16=|\aut(\Ynm[1][3])|$, we have $\aut(\Ynm[1][3])\cong\left<\eta, f_2, f_3\right>\cong D_4\times C_2$. \end{proof} \begin{figure}[hbt] \begin{center} \begin{tikzpicture}[scale=0.4] \tikzstyle{every node}=[draw,circle,fill=white,minimum size=4pt,inner sep=0pt] \node (000) at (0,-2) [label=below:\scriptsize(000)] {}; \node (100) at (-3,0) [label=left:\scriptsize(100)] {}; \node (001) at (3,0) [label=right:\scriptsize(001)] {}; \node (101) at (-0.5,1) [label=below:\scriptsize(101)] {}; \node (010) at (0.5,3) [label=above:\scriptsize(010)] {}; \node (011) at (-3,4) [label=left:\scriptsize(011)] {}; \node (110) at (3,4) [label=right:\scriptsize(110)] {}; \node (111) at (0,6) [label=above:\scriptsize(111)] {}; \draw (000) -- (100); \draw (000) -- (001); \draw (100) -- (101); \draw (100) -- (010); \draw (001) -- (101); \draw (001) -- (010); \draw (101) -- (110); \draw (101) -- (011); \draw (010) -- (110); \draw (010) -- (011); \draw (110) -- (111); \draw (011) -- (111); \end{tikzpicture} \end{center} \caption{$\Ynm[1][3]$ (vertices are drawn without the buckets)} \label{fig:Y13} \end{figure} \begin{observation}\label{obs:Anm_is_aut} For $m\leq 1$, $\aut(\Ynm)=\AGnm$. \end{observation} \begin{proof} The cases $\Ynm[1][0]$, $\Ynm[2][0]$ and $\Ynm[1][1]$ are trivial. By Observation \ref{obs:small_m_is_trivial}, if $m=0$ and $n>2$, then $\Ynm$ is the cycle graph on $n$ vertices. If $m=1$ and $n>1$, then $\Ynm$ is the cycle graph on $2n$ vertices. Therefore, in both cases, $\aut(\Ynm)$ is the corresponding dihedral group. By Theorem \ref{thm:anm_structure}, it follows that $\aut(\Ynm)=\AGnm$ when $m\leq 1$. \end{proof} In the rest of this section, unless explicitly stated otherwise, we assume that $m\geq 3$ and $(n,m)\neq (1,3)$ in $\Ynm$. 
The purpose of the rest of this section is the proof of Lemma \ref{lem:Anm_is_aut}, in which we show that indeed $\aut(\Ynm)=\AGnm$. Specifically, Lemma \ref{lem:N_zero_equal} implies that if there exists $\pi\in\aut(\Ynm)$ such that $\pi\notin\AGnm$, then we can assume that $\pi$ maps between a unique pair of vertices in $\Ynm$: $(n-1,0,0,1,0,\dots,0)$ and $(n-2,1,1,0,0,\dots,0)$. This leads to a contradiction. \begin{definition}\label{def:aut_notations}Let $x\in\Ynm$ and let $k\in\{0,\dots,n-1\}$. We use the following notations throughout this section. \begin{enumerate} \item $0_\ell=\overrightarrow{s}_0(0)$ and $0_r=\overleftarrow{s}_m(0)$. \item $0_k=\varphi^k(0)=(k,0,\dots,0,n-k)$. \item $1_k=\varphi^k(0,1,\dots,1,-m)=(k,1,\dots,1,-(m+k))$. \item $N^0(x)=\{y:y\sim x\mbox{ and }d(y,0)=d(x,0)-1\}$. \item $n^0(x)=|N^0(x)|$. \end{enumerate} If $y\in N^0(x)$, we say that $x$ \textbf{covers} $y$ and write $y\lessdot x$. \end{definition} We use diagrams to visualize covering relations between vertices. For example, in the following diagram, the edge between $v$ and $w$ labeled $s_j$ indicates that $w = s_j(v) \lessdot v$, and the directed edge $(u, w)$, with the label $\overleftarrow{s}_i$, indicates that $w = \overleftarrow{s}_i(u) \lessdot u$. \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=2em,minimum width=2em] { u & \; & v\\ \; & w & \; \\}; \path[-stealth] (m-1-1) edge [->] node [above] {$\overleftarrow{s}_i$} (m-2-2) (m-2-2) edge [-] node [above] {$s_j$} (m-1-3); \end{tikzpicture} \end{center} In Definition \ref{def:uvw_setup}, we set up three vertices in $\Ynm$ in a way that will recur throughout the statements in this section. The example diagram above visualizes the covering relations in this setup. \begin{definition}\label{def:uvw_setup} Let $u, v\in \Ynm$ be two distinct vertices such that $N^0(u)= N^0(v)$ and $d(u,0)=d(v,0)\geq 2$. Assume that there exists $w\in N^0(u)= N^0(v)$ such that $w=\overleftarrow{s}_i(u)=s_j(v)$ for some $0\leq i,j\leq m$. We say that $(u, v, w)$ are in \textbf{$\Vij$-position}. \end{definition} \begin{lemma}\label{lem:N_zero_size_far} Let $(u, v, w)$ be in $\Vij$-position. Then $|i-j|>1$. \end{lemma} \begin{proof} $i\neq j$, since $u\neq v$. Assume to the contrary that $|i-j|=1$; assume first that $j=i-1$. By definition, $u, v$ and $w$ are distinct and $s_i(u)$ shifts left; therefore $s_{i-1}(w)$ must shift left as well. We visualize the scenarios in this proof using diagrams in which the three entries that $s_i$ and $s_j$ change are all non-bucket entries. Note that the proof also covers the cases in which one of these entries is a bucket (at most one of the entries, since $m\geq 3$). \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=0.75cm,column sep=1,minimum width=2em] { (u_{i-1},u_{i},u_{i+1})=(0,0,1)\\ (w_{i-1},w_{i},w_{i+1})=(0,1,0) \\ (v_{i-1},v_{i},v_{i+1})=(1,0,0)\\}; \path[-stealth] (m-1-1) edge [->] node [left] {$\overleftarrow{s}_i$} (m-2-1) (m-2-1) edge [->] node [left] {$\overleftarrow{s}_{i-1}$} (m-3-1); \end{tikzpicture} \end{center} By Lemma \ref{lem:geodesic_is_a_pivot_path_ynm} and Observation \ref{obs:shift_direction_pivot_sides}, $u$ has a geodesic pivot $p$ such that $p\geq i+1$. $p$ is also a geodesic pivot of $w$; therefore, by Observation \ref{obs:shift_direction_pivot_sides}, $\overleftarrow{s}_{i-1}(w)=v \lessdot w$, in contradiction to $w\in N^0(v)$. Assume that $j=i+1$. 
Similarly to the previous case, $s_{i+1}(w)$ must shift left as well; $u$ has a geodesic pivot $p$ such that $p\geq i+1$, and $p$ is also a geodesic pivot of $w$. In this case, since $\overleftarrow{s}_{i+1}(w) = v \gtrdot w$, $w$ has no geodesic pivot greater than $i+1$. Therefore, $i+1$ is a geodesic pivot of $w$. $i$ is also a geodesic pivot of $w$ following similar arguments. This implies that $i$ and $i+1$ are geodesic pivots of $v$ and $u$ respectively. In the following diagram, we use the symbol $\|$ to visualize a geodesic pivot. \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=0.5em,minimum width=2em] { (u_{i},u_{i+1},u_{i+2})=(0,1\|1) & \; & (v_{i},v_{i+1},v_{i+2})=(1\|1,0)\\ \; & (w_{i},w_{i+1},w_{i+2})=(1\|0\|1) & \; \\}; \path[-stealth] (m-1-1) edge [->] node [above] {$\overleftarrow{s}_i$} (m-2-2) (m-2-2) edge [->] node [above] {$\overleftarrow{s}_{i+1}$} (m-1-3); \end{tikzpicture} \end{center} We split the rest of the proof of this case ($j=i+1$) into the following three sub-cases. Assume that $n=1$ and that $m$ is even ($3<m$). Then, by Observation \ref{obs:closer_pivot_is_better}, $\frac{m}{2}$ is a geodesic pivot of every vertex in $\Ynm[1][m]$. Note that a unit shift between $\frac{m}{2}$ and $\frac{m}{2}+1$ does not change the distance of any vertex from $0$. This implies that there does not exist $w\in \Ynm[1][m]$ and $i\in\{0,\dots, m\}$ such that both $w\lessdot \overrightarrow{s}_{i}(w)$ and $w\lessdot \overleftarrow{s}_{i+1}(w)$. In the last two cases, we show that $N^0(u)\neq N^0(v)$. Assume that $n=1$ and that $m$ is odd ($3<m$). Then, by Observation \ref{obs:closer_pivot_is_better}, $\lfloor\frac{m}{2}\rfloor$ and $\lceil\frac{m}{2}\rceil$ are geodesic pivots of every vertex in $\Ynm[1][m]$. This implies that $i=\lfloor\frac{m}{2}\rfloor$. For example, $u=(0,0,0,1,1,0,0)$, $w=(0,0,1,0,1,0,0)$ and $v=(0,0,1,1,0,0,0)$ in $\Ynm[1][5]$. Therefore, there exists some $t<i$ such that $\overleftarrow{s}_{t}(v)=v'\lessdot v$ and clearly $v'\notin N^0(u)$. Assume that $1<n$ (and $3\leq m$). Then at least one of the following is true: either $0<i$ or $i+2<m+1$. If $0<i$, then there exists some $t<i$ such that $\overleftarrow{s}_{t}(v)=v'\lessdot v$ and $v'\notin N^0(u)$. If $i+2<m+1$, then there exists some $t\geq i+2$ such that $\overrightarrow{s}_{t}(u)=u'\lessdot u$ and $u'\notin N^0(v)$. In both cases $N^0(u)\neq N^0(v)$. \end{proof} \begin{lemma}\label{lem:bowtie_in_Ic} Let $u\in\Ynm$ and let $s_t$ and $s_k$ be two distinct elements in $\Snm$ such that $1<|t-k|$, $s_t(u)\lessdot u$ and $s_k(u)\lessdot s_ks_t(u)$. Then \begin{enumerate} \item both $s_t(u)$ and $s_k(u)$ shift units from entries in $I_c(u)$; \item $s_t(u)$ and $s_k(u)$ shift in opposite directions; \item $p_l(u)$ and $p_r(u)$ are geodesic pivots. \end{enumerate} \end{lemma} \pagebreak \begin{proof} $s_t$ and $s_k$ commute, since $1<|t-k|$. Denote $v=s_ts_k(u)=s_ks_t(u)$, $w=s_t(u)$ and $w'=s_k(u)$. The assumptions imply that $u$, $v$, $w$ and $w'$ are distinct and the covering relations between them can be presented as follows: \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { u & v\\ w & w' \\}; \path[-stealth] (m-1-1) edge [-] node [left] {$s_t$} (m-2-1) edge [-] node [above] {$s_k$} (m-2-2) (m-1-2) edge [-] node [right] {$s_t$} (m-2-2) edge [-] (m-2-1); \end{tikzpicture} \end{center} Assume that $s_t(u)$ shifts left (the proof is similar if $s_t(u)$ shifts right). 
$t+1\leq p_r(u)$, since $\overleftarrow{s}_t(u)\lessdot u$. If $t+1\leq p_l(u)$, then $s_t$ shifts left in every geodesic pivot path of $u$. This implies that $\overleftarrow{s}_ts_k(u)\lessdot s_k(u)$. A contradiction. Therefore $s_t$ shifts a unit from an entry in $I_c(u)$. $s_k(u)$ also shifts a unit from an entry in $I_c(u)$, following similar arguments. Note that $s_k(u)$ and $s_k(w)$ shift in the same direction, since $1<|t-k|$. This implies that $s_t(u)$ and $s_k(u)$ shift in opposite directions, otherwise $s_k(w)\lessdot w$. Case 3 is implied by the first two, by Observation \ref{obs:closer_pivot_is_better}. \end{proof} \begin{corollary}\label{cor:uvw_setup_in_Ic} Let $(u, v, w)$ be in $\Vij$-position. If $s_j(u)\lessdot u$, then \begin{enumerate} \item $\{i+1, j\}\subseteq I_c(u)$, $\{i, j+1\}\subseteq I_c(v)$; \item $s_j(u)$ shifts right; \item $p_l(u), p_r(u), p_l(v)$ and $p_r(v)$ are geodesic pivots. \end{enumerate} \end{corollary} \begin{lemma}\label{lem:possible_uv} Let $x\in\Ynm$ such that \begin{enumerate} \item $n^0(x)=2$; \item $I_c(x)$ is not trivial (i.e., $x_{p_l+1}\neq 0$, $x_{p_r}\neq 0$ and $\sum_{i\in I_c}x_i=n$); \item both $p_l(x)$ and $p_r(x)$ are geodesic pivots. \end{enumerate} \pagebreak Then $x$ is in one of the following three forms:\\ Either $p_l=-1$ and $p_r=m+1$: \begin{enumerate} \item[(a)] $x=(\frac{n}{2},0,\dots,0,\frac{n}{2})$ (in this case $n$ is even); \item[(b)] $x=(\frac{n-m}{2},1,\dots,1,\frac{n-m}{2})$ (in this case $n-m\geq 2$ and $2\divides n-m$). \end{enumerate} Or $p_l=\frac{m-n}{2}$, $p_r=\frac{m+n}{2}$, and for some $0\leq t_1\leq\frac{m-n}{2}+1$ and $\frac{m+n}{2}\leq t_2\leq m$: \begin{enumerate} \item[(c)] $x=(-(\frac{n-m}{2}+1-t_1),0,\dots, 0, \overbrace{1,\dots,1}^{x_{t_1}, \dots, x_{t_2}},0,\dots,0,-(\frac{n+m}{2}-t_2))$ (in this case $m\geq n$ and $2\divides m-n$). \end{enumerate} \end{lemma} \begin{proof} We split the proof into the following three cases: Assume that $p_l=-1$ and $p_r=m+1$. Then $x_0\neq 0$, $x_{m+1}\neq 0$ and $\sum_{i=0}^{m+1}x_i=n$. Note that all of the non-bucket entries of $x$ must be equal to each other, otherwise $n^0(x)>2$. If $(x_1,\dots,x_m)=(0,\dots, 0)$, then $x=(\frac{n}{2},0,\dots,0,\frac{n}{2})$, the only vertex in which $(x_1,\dots,x_m)=(0,\dots, 0)$ and $\ps_{-1}(x)=\ps_{m+1}(x)$. Similarly, if $(x_1,\dots,x_m)=(1,\dots, 1)$, then $x=(\frac{n-m}{2},1,\dots,1,\frac{n-m}{2})$ ($m<n$, $2\divides m-n$ and $n-m\geq 2$). Assume that both $p_l$ and $p_r$ are inner pivots. Then $x_{p_l+1}=x_{p_r}=1$ and $\sum_{i\in I_c}x_i=n$. Note that all of the non-bucket entries of $x$ that are equal to $1$ are contiguous, otherwise $n^0(x)>2$. Therefore, $p_r-p_l=n$, $p_l=\frac{m-n}{2}$ and $p_r=\frac{m+n}{2}$ (and $m\geq n$ and $2\divides m-n$). This implies that $x$ is in the form of Case (c) of this lemma, since $\ps_{p_l}(x)=\ps_{p_r}(x)$. Finally, we show that if one of $p_l$ and $p_r$ is an inner pivot, the other an outer pivot and $n^0(x)=2$, then $p_l$ and $p_r$ cannot both be geodesic pivots. Assume, without loss of generality, that $p_l=-1$ and $\frac{m}{2}<p_r\leq m$. Then $x_0\neq 0$, $x_{p_r}=1$ and $\sum_{i=0}^{p_r}x_i=n$. Note, again, that all of the non-bucket entries of $x$ that are equal to $1$ are contiguous, otherwise $n^0(x)>2$. This implies that $\ps_{-1}(x)=\ps_{p_l}(x)>\ps_{p_r}(x)$. \end{proof} \begin{observation}\label{obs:no_such_pair} Let $u,v,w\in \Ynm$ such that $u$ and $v$ each have one of the forms in Lemma \ref{lem:possible_uv}. Then $(u,v,w)$ is not in $V_{i,j}$-position. 
\end{observation} \begin{lemma}\label{lem:N_zero_size_only_one} Let $(u, v, w)$ be in $\Vij$-position. Then $n^0(u)=n^0(v)=1$. \end{lemma} \begin{proof} By Lemma \ref{lem:N_zero_size_far}, necessarily $|i-j|>1$. Therefore, $s_i$ and $s_j$ commute, and both decompositions $v=s_{j}s_{i}(u)$ and $v=s_{i}s_{j}(u)$ yield geodesics between $u$ and $v$. We first show that these two geodesics are, in fact, the only two geodesics between $u$ and $v$ and therefore $n^0(u)=n^0(v)\leq 2$. If $1<n$ or $1\leq i,j\leq m-1$, then $u$ and $v$ have four distinct entries (indexed by $i,i+1,j$ and $j+1$) in which they are not equal. This implies that $v=s_{j}s_{i}(u)$ and $v=s_{i}s_{j}(u)$ are the only two geodesics between $u$ and \nolinebreak$v$, as required. Assume that $n=1$ (and $3<m$) and that at least one of $i$ and $j$ is in $\{0,m\}$. Note that in $s_js_i(v)$ and $s_is_j(v)$, the shifts $s_i$ and $s_j$ go in the opposite directions to those in $v=s_js_i(u)$ and $v=s_is_j(u)$. Therefore, we can assume, without loss of generality, that $i\in\{0,m\}$ (by interchanging the roles of $s_i$ and $s_j$ if $i\notin\{0,m\}$; and if $s_j(u)$ shifts right, then also by interchanging the roles of $u$ and $v$). Assume also that $i=0$ (the proof for the case $i=m$ follows by similar arguments). If $j=m$, then $v=s_{j}s_{i}(u)$ and $v=s_{i}s_{j}(u)$ are the only two geodesics from $u$ to $v$, since $3<m$. Therefore $2\leq j\leq m-1$. In this case, $\sum_{i=1}^{m}(u_i - v_i) = 1$; therefore either $\overleftarrow{s}_0$ or $\overrightarrow{s}_m$ is in every word corresponding to a geodesic from $u$ to $v$. It is sufficient to show that $\overleftarrow{s}_0$ is in every word corresponding to a geodesic from $u$ to $v$. If $j< m-1$, then clearly $\overleftarrow{s}_0$ is in every word corresponding to a geodesic from $u$ to $v$. The same result follows if $j= m-1$, again, since $m>3$. It remains to show that $n^0(u) = n^0(v) \neq 2$. Assume to the contrary that $n^0(u) = n^0(v) = 2$. Since $v=s_js_i(u)$ and $v=s_is_j(u)$ are the only two geodesics from $u$ to $v$, $N^0(u)=N^0(v)=\{s_i(u), s_j(u)\}=\{s_j(v), s_i(v)\}$. Therefore $s_j(u)\lessdot u$ and the vertices $u$ and $v$ are as in Corollary \ref{cor:uvw_setup_in_Ic}. By the second property of Corollary \ref{cor:uvw_setup_in_Ic}, $I_c(u)$ and $I_c(v)$ are not trivial (and $n>1$). Therefore, $u$ and $v$ are two distinct vertices, each with one of the forms as in Lemma \ref{lem:possible_uv}. This is a contradiction by Observation \ref{obs:no_such_pair}. \end{proof} \begin{lemma}\label{lem:N_zero_equal} Let $(u, v, w)$ be in $V_{i,j}$-position. Then \begin{enumerate} \item $u$ and $v$ are the unique pair of vertices $(n-1,0,0,1,0,\dots,0)$ and \linebreak $(n-2,1,1,0,0,\dots,0)$, and $w=(n-1,0,1,0,0,\dots,0)$; \item if $n=1$, then $6\leq m$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:N_zero_size_only_one}, $n^0(u)=n^0(v)=1$. We first prove that if $n=1$, then $6\leq m$. Assume to the contrary that $n=1$ and $m\in\{4,5\}$. In this case, there are precisely three vertices in $\Ynm$ which cover only one vertex such that the covered vertex is obtained via a left unit shift: $0_\ell$, $(0,1,1,0,\dots,0)$ and $(0,0,1,0,\dots,0)$. Therefore either $u=(0,1,1,0,\dots,0)$ and $s_i=\overleftarrow{s}_0$ or $u=(0,0,1,0,\dots,0)$ and $s_i=\overleftarrow{s}_1$, since $w\neq 0$. In both cases, $n^0(v)=2$ for every possible $s_j$. A contradiction. It is simple to verify that the pair of vertices given in this lemma indeed satisfies the conditions of the lemma. 
Let $(u,v,w)$ be in $V_{i,j}$-position, where $u$ and $v$ are any other pair of vertices. We show that $n^0(v)>1$. Since $n^0(u)=1$ and $s_i(u)$ shifts left, every geodesic between $u$ and $0$ contains only left unit shifts, and $u$ consists of a sequence of $k\geq 1$ consecutive non-zero entries which are to be shifted into the left bucket in every geodesic from $u$ to $0$ (including the case for $k=1$ where $u=(u_0,0,\dots,0,u_{m+1})$ with $0<u_{m+1}<\frac{n}{2}$). If $k=1$, then $i\notin\{0,2\}$, since $u\notin\{0_\ell, (n-1,0,0,1,0,\dots,0)\}$. Clearly, $s_j\in\{s_0,s_m\}$ and in both cases, $n^0(v)=2$. Similarly, if $k>1$, then it is simple to verify that $u=(n-2,1,1,0,0,\dots,0)$ is the only case in which there exists $s_j$ such that $n^0(v)=1$. \end{proof} \pagebreak \begin{observation} By symmetric arguments to the ones in the proof of Lemma \ref{lem:N_zero_equal}, if $s_i$ were to shift right in Definition \ref{def:uvw_setup}, then the only pair of vertices for which $n^0(u) = n^0(v)=1$ would be $(0,\dots,0,1,0,0,n-1)$ and $(0,\dots,0,0,1,1,n-2)$. \end{observation} \begin{observation}\label{obs:assume_fixing_zero} If $f\in\aut(\Ynm)$, then there exist $\alpha\in\{0,\dots,n-1\}$ and $\beta, \gamma\in\{0,1\}$ such that $0$, $0_\ell$ and $0_r$ are fixed points of $\pi=\psi^\gamma\tau^\beta\varphi^\alpha f$. \end{observation} \begin{proof} A vertex in $\Ynm$ has valency $2$ if and only if all of its non-bucket entries are equal to each other. Therefore, there exist $\alpha\in\{0,\dots,n-1\}$ and $\beta\in\{0,1\}$ such that $0$ is a fixed point of $\pi=\tau^\beta\varphi^\alpha f$. $0$ is also a fixed point of $\psi$, and $\psi$ interchanges the only two neighbors of $0$: $0_\ell$ and $0_r$. Therefore, there exists $\gamma\in\{0,1\}$ such that $0$, $0_\ell$ and $0_r$ are fixed points of $\pi=\psi^\gamma\tau^\beta\varphi^\alpha f$. \end{proof} \begin{lemma}\label{lem:assume_fixing_zero_one} Let $\pi\in\aut(\Ynm)$ such that $0$, $0_\ell$ and $0_r$ are fixed points of $\pi$. Then $0_1$ and $0_{n-1}$ are also fixed points of $\pi$. \end{lemma} \begin{proof} If $n=1$, then $0_1=0_{n-1}=0$, which is a fixed point of $\pi$. If $n>2$, then $0_1$ and $0_{n-1}$ are the only (nonzero) vertices of valency $2$ with a single geodesic between them and $0$. Therefore, $0_1$ and $0_{n-1}$ are either fixed or interchanged by $\pi$. Note that the geodesic between $0_1$ and $0$ contains $0_r$ and does not contain $0_\ell$ whereas the geodesic between $0_{n-1}$ and $0$ contains $0_\ell$ and does not contain $0_r$. Therefore, $0_1$ and $0_{n-1}$ are fixed by $\pi$, since $0_r$ and $0_\ell$ are fixed by $\pi$. Assume that $n=2$ and $m>3$. In this case, $0_1=0_{n-1}$ and there are four vertices with valency $2$: $0, 0_1, 1_0$ and $1_1$. Note that $d(0_1,0)=m+1$ and, by Observation \ref{obs:closer_pivot_is_better} and Fact \ref{fac:split_sum_center}, $d(1_k,0)\geq \binom{\lfloor\frac{m}{2}\rfloor+1}{2}+\binom{\lceil\frac{m}{2}\rceil+1}{2}$ (for $k\in\{0,1\}$). Therefore, since $m>3$, $d(0_1,0)<d(1_k,0)$ for both $k=0$ and $k=1$. This proves that $0_1$ is fixed by $\pi$. The only case left is $n=2$ and $m=3$. Note that $x=(1,1,0,1,1)\in\Ynm[2][3]$ is fixed by $\pi$, since it is the only vertex at distance $2$ from $0$ that is a neighbor of both $0_\ell$ and $0_r$. Both vertices $1_0$ and $1_1$ are at distance $2$ from $x$ whereas the vertex $0_1$ is at distance $4$ from $x$. Therefore $0_1$ is fixed by $\pi$. 
\end{proof} \begin{lemma}\label{lem:Anm_is_aut} Let $n\geq 1$ and $m\geq 3$ be two integers such that $(n,m)\neq (1,3)$. Then $\aut(\Ynm)=\AGnm$. \end{lemma} \begin{proof} Let $f\in\aut(\Ynm)$. By Observation \ref{obs:assume_fixing_zero} and Lemma \ref{lem:assume_fixing_zero_one}, there exist integers $\alpha, \beta, \gamma$ such that $0$, $0_\ell$, $0_r$, $0_1$ and $0_{n-1}$ are fixed points of $\pi = \psi^\gamma \tau^\beta \varphi^\alpha f$. We show that $\pi$ is, in fact, the trivial automorphism. Assume to the contrary that $\pi\neq Id$ and let $u\in\Ynm$ be a vertex at minimal distance $d$ from $0$ such that $u$ is not fixed by $\pi$. Denote $v=\pi(u)$. Note that $N^0(u)=N^0(v)$, by the minimality of $d$, and let $w\in N^0(u)= N^0(v)$. Note that $d\geq 2$ and therefore $w\neq 0$. Let $0\leq i,j\leq m$ such that $s_i(u)=s_j(v)=w$, so that $v=s_j s_i(u)$. Assume that $s_i$ shifts left (the proof of the other case follows by symmetric arguments). Therefore, $(u, v, w)$ are in $V_{i,j}$-position. By Lemma \nolinebreak \ref{lem:N_zero_equal}, $u$ and $v$ are the unique pair of vertices $(n-1,0,0,1,0,\dots,0)$ and \linebreak $(n-2,1,1,0,0,\dots,0)$; and if $n=1$, then $6\leq m$. If $m>3$, then $u$ and $v$ have different valencies, in contradiction to $\pi(u)=v$. Finally, assume $m=3$ (and $n>1$). Then $u$ and $v$ are the pair of vertices $u_1=(n-1,0,0,1,0)$ and $u_2=(n-2,1,1,0,0)$. Since $0_{n-1}$ is a fixed point of $\pi$, $d(u_1,0_{n-1})=d(u_2,0_{n-1})$. Therefore, $$d(\varphi(u_1), 0)=d(\varphi(u_1),\varphi(0_{n-1})) = d(\varphi(u_2),\varphi(0_{n-1}))=d(\varphi(u_2),0).$$ But $d(\varphi(u_1),0)=1$ and $d(\varphi(u_2),0)=3$. A contradiction. \end{proof}
{ "timestamp": "2020-12-17T02:20:35", "yymm": "2012", "arxiv_id": "2012.09010", "language": "en", "url": "https://arxiv.org/abs/2012.09010" }
\section{Introduction} In this paper I propose new methods of estimating treatment effects for panel models with heterogeneous trends. Two motivational numerical examples are illustrated in Figure \ref{fig:motiv} based on simulated data generated by a model considered by Abadie, Diamond and Hainmueller (2010, ADH), with details given in Appendix \ref{subsec:dgp}. Trends are plotted in the figure for the true untreated outcomes, the ADH synthetic control outcomes, and the construction by one of my new methods. In part (a) of Figure \ref{fig:motiv}, the ADH synthetic control outcomes are far from the truth even for the pre-treatment periods, presumably due to the violation of the convexity or interpolation assumption (ADH, 2010; Gobillon and Magnac, 2016; see also Figure \ref{fig:alltrends} in the appendix for the generated untreated outcomes). Though not of much use, the ADH results at least do not mislead the researcher, as their inappropriateness is unequivocal. In part (b), however, the ADH synthetic control looks flawless for the pre-treatment periods, but the post-treatment synthetic control outcomes are far from the truth. Later developments such as Hsiao, Ching and Wan (2012, HCW hereafter) and Doudchenko and Imbens (2017) suffer from similar biases, while the methods I propose in this paper work well, as Figure \ref{fig:motiv} shows. \begin{figure} \caption{Trends of counterfactual outcomes} \label{fig:motiv} \begin{center} (a) Pre-treatment outcomes not traced by ADH's synthetic control \includegraphics[width=.95\columnwidth]{055-orig/trends-synth.pdf} (b) Bias in post-treatment counterfactual outcome estimation \includegraphics[width=.95\columnwidth]{055-flat/trends-synth.pdf} \end{center} \footnotesize \emph{Note.} Simulated data. See Appendix \ref{subsec:dgp} for the data generating processes. ADH's (2010) counterfactual trends are found using the R package Synth. (a) The ADH counterfactual outcomes are far from the truth even for the pre-treatment period. (b) The ADH synthetic control is flawless in the pre-treatment period, but ADH's post-treatment counterfactual trend is severely biased. \end{figure} The model considered here is identical to ADH's (2010), and is given by \begin{equation} \label{eq:model} y_{it}^0 = \mu_i + \gamma_t'z_i + \delta_t'h_i + u_{it}, \quad y_{it}^1 = \tau_{it} + y_{it}^0, \end{equation} where $z_i$ and $y_{it}$ are observed, with $y_{it} = y_{it}^1$ if the $i$th unit is treated in period $t$ and $y_{it} = y_{it}^0$ otherwise. Unit 1 is treated for $t>T_0$, and the remaining units ($i=2,\ldots,J+1$) are untreated for all $t$. The unobservable trends $\gamma_t$ and $\delta_t$ are fixed effects that can be dependent on any other random variables. For example, $\gamma_t$ can be small in magnitude in the pre-treatment periods and large in the post-treatment periods. Similarly, $\mu_i$ are arbitrary fixed effects. The unobservable constituents $\mu_i$, $\gamma_t$ and $\delta_t$ should be normalized somehow for identification, but how they are normalized is of no consequence because I take the difference-in-differences (DID) approach. The observed vector $z_i$ contains $K+1$ components including the constant term for common time effects, and the unobservable $h_i$ has $r$ elements, where $K$ and $r$ are typically small. The variables $z_i$ and $h_i$ determine how each unit responds to common shocks $\gamma_t$ and $\delta_t$. The random errors $u_{it}$ are assumed to have zero mean conditional on $z_i$ and $h_i$. 
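To fix ideas, the following is a minimal simulation sketch of the model in (\ref{eq:model}). All parameter choices are hypothetical illustrations (in particular, they are \emph{not} the data generating process of Appendix \ref{subsec:dgp}); NumPy is assumed.
\begin{verbatim}
import numpy as np

# Simulate y_it^0 = mu_i + gamma_t' z_i + delta_t' h_i + u_it (illustrative values).
rng = np.random.default_rng(0)
J, T, T0, K, r = 20, 30, 20, 2, 1        # unit 1 treated after T0; units 2..J+1 untreated
z = rng.normal(size=(J + 1, K + 1))
z[:, 0] = 1.0                            # observed trend predictors, with constant term
h = rng.normal(size=(J + 1, r))          # unobserved factor loadings
mu = rng.normal(size=J + 1)              # arbitrary unit fixed effects
gamma = np.cumsum(rng.normal(size=(T, K + 1)), axis=0)  # arbitrary (random-walk) trends
delta = np.cumsum(rng.normal(size=(T, r)), axis=0)
u = rng.normal(scale=0.5, size=(T, J + 1))
y0 = mu + gamma @ z.T + delta @ h.T + u  # untreated potential outcomes, T x (J+1)
y = y0.copy()
y[T0:, 0] += 5.0                         # constant treatment effect for unit 1, t > T0
\end{verbatim}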
The goal is to find $w_2, \ldots, w_{J+1}$ such that the linear combination $\sum_{j=2}^{J+1} w_j y_{jt}$ forms a sensible counterfactual comparison for the treated unit while controlling for the trends due to the common shocks $\gamma_t$ and $\delta_t$. Unlike Doudchenko and Imbens (2017), I firmly base my analysis on the model given by (\ref{eq:model}). That is, the goal is to provide weights $w_2, \ldots, w_{J+1}$ such that $y_{1t}^0 - \sum_{j=2}^{J+1} w_j y_{jt}$ is free from confounding trends driven by $\gamma_t'z_i$ and $\delta_t'h_i$ in the model. Identification is sought not by algorithm but by the model and the population distribution of the related random variables. My approach begins with distinguishing the variables responsible for trend heterogeneity and those which are balanced on in order to enhance comparability. When a set of variables (such as $z_i$ and $h_i$) is responsible for heterogeneous trends, they should be \emph{exactly} balanced on by hard constraints in order to avoid bias due to uncontrolled trends, since the components $\gamma_t$, $\delta_t$, $z_i$ and $h_i$ are fixed effects; the balancing covariates such as pre-treatment outcomes, on the other hand, need not be exactly matched on. The importance of exact matching on $z_i$ and $h_i$ has been overlooked in the literature. As discussed in Section \ref{sec:comp} later, ADH's (2010) algorithm is relevant in a subtle way but their nonnegativity constraint is obstructive. HCW's (2012) regression-based method and Doudchenko and Imbens's (2017) elastic-net proposal do not attend to the heterogeneous trends $\gamma_t'z_i$ and $\delta_t'h_i$. Consequences of overlooking this point are visible in Figure \ref{fig:motiv} above and in Figure \ref{fig:DI} later in Section \ref{subsec:di}. Exact balancing on trending covariates does not remove the need for regularization, especially when the number $J$ of the untreated units is comparable to or larger than the number of balancing covariates, as is the case in many applications. Without regularization the weight vector may not be uniquely identified. Undoubtedly, all the extant methods implement regularization in some way. ADH (2010) impose the nonnegativity and adding-up constraints as hard restrictions. HCW (2012) select a subset of control groups based on the researcher's judgment. Doudchenko and Imbens (2017) implement elastic-net penalties. I also consider regularization, where the penalty term is motivated by ordinary least squares (OLS), rather than given heuristically. My proposal leads to a `constrained ridge' regression and its lasso and elastic-net variants, all of which are now well accepted by the econometric community. The rest of this paper is organized as follows. Section \ref{sec:est} presents the new estimators, and Section \ref{sec:comp} compares them with extant estimators. The last section contains concluding remarks. All the proofs are gathered in the appendix, which also contains discussions on establishing asymptotics. Throughout the paper, $Y_t$ and $U_t$ denote the $J\times 1$ vectors of $y_{jt}$ and $u_{jt}$, respectively, for $j\ge 2$, i.e., for the untreated units. The weight vector $(w_2, \ldots, w_{J+1})'$ is denoted by $w$, and $Z$ is the $(K+1)\times J$ matrix $(z_2,\ldots, z_{J+1})$. The exact-balancing restriction for $z_i$ is thus written as $z_1 = Zw$. \section{Estimation} \label{sec:est} This section presents the new estimators. 
Section \ref{subsec:z} considers a model with a common component $\gamma_t'z_i$ but without latent factors in order to motivate the exact-matching constraint $z_1=Zw$ and regularization. Section \ref{subsec:zq} considers the same model but introduces balancing covariates. Section \ref{subsec:zqh} extends the analysis to models with unobservable common factors. \subsection{Heterogeneous trends on observables} \label{subsec:z} To begin with, consider the model in (\ref{eq:model}) without $h_i$, so the potential untreated outcomes are modeled by $y_{it}^0 = \mu_i + \gamma_t'z_i + u_{it}$, where $u_{it}$ shows no systematic trends if the model is correctly specified. For $s$ and $t$ with $s\le T_0 < t$, where $T_0$ is the last period before treatment, we have \begin{equation} \label{eq:dy} y_{it}-y_{is} = \tau_{1t} I(i=1) + (\gamma_t-\gamma_s)'z_i + (u_{it}-u_{is}), \quad i=1,2,\ldots,J+1, \end{equation} with $I(\cdot)$ denoting the indicator function. In (\ref{eq:dy}), a post-treatment period $t$ is compared with a single pre-treatment period $s$ for the sake of simple exposition. Generalization by changing $y_{is}$ to $T_0^{-1} \sum_{s=1}^{T_0} y_{is}$ or any other weighted average makes no serious difference in the arguments to follow; likewise, $y_{it}$ can be replaced with an average over the post-treatment periods. An obvious estimator of $\tau_{1t}$ in (\ref{eq:dy}) can be obtained by the OLS regression of $y_{it}-y_{is}$ on $I(i=1)$ and $z_i$ using the $J+1$ cross-sectional observations as the sample. Because the dummy variable $I(i=1)$ has value 1 only for $i=1$, the OLS estimator of $\gamma_t-\gamma_s$ is also obtained by regressing $y_{it}-y_{is}$ on $z_i$ using $i\ge 2$, and then $\tau_{1t}$ is estimated as the prediction error for $i=1$. That is, the OLS estimator of $\gamma_t-\gamma_s$ is $(ZZ')^{-1}Z(Y_t-Y_s)$, and \[ \hat\tau_{1t} = (y_{1t}-y_{1s}) - (Y_t-Y_s)'Z'(ZZ')^{-1} z_1. \] With $w_a$ denoting $Z'(ZZ')^{-1}z_1$, this $\hat\tau_{1t}$ is written as $\hat\tau_{1t} = (y_{1t}-y_{1s}) - (Y_t - Y_s)'w_a$, which is the DID estimator using $Y_t'w_a$ as the constructed control group. In the simple case of $z_i=1$, the elements in $w_a$ are uniform, i.e., $w_a = J^{-1} (1,1,\ldots,1)'$, and $Y_t'w_a$ is the unweighted average of $y_{jt}$ over the untreated units. In this sense $w_a$ generalizes the unweighted averaging operator. Note that $w_a$ depends on the $z_i$ only, and the choice of $s$ and $t$ is irrelevant. The weight vector $w_a$ eliminates the confounding trends driven by $z_i$ from $y_{1t} - Y_t'w_a$ because \begin{align*} y_{1t} - Y_t'w_a &= (\tau_{1t} + \mu_1 + \gamma_t'z_1 + u_{1t} ) - (\bmu'w_a + \gamma_t' Zw_a + U_t'w_a)\\ &= \tau_{1t} + (\mu_1 - \bmu'w_a) + (u_{1t} - U_t'w_a), \quad \bmu=(\mu_2,\ldots, \mu_{J+1})', \end{align*} and thus the DID estimator $\hat\tau_{1t}$ satisfies \[ \hat\tau_{1t} = \tau_{1t} + [ (u_{1t}-u_{1s}) - (U_t - U_s)'w_a]. \] We clearly have $\E(\hat\tau_{1t}) = \tau_{1t}$ because $w_a$ is a function of $z_1,\ldots, z_{J+1}$, provided that the random disturbances $u_{jt}$ have zero mean for all $t$ conditional on the trending covariates $z_1, \ldots, z_{J+1}$. The above $w_a$ is not the only weight vector that gives an unbiased estimator of $\tau_{1t}$ by DID. Any $w$ satisfying $z_1=Zw$ and $\E(U_t'w)=0$ works because then $y_{1t}^0 - Y_t'w = (\mu_1-\bmu'w) + (u_{1t} - U_t'w)$. 
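Continuing the simulation sketch above (variable names carried over; this is an illustration, not a packaged implementation), $w_a$ and the DID estimate can be computed in a few lines:
\begin{verbatim}
Z = z[1:, :].T                            # (K+1) x J matrix of the untreated units' z_i
z1 = z[0, :]                              # the treated unit's trend predictors
w_a = Z.T @ np.linalg.solve(Z @ Z.T, z1)  # w_a = Z'(ZZ')^{-1} z_1
assert np.allclose(Z @ w_a, z1)           # exact balancing holds by construction
s, t = T0 - 1, T - 1                      # one pre- and one post-treatment period
tau_hat = (y[t, 0] - y[s, 0]) - (y[t, 1:] - y[s, 1:]) @ w_a
\end{verbatim}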
Given the arbitrariness of $\gamma_t$, unbiased estimation of $\tau_{1t}$ requires $z_1=Zw$ as a minimal condition, which is the exact balancing constraint emphasized in the introduction, and which $w_a$ turns out to satisfy. It is noteworthy that $w_a=Z'(ZZ')^{-1} z_1$ is the solution to the constrained $\ell_2$ minimization \begin{equation} \label{eq:min0} \min_{w} w'w \text{ subject to } z_1 = Zw. \end{equation} (See the appendix for a proof that $w_a$ solves (\ref{eq:min0}).) That is, $w_a$ is the smallest (in terms of Euclidean norm) of the weight vectors satisfying $z_1=Zw$. Under the $iid$ assumption for $u_{jt}$, $w_a$ also minimizes the sampling variability in the constructed counterfactual outcomes conditional on $z_1,\ldots, z_{J+1}$, since $\var(U_t'w) = \sigma_u^2 w'w$ for nonrandom $w$. In plain words, $Y_t'w_a$ would exhibit the least fluctuation over time while satisfying $z_1=Zw_a$. How a weight $w$ should be defined for the model $y_{it}^0 = \mu_i + \gamma_t'z_i + u_{it}$ is a subtle matter. For a given $w$, let $\hat\tau_{1t}(w) = (y_{1t} - Y_t'w) - (y_{1s}-Y_s'w)$ for $s\le T_0<t$, which is the DID estimator using $Y_t'w$ as the constructed comparison group. The restriction that $\hat\tau_{1t}(w)$ should be unbiased for $\tau_{1t}$ alone does not identify a $w$ in the population since $z_1=Zw$ and $\E(U_t'w)=0$ are satisfied by infinitely many $w$'s, if $J>K+1$. For example, when $z_i=1$, any $J\times 1$ vector of fixed numbers that sum up to 1, such as the uniform weights, uneven weights like $w=(0.2,0.8,0,\ldots,0)'$, non-convex weights like $w=(-0.3,1.3,0,\ldots,0)'$, and infinitely many others, allows $\hat\tau_{1t}(w)$ to be unbiased for $\tau_{1t}$ if the model is correctly specified so that $\E(u_{it}|z_1,\ldots,z_{J+1})=0$ for all $t$. The weight $w_a = Z'(ZZ')^{-1} z_1$ is just one particular choice that generalizes the uniform weights. The identification of $w_a$ further requires the minimization of $w'w$ in (\ref{eq:min0}) on top of the unbiasedness requirement ($z_1=Zw$). A natural alternative to $w'w$ in (\ref{eq:min0}) is the $\ell_1$ norm $\Vert w\Vert_1 = \sum_{j=2}^{J+1} |w_j|$, which leads to \begin{equation} \label{eq:min0.l1} \min_w \Vert w\Vert_1 \text{~~subject to~~} z_1 = Zw, \end{equation} a constrained $\ell_1$ minimization problem, also known as basis pursuit (see Mallat, 2009, Chapter 12). Algorithms using the Alternating Direction Method of Multipliers (ADMM) are available for this problem (e.g., the R package ADMM). The minimization problem (\ref{eq:min0.l1}) can also be written as a standard linear program, \begin{equation} \label{eq:qp} \min_{w^+, w^-} \sum_{j=2}^{J+1} (w^+_j + w^-_j) \text{ subject to } z_1 = Zw^{+} - Zw^{-},\;\; w^+_j\ge 0,\; w^-_j\ge 0 \;\forall j \end{equation} because $w=w^+-w^-$ and $\Vert w\Vert_1 = \sum_{j=2}^{J+1} (w^+_j + w^-_j)$ for $w_j^+ = \max(w_j,0)$ and $w_j^- = -\min(w_j,0)$. Note that the $\ell_1$ minimization problem does not necessarily have a unique solution (e.g., when $z_i=1$), in which case we can minimize $\varepsilon w'w + \Vert w\Vert_1$ instead of $\Vert w\Vert_1$ for some small positive constant $\varepsilon$ such as $10^{-4}$ to achieve uniqueness (see Gaines et al., 2018, p. 863). The elastic-net style loss function $\frac{1-\alpha}{2} w'w + \alpha \Vert w\Vert_1$ with other values of the parameter $\alpha$ can also be used. 
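Since the objective in (\ref{eq:qp}) is linear, the problem can be handed to any linear-programming routine. The following sketch uses SciPy's \texttt{linprog}, continuing the simulation above; it is an illustration, not a recommended production implementation.
\begin{verbatim}
from scipy.optimize import linprog

# Solve (eq:qp): minimize sum(w+ + w-) s.t. z1 = Z w+ - Z w-, w+, w- >= 0.
c = np.ones(2 * J)
A_eq = np.hstack([Z, -Z])                 # (K+1) x 2J equality constraints
res = linprog(c, A_eq=A_eq, b_eq=z1, bounds=(0, None))
w_l1 = res.x[:J] - res.x[J:]              # recover w = w+ - w-
assert np.allclose(Z @ w_l1, z1, atol=1e-8)
\end{verbatim}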
The elastic-net minimization algorithm can be implemented as a constrained lasso using $\alpha \Vert w\Vert_1$ as penalty, the zero vector as the response vector, and $[(1-\alpha)/2]^{1/2} I_J$ as the feature matrix. See James et al. (2019) for a fast algorithm for constrained lasso and its implementation by the R package PACLasso. Given a weight vector $w$, the presence of systematic trends in the prediction error $y_{1t}-Y_t'w$ can be tested for the pre-treatment periods by regressing it on $t$, unless $T_0$ is too small. There is no `generated regressors' problem if $w$ is a function of $z_1, \ldots, z_{J+1}$. In addition, the mutual compatibility of two estimated weight vectors, $w_{(1)}$ and $w_{(2)}$, say, can be tested by regressing $Y_t'w_{(1)} - Y_t'w_{(2)}$ on $t-T_0$, $\mathit{after}_t$ and $\mathit{after}_t (t-T_0)$ using all the observations, where $\mathit{after}_t$ is the dummy variable for $t>T_0$. Overall significance can be interpreted as evidence of model misspecification, although overall insignificance does not necessarily imply correct model specification because $U_t'[ w_{(1)} - w_{(2)}]$ can show no systematic trends while some $u_{it}$'s still do. If $z_i$ contains pre-treatment outcomes (e.g., ADH, 2010), the estimated $w$ is not necessarily exogenous, and the generated regressors problem applies. In that case, the testing results should be taken only as a diagnostic summary statistic. In all cases, judgment by visual examination, rather than formal testing, is a promising alternative. With regard to how to present the estimated counterfactual outcomes, if $z_i$ contains no pre-treatment dependent variables, then $Y_t'w$ and $y_{1t}$ may have systematically different levels just as in the standard DID framework. The counterfactual outcomes are, thus, better presented by $c+Y_t'w$ such that the intercept $c$ deals with the pre-treatment level difference. For example, $c$ can be the average of $y_{1s} - Y_s'w_a$ over the pre-treatment periods. This modification does not change anything about the estimation of treatment effects but only helps presentation. \subsection{Balancing covariates} \label{subsec:zq} We have thus far considered controlling for heterogeneous trends driven by $\gamma_t'z_i$ by imposing the exact-matching constraints $z_1=Zw$. In most applications the number $K$ of the nonconstant variables in $z_i$ is much smaller than the number $J$ of untreated units, and the restrictions $z_1=Zw$ do not identify a unique $w$. As a supplementary means to identify a single vector, we have considered minimizing the $\ell_2$, the $\ell_1$, or an elastic-net norm of $w$. Now, besides the trending covariates $z_i$, the researcher may also want some other variables to be balanced on in pursuit of robustness against outliers or local misspecification. Typical balancing covariates include pre-treatment outcomes or their deviations from the pre-treatment average, while other exogenous features such as post-treatment controls can also be taken into consideration. Unlike the trend predictors $z_i$, these balancing covariates need not be matched on exactly. Let $q_i$ denote the $m\times 1$ vector of such balancing covariates, e.g., $q_i = (y_{i1}, \ldots, y_{iT_0})'$, where $m$ can be larger or smaller than $J$. Let $Q$ be the $m\times J$ matrix of $q_i$ for the untreated units, i.e., $Q=(q_2, \ldots, q_{J+1})$. 
Matching seeks to make $(q_1-Qw)'(q_1-Qw)$ as small as possible, which leads to a natural extension of (\ref{eq:min0}) to \begin{equation} \label{eq:min2} \min_w\; (q_1-Qw)'(q_1-Qw) + \lambda w'w \text{~~subject to~~} z_1=Zw \end{equation} for a user-specified tuning parameter $\lambda \ge 0$ (and $\lambda>0$ if $Q'Q$ is singular). This is a constrained ridge (CRIDGE) regression of $q_1$ on $Q$ with penalty $\lambda w'w$ and constraints $z_1=Zw$. The shrinkage parameter $\lambda$ inversely relates to the desired matching quality relative to the magnitude $w'w$. If $\lambda=0$ (allowed if $Q'Q$ is nonsingular), we pursue best matching without shrinkage. If $\lambda=\infty$, we give up on balancing and pursue maximal shrinkage, leading to $w_a$ in the previous section. A finite positive $\lambda$ is a compromise. In all cases, we explicitly impose the restrictions $z_1=Zw$, and thus heterogeneous trends due to different $z_i$ are perfectly controlled for. Given $\lambda$, the solution to (\ref{eq:min2}) is \begin{equation} \label{eq:w2} \hat{w} = \tilde{w}_{ridge} + G_{\lambda}^{-1} Z' ( Z G_{\lambda}^{-1} Z' )^{-1} (z_1 - Z\tilde{w}_{ridge}), \quad G_{\lambda} = Q'Q + \lambda I_J, \end{equation} where $\tilde{w}_{ridge} = G_{\lambda}^{-1} Q'q_1$ is the unconstrained ridge estimator (see the appendix for a proof). Note that $G_{\lambda}$ is invertible if $\lambda>0$ whether or not $Q'Q$ is, and thus $\hat{w}$ is well defined if $Z$ is of full row-rank and $\lambda>0$. The resulting treatment effect estimators are obtained by DID using $Y_t'\hat{w}$ as the constructed control group. There is a more revealing expression for $\hat{w}$ than (\ref{eq:w2}). To derive it, let us first partial out $z_i$ from $Q$ and from $q_1$. Precisely, let $B = QZ' (ZZ')^{-1}$, the matrix of the OLS estimators from the regression of the rows of $Q$ on $Z'$, and let $\tilde{Q} = Q - BZ$ and $\tilde{q}_1 = q_1 - Bz_1$, the prediction errors. Then $\hat{w}$ is decomposed as follows: \begin{equation} \label{eq:w2o} \hat{w} = w_a+\hat{w}_b, \quad w_a = Z'(ZZ')^{-1}z_1, \;\; \hat{w}_b = (\tilde{Q}' \tilde{Q} + \lambda I)^{-1} \tilde{Q}' \tilde{q}_1, \end{equation} which is the sum of the maximum-shrinkage estimator $w_a$ subject to $z_1=Zw$ and the unconstrained ridge estimator $\hat{w}_b$ for balancing on the covariates orthogonal to $Z$ (proved in the appendix). Note that (\ref{eq:w2o}) does not hold if the variables are automatically normalized in the ridge regression procedure, but whether to normalize $q_i$ or not is not critical under $z_1 = Z\hat{w}$, according to experiments. See Doudchenko and Imbens (2017) for more on normalization without the constraints. By substituting the $\ell_1$ norm for the squared $\ell_2$ norm $w'w$ in (\ref{eq:min2}), we have the constrained lasso (CLASSO) version \begin{equation} \label{eq:min1} \min_w\, \tfrac12 (q_1-Qw)'(q_1-Qw) + \lambda\Vert w\Vert_1\text{~~subject to~~} z_1 = Zw, \end{equation} where $\lambda$ is again a user-specified parameter. A fast optimization algorithm is available (James et al., 2019; see also Gaines et al., 2018). CRIDGE and CLASSO both shrink the parameters but only CLASSO achieves variable selection. Though a simple decomposition like (\ref{eq:w2o}) is not available for CLASSO, the modified constrained lasso \[ \min_w\, \tfrac12 (\tilde{q}_1-\tilde{Q}w)' (\tilde{q}_1-\tilde{Q}w) + \lambda\Vert w\Vert_1\text{~~subject to~~} z_1 = Zw \] after partialing out $z_i$ from $q_1$ and $Q$ is equivalent to the original problem (\ref{eq:min1}). 
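For concreteness, the closed form (\ref{eq:w2}) can be implemented directly. The sketch below continues the simulation, takes the pre-treatment outcomes as (hypothetical) balancing covariates, and sets $\lambda=2$ arbitrarily.
\begin{verbatim}
Q = y[:T0, 1:]                            # m x J: untreated pre-treatment outcomes (m = T0)
q1 = y[:T0, 0]                            # treated unit's pre-treatment outcomes
lam = 2.0
G = Q.T @ Q + lam * np.eye(J)             # G_lambda = Q'Q + lambda * I_J
w_ridge = np.linalg.solve(G, Q.T @ q1)    # unconstrained ridge estimator
GinvZt = np.linalg.solve(G, Z.T)          # G^{-1} Z'
w_hat = w_ridge + GinvZt @ np.linalg.solve(Z @ GinvZt, z1 - Z @ w_ridge)
assert np.allclose(Z @ w_hat, z1)         # trending covariates are exactly balanced
\end{verbatim}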
Again, if the balancing covariates are to be scaled within the optimization algorithm, the original variables and the variables after partialing out give different results, naturally. When the usual constrained-lasso algorithms fail, one can again modify the $\ell_1$ norm to a nominal elastic-net norm, as Gaines et al. (2018) remark. The elastic-net objective function is $\frac12 (q_1-Qw)'(q_1-Qw) + \lambda (\frac{1-\alpha}{2} w'w + \alpha \Vert w \Vert_1)$, which equals the lasso objective function \[ \tfrac12 (q_1^{aug} - Q^{aug} w)' (q_1^{aug}-Q^{aug} w) + \lambda \alpha \Vert w\Vert_1, \] where $q_1^{aug} = (q_1',0')'$ with $0$ the $J\times 1$ zero vector and $Q^{aug} = [Q', \sqrt{\lambda (1-\alpha)} I_J]'$; see Gaines et al. (2018). Doudchenko and Imbens (2017) propose a cross-validation method of selecting $\lambda$ (and $\alpha$). I propose comparison by visualization after trying several different $\lambda$ values. \begin{example} \label{ex:cal} ADH (2010) analyze the effect of the 1988 California tobacco control program using their synthetic control method. The dependent variable is cigarette consumption. ADH use seven variables $x_i = (x_{i1}, \ldots, x_{i7})'$ as trend predictors: log per capita state personal income ($x_{i1}$), the percentage of population aged 15--24 ($x_{i2}$), retail price of cigarettes ($x_{i3}$), per capita beer consumption ($x_{i4}$), all of which are averaged over the 1980--1988 period, together with three years of lagged smoking consumption (1975, 1980, and 1988). The balancing covariates are the pre-treatment outcomes (1970--1988). The counterfactual outcomes by ADH, by the constrained ridge with $\lambda = 2$, and by the constrained lasso with the same $\lambda$ are plotted in Figure \ref{fig:cal}(a), where $z_i=(1,x_i')'$ and $q_i = (y_{i1}, \ldots, y_{iT_0})'$. Figure \ref{fig:cal}(a) suggests that the treatment effects by CRIDGE and CLASSO are nontrivially smaller than those by ADH. The results by CRIDGE and CLASSO are only marginally different from each other. The last three variables in $x_i$ are included in both $z_i$ and $q_i$, and removing them from $z_i$ is immaterial. If we let $z_i=1$ and $q_i=(x_i', y_{i1}, \ldots, y_{iT_0})'$ instead, that is, if ADH's seven `predictor' variables are used as balancing covariates instead of as trending covariates, then the results from ADH, CRIDGE and CLASSO are all very similar, as Figure \ref{fig:cal}(b) shows. It turns out that the $x_{i1}$ variable, ln(GDP per capita), is the main driver of the dissimilarity between (a) and (b) of Figure \ref{fig:cal}; if we let $z_i=(1,x_{i1})'$ and $q_i = (x_{i2}, \ldots, x_{i7}, y_{i1}, \ldots, y_{iT_0})'$, the resulting trends are close to those in Figure \ref{fig:cal}(a). Removing the duplicates ($x_{i5}$, $x_{i6}$ and $x_{i7}$) is again of little consequence. \qed \end{example} \begin{figure} \caption{Trends in cigarette sales in California} \label{fig:cal} \begin{center} (a) 1 and $x_i$ for trending covariates; $y_{i1}, \ldots, y_{iT_0}$ for balancing covariates \includegraphics[width=.95\columnwidth]{California/trends.pdf} \end{center} \begin{center} (b) 1 for trending covariates; $x_i, y_{i1}, \ldots, y_{iT_0}$ for balancing covariates \includegraphics[width=.95\columnwidth]{California/trends0.pdf} \end{center} \footnotesize \emph{Note.} ADH (2010) data. 
(a) Trending covariates are 1 and $x_i$, where $x_i$ contains ln(GDP per capita), percent aged 15--24, retail price, beer consumption per capita, and cigarette sales per capita 1988, 1980 and 1975 (see ADH, 2010, Table 1); balancing covariates ($q_i$) are $y_{i1}, \ldots, y_{iT_0}$. (b) Only the constant term is used as the trending covariate, and all variables in $x_i$ and $q_i$ are used for balancing. In both (a) and (b), $\lambda = 2$ for the constrained ridge and lasso. \end{figure} For the model given by (\ref{eq:model}), ADH (2010) treat the constant term in $z_i$ and the nonconstant terms differently, where the constant term is exactly matched on by the adding-up constraint and the nonconstant terms appear in minimization. My approach, on the other hand, treats all the terms in $z_i$ identically by exact matching. Balancing $q_1$ and $Qw$ is a different issue; they are matched by the minimization of $(q_1-Qw)'(q_1-Qw)$ without requiring exact balancing. The roles of trending covariates and balancing covariates are different, which is natural considering that $z_i$ appears in the model as the drivers of nuisance trends and $q_i$ is introduced to enhance comparability. A practical remark on the selection of $\lambda$ is worth making. If the model is correctly specified so that $u_{it}$ shows no systematic trends, i.e., if $u_{it}$ has zero mean conditional on $z_1,\ldots, z_{J+1}$ for all $t$, then any $w$ satisfying $z_1=Zw$ will eliminate confounding systematic trends in $y_{1t} - Y_t'w$. When that is the case, the choice of $\lambda$ would not make much difference in principle. On the other hand, systematic trends in $y_{1t} - Y_t'w$ in the pre-treatment periods would be evidence of possible misspecification of the model for some or all $i$, in which case matching on variables such as pre-treatment outcomes will hopefully mitigate the problem. Since a larger $\lambda$ deteriorates the matching quality but reduces the variability in $Y_t'w$, it would be acceptable practice to enlarge $\lambda$ while keeping the discrepancy between $q_1$ and $Qw$ within a tolerable range. Though theoretically fuzzy, the acceptability is usually clear to the human eye, as the time series of $y_{1s}$ and $Y_s'w$ in the pre-treatment periods can be visually compared without difficulty. Also, the constrained shrinkage estimators are continuous in $\lambda$ (except at $\lambda=0$, where $Q'Q$ may be singular) for given data, and small changes in $\lambda$ will lead to only small changes in the trend of $Y_t'w$. \subsection{Unobservable factors} \label{subsec:zqh} We have thus far considered the case where $h_i$ is empty in (\ref{eq:model}). In many applications, a few variables in $z_i$ would be sufficient as the driving force of trend heterogeneity. Besides, soft matching on the lagged dependent variables often removes the need for unobservable common factors. In some cases, however, researchers may want to allow for unobservable $h_i$, especially if no observable trending covariates are available. In this section, we discuss how to handle $h_i$. Because $h_i$ generates heterogeneous trends, it is again essential to have $h_1$ and $Hw$ exactly balanced, where $H=(h_2,\ldots,h_{J+1})$. But this is infeasible since $h_i$ are not observed. ADH (2010) replace $h_1 = Hw$ with the sufficient condition that $y_{1s} = Y_s'w$ for all $s\le T_0$, which is not attainable unless $T_0$ is smaller than $J$. 
But even when $J$ is large enough for $y_{1s}=Y_s'w$ to hold for all $s\le T_0$, the nonnegativity of $w_j$ imposed by ADH (2010) does not necessarily guarantee $z_1=Zw$ and $y_{1s} = Y_s'w$ at the same time. Adverse examples are illustrated in Figure \ref{fig:motiv}. When $h_i$ are unobserved, an obvious strategy is to estimate them rather than attempting to find a detour. If $\check{h}_i$ denotes the initial estimator of $h_i$ and $\check{H} = (\check{h}_2, \ldots, \check{h}_{J+1})$, the corresponding $\ell_2$ optimization problem is \[ \min_w \; (q_1-Qw)'(q_1-Qw) + \lambda w'w \text{~~subject to~~} z_1=Zw \text{ and } \check{h}_1 = \check{H}w. \] There are $1+K+r$ constraints in total, which are generically satisfied by a nonempty set of weight vectors if $J> 1+K+r$, as holds in usual applications. If $J$ is too small, the researcher should try to reduce $K$ or $r$ or both; it is not very sensible to have more common factors than untreated units in applications. A convenient way of estimating $h_i$ is least squares using the pre-treatment data: \[ \min_{\substack{\mu_1, \ldots, \mu_{J+1}\\ \gamma_1, \ldots, \gamma_{T_0}\\ \delta_1,\ldots,\delta_{T_0}\\ h_1, \ldots, h_{J+1}}} \; \sum_{i=1}^{J+1} \sum_{t=1}^{T_0} (y_{it} - \mu_i - \gamma_t'z_i - \delta_t'h_i)^2, \] or in matrix notation \[ \min_{\mu^*, \Gamma, \delta, H^*} \tr \Big\{ (Y^* - 1\mu\SP - \Gamma Z^* - \delta H^*)' (Y^* - 1\mu\SP - \Gamma Z^* - \delta H^*) \Big\}, \] where $Y^*$ is the $T_0\times (J+1)$ matrix of $y_{it}$ for $i=1,\ldots, J+1$ (columns) and $t=1,\ldots,T_0$ (rows), $\mu^*$ is the $(J+1)\times 1$ vector of $\mu_i$, $i=1,\ldots,J+1$, $\Gamma = (\gamma_1, \ldots, \gamma_{T_0})'$, $Z^* = (z_1,Z)$, $\delta=(\delta_1,\ldots, \delta_{T_0})'$, and $H^* = (h_1, H)$. The concentrated loss function is \begin{equation} \label{eq:MYM} \min_{\delta, H^*} \tr \Big\{ (M_1 Y^* M_{Z\SP} - M_1 \delta H^* M_{Z\SP})' (M_1 Y^* M_{Z\SP} - M_1 \delta H^* M_{Z\SP}) \Big\}, \end{equation} where $M_1 = I_{T_0} - T_0^{-1}11'$ and $M_{Z\SP} = I - Z\SP (Z^* Z\SP)^{-1} Z^*$. Let $A=M_1 Y^* M_{Z\SP}$. The common factors in $A$ are estimated as $\sqrt{T_0}$ times the orthonormal eigenvectors of $AA'$ corresponding to the $r$ largest eigenvalues, and the associated factor-loading estimators are $(\tilde{h}_1, \ldots, \tilde{h}_{J+1}) = T_0^{-1} \tilde{\delta}'A$, where $\tilde\delta$ is the matrix of estimated common factors. Note that the estimated common factors correspond to $M_1\delta$ rather than $\delta$ itself, and the estimated factor loadings to $H_*^{\dag} = H^* M_{Z\SP} = H^* - H^* Z\SP (Z^*Z\SP)^{-1} Z^* = [h_1, H] - H^* Z\SP (Z^* Z\SP)^{-1} [ z_1, Z]$ rather than $H^*$ itself. But, given that $z_1=Zw$, we have $h_1 = H w$ if and only if $h_1^{\dag} = H^{\dag} w$, where $H_*^{\dag} = [h_1^{\dag}, H^{\dag}]$. We can therefore use the estimated factor loadings $\tilde{h}_i$ in the constrained ridge, lasso and elastic-net optimization. It is a nuisance that the constrained estimator $\hat{w}$ satisfies $z_1 = Z\hat{w}$ and $\tilde{h}_1 = \tilde{H} \hat{w}$ but not exactly $h_1^{\dag} = H^{\dag} \hat{w}$ or $h_1 = H\hat{w}$, because $\tilde{h}_i$ differs from $h_i^{\dag}$ (up to rotation) by estimation error. Thus, $y_{1t} - Y_t'\hat{w}$ still contains a remaining trend term, as shown in \[ y_{1t} - Y_t'\hat{w} = (\mu_1 - \bmu'\hat{w}) + \delta_t' (h_1 - H\hat{w}) + (u_{1t} - U_t' \hat{w}).
\] But, given that $z_1=Z\hat{w}$ and $\tilde{h}_1 = \tilde{H} \hat{w}$, we have \[ \delta_t' (h_1 - H\hat{w}) = \delta_t'B^{-1} \big[ (Bh_1^{\dag} - \tilde{h}_1) - (BH^{\dag} - \tilde{H}) \hat{w} \big], \] where $B$ is the rotation matrix such that $\tilde{h}_i$ estimates $Bh_i^{\dag}$; this remainder is small when the factor loadings are estimated accurately. \begin{example} \label{ex:calh} For the application in ADH (2010), again let $x_i$ be the seven predictor variables used by ADH as in Example \ref{ex:cal}. Let $\tilde{h}_i$ be the vector of two factor loadings found in $y_{it}$ after temporally demeaning and cross-sectionally partialing-out $(1,x_i')'$. If we let $z_i= (1,x_i')'$ and $q_i = (y_{i1}, \ldots, y_{iT_0})'$, then the estimated counterfactual outcomes using $\tilde{h}_i$ as extra trend predictors are given in Figure \ref{fig:calh}(a), and they are very similar to those in Figure \ref{fig:cal}(a). On the other hand, if $q_i = (x_i', y_{i1}, \ldots, y_{iT_0})'$, $z_i$ contains only 1, and $\tilde{h}_i$ contains the four estimated factor loadings in $y_{it}$ after temporal and cross-sectional demeaning (without $x_i$ partialed out), then the CRIDGE and CLASSO results are very similar to the ADH synthetic control, as shown in Figure \ref{fig:calh}(b), just as in Figure \ref{fig:cal}(b). Changing $z_i$ is consequential, but controlling for estimated hidden factor loadings does not make much difference in this example. In this exercise, the estimated factor loadings explain the pre-treatment outcomes well. When each of the seven variables in $x_i$ is regressed on the four estimated factor loadings found in part (b), the R-squared is low for the first four controls and very high for the last three (the lagged outcomes), as Table \ref{tbl:rsq.z} shows. The results remain stable when $r$ is increased up to 10. This suggests that the role of hidden factors is limited once $q_i$ or $z_i$ contains some pre-treatment outcomes.\qed \end{example} \begin{figure} \caption{Trends in cigarette sales in California} \label{fig:calh} \begin{center} (a) $z_i=(1,x_i')'$, $q_i=(y_{i1}, \ldots, y_{iT_0})'$, and $r=2$ \includegraphics[width=.95\columnwidth]{California/trendshidden.pdf} (b) $z_i=1$, $q_i=(x_i', y_{i1}, \ldots, y_{iT_0})'$, and $r=4$ \includegraphics[width=.95\columnwidth]{California/trends0hidden.pdf} \end{center} \footnotesize \emph{Note.} The tuning parameter $\lambda$ is set to 2. In (b), $r$ is chosen to be 4 because there are four predictors in $x_i$ other than the pre-treatment outcomes. Changing $r$ to 2 makes practically no difference. \end{figure} \begin{table} \caption{R-squareds for predictors} \label{tbl:rsq.z} \begin{center} \begin{tabular}{@{}lc@{}} \hline \hline Dependent variable & R-squared\\ \hline log per capita income$^a$ & 0.348\\ percent aged 15--24$^a$ & 0.106\\ retail price$^a$ & 0.538\\ beer consumption per capita$^b$ & 0.390\\ cigarette sales per capita 1988 & 0.987\\ cigarette sales per capita 1980 & 0.992\\ cigarette sales per capita 1975 & 0.995\\ \hline \hline \end{tabular} \end{center} \footnotesize \emph{Note.} The sample size is $J+1=39$, and the explanatory variables are the estimated factor loadings obtained by least squares applied to cross-sectionally and temporally demeaned pre-treatment outcomes. $^a$1980--1988 averages; $^b$1984--1988 average. \end{table} If some common factors are observed (e.g., incidental linear or quadratic trends), they can be partialed out by replacing the $M_1$ matrix in (\ref{eq:MYM}) with an appropriate projection matrix.
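To fix ideas, here is a minimal sketch (in Python/NumPy; the array names are hypothetical, and this is an illustration under the setup above rather than a definitive implementation) of the loading extraction based on $A = M_1 Y^* M_{Z\SP}$ and of the constrained ridge that appends the estimated loadings to the exact-balancing restrictions:

\begin{verbatim}
import numpy as np

def factor_loadings(Ystar, Zstar, r):
    # A = M1 Ystar M_{Z*}: demean over time (M1) and partial out
    # the trending covariates Z* across units (M_{Z*})
    T0, N = Ystar.shape                      # N = J + 1 units, T0 periods
    M1 = np.eye(T0) - np.ones((T0, T0)) / T0
    MZ = np.eye(N) - Zstar.T @ np.linalg.solve(Zstar @ Zstar.T, Zstar)
    A = M1 @ Ystar @ MZ
    # factors: sqrt(T0) times the top-r eigenvectors of A A'
    vals, vecs = np.linalg.eigh(A @ A.T)
    delta = np.sqrt(T0) * vecs[:, np.argsort(vals)[::-1][:r]]
    H = (delta.T @ A) / T0                   # r x N loadings (tilde h_i)
    return H

def constrained_ridge(q1, Q, R1, R, lam):
    # minimize (q1 - Qw)'(q1 - Qw) + lam * w'w  subject to  R w = R1,
    # solved via the KKT linear system; R stacks the exact restrictions.
    J, m = Q.shape[1], R.shape[0]
    KKT = np.block([[Q.T @ Q + lam * np.eye(J), R.T],
                    [R, np.zeros((m, m))]])
    rhs = np.concatenate([Q.T @ q1, R1])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:J]                           # w; sol[J:] are multipliers
\end{verbatim}

Here the first column of the returned loadings plays the role of $\tilde{h}_1$ and the remaining columns that of $\tilde{H}$, so $R = [Z', \tilde{H}']'$ and $R_1 = (z_1', \tilde{h}_1')'$ impose the $1+K+r$ restrictions exactly; setting $\lambda=0$ would give the constrained OLS discussed in Section \ref{subsec:hcw}.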
If, for example, $y_{it}^0 = \gamma_t' z_i + g_t' \mu_i + \delta_t'h_i + u_{it}$, where $g_t$ is observable and the fixed effects are subsumed in $g_t'\mu_i$, then $M_1$ is to be replaced with $M_{[1,g]}$, say, where $g = (g_1,\ldots, g_{T_0})'$. Finally, the number $r$ of common factors may be chosen exogenously by the researcher or by an automatic selection procedure. I recommend the former. Specifically, increasing $r$ from zero and plotting the estimated counterfactual outcomes will give the researcher a clear idea of how the results change as more hidden factors are allowed for in the model. \section{Comparison with extant estimators} \label{sec:comp} This section compares the new methods with ADH (2010), HCW (2012), and Doudchenko and Imbens (2017). \subsection{Comparison to ADH (2010)} \label{subsec:adh} ADH's (2010) synthetic control algorithm consists of two layers of optimization, which I call the `inner' and `outer' optimization loops. The inner loop finds an optimal $\hat{w}(V)$ for a given $V$ by minimizing $(z_1-Zw)' V (z_1-Zw)$ subject to the adding-up and nonnegativity constraints (the `ADH constraints' for short in this subsection), and the outer loop finds an optimal diagonal positive semidefinite $V$ by minimizing $\sum_{s=1}^{T_0} [y_{1s} - Y_s'\hat{w}(V) ]^2$. The final weight estimator is $\hat{w} = \hat{w}(\hat{V})$. ADH (2010) also discuss using a user-specified $V$. For a given $V$, if there exists a $w$ satisfying the ADH constraints and the exact-balancing condition $z_1=Zw$ simultaneously, the inner-loop loss function $(z_1-Zw)' V(z_1-Zw)$ attains zero at such a $w$. Even in that case, however, a unique $w$ is not identified in general because the constraints are linear in $w$. For example, if $z_1=0$, a scalar, and $(z_2, z_3, z_4, z_5) = (-2,-1,1,2)$, then any symmetric weight vector, such as $w=(\frac14, \frac14, \frac14, \frac14)'$ or $w=(0,\frac12, \frac12,0)'$, minimizes the inner-loop loss function; a sketch verifying this appears below. In such a case a particular weight vector is chosen arbitrarily by the numerical optimizer. In contrast, if no $w$ satisfies both the ADH constraints and the exact-balancing condition simultaneously, then ADH's algorithm sacrifices exact balancing to abide by the ADH constraints. The consequences of abandoning exact balancing to save the ADH constraints are illustrated in Figure \ref{fig:motiv}, as discussed above. The $V$-weight is determined by the outer-loop minimization for balancing the pre-treatment outcomes. (If a fixed $V$ is used, balancing the pre-treatment outcomes is irrelevant.) For the finally chosen $\hat{V}$, whether it is the outcome of the outer-loop optimization or given exogenously, the solution $\hat{w}=\hat{w}(\hat{V})$ need not be unique, nor need it satisfy $z_1 = Z\hat{w}$. Notably, the selection of $V$ is blind to whether $z_1=Z\hat{w}(V)$ holds, because $V$ is chosen by the outer loop involving only the pre-treatment outcomes. For example, if some $V$ allow for $z_1=Z\hat{w}(V)$ and others do not, the ADH algorithm does not necessarily choose one that does, since $V$ is determined by minimizing $\sum_{s=1}^{T_0} [y_{1s} - Y_s'\hat{w}(V)]^2$, which does not necessarily minimize $[z_1-Z\hat{w}(V)]' [z_1-Z\hat{w}(V)]$. The nonnegativity and adding-up constraints provide attractive interpretations to practitioners, but the benefits come with nontrivial costs.
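The inner-loop non-uniqueness mentioned above is easy to verify numerically. The following minimal sketch (Python/NumPy) reproduces the scalar example and confirms that both symmetric weight vectors attain zero inner-loop loss, so which one is reported depends on the optimizer's starting point:

\begin{verbatim}
import numpy as np

z1 = 0.0                                     # treated unit's covariate
Z = np.array([[-2.0, -1.0, 1.0, 2.0]])       # untreated units' covariates
V = np.array([[1.0]])                        # any positive V
for w in (np.full(4, 0.25), np.array([0.0, 0.5, 0.5, 0.0])):
    assert abs(w.sum() - 1.0) < 1e-12        # adding-up constraint holds
    d = z1 - Z @ w                           # balancing discrepancy
    print(float(d @ V @ d))                  # inner-loop loss: 0.0 twice
\end{verbatim}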
First, ADH's (2010) two-layer optimization procedure may fail to converge or may give a suboptimal synthetic control. For example, Abadie and Gardeazabal (2003) find the `synthetic Basque Country' $0.851\times \text{Cataluna} + 0.149 \times \text{Madrid}$ in their study of the economic costs of conflict in the Basque Country. But a thorough investigation reveals that a lower root mean squared prediction error can be achieved by an alternative synthetic Basque Country of $0.633\times \text{Cataluna} + 0.148\times \text{Madrid} + 0.219 \times \text{Baleares}$. (Finding this weight vector requires more direct use of the Karush-Kuhn-Tucker theorem. Neither the Stata `synth' package nor the R `Synth' package identifies this synthetic control.) This suggests that researchers should not be overly confident about the meaningfulness of the estimated $w$ weights. The second issue involves nonnegativity and is more subtle. The nonnegativity constraint may be incompatible with $z_1=Zw$, i.e., $\Re^J_+ \cap \{ w\colon z_1=Zw \} = \emptyset$, in which case trends in $y_{1t} - Y_t'w$ due to $z_1-Zw$ may confound the treatment effects if $w$ is forced to lie in $\Re_+^J$. The importance of nonnegativity can be debated, but it is noteworthy that a discrepancy between $z_1$ and $Zw$ can lead to a nonnegligible confounding trend in $y_{1t}^0-Y_t'w$, while a negative $w_j$ only affects interpretation. If one wishes, the nonnegativity restriction can be made soft by, for example, the constrained lasso \[ \min_{w^+, w^-} \tfrac12 \Vert q_1-Qw^++Qw^-\Vert_2^2 + \lambda \sum_{j=2}^{J+1} (w^+_j + \kappa w^-_j), \] for some large positive $\kappa$, subject to the constraints that $z_1 = Zw^{+} - Zw^{-}$, $w^+_j\ge 0$ and $w^-_j\ge 0$ for all $j$, which modifies a generalized version of (\ref{eq:qp}). This soft nonnegativity allows $w_j<0$ for some $j$ when hard nonnegativity is incompatible with $z_1=Zw$, while keeping $w_j$ as close to the nonnegative domain as possible. However, the benefit seems minor because the appealing interpretation attached to nonnegativity is lost anyway if some $w_j$ are negative. \subsection{Comparison to HCW (2012)} \label{subsec:hcw} HCW (2012) take the alternative approach of regressing $y_{1t}$ on $Y_t$ for a selected subset of the untreated units, using the pre-treatment observations, to estimate the intercept $c$ and the slope vector $w$. The counterfactual outcomes are then formed as $\hat{c} + Y_t'\hat{w}$ for $t>T_0$, where $\hat{c}$ and $\hat{w}$ are the OLS estimators. As Li and Bell (2017) derive, this estimator is justified under mean stationarity. If the unobserved trends show mean nonstationarity, HCW's (2012) method needs modification. To see the source of bias and its remedy, consider a simple example with $z_i=1$. Given the OLS estimators $\hat{c}$ and $\hat{w}$, the estimated treatment effects are \begin{equation} \label{eq:tau1.reg} \hat\tau_{1t} = y_{1t} - \hat{c} - Y_t'\hat{w} = \tau_{1t} + \ddot\gamma_t (1-1'\hat{w}) + \ddot\delta_t' (h_1 - H\hat{w}) + (\ddot{u}_{1t} - \ddot{U}_t'\hat{w}), \end{equation} where $\ddot\gamma_t = \gamma_t - \Pre{\gamma}$, $\ddot{\delta}_t = \delta_t - \Pre{\delta}$, $\ddot{u}_{1t} = u_{1t} - \Pre[1]{u}$, and $\ddot{U}_t = U_t - \Pre{U}$, with $\Pre{\xi}$ denoting $T_0^{-1} \sum_{t=1}^{T_0} \xi_t$ for a variable $\xi_t$.
The OLS regression of $y_{1t}$ on $Y_t$ for $t\le T_0$ may give systematically biased $\hat\tau_{1t}$ for this model due to the $\ddot\gamma_t (1-1'\hat{w})$ term among others, because this OLS regression does not guarantee $1'\hat{w} \pto 1$. The origin of this failure is in fact endogeneity. Example \ref{ex:hcw} below demonstrates that $1'\hat{w}<1$ asymptotically (as $T_0\to\infty$) if $y_{1t}$ is regressed on $Y_t$ for $t\le T_0$ in a model with $z_i=1$ and empty $h_i$, so that systematic changes in the trend ($\gamma_t$) may confound the treatment effects. \begin{example} \label{ex:hcw} Consider the model $y_{it}^0 = \mu_i + \gamma_t + u_{it}$, where the $\gamma_t$ are common time effects. Let $J$ be small and $T_0\to \infty$, as considered by HCW (2012). The OLS slope estimator $\hat{w}$ from the regression of $y_{1t}$ on $Y_t$ using the pre-treatment observations is \begin{align*} \hat{w} &= (\bY'M_1\bY)^{-1} \bY'M_1 \bfy_1 = [ (\gamma 1' + \bU)' M_1 (\gamma 1' + \bU) ]^{-1} (\gamma 1'+\bU)'M_1 (\gamma+\bu_1)\\ &= (\sigma_{\gamma}^2 11' + S_U)^{-1} 1\sigma_{\gamma}^2 + o_p(1), \end{align*} where $\bfy_i = (y_{i1}, \ldots, y_{iT_0})'$, $\bY = (\bfy_2, \ldots, \bfy_{J+1})$, $M_1 = I_{T_0} - T_0^{-1} 11'$, $\bU$ is the $T_0\times J$ matrix of $u_{jt}$ for $j\ge 2$ and $t\le T_0$, $\gamma = (\gamma_1, \ldots, \gamma_{T_0})'$, $\sigma_{\gamma}^2 = \plim T_0^{-1} \gamma'M_1\gamma$, and $S_U = \plim \allowbreak T_0^{-1} \bU'M_1\bU$. Thus, when $J$ is fixed, \[ 1'\hat{w} = \sigma_{\gamma}^2 1' (\sigma_{\gamma}^2 11' + S_U)^{-1} 1 + o_p(1) = \frac{\sigma_{\gamma}^2 1'S_U^{-1}1}{1+ \sigma_{\gamma}^2 1'S_U^{-1}1} + o_p(1), \] which implies that \begin{equation} \label{eq:1w} 1-1'\hat{w} \pto (1+\sigma_{\gamma}^2 1'S_U^{-1}1)^{-1} > 0. \end{equation} In the presence of common time effects $\gamma_t$, the estimated $\hat\tau_{1t}(\hat{w})$ systematically depends on $\ddot\gamma_t (1-1'\hat{w})$, as is apparent from (\ref{eq:tau1.reg}) and (\ref{eq:1w}). Without the mean stationarity of $\gamma_t$ that ensures $\ddot\gamma_t \approx 0$, $\hat\tau_{1t} (\hat{w})$ is systematically biased away from $\tau_{1t}$. \qed \end{example} An obvious solution to the problem is to impose the restriction $1'w=1$ when $z_i=1$ as in Example \ref{ex:hcw}, and $z_1=Zw$ for general $z_i$, which is exactly our exact-balancing constraint. If $h_i$ is nonempty in (\ref{eq:model}), then $h_i$ can be estimated and the constraints $\tilde{h}_1 = \tilde{H} w$ can be added, as explained in Section \ref{subsec:zqh}. Because the number of common factors is typically small, there almost certainly exist $w$ vectors satisfying the restrictions. This modified HCW method is a special case of the constrained ridge regressions proposed in this paper, corresponding to $\lambda=0$. The above constrained OLS is easy to implement, but it requires $T_0>J-K-1$. If there are many untreated units ($J$ large), HCW (2012) select a sufficiently small subset \textit{a priori} by the researcher's judgment, which is sometimes arbitrary but often acceptable as long as rationales are provided. The constraints that $z_1=Zw$ and that $h_1=Hw$ are always crucial. \subsection{Doudchenko and Imbens's (2017) elastic net} \label{subsec:di} Doudchenko and Imbens (2017) propose minimizing the elastic-net loss function $\sum_{t=1}^{T_0} (y_{1t} - c - Y_t'w)^2 + \lambda (\frac{1-\alpha}{2} \Vert w\Vert_2^2 + \alpha \Vert w\Vert_1)$ without constraints.
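For concreteness, the following is a minimal sketch of this unconstrained elastic-net fit using scikit-learn (the array names are hypothetical; note that scikit-learn's \texttt{ElasticNet} scales the squared-error term by $1/(2T_0)$, so its penalty parameter matches the $\lambda$ above only up to that factor):

\begin{verbatim}
import numpy as np
from sklearn.linear_model import ElasticNet

def di_counterfactual(ypre, Ypre, Ypost, lam=0.01, alpha=0.9):
    # ypre: (T0,) treated outcomes; Ypre: (T0, J) untreated outcomes,
    # both pre-treatment; Ypost: (T1, J) untreated post-treatment outcomes.
    # scikit-learn minimizes (1/(2*T0))||y - c - Yw||^2
    #   + a*l1_ratio*||w||_1 + 0.5*a*(1 - l1_ratio)*||w||^2.
    en = ElasticNet(alpha=lam, l1_ratio=alpha, fit_intercept=True,
                    max_iter=100000)
    en.fit(Ypre, ypre)
    return en.intercept_ + Ypost @ en.coef_  # counterfactual y_{1t}^0, t > T0
\end{verbatim}

Nothing in this fit restricts the weights \texttt{en.coef\_} to satisfy $1'w=1$ or $z_1=Zw$, which is exactly the feature examined next.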
Their proposal (elastic net, no constraints) can be understood as a modification of both ADH (2010) and HCW (2012) to an elastic-net framework. When the signal is strong in the pre-treatment period, so that matching on the observed pre-treatment outcomes deals with trends adequately, this elastic-net solution may work well (though bias may still exist for the endogeneity reason explained in Section \ref{subsec:hcw}); otherwise there is no device to control for heterogeneous trends in the post-treatment outcomes. \begin{figure} \caption{Trends constructed by Doudchenko and Imbens (2017)} \label{fig:DI} \begin{center} (a) Data for Figure \ref{fig:motiv}(a) \includegraphics[width=.95\columnwidth]{055-orig/trends-di.pdf} \medskip (b) Data for Figure \ref{fig:motiv}(b) \includegraphics[width=.95\columnwidth]{055-flat/trends-di.pdf} \end{center} \footnotesize \emph{Note.} Simulated data used in Figure \ref{fig:motiv}. Doudchenko and Imbens's (2017) counterfactual trends are obtained using the R package glmnet with no standardization and including the intercept. The elastic-net mixing parameter is $\alpha=0.9$, and the $\lambda$ parameter is set to 0.01. For both (a) and (b) the post-treatment counterfactual outcomes are understated by Doudchenko and Imbens's method. \end{figure} Let us take numerical examples. Figure \ref{fig:DI} is obtained by applying Doudchenko and Imbens's (2017) proposal to the two simulated data sets considered for Figure \ref{fig:motiv}. The elastic-net mixing parameter is set to $\alpha = 0.9$ (close to lasso), and the tuning parameter is $\lambda=0.01$, a value that gives a visually appealing pre-treatment matching; larger $\lambda$ values such as 0.1 and 1 are poor at reproducing the trend in the pre-treatment outcomes. The results are compromised for both data sets in the post-treatment periods, which seems to be due to the endogeneity bias discussed in Section \ref{subsec:hcw}. Imposing the hard restriction $1'w=1$ to control for common time effects, or $z_1=Zw$ for more general models, gives the elastic-net version of what the present paper proposes. It is noteworthy that Doudchenko and Imbens (2017) do not refer to an explicit model; see their introduction. In other words, their aim is not to control for heterogeneous trends in models like (\ref{eq:model}) but to estimate counterfactual trends based on regularized matching on pre-treatment outcomes (identification by regularization). \section{Conclusion} For model (\ref{eq:model}) considered by ADH (2010), I propose new estimators of treatment effects by treating the trending variables ($z_i$ and $h_i$ in the model) and the other balancing covariates (denoted $q_i$ in this paper) differently. Without further assumptions on the time-varying coefficients ($\gamma_t$ and $\delta_t$ in the model), exact balancing of the trend predictors as a hard restriction is crucial for properly dealing with heterogeneous trends driven by the trending covariates. The adverse consequences of making the exact matching soft are illustrated in Figures \ref{fig:motiv} and \ref{fig:DI}, where all the extant estimators exhibit compromised behavior for data generated by (\ref{eq:model}) without hidden factors. The new estimators proposed in this paper work well. \iffull \section*{References} \begingroup \setlength\labelsep{0pt} \begin{description} \item Abadie, A., A. Diamond, and J. Hainmueller (2010).
Synthetic control methods for comparative case studies: Estimating the effect of California's tobacco control program, \emph{Journal of the American Statistical Association} 105(490), 493--505. \item Abadie, A., and J. Gardeazabal (2003). The economic costs of conflict: A case study of the Basque Country, \emph{American Economic Review} 93(1), 113--132. \item Doudchenko, N., and G. W. Imbens (2017). Balancing, regression, difference-in-differences and synthetic control methods: A synthesis, \emph{arXiv} 1610.07748v2, 20 Sep 2017. \item Gaines, B. R., J. Kim, and H. Zhou (2018). Algorithms for fitting the constrained lasso, \emph{Journal of Computational and Graphical Statistics} 27(4), 861--871. \item Gobillon, L., and T. Magnac (2016). Regional policy evaluation: Interactive fixed effects and synthetic controls, \emph{Review of Economics and Statistics} 98(3), 535--551. \item Hsiao, C., H. S. Ching, and S. K. Wan (2012). A panel data approach for program evaluation: Measuring the benefits of political and economic integration of Hong Kong with Mainland China, \emph{Journal of Applied Econometrics} 27, 705--740. \item James, G. M., C. Paulson, and P. Rusmevichientong (2019). Penalized and constrained optimization: An application to high-dimensional website advertising, \emph{Journal of the American Statistical Association}, DOI: 10.1080/01621459.2019.1609970. \item Li, K. T., and D. R. Bell (2017). Estimation of average treatment effects with panel data: Asymptotic theory and implementation, \emph{Journal of Econometrics} 197, 65--75. \item Mallat, S. (2009). \emph{A Wavelet Tour of Signal Processing: The Sparse Way}, Academic Press, Elsevier. \end{description} \endgroup \fi
{ "timestamp": "2020-12-17T02:19:50", "yymm": "2012", "arxiv_id": "2012.08988", "language": "en", "url": "https://arxiv.org/abs/2012.08988" }
\section{Introduction} Object recognition has achieved remarkable developments in a wide range of research fields, \emph{e.g.}, autonomous driving \cite{Behl_2017_ICCV, Li_Peixuan_RTM3D}, intelligent robotics \cite{8593741, ijcai2020-77, chen-etal-2020-bridging}, object clustering \cite{yang2020adversarial, ZhangCSWD20, ZhangGPVTFC}, medical diagnosis \cite{Semantic_Transferable_Dong_ICCV2019, What_Transferred_Dong_CVPR2020}, transfer learning \cite{CSCL_Dong_ECCV2020, DBLP:journals/corr/abs-1907-08375, DBLP:conf/ijcai/ZhangLFY0020, DBLP:journals/corr/abs-2008-01454, DBLP:journals/corr/abs-2006-13022} and federated learning \cite{wang2019eavesdrop, wang2020towards}. Compared with 2D vision, the unordered 3D point cloud representation collected by depth cameras or LiDAR systems makes it more difficult for object recognition to characterize 3D geometric information. To this end, various deep neural networks capable of reasoning about 3D geometric layout and structure \cite{Qi_2017_CVPR, NIPS2017_7095} have been proposed to explore task-specific semantic content. The advent of convolutional architectures \cite{NIPS2018_7362, Li_2018_CVPR, dgcnn} has dramatically boosted the performance of 3D object classification for point cloud representation. \begin{figure}[t] \centering \includegraphics[width =235pt, height =168pt]{.//fig//NIPS2020_motivation.pdf} \caption{Demonstration of our I3DOL model to learn new classes of 3D objects consecutively.} \label{fig:motivation_of_our_model} \end{figure} However, these existing methods are trained on a prepared, well-labeled dataset whose number of 3D object categories is fixed in advance. This setup significantly limits their applicability in real-world scenarios where new classes of 3D objects arrive continually in a streaming manner, as shown in Figure~\ref{fig:motivation_of_our_model}. For example, housekeeping robots \cite{she2019openlorisobject} working on indoor tasks cannot perform well in outdoor scenes, due to the lack of continual learning capacity for new 3D objects. A trivial solution is to retrain on all the training data of previously learned indoor object classes, when a long delay is allowed for updating the current model. Nevertheless, this inevitably requires high computational power and storage (\emph{e.g.}, large infrastructures), which are not available in real-world scenarios. Alternatively, a straightforward way is to apply current class-incremental models \cite{Rebuffi_2017_CVPR, Castro_2018_ECCV, Wu_2019_CVPR, Oleksiy_2019_CVPR} from 2D vision, equipped with a 3D point cloud feature extractor \cite{NIPS2017_7095, Qi_2017_CVPR}, to learning new classes of 3D objects. However, these existing methods cannot explore unique and informative 3D geometric characteristics for class-incremental learning and thus suffer catastrophic forgetting, due to the irregular and redundant geometric structures within 3D point clouds \cite{NIPS2017_7095} (\emph{e.g.}, tables with missing legs or deformable permutations). Therefore, learning new classes incrementally for 3D objects without retaining the training data of past classes is a crucial real-world challenge. To tackle this challenge, we develop a new \underline{I}ncremental \underline{3D} \underline{O}bject \underline{L}earning (\emph{i.e.,} I3DOL) model, which intends to alleviate catastrophic forgetting for point cloud representation of past classes when learning new classes continually, as shown in Figure~\ref{fig:overview_of_our_model}.
Specifically, by constructing discriminative local geometric structures of the point cloud, an adaptive-geometric centroid module associated with an adaptive receptive field is developed to characterize the irregular point cloud representation. Meanwhile, a geometric-aware attention mechanism is designed to capture the intrinsic relationships between local geometric structures by quantifying the contribution of each local geometric structure to class-incremental learning. In other words, it pays more attention to unique and informative local geometric structures to prevent catastrophic forgetting, while neglecting redundant geometric information in the point cloud representation. To further alleviate catastrophic forgetting for past learned 3D object classes, a score fairness compensation strategy is proposed to address the unbalanced training data between past learned and new classes of 3D objects, which corrects the biased prediction for past classes via adaptive compensation in the validation phase. Experiments on several 3D benchmark classification tasks strongly demonstrate the effectiveness of our I3DOL model. In summary, the main contributions of this work are as follows: \begin{itemize} \item A new Incremental 3D Object Learning (I3DOL) model for point cloud representation is designed to learn new classes of 3D objects continually while alleviating catastrophic forgetting for past classes. To the best of our knowledge, this is the first exploration of class-incremental learning in the 3D object recognition field. \item We develop an adaptive-geometric centroid module to better characterize the irregular point cloud representation, which constructs several discriminative local geometric structures with an adaptive receptive field for each point cloud. \item To prevent catastrophic forgetting, a geometric-aware attention mechanism is designed to highlight beneficial 3D geometric characteristics with high contributions to class-incremental learning, while a score fairness compensation strategy is developed to compensate for the biased score prediction in the validation stage. \end{itemize} \section{Related Work} This section reviews related research on 3D object classification and class-incremental learning. \subsection{3D Object Classification} Although numerous shape descriptors \cite{6619148, 7300438, 8237705, 4650967, doi:10.1111/j.1467-8659.2009.01515.x} have been handcrafted by domain experts for point cloud representation, they are tied to specific transformation invariances and cannot generalize well to 3D object recognition with various categories. After Qi \emph{et al.} proposed the pioneering PointNet \cite{Qi_2017_CVPR} to directly process point cloud data, deep network architectures have achieved impressive successes in 3D object classification due to their powerful characterization capacity. PointNet++ \cite{NIPS2017_7095} encodes the multi-scale hierarchical semantic context of point clouds. Moreover, permutation-invariant architectures \cite{Li_2018_CVPR, NIPS2018_7362} have been developed for deep learning with orderless point clouds, which explore spatially local correlations in the data distribution. \cite{dgcnn} focus on capturing topology information to enrich the characterization power of point clouds by incorporating a dynamic graph. However, these approaches cannot be directly applied in practical settings where new classes of 3D objects arrive continually in a streaming manner.
\begin{figure*}[t] \centering \includegraphics[width =505pt, height =150pt]{.//fig//NIPS2020_overview.pdf} \caption{Overview of our I3DOL model, which mainly consists of the \textit{adaptive-geometric centroid construction} to explore several local geometric structures, the \textit{geometric-aware attention mechanism} to capture informative 3D geometric characteristics in local geometric structures with high contributions to class-incremental learning, and the \textit{score fairness compensation} to further alleviate the catastrophic forgetting brought by unbalanced training samples.} \label{fig:overview_of_our_model} \end{figure*} \subsection{Class-Incremental Learning} Depending on whether no data, synthetic data, or real data from past classes is available, existing class-incremental learning methods \cite{10.1007/978-3-319-46493-0_37, Kirkpatrick3521, NIPS2018_7836, Oleksiy_2019_CVPR,Wu_2019_CVPR, NIPS2019_9429} in 2D vision fall into three categories. Specifically, \cite{10.1007/978-3-319-46493-0_37, Shmelkov_2017_ICCV} employ knowledge distillation to prevent catastrophic forgetting for past classes without accessing their training data. \cite{Kirkpatrick3521} constrain the network weights learned for new tasks to maintain performance on past tasks. Furthermore, \cite{Oleksiy_2019_CVPR, 10.5555/3294996.3295059, DBLP:journals/corr/SeffBSL17, NIPS2018_7836} rely heavily on the capability of generative adversarial networks to replay synthetic data for past classes. When a small number of exemplars from each old class are selected for training, \cite{Rebuffi_2017_CVPR, Castro_2018_ECCV, deesil-eccv2018} focus on alleviating the effect of unbalanced training samples between past and new categories. \cite{Belouadah_2019_ICCV} design an additional memory with negligible added cost to record past-class statistics. \cite{Xiao2014ErrorDrivenIL, DBLP:journals/corr/RusuRDSKKPH16} propose to expand the network progressively as new training data arrives. \cite{Wu_2019_CVPR} correct the bias towards new classes brought by the fully-connected layer for large-scale incremental learning. \cite{NIPS2019_9429} develop random path selection to choose optimal paths for new tasks. However, these 2D vision methods cannot be successfully applied to 3D object recognition, since the irregular and redundant geometric structures of point cloud representations make it difficult for them to explore unique 3D geometric characteristics that are beneficial for class-incremental learning. \section{Our Proposed I3DOL Model} \subsection{Problem Definition and Overview} For class-incremental learning of 3D objects, we follow the general experimental configuration in 2D vision \cite{Rebuffi_2017_CVPR, Castro_2018_ECCV, Wu_2019_CVPR, NIPS2019_9429}. There are $S$ incremental states in total, and the training data $D$ is denoted as $D = \{D_s\}_{s=1}^S$, where $D_s = \{x_i^s, y_i^s\}_{i=1}^{n_s}$ represents the $n_s$ point cloud samples in the $s$-th incremental state. $x_i^s\in\mathbb{R}^{U\times 3}$ and $y_i^s$ denote the $i$-th point cloud with 3-dimensional coordinates and its corresponding label, and $U$ is the number of sampled points for each 3D object. In the $s$-th incremental state, the labels in $D_s$ consist of $c_s$ new classes, which are different from the $c_p = \sum_{i=1}^{s-1}c_i$ past classes in the previous $s-1$ incremental states.
Similar to \cite{Rebuffi_2017_CVPR, Wu_2019_CVPR, NIPS2019_9429}, our goal is to make predictions for both the $c_s$ new classes and the $c_p$ past classes in the $s$-th incremental state, when the newly coming data $D_s$ and the selected exemplar set $M$ from past classes are available. Note that $|M|$ is small compared with $n_s$, and it satisfies $|M|/c_p\ll n_s/c_s$ in our experiments. The overall framework of our I3DOL model is depicted in Figure~\ref{fig:overview_of_our_model}. Specifically, the point cloud data of new classes in the $s$-th incremental state is first forwarded into the encoder $E$ to extract mid-level features. Then the adaptive-geometric centroid module constructs several discriminative local geometric structures to better characterize the irregular point cloud representation. Meanwhile, the geometric-aware attention module explores the unique and informative 3D geometric characteristics in local geometric structures with high contributions, to alleviate catastrophic forgetting while neglecting redundant information. Furthermore, we develop the score fairness compensation strategy to correct the biased score prediction for new classes, which further prevents catastrophic forgetting caused by unbalanced data between past and new classes of 3D objects. \subsection{Adaptive-Geometric Centroid Construction}\label{sec:centroid_construction} Suppose that each point cloud is composed of $L$ local geometric structures, which are regarded as $L$ point sets $\big\{G_l|G_l = \{\hat{p}_l, p_{l1}, \cdots, p_{lk}\in\mathbb{R}^3\}\big\}_{l=1}^L$. The $l$-th geometric region $G_l$ consists of a structural centroid $\hat{p}_l$ and its $k$ nearest neighbor points $\{p_{l1}, \cdots, p_{lk}\}$ surrounding $\hat{p}_l$. Note that the location of $\hat{p}_l$ determines where the local geometric structure is and which $k$ nearest neighbor points are included. To extract local discriminative features, most previous methods \cite{NIPS2017_7095, NIPS2018_7362} directly utilize farthest point sampling or random sampling to obtain the structural centroids for local geometric structures. Although these strategies fully cover the whole point cloud, the selected centroids cannot focus on the structures with unique 3D geometric characteristics for both past and new classes, nor can they neglect the common, uninformative characteristics in class-incremental learning. Intuitively, local geometric structures sharing common characteristics can result in catastrophic forgetting for past classes, while unique object components and geometric layouts help to overcome it effectively. Consequently, as depicted in Figure~\ref{fig:overview_of_our_model}, the adaptive-geometric centroid module with an adaptive receptive field is developed to construct local geometric structures, adjusting the selected structural centroids adaptively via geometric offset prediction over the course of training. Different from deformable convolution \cite{DBLP:journals/corr/DaiQXLZHW17}, which utilizes semantic features for offset prediction in 2D images, we consider the local edge vectors of each geometric structure as guidance for training. Specifically, the semantic knowledge of each edge is first transformed into a weight for the edge vector, and the weighted edge vectors are then aggregated to predict the offset direction of the structural centroid. Generally, the learned offset is adaptively determined by the voting of the surrounding edge vectors with different significances.
After initializing the locations of the $L$ structural centroids via farthest point sampling \cite{NIPS2017_7095, NIPS2018_7362} over the point cloud, we collect the $k$ nearest neighbor points around each structural centroid to construct $L$ geometric structures. The offset prediction $\triangle\hat{p}_l$ for the $l$-th centroid $\hat{p}_l$ is: \begin{equation} \triangle \hat{p}_l = \frac{1}{k} \sum\limits_{i=1}^k \big(T_p((\hat{f}_l-f_{li}); \theta_{T_p}) \cdot (\hat{p}_l-p_{li}) \big), \label{eq:offset_prediction} \end{equation} where $T_p$ denotes a convolutional layer with parameters $\theta_{T_p}$, which adaptively transforms the semantic information into the weight of an edge vector. $\hat{p}_l$ and $\{p_{li}\}_{i=1}^k$ denote the location of the $l$-th centroid and its $k$ nearest neighbor points, respectively. $\hat{f}_l$ and $\{f_{li}\}_{i=1}^k$ are the semantic features of $\hat{p}_l$ and $\{p_{li}\}_{i=1}^k$, extracted by the encoder $E$ in Figure~\ref{fig:overview_of_our_model}. $(\hat{p}_l-p_{li})$ is the direction of the local edge vector with respect to the $l$-th centroid $\hat{p}_l$. With the offset prediction $\triangle\hat{p}_l$ in Eq.~\eqref{eq:offset_prediction}, we update the $l$-th structural centroid $\hat{p}_l$ by adding the offset $\triangle\hat{p}_l$ back to $\hat{p}_l$, and reconstruct the $l$-th local geometric structure by searching for $k$ new nearest neighbor points $\{p_{l1}, \cdots, p_{lk}\}$ around the updated $\hat{p}_l$, \emph{i.e.}, \begin{equation} \begin{split} &\qquad\qquad\qquad\hat{p}_l \leftarrow \hat{p}_l + \triangle\hat{p}_l, \\ &\{p_{l1}, \cdots, p_{lk}\} = \mathrm{kNN}(\hat{p}_l|p_j\in\mathbb{R}^3, j=1,\cdots, U), \end{split} \label{eq:local_structure_update} \end{equation} where $\mathrm{kNN}(\cdot)$ collects the $k$ nearest neighbor points around the newly updated centroid $\hat{p}_l$ by searching all points $\{p_j\}_{j=1}^U$ over the whole point cloud. The semantic feature $\hat{f}_l$ of the $l$-th updated centroid $\hat{p}_l$ is then computed by gathering the features of all points inside the $l$-th updated local geometric structure, \emph{i.e.}, \begin{equation} \hat{f}_l = \max_{i=1,2,\cdots,k} T_g(f_{li}; \theta_{T_g}), \label{eq:update_centroid_feature} \end{equation} where $\{f_{li}\}_{i=1}^k$ represent the features of the updated $k$ nearest neighbor points, and $T_g$ denotes a convolutional block with network weights $\theta_{T_g}$ that gathers the point features. Similar to the concatenation strategy in \cite{NIPS2017_7095}, we concatenate all centroid features $\{\hat{f}_l\}_{l=1}^L$ from the $L$ local geometric structures as the final extracted feature $f_g\in\mathbb{R}^{L\times d}$ over the whole point cloud, where $d$ is the feature dimension of each local geometric structure. $f_g$ is then forwarded into the geometric-aware attention module, as presented in Figure~\ref{fig:overview_of_our_model}; a minimal sketch of this construction is given below. \subsection{Geometric-Aware Attention Mechanism}\label{sec:geometric_aware_attention} Although our adaptive-geometric centroid construction module explores $L$ accurate structural centroids $\{\hat{p}_l\}_{l=1}^L$ with discriminative features $\{\hat{f}_l\}_{l=1}^L$, the centroids do not contribute equally to exploring informative 3D geometric characteristics for both past and new classes. In other words, some local geometric structures covering common characteristics can aggravate catastrophic forgetting for past learned classes, while others with unique 3D characteristics help prevent it.
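A minimal sketch of one update step of Eqs.~\eqref{eq:offset_prediction}--\eqref{eq:update_centroid_feature} is as follows (NumPy pseudocode; \texttt{Wp} and \texttt{Wg} are hypothetical fixed weight matrices standing in for the learned layers $T_p$ and $T_g$, and a full implementation would run this inside the training loop with learnable parameters):

\begin{verbatim}
import numpy as np

def knn_idx(P, center, k):
    # indices of the k nearest neighbors of `center` among points P: (U, 3)
    return np.argsort(((P - center) ** 2).sum(1))[:k]

def update_centroids(P, F, phat, fhat, Wp, k):
    # P: (U, 3) points, F: (U, d0) point features from encoder E
    # phat: (L, 3) centroids, fhat: (L, d0) centroid features, Wp: (d0,)
    new_phat = np.empty_like(phat)
    for l in range(len(phat)):
        idx = knn_idx(P, phat[l], k)
        edge_w = (fhat[l] - F[idx]) @ Wp             # T_p(fhat_l - f_li)
        offset = (edge_w[:, None] * (phat[l] - P[idx])).mean(0)  # Eq. (1)
        new_phat[l] = phat[l] + offset               # Eq. (2), update
    return new_phat

def gather_features(P, F, phat, Wg, k):
    # Wg: (d0, d); Eq. (3): element-wise max over the k new neighbors
    return np.stack([(F[knn_idx(P, c, k)] @ Wg).max(0) for c in phat])  # f_g
\end{verbatim}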
To this end, as shown in Figure~\ref{fig:overview_of_our_model}, we design the geometric-aware attention module to highlight unique 3D geometric characteristics that are beneficial for incremental learning, while preventing the catastrophic forgetting caused by common characteristics. To be specific, it quantifies the contributions of the different local geometric structures $\{G_l\}_{l=1}^L$ to class-incremental learning of 3D objects, and highlights the unique local geometric structures with high contributions to alleviate the catastrophic forgetting of past learned classes. Motivated by the channel attention on feature responses \cite{zhang2018rcan} in 2D vision, we integrate a residual mechanism into the geometric-aware attention module to quantify the significance of each local geometric structure. The final semantic feature $f_p\in\mathbb{R}^{L\times d}$ over the whole point cloud is: \begin{equation} \begin{split} f_p &= A_g\cdot f_g + f_g \\ &= \Phi_1\big(T_u(\Phi_2(T_d(f_g; \theta_{T_d})); \theta_{T_u})\big)\cdot f_g + f_g, \end{split} \label{eq:centroid_attention} \end{equation} where $A_g = \Phi_1\big(T_u(\Phi_2(T_d(f_g; \theta_{T_d})); \theta_{T_u})\big)\in\mathbb{R}^{L\times d}$ denotes the learned geometric-aware attention, \emph{i.e.}, the quantified contribution of each local geometric structure. $\Phi_1$ and $\Phi_2$ represent the sigmoid and ReLU activation functions, respectively. $T_d$ is a channel-downscaling convolutional block with reduction ratio $r$, and $T_u$ is a channel-upscaling convolutional layer with expansion ratio $r$; $\theta_{T_u}$ and $\theta_{T_d}$ are their network parameters. Note that we then apply a max pooling operation to $f_p$ to obtain the global feature $f_c\in\mathbb{R}^{d}$ before forwarding it into the classifier $C$ for prediction, as shown in Figure~\ref{fig:overview_of_our_model}; a corresponding sketch is given below. \subsection{Score Fairness Compensation}\label{sec:dual_adaptive_fairness_compensations} Even though discriminative features of local geometric structures are extracted by the above modules, the classifier $C$ remains prone to catastrophic forgetting due to the unbalanced data distributions between past and new classes of 3D objects. Most existing models \cite{Castro_2018_ECCV, Rebuffi_2017_CVPR, Wu_2019_CVPR, NIPS2019_9429} in 2D vision utilize knowledge distillation to address this challenge. However, they cannot alleviate the problem that the classifier $C$ tends to predict past 3D objects as new classes; in particular, the classifier $C$ has a strong preference for the abundant new classes of 3D objects. An important factor causing catastrophic forgetting for past 3D objects is thus the highly biased probability prediction in the last layer of the classifier $C$. To tackle this issue, the score fairness compensation strategy is developed to maintain fairness between past and new classes of 3D objects in the classifier $C$ by compensating for the biased predictions in the validation phase.
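Before detailing the compensation, the attention computation in Eq.~\eqref{eq:centroid_attention} and the subsequent pooling can be sketched as follows (NumPy; \texttt{Wd} and \texttt{Wu} are hypothetical stand-ins for the learned blocks $T_d$ and $T_u$):

\begin{verbatim}
import numpy as np

def geometric_aware_attention(fg, Wd, Wu):
    # fg: (L, d) concatenated centroid features
    # Wd: (d, d // r) channel-downscaling, Wu: (d // r, d) channel-upscaling
    hidden = np.maximum(fg @ Wd, 0.0)          # Phi_2 (ReLU) after T_d
    Ag = 1.0 / (1.0 + np.exp(-(hidden @ Wu)))  # Phi_1 (sigmoid) after T_u
    fp = Ag * fg + fg                          # residual attention, Eq. (4)
    return fp.max(axis=0)                      # global feature f_c in R^d
\end{verbatim}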
\begin{table*}[t] \centering \setlength{\tabcolsep}{2.61mm} \caption{Quantitative comparisons on the ModelNet dataset \cite{7298801} with an increment of 4 classes.} \scalebox{0.94}{ \begin{tabular}{|c|cccccccccc|c|} \hline Comparison Methods & 4 & 8 & 12 & 16 & 20 & 24 & 28 & 32 & 36 & 40 & Avg \\ \hline LwF \cite{10.1007/978-3-319-46493-0_37} & 96.5 & 87.2 & 77.5 & 70.6 & 62.3 & 56.8 & 44.7 & 39.4 & 36.1 & 31.5 & 60.3 \\ iCaRL \cite{Rebuffi_2017_CVPR} & 96.8 & 90.4 & 83.6 & 78.3 & 72.5 & 67.3 & 59.6 & 53.1 & 47.8 & 39.6 & 68.9 \\ DeeSIL \cite{deesil-eccv2018} & 97.7 & 91.5 & 85.4 & 80.5 & 74.4 & 71.8 & 65.3 & 58.7 & 52.4 & 43.7 & 72.1 \\ EEIL \cite{Castro_2018_ECCV} & 97.6 & 93.8 & 87.5 & 81.6 & 78.2 & 74.7 & 69.2 & 62.4 & 56.8 & 48.1 & 75.0 \\ IL2M \cite{Belouadah_2019_ICCV} & 97.8 & 95.1 & 89.4 & 85.7 & 83.8 & 82.2 & 78.4 & 72.8 & 67.9 & 57.6 & 81.1 \\ DGMw \cite{Oleksiy_2019_CVPR} & 97.5 & 93.2 & 86.4 & 82.5 & 80.1 & 78.4 & 73.6 & 65.3 & 61.5 & 53.4 & 77.2 \\ DGMa \cite{Oleksiy_2019_CVPR} & 97.5 & 93.4 & 84.7 & 81.8 & 79.5 & 77.8 & 74.1 & 67.4 & 60.8 & 51.5 & 76.8 \\ BiC \cite{Wu_2019_CVPR} & 97.8 & 95.5 & 88.5 & 86.9 & 84.3 & 83.1 & 79.3 & 74.2 & 70.7 & 59.2 & 82.0 \\ RPS-Net \cite{NIPS2019_9429} & 97.7 & 94.6 & 90.3 & 88.2 & 86.7 & 82.5 & 78.0 & 73.6 & 68.4 & 58.3 & 81.7 \\ \hline \hline \rowcolor{lightgray} Ours-w/oAG & 97.8 & 94.3 & 90.1 & 87.5 & 84.2 & 81.7 & 77.9 & 73.5 & 68.4 & 59.1 & 81.5 \\ \rowcolor{lightgray} Ours-w/oGA & 98.1 & 96.0 & 92.4 & 89.7 & 88.2 & 84.5 & 81.3 & 74.0 & 71.7 & 60.3 & 83.6 \\ \rowcolor{lightgray} Ours-w/oSF & \textbf{98.2} & 96.1 & 92.0 & 90.3 & 88.9 & 85.4 & 80.7 & 75.8 & 72.4 & 60.8 & 84.1 \\ \rowcolor{lightgray} Ours & 98.1 & \textbf{97.0} & \textbf{93.4} & \textbf{91.1} & \textbf{89.7} & \textbf{88.2} & \textbf{83.5} & \textbf{77.8} & \textbf{73.1} & \textbf{61.5} & \textbf{85.3} \\ \hline \end{tabular} } \label{tab:exp_modelnet40_dataset} \end{table*} The prediction bias toward new classes of 3D objects appears during the inference stage, due to the unbalanced training samples. To this end, in the validation phase, we correct the prediction probability bias against past classes by incorporating the initial data statistics of past classes, obtained when those classes were first learned in the training stage. The intuitive explanation is that the prediction model is more confident when all training data of a class is available. Moreover, the initial data statistics are available across all incremental states, and the memory storage for them is negligibly small. Concretely, for the $t$-th past learned class, the predicted probability via the classifier $C$ is rectified by: \begin{equation} \begin{split} C^s(f_c; \theta_{C})^t = \left\{ \begin{aligned} &C(f_c; \theta_{C})^t \cdot \frac{\psi_{s_i}(t)}{\psi_{s}(t)} \cdot \frac{\psi(s)}{\psi(s_i)}, \text{if the initial prediction is a new class}, \\ &C(f_c; \theta_{C})^t, \text{otherwise}, \\ \end{aligned} \right. \end{split} \label{eq:score_fairness_compensation} \end{equation} where $C^s(f_c; \theta_{C})^t$ and $C(f_c; \theta_{C})^t$ represent the predicted probabilities of the $t$-th class with and without the fairness compensation in the validation stage. $\psi_{s_i}(t)$ and $\psi_s(t)$ denote the average classification scores predicted for the $t$-th class in its initial state $s_i$ and in the current incremental state $s$. Note that all training data of the $t$-th past 3D object class first appears in the initial state $s_i$. $\psi(s_i)$ and $\psi(s)$ are the mean prediction scores over all newly coming classes of 3D objects in the states $s_i$ and $s$.
Moreover, Eq.~\eqref{eq:score_fairness_compensation} applies the probability rectification to the past-class predictions only when the point cloud is initially predicted as a new class. By rescaling the predicted probabilities of past 3D object classes with the adaptive statistic coefficient $\frac{\psi_{s_i}(t)}{\psi_{s}(t)} \cdot \frac{\psi(s)}{\psi(s_i)}$, Eq.~\eqref{eq:score_fairness_compensation} promotes inference fairness between the past and new classes of 3D objects. \begin{algorithm}[t] \caption{\small Optimization Framework of Our I3DOL Model.} \begin{algorithmic}[1] \State {\bfseries Input:} The training data $D = \{D_s\}_{s=1}^S$ and the number of exemplars $|M|$. \State {\bfseries Initialize: $\{\theta_{D_s}\}_{s=1}^S$}; \State {\bfseries For} $s=1, \cdots, S$ \textbf{do} \State \quad Update the exemplar set $M$; \State {\quad\bfseries While} not converged \textbf{do} \State \qquad Randomly sample a batch of examples from the newly coming data $D_s$ and the exemplar set $M$; \State \qquad Update $\theta_{D_s}$ via Eq.~\eqref{eq:overall_training_objective}; \State {\quad\bfseries End} \State \quad Store data statistics for the score fairness compensation of Eq.~\eqref{eq:score_fairness_compensation} in the validation phase; \State {\bfseries End} \State {\bfseries Return} $\{\theta_{D_s}\}_{s=1}^S$; \end{algorithmic} \label{alg:I3DOL} \end{algorithm} \subsection{Implementation Details} For the network architecture, we employ PointNet \cite{Qi_2017_CVPR} as the backbone of the encoder $E$, and apply a four-layer multi-layer perceptron as the classifier $C$. Furthermore, we utilize the Adam optimizer, with the learning rate and weight decay initialized as 0.0025 and 0.0005. The number of constructed local geometric structures is set to 64, and their features are extracted from the third convolutional block of the encoder $E$. The optimization objective of our model in the $s$-th incremental state is formulated as follows: \begin{equation} \min\limits_{\theta_{D_s}}\mathcal{L}_{\mathrm{obj}}^s = \mathbb{E}_{(x_i^s, y_i^s)\in D_s\cup M}[- \sum\limits_{t=1}^{c_p+c_s}(y_i^s)^t\mathrm{log}(C(f_c; \theta_{C})^t)], \label{eq:overall_training_objective} \end{equation} where $C(f_c; \theta_{C})^t$ and $(y_i^s)^t$ denote the probability predicted for the $t$-th class in the training phase and its corresponding one-hot label, respectively. For simplicity, $\theta_{D_s}$ represents all network parameters of our I3DOL model, comprising $\theta_{E}, \theta_{C}, \theta_{T_p}, \theta_{T_g}, \theta_{T_u}$ and $\theta_{T_d}$. \textbf{Algorithm 1} summarizes the whole optimization framework of our proposed I3DOL model.
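A minimal sketch of the validation-phase rectification in Eq.~\eqref{eq:score_fairness_compensation} is given below (NumPy; it assumes the stored statistics $\psi_{s_i}(t)$, $\psi_s(t)$, $\psi(s_i)$ and $\psi(s)$ have been arranged as per-class arrays, a storage convention adopted here for illustration rather than the exact one of the implementation):

\begin{verbatim}
import numpy as np

def rectify_scores(probs, is_new, psi_init, psi_cur, bar_init, bar_cur):
    # probs: (C,) classifier output for one point cloud
    # is_new: (C,) bool mask of classes introduced in the current state s
    # psi_init[t], psi_cur[t]: average score of class t in its initial
    #   state s_i and in the current state s
    # bar_init[t], bar_cur[t]: mean scores over the new classes of states
    #   s_i and s, aligned per class t for convenience
    out = probs.copy()
    if is_new[np.argmax(probs)]:       # rectify only if predicted as new
        past = ~is_new
        out[past] *= (psi_init[past] / psi_cur[past]) \
                     * (bar_cur[past] / bar_init[past])   # Eq. (5)
    return out
\end{verbatim}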
\begin{table*}[t] \centering \setlength{\tabcolsep}{3.17mm} \caption{Quantitative comparisons on the ShapeNet dataset \cite{DBLP:journals/corr/ChangFGHHLSSSSX15} with an increment of 6 classes.} \scalebox{0.94}{ \begin{tabular}{|c|ccccccccc|c|} \hline Comparison Methods & 6 & 12 & 18 & 24 & 30 & 36 & 42 & 48 & 53 & Avg \\ \hline LwF \cite{10.1007/978-3-319-46493-0_37} & 96.3 & 86.8 & 78.5 & 68.3 & 60.7 & 52.4 & 45.1 & 42.6 & 39.5 & 63.4 \\ iCaRL \cite{Rebuffi_2017_CVPR} & 96.7 & 88.4 & 82.1 & 74.9 & 68.5 & 62.3 & 56.9 & 51.3 & 44.6 & 69.5 \\ DeeSIL \cite{deesil-eccv2018} & 97.1 & 90.2 & 84.3 & 76.5 & 73.7 & 65.6 & 57.3 & 53.6 & 47.2 & 71.7 \\ EEIL \cite{Castro_2018_ECCV} & 97.3 & 91.8 & 86.4 & 79.5 & 73.1 & 67.3 & 63.4 & 57.1 & 51.6 & 74.2 \\ IL2M \cite{Belouadah_2019_ICCV} & 97.5 & 91.4 & 86.7 & 79.8 & 75.6 & 71.8 & 69.1 & 64.8 & 61.4 & 77.6 \\ DGMw \cite{Oleksiy_2019_CVPR} & 97.2 & 90.8 & 85.9 & 78.3 & 74.4 & 69.5 & 62.4 & 56.3 & 49.2 & 73.8 \\ DGMa \cite{Oleksiy_2019_CVPR} & 97.2 & 91.6 & 85.1 & 77.9 & 73.2 & 68.5 & 62.8 & 55.4 & 48.7 & 73.4 \\ BiC \cite{Wu_2019_CVPR} & 97.4 & 92.1 & 86.7 & 81.5 & 76.4 & 73.7 & 69.8 & 67.6 & 64.2 & 78.8 \\ RPS-Net \cite{NIPS2019_9429} & \textbf{97.6} & 92.5 & 87.4 & 80.1 & 77.4 & 72.3 & 68.4 & 66.5 & 63.5 & 78.4 \\ \hline \hline \rowcolor{lightgray} Ours-w/oAG & 97.3 & 92.1 & 87.5 & 80.4 & 76.6 & 72.7 & 69.8 & 65.2 & 62.3 & 78.2 \\ \rowcolor{lightgray} Ours-w/oGA & 97.4 & 92.8 & 88.1 & 82.0 & 77.3 & 74.8 & 71.4 & 68.7 & 65.4 & 79.8 \\ \rowcolor{lightgray} Ours-w/oSF & 96.6 & 93.8 & 89.1 & 82.8 & 78.0 & 75.6 & 72.8 & 69.5 & 66.4 & 80.5 \\ \rowcolor{lightgray} Ours & 97.5 & \textbf{94.4} & \textbf{90.2} & \textbf{84.3} & \textbf{80.5} & \textbf{76.1} & \textbf{73.5} & \textbf{70.8} & \textbf{67.3} & \textbf{81.6} \\ \hline \end{tabular} } \label{tab:exp_shapenet54_dataset} \end{table*} \begin{figure}[t] \centering \includegraphics[height=160pt, width=235pt]{.//fig//modelnet_k.pdf} \caption{Effect of different numbers of exemplars on the ModelNet dataset.} \label{fig:effect_different_exemplars} \end{figure} \begin{figure}[t] \centering \includegraphics[height=200pt, width=235pt]{.//fig//modelnet_s5_c8.pdf} \caption{Effect of different numbers of total incremental states on the ModelNet dataset.} \label{fig:effect_different_incremental_states} \end{figure} \begin{figure}[t] \centering \includegraphics[height=205pt, width=235pt]{.//fig//modelnet_converge.pdf} \caption{Convergence across all incremental states on the ModelNet dataset when $S=10$.} \label{fig:convergence_investigation_ModelNet} \end{figure} \section{Experiments} In this section, all advanced comparison approaches utilize PointNet \cite{Qi_2017_CVPR} as the baseline feature extractor and perform point cloud data augmentation in the training phase. \subsection{Datasets and Evaluation} Three representative point cloud datasets, \emph{i.e.}, ModelNet \cite{7298801}, ShapeNet \cite{DBLP:journals/corr/ChangFGHHLSSSSX15} and ScanNet \cite{8099744}, are employed to validate the superiority of our I3DOL model. {ModelNet} \cite{7298801} consists of 9843 training samples and 2468 testing samples, which are clean 3D CAD models from 40 classes. We select 800 samples as the exemplar set $M$, and set the total number of incremental states $S$ to 10. {ShapeNet} \cite{DBLP:journals/corr/ChangFGHHLSSSSX15} contains 35037 training examples and 5053 validation examples.
In our experiments, we utilize 53 categories of 3D CAD models gathered from online repositories; 1000 samples are stored as the exemplar set $M$, and $S$ is set to 9. {ScanNet} \cite{8099744}, with 17 categories, is composed of scanned and reconstructed real-world indoor scenes, where the numbers of training and validation samples are 12060 and 3416, respectively. We set $|M|=600$ and $S=9$. For performance evaluation, top-1 classification accuracy is employed as the basic metric. \begin{table*}[t] \centering \setlength{\tabcolsep}{3.17mm} \caption{Quantitative comparisons on the ScanNet dataset \cite{8099744} with an increment of 2 classes.} \scalebox{0.94}{ \begin{tabular}{|c|ccccccccc|c|} \hline Comparison Methods & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 17 & Avg \\ \hline LwF \cite{10.1007/978-3-319-46493-0_37} & 92.2 & 74.8 & 60.3 & 48.2 & 41.6 & 37.3 & 35.7 & 33.5 & 31.8 & 53.1 \\ iCaRL \cite{Rebuffi_2017_CVPR} & 92.4 & 78.7 & 67.4 & 59.7 & 52.5 & 48.2 & 43.5 & 39.9 & 36.3 & 56.0 \\ DeeSIL \cite{deesil-eccv2018} & 92.6 & 80.1 & 71.5 & 63.3 & 57.3 & 52.8 & 48.6 & 45.2 & 43.7 & 63.1 \\ EEIL \cite{Castro_2018_ECCV} & 92.7 & 83.4 & 75.6 & 72.6 & 58.7 & 55.4 & 52.3 & 49.4 & 45.7 & 65.1 \\ IL2M \cite{Belouadah_2019_ICCV} & 92.9 & 84.4 & 77.3 & 70.1 & 60.8 & 56.7 & 54.1 & 52.6 & 48.3 & 66.7 \\ DGMw \cite{Oleksiy_2019_CVPR} & 92.5 & 82.6 & 67.1 & 61.8 & 56.3 & 53.2 & 50.8 & 47.5 & 43.8 & 63.6 \\ DGMa \cite{Oleksiy_2019_CVPR} & 92.5 & 82.2 & 67.8 & 60.2 & 56.6 & 52.7 & 50.4 & 48.1 & 44.7 & 63.7 \\ BiC \cite{Wu_2019_CVPR} & 92.8 & 84.2 & 77.5 & 70.3 & 60.6 & 57.2 & 54.3 & 52.4 & 48.5 & 66.8 \\ RPS-Net \cite{NIPS2019_9429} & 92.9 & 84.8 & 77.1 & 70.7 & 61.2 & 57.6 & 55.4 & 53.3 & 49.1 & 67.3 \\ \hline \hline \rowcolor{lightgray} Ours-w/oAG & 92.9 & 83.7 & 76.8 & 73.6 & 60.2 & 56.7 & 54.0 & 51.8 & 47.5 & 66.4 \\ \rowcolor{lightgray} Ours-w/oGA & 93.2 & 85.5 & 77.9 & 75.6 & 62.1 & 58.3 & 56.4 & 53.9 & 51.0 & 68.2 \\ \rowcolor{lightgray} Ours-w/oSF & \textbf{93.3} & 86.1 & 78.5 & 76.0 & 62.6 & 59.1 & 56.3 & 54.8 & 51.5 & 68.7 \\ \rowcolor{lightgray} Ours & 93.2 & \textbf{87.2} & \textbf{80.5} & \textbf{77.8} & \textbf{64.3} & \textbf{61.9} & \textbf{58.2} & \textbf{56.8} & \textbf{52.1} & \textbf{70.2} \\ \hline \end{tabular} } \label{tab:exp_scannet17_dataset} \end{table*} \subsection{Experiments on ModelNet Dataset} \subsubsection{Performance Comparisons} We present the comparison performance on the ModelNet dataset \cite{7298801} in Table~\ref{tab:exp_modelnet40_dataset}, with an increment of 4 classes per incremental state. Some key observations from Table~\ref{tab:exp_modelnet40_dataset} are as follows: 1) Our proposed I3DOL model significantly outperforms all advanced comparison approaches \cite{Rebuffi_2017_CVPR, Castro_2018_ECCV, Belouadah_2019_ICCV, Oleksiy_2019_CVPR, Wu_2019_CVPR, 10.1007/978-3-319-46493-0_37, NIPS2019_9429} from 2D vision by about 3.6\%$\sim$25.2\% in terms of average accuracy, which illustrates the superiority of our I3DOL model. 2) For class-incremental learning of 3D objects, our model effectively alleviates the catastrophic forgetting of past 3D object classes when compared with the other competing approaches \cite{Rebuffi_2017_CVPR, Castro_2018_ECCV, Belouadah_2019_ICCV, Oleksiy_2019_CVPR, Wu_2019_CVPR, NIPS2019_9429}. 3) The irregular point cloud representation is characterized well by our I3DOL model, which promotes the classification performance.
\subsubsection{Ablation Studies} As shown in the gray rows of Table~\ref{tab:exp_modelnet40_dataset}, ablation experiments on the ModelNet dataset illustrate the effectiveness of the different components of our I3DOL model. We denote our I3DOL model without the adaptive-geometric centroid construction, the geometric-aware attention mechanism, or the score fairness compensation as Ours-w/oAG, Ours-w/oGA and Ours-w/oSF, respectively. The average prediction accuracy degrades by 1.2\%$\sim$3.8\% when any component is removed from our proposed I3DOL model. This demonstrates that all proposed modules play an indispensable role in highlighting unique and informative 3D geometric characteristics with high contributions to class-incremental learning of 3D objects. \subsubsection{Effects of Exemplars and Incremental States} As shown in Figure~\ref{fig:effect_different_exemplars} and Figure~\ref{fig:effect_different_incremental_states}, this subsection investigates the effects of different numbers of exemplars and incremental states on the ModelNet dataset by varying $|M|$ and $S$. The essential conclusions drawn from Figure~\ref{fig:effect_different_exemplars} and Figure~\ref{fig:effect_different_incremental_states} are as follows: 1) Our proposed I3DOL model better prevents the catastrophic forgetting of past 3D object classes under different settings of exemplars and incremental states. 2) More selected exemplars help our I3DOL model to better alleviate the catastrophic forgetting of past learned 3D classes brought by redundant 3D geometric characteristics and unbalanced training samples. \subsubsection{Convergence Analysis} Figure~\ref{fig:convergence_investigation_ModelNet} investigates the convergence stability of our model on the ModelNet dataset. Specifically, our proposed I3DOL model reaches stable performance after about 140 training epochs, and it converges efficiently across all incremental states. \subsection{Experiments on ShapeNet and ScanNet Datasets} As shown in Table~\ref{tab:exp_shapenet54_dataset} and Table~\ref{tab:exp_scannet17_dataset}, this subsection presents extensive quantitative comparisons and ablation studies on ShapeNet \cite{DBLP:journals/corr/ChangFGHHLSSSSX15} and ScanNet \cite{8099744}. Some conclusions can be drawn from the presented comparison performance: 1) Compared with other advanced approaches such as \cite{Rebuffi_2017_CVPR, Castro_2018_ECCV, Wu_2019_CVPR, NIPS2019_9429, 10.1007/978-3-319-46493-0_37}, our I3DOL model better alleviates catastrophic forgetting, improving average accuracy by 2.8\%$\sim$18.2\%. 2) The ablation studies verify that each component of our I3DOL model is effectively designed to facilitate class-incremental learning of 3D objects. 3) Our I3DOL model better explores unique and informative 3D geometric characteristics and addresses the unbalanced data distributions to alleviate catastrophic forgetting of past learned 3D object classes. \section{Conclusion} In this paper, we develop a new Incremental 3D Object Learning (\emph{i.e.}, I3DOL) model to continually learn new classes of 3D objects while alleviating catastrophic forgetting of past classes. Specifically, the adaptive-geometric centroid module constructs several discriminative local geometric structures to characterize the irregular point cloud representation.
Meanwhile, the geometric-aware attention mechanism highlights unique and informative 3D geometric characteristics with high contributions to class-incremental learning of 3D objects. Moreover, we propose the score fairness compensation strategy to correct biased score prediction, which effectively prevents catastrophic forgetting of past classes. The effectiveness of our I3DOL model is well validated by extensive experiments.
{ "timestamp": "2020-12-17T02:20:52", "yymm": "2012", "arxiv_id": "2012.09014", "language": "en", "url": "https://arxiv.org/abs/2012.09014" }
\section{Introduction} Deep learning models have achieved tremendous success in a wide variety of tasks related to computer vision (CV) and natural language processing (NLP). However, we still do not have a clear understanding of the generalization behavior of these models. Generalization is defined as the ability of a classifier to perform well on unseen data, and it is essential for deploying deep models in real-world applications such as healthcare. Recently, \citet{GoodFellow15} showed that neural networks can misclassify an example even with perturbations imperceptible to the human eye, and \citet{Zhang17} demonstrated that sufficiently overparameterized neural networks can even fit random labels. Both of these observations question the ability of a neural network to generalize well. Moreover, in both cases, the cross-entropy loss on the training dataset remained small, indicating that it cannot serve as a predictor of generalization. Thus, it is crucial to understand the reason behind the generalization of neural networks. Many generalization bounds based on margins and weight norms have been proposed for deep networks \citep{Bartlett17,Pitas17}. In a recent work, \citet{Jiang20} studied the capability of these bounds to predict the generalization gap of a network. In this work, we propose a few other measures based on the noise stability and hidden-layer margins of the network. \section{Notation} A neural network is represented as a function $f$ that is given by $f(\mathbf{x}) = W_l(\sigma(W_{l-1}...(W_1\mathbf{x} + \mathbf{b}_1) + \mathbf{b}_2)....) + \mathbf{b}_l$, where $l$ represents the number of layers in the network, $\mathbf{x} \in \mathbb{R}^d$ represents the input to the network, $\sigma$ represents the non-linearity in the network operating element-wise and $W_j \in \mathbb{R}^{h_j \times h_{j-1}}, \mathbf{b}_j \in \mathbb{R}^{h_j}$ represent the weights and biases of the network, with $h_j$ representing the width of the $j^{th}$ hidden layer. This representation suffices for a convolutional network as well, because the convolution operation can be replaced by a sparse weight matrix. The pre-activations and activations at the $j^{th}$ layer are represented by $\mathbf{z}_j(\mathbf{x})$ and $\mathbf{a}_j(\mathbf{x})$ respectively. Using this notation, the neural network can be defined as \begin{align*} \mathbf{a}_0(\mathbf{x}) &= \mathbf{x} \\ \mathbf{z}_{j}(\mathbf{x}) &= W_{j} \mathbf{a}_{j-1}(\mathbf{x}) + \mathbf{b}_j \\ \mathbf{a}_j(\mathbf{x}) &= \sigma(\mathbf{z}_j(\mathbf{x})) \end{align*} $\| W \|_2$ and $\| W \|_\text{F}$ represent the spectral and Frobenius norms, respectively, of a matrix $W$. The training dataset is represented by $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^n$, where $n$ represents the number of points in the training dataset. Let the set of all classes in the dataset be denoted by $\mathcal{Y}$. $\mathcal{N}(\mathbf{\mu}, \Sigma)$ represents the normal distribution with mean $\mathbf{\mu}$ and covariance $\Sigma$. $\mathbb{E}(\mathbf{X})$ represents the expectation of a random variable $\mathbf{X}$.
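To make this notation concrete, the following minimal NumPy sketch (our illustration with hypothetical layer widths, not the competition code referenced below) records the pre-activations $\mathbf{z}_j(\mathbf{x})$ and activations $\mathbf{a}_j(\mathbf{x})$ during a forward pass:
\begin{verbatim}
import numpy as np

def forward(x, weights, biases, sigma=lambda z: np.maximum(z, 0.0)):
    # Records pre-activations z_j and activations a_j; weights[j-1] and
    # biases[j-1] play the role of W_j and b_j, and sigma is the
    # element-wise non-linearity (ReLU here). No non-linearity is
    # applied after the last layer, so f(x) = z_l(x).
    a, zs, activations = x, [], [x]
    for j, (W, b) in enumerate(zip(weights, biases), start=1):
        z = W @ a + b                  # z_j(x) = W_j a_{j-1}(x) + b_j
        zs.append(z)
        a = sigma(z) if j < len(weights) else z
        activations.append(a)
    return zs, activations

# Toy usage with hypothetical widths (d, h_1, h_2) = (8, 16, 4):
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(16, 8)), rng.normal(size=(4, 16))]
bs = [np.zeros(16), np.zeros(4)]
zs, acts = forward(rng.normal(size=8), Ws, bs)
\end{verbatim}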
\section{Methodology} The measures used can be broadly divided into three different categories: Noise-based, Margin-based and Norm-based\footnote{Code available: \href{https://github.com/VASHISHT-RAHUL/Generalization_performance_of_neural_networks}{https://github.com/VASHISHT-RAHUL/Generalization\_performance\_of\_neural\_networks}}. In this section, we describe these methods; their results in the PGDL competition are discussed in the following section. \subsection{Noise-based measures} The noise stability of the $j^{th}$ layer with respect to the $(j-1)^{th}$ layer is measured by adding Gaussian noise to the activations of the $(j-1)^{th}$ layer and quantifying its propagation to the pre-activations of the $j^{th}$ layer. Precisely, let $\mathbf{Y} \sim \mathcal{N}(\mathbf{0}, I)$, where $\mathbf{Y} \in \mathbb{R}^{h_{j-1}}$. Then, define \begin{align*} \mathbf{a}'_{j-1}(\mathbf{x}) &= \mathbf{a}_{j-1}(\mathbf{x}) + \sqrt{\frac{\nu}{h_{j-1}}}\| \mathbf{a}_{j-1}(\mathbf{x}) \| \mathbf{Y} \\ \mathbf{z}'_{j}(\mathbf{x}) &= W_{j} \mathbf{a}'_{j-1}(\mathbf{x}) + \mathbf{b}_j \end{align*} where $\nu$ controls the standard deviation of the Gaussian noise and the other factors ensure that $\mathbb{E}(\| \mathbf{a}'_{j-1}(\mathbf{x}) \|^2) = (1 + \nu)\mathbb{E}(\| \mathbf{a}_{j-1}(\mathbf{x}) \|^2)$, thus making the noise proportional to the scale of the inputs. In this case, the noise stability ($\beta_j(\mathbf{x})$) of layer $j$ with respect to layer $j-1$ is defined as \[ \beta_j(\mathbf{x}) = \frac{1}{\nu} \frac{\| \mathbf{z}'_j(\mathbf{x}) - \mathbf{z}_j(\mathbf{x}) \|^2}{\| \mathbf{z}_j(\mathbf{x}) \|^2} \] Two measures defined using $\beta_j(\mathbf{x})$ are given below: \[ \textbf{mean-noise-stability} = \frac{1}{nl}\sum_{i=1}^n \sum_{j=1}^l \beta_j(\mathbf{x}_i) \] \[ \textbf{geometric-mean-noise-stability} = \frac{1}{nl}\sum_{i=1}^n \sum_{j=1}^l \log(\beta_j(\mathbf{x}_i)) \] Two other measures can be defined similarly (\textbf{mean-noise-stability-output} and \textbf{geometric-mean-noise-stability-output}), using the noise stability of the output with respect to the input. Variations of this measure have been used in recent papers \citep{Arora18,Nagarajan19} for establishing data-dependent generalization bounds.
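A minimal NumPy sketch of $\beta_j(\mathbf{x})$ for a single layer follows (our illustration; averaging over several noise draws, which the definition above leaves implicit, is our choice for a stable estimate):
\begin{verbatim}
import numpy as np

def noise_stability(W, b, a_prev, nu=0.01, n_draws=100, rng=None):
    # Monte-Carlo estimate of beta_j(x): relative change of the
    # pre-activations z_j = W a_{j-1} + b under Gaussian noise of
    # relative size nu added to the previous-layer activations.
    rng = rng or np.random.default_rng(0)
    h_prev = a_prev.shape[0]
    z = W @ a_prev + b
    betas = []
    for _ in range(n_draws):
        Y = rng.standard_normal(h_prev)
        a_noisy = a_prev + np.sqrt(nu / h_prev) * np.linalg.norm(a_prev) * Y
        z_noisy = W @ a_noisy + b
        betas.append(np.linalg.norm(z_noisy - z) ** 2
                     / (nu * np.linalg.norm(z) ** 2))
    return float(np.mean(betas))

# mean-noise-stability then averages beta_j over layers j and inputs x_i.
\end{verbatim}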
\subsection{Margin-based measures} Multiple generalization bounds based on the output-layer margin have been established for deep networks \citep{Jiang20}. However, margin can be defined at intermediate layers as well \citep{Elsayed18}. Moreover, a linear parametric model with these margins as inputs has been shown to predict the generalization gap reasonably well, given a constant depth \citep{Jiang19}. However, with varying depth, determining the inputs for the parametric model is tricky; moreover, we explicitly wanted to avoid parametric models and focus on theoretically grounded measures. \subsubsection{Input-layer margin ($\gamma_{\text{inp}}$)} It is defined as the minimum perturbation that needs to be added to an input $\mathbf{x}$ so that the perturbed input is misclassified by the network \citep{Elsayed18}. It was approximated using the first-order Taylor expansion of the network around the input. \subsubsection{All-layer-margin ($\gamma_{\text{all}}$)} Consider a neural network represented as a composition of $l$ functions denoted by $f_1, ..., f_l$. Let $\delta_1, ..., \delta_l$ represent the perturbations applied at each layer. Then the perturbed network output ($F(\mathbf{x}, \delta)$) is given by \begin{align*} g_1(\mathbf{x}, \delta) &= f_1(\mathbf{x}) + \delta_1 \| \mathbf{x} \| \\ g_j(\mathbf{x}, \delta) &= f_j(g_{j-1}(\mathbf{x}, \delta)) + \delta_j \| g_{j-1}(\mathbf{x}, \delta) \| \\ F(\mathbf{x}, \delta) &= g_l(\mathbf{x}, \delta) \end{align*} Then, the all-layer-margin \citep{Wei20} is defined as \begin{align*} \gamma_{\text{all}}(\mathbf{x}, y) =& \min_{\delta_1, ..., \delta_l} \sqrt{\sum_{j=1}^l \| \delta_j \|^2}\\ &\text{subject to } \argmax_{y' \in \mathcal{Y}} F(\mathbf{x}, \delta)_{y'} \neq y \end{align*} It was approximated by performing gradient ascent on the loss function with respect to the parameters $\delta_1, ..., \delta_l$. \subsubsection{Margin-jacobian} This measure was defined by combining the output-layer margin ($\gamma_{\text{out}}$) with the Jacobian of the output with respect to the intermediate layers. \[ \textbf{margin-jacobian} = \left(\frac{l}{\gamma_{\text{out}}^2}\right)^{\frac{1}{l}} + \frac{\sum_{i=1}^{n} \sum_{j=1}^l \frac{1}{h_l h_j}\| \frac{\partial f(\mathbf{x}_i)}{\partial \mathbf{a}_j} \|_{\text{F}}^2}{nl^2\gamma_{\text{out}}} \] The first term normalizes the margin by the depth and was inspired by one of the bounds in \citet{Jiang20}. $\frac{\sum_{i=1}^{n} \sum_{j=1}^l \frac{1}{h_l h_j}\| \frac{\partial f(\mathbf{x}_i)}{\partial \mathbf{a}_j} \|_{\text{F}}^2}{nl}$ represents the average Jacobian norm, which is further normalized by depth and margin. We found that the sum of these terms works better empirically than the individual terms. \subsection{Norm-based Measures} Multiple bounds based on spectral and Frobenius norms have been proposed recently for feed-forward as well as convolutional networks \citep{Bartlett17,Pitas17}. The capability of these bounds to predict the generalization gap has also been explored in \citet{Jiang20}. We provide an efficient way of evaluating the spectral norm of weight matrices in the case of convolutional networks. Converting the weight matrix of a convolution operator to the sparse weight matrix of a linear operator is a time-consuming task. Instead, for calculating the spectral norm, or any eigenvalues of the convolution operator, we used the Power Method, which works for any linear operator, whether or not it is defined explicitly by a weight matrix (for more details, refer to the code). This trick allowed us to evaluate these bounds for deep networks within the time limit of the competition. Based on this faster method of evaluation, the one metric based on the spectral norm that we tried was \[ \textbf{fast-log-spec} = (1 - \frac{1}{l})\sum_{i=1}^l \log(\| W_i \|_2^2) - \log(\gamma_{\text{out}}^2) \] This is based on the generalization bound for convolutional networks from \citet{Pitas17}.
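The power method only needs the operator and its adjoint as callables; a minimal sketch (our illustration, not the competition code itself) is given below:
\begin{verbatim}
import numpy as np

def spectral_norm(apply_A, apply_At, in_shape, n_iter=100, rng=None):
    # Power method for ||A||_2, given only callables for Ax and A^T y;
    # iterating A^T A converges to the top right-singular vector, and
    # ||A v|| then estimates the largest singular value.
    rng = rng or np.random.default_rng(0)
    v = rng.standard_normal(in_shape)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = apply_At(apply_A(v))
        v = w / np.linalg.norm(w)
    return np.linalg.norm(apply_A(v))

# Toy usage with a dense matrix; for a convolutional layer, apply_A and
# apply_At would be the convolution and its transposed convolution.
A = np.random.default_rng(1).normal(size=(30, 20))
est = spectral_norm(lambda x: A @ x, lambda y: A.T @ y, (20,))
print(est, np.linalg.svd(A, compute_uv=False)[0])  # should nearly agree
\end{verbatim}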
\section{Results in PGDL} The PGDL competition had three different kinds of datasets: public, development, and private. The public dataset had two tasks, Task 1 and Task 2, and was completely accessible to the competitors, i.e., we could access the final test error of the networks as well as the final scores on these tasks. The development dataset had two tasks, Task 4 and Task 5, and was partially accessible, i.e., we could only access the final scores on these tasks. The private dataset had four tasks, Task 6, Task 7, Task 8 and Task 9, and was significantly more complex than the public and development datasets. Participants were allowed limited submissions on this dataset and the final score was only revealed after the competition ended. Each task consisted of a dataset and a set of neural networks with varying hyperparameters (such as depth and dropout) that were trained to almost 100\% accuracy on the training dataset. The aim was to come up with a measure that can predict the test error of a network, given the training data and the network's parameters. The evaluation metric used was the mutual information between the predicted measure and the generalization error, conditioned on the hyperparameters. More details regarding the competition can be found on the \href{https://sites.google.com/view/pgdl2020}{PGDL} website. The results for the development and public datasets for all the measures (except for one that we did not try on the development dataset) are given in Table \ref{tab:pubdev}. As limited submissions were allowed on the private dataset, results for a few of these measures on the private dataset are shown in Table \ref{tab:priv} (for exact hyperparameter settings, refer to the code). As can be seen, \textbf{geometric-mean-noise-stability} clearly beats the other measures on the private dataset. However, its performance on a particular task, Private Task 8, is poor. Similarly, on the development and public datasets, it does not perform as well as the other measures. This clearly indicates a non-uniformity in the measure that is predictive of generalization on a particular task. \begin{table}[t] \begin{center} \begin{tabular}{ p{5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} } \hline Method & Public & Dev & Public Task 1 & Public Task 2 & Dev Task 4 & Dev Task 5 \\ \hline mean-noise-stability & 10.77 & 0.58 & 0.37 & 21.17 & 0.20 & 0.95 \\ \hline geometric-mean-noise-stability & 2.66 & 1.45 & 0.10 & 5.23 & 1.50 & 1.40 \\ \hline mean-noise-stability-output & 6.21 & 1.64 & \textbf{11.07} & 1.35 & 0.27 & 3.02 \\ \hline geometric-mean-noise-stability-output & 5.99 & & 9.53 & 2.45 & &\\ \hline input-layer-margin & 1.93 & 7.36 & 2.63 & 1.22 & 5.36 & \textbf{9.37} \\ \hline all-layer-margin & \textbf{12.79} & 3.30 & 1.23 & \textbf{24.36} & 0.32 & 6.27\\ \hline margin-jacobian & 2.81 & \textbf{9.38} & 0.15 & 5.47 & \textbf{18.23} & 0.53\\ \hline fast-log-spec & 5.95 & 2.11 & 7.04 & 4.87 & 0.05 & 4.17\\ \hline \end{tabular} \caption{Results on the public and development datasets} \label{tab:pubdev} \end{center} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{ p{5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} } \hline Method & Private & Private Task 6 & Private Task 7 & Private Task 8 & Private Task 9\\ \hline geometric-mean-noise-stability & \textbf{6.51} & 7.76 & \textbf{8.02} & 0.67 & \textbf{9.58}\\ \hline mean-noise-stability-output & 2.10 & 0.07 & 0.80 & \textbf{5.77} & 1.75\\ \hline input-layer-margin & 1.34 & 1.18 & 2.74 & 0.62 & 0.82\\ \hline all-layer-margin & 2.50 & 3.15 & 2.21 & 0.63 & 4.01\\ \hline margin-jacobian & 4.01 & \textbf{8.50} & 2.02 & 2.67 & 2.84\\ \hline \end{tabular} \caption{Results on the private dataset} \label{tab:priv} \end{center} \end{table} \section{Conclusion} Although the noise-stability-based measure beats the other measures on the private dataset, the same does not hold for the public and development datasets. This clearly indicates a non-uniformity in the measure required to predict generalization for a given task, which can be explored further if the data of these tasks is made available to the competitors. \bibliographystyle{unsrtnat} \small
{ "timestamp": "2020-12-17T02:15:07", "yymm": "2012", "arxiv_id": "2012.08854", "language": "en", "url": "https://arxiv.org/abs/2012.08854" }
\section{Introduction} A fundamental quantum description of the gravitational field which is valid across arbitrarily short length scales remains unknown. Frequently, a theory of quantum gravity is claimed to necessitate tools that go beyond those of quantum field theory, due to the perturbative non-renormalizability of General Relativity (GR) \cite{tHooft:1974toh,Christensen:1979iy,Goroff:1985th}. However, perturbative renormalizability is neither necessary nor sufficient to define a fundamental quantum field theory. As a concrete example, a theory can be perturbatively renormalizable but not valid up to arbitrary scales due to a Landau pole \cite{PhysRev.95.1300,Gockeler:1997dn}, suffering from a triviality problem \cite{frohlich1982triviality,Gies:2004hy}. What suffices for a fundamental description is the finiteness of the running couplings of the underlying theory due to quantum fluctuations. One possible way of ensuring such a property is by the existence of a fixed point in the renormalization group flow. At such a point, the theory reaches a scale-invariant regime and a continuum limit, i.e., the removal of an ultraviolet (UV) cutoff, can be achieved \cite{Wetterich:2019qzx}. From this point of view, a theory of quantum gravity based on continuum quantum field theory techniques which features a fixed point can very well define a fundamental theory. If such a fixed point sits at vanishing values of the couplings, the theory is dubbed asymptotically free, while for non-vanishing (interacting) fixed points, the theory is said to be asymptotically safe. In \cite{weinberg1979ultraviolet}, S. Weinberg conjectured that quantum gravity could be an asymptotically safe theory despite its perturbative non-renormalizability. A major obstacle to testing the validity of such a conjecture lies in the fact that, at an interacting fixed point, perturbation theory might not be applicable and non-perturbative tools are mandatory. Two main perspectives were taken over the past decades. On the one hand, as in lattice QCD, non-perturbative information can be extracted from a lattice formulation of quantum gravity. Such a field has developed into the (Causal) Dynamical Triangulations program and evidence for a well-defined continuum limit was collected in recent years\footnote{Evidence for a suitable continuum limit in four dimensions was also obtained within quantum Regge calculus, a slightly different approach from Dynamical Triangulations, see, e.g., \cite{Hamber:2015jja}.} \cite{loll2019quantum}. Alternatively, on the continuum quantum field theory side, functional renormalization group (FRG) techniques were applied to gravity in the seminal paper \cite{Reuter:1996cp}, leading to the asymptotic safety program for quantum gravity, see, e.g., \cite{Percacci:2017fkn,Reuter:2019byg,Eichhorn:2018yfc,Pereira:2019dbn,Dupuis:2020fhh,Eichhorn:2020mte,Reichert:2020mja} for recent reviews on the topic. See also \cite{Donoghue:2019clr,Bonanno:2020bil} for critical discussions on this program for quantum gravity. The application of FRG techniques has enabled a systematic search for a non-trivial fixed point in the renormalization group (RG) flow of quantum gravity, as proposed in \cite{weinberg1979ultraviolet}. Recently, Dyson-Schwinger equations were also adapted to the context of quantum gravity in \cite{Hamber:2020isy}, opening up an alternative semi-analytical continuum field-theoretic path to probe the existence of a non-trivial fixed point.
All those perspectives are anchored on the firm pillars of standard quantum field theory and have collected compelling evidence for the existence of a well-defined continuum limit in quantum gravity. This seems to contradict the standard lore that quantum field theory and GR are fundamentally at odds. Nevertheless, even if such proposals for a fundamental theory of quantum spacetime fail to describe our world, one can take an effective field theory perspective and still use quantum field theory techniques to compute quantum-gravitational corrections \cite{Donoghue:1993eb,Hamber:1995cq,Bjerrum-Bohr:2014zsa}, showing that GR and quantum field theory have no a priori incompatibility. In this work, we explore the asymptotic safety scenario for quantum gravity within the framework of the FRG. Yet, merely being able to define a fundamental theory of quantum gravity is not fully satisfactory. First, it must be compatible with our current observations and, second, it must be able to make predictions that can eventually be tested. However, at this point, another difficulty of quantum gravity becomes evident. Quantum gravity effects are usually suppressed by the Planck scale, making the direct measurement of quantum-gravity effects extremely challenging. However, one promising path towards testing the consistency of quantum gravity that has been taken in asymptotically safe quantum gravity is the coupling to matter fields, see, e.g., \cite{Dou:1997fg,Percacci:2002ie,Percacci:2003jz,Shaposhnikov:2009pv,Narain:2009fy,Zanusso:2009bs,Eichhorn:2011pc,Eichhorn:2012va,Dona:2012am,Dona:2013qba,Dona:2014pla,Labus:2015ska,Oda:2015sma,Meibohm:2015twa,Dona:2015tnf,Meibohm:2016mkp,Eichhorn:2016esv,Eichhorn:2016vvy,Biemans:2017zca,Hamada:2017rvn,Christiansen:2017qca,Eichhorn:2017eht,Eichhorn:2017egq,Eichhorn:2017sok,Christiansen:2017cxa,Eichhorn:2017als,Christiansen:2017gtg,Christiansen:2017qca,Eichhorn:2017ylw,Eichhorn:2017lry,Alkofer:2018fxj,Eichhorn:2018akn,Eichhorn:2018ydy,Eichhorn:2018nda,Pawlowski:2018ixd,Eichhorn:2018whv,deBrito:2019epw,Wetterich:2019zdo,Wetterich:2019rsn,Wetterich:2019rsn,Reichert:2019car,Burger:2019upn,Domenech:2020yjf,Daas:2020dyo,Eichhorn:2020sbo,deBrito:2020dta}. Quantum fluctuations of matter degrees of freedom affect the running of gravitational couplings and might eventually change the fixed-point structure. In principle, such fluctuations can be strong enough to destroy the existence of a scale-invariant regime, invalidating the proposal of asymptotic safety. Conversely, quantum-gravitational fluctuations contribute to the running of matter couplings and can therefore affect their behavior at high energies. So far, there are several hints that the asymptotic safety scenario for quantum gravity based on metric theories of gravity which are invariant under the full diffeomorphism group is compatible with the matter content of the Standard Model of particle physics plus minor extensions such as right-handed neutrinos and some dark matter candidates. See \cite{Eichhorn:2018yfc} for more details. From the FRG perspective, one does not declare a classical (or microscopic) action from which the quantum dynamics is derived. Rather, one starts with a set of symmetries which should be respected by the underlying quantum theory.
Such symmetries can be deformed by the introduction of regulator terms which play the role of effectively suppressing the functional integration of ``slow modes'' in the Wilsonian sense \cite{Morris:1998da,Berges:2000ew,Pawlowski:2005xe,Gies:2006wv,Rosten:2010vm,Dupuis:2020fhh}. They act as momentum-dependent mass-like terms for the elementary fields of the theory. Thus, a flowing action $\Gamma_k$, with $k$ a momentum scale, which obeys an exact flow equation \cite{Wetterich:1992yh,Morris:1993qb,Ellwanger:1993mw} is constructed upon such deformed symmetries and interpolates between the full quantum action\footnote{The generating functional of one-particle-irreducible diagrams.} $\Gamma$ and the microscopic action $S$ which enters the Boltzmann factor\footnote{We restrict the discussion to an Euclidean setting from now on.} of the path integral. The infinitely-many terms that define $\Gamma_k$ are parameterized by an infinite set of couplings which, in their dimensionless versions, are coordinates of the so-called theory space. It is thus expected that different symmetries define different (and inequivalent) theory spaces. From a quantum-gravity perspective, alternative theories of the gravitational field based on different symmetry principles will, very likely, define different quantum theories. However, there is a situation where this issue becomes subtle: there are theories which are based on different symmetries but feature the same classical dynamics. A famous example is GR and Unimodular Gravity (UG) \cite{Anderson:1971pn,vanderBij:1981ym,Buchmuller:1988yn,Buchmuller:1988wx,Unruh:1988in,Henneaux:1989zc,Unruh:1989db,ellis2011trace}. Hence, it is an immediate question whether dynamical equivalence remains true in the quantum realm. In UG, the determinant of the metric $g_{\mu\nu}$ is fixed (non-dynamical) to a specific scalar density $\omega^2 (x)$, i.e., \begin{equation} \mathrm{det}\,g_{\mu\nu} = \omega^2 (x)\,. \label{in1} \end{equation} The symmetry group is reduced from the group of diffeomorphisms (\textit{Diff}) to transverse diffeomorphisms (\textit{TDiff}). Such a group is generated by transverse vectors $\epsilon^{\mathrm{T}\alpha}$, which satisfy $\nabla_\alpha \epsilon^{\mathrm{T}\alpha} = 0$, where the covariant derivative is defined with respect to the unimodular metric $g_{\mu\nu}$. The equivalence between GR and UG at the classical level is established by the use of the Bianchi identities, see, e.g., \cite{Percacci:2017fsy}. Quantum mechanically, however, the situation is much more subtle. In particular, there is a long-standing debate in the literature, see, e.g., \cite{Ardon:2017atk,Percacci:2017fsy,Fiol:2008vk,Saltas:2014cta,Padilla:2014yea,Smolin:2009ti,Smolin:2010iq,Alvarez:2015sba,Bufalo:2015wda,Upadhyay:2015fna,Eichhorn:2013xr,Eichhorn:2015bna,Benedetti:2015zsw,deBrito:2020rwu} for some recent references, as to whether equivalence remains when quantum fluctuations are taken into account. Naively, however, one would expect that they are not equivalent at the quantum level since the nature of their quantum fluctuations is very different, i.e., the sum over histories is performed in very different configuration spaces. In particular, with a view towards asymptotic safety, the theory space defined by \textit{Diff}-invariant operators is different from the one associated with \textit{TDiff}. An important and very subtle difference between GR and UG lies in the treatment of the cosmological constant.
In GR, it corresponds to a parameter which is fixed from the beginning and added by hand, as a coupling constant. It is, generically, subject to quantum corrections. In UG, the cosmological constant arises as an integration constant and, therefore, must be fixed by initial conditions. However, as such, it does not enter the classical action of the theory invariant under \textit{TDiff} since the measure term is just a fixed scalar density. Thus, in the former case, the cosmological constant defines a direction in the theory space while in the latter, it does not. This can indicate that, if asymptotically safe, \textit{Diff}- and \textit{TDiff}-invariant theories will not be equivalent. However, as pointed out in \cite{Ardon:2017atk,Percacci:2017fsy,deBrito:2020rwu}, this discussion is more subtle than it appears and is still not completely understood. In this paper, we leave aside this discussion and take \textit{TDiff} as the fundamental symmetry of the would-be theory of quantum gravity and look for further hints for the existence of a non-trivial UV fixed point. For simplicity, we call the hypothetical asymptotically safe theory Unimodular Quantum Gravity (UQG), although this does not mean that our starting point for the quantization is the unimodular version of the Einstein-Hilbert action. Earlier results on asymptotic safety and unimodular gravity can be found in \cite{Eichhorn:2013xr,Saltas:2014cta,Eichhorn:2015bna,Benedetti:2015zsw}, where a non-trivial fixed point was obtained within truncations of the flowing action $\Gamma_k$. We provide a systematic analysis of fixed points within certain classes of truncations of $\Gamma_k$ by taking into account the following refinements: We take $\Gamma_k$ to be a function of the Ricci scalar and the quadratic contraction of Ricci tensors, i.e., \begin{equation} \Gamma_k = \int \mathrm{d}^dx\,\omega\,f_k (R,R^{\mu\nu}R_{\mu\nu})\,. \label{in2} \end{equation} For concreteness, we subdivide the analysis into two classes: the first consists of the so-called $f(R)$-truncations, i.e., $f_k (R,R^{\mu\nu}R_{\mu\nu}) = f_k (R)$, while the second is defined by $f_k (R,R^{\mu\nu}R_{\mu\nu}) = F_k(R^2_{\mu\nu})+R Z_k(R^{2}_{\mu\nu})$, where $F_k(R^2_{\mu\nu})$ and $Z_k(R^2_{\mu\nu})$ are arbitrary functions. The second class of truncations was introduced in \cite{Falls:2017lst}. For both classes, we restrict the analysis to polynomial expansions in the curvatures on a spherical background, for technical simplicity. The other improvement that we implement in this work is that we employ the modified flow equation for UQG introduced in \cite{deBrito:2020rwu} due to the properties of the functional measure of UQG discussed in \cite{Ardon:2017atk,Percacci:2017fsy}. Moreover, we treat the anomalous dimensions of the elementary fields in different approximations as discussed in the context of UG in \cite{deBrito:2020rwu}. Finally, we minimally couple matter fields and analyse the impact they have on the gravitational couplings. The paper is organized as follows: in Sect. \ref{MethodModel} we provide a brief discussion of the background-field method, Faddeev-Popov quantization and FRG techniques for UQG and describe the model that we investigate in this work. The flow equation is set up in Sect. \ref{FlowEqSec}. In Sect. \ref{Projectionsf} we discuss the two classes of polynomial projections of $f(R,R^{\mu\nu}R_{\mu\nu})$ and the extraction of beta functions.
Results for the interacting gravitational fixed-point structure both for pure gravity and gravity-matter systems are collected in Sect. \ref{OverallResults}. Finally, we draw our conclusions in Sect. \ref{Conclusions}. Technical details and expressions for the anomalous dimensions used in this work are presented in the appendices. \section{Method and Model}\label{MethodModel} \subsection{UG and the background-field method}\label{ugbfm} One of the challenges in the application of coarse-graining techniques to quantum gravity lies in the lack of a notion of an external scale that distinguishes what is coarser from what is finer. In order to define such a structure, the background-field method \cite{Abbott:1981ke} is employed, but see \cite{Falls:2020tmj}. The metric $g_{\mu\nu}$ is split into a fixed background metric $\bar{g}_{\mu\nu}$ and a fluctuating part $h_{\mu\nu}$, i.e., $g_{\mu\nu} = f(\bar{g},h)_{\mu\nu}$, where $f$ is an arbitrary function. For the purposes of this work, it is highly convenient to choose the so-called exponential parameterization or split of the metric. It is defined by \begin{equation} g_{\mu\nu}=\bar{g}_{\mu\alpha}\left[\exp\left(\kappa\, h^{\cdot}{}_{\cdot}\right)\right]^{\alpha}{}_{\nu}=\bar{g}_{\mu\nu}+\kappa\,h_{\mu\nu}+\sum_{n=2}^{\infty}\frac{\kappa^n}{n!}h_{\mu\alpha_1}\cdots\, h^{\alpha_{n-1}}{}_{\nu}, \label{ugbfm1} \end{equation} with $\kappa = (32\pi G_N)^{1/2}$, where $G_N$ is the Newton constant. Systematic studies employing more general parameterizations in \textit{Diff}-invariant theories can be found in, e.g., \cite{Kalmykov:1995fd,Kalmykov:1998cv,Ohta:2016npm,Ohta:2016jvw,Goncalves:2017jxq,Ohta:2018sze} for perturbative quantum gravity and in \cite{Nink:2014yya,Gies:2015tca,Falls:2015qga,deBrito:2018jxt} in the context of asymptotic safety. The unimodularity condition $\mathrm{det}\,g_{\mu\nu} = \omega^2 (x)$ can be easily implemented in Eq.~\eqref{ugbfm1} by choosing a unimodular background $\mathrm{det}\,\bar{g}_{\mu\nu} = \omega^2 (x)$ and traceless fluctuations $\bar{g}^{\mu\nu}h_{\mu\nu} \equiv h^{\mathrm{tr}} = 0$. In a path-integral formulation, the restriction to traceless fluctuations around a unimodular background in the exponential parameterization automatically restricts the configuration space to unimodular metrics, since $\mathrm{det}\,g_{\mu\nu}=\mathrm{det}\,\bar{g}_{\mu\nu}\,\exp(\kappa\, h^{\mathrm{tr}})$. It must be emphasized that the tracelessness of the fluctuation field is not taken as a gauge condition but rather as a constraint from the very definition of such a field. Such a formulation of the unimodularity condition is called the ``minimal version'' and was put forward in \cite{Eichhorn:2013xr,Eichhorn:2015bna,Benedetti:2015zsw,Ardon:2017atk,Percacci:2017fsy}. This is one particular way to implement the unimodularity condition in the path integral and it is the one we adopt in this work. Different strategies to implement such a condition may lead to conclusions different from those reported here. See, e.g., \cite{Saltas:2014cta,Alvarez:2015sba,Herrero-Valea:2020xaq} for different perspectives on how to implement the unimodularity condition in the path integral.
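As a quick numerical illustration of this last point (our sketch, not part of the original argument), the identity $\mathrm{det}\,e^{A}=e^{\mathrm{tr}\,A}$ shows why traceless fluctuations preserve the unimodularity of the background in the exponential split:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Unimodular (Euclidean) background: take gbar = identity, det = 1.
gbar = np.eye(4)

# Random symmetric fluctuation, projected onto its traceless part.
h = rng.normal(size=(4, 4))
h = 0.5 * (h + h.T)
h -= np.trace(h) / 4.0 * np.eye(4)

kappa = 0.3  # arbitrary value of the coupling for this test
# g_{mu nu} = gbar_{mu alpha} [exp(kappa h)]^alpha_nu ; gbar = identity here.
g = gbar @ expm(kappa * h)

# det g = det gbar * exp(kappa * tr h) = 1 for traceless h.
print(np.linalg.det(g))  # ~1.0 up to floating-point error
\end{verbatim}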
\subsection{Digression on the Faddeev-Popov quantization in UG} The Euclidean path integral of UQG is performed over trace-free fluctuations in the exponential parameterization as discussed in Subsect.~\ref{ugbfm} and can be written formally as \begin{equation} \mathcal{Z}_{\textrm{UQG}}=\int\frac{{\mathcal{D}h^{\textrm{tr-free}}_{\mu\nu}}}{V_{\textrm{TDiff}}}\,\textmd{e}^{-S_{\textrm{UG}}[h;\bar{g}]}, \end{equation} where $V_{\textrm{TDiff}}$ stands for the volume of the \textit{TDiff} group and the classical unimodular action $S_{\text{UG}}[h;\bar{g}]$ is invariant under \textit{TDiff}, but does not need to coincide with the unimodular version of the Einstein-Hilbert action. Applying the standard Faddeev-Popov procedure\footnote{See \cite{Ardon:2017atk,Percacci:2017fsy,deBrito:2020rwu} for further details about the Faddeev-Popov procedure in UG in its minimal version.}, one inserts a formal identity into the functional integral, \begin{equation}\label{partfunction1} \mathcal{Z}_{\textrm{UQG}}=\int\frac{{\mathcal{D}h^{\textrm{tr-free}}_{\mu\nu}}}{V_{\textrm{TDiff}}}\,\bigg(\int\mathcal{D}\epsilon^{\textrm{T}}\Delta_{\text{FP}}\,\delta(F^{\T})\bigg)\,\textmd{e}^{-S_{\textrm{UG}}[h;\bar{g}]}, \end{equation} where $\Delta_{\text{FP}}$ corresponds to the Faddeev-Popov determinant and $F^{\T}_ {\mu}[h;\bar{g}]=0$ corresponds to a transverse gauge-fixing condition. The Faddeev-Popov identity is obtained by the integration over transverse contravariant vectors $\epsilon^{\mathrm{T}\mu}$, which are the generators of \textit{TDiff}. In addition, we assume that the integration measures are invariant under \textit{TDiff}. Unlike the standard situation in the Faddeev-Popov prescription, one cannot simply factor out the integral over the transverse vectors $\epsilon^{\mathrm{T}\mu}$ and identify it with $V_{\textrm{TDiff}}$. The main reason is that the transverse vector is metric-dependent. Following \cite{Ardon:2017atk,Percacci:2017fsy}, the volume of \textit{TDiff} is defined as \begin{equation} V_{\textrm{TDiff}}=\int\mathcal{D}\epsilon\,\delta(\bar{\nabla}_{\mu}\epsilon^{\mu})\,, \end{equation} where it is used that for unimodular metrics, $\nabla_{\mu}\epsilon^{\mu}= \bar{\nabla}_{\mu}\epsilon^{\mu}$. Decomposing $\epsilon^{\mu}$ in terms of transverse and longitudinal parts, i.e., $\epsilon^{\mu}=\epsilon^{\textrm{T}\mu}+\bar{\nabla}^{\mu}\varphi,$ it is straightforward to find that \begin{equation} V_{\textrm{TDiff}}=\textrm{Det}^{-1/2}(-\bar{\nabla}^2)\int\mathcal{D}\epsilon^{\textrm{T}}\,, \end{equation} as a proper representation of the volume of the \textit{TDiff} group: the Jacobian of the decomposition contributes a factor of $\textrm{Det}^{1/2}(-\bar{\nabla}^2)$, while integrating the delta function over $\varphi$ yields $\textrm{Det}^{-1}(-\bar{\nabla}^2)$. Therefore, the final expression of the path integral of unimodular quantum gravity is represented as \begin{equation}\label{partfunction2} \mathcal{Z}_{\textrm{UQG}}=\int\mathcal{D}h^{\textrm{tr-free}}_{\mu\nu}\mathcal{D}\bar{C}_{\alpha}\mathcal{D}C^{\beta}\,\textrm{Det}^{1/2}(-\bar{\nabla}^2)\,\textmd{e}^{-S_{\text{UG}}[h;\bar{g}]-S_{\text{g.f.}+\text{gh.}}[h,\bar{C},C;\bar{g}]}, \end{equation} where $S_{\text{g.f.}+\text{gh.}}[h,\bar{C},C;\bar{g}]$ corresponds to the gauge-fixing action together with the Faddeev-Popov ghost term for $\bar{C}_{\alpha}$ and $C^{\beta}$. In summary, Eq. (\ref{partfunction2}) is the proper formal definition of the Euclidean functional integral of UG (in its minimal version) and the starting point for applying functional renormalization group techniques.
\subsection{Functional Renormalization Group for UG} In order to search for a fixed point in the renormalization group flow, we employ functional renormalization techniques. They are based on the Wilsonian perspective of momentum shell-wise integration of modes in the path integral. This can be performed in a smooth fashion through the introduction of a regulator term $\Delta S_k[\phi]$ in the action appearing in the Boltzmann factor of the Euclidean path integral. It implements a suppression of all field modes associated with momenta lower than an infrared scale $k$. Hence, the scale-dependent path integral is written as \begin{equation}\label{partfunctionreg} \mathcal{Z}_{k}[J]=\int\mathcal{D}\phi_{\Lambda_{\textrm{UV}}}\,\textmd{e}^{-S[\phi]-\Delta S_k[\phi]+\int \mathrm{d}^dx\,J(x)\phi(x)}\,, \end{equation} where $\phi(x)$ represents a generic field content of the theory and $J$ denotes its corresponding external source. The UV cutoff $\Lambda_{\textrm{UV}}$ is introduced in order to make the measure well-defined. The regulator term is quadratic in the fields with a kernel function $\mathbf{R}_k(\Delta)$ as \begin{equation} \Delta S_k=\frac{1}{2}\int \mathrm{d}^dx\,\phi(x)\,\mathbf{R}_k(\Delta)\,\phi(x)\,. \end{equation} The suppression of field modes is achieved according to the spectrum of the Laplacian operator in $\mathbf{R}_k(\Delta)$, i.e., field configurations associated with eigenvalues $p^2$ such that $p^2<k^2$ will be suppressed in the functional integration. In this sense, the regulator is such that the path integral is evaluated over a shell from $\Lambda_{\textrm{UV}}$ to $k$, where $k$, therefore, acts as an infrared cutoff scale. Its introduction allows us to define the \textit{effective average action}, or the flowing action $\Gamma_k$, which is a scale-dependent functional that contains the effect of quantum fluctuations with momenta above $k$. The flowing action interpolates between the full quantum effective action ($\Gamma_{k\rightarrow 0}=\Gamma$) and the classical/microscopic UV action $(\Gamma_{k\rightarrow \Lambda_{\textrm{UV}}}=S_{\Lambda})$. The flow of $\Gamma_k$ with $k$ is described by the Wetterich equation~\cite{Wetterich:1992yh,Morris:1993qb,Ellwanger:1993mw}, \begin{equation} \partial_t\Gamma_k=\frac{1}{2}\textrm{STr}\bigg[\big(\Gamma^{(2)}_k+\textbf{R}_k\big)^{-1}\partial_t\textbf{R}_k\bigg]\,, \end{equation} where $\partial_t\equiv k\partial_k$, $\Gamma_k^{(2)}=\delta^2\Gamma_k/\delta\Phi\delta\Phi$ is the Hessian and $\textrm{STr}$ denotes the supertrace which contains a negative sign for Grassmann-valued fields and a factor of 2 for complex fields. The Wetterich equation receives an extra contribution coming from the extra determinant in (\ref{partfunction2}). Regularizing the extra determinant as $\textrm{Det}(-\bar{\nabla}^2)\mapsto \textrm{Det}(P_k(-\bar{\nabla}^2))$, where $P_k(z)=z + R_k(z)$, the flow equation for $\Gamma_k$ becomes \begin{equation}\label{RegularizedFlowEq} \partial_t\Gamma_k=\frac{1}{2}\textrm{STr}\bigg[\big(\Gamma^{(2)}_k+\textbf{R}_k\big)^{-1}\partial_t\textbf{R}_k\bigg]-\frac{1}{2}\textrm{Tr}\bigg(\frac{\partial_tR_k(-\bar{\nabla}^2)}{P_k(-\bar{\nabla}^2)}\bigg)\,. \end{equation} Note that the extra determinant arises from a proper application of the Faddeev-Popov procedure in UG and, according to the procedure adopted earlier, does not generate additional fluctuation vertices. As a consequence, it contributes only to the ``background flow'' $\partial_t\Gamma_k[0;\bar{g}]$.
\subsection{Setting the truncation for unimodular gravity-matter systems} In this work the key strategy to obtain information about the fixed-point structure is based on a mixed approach which combines the background-field approximation with vertex and derivative expansions, similarly to what has been done in \cite{Eichhorn:2009ah,Eichhorn:2010tb,Christiansen:2012rx,Codello:2013fpa,Dona:2013qba,Dona:2014pla,Dona:2015tnf,deBrito:2020rwu}. On the one hand, in the background approximation the beta functions of the dimensionless gravitational background couplings are extracted from the flow equation by turning off all the fluctuating fields after the computation of the Hessian. Moreover, the anomalous dimension of the graviton is identified with the running of the background Newton coupling. The ghost and matter anomalous dimensions are set to zero. On the other hand, a simultaneous vertex and derivative expansion generates one-loop-structure diagrams as corrections to the flow of the two-point function of the fields and allows us to derive independent anomalous dimensions of all fluctuating fields unambiguously. In this sense, the extra functional trace associated with the path integral measure only contributes to the background flow, since it only depends on the background metric. Furthermore, as an approximation, the different avatars of the Newton coupling (see, e.g., \cite{Eichhorn:2018akn,Eichhorn:2018ydy}) in the vertices and in the graviton propagator are identified with its background value. We consider a truncation for the flowing action $\Gamma_k$ in the unimodular setting containing an arbitrary number of massless Gaussian matter fields\footnote{Meaning that we do not consider matter self-interactions.}, namely scalar, Abelian vector and Dirac fields, minimally coupled to gravity in four-dimensional Euclidean spacetime. Throughout the work we investigate a truncation of the form \begin{equation}\label{workingtruncation} \Gamma_{k}=\Gamma_k^{\textrm{gravity}}+\Gamma_{k}^{\textrm{matter}}+\Gamma_k^{\textrm{g.f.}}+\Gamma_k^{\textrm{gh.}}\,, \end{equation} where we follow~\cite{Falls:2017lst,Ohta:2018sze} and write the gravitational sector as\footnote{Herein we use the shorthand notation $\int_x\equiv\int \mathrm{d}^4x$.} \begin{equation} \Gamma_k^{\textrm{gravity}}[g_{\mu\nu}]=\frac{1}{16\pi G_{\textmd{N},k}}\int_x\,\omega\,f_{k}(R,R_{\mu\nu}^2)\,, \end{equation} where $G_{\textmd{N},k}$ is the dimensionful Newton coupling, $f_k$ is an arbitrary function of the Ricci scalar and the square of the Ricci tensor, $R_{\mu\nu}^2=R_{\mu\nu}R^{\mu\nu}$. The $k$-dependence comes from the scale-dependent renormalization factors and couplings of curvature invariants. The matter sector of the effective average action is composed of $N_{\phi}$ scalar fields, $N_{A}$ Abelian vector fields and $N_{\psi}$ Dirac spinors. Its complete action is given by \begin{align} \Gamma_k^{\textrm{matter}}[g,\phi,\bar{\psi},\psi,A]&=\frac{1}{2}\sum_{i=1}^{N_{\phi}}\int_x\,\omega\,g^{\mu\nu}\partial_{\mu}\phi_i\partial_{\nu}\phi_i\,+\,\sum_{i=1}^{N_{\psi}}\int_x\,\omega\,i\bar{\psi}_i\slashed{\nabla}\psi_i \nonumber\\ &+\frac{1}{4}\sum_{i=1}^{N_{A}}\int_x\,\omega\,g^{\mu\alpha}g^{\nu\beta}F_{i,\mu\nu}F_{i,\alpha\beta}\,, \end{align} where the summation index $i$ runs over the particle species and $F_{i,\mu\nu}$ is the field-strength of the Abelian gauge field $A_{i,\mu}$.
We do not consider the running of wave-function renormalization factors of the matter fields as they do not lead to self-consistent results within the hybrid approximation adopted in this work (see Subsect. \ref{Results_PureGravity} for details). The covariant Dirac operator $\slashed{\nabla}=e_{a}^{\,\,\,\mu}\gamma^a \nabla_{\mu}$ satisfies the Lichnerowicz relation \begin{equation} \Delta_{\textmd{L}_{\frac{1}{2}}}\psi_i=-\slashed{\nabla}^2\psi_i=\bigg(-\nabla^2+\frac{R}{4}\bigg)\psi_i\,. \end{equation} The fermion-gravity interaction is achieved through the vierbein and spin connection. In a spacetime manifold with vanishing torsion, these are not independent fields and can both be expressed in terms of $h_{\mu\nu}$ adapted to the exponential decomposition once the local $O(4)$ gauge invariance is gauge-fixed by a Lorentz symmetric condition (see App. B in Ref. \cite{deBrito:2019umw})\footnote{Alternatively, for the Dirac covariant operator $\slashed{\nabla}$, one could use the spin-base formalism \cite{Gies:2013noa,Gies:2015cka,Lippoldt:2015cea} expressed in accordance with the exponential parameterization. Both prescriptions are equivalent at the level of our computations.}. Moreover, due to the relation $g_{\mu\nu}={e^{a}}_{\mu}{e^{b}}_{\nu}\eta_{ab}$, the vierbein also obeys the unimodularity condition, i.e., $\det e^{a}_{\,\,\mu}=\omega$. Besides featuring a $\mathbb{Z}_2$ symmetry for the scalar sector under which $\phi_i \mapsto -\phi_i$, this toy model also features a shift symmetry $\phi_i\mapsto\phi_i+\text{const.}$, which prevents a scalar mass term. Additionally, an axial U(1) symmetry, i.e., $\psi_i \rightarrow e^{i\alpha\gamma_5}\psi_i$, $\bar{\psi_i}\rightarrow \bar{\psi_i}e^{i\alpha\gamma_5}$, prohibits a Dirac mass term. In this model, the scalars and ``chiral'' fermions are uncharged and therefore do not give rise to gauge interactions. The gauge-fixing action for the \textit{TDiff} and the Abelian gauge symmetry is given by~\cite{Eichhorn:2013xr,Eichhorn:2015bna,Ardon:2017atk,Percacci:2017fsy,deBrito:2019umw} \begin{equation} \Gamma_k^{\textrm{g.f.}}[h,A;\bar{g}] =\frac{1}{2a}\int_x\,\omega\,\bar{g}^{\mu\nu}F_{\mu}^{\text{T}}[h;\bar{g}]F_{\nu}^{\text{T}}[h;\bar{g}]+\frac{1}{2\zeta}\sum_{i=1}^{N_A}\int_x\,\omega\,(\bar{g}^{\mu\nu}\bar{\nabla}_{\mu}A_{i,\nu})^2, \end{equation} where $a$ and $\zeta$ represent gauge parameters for the gravitational and Abelian sectors, respectively. Using the transverse projector with respect to the background metric $(\mathscr{P}_{\text{T}})_{\mu\nu}=\bar{g}_{\mu\nu}-\bar{\nabla}_{\mu}(\bar{\nabla}^2)^{-1}\bar{\nabla}_{\nu}$, we define the transverse gauge-fixing function as $F_{\mu}^{\text{T}}[h;\bar{g}]=\sqrt{2}\,(\mathscr{P}_{\text{T}})_{\mu}^{\,\,\,\nu}\,\bar{\nabla}^{\alpha}h_{\nu\alpha}$. This particular prescription of the gauge-fixing function is necessary since only the transverse diffeomorphism sector should be fixed. In this work we adopt the Landau gauge for both gravitational and Abelian sectors, i.e., $a\rightarrow 0$ and $\zeta\rightarrow 0$. The introduction of the transverse projector makes the gauge fixing for \textit{TDiff} a non-local functional. This could be avoided by allowing a higher-derivative operator in the gauge-fixing, see, e.g., \cite{Benedetti:2015zsw}.
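In flat space the transverse projector $(\mathscr{P}_{\text{T}})_{\mu\nu}$ reduces, in momentum space, to $\delta_{\mu\nu}-p_{\mu}p_{\nu}/p^2$; the following short numerical check of its defining properties is our own illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=4)                      # Euclidean momentum
P = np.eye(4) - np.outer(p, p) / (p @ p)    # flat-space transverse projector

print(np.allclose(P @ P, P))                # idempotent: P^2 = P
print(np.allclose(P @ p, 0.0))              # transversality: P p = 0
print(np.isclose(np.trace(P), 3.0))         # rank 3 in d = 4
\end{verbatim}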
Accompanying the gauge-fixing term there is the action for the Faddeev-Popov ghosts\footnote{Alternatively, the gauge-fixing and ghost terms for different formulations of unimodular gravity can be derived through BRST transformations, see \cite{Upadhyay:2015fna,Alvarez:2015sba}. See also \cite{Baulieu:2020obv,Baulieu:2020rpv} for a discussion of the BRST implementation of the unimodular gauge.} which reads \begin{equation} \Gamma_k^{\text{gh.}}[h,\bar{C},C,\bar{c},c;\bar{g}]=\int_x\,\omega\,\bar{C}_{\mu}\,\bar{g}^{\mu\nu}\frac{\delta F_{\nu}^{\T}[h;\bar{g}]}{\delta h_{\alpha\beta}}\delta^{Q}_{C}h_{\alpha\beta}+\sum_{i=1}^{N_A}\int_x\,\omega\,\bar{c}_i(-\bar{\nabla}^2)c_i, \end{equation} where $C^{\mu}\, (\bar{C}_{\mu})$ and $c_i\, (\bar{c}_i)$ are the ghosts (anti-ghosts) for the gravitational and Abelian sectors, respectively. In the unimodular setting, the Faddeev-Popov ghost for the gravitational sector is constrained by the transversality condition $\nabla_{\mu}C^{\mu}=\bar{\nabla}_{\mu}(g^{\mu\nu}C_{\nu})=0$ as a consequence of the transverse nature of the \textit{TDiff} generator. Furthermore, $\delta^{Q}_{C}h_{\alpha\beta}$ corresponds to the ``quantum'' transformation of the fluctuation field with the ghost field $C^{\mu}$ as generator. The explicit implementation of the gravitational ghost sector suitable for the exponential split of the metric is discussed in \cite{Eichhorn:2017sok,deBrito:2020rwu}. A minimal and diagonal Hessian together with an exact inversion of the kinetic operators can be achieved on a spherical background by decomposing the fluctuation field $h_{\mu\nu}$ into the York basis~\cite{York:1973ia}, namely, \begin{equation} h_{\mu\nu}=h_{\mu\nu}^{\textrm{TT}}+2\bar{\nabla}_{(\mu}\xi_{\nu)}+\Big(\bar{\nabla}_{\mu}\bar{\nabla}_{\nu}-\frac{1}{4}\bar{g}_{\mu\nu}\bar{\nabla}^2\Big)\sigma. \end{equation} We emphasize the absence of the trace mode in the decomposition due to the unimodularity condition. No non-local field redefinitions are performed and, as a consequence, the Jacobians arising from the change of variables are taken into account in the flow equation by a suitable regularization of the resulting determinants. Furthermore, appropriate wave-function renormalization factors are introduced for the gravitational ghost fields and for each spin sector of the gravitational fluctuation according to \begin{subequations} \begin{align}\label{renormfactors} h_{\mu\nu}^{\text{TT}}\mapsto \mathcal{Z}_{k,\text{TT}}^{1/2}\,h_{\mu\nu}^{\text{TT}}, \qquad \xi_{\mu}\mapsto \mathcal{Z}_{k,\xi}^{1/2}\,\xi_{\mu}, \qquad \sigma\mapsto \mathcal{Z}_{k,\sigma}^{1/2}\,\sigma, \end{align} \begin{align}\label{renormghost} C^{\mu}\mapsto \mathcal{Z}_{k,C}^{1/2}\,C^{\mu},\qquad \bar{C}_{\mu}\mapsto \mathcal{Z}_{k,C}^{1/2}\,\bar{C}_{\mu}. \end{align} \end{subequations} The wave-function renormalization factors $\mathcal{Z}_{k,\Phi}$ with $\Phi=(\TT,\xi,\sigma,C,\bar{C})$ generate anomalous dimensions $\eta_{\Phi}=-\partial_t\ln \mathcal{Z}_{k,\Phi}$ and contribute to the system of beta functions of the Newton and higher-curvature couplings. The Abelian gauge potentials are also decomposed into their transverse and longitudinal parts, \begin{equation}\label{vectordecomposition} A_{i,\mu}=A_{i,\mu}^{\textrm{T}}+\bar{\nabla}_{\mu}\big[(-\bar{\nabla}^2)^{-1/2}A_{i}^{\textrm{L}}\big], \qquad \bar{\nabla}_{\mu}A_{i}^{\textrm{T}\mu}=0.
\end{equation} Contrary to the fluctuation field decomposition, herein we choose to insert an inverse square root of the Bochner Laplacian, $-\bar{\nabla}^2$, so that the Jacobian associated with this field redefinition is simply the identity. \section{Setting the flow equation}\label{FlowEqSec} At the practical level, the right-hand side of the flow equation is expanded on the same basis as the one chosen for the truncation such that a suitable projection rule selects the beta functions associated with each coupling. The beta functions of the background gravitational couplings can be read off at zeroth order in the fluctuating fields, and the elements of the Hessian employed in such a computation are listed in App. \ref{hessianelements}. The entries of the regulator function $\mathbf{R}_k$ are built from the following prescription~\cite{Codello:2008vh} \begin{equation} \mathbf{R}_{k,\varphi_i\varphi_j}(\Delta)=\Gamma^{(2)}_{k,\varphi_i\varphi_j}(\Delta)\bigg|_{\Delta\mapsto\Delta+R_k(\Delta)}-\Gamma^{(2)}_{k,\varphi_i\varphi_j}(\Delta)\,, \end{equation} where $\Delta$ is an appropriate coarse-graining operator and $\Gamma^{(2)}_{k,\varphi_i\varphi_j}$ denotes the second functional derivative of $\Gamma_k$ with respect to the fields $\varphi_i$ and $\varphi_j$. For the regulator kernel (i.e., for the shape function that enters the regulator) we choose the Litim-type cutoff~\cite{Litim:2001up} \begin{equation}\label{litimcutoff} R_k(\Delta)=(k^2-\Delta)\theta(k^2-\Delta)\,. \end{equation} In particular, we adopt two types of regularization schemes distinguished by two common choices of coarse-graining operators~\cite{Codello:2008vh}, namely the Bochner-Laplacian, $-\bar{\nabla}^2$ (Type I), and the Lichnerowicz-Laplacians, $\Delta_{\textmd{L} s}$ (Type II), which are connected by the Lichnerowicz relations on a four-dimensional maximally symmetric Euclidean background \begin{align} \Delta_{\textmd{L}_2} = -\bar{\nabla}^2 + \frac{2}{3}\bar{R} \,,\quad\qquad \Delta_{\textmd{L}_1} = -\bar{\nabla}^2 + \frac{1}{4}\bar{R} \,,\quad\qquad \Delta_{\textmd{L}_0} = -\bar{\nabla}^2 \,. \qquad \end{align} Inspired by \cite{Alkofer:2018fxj,Ohta:2015fcu,Demmel:2015oqa,Dona:2013qba}, in order to accommodate both regularization prescriptions, we define the ``interpolating'' coarse-graining operator for each spin-$s$ sector as $\Delta_s=\Delta_{\textmd{L} s}-\gamma_s\bar{R}$, where the endomorphism parameters were introduced such that the choice $\gamma_0 = \gamma_{\frac{1}{2}} = \gamma_1 = \gamma_2=0$ implements the Lichnerowicz-Laplacians, while $\gamma_0=0$, $\gamma_{\frac{1}{2}}=1/4$, $\gamma_1=1/4$ and $\gamma_2=2/3$ results in the Bochner-Laplacian. According to~\cite{Dona:2012am}, in order to account for the correct sign of the fermionic contributions to the Newton coupling constant, a Type II regularization must be adopted. The fermionic regulator function is then written as \begin{equation} \mathbf{R}_{k,\psi\psi}(z)=i\big[\sqrt{1+(k^2/z-1)\theta(k^2/z-1)}-1\big]\slashed{\nabla}, \end{equation} where $z=\Delta_{\textmd{L}_{\frac{1}{2}}}$. Furthermore, since for massless scalar fields both types of regularization coincide, we adopt for simplicity the Type II regularization prescription for the gauge fields as well~\cite{Dona:2013qba}. Henceforth, we explore both types of coarse-graining operators only in the gravitational sector.
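Since the Litim cutoff is piecewise linear, the threshold integrals generated by the traces can be evaluated in closed form. As an illustration (our sketch, with a generic mass-like argument $w$ standing in for the operator structures), $\partial_t R_k(z)=2k^2\theta(k^2-z)$ and $P_k(z)=k^2$ inside the shell $z<k^2$, so that, e.g., $\int_0^{\infty}\mathrm{d}z\,z^{n-1}\,\partial_t R_k(z)/\left(P_k(z)+w\right)=2k^{2n+2}/[n(k^2+w)]$. A symbolic check:
\begin{verbatim}
import sympy as sp

z, k, w = sp.symbols('z k w', positive=True)
n = sp.Symbol('n', positive=True)

# Litim cutoff: inside the shell z < k^2 one has P_k = z + R_k = k^2
# and dt R_k = 2 k^2; outside the shell the integrand vanishes.
integrand = z**(n - 1) * 2*k**2 / (k**2 + w)
result = sp.integrate(integrand, (z, 0, k**2))
print(sp.simplify(result))   # expected: 2*k**(2*n + 2)/(n*(k**2 + w))
\end{verbatim}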
For the truncation (\ref{workingtruncation}), the running of the dimensionless gravitational couplings can be read off, at zeroth order in the fields, from the following flow equation written in the Landau gauge, i.e., $a=0,$ \begin{align}\label{Flow_Eq_York_Decomposed} \partial_t \Gamma_{k} &=\frac{1}{2}\textmd{Tr}_{(2)}\Big[\textbf{G}_{\TT} \, \partial_t\textbf{R}_{k,\TT} \Big] + \frac{1}{2}\textmd{Tr}_{(1)}^{\prime}\Big[\textbf{G}_{\xi\xi} \, \partial_t\textbf{R}_{k,\xi\xi} \Big] + \frac{1}{2}\textmd{Tr}_{(0)}^{\prime\prime}\Big[\textbf{G}_{\sigma\sigma} \, \partial_t\textbf{R}_{k,\sigma\sigma} \Big] - \textmd{Tr}_{(1)}\Big[\textbf{G}_{C\bar{C}} \, \partial_t\textbf{R}_{k,C\bar{C}} \Big] \nonumber \\ &- \frac{1}{2}\textmd{Tr}_{(0)}^\prime\bigg[\frac{}{}\!\!\left( \Delta_0 + R_{k}(\Delta_0)\right)^{-1} \! \partial_t R_{k}(\Delta_0) \bigg]\,+\,\frac{N_{\phi}}{2}\textmd{Tr}_{(0)}\Big[\textbf{G}_{\phi\phi}\,\partial_t\textbf{R}_{k,\phi\phi}\Big]-N_{\psi}\textmd{Tr}_{(1/2)}\Big[\textbf{G}_{\psi\psi}\,\partial_t\textbf{R}_{k,\psi\psi}\Big] \nonumber \\ &+\frac{N_A}{2}\textmd{Tr}_{(1)}\Big[\textbf{G}_{A^{\T}A^{\T}}\,\partial_t\textbf{R}_{k,A^{\text{T}}A^{\text{T}}}\Big]+\frac{N_A}{2}\textmd{Tr}_{(0)}^{\prime}\Big[\textbf{G}_{A^{\textmd{L}}A^{\textmd{L}}}\,\partial_t\textbf{R}_{k,A^{\text{L}}A^{\text{L}}}\Big]-N_{A}\textmd{Tr}_{(0)}\Big[\textbf{G}_{c\bar{c}}\,\partial_t\textbf{R}_{k,c\bar{c}}\Big]\nonumber \\ &+\,\mathcal{T}^{\textmd{Jacob.}}_{(1)} \,+\, \mathcal{T}^{\textmd{Jacob.}}_{(0)}\,, \end{align} with $\textbf{G}_{ij}=\Big[\big(\Gamma_{k}^{(2)}+\textbf{R}_{k}\big)^{-1}\Big]_{ij}$ for every pair $(i,j)$. The first term in the second line corresponds to the extra determinant accounting for an appropriate treatment of the volume of the \textit{TDiff} group. The last two terms in the fourth line denote additional contributions coming from the Jacobian associated with the change of variables $h_{\mu\nu}\mapsto \{h^{\TT}_{\mu\nu},\xi_{\mu},\sigma\}$. Upon regularization $\Delta_s\mapsto \Delta_s+R_k(\Delta_s)$, these contributions manifest themselves as the following additional traces \begin{subequations} \begin{align} \mathcal{T}^{\textmd{Jacob.}}_{(1)} = -\frac{1}{2}\textmd{Tr}^\prime\left[ \left(\Delta_1 + R_k(\Delta_1)+\frac{2\gamma_1-1}{2}\bar{R} \right)^{-1} \partial_t R_k(\Delta_1)\right] \,, \end{align} \begin{align} \mathcal{T}^{\textmd{Jacob.}}_{(0)} = &-\frac{1}{2}\textmd{Tr}^{\prime\prime}\left[\left(\Delta_0 + R_k(\Delta_0) - \frac{1}{3}\bar{R} \right)^{-1} \partial_t R_k(\Delta_0)\right] \nonumber\\ &-\frac{1}{2}\textmd{Tr}^{\prime\prime}\left[ \!\frac{}{}\left(\Delta_0 + R_k(\Delta_0) \right)^{-1} \partial_t R_k(\Delta_0)\right] \,. \end{align} \end{subequations} The computation of the traces in the FRG equation is performed with standard heat kernel techniques. All the necessary technical tools and notation are collected in App. \ref{heatkernel}. In general, the result of the trace computation leads to very long expressions and, therefore, we shall not report explicit results here. The anomalous dimensions can be computed by acting with two functional derivatives with respect to the fields on the flow equation (\ref{Flow_Eq_York_Decomposed}) and expanding the full scale-dependent effective action in powers of the fields on a flat background. Their extraction is then obtained by means of suitable projection rules. We follow the same strategy as in \cite{deBrito:2019umw,deBrito:2020rwu}. 
The explicit expressions for the anomalous dimensions used in this work are reported in App.~\ref{etas}. \section{$f(R,R_{\mu\nu}R^{\mu\nu})$ projections and extraction of beta functions}\label{Projectionsf} In this section we discuss the extraction of beta functions from two different types of polynomial projections of the $f_k(R,R_{\mu\nu}R^{\mu\nu})$-truncation minimally coupled to Gaussian matter degrees of freedom in the unimodular setting. To extract the beta functions of the background gravitational couplings from the FRG equation, we can adopt a projection which consists in setting all fluctuation fields to zero. Within the background approximation, the truncation (\ref{workingtruncation}), inserted in the left-hand side of (\ref{Flow_Eq_York_Decomposed}), leads to a flow equation of the form \begin{equation}\label{General_Form_Flow_Eq} \frac{1}{16\pi G_{\textmd{N},k}}\bigg[-\eta_{\textmd{N}}f_{k}\left(\bar{R},\bar{R}_{\mu\nu}^2\right)+\partial_t f_k\left(\bar{R},\bar{R}_{\mu\nu}^2\right)\bigg]=\mathscr{F}\left(f_k,f_k^{(m,n)},\eta_i,\partial_t f_k, \partial_t f_k^{(m,n)},N_{\Psi}\right), \end{equation} where the left-hand side of (\ref{General_Form_Flow_Eq}) features the ``background anomalous dimension'' $\eta_\textmd{N}=-\partial_t\ln\mathcal{Z}_\textmd{N}$ with $\mathcal{Z}_\textmd{N}=(16\pi G_{\textmd{N},k})^{-1}$ and $N_{\Psi}=(N_{\phi},N_{A},N_{\psi})$. The dependence on $\vec{\eta}=(\eta_{\TT},\eta_{\xi},\eta_{\sigma},\eta_{c})$ on the right-hand side of (\ref{General_Form_Flow_Eq}) comes from the regulator insertion $\partial_t\mathbf{R}_k$ associated with each field sector. Moreover, we adopt the compact notation \begin{align} f_k^{(m,n)} = \frac{\partial^{m+n} f_k(\bar{R},\bar{X})}{\partial \bar{R}^m \partial \bar{X}^n} \,, \end{align} with $\bar{X}=\bar{R}_{\mu\nu}^2$. In order to obtain concrete results, we resort to polynomial truncations. In principle, had we performed all calculations in a generic background, the most general polynomial expansion (within the class of the $f_k(R,R_{\mu\nu}^2)$-truncation) would be of the form \begin{equation} f_k(R,R_{\mu\nu}^2)=\sum_{n_1,n_2}\bar{\alpha}_k^{(n_1,n_2)}R^{n_1}\,(R_{\mu\nu}R^{\mu\nu})^{n_2} \end{equation} where $\bar{\alpha}_k^{(n_1,n_2)}$ denotes the scale-dependent couplings. The running of the couplings $\bar{\alpha}_k^{(n_1,n_2)}$ can be extracted by expanding both sides of the flow equation (\ref{General_Form_Flow_Eq}) in powers of $\bar{R}$ and $\bar{R}_{\mu\nu}$ and comparing the coefficients of the same curvature invariants on both sides order by order. Unfortunately, this procedure carries an ambiguity on a spherical background as the invariant $\bar{R}_{\mu\nu}^2$ collapses to $\frac{1}{4}\bar{R}^2$ (on a maximally symmetric four-dimensional background, $\bar{R}_{\mu\nu}=\frac{1}{4}\bar{R}\,\bar{g}_{\mu\nu}$). As a consequence, the running of any two couplings $\bar{\alpha}_k^{(p_1,p_2)}$ and $\bar{\alpha}_k^{(q_1,q_2)}$ can no longer be disentangled for all pairs $(p_1,p_2)$ and $(q_1,q_2)$ satisfying the relation $p_1+2p_2=q_1+2q_2$. A way to bypass this ambiguity, without appealing to a generic background, is to impose some restriction on the function $f_k(R,R_{\mu\nu}^2)$. \subsection{$f(R)$ polynomial projection} In this subsection, we consider the particular case corresponding to the $f(R)$-approximation, which can be directly obtained by neglecting the $R_{\mu\nu}^2$-dependence in our truncation.
For practical computations, we focus on the polynomial approximation \begin{equation}\label{fR_Truncation} f_k(R)=-R+\sum_{n=2}^{N}k^{2-2n}\alpha_{k,n}R^n\,, \end{equation} where the $\alpha_{k,n}$ are scale-dependent dimensionless couplings and $N$ is a positive integer that fixes the maximal degree of the polynomial truncation. This truncation has been extensively explored in the context of ``standard'' asymptotic safety. See, e.g., \cite{Codello:2007bd,Machado:2007ea,Codello:2008vh,Demmel:2012ub,Dietz:2012ic,Dietz:2013sba,Demmel:2013myx,Demmel:2014sga,Falls:2013bv,Falls:2014tra,Demmel:2014hla,Demmel:2015oqa,Ohta:2015efa,Ohta:2015fcu,Gonzalez-Martin:2017gza,Christiansen:2017bsy,Falls:2017lst,Alkofer:2018fxj,Alkofer:2018baq,deBrito:2018jxt,Falls:2018ylp,Burger:2019upn}. The coefficient of the first term is normalized to $-1$ in order to recover the unimodular Einstein-Hilbert truncation once higher-order powers of the curvature scalar are neglected. Furthermore, the zeroth-order term, which would be proportional to the cosmological constant, is absent since we are dealing with a unimodular theory space\footnote{It is important to emphasize that, since the introduction of the regulator breaks BRST invariance, mass-like terms for the graviton can be generated and mimic the effect of the cosmological constant~\cite{deBrito:2020rwu}. Nevertheless, such terms arise as a symmetry-breaking effect due to the regulator and are not present in the background approximation.}. We extract the system of beta functions associated with the dimensionless Newton coupling $G_k$ and the set of dimensionless couplings $\{\alpha_{k,n}\}_{n=2,\cdots, N}$ by plugging Eq.~(\ref{fR_Truncation}) into the flow equation~(\ref{General_Form_Flow_Eq}) and expanding both sides up to order $\bar{R}^{N}$. In this case, the flow equation leads to the following structure \begin{align} &\frac{\eta_\textmd{N}}{16\pi G_k}k^2\bar{R}+\frac{1}{16 \pi G_k}\sum_{n=2}^{N}\bigg((2-2n-\eta_\textmd{N})\alpha_{k,n}+\beta_{\alpha}^{(n)}\bigg)k^{4-2n}\bar{R}^n \equiv\sum_{n=1}^N\mathscr{H}_{n}\left(\alpha_k,N_{\Psi},\vec{\eta},\beta_{\alpha}^{(m)}\right)\,, \end{align} where $G_k=k^{2}\,G_{\textmd{N},k}$ is the dimensionless Newton coupling and we have defined $\beta_{\alpha}^{(n)}=\partial_t \alpha_{k,n}$. The function $\mathscr{H}_n$ has the general schematic form \begin{align} &\mathscr{H}_{n}\left(\alpha_k,N_{\Psi},\vec{\eta},\beta_{\alpha}^{(m)}\right)\equiv\mathscr{A}_n(\alpha_k)+\tilde{\mathscr{A}}_n(N_{\Psi})+\sum_{j=1}^{4}\mathscr{B}_{n,j}(\alpha_k)\eta_{j}+\sum_{m=2}^{N}\mathscr{M}_{n,m}(\alpha_k)\beta_\alpha^{(m)}\,. \end{align} The coefficients $\mathscr{A}_n$, $\tilde{\mathscr{A}}_n$, $\mathscr{B}_{n,j}$ and $\mathscr{M}_{n,m}$ are scheme-dependent quantities and can be computed analytically for the Litim cutoff. By matching contributions according to the power of the scalar curvature, we arrive at the RG equations \begin{subequations} \begin{align} \beta_G=2G_k\bigg[1+8\pi G_k\,\mathscr{H}_{1}\left(\alpha_k,N_{\Psi},\vec{\eta},\beta_{\alpha}^{(m)}\right)\bigg]\,,\label{betaGfR} \end{align} \begin{align} \beta_\alpha^{(n)}=(\eta_\textmd{N}+2n-2)\alpha_{k,n}+16\pi G_k\,\mathscr{H}_{n}\left(\alpha_k,N_{\Psi},\vec{\eta},\beta_{\alpha}^{(m)}\right)\,,\label{betaAlphafR} \end{align} \end{subequations} with $n=2,\,\ldots,\,N$. In Eq.~(\ref{betaGfR}) we have used $\eta_\textmd{N}=G_k^{-1}\beta_G-2$.
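The analytic form of these coefficients for the Litim cutoff can be traced back to the threshold integrals $Q_n[W]\equiv\frac{1}{\Gamma(n)}\int_0^{\infty}dz\,z^{n-1}\,W(z)$ produced by the heat-kernel expansion. For instance, neglecting the $\eta$-terms generated by the scale derivative of the wave-function renormalizations inside the regulator, one finds the standard result \begin{equation*} Q_n\!\left[\frac{\partial_t R_k}{z+R_k(z)+\omega}\right]=\frac{1}{\Gamma(n)}\int_0^{k^2}\!dz\;z^{n-1}\,\frac{2k^2}{k^2+\omega}=\frac{2}{\Gamma(n+1)}\,\frac{k^{2n+2}}{k^2+\omega}\,, \qquad n>0\,, \end{equation*} for $R_k(z)=(k^2-z)\theta(k^2-z)$, which is the origin of the analytic form of the coefficients quoted above.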
We highlight that the system of RG equations defined by (\ref{betaGfR}) and (\ref{betaAlphafR}) provides only implicit results for the beta functions $\beta_G$ and $\beta_\alpha^{(n)}$. Furthermore, the system is not closed because of the presence of the anomalous dimensions $(\eta_{\TT},\eta_{\xi},\eta_{\sigma},\eta_{c})$. In principle, this system can be solved analytically in order to extract explicit results for $\beta_G$ and $\beta_\alpha^{(n)}$ once a prescription to obtain the anomalous dimensions is adopted. In Sect. \ref{Results_PureGravity}, we will consider two types of prescriptions: the standard ``RG-improvement'' approximation\footnote{In Appendix C, the reader can find more details about the identification of the background anomalous dimension with the one derived from the second derivative of the flowing action with respect to fluctuations.} and a hybrid semi-perturbative approximation based on an independent calculation of the anomalous dimensions using the derivative expansion around a flat background. We emphasize that the latter prescription is not entirely self-consistent, since it glues together results obtained within different schemes and backgrounds. Nonetheless, we take this tentative choice in order to obtain results beyond the background approximation. As usual in functional methods, the use of hybrid schemes might be justified \textit{a posteriori} if the underlying results exhibit good convergence properties. Nevertheless, the final expressions for the system of RG equations are too lengthy to be reported here. The so-called non-trivial or non-Gaussian fixed-point (NGFP) solutions (denoted as $G^*$ and $\alpha^*_n$) are obtained from the following equations \begin{subequations} \begin{align} 2G^*\bigg[1+8\pi G^*\,\mathscr{H}_1\left(\alpha^*,N_{\Psi},\vec{\eta}\,\big|_*,0\right)\bigg]=0\label{FP_EQ1_fR}, \end{align} \begin{align} (2n-4)\alpha_n^*+16\pi G^*\,\mathscr{H}_n\left(\alpha^*,N_{\Psi},\vec{\eta}\,\big|_*,0\right)=0\label{FP_EQ2_fR}. \end{align} \end{subequations} The notation $(\cdots)\big|_*$ indicates that the quantity in parentheses is evaluated at the fixed-point solution. In Sect. \ref{Results_PureGravity}, we report numerical evidence for interacting fixed-point solutions associated with various choices of $N$. \subsection{\texorpdfstring{$F(R_{\mu\nu}R^{\mu\nu})+R\,Z(R_{\mu\nu}R^{\mu\nu})$}{TEXT} polynomial projection} Another way of bypassing the technical problem of distinguishing the invariants $R^2$ and $R_{\mu\nu}^2$ on a spherical background is to consider an alternative class of truncation, which is characterized by the following decomposition\footnote{Hereafter we refer to such a decomposition as the FZ-truncation.} \begin{equation} f_k(R,R_{\mu\nu}^2)=F_k(R_{\mu\nu}^2)+R\,Z_k(R_{\mu\nu}^2), \end{equation} where $F_k(R_{\mu\nu}^2)$ and $Z_k(R_{\mu\nu}^2)$ denote scale-dependent arbitrary functions of the invariant $R_{\mu\nu}^2$. This class of truncation was first investigated in \cite{Falls:2017lst} as an approach to include effects beyond the tensor structure of the Ricci scalar.
For practical calculations, we restrict our analysis to polynomial truncations defined by \begin{subequations} \begin{align} F_k(R_{\mu\nu}R^{\mu\nu})=\sum_{n=1}^{N_{F}}k^{2-4n}\rho_{k,2n}(R_{\mu\nu}R^{\mu\nu})^n\label{FfunctionFZ}\,, \end{align} \begin{align} Z_k(R_{\mu\nu}R^{\mu\nu})=-1+\sum_{n=1}^{N_Z}k^{-4n}\rho_{k,2n+1}(R_{\mu\nu}R^{\mu\nu})^n\label{ZfunctionFZ}\,, \end{align} \end{subequations} where $N_F=\lfloor N/2\rfloor$ and $N_Z=\lfloor (N-1)/2\rfloor$, with $\lfloor \cdots \rfloor$ representing the floor function. We denote by $\{\rho_{k,n}\}_{n=2,\,\cdots,\,N}$ the set of scale-dependent dimensionless couplings. This particular decomposition allows us to unambiguously extract the beta functions associated with the set of higher-curvature couplings $\{\rho_{k,n}\}_{n=2,\,\cdots,\,N}$ even on a spherical background. This follows from the fact that $F_k(R_{\mu\nu}R^{\mu\nu})$ and $R \,Z_k(R_{\mu\nu}R^{\mu\nu})$ contribute to the left-hand side of the flow equation with even and odd powers of $\bar{R}$, respectively, when projected onto a spherical background. Following the same procedure outlined in the $f(R)$-approximation, we extract the system of RG equations associated with the dimensionless couplings $G_k$ and $\{\rho_{k,n}\}_{n=2,\,\cdots,\,N}$ by plugging (\ref{FfunctionFZ}) and (\ref{ZfunctionFZ}) into both sides of Eq. (\ref{General_Form_Flow_Eq}) to obtain the following expressions \begin{subequations} \begin{align} \frac{1}{16\pi G_{\textmd{N},k}}&\bigg[-\eta_{\textmd{N}}f_{k}\left(\bar{R},\bar{R}_{\mu\nu}^2\right)+\partial_t f_k\left(\bar{R},\bar{R}_{\mu\nu}^2\right)\bigg]\bigg|_{S^4}=\nonumber\\ &=\frac{\eta_\textmd{N}}{16 \pi G_k}k^2\bar{R}+\frac{1}{16\pi G_k}\sum_{n=1}^{N_F}\frac{k^{4-4n}}{4^n}\left(\beta_{\rho}^{(2n)}+(2-4n-\eta_\textmd{N})\rho_{k,2n}\right)\bar{R}^{2n}\nonumber\\ &+\frac{1}{16\pi G_k}\sum_{n=1}^{N_Z}\frac{k^{2-4n}}{4^n}\left(\beta_{\rho}^{(2n+1)}-(4n+\eta_\textmd{N})\rho_{k,2n+1}\right)\bar{R}^{2n+1}\,, \end{align} for the left-hand side of the flow equation and \begin{align} &\mathscr{F}\left(f_k,f_k^{(m,n)},\eta_i,\partial_t f_k,\partial_t f_k^{(m,n)},N_{\Psi}\right)\bigg|_{S^4}=\nonumber\\ &=\sum_{n=1}^{N}\bigg(\mathscr{A}_n(\rho_k)+\tilde{\mathscr{A}}_n(N_{\Psi})+\sum_{j=1}^{4}\mathscr{B}_{n,j}(\rho_k)\eta_{j}+\sum_{m=2}^{N}\mathscr{M}_{n,m}(\rho_k)\beta_{\rho}^{(m)}\bigg)\,, \end{align} \end{subequations} for the right-hand side of the flow equation. The notation $(\cdots)\big|_{S^4}$ denotes the projection on the spherical background. Performing an order-by-order comparison in the curvature scalar $\bar{R}$, one easily obtains the system of RG equations for the FZ-truncation and the equations for the fixed-point solutions $G^*$ and $\{\rho^*_{n}\}_{n=2,\,\cdots,\,N}$. The final expressions are quite similar to the condensed expressions in (\ref{betaGfR}) and (\ref{betaAlphafR}) for the flow equations and (\ref{FP_EQ1_fR}) and (\ref{FP_EQ2_fR}) for the fixed-point equations, respectively. Nevertheless, the explicit form of the coefficients $\mathscr{A}_n$, $\tilde{\mathscr{A}}_n$, $\mathscr{B}_{n,j}$ and $\mathscr{M}_{n,m}$ obtained within the FZ-truncation differs considerably from the ones extracted via the $f(R)$-approximation.
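The even/odd bookkeeping underlying this projection is mechanical and can be checked symbolically; the following \texttt{sympy} sketch (a toy check at low truncation order, with the dimensionful $k$-factors dropped) collects the powers of $\bar{R}$ contributed by $F$ and by $R\,Z$ on the sphere: \begin{verbatim}
# Toy check of the FZ projection on S^4, where R_{mu nu}^2 = R^2/4:
# F(X) contributes only even powers of R, and R*Z(X) only odd ones.
# Dimensionful k-factors are dropped for simplicity.
import sympy as sp

R = sp.symbols('R', positive=True)
N = 6
NF, NZ = N // 2, (N - 1) // 2
rho = {n: sp.Symbol('rho_%d' % n) for n in range(2, N + 1)}

X = R**2 / 4                                  # spherical-background identity
F = sum(rho[2*n] * X**n for n in range(1, NF + 1))
Z = -1 + sum(rho[2*n + 1] * X**n for n in range(1, NZ + 1))

f = sp.expand(F + R*Z)
for p in range(1, N + 1):
    # even powers carry rho_{2n}, odd powers carry rho_{2n+1}
    print(p, f.coeff(R, p))
\end{verbatim} Each power of $\bar{R}$ is thus controlled by a single coupling, which is precisely what renders the extraction unambiguous.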
\section{Results for the interacting gravitational fixed-point structure}\label{OverallResults} \subsection{Pure gravity systems}\label{Results_PureGravity} In the following, we present our results regarding the fixed-point structure extracted within the two previously defined polynomial truncations, focusing on the case without matter fields, i.e., setting $N_{\phi}=N_A=N_{\psi}=0$. The analysis including matter contributions is reported in Sect. \ref{Impact_Matter}. The fixed-point equations for both truncations are considerably complicated, so we resort to a numerical recursive solution of the fixed-point equations for the higher-order couplings in terms of the two lowest ones and adopt a bootstrap search strategy~\cite{Falls:2013bv,Falls:2014tra} to select suitable fixed-point solutions and critical exponents (a toy sketch of this strategy is shown below). Within the background approximation, a canonical choice of closure for the system of RG equations is obtained with the RG-improved anomalous dimensions (see, e.g., \cite{Codello:2013fpa}) $\eta_{\TT}=\eta_{\sigma}=G_k^{-1}\partial_t G_k-2$ and $\eta_{\xi}=\eta_c=0$. Alternatively, a hybrid closure is obtained by improving the background approximation with anomalous dimensions computed in an independent way via the vertex expansion (see, e.g., \cite{Christiansen:2012rx,Codello:2013fpa,deBrito:2020rwu}). For the $f(R)$-approximation within the RG-improved closure, we have performed the search for fixed-point candidates at each order of the approximation from $N=1$ to $N=20$. It is worth mentioning that, in the case of standard ASQG, i.e., where the theory space is defined by all \textit{Diff}-invariant operators, the fixed-point analysis has been performed within polynomial expansions involving terms up to $R^{70}$~\cite{Falls:2018ylp}. \begin{figure}[!t] \begin{center} \includegraphics[height=5.0cm]{FPs-fR_1.eps} \quad \includegraphics[height=5.0cm]{FPs-fR_2.eps} \quad \includegraphics[height=5.0cm]{FPs-fR_3.eps} \quad \includegraphics[height=5.0cm]{FPs-fR_4.eps} \quad \includegraphics[height=5.0cm]{FPs-fR_5.eps} \quad \includegraphics[height=5.0cm]{FPs-fR_6.eps} \caption{\footnotesize{Fixed-point values of the couplings $G_k$, $\alpha_{k,2}$, $\alpha_{k,3}$, $\alpha_{k,4}$, $\alpha_{k,5}$ and $\alpha_{k,6}$ in the $f(R)$-truncation. Blue circles indicate the Type I regularization (Bochner-Laplacian), whereas red squares indicate the Type II regularization (Lichnerowicz-Laplacians). All plots are computed within the RG-improved prescription.}} \label{fig:FPs_fR_RGImprov} \end{center} \end{figure} In Fig.~\textbf{\ref{fig:FPs_fR_RGImprov}}, we show the fixed-point values of some of the dimensionless couplings (up to $\alpha^{*}_6$) defined in the polynomial $f(R)$-decomposition, see Eq.~(\ref{fR_Truncation}), as functions of the order of approximation $N$. For higher-order couplings ($\alpha_7^*,\,\cdots,\,\alpha_{20}^*$), the fixed-point coordinates satisfy $|\alpha_n^*|<10^{-4}$. In each plot, the results computed with the Bochner-Laplacian (Type I) as a coarse-graining operator are represented by blue circles, whereas the ones computed employing the Lichnerowicz-Laplacians (Type II) are distinguished by red squares. We observe that the fixed-point values stabilize against the inclusion of higher-order operators. Albeit quantitatively different, the fixed-point structure is similar for both coarse-graining operators and, in particular, it displays an apparent stabilization for sufficiently large truncations.
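For orientation, the following sketch illustrates the recursive/bootstrap strategy mentioned above on a toy tower of fixed-point equations (invented for illustration and in no way those of our truncation): the higher-order couplings are expressed recursively in terms of the two lowest ones, the tower is closed by hand, and the solution at order $N$ seeds the search at order $N+1$. \begin{verbatim}
# Toy illustration of the recursive/bootstrap fixed-point search.
# The fixed-point conditions below are invented placeholders.
import numpy as np
from scipy.optimize import fsolve

def tower(g0, g1, N):
    # Solve the toy condition at order n >= 1,
    #   (2n - 2 + g0)*g_n - 0.1*g_{n+1} - 0.05*g0**2 = 0,
    # recursively for the higher couplings g_2, ..., g_{N+1}.
    g = [g0, g1]
    for n in range(1, N + 1):
        g.append(((2*n - 2 + g[0])*g[n] - 0.05*g[0]**2) / 0.1)
    return g

def residual(x, N):
    g0, g1 = x
    g = tower(g0, g1, N)
    b0 = -2*g0 + 0.95*g0**2 - 0.1*g1    # toy condition at order 0
    return [b0, g[N + 1]]               # closure: demand g_{N+1} = 0

seed = [2.0, 0.1]
for N in range(2, 9):
    sol, _, ok, _ = fsolve(residual, seed, args=(N,), full_output=True)
    print(N, np.round(sol, 5), 'ok' if ok == 1 else 'no convergence')
    seed = sol                          # bootstrap the next order
\end{verbatim} The diagnostic for an apparent fixed point is then the stabilization of the printed coordinates as $N$ grows.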
In order to provide a better visualization of the stabilization pattern against higher-order extensions for both regularization schemes, we consider in Fig.~\textbf{\ref{fig:NormalizedFPsfR}} a convenient normalization for the fixed-point couplings. Following \cite{Falls:2014tra, Falls:2017lst,Falls:2018ylp}, we define the set of normalized fixed points $\{\lambda_n\}_{n=1,\,\cdots,\, N}$ according to \begin{equation} \lambda_1(N)=\frac{G^*(N)}{G^*(N_{\text{max}})}+1 \qquad \text{and} \qquad \lambda_n(N)=\frac{\alpha_n^*(N)}{\alpha^*_n(N_{\text{max}})}+n, \end{equation} where $G^*(N)$ and $\alpha^*_n(N)$ represent the fixed-point values of the dimensionless couplings computed at order $N$. The couplings $\lambda_n$ are normalized in units of the fixed-point values computed at the largest approximation order, which in the present case is $N_{\text{max}}=20$, and are shifted by $n$. Fig.~\textbf{\ref{fig:NormalizedFPsfR}} gives evidence for the rapid apparent stabilization of the fixed points. \begin{figure}[!t] \begin{center} \includegraphics[height=5.8cm]{NormalizedFPs_Scheme3.eps} \quad \hspace{3em} \includegraphics[height=5.8cm]{NormalizedFPs_Scheme4.eps} \caption{\footnotesize{Normalized fixed points in the $f(R)$-truncation. The convergence pattern is exhibited with the normalization $\lambda_1(N)=\frac{G^*(N)}{G^*(N_{\text{max}})}+1$ and $\lambda_n(N)=\frac{\alpha_n^*(N)}{\alpha^*_n(N_{\text{max}})}+n$ (for $n>1$). From bottom to top, we display $\lambda_{n}(N)$ for $n=1,\cdots,15$ in the $N_{\text{max}}=20$ truncation (see main text). The left panel shows the normalized fixed-point values associated with the Type I regularization scheme, while the right panel corresponds to results obtained via the Type II regularization scheme. Both schemes of calculation are closed with the RG-improved prescription.}} \label{fig:NormalizedFPsfR} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[height=5.8cm]{CritExps_Scheme3.eps}\quad \hspace{3em} \includegraphics[height=5.8cm]{CritExps_Scheme4.eps} \caption{\footnotesize{Critical exponents associated with the fixed-point structure in the $f(R)$-approximation for the range $n=1,\cdots, 15$ in the $N_{\textmd{max}}=20$ truncation within the RG-improved closure. The left panel corresponds to results obtained under the Type I regularization, while the right panel displays the results obtained under the Type II regularization.}} \label{fig:fR&CritExps} \end{center} \end{figure} Fig.~\textbf{\ref{fig:fR&CritExps}} displays the corresponding critical exponents for both types of coarse-graining operators as functions of $N$ and gives evidence for a non-increasing number of relevant directions\footnote{We follow the convention that relevant directions are characterized by positive (real part of the) critical exponents which, in turn, are the eigenvalues of the stability matrix multiplied by minus one.}. This indicates that the dimensionality of the UV critical hypersurface does not grow with the dimension of the truncated unimodular theory space, which is a crucial feature for the asymptotic safety program. As for the fixed-point values, additional higher-order invariant operators do not seem to spoil the stabilization of the critical exponents. In particular, for the critical exponents computed within the Type I regularization, i.e., with the Bochner-Laplacian as coarse-graining operator, Fig.~\textbf{\ref{fig:fR&CritExps}} (left) indicates that the number of relevant directions saturates at two (except, obviously, for $N=1$).
For the Type II regularization, characterized by the Lichnerowicz-Laplacians (right), the analysis is a bit more subtle. In this case, we observe a small oscillation in the neighborhood of positive values for lower-order truncations ($N<6$). In spite of that, the inclusion of additional invariant operators drives the number of relevant directions to stabilize at two as well. An interesting feature also displayed in Fig.~\textbf{\ref{fig:fR&CritExps}} is the near-canonical character of the critical exponents, i.e., the critical exponents deviate only mildly from the canonical scaling of the operators appearing in our truncation. Indeed, the critical exponents computed within the $f(R)$-expansion behave like $\theta_n\sim\Delta_n$, where $\Delta_n=4-2n$ is the canonical scaling dimension of an invariant of the form $\bar{R}^n$. The two positive critical exponents appear as exceptions, since they deviate from the corresponding canonical scaling dimension by a larger gap. The near-canonical character of the critical exponents was already observed in a unimodular setting based on a polynomial expansion of $f(R)$ up to $\bar{R}^{10}$~\cite{Eichhorn:2015bna}. Additionally, it is worth mentioning that a near-canonical spectrum of critical exponents has been investigated in detail within standard ASQG~\cite{Falls:2013bv,Falls:2014tra,Falls:2017lst,Eichhorn:2018akn,Eichhorn:2018ydy,Falls:2018ylp,Kluth:2020bdv}. In particular, such a property suggests that power-counting can be a good guiding principle in the construction of truncations of the flowing effective action. As stated earlier, in an attempt to go beyond the RG-improvement prescription, the anomalous dimensions of the fluctuating metric and ghost fields may be independently computed through a simultaneous vertex and derivative expansion of the effective average action, in the same fashion as discussed previously in the unimodular setting~\cite{deBrito:2020rwu} (see also \cite{Christiansen:2012rx,Codello:2013fpa}). This provides a second way of closing the system of RG equations by combining the background-field approximation for the couplings with independent anomalous dimensions for fluctuating fields in a hybrid approach, as in \cite{Eichhorn:2010tb,Codello:2013fpa,Dona:2013qba,Dona:2014pla}. Our setup for the generation of the interaction proper vertices employs the same ansatz (\ref{workingtruncation}). In order to capture higher-curvature effects, the Lagrangian $f(R,R_{\mu\nu}^2)$ is decomposed into an Einstein-Hilbert term supplemented by quadratic-curvature invariants, such that the gravitational sector takes the form \begin{equation}\label{Quadratic_Action} \Gamma_k^{\text{gravity}}[g_{\mu\nu}]=\frac{k^2}{16\pi G_{k}}\int_x\omega\,\big(-R+k^{-2}\alpha_{k,2}R^2+k^{-2}\rho_{k,2} R_{\mu\nu}R^{\mu\nu}\big)\,, \end{equation} with $\alpha_{k,2}$ and $\rho_{k,2}$ being the same dimensionless couplings as in (\ref{fR_Truncation}) and (\ref{FfunctionFZ}), respectively. In particular, for computational simplicity, curvature-squared contributions to the vertices are neglected. We emphasize that this is an additional approximation that should be refined in a future investigation. After expanding the gravitational action in powers of the fluctuation field $h_{\mu\nu}$, we set $\bar{g}_{\mu\nu}=\delta_{\mu\nu}$. This setup allows us to obtain the anomalous dimensions in the form $\eta_i=\eta_i(G_k,\alpha_{k,2},\rho_{k,2},\beta_{\alpha},\beta_{\rho},N_{\Psi})$.
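Since the $\eta_i$ also enter the right-hand sides linearly (through $\partial_t\textbf{R}_k$), the anomalous dimensions obey a linear system of the schematic form $\vec{\eta}=A(g)+B(g)\,\vec{\eta}$. The following minimal numerical sketch (with hypothetical entries for $A$ and $B$) contrasts the full solution of this system with the semi-perturbative one discussed below, which simply drops the $B\,\vec{\eta}$ term: \begin{verbatim}
# Anomalous dimensions from eta = A + B @ eta.
# Full closure: eta = (1 - B)^{-1} A;  semi-perturbative: eta ~ A.
# The entries of A and B are hypothetical placeholders.
import numpy as np

A = np.array([0.30, -0.10, 0.20, 0.05])   # (eta_TT, eta_xi, eta_sigma, eta_c)
B = np.array([[0.05, 0.01, 0.02, 0.00],
              [0.01, 0.04, 0.00, 0.01],
              [0.02, 0.00, 0.06, 0.01],
              [0.00, 0.01, 0.01, 0.03]])

eta_full = np.linalg.solve(np.eye(4) - B, A)   # full closure
eta_semi = A                                   # semi-perturbative closure
print(eta_full)
print(eta_semi)
\end{verbatim}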
The explicit expressions are given in App.~\textbf{\ref{etas}} within a semi-perturbative approximation\footnote{The semi-perturbative approximation consists in setting to zero all the $\eta$'s that would arise from the RG-scale derivative on the regulator function. This amounts to neglecting the $\eta$'s on the right-hand side of the expressions for the anomalous dimensions \cite{Dona:2013qba,Dona:2015tnf,Eichhorn:2017lry,Eichhorn:2017sok,Eichhorn:2018nda,Eichhorn:2019yzm,deBrito:2020rwu}.} and, when inserted in the RG equations for the $f(R)$-truncation, the coupling $\rho_{k,2}$ and its beta function are set to zero, since the $f(R)$-approximation does not contain any $R_{\mu\nu}^2$-dependence. Similarly, when treating the system of RG equations for the FZ-truncation, the coupling $\alpha_{k,2}$ and its beta function are not considered. To avoid a proliferation of similar plots, we refrain from showing the plots of convergence of individual fixed-point values and only exhibit results for the normalized fixed points and the critical exponents in Fig.~\textbf{\ref{fig:NormalizedFPsfRSemi}} and Fig.~\textbf{\ref{fig:fR&CritExpsSemi}}. \begin{figure}[!t] \begin{center} \includegraphics[height=5.8cm]{NormalizedFPs_Scheme5.eps} \quad \hspace{3em} \includegraphics[height=5.8cm]{NormalizedFPs_Scheme6.eps} \caption{\footnotesize{Plots of the convergence pattern for the normalized fixed-point values of the couplings $\lambda_{n}(N)$ for the $f(R)$-approximation evaluated within the hybrid semi-perturbative closure. The left plot exhibits the convergence pattern for the range $n=1,\cdots,13$ in the $N_{\text{max}}=16$ truncation under the Type I regularization, while the right plot displays the convergence pattern for the range $n=1,\cdots,11$ in the $N_{\text{max}}=14$ truncation under the Type II regularization. All couplings follow the same normalization convention as defined previously. The truncation orders are smaller than in the previous results, and differ between the coarse-graining operators, due to numerical instabilities.}} \label{fig:NormalizedFPsfRSemi} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[height=5.95cm]{CritExps_Scheme5.eps} \quad \hspace{3em} \includegraphics[height=5.95cm]{CritExps_Scheme6.eps} \caption{\footnotesize{Critical exponents associated with the fixed-point structure in the $f(R)$-approximation within the hybrid semi-perturbative closure. The left panel corresponds to results for the range $n=1,\cdots,13$ in the $N_{\text{max}}=16$ truncation under the Type I regularization. In particular, only the third and fourth sets of critical exponents, under this regularization, are complex conjugate pairs and, consequently, the lines representing their real parts fall on top of each other. The right plot exhibits the results for the range $n=1,\cdots,11$ in the $N_{\text{max}}=14$ truncation under the Type II regularization. Clearly, in the latter case, the truncation needs to be further extended in order to verify whether apparent convergence is restored for the critical exponents.}} \label{fig:fR&CritExpsSemi} \end{center} \end{figure} As argued in \cite{Meibohm:2015twa}, for a generic class of regulators proportional to the two-point functions, the requirement that the regulators diverge in the UV leads to the constraint $\vec{\eta}\,\big|_*<2$.
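Heuristically, the origin of this constraint is the scaling of the regulator itself: writing $\textbf{R}_k \sim \mathcal{Z}_i\,k^2\,r(\Delta/k^2)$ with $\mathcal{Z}_i\sim k^{-\eta_i|_*}$ in the fixed-point regime, one finds $\textbf{R}_k\sim k^{2-\eta_i|_*}$, which diverges in the UV only if $\eta_i\big|_*<2$.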
Since our regulators fall in this class, we have selected, within the hybrid semi-perturbative prescription, fixed-point values for $G^*$ and $\alpha_2^*$ which respect the bound\footnote{The constraint verified in this work is subject to the approximations made here. In particular, for $\eta_{\sigma}$ the range of fixed-point values may be more restricted if one considers the effects of symmetry-breaking graviton mass terms (and full closure), as discussed in \cite{deBrito:2020rwu}.} $\vec{\eta}\,\big|_*<2$. The convergence pattern of the normalized fixed-point values $\lambda_n(N)$ for the $f(R)$-approximation is displayed in Fig.~\textbf{\ref{fig:NormalizedFPsfRSemi}} for both types of regularization schemes. Owing to the non-linear character of the expressions for the anomalous dimensions given by (\ref{Eta_TT_Grav}), (\ref{Eta_Sigma_Grav}) and (\ref{Eta_Gh}), in contrast with the RG-improved case, we only managed to find suitable fixed-point solutions for the polynomial truncation up to $N_{\text{max}}=16$ for the Type I regularization and up to $N_{\text{max}}=14$ for Type II. As in the case of the RG-improved prescription, the Type I regularization leads to stable fixed-point solutions, apart from a late stabilization of the normalized coupling associated with the invariant $\bar{R}^5$. However, the Type II regularization only leads to a clear apparent stabilization for the first four lower-order operators and seems to be sensitive to the inclusion of higher-order invariants. This behavior is again evident in the plots of the critical exponents in Fig.~\textbf{\ref{fig:fR&CritExpsSemi}}. In order to tell whether this behavior is a truncation artifact due to the independently-computed anomalous dimensions or simply reflects a limitation of our search method, an investigation of higher-order truncations would be needed. Interestingly though, the near-canonical character of the critical exponents is still manifest for both types of regularization schemes within this hybrid semi-perturbative approximation. This indicates that quantum fluctuations encoded in the anomalous dimensions provide a mild contribution to all invariant operators. We move on to discuss the fixed-point structure of the polynomial FZ-truncation. The more complicated nature of this truncation naturally leads to larger expressions than in the $f(R)$-truncation, thus demanding additional computational capacity. As a consequence, within the RG-improved prescription, we limit ourselves to exploring the fixed-point equations within a truncation where the highest-order invariant operator corresponds to $R(R_{\mu\nu}R^{\mu\nu})^7$ (i.e., $N_{\text{max}}=15$). As in the $f(R)$ case, a numerical recursive solution of the fixed-point equations is implemented alongside a bootstrap search method. \begin{figure}[!t] \begin{center} \includegraphics[height=5.0cm]{FPs-FZ_1.eps} \quad \includegraphics[height=5.0cm]{FPs-FZ_2.eps} \quad \includegraphics[height=5.0cm]{FPs-FZ_3.eps} \quad \includegraphics[height=5.0cm]{FPs-FZ_4.eps} \quad \includegraphics[height=5.0cm]{FPs-FZ_5.eps} \quad \includegraphics[height=5.0cm]{FPs-FZ_6.eps} \caption{\footnotesize{ Fixed-point values of the couplings $G_k$, $\rho_{k,2}$, $\rho_{k,3}$, $\rho_{k,4}$, $\rho_{k,5}$ and $\rho_{k,6}$ in the FZ-truncation. Blue circles indicate the Type I regularization (Bochner-Laplacian), whereas red squares indicate the Type II regularization (Lichnerowicz-Laplacians).
All plots are computed within the RG-improved prescription.}} \label{fig:FPs_FZ_RGImprov} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=5.8cm]{NormalizedFPsFZ_Scheme3.eps} \quad \hspace{3em} \includegraphics[height=5.8cm]{NormalizedFPsFZ_Scheme4.eps} \caption{\footnotesize{Plots of the convergence pattern of the normalized fixed-point values of the couplings $\lambda_{n}(N)$ (now given in terms of $(G_k,\rho_{k,n})$) for the FZ-truncation evaluated within the RG-improved prescription. The left plot exhibits the convergence pattern for the range $n=1,\cdots,7$ in the $N_{\text{max}}=9$ truncation under the Type I regularization, while the right plot displays the convergence pattern for the range $n=1,\cdots,12$ in the $N_{\text{max}}=15$ truncation under the Type II regularization. All couplings follow the same normalization convention as in the $f(R)$ case.}} \label{fig:NormalizedFPsFZ} \end{center} \end{figure} In Fig.~\textbf{\ref{fig:FPs_FZ_RGImprov}}, we display our findings for the fixed-point values of the dimensionless couplings up to $\rho_{k,6}$ as functions of $N$, extracted from the FZ-truncation for both types of coarse-graining operators. We adopt the same convention for the plot markers as in the $f(R)$-truncation. Additionally, in Fig.~\textbf{\ref{fig:NormalizedFPsFZ}}, we display the convergence pattern of the normalized fixed-point values of the couplings $\lambda_n(N)$ defined in terms of ($G_k, \rho_{k,n}$). As one can notice from Figs.~\textbf{\ref{fig:FPs_FZ_RGImprov}} and~\textbf{\ref{fig:NormalizedFPsFZ}}, for the regularization employing the Lichnerowicz-Laplacians (red squares in~\textbf{\ref{fig:FPs_FZ_RGImprov}} and right panel in~\textbf{\ref{fig:NormalizedFPsFZ}}), we managed to find suitable NGFP solutions for the polynomial truncation up to $N_{\text{max}}=15$, exhibiting mild oscillations for higher-order invariant operators (with the exception of strong oscillations at the approximation orders $N=3$ and $N=8$). In contrast, the regularization based on the Bochner-Laplacian (blue circles in~\textbf{\ref{fig:FPs_FZ_RGImprov}} and left panel in~\textbf{\ref{fig:NormalizedFPsFZ}}) leads to suitable, and apparently more stable, NGFP solutions only up to $N_{\text{max}}=9$. We attribute this feature to a limitation of the numerical method we implemented to generate fixed-point solutions. \begin{figure}[!t] \begin{center} \includegraphics[height=5.8cm]{CritExpsFZ_Scheme3.eps} \quad \hspace{3em} \includegraphics[height=5.8cm]{CritExpsFZ_Scheme4.eps} \caption{\footnotesize{Critical exponents associated with the fixed-point structure in the FZ-truncation within the RG-improved closure. The left panel corresponds to results for the range $n=1,\cdots,7$ in the $N_{\text{max}}=9$ truncation obtained under the Type I regularization, while the right plot displays the results for the range $n=1,\cdots,12$ in the $N_{\text{max}}=15$ truncation obtained under the Type II regularization.}} \label{fig:FZCritExps} \end{center} \end{figure} According to the critical exponents illustrated in Fig.~\textbf{\ref{fig:FZCritExps}}, our findings for the FZ-truncation still indicate that the UV critical hypersurface is characterized by two relevant directions for both types of coarse-graining operators. Despite the stabilization of the number of relevant directions, the numerical values of the critical exponents undergo the same unstable behavior as the fixed-point values depicted in Fig.~\textbf{\ref{fig:FPs_FZ_RGImprov}}.
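Recalling the convention above (critical exponents as minus the eigenvalues of the stability matrix $\partial\beta_m/\partial g_n$ evaluated at the fixed point), a self-contained toy sketch of their numerical extraction, with invented beta functions that are not those of the paper, reads: \begin{verbatim}
# Critical exponents as minus the eigenvalues of the stability matrix,
# computed here by central finite differences on toy beta functions.
import numpy as np
from scipy.optimize import fsolve

def beta(g):
    # invented two-coupling toy system, for illustration only
    return np.array([-2.0*g[0] + 0.95*g[0]**2 - 0.1*g[1],
                     g[0]*g[1] - 0.05*g[0]**2])

g_star = fsolve(beta, [2.0, 0.1])
eps = 1e-6
M = np.empty((2, 2))
for n in range(2):
    e = np.zeros(2); e[n] = eps
    M[:, n] = (beta(g_star + e) - beta(g_star - e)) / (2*eps)

theta = -np.linalg.eigvals(M)   # relevant directions: Re(theta) > 0
print(np.round(g_star, 4), np.round(theta, 3))
\end{verbatim}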
Despite the difficulties in extending our analysis to truncations higher than $N=9$ for the Type I regularization scheme, the results shown in Fig.~\textbf{\ref{fig:FZCritExps}} (left) indicate that the critical exponents share the same near-canonical character as in the case of the $f(R)$-approximation. However, such a behavior is less apparent in the case of Type II coarse-graining operators. Here, as one can see from Fig.~\textbf{\ref{fig:FZCritExps}} (right), some critical exponents behave according to the near-canonical scaling. Nevertheless, for several choices of $N$ there are points which exhibit appreciable deviations from the canonical scaling of the invariant operators within the truncation. To conclude this section, we display in Figs. \textbf{\ref{fig:NormalizedFPsFZSemi}} and \textbf{\ref{fig:FZCritExpsSemi}} the results for the normalized fixed-point values and critical exponents in the FZ-truncation when the semi-perturbative prescription is adopted. For the regularization employing the Lichnerowicz-Laplacians (right panel in Fig.~\textbf{\ref{fig:NormalizedFPsFZSemi}}), suitable NGFP solutions were found for the polynomial truncation up to $N_{\textmd{max}}=10$, with improved stabilization of the fixed-point coordinates, apart from a severe oscillation at order $N=5$. Given the simplicity of our truncation, these results improve on those of the RG-improved case. Regarding the Bochner-Laplacian operator (left panel in Fig.~\textbf{\ref{fig:NormalizedFPsFZSemi}}), stable results were achieved only up to $N_{\textmd{max}}=8$; a similar limitation was observed in the previous analysis. Moreover, at order $N=2$, we have disregarded the only would-be suitable NGFP solution for the pair $(G^*,\rho_2^*)$, since one of the two corresponding critical exponents is $\sim 110$ and may be regarded as a truncation artifact. Conclusive statements regarding the stability of the fixed point require an extensive analysis of more sophisticated truncations. \begin{figure}[!t] \begin{center} \includegraphics[height=5.8cm]{NormalizedFPsFZ_Scheme5.eps} \quad \hspace{3em} \includegraphics[height=5.8cm]{NormalizedFPsFZ_Scheme6.eps} \caption{\footnotesize{Plots of the convergence pattern for the normalized fixed-point values of the couplings $\lambda_{n}(N)$ (given in terms of $(G_k,\rho_{k,n})$) for the FZ-truncation evaluated within the semi-perturbative prescription. The left plot exhibits the convergence pattern for the range $n=1,\cdots,7$ in the $N_{\text{max}}=8$ truncation under the Type I regularization, while the right plot displays the convergence pattern for the range $n=1,\cdots,9$ in the $N_{\text{max}}=10$ truncation under the Type II regularization. All couplings follow the same normalization convention as in the $f(R)$ case.}} \label{fig:NormalizedFPsFZSemi} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[height=5.8cm]{CritExpsFZ_Scheme5.eps} \quad \hspace{3em} \includegraphics[height=5.8cm]{CritExpsFZ_Scheme6.eps} \caption{\footnotesize{Critical exponents associated with the fixed-point structure in the FZ-truncation within the semi-perturbative closure.
The left panel corresponds to results for the range $n=1,\cdots,7$ in the $N_{\text{max}}=8$ truncation obtained under the Type I regularization, while the right plot displays the results for the range $n=1,\cdots,9$ in the $N_{\text{max}}=10$ truncation obtained under the Type II regularization.}} \label{fig:FZCritExpsSemi} \end{center} \end{figure} As in the RG-improved case, the dimensionality of the UV critical hypersurface is still two for both regularization schemes. However, for the Type II case, the two positive critical exponents exhibit mild oscillations and, as opposed to the corresponding RG-improved result, the gap controlling their near-canonical scaling is severely reduced by the anomalous-dimension contributions when higher-order invariant operators are included. Nevertheless, in contrast with the RG-improved analysis, the critical exponents do not exhibit appreciable deviations from canonical scaling for several choices of the approximation order $N$. Notably, our findings suggest that, in the unimodular version of the FZ-truncation, the search for fixed-point candidates is hampered by difficulties in extending the approximation order to $N=16$ and beyond (even in the best case, i.e., Lichnerowicz-Laplacians within the RG-improvement prescription), in comparison with the fixed-point analysis in the unimodular version of the $f(R)$-approximation. The possibility of extension gets more restricted when independent anomalous dimensions are adopted. As a consequence, the FZ-truncation in the unimodular setting generates less stable solutions than its $f(R)$ counterpart overall. This is in contrast with previous findings in the standard ASQG setting. In particular, the systematic investigation carried out in~\cite{Falls:2017lst} reveals that the FZ-truncation presents a faster stabilization, including higher-order extensions, than the $f(R)$-approximation. Considering the approximations we have used, our findings reveal the opposite behavior in the unimodular version. \subsection{Gravity-matter systems}\label{Impact_Matter} Several works in standard ASQG provide strong hints for the existence of a NGFP in the RG flow within different truncations, ranging from the Einstein-Hilbert approximation to more sophisticated ones \cite{Souma:1999at,Reuter:2001ag,Lauscher:2002sq,Litim:2003vp,Codello:2006in,Codello:2007bd,Machado:2007ea,Codello:2008vh,Eichhorn:2009ah,Benedetti:2009rx,Benedetti:2009gn,Eichhorn:2010tb,Manrique:2010am,Manrique:2011jc,Christiansen:2012rx,Benedetti:2012dx,Demmel:2012ub,Dietz:2012ic,Dietz:2013sba,Demmel:2013myx,Falls:2013bv,Benedetti:2013jk,Ohta:2013uca,Codello:2013fpa,Demmel:2014sga,Demmel:2014hla,Falls:2014tra,Christiansen:2014raa,Falls:2014zba,Falls:2015qga,Christiansen:2015rva,Demmel:2015oqa,Ohta:2015efa,Ohta:2015fcu,Gonzalez-Martin:2017gza,Knorr:2017mhu,Christiansen:2017bsy,Gies:2015tca,Gies:2016con,Biemans:2016rvp,Denz:2016qks,Falls:2016msz,Falls:2017lst,Falls:2018ylp,deBrito:2018jxt,Knorr:2019atm,Burger:2019upn,Falls:2020qhj,Kluth:2020bdv}.
On top of that, a growing number of investigations provide compelling evidence for the persistence of the NGFP against the introduction of a large class of matter fields, such as the field content corresponding to the SM of particle physics and some beyond-SM (bSM) extensions, see \cite{Dou:1997fg,Percacci:2002ie,Percacci:2003jz,Shaposhnikov:2009pv,Narain:2009fy,Zanusso:2009bs,Eichhorn:2011pc,Eichhorn:2012va,Dona:2012am,Dona:2013qba,Dona:2014pla,Labus:2015ska,Oda:2015sma,Meibohm:2015twa,Dona:2015tnf,Meibohm:2016mkp,Eichhorn:2016esv,Eichhorn:2016vvy,Biemans:2017zca,Hamada:2017rvn,Christiansen:2017qca,Eichhorn:2017eht,Eichhorn:2017egq,Eichhorn:2017sok,Christiansen:2017cxa,Eichhorn:2017als,Christiansen:2017gtg,Eichhorn:2017ylw,Eichhorn:2017lry,Alkofer:2018fxj,Alkofer:2018baq,Eichhorn:2018akn,Eichhorn:2018ydy,Eichhorn:2018nda,Pawlowski:2018ixd,Eichhorn:2018whv,deBrito:2019epw,Wetterich:2019zdo,Wetterich:2019rsn,Reichert:2019car,Burger:2019upn,Domenech:2020yjf,Daas:2020dyo,Eichhorn:2020sbo,deBrito:2020dta}. In this section, we explore the impact of matter degrees of freedom on the interacting gravitational fixed-point structure in the unimodular setting for both $f(R)$ and $F(R_{\mu\nu}^2)+RZ(R_{\mu\nu}^2)$ polynomial truncations. By varying the number of matter fields, we can probe whether non-trivial fixed-point solutions in the unimodular theory space are compatible with the field content of the SM of particle physics as well as with some bSM extensions. Following the same strategy employed in the pure gravity case, a numerical recursive solution and a bootstrap search method were adopted for the selection of suitable fixed-point candidates. For both $f(R)$- and FZ-truncations we have limited our search to fixed-point solutions within polynomial approximations ranging from $N=1$ to $N=10$. The results reported in the case of gravity-matter systems are restricted to the RG-improved treatment for the anomalous dimensions, i.e., using a prescription that relates $\eta_\TT$ and $\eta_\sigma$ to the beta function of the Newton coupling, while setting all other anomalous dimensions to zero. The other prescription considered in the pure gravity case, with anomalous dimensions computed via the derivative expansion, will not be reported here. The reason is the existence of certain bounds on the anomalous dimensions, pointed out in \cite{Meibohm:2015twa}, which appear as a consistency requirement for the appropriate behavior of the FRG regulator as $k\to \infty$. For gravity-matter systems we have found fixed-point values that violate such bounds, so these results are not self-consistent. In Table \textbf{\ref{fig:Table_FPsMatter}} we exhibit a summary of the results concerning the stability of NGFPs for specific matter contents for both $f(R)$ and $F(R_{\mu\nu}^2)+RZ(R_{\mu\nu}^2)$ approximations. In this case, we just report the main qualitative features, i.e., in which cases we find evidence for fixed-point solutions and the corresponding number of relevant directions. In all cases, we have investigated polynomial truncations including operators up to $\mathcal{O}(R^{10})$.
\begin{table} \begin{center} \resizebox{13cm}{!}{% \vspace{0pt} \begin{tabular}{|c | c | c | c | c |c | c |c|} \toprule \multicolumn{8}{|c|}{Stability of NGFP for some specific matter models}\\ \midrule Model & \multicolumn{3}{c|}{Matter content} & \multicolumn{2}{c|}{Type I } & \multicolumn{2}{c|}{Type II } \\ \midrule & $N_{\phi}$ & $N_A$ & $N_{\psi}$ & $f(R)$ & $F(R_{\mu\nu}^2)+R Z(R_{\mu\nu}^2)$ & $f(R)$ & $F(R_{\mu\nu}^2)+R Z(R_{\mu\nu}^2)$ \\ \midrule SM & 4 & 12 & 45/2 & \ding{51} $(2)$ & \ding{51} $(2)$ & \ding{51} $(2)$ & \ding{51} $(2)$ \\ \midrule SM + $3\nu_{\text{R}}$ & 4 & 12 & 24 & \ding{51} $(2)$ & \ding{51} $(3)$ & \ding{51} $(2)$ &\ding{51} $(2)$ \\ \midrule MSSM & 49 & 12 & 61/2 & \ding{54} & \ding{54} & \ding{54} &\ding{54} \\ \midrule SU(5) GUT & 124 & 24 & 24 & \ding{54} & \ding{54} & \ding{54} &\ding{54} \\ \midrule SO(10) GUT & 97 & 45 & 24 & \ding{51}$^*(2)$ & \ding{51}$^*(3)$ & \ding{51}$^*(2)$ &\ding{51}$^*(2)$ \\ \bottomrule \end{tabular}% } \caption{\footnotesize{Collection of the results on the stability of NGFPs arising from the matter content of the Standard Model and some of its commonly studied extensions for both $f(R)$ and $F(R_{\mu\nu}^2)+RZ(R_{\mu\nu}^2)$ polynomial projections in the unimodular setting. The RG-improved closure is adopted. The symbols are as follows: a checkmark $\,$\ding{51}$\,$ indicates that the underlying setup possesses a suitable NGFP which converges for increasing order of approximation $N$. The number in parentheses indicates the number of relevant directions observed. An asterisk indicates that no NGFP is found at the level of the Einstein-Hilbert truncation ($N=1$), with a suitable NGFP emerging and converging at higher orders. Finally, a $\,$\ding{54}$\,$ means that no NGFP is found at any order of approximation, apart from an isolated appearance at a single power of the curvature.}} \label{fig:Table_FPsMatter} \end{center} \end{table} The minimal requirement for a phenomenologically viable fixed-point solution is its compatibility with the matter content of the SM, i.e., $N_{\phi}=4$, $N_A=12$ and $N_{\psi}=45/2$. As we can observe in Table \ref{fig:Table_FPsMatter}, our results point towards the existence of this fixed point for both truncations under investigation and for both types of regulators employed in the coarse-graining procedure. The fixed-point solution corresponding to the SM matter content exhibits similarities with the results observed for pure gravity. In both truncations and regularization schemes, we have found evidence for two-dimensional UV critical surfaces. Furthermore, the numerical values for the fixed-point solutions, as well as for the critical exponents, seem to stabilize for truncations characterized by $N\gtrsim 6$. The exception is the FZ-approximation with Type I regulator, which presents a mild deviation from the ``convergence'' pattern at $N=9$ and $N=10$. To complement our analysis, we have also considered the matter content associated with some bSM scenarios. The first extension, which is motivated by the necessity to accommodate neutrino masses, corresponds to the choice $(N_{\phi}=4,N_A=12,N_{\psi}=24)$, i.e., including $3/2$ additional Dirac fermions (or 3 Weyl fermions), accounting for 3 right-handed neutrinos. In this case, our results also point towards the existence of UV fixed-point solutions. The main difference in comparison with the SM matter content is the appearance of an extra relevant direction in the FZ-truncation with Type I regularization.
For the other approximations/schemes, our results indicate two relevant directions. It is also interesting to consider matter content corresponding to bSM scenarios characterized by larger symmetry groups, e.g., supersymmetric models and grand unified theories (GUTs). In Table \ref{fig:Table_FPsMatter}, we report our findings for matter content associated with the minimal supersymmetric Standard Model (MSSM), SU(5) and SO(10) GUTs. Among these options, only the SO(10) GUT ($N_\phi=97$, $N_A=45$ and $N_\psi = 24$) exhibits suitable fixed-point solutions. In this case, most of the schemes under investigation lead to UV fixed points characterized by two relevant directions. The exception, once again, is the FZ-truncation with Type I regulator, where we have found three relevant directions. It is also interesting to emphasize that, in the case of matter content corresponding to the SO(10) GUT, the fixed-point solutions do not appear at the level of the lowest truncation, i.e., $N=1$. For the matter content corresponding to the MSSM and SU(5) GUT, we do not find evidence for the existence of suitable fixed-point solutions within the aforementioned truncations. Our findings show a qualitative agreement with the results for non-unimodular settings reported in \cite{Dona:2013qba,Alkofer:2018fxj}. As pointed out in \cite{Dona:2013qba}, the absence of suitable UV fixed points for gravity-matter systems with field content corresponding to the MSSM and SU(5) GUT can be explained by the inclusion of too many scalars and fermions, which is not compensated by the inclusion of extra vector fields. It is important to emphasize that this explanation is restricted to the calculations based on the background-field approximation. It is worth mentioning that results from the fluctuation approach to ASQG, see \cite{Pawlowski:2020qer}, indicate that the inclusion of too many scalars pushes the scalar anomalous dimension to a regime that violates certain regulator bounds \cite{Meibohm:2015twa}. \section{Conclusions}\label{Conclusions} In this work, the renormalization group flow of unimodular quantum gravity was analyzed. This was motivated by the possibility that such a quantum theory is asymptotically safe and, thus, well defined up to arbitrarily short distances. We have explored larger theory spaces than previous analyses, by considering truncations which involve the tensorial structure of Ricci-tensor invariants, together with anomalous dimensions computed from the running of the two-point functions of gravitons and Faddeev-Popov ghosts. Moreover, in the background approximation, we have used the background-dependent correction to the flow equation discussed in \cite{deBrito:2020rwu}. Such improvements enabled us to confront previous results \cite{Eichhorn:2013xr,Eichhorn:2015bna} with enlarged truncations and, apart from quantitative differences which follow from truncation-induced effects, we have found evidence for the persistence of the fixed point. Technically, we have also tested how the underlying fixed-point structure is affected by different choices of the endomorphism parameter in the regulator function. In particular, we discussed results obtained for Bochner and Lichnerowicz coarse-graining operators.
As expected, different choices of such operators, in the background approximation, directly affect the projection onto curvature invariants in the flow equation and can lead to substantially different qualitative results, such as the number of relevant directions (see, e.g., \cite{Bridle:2013sra,deBrito:2018jxt}). For this discrete choice of the endomorphism parameter, we have observed stable qualitative results both in the $f(R)$ and FZ truncations, where the fixed point features two relevant directions. Nevertheless, different classes of truncations lead to different computational subtleties, and we have verified that, in this setup, the $f(R)$ truncation has better (apparent) convergence properties. More efficient methods must be employed for the FZ-truncation in order to probe whether the fixed-point structure stabilizes for larger truncations. In any case, it is remarkable that changing the endomorphism parameter and the anomalous-dimension prescription within each class leads to a fixed point with the same qualitative features, supporting the expectation that this is a consequence of the near-perturbative nature of the fixed point (which is reflected in the near-canonical scaling). Finally, we have considered the interaction of unimodular quantum gravity with matter degrees of freedom. Intuitively, matter fluctuations will affect the running of the gravitational couplings and, since we aim at describing a realistic theory of quantum gravity, the fixed point must exist in the presence of matter fields. As a first approximation, we have included scalars, spinors and vectors without self-interactions coupled to the unimodular gravitational field. As discussed, the matter content of the SM and of some of its extensions does not destroy the fixed point, leading to evidence for the existence of a complete theory of quantum gravity and matter. However, as pointed out, for some extensions of the SM the matter content is ``too big'' and destroys the fixed point, i.e., it acts against scale invariance in the UV. Hence, it is a concrete realization that, even for truncated theory spaces, one might indeed find systems which do not feature a fixed point. The present work suggests several different ways of improving the truncations of unimodular quantum gravity and matter systems. In particular, a promising and necessary direction is the consideration of approximations that go beyond the background one. In this work we have performed a purely background approximation and a hybrid one. However, it is necessary to investigate momentum-dependent correlation functions in unimodular quantum gravity and compare the results with our present findings. This is work in progress. Lastly, there is the discussion about the equivalence of unimodular quantum gravity and standard quantum gravity. Conceptually, from the point of view of asymptotically safe quantum gravity, this is an important puzzle to be resolved. Different symmetry groups define, in principle, different theory spaces and, hence, a different set of essential couplings that should reach a non-trivial fixed point. In the unimodular setting, there is no room for a cosmological constant as an essential coupling, while in standard gravity it is usually treated as an essential coupling and is required to reach a fixed point in the asymptotic safety program. However, it is far from clear if this necessarily leads to incompatible pictures.
In standard gravity, the cosmological constant corresponds to a relevant direction and is thus a free parameter that should be fixed by ``experiments''. In unimodular gravity, the cosmological constant appears as an integration constant, which is likewise fixed by initial conditions. In the end, it remains to be understood whether such theories share the same observables. \section*{Acknowledgments} The authors are grateful to R. Alkofer, A. Eichhorn, J. Pawlowski and R. Percacci for discussions about unimodular gravity over the past months and years. The authors also acknowledge M. Schiffer for helping with some computational issues. GPB is supported by VILLUM FONDEN under grant number 29405. GPB is also grateful to CNPq no.~142049/2016-6 for the financial support during part of this work. The work of AFV is supported by CNPq under the grant no. 140968/2020-2. ADP acknowledges CNPq under the grant PQ-2 (309781/2019-1), FAPERJ under the ``Jovem Cientista do Nosso Estado'' program (E26/202.800/2019), and NWO under the VENI Grant (VI.Veni.192.109) for financial support.
{ "timestamp": "2021-04-19T02:06:19", "yymm": "2012", "arxiv_id": "2012.08904", "language": "en", "url": "https://arxiv.org/abs/2012.08904" }
\section{Introduction} This paper provides a new framework to answer questions in quantitative topology using Chen's theory of iterated integrals. In the 1970s Kuo-Tsai Chen built a de Rham complex for the loop space $\Omega X$ of a simply connected smooth manifold. Chen showed that a certain subcomplex of this de Rham complex computes the homology of $\Omega X$, the subcomplex generated by \textit{iterated integrals} on $\Omega X$. Moreover, these iterated integrals have an explicit geometric description, built out of the (usual) de Rham complex on $X$. We put this framework to use in two independent ways. First we use it to prove the nonexistence of nontrivial cycles in $\Omega X$ that both have small volume and live in a subspace of short loops. Second, we give a new method to upper bound Gromov's distortion in rational homotopy groups of $X$. Unless otherwise stated, all homology and cohomology groups are with real coefficients. \subsection{Loop spaces in quantitative topology} The role of loop spaces in quantitative topology has already been considered by Gromov in \cite{metricstructures}. In particular, he proves \begin{theorem} \cite[Theorem~7.3]{metricstructures} Let $X$ be a compact simply connected Riemannian manifold. There exist constants $C > c > 0$ such that the following holds. For any $L > 0$, let $\Omega_{\leq L}X$ be the subspace of $\Omega X$ of loops of length at most $L$. Then the inclusion $\Omega_{\leq L} X \to \Omega X$ induces a surjection on (integer) homology up to degree $cL$. Moreover, the induced map on $H_n$ is zero for $n > CL$. \end{theorem} That is, Gromov answers the question: which homology classes in $H_n(\Omega X)$ can be represented by cycles with \textit{suplength} at most $L$, i.e., cycles contained in the subspace $\Omega_{\leq L}X \subset \Omega X$? Gromov shows that for $n > CL$ the answer is none of them, and for $n < cL$ the answer is all of them. One could refine this investigation by also keeping track of the \textit{volume} of a chain $Z$ representing a given homology class $\zeta \in H_*(\Omega X)$, for some notion of volume in $\Omega X$ inherited from the metric on $X$. \begin{example} Let $n \geq 2$ and $X=S^n$. In this case, the (integral) homology of $\Omega S^n$ is free of rank $1$ in degrees which are a nonnegative multiple of $n-1$, and vanishes otherwise. Let $k \geq 1$ and $\zeta_k \in H_{k(n-1)}(\Omega S^n; \mathbb{R})$ be the image in real homology of a generator in integral homology. For $k=1$ this class can be represented by a sweepout of $S^n$ by loops. For $k=2$ and $n$ even, $\zeta_2$ is a nonzero multiple of the Hurewicz image of the desuspension of an element of $\pi_{2n-1}(S^n)$ with nonzero Hopf invariant. For any $k$ it will be shown that there exists a constant $c = c(k)>0$ such that any $k(n-1)$-cycle $Z$ representing $\zeta_k$ satisfies $\text{Suplength}(Z)^k\text{Vol}(Z) > c$. \end{example} In this paper we give bounds of the same flavor for an arbitrary Riemannian manifold $X$ and an arbitrary nonzero $\zeta \in H_n(\Omega X)$. In the case of $X=S^n$, we show these bounds are asymptotically tight. It is not known if these bounds are tight for general $X$. \begin{thmx} \label{thma} Let $\zeta \in H_n(\Omega X)$ be a nonzero homology class.
Then there exist a constant $c>0$ and an integer $r > 0$ such that any cycle $Z$ representing $\zeta$ obeys the bound $$ \text{Suplength}(Z)^r \text{Vol}(Z) > c.$$ Moreover, $r$ can be taken to be the minimal nonnegative integer $k$ such that there exists a $\beta \in H^n(\Omega X)$ such that $\langle \beta,\; \zeta \rangle \neq 0$ and $\beta$ can be represented by a closed iterated integral with each summand built out of at most $k$ differential forms on $X$. \end{thmx} Informally, if we are looking for representatives of $\zeta$ in $\Omega_{\leq L}X$, then the volume of such representatives is bounded below. Conversely, if $Z$ represents $\zeta \in H_*(\Omega X)$ with very small volume, then $Z$ must have very large suplength. \subsection{Iterated integrals and distortion in $\pi_n(X) \otimes \mathbb{Q}$} Our second application of the framework is a new method to provide upper bounds on Gromov's distortion of homotopy groups. Following \cite{scalspaces}, define the \textit{distortion} of $\alpha \in \pi_n(X) \otimes \mathbb{Q}$ to be $$ \delta_{\alpha}(L) \coloneqq \text{sup}\{k \;|\;\text{there exists an } L \text{-Lipschitz map } f: S^n \to X \text{ with } [f] = k\alpha \}.$$ Gromov outlines, and Manin fleshes out, an algorithm using Sullivan's minimal models and obstruction theory to find upper bounds for $\delta_{\alpha}(L)$. Here we present a new method for obtaining such bounds. \begin{thmx} \label{thmb} Suppose $X$ is a simply connected Riemannian manifold and $\alpha \in \pi_n(X) \otimes \mathbb{Q}$. Denote by $\tau(\alpha)$ the image of $\alpha$ under the composite of $$\pi_n(X) \otimes \mathbb{Q} \xrightarrow{\sim} \pi_{n-1}(\Omega X) \otimes \mathbb{Q} \xrightarrow{Hurewicz} H_{n-1}(\Omega X; \mathbb{Q}).$$ Suppose $\beta$ is an iterated integral on $\Omega X$ with each summand built out of at most $r$ differential forms on $X$, and also such that $\langle \beta, \tau(\alpha) \rangle \neq 0$. Then the distortion of $\alpha$ is $O(L^{n-1+r})$. \end{thmx} \begin{example} Consider the space $X = S_a^3 \vee S_b^3 \cup_{[a,[a,b]]} D^8 \cup_{[b,[a,b]]} D^8$ considered in \cite[\S 13(d),(e)]{fht} and \cite[Section~5]{shadows}. The rational homotopy group $\pi_{10}(X) \otimes \mathbb{Q}$ has rank one; let $\tau$ be a generator. Manin follows the method of obstruction theory to give an upper bound on the distortion of $\tau$ of $O(L^{12})$, and then refines this to an upper bound of $O(L^{11})$. In Section 5, we give another argument that the distortion of $\tau$ is $O(L^{11})$, avoiding the obstruction theory. \end{example} In broad terms, the obstruction theory of \cite{shadows} is replaced with the method of \textit{weight reduction}, as explained in \cite{kozsulduality}, \cite{sinhahopfinvs}. Section 5 contains examples of such computations. Theorems \ref{thma} and \ref{thmb} both follow from the main theorem, Theorem \ref{thmc}, which is proved in Section 4. \begin{thmx}\label{thmc} Let $X$ be a simply connected compact Riemannian manifold and $\beta \in H^n(\Omega X; \mathbb{R})$. Then there exist a differential form $\omega$ on $\Omega X$ representing $\beta$, a positive integer $r$, and a constant $C > 0$ such that for all $\gamma \in \Omega X$, $$ ||\omega(\gamma)||_{\infty} \leq C\,\text{Length}(\gamma)^r. $$ \end{thmx} \subsection{Outline of the paper} The outline of the paper is as follows. In Section 2 we give an expository introduction to iterated integrals in the setting of the loop space of $S^n$.
In Section 3 we present the necessary background: the precise definitions of iterated integrals on loop spaces and the metric data that will be kept track of. Section 4 contains the proof of Theorem \ref{thmc} and from this deduces Theorems \ref{thmb} and \ref{thma}. In Section 5 we present examples and computations using the bar complex of a minimal model. In Section 6 the sharpness of the bounds given by Theorem \ref{thmc} is discussed. \subsection{Acknowledgements} I would like to thank my advisor, Larry Guth, for tirelessly helping me. I would also like to thank Fedya Manin for support, conversations and guidance, as well as giving me opportunities to talk about this work. I would like to thank Dev Sinha for expositing similar ideas in the Poincar\'e dual setting of submanifolds instead of forms. Finally, I would like to thank Luis Kumandari and Sasha Berdnikov for helpful conversations. \section{Warmup: homotopy functionals on $S^n$} In this section we give an expository introduction to differential forms on the loop space, saving the groundwork of setting up the machinery for the next section. \subsection{The degree of maps $S^n \to S^n$} The simplest example of a homotopy functional is the degree of a map $S^n \to S^n$. This is detected by a volume form $\omega$ on $S^n$, normalized so that $\int_{S^n}\omega = 1$ (we keep this normalization throughout this section): given $f:S^n \to S^n$, $\text{deg}(f) = \int_{S^n}f^*\omega$. In fact pulling back the volume form detects volume \textit{locally}: for any $f: P \to S^n$ with $P$ an $n$-dimensional manifold, $\text{Vol}(f) = \int_P f^*\omega$, where $\text{Vol}(f)$ denotes the signed volume of $f$ measured in units of $\text{Vol}(S^n)$. In particular, if $P = [0,1] \times Q$, with $f: [0,1] \times Q \to S^n$ sending $\{0,1\} \times Q$ to the basepoint $x_0 \in S^n$, then $f$ desuspends (under the suspension-loop adjunction) to a map $\hat{f}: Q \to \Omega S^n$, given by $\hat{f}(q)(t) = f(t,q)$. Fix once and for all smooth sweepouts of spheres $S^n$ by loops, i.e. unit maps $\eta: S^{n-1} \to \Omega S^n$ of the suspension-loop adjunction; when $Q = S^{n-1}$ and $f$ is (the collapse of) a map $S^n \to S^n$, we can take $\hat{f} \coloneqq (\Omega f) \circ \eta$. We will define an $(n-1)$-form $\smallint \omega$ on $\Omega S^n$ such that pulling back $\smallint \omega$ by $\hat{f}$ detects the volume of $f$. Let $\gamma \in \Omega S^n$ and $V_1, \dots, V_{n-1}$ be vector fields along $\gamma$ (which should be thought of as tangent vectors in $T_{\gamma}\Omega S^n$). Then define the $(n-1)$-form $\smallint \omega$ by $$(\smallint \omega)(\gamma)(V_1, \dots, V_{n-1}) \coloneqq \int_{t=0}^{t=1} \omega(\gamma(t))(\gamma'(t), V_1(t), \dots, V_{n-1}(t))$$ Then $\int_{Q} \hat{f}^*(\smallint \omega) = \int_{Q\times I} f^* \omega = \int_P f^*\omega = \text{Vol}(f)$. This proves the following proposition. \begin{proposition} Let $\omega$ be a volume form on $S^n$ as above. Then the $(n-1)$-form $\smallint \omega$ on $\Omega S^n$ detects the degree of a smooth map $f: S^n \rightarrow S^n$. That is, if $\hat{f}: S^{n-1} \rightarrow \Omega S^n$ is the desuspension of $f$ under the suspension-loop adjunction then \begin{align*} \text{deg}(f) = \langle \hat{f}^*(\smallint \omega), \; [S^{n-1}] \rangle. \end{align*} \end{proposition} Before proceeding, we give an equivalent definition of $\smallint \omega$. This definition will be better suited to generalise to other more complicated homotopy functionals. Let $I \coloneqq [0,1]$ and $\text{ev}: \Omega S^n \times I \to S^n$ be the map $(\gamma, t) \mapsto \gamma(t)$. The volume form $\omega$ on $S^n$ pulls back to an $n$-form $\text{ev}^*\omega$ on $\Omega S^n \times I$.
Integrating this over the fiber of the projection $\Omega S^n \times I \to \Omega S^n$ gives an $(n-1)$-form which is equal to $\smallint \omega$. \subsection{The Hopf invariant of maps $S^{2n-1} \to S^n$} The next simplest example of a homotopy functional is the Hopf invariant $\pi_{2n-1}(S^n) \to \mathbb{Z}$ for $n$ even. We will define a differential form of degree $(2n-2)$ on $\Omega S^n$ detecting this. Let $\Delta^2 \subset I \times I$ be the set of $(t_1, t_2)$ such that $t_1 \leq t_2$. There is a map $\text{ev}_2: \Omega S^n \times \Delta^2 \to S^n \times S^n$ sending $(\gamma, (t_1, t_2))$ to $(\gamma(t_1), \gamma(t_2))$. The volume form $\omega \times \omega$ on $S^n \times S^n$ pulls back to a $(2n)$-form $\text{ev}_2^*(\omega \times \omega)$ on $\Omega S^n \times \Delta^2$. Integrating this over the fiber of the projection $\Omega S^n \times \Delta^2 \to \Omega S^n$ gives a $(2n-2)$-form on $\Omega S^n$ which we will denote $\smallint \omega \omega$. \begin{proposition} Let $\omega$ be a volume form on $S^n$ with $n$ even. Then the $(2n-2)$-form $\smallint \omega \omega$ on $\Omega S^n$ detects the Hopf invariant of a smooth map $f: S^{2n-1} \rightarrow S^n$. That is, if $\hat{f}: S^{2n-2} \rightarrow \Omega S^n$ is the adjoint of $f$ under the suspension-loop adjunction then \[ \text{Hopf}(f) = \langle \; \hat{f}^*(\smallint \omega \omega), \; [S^{2n-2}] \; \rangle \] \end{proposition} \begin{proof} Consider the commutative diagram \[\begin{tikzcd} S^{2n-2} \times \Delta^2 \arrow[r, "\hat{f}\times \text{id}"] \arrow[d, "\pi"] & \Omega S^n \times \Delta^2 \arrow[d, "\pi"] \arrow[r, "\text{ev}_2"] & S^n \times S^n \\ S^{2n-2} \arrow[r, "\hat{f}"] & \Omega S^n \end{tikzcd}\] Here $\pi$ is the projection onto the first factor inducing a covariant integration over the fiber map $\pi_*$ sending $k$-forms to $(k-2)$-forms. By definition the iterated integral $\smallint \omega \omega = \pi_* \text{ev}_2^* (\omega \times \omega)$ and by commutativity of the diagram we have \[ \hat{f}^*(\smallint \omega \omega) = \pi_* (\hat{f} \times \text{id})^* \text{ev}_2^* (\omega \times \omega). \] For ease of notation, denote $\text{ev}_2 \circ (\hat{f} \times \text{id}): S^{2n-2} \times \Delta^2 \to S^n \times S^n$ by $g$. This is the map $(x, s, t) \mapsto (f'(x, s), f'(x, t))$ where $f': S^{2n-2} \times I \to S^{n}$ is $f$ composed with the quotient map $S^{2n-2} \times I \to S^{2n-1}$. For any $2n$-form $\alpha$ on $S^{2n-2}\times \Delta^2$, \[ \langle [S^{2n-2}], \pi_*\alpha \rangle = \int_{S^{2n-2}} \int_{\Delta^2} \alpha = \int_{S^{2n-2} \times \Delta^2} \alpha. \] So it remains to show that $\text{Hopf}(f) = \int_{S^{2n-2}\times\Delta^2} g^*(\omega \times \omega)$. We will argue this in the Poincar\'e dual setting, i.e. consider the preimage under $g$ of a regular point $(p, q) \in S^n \times S^n$ and show that its signed count is equal to the linking number definition of the Hopf invariant of $f$. Consider a chain $Z$ representing the fundamental class of $S^{2n-2} \times {\Delta^2}$, with $\partial Z = S^{2n-2} \times \partial \Delta^2$. Then since $\Delta^2 = \{0 \leq s \leq t \leq 1\}$, the boundary $\partial \Delta^2$ splits into the sum $B + D$, where $B = \{s = 0\} \cup \{t = 1\}$ and $D = \{s = t\}$. Both $g_*(S^{2n-2} \times B)$ and $g_*(S^{2n-2} \times D)$ are $(2n-1)$-cycles lying in $n$-dimensional subspaces of $S^n \times S^n$; they lie in $(S^n, *) \cup (*, S^n)$ and $\text{Diag}(S^n)$ respectively.
So they can both be bounded by $2n$-chains of zero volume, call them $Z_B$ and $Z_D$ respectively. Then $Z + Z_B + Z_D$ is a $2n$-cycle in $S^n \times S^n$. We will now show that it is homologous to $\text{Hopf}(f)$ times the fundamental class of $S^n \times S^n$. Pick a regular point $(p, q) \in S^n \times S^n$ with neither $p$ nor $q$ the basepoint. Then $Q \coloneqq (g|_{S^{2n-2}\times D})^{-1}(S^n \times \{q\})$ and $P \coloneqq (g|_{S^{2n-2}\times D})^{-1}(\{p\} \times S^n)$ are two $(n-1)$-manifolds in $S^{2n-2} \times D \cong S^{2n-2} \times (0, 1)$. We will show that the linking number of $P$ and $Q$ is equal to the signed count of $g^{-1}(p, q)$, which in turn is equal to the degree of our $(2n)$-cycle $Z + Z_B + Z_D$. Consider $\tilde{P} \coloneqq g^{-1}(\{p\} \times S^n)$ and $\tilde{Q} \coloneqq g^{-1}(S^n \times \{q\})$, submanifolds of $S^{2n-2} \times \Delta^2$. Note that $\tilde{P} \cap \tilde{Q} = g^{-1}(p,q)$. Under the projection $\pi: \Delta^2 \to D$ given by $(s, t) \mapsto (t, t)$, $\tilde{P}$ maps to a submanifold $\partial^{-1}P$ of $S^{2n-2}\times D$ bounding $P$, and $\tilde{Q}$ maps to $Q$. So the linking number of $P$ and $Q$ (the intersection number of $\partial^{-1}P$ and $Q$) is equal to the intersection number of $\tilde{P}$ and $\tilde{Q}$, which is equal to the signed count of $g^{-1}(p,q)$, as required. \end{proof} \section{Setup} \subsection{Iterated integrals} In this section we will give the necessary background for and definition of iterated integrals, following Hain's thesis \cite{hainthesis} and \cite{itintreview}. \begin{definition} A \textit{differential space} is a Hausdorff space $X$ together with a family $\{\phi_i: U_i \rightarrow X\}_{i \in I}$ of continuous maps, called \textit{plots}, subject to the following conditions. First, the $U_i$ are convex subsets of Euclidean space (of any dimension $n \geq 0$) with nonempty interior. Secondly, every map $\{*\} \rightarrow X$ is a plot. Thirdly, if $f: U \rightarrow U'$ is smooth in the classical sense and $\phi: U' \rightarrow X$ is a plot, then $\phi f: U \rightarrow X$ is also a plot. \end{definition} Hence the plots should be thought of as determining which functions into $X$ are smooth. More precisely, a function $f: X \rightarrow Y$ is said to be \textit{smooth} if and only if $f$ pushes forward all plots on $X$ to plots on $Y$. Note that the notion of plots is more general than the notion of charts on (finite dimensional) manifolds, since they are not required to be local homeomorphisms. A differential form on a differential space $X$ is then specified by its pullbacks on all plots. Formally: \begin{definition}A \textit{differential $k$-form} $\omega$ on a differential space $X$ is an assignment of a $k$-form $\omega_{\alpha}$ on $U$ to each plot $\alpha: U \rightarrow X$, compatible in the following sense. If $\phi: U \rightarrow U'$ is smooth and $\alpha: U' \rightarrow X$ is a plot, then $\phi^*\omega_{\alpha} = \omega_{\alpha\phi}$. One can think of the $\omega_{\alpha}$ as the pullbacks of the form $\omega$ along the plots $\alpha$. The usual definitions of addition, wedge product and exterior derivative on forms allow us to define the \textit{de Rham complex} of a differential space $X$; it is a graded differential algebra denoted $\Lambda^*X$. \end{definition} Note that a smooth manifold $X$ is a differential space, where a map $\alpha: U \rightarrow X$ is a plot if and only if it is smooth.
Then the de Rham complex of $X$ in the sense of Chen is isomorphic to the classical de Rham complex. If $X$ is a differential space with basepoint $x_0$, then let $\Omega X$ denote the set of functions $\gamma: [0,1] \rightarrow X$ which are piecewise plots into $X$ and such that $\gamma(0) = \gamma(1) = x_0$. Then $\Omega X$ has the structure of a differential space generated by the following maps: $\phi: U \to \Omega X$ which admit a partition $0 = t_0 < t_1 < \dots < t_m = 1$ of $[0,1]$ such that $\hat{\phi}|_{U \times [t_{j-1}, t_j]} :U \times [t_{j-1}, t_j] \to X$ is a plot on $X$ for each $j$, where $\hat{\phi}(u,t) \coloneqq \phi(u)(t)$. Furthermore, subspaces of differential spaces are again differential spaces, as are products of two differential spaces \cite[Chapter~4.4~(c),(d)]{hainthesis}. \begin{definition} Let $\Delta^n$ denote the $n$-simplex $$ \Delta^n \coloneqq \{(t_1, t_2, \dots, t_n) \in \mathbb{R}^n \; | \; 0 \leq t_1 \leq \dots \leq t_n \leq 1 \}.$$ Then the \textit{evaluation map} is a smooth map between differential spaces: \begin{align*} \text{ev}_n: \Omega X \times \Delta^n & \to X^{\times n} \\ (\gamma, (t_1, \dots, t_n)) & \mapsto (\gamma(t_1), \dots, \gamma(t_n)) \end{align*} \end{definition} \begin{definition} Let $X$ be a differential space. Just as for forms on finite dimensional manifolds, we can define an \textit{integration over the fibers} map for differential spaces: $$\int_{\Delta^n}: \Lambda^k(X \times \Delta^n) \to \Lambda^{k-n}(X)$$ The definition is plotwise. Let $\omega$ be a differential form on $X \times \Delta^n$. Given a plot $\alpha: U \to X$ on $X$, define $(\int_{\Delta^n} \omega)_{\alpha}$ to be the form on $U$ $$ \int_{\Delta^n} (\omega_{\alpha \times \text{id}_{\Delta^n}}) $$ Here it is used that $\alpha \times \text{id}_{\Delta^n}$ is a plot on $X \times \Delta^n$. The integration is over the volume element $dt_1 \wedge \dots \wedge dt_n$ on $\Delta^n$. \end{definition} The preceding two definitions allow us to now give the definition of iterated integrals. \begin{definition} (Iterated integrals) Let $r$ be a nonnegative integer. Let $\omega_1, \dots, \omega_r$ be forms of degrees $n_1, \dots, n_r$ on the manifold $M$. Then the \textit{iterated integral} $\smallint \omega_1 \dots \omega_r$ is a $(\sum_i n_i - r)$-form on $\Omega M$ given by $$\smallint \omega_1 \dots \omega_r \coloneqq \int_{\Delta^r} \text{ev}_r^*(\omega_1 \times \dots \times \omega_r).$$ Here $\omega_1 \times \dots \times \omega_r$ is the product form $\pi_1^*\omega_1 \wedge \dots \wedge \pi_r^*\omega_r$ on $M^{\times r}$. The iterated integral is said to be of \textit{length} $r$. \end{definition} \subsection{Properties of iterated integrals} Let $X$ be a simply connected compact manifold. Let $\smallint\Lambda^*(X)$ denote the differential graded subalgebra of $\Lambda^*(\Omega X)$ generated by iterated integrals. Let $C_*(\Omega X)$ denote the chain complex of smooth chains on $\Omega X$. There is an integration map\footnote{ also referred to as the \textit{Stokes map} in \cite{itintreview}} \begin{align*} \rho: \Lambda^n(\Omega X) \to \;& \text{Hom}(C_n(\Omega X), \mathbb{R}) \\ \omega \mapsto \;& \{(\sigma: \Delta^n \to \Omega X) \mapsto \int_{\Delta^n}\sigma^*\omega\} \end{align*} \begin{theorem}(Chen's loop space de Rham theorem \cite{itpathints}) Let $X$ be as above.
The integration map restricted to $\smallint\Lambda^*(X)$ induces an isomorphism $$\rho_*: H^*(\smallint\Lambda^*(X)) \xrightarrow{\sim} H^*(\Omega X; \mathbb{R}).$$ \end{theorem} So any cohomology class of $\Omega X$ (and similarly any homotopy functional on $X$) can be represented by a sum of iterated integrals. In fact, Chen gives a stronger version of this: \begin{theorem} \cite[Thm~2.3.1]{itpathints} Let $X$ be as above. Let $A$ be a subalgebra of $\Lambda^*(X)$ such that the integration map $A \to C^*(X; \mathbb{R})$ induces an isomorphism on cohomology. Let $\smallint A$ denote the differential graded subalgebra of $\smallint \Lambda^*(X)$ generated by iterated integrals of forms in $A$. Then the integration map restricted to $\smallint A$ gives an isomorphism $$\rho_*: H^*(\smallint A) \xrightarrow{\sim} H^*(\Omega X; \mathbb{R}).$$ \end{theorem} An example of such an $A$ that will be used in Section 5 is the image of a minimal model $m_X: \mathcal{M}_X \to \Lambda^*(X)$. The final property of iterated integrals that we will use is their relation to the bar construction. Recall the definition of the bar construction of a differential graded algebra $A$ as follows. It is a differential graded algebra $B(A)$ that, as a graded vector space, is given by $$ B(A) \coloneqq \bigoplus_{r\geq 0} \bigl(A^{>0}\bigr)^{\otimes r}$$ with an element $a_1 \otimes \dots \otimes a_r$ denoted by $a_1 | \dots | a_r$, of degree $\text{deg}(a_1) + \dots + \text{deg}(a_r) - r$. The differential on $B(A)$ is $$ d(|a_1| \dots | a_r|) = - \sum_{i=1}^{r} (-1)^{n_i}|a_1| \dots | da_i | \dots | a_r| + \sum_{i=2}^{r} (-1)^{n_i} |a_1| \dots |a_{i-1} a_i | \dots | a_r | $$ where $n_i = \sum_{j<i}(\text{deg}(a_j)-1)$. \begin{theorem} \cite[Thm~4.1.1]{itpathints} Let $X$ be a connected differential space with $H_0(X) = \mathbb{Z}$. Let $A$ be a differential graded subalgebra of $\Lambda^*(X)$ with $A^0 = \mathbb{R}$ and $A^1 \cap d\Lambda^0(X) = 0$. Then the map $B(A) \to \smallint A \subset \Lambda^*(\Omega X)$ given by $$ | \omega_1 | \dots | \omega_r | \mapsto \smallint \omega_1 \dots \omega_r$$ is an isomorphism of differential graded algebras. \end{theorem} \subsection{Metric setup on $\Omega X$} Let $(X,g)$ be a Riemannian manifold. The Riemannian distance function $d_X$ on $X$ allows us to define a distance function on $\Omega X$ as follows. Given loops $\gamma_1, \gamma_2$, define $$d(\gamma_1, \gamma_2) = \sup_{t\in[0,1]} d_X(\gamma_1(t), \gamma_2(t)).$$ This gives a metric on $\Omega X$, which allows us to define the notion of volume for chains on $\Omega X$. By the \textit{volume} (or \textit{$n$-volume}) of a simplex $\sigma: \Delta^n \to \Omega X$, denoted $\text{Vol}(\sigma)$, we will mean the $n$-dimensional Hausdorff measure of the image of $\sigma$ with respect to the metric on $\Omega X$. Analogously, an arbitrary chain $\sum q_i \sigma_i$ (with $q_i \in \mathbb{R}$) has volume $\sum |q_i| \text{Vol}(\sigma_i)$. There is also a function $\text{Length}: \Omega X \to \mathbb{R}$ which associates to a loop $\gamma$ its length when viewed as a path in $X$. Then the \textit{suplength} of a simplex $\sigma$ is the supremum of the lengths of all of the loops in the image of $\sigma$. An arbitrary chain $\sum q_i \sigma_i$ has suplength equal to the maximum suplength of the $\sigma_i$ that have nonzero coefficient $q_i$.
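As a quick illustration of how these quantities behave (a sketch of estimates that reappear in the proofs of Section 4), suppose $f: X \to Y$ is a basepointed $L$-Lipschitz map between Riemannian manifolds. Then $\Omega f: \Omega X \to \Omega Y$ is $L$-Lipschitz for the sup metrics defined above, so for any simplex $\sigma: \Delta^k \to \Omega X$ we have $$\text{Vol}((\Omega f) \circ \sigma) \leq L^k \, \text{Vol}(\sigma), \qquad \text{Suplength}((\Omega f) \circ \sigma) \leq L \cdot \text{Suplength}(\sigma),$$ since an $L$-Lipschitz map increases $k$-dimensional Hausdorff measure by a factor of at most $L^k$ and the length of each loop by a factor of at most $L$.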
Given a differential space $X$ with metric, such as a Riemannian manifold, or the loop space of such a manifold with the induced metric, we will define a norm on forms analogous to the norm on forms on a Riemannian manifold. Recall that the data of a $k$-form $\omega$ on $X$ is a compatible collection of $k$-forms $\omega_{\phi_i}$ on $U_i$, one for each plot $\phi_i: U_i \to X$. Endow each $U_i$, a convex subset of Euclidean space, with the Euclidean metric. At a given point $x \in X$, define $||\omega(x)||_{\infty}$ to be the quantity $$ ||\omega(x)||_{\infty} \coloneqq \sup \{ \dfrac{||\omega_{\phi}(y)||_{\infty}}{\text{Dil}_k(\phi)} \;|\; \phi: U \to X \text{ a plot with Dil$_k(\phi) \neq 0$, and $\phi(y) = x$}\} $$ Here $\text{Dil}_k(\phi)$ is the \textit{$k$-dilation} of $\phi$: say $\phi$ has $k$-dilation $\leq \lambda$ if each $k$-dimensional surface $\Sigma$ in $U$ has image in $X$ with Hausdorff $k$-measure at most $\lambda \text{Vol}_k(\Sigma)$. Note this is consistent with $||\cdot||_{\infty}$ defined on Riemannian manifolds, since in this setting $\omega_{\phi} = \phi^*(\omega)$ and for $k$-forms $\omega$, $||\phi^*(\omega)||_{\infty} \leq \text{Dil}_k(\phi)||\omega||_{\infty}$. There are two key properties of this norm that we will use, both generalizations of properties of the norm on forms on Riemannian manifolds. \begin{lemma} If $f: X \to Y$ is a smooth $L$-Lipschitz map between metric differential spaces, then for any $n$-form $\alpha$ on $Y$, $||f^*\alpha(x)||_{\infty} \leq L^n||\alpha(f(x))||_{\infty}$. \end{lemma} \begin{proof} The ingredients here are that if $\phi: U \to X$ is a plot then so is $f \circ \phi$, and also $\text{Dil}_n(f \circ \phi) \leq L^n \text{Dil}_n(\phi)$. Then \begin{align*} ||f^*\alpha(x)||_{\infty} &= \sup \{ \frac{||(f^*\alpha)_{\phi}(y)||_{\infty}}{\text{Dil}_n(\phi)} \;\;|\;\; \phi: U \to X,\; \phi(y) = x,\;\; \text{Dil}_n(\phi) \neq 0 \} \\ &= \sup \{ \frac{||\alpha_{f \circ \phi}(y)||_{\infty}}{\text{Dil}_n(\phi)} \;\;|\;\; \phi: U \to X,\; \phi(y) = x,\;\; \text{Dil}_n(\phi) \neq 0 \} \\ &\leq L^n \sup \{ \frac{||\alpha_{f \circ \phi}(y)||_{\infty}}{\text{Dil}_n(f \circ \phi)} \;|\; f \circ \phi: U \to Y,\; f\circ \phi(y) = f(x),\; \text{Dil}_n(f \circ \phi) \neq 0 \} \\ & \leq L^n ||\alpha(f(x))||_{\infty}. \end{align*} \end{proof} \begin{lemma} Let $X$ be a metric differential space. Let $Z$ be a $k$-chain in $X$ and $\omega$ a $k$-form on $X$. Then the integration pairing satisfies $$| \langle \omega, Z \rangle| \leq \text{Vol}(Z) \cdot \sup_{x \in \text{supp}(Z)}||\omega(x)||_{\infty}.$$ \end{lemma} \begin{proof} Since $Z$ is finite dimensional, upon pulling back to $Z$ this immediately reduces to the finite dimensional case. \end{proof} \section{Proof of Theorems \ref{thmc}, \ref{thmb}, and \ref{thma}} In this section we prove Theorem \ref{thmc} and use this to deduce Theorems \ref{thmb} and \ref{thma}. \begin{theorem} (Theorem \ref{thmc}) Let $(X,g)$ be a compact simply connected Riemannian manifold, and $\omega_1, \dots, \omega_r$ forms on $X$. Then for any $\gamma \in \Omega X$ the form $\smallint \omega_1 \dots \omega_r$ on $\Omega X$ satisfies $$ ||(\smallint \omega_1 \dots \omega_r)(\gamma)||_{\infty} \leq \frac{1}{r!} \text{Length}(\gamma)^r ||\omega_1||_{\infty} \dots ||\omega_r||_{\infty} $$ \end{theorem} \begin{proof} Let $k$ be the degree of the form $\smallint \omega_1 \dots \omega_r$. Let $\phi: U \to \Omega X$ be a plot with nonzero $k$-dilation and image containing $\gamma$.
Let $g$ denote the composite $$ U \times \Delta^r \xrightarrow{\phi \times \text{id}_{\Delta^r}} \Omega X \times \Delta^r \xrightarrow{\text{ev}_r} X^{\times r}$$ It suffices to show that at a point $x \in \phi^{-1}(\gamma) \subset U$, $$ || \int_{\Delta^r} g^*(\omega_1 \times \dots \times \omega_r)(x) ||_{\infty} \leq \text{Dil}_k(\phi) \cdot \frac{1}{r!} \text{Length}(\gamma)^r \cdot ||\omega_1 \times \dots \times \omega_r ||_{\infty}$$ Let $v_1, \dots, v_k$ be vectors in $T_x U$ with unit norm (under the Euclidean metric on $U$). Then \begin{align*} & |\int_{\Delta^r} g^*(\omega_1 \times \dots \times \omega_r)(x)[v_1, \dots, v_k]| \\ &= |\int_{(t_i) \in \Delta^r} g^*(\omega_1 \times \dots \times \omega_r)(x, (t_i))[\tilde{v}_1, \dots, \tilde{v}_k, \partial \tilde{t}_1, \dots, \partial \tilde{t}_r]| \\ \intertext{where $\partial t_i$ are the standard basis vectors of $T_{(t_i)}\Delta^r$, $\tilde{v}_i$ is a lift of $v_i$ to $T_{(x,t)}(U \times \Delta^r)$ and similarly for $\partial \tilde{t}_i$.} &\leq \int_{(t_i) \in \Delta^r}||\omega_1 \times \dots \times \omega_r||_{\infty} \cdot \text{Dil}_k(\phi) \cdot |\dot{\gamma}(t_1)| \cdot \ldots \cdot |\dot{\gamma}(t_r)| \\ \intertext{since $g: U \times \Delta^r \to X^{\times r}$ has $k$-dilation at most $\text{Dil}_k(\phi)$ in the first factor, and $g_*(\partial \tilde{t}_j)$ has norm $|\dot{\gamma}(t_j)|$,} &= ||\omega_1 \times \dots \times \omega_r||_{\infty} \cdot \text{Dil}_k(\phi) \cdot \int_{(t_i) \in \Delta^r} |\dot{\gamma}(t_1)| \cdot \ldots \cdot |\dot{\gamma}(t_r)| \\ &= ||\omega_1 \times \dots \times \omega_r||_{\infty} \cdot \text{Dil}_k(\phi) \cdot \frac{1}{r!}\text{Length}(\gamma)^r \end{align*} as required. \end{proof} Theorems \ref{thmb} and \ref{thma} follow directly from this and Theorem 3.1. \begin{corollary} (Theorem \ref{thmb}) Given $\alpha \in \pi_n(X) \otimes \mathbb{Q}$, the distortion of $\alpha$ is $O(L^{n-1+r})$ where $r$ is the smallest positive integer such that $\alpha$ is detected by a cohomology class in $H^{n-1}(\Omega X; \mathbb{Q})$ which can be represented by a sum of iterated integrals all of whose summands have length at most $r$. \end{corollary} \begin{proof} \textit{(of Corollary)} Let $f: S^n \to X$ be an $L$-Lipschitz map such that $[f] = k\alpha \in \pi_n(X) \otimes \mathbb{Q}$. Let $\tau$ be the composite $\pi_n(X) \otimes \mathbb{Q} \xrightarrow{\sim} \pi_{n-1}(\Omega X) \otimes \mathbb{Q} \to H_{n-1}(\Omega X; \mathbb{Q})$. This map $\tau$ can also be refined to a map $\tau: \text{Map}(S^n, X) \to C_{n-1}(\Omega X)$, defined as follows. Fix a cycle $\zeta_n$ generating $H_{n-1}(\Omega S^n)$. Then define $\tau(f) \coloneqq (\Omega f)_*\zeta_n$. Let $\omega$ be a differential $(n-1)$-form on $\Omega X$ such that $\langle \omega, \tau(\alpha) \rangle = 1$. By Theorem 3.1 we can assume $\omega$ is a sum of iterated integrals. Then by the hypothesis of the corollary we can further assume that there is a constant $C > 0$ such that for all $\gamma \in \Omega X$, $$||\omega(\gamma)||_{\infty} \leq C \text{Length}(\gamma)^r.$$ Now, suppose $\zeta_n$ has suplength $L_0$ and volume $V_0$. Then $(\Omega f)_*(\zeta_n)$ is an $(n-1)$-cycle in $\Omega X$ of volume at most $L^{n-1}V_0$ and suplength $LL_0$. Then \begin{align*} k = \langle \omega, (\Omega f)_*(\zeta_n) \rangle &\leq \text{Vol}((\Omega f)_*(\zeta_n)) \cdot \sup_{\gamma \in \text{supp}(\tau(f))} ||\omega(\gamma)||_{\infty} \\ & \leq L^{n-1}V_0 \cdot C(LL_0)^r \\ &= O(L^{n-1+r}) \end{align*} as required.
\end{proof} \begin{corollary} (Theorem \ref{thma}) Let $\zeta \in H_n(\Omega X)$ be a nonzero homology class. Then there exist a constant $c>0$ and an integer $r > 0$ such that any cycle $Z$ representing $\zeta$ obeys the bound $$ \text{Suplength}(Z)^r \text{Vol}(Z) > c.$$ Moreover, $r$ can be taken to be the minimal nonnegative integer $k$ such that there exists a $\beta \in H^n(\Omega X)$ with $\langle \beta,\; \zeta \rangle \neq 0$ such that $\beta$ can be represented by a closed iterated integral with each summand built out of at most $k$ differential forms on $X$. \end{corollary} \begin{proof} Let $\beta$ be some class in $H^n(\Omega X)$ such that $\langle \beta, \zeta \rangle = 1$. This $\beta$ can be represented by a differential form $\omega$ which is a sum of iterated integrals, so by Theorem \ref{thmc} there exists a constant $C > 0$ such that for any $\gamma \in \Omega X$, $$||\omega(\gamma)||_{\infty} \leq C \text{Length}(\gamma)^r$$ where $r$ is the length of the longest iterated integral summand in $\omega$. Then $$1 = \langle \omega, Z \rangle \leq \text{Vol}(Z) \sup_{\gamma \in \text{supp}(Z)} ||\omega(\gamma)||_{\infty} \leq \text{Vol}(Z) \cdot C \; \text{Suplength}(Z)^r$$ so $\text{Suplength}(Z)^r \text{Vol}(Z) \geq 1/C$ as required. \end{proof} \section{Applications to Gromov's distortion of homotopy groups} In this section we compute some examples of iterated integrals on simply connected Riemannian manifolds $K$. \begin{example} Let $K = S^n$ be a sphere and $\omega$ a volume form on $S^n$. In Section 2, we demonstrated that the iterated integral $\smallint \omega$ detects the degree functional $\pi_n(S^n) \otimes \mathbb{Q} \to \mathbb{Q}$ and, for even $n$, the iterated integral $\smallint \omega \omega$ detects the Hopf invariant $\pi_{2n-1}(S^{n}) \otimes\mathbb{Q} \to \mathbb{Q}$. By Theorem \ref{thmc}, these differential forms on $\Omega S^n$ satisfy, for $\gamma \in \Omega S^n$: \begin{align*} ||\smallint \omega(\gamma)||_{\infty} \; & \lesssim \text{Length}(\gamma) \\ ||\smallint \omega \omega(\gamma)||_{\infty} \; & \lesssim \text{Length}(\gamma)^2 \end{align*} Hence by the corollary we recover Gromov's original bounds for these homotopy functionals: if $f: S^n \to S^n$ is $L$-Lipschitz, then $|\text{deg}(f)| = O(L^n)$. If $n$ is even and $f: S^{2n-1} \to S^n$ is $L$-Lipschitz, then $|\text{Hopf}(f)| = O(L^{2n})$. \end{example} \begin{example} Let $K = \mathbb{CP}^n$. Let $\omega$ be a $2$-form generating $H^2(K)$. The iterated integral $\smallint \omega \omega^n$ of length $2$ is a closed $(2n)$-form generating $H^{2n}(\Omega \mathbb{CP}^n)$ \cite{hainthesis} and hence detects the generator of $\pi_{2n+1}(\mathbb{CP}^n)$. Thus the distortion of this generator is $O(L^{2n+2})$. \end{example} Next we give the example of the $8$-dimensional nonformal cell complex $X$ considered in \cite[\S5]{shadows}. An alternative to the obstruction-theoretic approach is given to show that the generator of $\pi_{10}(X) \otimes \mathbb{Q}$ has distortion $O(L^{11})$. \begin{example} Let $K$ be a Riemannian manifold\footnote{Created, for example, by thickening the cell complex} homotopy equivalent to the following $8$-dimensional cell complex $$ X = (S^3_a \vee S^3_b) \cup_{[a,[a,b]]} D^8 \cup_{[b,[a,b]]} D^8.$$ The group $\pi_{10}(K) \otimes \mathbb{Q}$ has rank 1 (see \cite[\S 13(d),(e)]{fht} for details); let $\tau$ be a generator. The claim is that $\tau$ can be detected by an iterated integral of length two, implying that $\tau$ has distortion $O(L^{11})$.
Label the two $3$-cells of $X$ by $a$ and $b$, and the two $8$-cells by $w$ and $z$, respectively. The cells give a basis of homology, and so, dually, we can find (for each cell $e \in \{a,b,w,z\}$) $\text{dim}(e)$-dimensional forms $\omega_e$ on $K$ giving a basis of $H^*(K; \mathbb{Q})$. We will show that $\smallint \omega_a \omega_z$ detects $\tau$, by considering the minimal model of $K$: $$\mathcal{M}_K = \langle x_1^{(3)},\;x_2^{(3)},\;y^{(5)},\; T^{(10)},\;\dots \;|\; dx_i = 0,\;dy = x_1x_2,\;dT = x_1x_2y,\; \dots \rangle,$$ and the quasi-isomorphism $m_K: \mathcal{M}_K \to \Lambda^*(K)$ can be taken such that $x_1 \mapsto \omega_a$, $x_2 \mapsto \omega_b$, $y$ is sent to a primitive $\omega_y$ of $\omega_a \wedge \omega_b$ and all other generators are sent to zero. Note also that $x_1 y \mapsto \omega_w$ and $x_2 y \mapsto \omega_z$. The minimal model for $\Omega K$ can also be deduced from the minimal model for $K$: $$\mathcal{M}_{\Omega K} = \langle \tilde{x}_1^{(2)},\;\tilde{x}_2^{(2)},\;\tilde{y}^{(4)},\; \tilde{T}^{(9)},\;\dots \;|\; d\tilde{x}_i = 0,\;d\tilde{y} = 0,\;d\tilde{T} = 0,\; \dots \rangle. $$ Note that $H^9(\Omega K; \mathbb{Q})$ also has rank 1, so any nontrivial class here will detect $\tau$. Now, let $A$ be the image of the minimal model $\mathcal{M}_K$ in the de Rham complex $\Lambda^*(K)$ of $K$. In degree $3$ it is generated by $\omega_a$ and $\omega_b$. In degree $5$ it is generated by $\omega_y$. In degree $6$ it is generated by $\omega_a \wedge \omega_b$. In degree $8$ it is generated by $\omega_w = \omega_a \wedge \omega_y$ and $\omega_z = \omega_b \wedge \omega_y$. In other positive degrees $A$ vanishes. By Theorem 3.2, the subalgebra $\smallint A$ of $\Lambda^*(\Omega K)$ has cohomology isomorphic to $H^*(\Omega K)$. So we are looking for cocycles in $\smallint A$ of degree $9$. By inspection the degree $9$ part of $\smallint A$ has dimension $8$, with basis \begin{align*} \smallint \omega_a \omega_w, \hspace{1mm} \smallint \omega_a \omega_z, \hspace{1mm} \smallint \omega_b \omega_w, \hspace{1mm} \smallint \omega_b \omega_z, \hspace{1mm} \smallint \omega_y (\omega_a \wedge \omega_b), \\ \smallint \omega_a (\omega_a \wedge \omega_b) \omega_b, \hspace{1mm} \smallint \omega_a \omega_a (\omega_a \wedge \omega_b), \hspace{1mm} \smallint (\omega_a \wedge \omega_b) \omega_b \omega_b. \end{align*} Using the fact that $\omega_a^2 = 0 = \omega_b^2$ and also that $\omega_a \wedge \omega_b \wedge \omega_y = 0$ for degree reasons, we can take the exterior derivatives of these in $\smallint A$ and see that the space of $9$-cocycles in $\smallint A$ has dimension $7$. The space of $9$-coboundaries has dimension $6$, with a basis given by the exterior derivatives of: \begin{align*} \smallint \omega_a \omega_y \omega_a - \smallint \omega_a \omega_a \omega_a \omega_b, \\ \smallint \omega_b \omega_y \omega_b - \smallint \omega_a \omega_b \omega_b \omega_b, \\ \smallint \omega_a \omega_y \omega_b - \smallint \omega_a \omega_a \omega_b \omega_b, \\ \smallint \omega_a \omega_a \omega_a \omega_b, \\ \smallint \omega_a \omega_a \omega_b \omega_b, \\ \smallint \omega_a \omega_b \omega_b \omega_b \end{align*} Hence $\smallint \omega_a \omega_z$ is a representative for the generator of $H^9(\Omega K)$. Since this iterated integral has length $2$, we get the upper bound on the distortion of $\tau$ of $O(L^{11})$ as required. \end{example} \section{Sharpness of bounds} We show that the bounds on differential forms representing the degree and Hopf invariant for spheres are sharp.
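Both proofs follow the same outline, which we sketch before giving the formal statements: we exhibit a family of cycles $Z_L$ with $\text{Suplength}(Z_L) \lesssim L$ and uniformly bounded volume, whose homology classes grow like $L^r$ (with $r = 1$ for the degree and $r = 2$ for the Hopf invariant). Pairing against $\omega$ and applying the integration bound of Section 3 then forces $$\sup_{\gamma \in \text{supp}(Z_L)} ||\omega(\gamma)||_{\infty} \geq \frac{\langle \omega, Z_L \rangle}{\text{Vol}(Z_L)} \gtrsim L^r,$$ so no bound of smaller order in $\text{Length}(\gamma)$ is possible.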
\begin{proposition} Let $\omega$ be any $(n-1)$-form on $\Omega S^n$ generating $H^{n-1}(\Omega S^n)$. Then it cannot be the case that $||\omega(\gamma)||_{\infty} = o(\text{Length}(\gamma))$ for $\gamma \in \Omega S^n$. \end{proposition} \begin{proof} Suppose for contradiction we have such an $\omega$ with $||\omega(\gamma)||_{\infty} = o(\text{Length}(\gamma))$. Let $\xi$ be a cycle generating $H_{n-1}(\Omega S^n)$. Consider the map $\{L\}: \Omega S^n \to \Omega S^n$ sending $\gamma$ to $\gamma \cdot \ldots \cdot \gamma$, the concatenation of $\gamma$ with itself $L$ times. This is $1$-Lipschitz so the induced map on chains $\{L\}_*: C_*(\Omega S^n) \to C_*(\Omega S^n)$ does not increase volume. It does however increase the suplength of chains: for any chain $Z$, $\text{Suplength}(\{L\}_*Z) = L\cdot \text{Suplength}(Z)$. Note that $[\{L\}_*\xi] = L[\xi] \in H_{n-1}(\Omega S^n)$ since $\{L\}_*\xi$ sweeps out $S^n$ by loops $L$ times. Normalizing $\omega$ so that $\langle \omega, \xi \rangle = 1$, we then have $$L = \langle \omega, \{L\}_*\xi \rangle \leq \text{Vol}(\{L\}_*\xi) \cdot \sup_{\gamma \in \text{supp}(\{L\}_*\xi)} ||\omega(\gamma)||_{\infty} = o(L),$$ which is a contradiction. \end{proof} \begin{proposition} Let $n$ be even and $\omega$ be any $(2n-2)$-form on $\Omega S^n$ generating $H^{2n-2}(\Omega S^n)$. Then it cannot be the case that $||\omega(\gamma)||_{\infty} = o(\text{Length}(\gamma)^2)$ for $\gamma \in \Omega S^n$. \end{proposition} \begin{proof} Let $\cdot$ denote the Pontryagin product on chains in $\Omega X$. Then $\xi \cdot \xi$ is a $(2n-2)$-cycle generating $H_{2n-2}(\Omega S^n)$. We proceed as before, this time using the family of cycles $\{L\}_*\xi \cdot \{L\}_*\xi$. By bilinearity of the Pontryagin product, $[\{L\}_*\xi \cdot \{L\}_*\xi] = L^2[\xi \cdot \xi] \in H_{2n-2}(\Omega S^n)$. Suppose we have such an $\omega$ with $||\omega(\gamma)||_{\infty} = o(\text{Length}(\gamma)^2)$. Then $$L^2 \lesssim \langle \omega, \{L\}_*\xi \cdot \{L\}_*\xi \rangle \leq \text{Vol}(\{L\}_*\xi \cdot \{L\}_*\xi) \cdot \sup_{\gamma \in \text{supp}(\{L\}_*\xi \cdot \{L\}_*\xi)} ||\omega(\gamma)||_{\infty} = o(L^2),$$ a contradiction. \end{proof} The crucial properties of the families of cycles in the preceding two propositions were that they had suplength $\lesssim L$ and that the ratio of their degree in homology to their volume was large (linear in $L$ and quadratic in $L$, respectively). We can study this phenomenon in more generality. \begin{definition} Let $X$ be a compact simply connected Riemannian manifold and $\alpha \in H_n(\Omega X; \mathbb{Q})$. The \textit{homological distortion} of $\alpha$ is \begin{align*} \delta_{\alpha}'(L) \coloneqq \text{max}\{k \; : \; & \text{there exists a cycle representing $k\alpha$ that has} \\ & \text{suplength $\leq L$ and volume $\leq L^n$}\}. \end{align*} \end{definition} The bound on the volume can always be satisfied by rescaling the coefficients of a cycle. The volume bound of $L^n$ is chosen so that homological distortion can be compared to distortion: \begin{proposition} For any $\alpha \in \pi_{n+1}(X) \otimes \mathbb{Q}$ with Hurewicz image $\tau(\alpha) \in H_{n}(\Omega X; \mathbb{Q})$, $\delta_{\alpha}(L) \lesssim \delta_{\tau(\alpha)}'(L)$. \end{proposition} \begin{proof} An $L$-Lipschitz map $f: S^{n+1} \to X$ with $[f] = k\alpha$ gives a cycle $\tau(f) \coloneqq (\Omega f)_*(\zeta_{n+1}) \in C_n(\Omega X)$ with volume $\lesssim L^n$, suplength $\lesssim L$ and $[\tau(f)] = k \tau(\alpha)$.
\end{proof} Note that Corollary 4.3 really gives an upper bound on homological distortion, which by Proposition 6.3 implies the upper bound on distortion. It is an open question whether there is an $\alpha$ such that $\delta_{\tau(\alpha)}'(L)$ grows strictly faster than $\delta_{\alpha}(L)$. \printbibliography \medskip \textsc{Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, United States} \\ E-mail address: \texttt{relliott@mit.edu} \end{document}
{ "timestamp": "2020-12-17T02:18:23", "yymm": "2012", "arxiv_id": "2012.08937", "language": "en", "url": "https://arxiv.org/abs/2012.08937" }
\section{Introduction} The paper is devoted to the study of amenability properties in the framework of DF-algebras. These are algebras with jointly continuous multiplication whose underlying topological vector spaces are DF-spaces. The category of DF-spaces contains spaces of distributions, e.g. tempered distributions or distributions with compact support. More generally, duals of Fr\'echet spaces belong to this category. In particular, the duals of K\"othe echelon spaces are DF-spaces. These are the so-called K\"othe co-echelon spaces and this class of objects will be of particular importance for us. The general study of amenable DF-algebras meets two major difficulties which come from the facts that the category of DF-spaces does not respect subspaces and that there is no Open Mapping Theorem available. This implies that the two well-known approaches to amenability (namely, Johnson's approach based on derivations \cite{BEJ} and Helemskii--Sheinberg's approach based on flat modules \cite{Hel_Shein}) which are equivalent in the category of Banach (or Fr\'echet) algebras are potentially inequivalent in the DF-algebra framework (however, we have no explicit counterexample so far). The main aim of this paper is to modify the notion of a flat module in such a way that the above-mentioned problem disappears. The resulting notion of a {\em topologically flat} module is equivalent to that of a flat module in the case of Banach (or Fr\'echet) modules, but, in our view, is better adapted to the nonmetrizable setting. We define {\em topologically amenable algebras} in terms of topologically flat modules, and we show that topological amenability for complete barrelled DF-algebras is equivalent to amenability in Johnson's sense. We also obtain a topological amenability criterion for K\"othe co-echelon algebras, thereby completing recent results of the second author \cite{KP-amenable,KP-contractible}. The theory of amenable Banach algebras essentially starts with the famous result of Johnson \cite[Theorem 2.5]{BEJ} who proved that the convolution algebra $L^1(G)$ is amenable if and only if the locally compact group $G$ is amenable. Since then, amenable Banach algebras have become an inseparable part of functional analysis and operator algebra theory (see \cite{Runde_new} for a recent and detailed account). A few years after the publication of Johnson's memoir, Helemskii and Sheinberg \cite{Hel_Shein} observed that the notion of an amenable algebra perfectly fits into the general ``Banach homological algebra'' developed earlier by Helemskii \cite{Hel_dim} (and, independently, by Kiehl and Verdier \cite{KV} and by Taylor \cite{JLT}). Namely, Helemskii and Sheinberg proved that a Banach algebra $A$ is amenable in Johnson's sense if and only if the unitization of $A$ is a flat Banach $A$-bimodule. This result was extended by the first author \cite[Corollary 3.5]{AP-weak} to the setting of Fr\'echet algebras. In the present article we continue this investigation and study amenability properties of DF-algebras, with a special emphasis on K\"othe co-echelon algebras. The paper is organized as follows. The next section is Notation and Preliminaries, and it contains basic definitions, facts, and notation used in the sequel. In Section~\ref{sect:flat_amen}, we introduce and study topologically flat locally convex modules and topologically amenable locally convex algebras.
The main results here are Theorem~\ref{ext-top-flat}, which characterizes topologically flat DF-modules in terms of the $\Ext$ functor, and Theorem~\ref{topam-der}, which shows that, for complete barrelled DF-algebras, the topological amenability in our sense is equivalent to the Johnson amenability. In Section~\ref{sect:coechelon}, we characterize topologically amenable K\"othe co-echelon algebras $k_p(V)$ in terms of the corresponding weight sets $V$ (Theorems~\ref{finite-order} and~\ref{thm:kinf}). Finally, in Section~\ref{sect:examples} we give some concrete examples of topologically amenable (and non-amenable) co-echelon algebras. In particular, we construct a topologically amenable co-echelon algebra of order $\infty$ which, in a sense, cannot be reduced to a direct sum of $\ell_\infty$ with a contractible co-echelon algebra. General references are: \cite{MV} for functional analysis, \cite{D,M} for Banach and topological algebra theory, and \cite{H2} for the homology theory of topological algebras. \section{Notation and Preliminaries} \label{sect:not_prelim} We start by recalling some basic definitions and introducing some notation that will be used in the sequel. By a \textit{locally convex algebra} we mean a locally convex space (lcs) over $\CC$ equipped with a separately continuous associative multiplication. In general, locally convex algebras are not assumed to have an identity. Given a locally convex algebra $A$, we denote by $A_+$ the unconditional unitization of $A$, and we denote by $A^{\op}$ the opposite algebra, i.e., the lcs $A$ with multiplication $a\cdot b:=ba$. In what follows, when using the word ``algebra'' with an adjective that describes a linear topological property (such as ``complete'', ``Fr\'echet'', ``Banach'', etc.), we mean that the underlying lcs of the algebra in question has the specified property. The same applies to locally convex modules (see below). Given a locally convex algebra $A$, a {\em left locally convex $A$-module} is an lcs $X$ together with a left $A$-module structure such that the action $A\times X\to X$ is separately continuous. Right locally convex modules and locally convex bimodules are defined similarly. At some point we will be using a concrete locally convex bimodule $A\otimes\CC$ which is the lcs $A$ itself with trivial right module action and multiplication as the left module action. The completed projective tensor product of lcs's $E$ and $F$ will be denoted by $E\Ptens F$, and the completion of $E$ will be denoted by $\wt{E}$ or by $E^\sim$. A complete locally convex algebra with jointly continuous multiplication is called a {\em $\Ptens$-algebra}. If $A$ is a $\Ptens$-algebra then the assignment $a\otimes b\mapsto ab$ gives rise to the so-called \textit{product map} $\pi_A\c A\Ptens A\to A$. We will simply write $\pi$ whenever it is clear to which algebra the product map refers. If $A$ is a $\Ptens$-algebra, then a left locally convex $A$-module $X$ is a {\em left $A$-$\Ptens$-module} if $X$ is complete and if the action of $A$ on $X$ is jointly continuous. Right $\Ptens$-modules and $\Ptens$-bimodules are defined similarly. The category of left $A$-$\Ptens$-modules (respectively, of right $A$-$\Ptens$-modules, of $A$-$B$-$\Ptens$-bimodules) will be denoted by $A\lmod$ (respectively, $\rmod A$, $A\bimod B$). Note that $A$-$\Ptens$-bimodules are nothing but left unital $A^e$-$\Ptens$-modules, where $A^e:=A_+\Ptens A_+^{\op}$ is the enveloping algebra of $A$ (see \cite[\S II.5.2]{H2}).
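Explicitly (standard formulas, recorded here for the reader's convenience): as an lcs we have $A_+=A\oplus\CC$, with multiplication given by \[ (a,\lambda)(b,\mu)=(ab+\lambda b+\mu a,\;\lambda\mu) \qquad (a,b\in A,\; \lambda,\mu\in\CC), \] and the identification of $A$-$\Ptens$-bimodules with left unital $A^e$-$\Ptens$-modules is given by $(a\otimes b)\cdot x:=a\cdot x\cdot b$ for $a\in A_+$, $b\in A_+^{\op}$, and $x$ in the bimodule in question.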
If $X\in\rmod A$ and $Y\in A\lmod$, then the \emph{$A$-module projective tensor product} of $X$ and $Y$ is defined as \[ X\ptens{A} Y:=(X\Ptens Y/N)^\sim, \] where \[ N:=\overline{\s}\bigl\{x\cdot a\otimes y-x\otimes a\cdot y : a\in A,\, x\in X,\, y\in Y\bigr\} \subset X\Ptens Y. \] If $X$ and $Y$ are two lcs's, then $L(X,Y)$ stands for the vector space of continuous linear operators from $X$ to $Y$. We equip $L(X,Y)$ with the topology of uniform convergence on bounded sets. As usual, we let $X'=L(X,\CC)$. If $A$ is a locally convex algebra and $X,Y$ are left locally convex $A$-modules, then $_AL(X,Y)$ denotes the vector space of continuous linear $A$-module maps, i.e., operators $T\in L(X,Y)$ satisfying $T(a\cdot x)=a\cdot Tx$ for all $a\in A$, $x\in X$. In the case of right $A$-modules, resp. $A$-$B$-bimodules, the vector spaces $L_A(X,Y)$ and $_AL_B(X,Y)$ are defined analogously. Suppose that $A$, $B$, $C$ are locally convex algebras, $X$ is a locally convex $B$-$C$-bimodule, and $Y$ is a locally convex $A$-$C$-bimodule. Then $L_C(X,Y)$ has a natural $A$-$B$-bimodule structure given by \[ (a\cdot T)(x)=a\cdot T(x),\quad (T\cdot b)(x)=T(b\cdot x) \qquad (a\in A,\; b\in B,\; T\in L_C(X,Y)). \] If the actions of $A$ on $Y$ and of $B$ on $X$ are hypocontinuous with respect to the families of bounded subsets of $Y$ and $X$, respectively, then $L_C(X,Y)$ is easily seen to be a locally convex $A$-$B$-bimodule (cf. \cite[Section 3]{JLT}). In particular, this condition is satisfied provided that the actions are jointly continuous. Consequently, for each $\Ptens$-algebra $A$ and each left (respectively, right) $A$-$\Ptens$-module $X$ the dual space $X'$ is a right (respectively, left) locally convex $A$-module. Note, however, that the action of $A$ on $X'$ need not be jointly continuous. Let $\CLCS$ denote the category of complete lcs's and continuous linear maps. Suppose that $\sC\subset\CLCS$ is a full additive subcategory. We write $\alg(\sC)$ for the category of all $\Ptens$-algebras whose underlying spaces are objects of $\sC$. If $A$ is a $\Ptens$-algebra, then we denote by $A\lmod(\sC)$ the full subcategory of $A\lmod$ consisting of those modules whose underlying spaces are objects of $\sC$. The symbols $\rmod A(\sC)$ and $A\bimod B(\sC)$ are understood in a similar way. Following \cite{Pir_wdgnucl} (cf. also \cite{H2}), we say that $\sC$ is {\em admissible} if the following conditions hold: \begin{itemize} \item[$(\sC1)$] if $E\in\sC$ and $F$ is a locally convex space isomorphic to $E$, then $F\in\sC$; \item[$(\sC2)$] if $E\in\sC$ and $E_0\subset E$ is a complemented vector subspace, then $E_0\in\sC$; \item[$(\sC3)$] if $E,F\in\sC$, then $E\Ptens F\in\sC$. \end{itemize} Most of the categories of complete lcs's used in functional analysis are admissible. In this paper, the concrete admissible subcategories we are mostly interested in are $\CLCS$ itself, the category $\Ban$ of Banach spaces, the category $\Fr$ of Fr\'echet spaces, and the category $\CBDF$ of complete barrelled (DF)-spaces. The admissibility of $\Ban$ and $\Fr$ is well known. As for $\CBDF$, property $(\sC2)$ follows from the fact that the classes of barrelled spaces and of (DF)-spaces are stable under taking quotients modulo closed subspaces \cite[27.1.(4) and 29.5.(1)]{K1}, while property $(\sC3)$ follows from \cite[41.4.(7) and 41.4.(8)]{K2}. Let $A$ be a $\Ptens$-algebra, and let $\sC$ be an admissible subcategory of $\CLCS$.
A sequence \begin{equation} \label{short_seq} 0 \to X \xra{i} Y \xra{p} Z \to 0 \end{equation} in $A\lmod(\sC)$ is {\em admissible} if it is split exact in $\CLCS$, i.e., if it has a contracting homotopy consisting of continuous linear maps. Geometrically, this means that $i$ is a topological embedding, $p$ is open (i.e., is a quotient map), $i(X)=\ker p$, and $i(X)$ is a complemented subspace of $Y$. We say that a morphism $i\colon X\to Y$ (respectively, $p\colon Y\to Z$) in $A\lmod(\sC)$ is an {\em admissible monomorphism} (respectively, an {\em admissible epimorphism}) if it fits into an admissible sequence \eqref{short_seq}. It is easy to show that $A\lmod(\sC)$ together with the class of all admissible sequences is an exact category in Quillen's sense \cite{Quillen}. Therefore most of the main notions and constructions of homological algebra (projective objects, projective resolutions, derived functors, etc.) make sense in $A\lmod(\sC)$. For details, we refer to \cite{H2}. An important property of $A\lmod(\sC)$ is that, if $A\in\alg(\sC)$, then $A\lmod(\sC)$ has enough projectives. As a consequence, each covariant functor $F\colon A\lmod(\sC)\to\Vect$ (where $\Vect$ is the category of vector spaces) has left derived functors $\dL_n F$, and each contravariant functor $F\colon A\lmod(\sC)\to\Vect$ has right derived functors $\dR^n F$ ($n\ge 0$). In particular, for each left locally convex $A$-module $Y$ the functor $\Ext^n_A(-,Y)$ is defined to be the $n$th right derived functor of ${_A}L(-,Y)\colon A\lmod(\sC)\to\Vect$. We would like to stress that, in contrast to \cite{H2}, we do not require $Y$ to be an object of $A\lmod(\sC)$. In particular, we may let $Y=Z'$ for some $Z\in\rmod A(\sC)$. This special case will be essential in our characterization of topologically flat modules (see Theorem \ref{ext-top-flat}). In fact, this is the only reason why we have to consider general locally convex modules rather than $\Ptens$-modules only. Note that the above facts on $A\lmod(\sC)$ have obvious analogs for $\rmod A(\sC)$ and $A\bimod B(\sC)$. For details, see \cite{H2}. Let us now recall some facts on strictly exact sequences of locally convex spaces. Let $\sC$ be an additive category. Following \cite{Schndrs}, we say that a short sequence \eqref{short_seq} in $\sC$ is {\em strictly exact} if $i$ is a kernel of $p$ and $p$ is a cokernel of $i$. \begin{example} If $\sC=\Vect$, then \eqref{short_seq} is strictly exact iff it is exact in the usual sense. \end{example} \begin{example} \label{example:str_ex_LCS} If $\sC=\CLCS$, then \eqref{short_seq} is strictly exact iff $i$ is topologically injective, $i(X)=\ker p$, $p$ is an open map of $Y$ onto $p(Y)$, and $p(Y)$ is dense in $Z$. This follows from \cite[Proposition 4.1.8]{Prosm_derFA}. Essentially, this means that $X$ can be identified with a closed subspace of $Y$, and $Z$ is the completion of $Y/X$. \end{example} \begin{example} \label{example:str_ex_Fr} If $\sC=\Fr$ or $\sC=\Ban$, then \eqref{short_seq} is strictly exact in $\sC$ iff it is strictly exact in $\CLCS$ iff it is exact (or, equivalently, strictly exact) in $\Vect$. This is essentially a combination of Example~\ref{example:str_ex_LCS} with the Open Mapping Theorem. See also \cite[Chapter 2]{Wengen}. \end{example} The following result is a special case of V.~P.~Palamodov's theorem \cite[Proposition 4.2]{VPP} (see also \cite[Theorem 2.2.2]{Wengen}). Given a set $S$, let $\ell_\infty(S)$ denote the Banach space of bounded $\CC$-valued functions on $S$. 
\begin{theorem}[Palamodov] \label{top-exact} A short sequence \eqref{short_seq} in $\CLCS$ is strictly exact if and only if, for each set $S$, the sequence \[ 0 \to L(Z,\ell_\infty(S))\to L(Y,\ell_\infty(S)) \to L(X,\ell_\infty(S)) \to 0 \] is exact in $\Vect$. \end{theorem} We end this section with a definition and a collection of basic facts concerning K\"othe co-echelon spaces and algebras. Let $I$ be a countable set, and let $V:=(v_n)_{n\in\N}$ be a sequence of weights $v_n\c I\to(0,\infty]$ such that \begin{gather} \forall\,i\in I\quad\exists\,n\in\N\quad v_n(i)<\infty,\tag{W1}\\ \forall\,n\in\N\quad\forall i\in I\quad v_{n+1}(i)\le v_n(i).\tag{W2} \end{gather} For $1\le p\le\infty$ or $p=0$ we define the \textit{K\"othe co-echelon space of order $p$} as \begin{gather*} k_p(I,V):=\Big\{x=(x_i)\in\CC^I : \sum_{i\in I}|x_i|^pv_n(i)^p<\infty\text{\,\,for some\,\,}n\in\N\Big\} \quad (1\le p<\infty),\\ k_{\infty}(I,V):=\Big\{x=(x_i)\in\CC^I : \sup_{i\in I}|x_i|v_n(i)<\infty\text{\,\,for some\,\,}n\in\N\Big\},\\ k_0(I,V):=\Big\{x=(x_i)\in\CC^I : \lim_{i\to\infty}|x_i|v_n(i)=0\text{\,\,for some\,\,}n\in\N\Big\}. \end{gather*} We often write $k_p(V)$ for $k_p(I,V)$ when the index set $I$ is clear from the context. In most examples we actually have $I=\N$ (see Examples \ref{example:phi}--\ref{example:germs}), but sometimes it is more convenient to let $I=\N\times\N$ (see Example~\ref{example:NN}). The above definition is a bit unusual since we allow $v_n(i)=\infty$ for some $n\in\N$ and $i\in I$. However, this less restrictive approach does not affect our proofs and allows us to consider in particular the space $\phi:=\CC^{(\N)}$ of finitely supported sequences (see Example \ref{example:phi} below). The space $k_p(I,V)$ is canonically endowed with the inductive limit topology of the system $(\ell_p(I,v_n))_{n\in\N}$ (for $p\ge 1$) or $(c_0(I,v_n))_{n\in\N}$ (for $p=0$), where $\ell_p(I,v_n)$ and $c_0(I,v_n)$ are the weighted Banach spaces of scalar sequences equipped with their canonical norms. Clearly, if $v_n(i)=\infty$, then $x\in\ell_p(I,v_n)$ implies that $x_i=0$. Thus we usually write \[k_p(I,V)=\ind_n\ell_p(I,v_n)\hspace{10pt}(1\le p\le\infty),\qquad k_0(I,V)=\ind_nc_0(I,v_n).\] Since K\"othe co-echelon spaces are countable inductive limits of Banach spaces, they are barrelled DF-spaces (see \cite[12.4, Theorem 8]{J}). By \cite[Theorem 2.3]{BMS}, $k_p(V)$ is complete for all $1\le p\le \infty$. On the other hand, $k_0(V)$ is not always complete, see \cite[\S31.6]{K1} or \cite[Theorem 3.7 and Examples 3.11, 4.11.2, 4.11.3]{BMS}. In many concrete cases (see examples below), K\"othe co-echelon spaces are algebras with respect to the coordinatewise multiplication of sequences. A systematic study of such algebras was initiated in \cite{BonDom}. Recall from \cite[Proposition 2.1]{BonDom} that $k_p(V)$ is an algebra if and only if \begin{equation} \forall\,n\in\N\quad\exists\,m\in\N\quad v_m/v_n^2\in\ell_{\infty}\tag{W3} \end{equation} (we let $\infty/\infty=1$ for convenience). Moreover, if (W3) holds, then the multiplication on $k_p(V)$ is automatically jointly continuous [loc. cit.]. From now on, when we write something like ``let $k_p(V)$ be a {\em K\"othe co-echelon algebra}'', we tacitly assume that $V$ is a sequence of weights satisfying conditions (W1)--(W3), and that $k_p(V)$ is considered as a locally convex algebra under the coordinatewise multiplication. 
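Before turning to concrete examples, let us illustrate how condition (W3) is checked in practice (a quick computation; the resulting algebra reappears in Example~\ref{example:s'} below in a different guise). Take $I=\N$ and $v_n(j)=j^{-n}$ for all $n,j\in\N$. Conditions (W1) and (W2) are obvious, and, given $n\in\N$, we may take $m=2n$ in (W3), since \[ \frac{v_m(j)}{v_n(j)^2}=\frac{j^{-2n}}{j^{-2n}}=1 \qquad (j\in\N), \] so that $v_m/v_n^2\in\ell_\infty$. For this choice of $V$, the space $k_\infty(V)$ is precisely the algebra $s'$ of sequences of polynomial growth.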
\begin{example} \label{example:phi} For each $n\in\N$, define $v_n\colon\N\to (0,\infty]$ by $v_n(j)=1$ for $j\le n$, and $v_n(j)=\infty$ for $j>n$. Conditions (W1)--(W3) are clearly satisfied, and $k_p(V)$ is nothing but the algebra $\phi$ of finite sequences equipped with the strongest locally convex topology. \end{example} \begin{example} \label{example:dual_power} Let $R\in [0,+\infty)$, and let $\alpha=(\alpha_j)_{j\in\N}$ be a sequence of positive numbers increasing to infinity. Consider the {\em dual power series spaces}\footnote[1]{By \cite[Theorem 2.7]{BMS}, $D\Lambda_R^p(\alpha)$ is topologically isomorphic to the strong dual of the power series space $\Lambda_{1/R}^q(\alpha)$, where $1/p+1/q=1$.} \begin{align*} D\Lambda_R^p(\alpha)&=\Bigl\{ x=(x_j)\in\CC^\N : \sum_j |x_j|^p r^{\alpha_j p}<\infty \;\text{for some}\; r>R\Bigr\} \quad (1\le p<\infty);\\ D\Lambda_R^\infty(\alpha)&=\Bigl\{ x=(x_j)\in\CC^\N : \sup_j |x_j| r^{\alpha_j}<\infty \;\text{for some}\; r>R\Bigr\}. \end{align*} If $(r_n)$ is a fixed sequence of positive numbers strictly decreasing to $R$, then we clearly have $D\Lambda_R^p(\alpha)=k_p(V)$, where $v_n(j)=r_n^{\alpha_j}$ for all $n,j\in\N$. We could also consider the space $D\Lambda_R^0(\alpha)=k_0(V)$ with $V$ as above, but the condition that $\alpha_j\to\infty$ easily implies that $D\Lambda_R^0(\alpha)=D\Lambda_R^\infty(\alpha)$. An elementary computation shows that $D\Lambda_R^p(\alpha)$ satisfies (W3) if and only if for each $r>R$ there exists $\rho>R$ such that $\rho\le r^2$. Equivalently, this means that if $r>R$, then $r^2>R$. If $R\ge 1$ or $R=0$, then this condition is clearly satisfied, so $D\Lambda_R^p(\alpha)$ is a K\"othe co-echelon algebra in this case. If $0<R<1$, then the above condition fails (take any $r\in (R,\sqrt{R}]$). \end{example} \begin{example} \label{example:s'} Letting $\alpha_j=\log j$ in Example~\ref{example:dual_power}, we see that $D\Lambda_0^p(\alpha)$ is nothing but the algebra $s'$ of sequences of polynomial growth. \end{example} \begin{example} \label{example:germs} If $\alpha_j=j$, then $D\Lambda_R^p(\alpha)$ is topologically isomorphic to the space of germs of holomorphic functions on the closed disc $\ol{\DD}_R=\{ z\in\CC : |z| \le R\}$. If $R\ge 1$ or $R=0$, then the multiplication on $D\Lambda_R^p(\alpha)$ corresponds to the ``componentwise'' multiplication of the Taylor expansions of holomorphic functions (the {\em Hadamard multiplication}, cf. \cite{ReSa}). The resulting locally convex algebra will be denoted by $\cH(\ol{\DD}_R)$. \end{example} Given $p\in [1,\infty]$ and a sequence $V=(v_n)$ of weights satisfying (W1)--(W3), we say that $V$ is {\em eventually in $\ell_p$} if $v_n\in\ell_p(I)$ for some $n\in\N$. Because of (W2), this means precisely that there exists $n\in\N$ such that $v_k\in\ell_p(I)$ for all $k\ge n$. If $V$ is eventually in $\ell_\infty$, then we say that $V$ is {\em eventually bounded}. By \cite[Proposition 2.5]{BonDom}, if $1\le p<\infty$, then \[ \begin{split} \text{$V$ is eventually in $\ell_p$} &\iff \text{$V$ is eventually in $\ell_1$} \iff \text{$k_p(V)$ is unital}\\ &\iff \text{$V$ is eventually bounded, and $k_p(V)$ is nuclear.} \end{split} \] In fact, if the above conditions are satisfied, then we have $k_p(V)=k_q(V)$ for all $p,q\in [1,\infty]\cup\{ 0\}$ (see \cite[Proposition 15]{Bier}). A comprehensive study of K\"othe co-echelon spaces may be found in \cite{BMS}.
K\"{o}the co-echelon algebras appear as a main object of investigation in \cite{BonDom} and \cite{KP-amenable,KP-contractible}. \section{Topological Flatness and Topological Amenability} \label{sect:flat_amen} Let $\sC$ be an admissible subcategory of $\CLCS$, and let $A\in\alg(\sC)$. \begin{definition} \label{def:topflat} We say that a module $X\in A\lmod(\sC)$ is {\em topologically flat} (relative to $\sC$) if for each short admissible sequence \begin{equation} \label{shortadm} 0 \to Y_1\to Y_2 \to Y_3 \to 0 \end{equation} in $\rmod A(\sC)$ the sequence \begin{equation} \label{short_tens} 0 \to Y_1\ptens{A} X \to Y_2\ptens{A} X \to Y_3\ptens{A} X \to 0 \end{equation} is strictly exact in $\CLCS$. A right module in $\rmod A(\sC)$ (respectively, a bimodule in $A\bimod A(\sC)$) is topologically flat if it is topologically flat as a left module over $A^{\op}$ (respectively, over $A^e$). \end{definition} \begin{rem} \label{flat-top-flat} According to \cite{H2}, a module $X\in A\lmod(\sC)$ is {\em flat} (relative to $\sC$) if for each short admissible sequence \eqref{shortadm} in $\rmod A(\sC)$ the sequence \eqref{short_tens} is exact in $\Vect$. If $\sC\subset\Fr$, then flatness and topological flatness are equivalent (see Example~\ref{example:str_ex_Fr}). We conjecture that, in the general case, neither topological flatness implies flatness, nor vice versa. However, we do not have concrete counterexamples at the moment. \end{rem} \begin{example} \label{example:proj_topflat} Each projective module $P\in A\lmod(\sC)$ is topologically flat. Indeed, if $P$ is free, i.e., if $P$ is isomorphic to $A_+\Ptens E$ for some $E\in\sC$, then \eqref{short_tens} is isomorphic to the sequence \[ 0 \to Y_1\Ptens E \to Y_2\Ptens E \to Y_3\Ptens E \to 0, \] which is split exact and is {\em a fortiori} strictly exact in $\CLCS$. Since each projective module is a retract of a free module \cite[III.1.27]{H2}, the result follows. \end{example} \begin{prop} \label{prop:flat_topinj} A module $X\in A\lmod(\sC)$ is topologically flat if and only if for each admissible monomorphism $Y\to Z$ in $\rmod A(\sC)$ the induced map $Y\ptens{A} X\to Z\ptens{A} X$ is topologically injective. \end{prop} \begin{proof} This is immediate from Definition~\ref{def:topflat} and from the fact that the functor $(-)\ptens{A} X\colon\rmod A\to\CLCS$ preserves cokernels \cite[Proposition 3.3]{AP}. \end{proof} \begin{rem} For $\sC=\Ban$, Proposition~\ref{prop:flat_topinj} is well known (cf. \cite[Theorem VII.1.42]{H3}). For $\sC=\Fr$, this fact was observed in \cite{AP-weak}. \end{rem} The following ``adjoint associativity'' (or ``exponential law'') for locally convex spaces is a kind of folklore. Since we have not found an exact reference, we give a proof here for the convenience of the reader. \begin{prop} \label{prop:adjass_lcs} Let $X$, $Y$, $Z$ be locally convex spaces. Suppose that $Z$ is complete. There is a natural linear map \begin{equation} \label{adjass} L(X\Ptens Y,Z)\to L(X,L(Y,Z)), \qquad f\mapsto (x\mapsto (y\mapsto f(x\otimes y))). \end{equation} The above map is a vector space isomorphism in either of the following cases: \begin{mycompactenum} \item $X$ and $Y$ are Fr\'echet spaces; \item $X$ and $Y$ are DF-spaces, and $Y$ is barrelled. \end{mycompactenum} \end{prop} \begin{proof} By the universal property of the projective tensor product (see, e.g., \cite[41.3.(1)]{K2}), $L(X\Ptens Y,Z)$ is naturally identified with the space of jointly continuous bilinear maps from $X\times Y$ to $Z$. 
On the other hand, each $\varphi\in L(X,L(Y,Z))$ determines a separately continuous bilinear map $\varPhi\colon X\times Y\to Z$ via $\varPhi(x,y)=\varphi(x)(y)$ ($x\in X$, $y\in Y$). Moreover, the rule $\varphi\mapsto\varPhi$ determines a vector space isomorphism between $L(X,L(Y,Z))$ and the space of those separately continuous bilinear maps $X\times Y\to Z$ which are $\cB_Y$-hypocontinuous, where $\cB_Y$ is the family of all bounded subsets of $Y$ \cite[40.1.(3)]{K2}. This implies that \eqref{adjass} indeed takes $L(X\Ptens Y,Z)$ to $L(X,L(Y,Z))$, is always injective, and is surjective if and only if each separately continuous, $\cB_Y$-hypocontinuous bilinear map from $X\times Y$ to $Z$ is jointly continuous. In case (i), this condition is clearly satisfied because separate continuity and joint continuity are equivalent for maps $X\times Y\to Z$ (see, e.g., \cite[40.2.(1)]{K2}). Assume now that (ii) holds, and let $\varPhi\colon X\times Y\to Z$ be a separately continuous, $\cB_Y$-hypocontinuous bilinear map. Since $Y$ is barrelled, $\varPhi$ is also $\cB_X$-hypocontinuous by \cite[40.2.(3)]{K2}. Finally, since $X$ and $Y$ are DF-spaces, and since $\varPhi$ is $(\cB_X,\cB_Y)$-hypocontinuous, we conclude that $\varPhi$ is jointly continuous \cite[40.2.(10)]{K2}. In view of the above remarks, this completes the proof. \end{proof} \begin{corollary} \label{hc-jc} Let $X,Y$ be either Fr\'echet spaces or barrelled DF-spaces. Then there exist natural vector space isomorphisms \[ (X\Ptens Y)'\cong L(X,Y') \cong L(Y,X'). \] \end{corollary} The following is a natural extension of \cite[II.5.22]{H2} to the locally convex setting. \begin{prop} \label{prop:adjass_mod} Let $A$, $B$, $C$ be $\Ptens$-algebras, and let $X\in A\bimod B$, $Y\in B\bimod C$, and $Z\in A\bimod C$. There is a natural linear map \begin{equation} \label{adjass_mod} {_A}L_C(X\ptens{B} Y,Z)\to {_A}L_B(X,L_C(Y,Z)), \qquad f\mapsto (x\mapsto (y\mapsto f(x\otimes y))). \end{equation} The above map is a vector space isomorphism if either of the conditions {\upshape (i), (ii)} of Proposition~{\upshape\ref{prop:adjass_lcs}} is satisfied. \end{prop} \begin{proof} By the universal property of $\ptens{B}$ \cite[II.4.2]{H2}, ${_A}L_C(X\ptens{B} Y,Z)$ is naturally identified with the space of those jointly continuous bilinear maps from $X\times Y$ to $Z$ which are (1) $B$-balanced, (2) $A$-linear in the first variable, and (3) $C$-linear in the second variable. A routine calculation shows that a jointly continuous bilinear map $X\times Y\to Z$ has the above three properties if and only if the respective linear map $X\to L(Y,Z)$ takes $X$ to $L_C(Y,Z)$ and is an $A$-$B$-bimodule morphism. The rest follows from Proposition~\ref{prop:adjass_lcs}. \end{proof} \begin{corollary} \label{hc-jc2} Let $B$ be a $\Ptens$-algebra, $X\in\rmod B$, and $Y\in B\lmod$. If $X$ and $Y$ are either Fr\'echet spaces or barrelled DF-spaces, then there exist natural vector space isomorphisms \[ (X\ptens{B} Y)'\cong L_B(X,Y')\cong {_B}L(Y,X'). \] \end{corollary} \begin{corollary} \label{tensor-hc-jc} Let $\sC\in\{\Fr,\CBDF\}$, let $A$ be a $\Ptens$-algebra, and let $X\in \rmod A(\sC)$, $Y\in A\lmod(\sC)$, $Z\in\sC$. Then there exists a natural vector space isomorphism \[ L(X\ptens{A} Y,Z')\cong L_A(Z\Ptens X,Y'). 
\] \end{corollary} \begin{proof} This follows from Corollaries \ref{hc-jc}, \ref{hc-jc2}, the commutativity of $\Ptens$, and the associativity of $\tens_A$, since we have \[ L(X\ptens{A} Y,Z') \cong ((X\ptens{A} Y)\Ptens Z)' \cong ((Z\Ptens X)\ptens{A}Y)' \cong L_A(Z\Ptens X,Y'). \qedhere \] \end{proof} The following result was proved in \cite[Proposition 3.3]{AP-weak} for $\sC=\Fr$. We now give a shorter proof which holds both for $\sC=\Fr$ and $\sC=\CBDF$. \begin{prop} \label{prop:ext-dual} Let $\sC\in\{\Fr,\CBDF\}$, and let $A\in\alg(\sC)$. Then for all $X\in A\lmod(\sC)$, $Y\in\rmod A(\sC)$, $n\in\Z_+$ there is a natural vector space isomorphism $\Ext^n_A(X,Y')\cong\Ext^n_A(Y,X')$. \end{prop} \begin{proof} Let $P_\bullet\to A_+$ be a projective resolution of $A_+$ in $A\bimod A(\sC)$. Then $P_\bullet\ptens{A} X\to X$ is a projective resolution of $X$ in $A\lmod(\sC)$, and $Y\ptens{A} P_\bullet\to Y$ is a projective resolution of $Y$ in $\rmod A(\sC)$. Applying Corollary~\ref{hc-jc2} twice, we obtain natural vector space isomorphisms \[ \begin{split} \Ext^n_A(X,Y') =H^n({_A}L\bigl(P_\bullet\ptens{A} X,Y')\bigr) &\cong H^n\bigl((Y\ptens{A} P_\bullet \ptens{A} X)'\bigr)\\ &\cong H^n\bigl(L_A(Y\ptens{A} P_\bullet,X')\bigr) =\Ext^n_A(Y,X'). \qedhere \end{split} \] \end{proof} Our next theorem generalizes \cite[Proposition 3.4]{AP-weak}. \begin{theorem} \label{ext-top-flat} Let $\sC\in\{\Fr,\CBDF\}$, and let $A\in\alg(\sC)$. The following properties of $X\in A\lmod(\sC)$ are equivalent: \begin{mycompactenum} \item $X$ is topologically flat; \item $\Ext_A^1(Y,X')=0\quad\forall\,\,Y\in \rmod A(\sC)$; \item $\Ext_A^1(X,Y')=0\quad\forall\,\,Y\in \rmod A(\sC)$; \item $\Ext_A^n(Y,X')=0\quad\forall\,\,Y\in \rmod A(\sC),\quad\forall\,\,n\in\N$; \item $\Ext_A^n(X,Y')=0\quad\forall\,\,Y\in \rmod A(\sC),\quad\forall\,\,n\in\N$; \item the functor $L_A(-,X')\colon\rmod A(\sC)\to\Vect$ takes short admissible sequences to exact sequences. \end{mycompactenum} \end{theorem} \begin{proof} $\mathrm{(ii)\Leftrightarrow (iii)}$, $\mathrm{(iv)\Leftrightarrow (v)}$: these are special cases of Proposition~\ref{prop:ext-dual}. $\mathrm{(ii)\Leftrightarrow (iv)\Leftrightarrow (vi)}$: these are special cases of \cite[III.3.7]{H2}. $\mathrm{(i)\Rightarrow (vi)}$. By assumption, for each short admissible sequence \eqref{shortadm} in $\rmod A(\sC)$ the sequence \eqref{short_tens} is strictly exact in $\CLCS$. By Palamodov's Theorem~\ref{top-exact}, the dual sequence \begin{equation} \label{dual_tens} 0 \to (Y_3\ptens{A} X)' \to (Y_2\ptens{A} X)' \to (Y_1\ptens{A} X)' \to 0 \end{equation} is exact in $\Vect$. Corollary~\ref{hc-jc2} implies that \eqref{dual_tens} is isomorphic to \begin{equation} \label{Hom_X'} 0 \to L_A(Y_3,X') \to L_A(Y_2,X')\to L_A(Y_1,X') \to 0. \end{equation} This yields (vi). $\mathrm{(vi)\Rightarrow (i)}$. We want to show that for each short admissible sequence \eqref{shortadm} in $\rmod A(\sC)$ the sequence \eqref{short_tens} is strictly exact in $\CLCS$. By Palamodov's Theorem~\ref{top-exact}, this means precisely that for each set $S$ the sequence \begin{equation} \label{L_tens_linf} 0 \to L(Y_3\ptens{A} X,\ell_\infty(S)) \to L(Y_2\ptens{A} X,\ell_\infty(S)) \to L(Y_1\ptens{A} X,\ell_\infty(S)) \to 0 \end{equation} is exact in $\Vect$. 
Taking into account the isomorphism $\ell_\infty(S)\cong (\ell_1(S))'$ and applying Corollary~\ref{tensor-hc-jc}, we see that \eqref{L_tens_linf} is isomorphic to \begin{equation} \label{Homl1_X'} 0 \to L_A(\ell_1(S)\Ptens Y_3,X') \to L_A(\ell_1(S)\Ptens Y_2,X') \to L_A(\ell_1(S)\Ptens Y_1,X') \to 0. \end{equation} Since $\ell_1(S)\Ptens Y_\bullet$ is admissible in $\rmod A(\sC)$, we see that \eqref{Homl1_X'} is exact in $\Vect$ by (vi). In view of the above remarks, this completes the proof. \end{proof} The next proposition shows that a flat Banach module over a Banach algebra remains topologically flat if we consider it as an object of the bigger category of Fr\'echet modules or of complete barrelled DF-modules. \begin{prop} \label{banach-top-flat} Let $A$ be a Banach algebra and let $X$ be a left Banach $A$-module. The following conditions are equivalent: \begin{mycompactenum} \item $X$ is flat (or, equivalently, topologically flat) relative to $\Ban$; \item $X$ is flat (or, equivalently, topologically flat) relative to $\Fr$; \item $X$ is topologically flat relative to $\CBDF$. \end{mycompactenum} \end{prop} \begin{proof} Clearly, each of the conditions (ii) and (iii) implies (i). Conversely, let $\sC$ denote either of the categories $\Fr$ or $\CBDF$, and suppose that (i) holds. By \cite[VII.1.14]{H2}, condition (i) means precisely that $X'$ is injective in $\rmod A(\Ban)$. Using \cite[III.1.31]{H2}, we see that $X'$ is a retract of $L(A_+,X')$ in $\rmod A(\Ban)$. Hence for each short admissible sequence $Y_\bullet$ in $\rmod A(\sC)$ the sequence $L_A(Y_\bullet,X')$ is a retract of $L_A(Y_\bullet,L(A_+,X'))$. On the other hand, \cite[Proposition 3.2]{JLT} implies that \[ L_A(Y_\bullet,L(A_+,X')) \cong L(Y_\bullet,X'). \] Hence $L_A(Y_\bullet,X')$ is a retract of $L(Y_\bullet,X')$, which is clearly exact in $\Vect$. Therefore $L_A(Y_\bullet,X')$ is exact in $\Vect$. Applying Theorem \ref{ext-top-flat}, we conclude that $X$ is topologically flat in $A\lmod(\sC)$. \end{proof} \begin{rem} The equivalence of (i) and (ii) in Proposition~\ref{banach-top-flat} was proved in \cite[Proposition 4.11]{AP}. \end{rem} We now turn to topological amenability, using Helemskii--Sheinberg's approach \cite{Hel_Shein} as a motivation. Let $\sC$ be an admissible subcategory of $\CLCS$, and let $A\in\alg(\sC)$. \begin{definition} We say that $A$ is \emph{topologically amenable} (relative to $\sC$) if $A_+$ is topologically flat in $A\bimod A(\sC)$. \end{definition} \begin{rem} According to \cite{H2}, $A$ is {\em amenable} if $A_+$ is flat in $A\bimod A(\sC)$. As in Remark~\ref{flat-top-flat}, we would like to stress that amenability and topological amenability are formally different in the general case, but they are equivalent if $\sC\subset\Fr$. \end{rem} \begin{example} \label{example:contr_topamen} Recall from \cite[Chap. VII]{H3} (see also \cite[Postscript]{H2}) that $A$ is {\em contractible} if $A_+$ is projective in $A\bimod A(\sC)$. Since projective modules are topologically flat (see Example~\ref{example:proj_topflat}), we conclude that each contractible algebra is topologically amenable. \end{example} Recall that the amenability of a Banach algebra can be rephrased in the language of derivations. Our next result gives a similar characterization in the categories $\Fr$ and $\CBDF$. For Fr\'echet algebras, this was proved in \cite[Corollary 3.5]{AP-weak}. \begin{theorem} \label{topam-der} Let $\sC\in\{\Fr,\CBDF\}$, and let $A\in\alg(\sC)$. 
Then $A$ is topologically amenable relative to $\sC$ if and only if for each $X\in A\bimod A(\sC)$ every continuous derivation $A\to X'$ is inner. \end{theorem} \begin{proof} It is a standard fact (see, e.g., \cite[Chap. I, Subsection 2.1]{H2}) that every continuous derivation $A\to X'$ is inner if and only if $\cH^1(A,X')=0$, where $\cH^1(A,X')$ is the first continuous Hochschild cohomology group of $A$ with coefficients in $X'$. By \cite[III.4.9]{H2}, we have a vector space isomorphism $\cH^1(A,X')\cong\Ext^1_{A^e}(A_+,X')$. Now the result follows from Theorem~\ref{ext-top-flat}. \end{proof} In the $\CBDF$ category it is also possible to relate topological amenability to amenability. \begin{corollary} \label{am-top-am} Let $A$ be a complete barrelled DF-algebra which is amenable relative to $\CBDF$. Then $A$ is topologically amenable relative to $\CBDF$. \end{corollary} \begin{proof} By \cite[Theorem 4.4]{KP-amenable}, for each $X\in A\bimod A(\CBDF)$ every continuous derivation $A\to X'$ is inner. Now the result follows from Theorem~\ref{topam-der}. \end{proof} If $A$ is a Banach algebra, then the above notions coincide. \begin{prop} \label{am-top-am-banach} Let $\sC\in\{\Fr,\CBDF\}$, and let $A$ be a Banach algebra. Then $A$ is topologically amenable relative to $\sC$ if and only if $A$ is amenable relative to $\Ban$. \end{prop} \begin{proof} This follows immediately from Proposition \ref{banach-top-flat}. \end{proof} Since the algebras $k_0(V)$ that appear in the next section are not necessarily complete, we adopt the following definition of topological amenability for non-complete algebras. \begin{definition} Let $\sC\in\{\Fr,\CBDF\}$, and let $A$ be a locally convex algebra with jointly continuous multiplication such that $\wt{A}\in\alg(\sC)$ (where $\wt{A}$ is the completion of $A$). We say that $A$ is {\em topologically amenable} relative to $\sC$ if $\wt{A}$ is topologically amenable relative to $\sC$. \end{definition} Given $A$ as above, let $A\bimod A(\sC)$ denote the category of locally convex $A$-bimodules $X$ such that the left and right actions of $A$ on $X$ are jointly continuous and such that the underlying space of $X$ is an object of $\sC$. Clearly, we have an isomorphism of categories $A\bimod A(\sC)\cong\wt{A}\bimod\wt{A}(\sC)$. Using the above definition, we can easily extend Theorem~\ref{topam-der} to non-complete algebras. \begin{theorem} \label{topam-noncompl-der} Let $\sC\in\{\Fr,\CBDF\}$, and let $A$ be a locally convex algebra with jointly continuous multiplication such that $\wt{A}\in\alg(\sC)$. Then $A$ is topologically amenable relative to $\sC$ if and only if for each $X\in A\bimod A(\sC)$ every continuous derivation $A\to X'$ is inner. \end{theorem} \begin{proof} Given $X\in A\bimod A(\sC)\cong\wt{A}\bimod\wt{A}(\sC)$, observe that $X'$ is complete (see, e.g., \cite[28.5.(1)]{K1}). Hence each continuous derivation $A\to X'$ uniquely extends to a continuous linear map $\wt{A}\to X'$, which is easily seen to be a derivation. Thus we have a $1$-$1$ correspondence between the continuous derivations $A\to X'$ and $\wt{A}\to X'$, which takes inner derivations onto inner derivations. Now the result follows from Theorem~\ref{topam-der} applied to $\wt{A}$. \end{proof} We end this section with another consequence of topological amenability. The proof is similar to that of \cite[Proposition 2.8.64]{D}; therefore we omit it. 
\begin{prop} \label{top-am-dense-range} Let $\sC\in\{\Fr,\CBDF\}$, and let $A$ and $B$ be locally convex algebras with jointly continuous multiplication such that $\wt{A},\wt{B}\in\alg(\sC)$. Suppose that $\theta\c A\to B$ is a continuous homomorphism with dense range. If $A$ is topologically amenable relative to $\sC$, then so is $B$. \end{prop} \section{Topological Amenability for Co-echelon Algebras} \label{sect:coechelon} We are now going to investigate topological amenability in the framework of K\"othe co-echelon algebras. Throughout this section, amenability and topological amenability are considered relative to the category $\CBDF$ of complete barrelled DF-spaces. The following result is a restatement of \cite[Lemma 0.5.1]{H2} adapted to DF-spaces. The proof is essentially the same. \begin{lemma} \label{adjoint-surjective} Let $X$ and $Y$ be DF-spaces such that $X$ is complete and $Y$ is quasi-barrelled, and let $u\c X\to Y$ be a continuous linear injection. If $u$ has dense range and its adjoint $u'\c Y'\to X'$ is surjective, then $u$ is a topological isomorphism between $X$ and $Y$. \end{lemma} \begin{proof} By assumption, $u'\c Y'\to X'$ is a continuous linear bijection between Fr\'echet spaces, thus it is a topological isomorphism by the Open Mapping Theorem \cite[Theorem 24.30]{MV}. Therefore $u''$ is a topological isomorphism as well. We have \begin{equation} \label{bidual} \iota_Y\circ u=u''\circ\iota_X, \end{equation} where $\iota_X\c X\hk X''$ and $\iota_Y\c Y\hk Y''$ are the canonical inclusions. Since $Y$ is quasi-barrelled, it follows from \cite[11.2, Proposition 2]{J} that $\iota_Y$ is a topological embedding. Since $u''$ is a topological isomorphism, we conclude from \eqref{bidual} that $\iota_X$ is continuous, or, equivalently, a topological embedding [loc. cit.]. Hence $u''$ induces a topological isomorphism $u\c X\to\im u$. Since $X$ is complete, $\im u$ is complete as well, so $\im u$ is closed in $Y$. Therefore $u$ is a topological isomorphism of $X$ onto $\im u=\overline{\im u}=Y$. \end{proof} Before proceeding to the characterization results, we list some properties of topologically amenable K\"othe co-echelon algebras of finite order. \begin{lemma} \label{ker_pi} Let $1\le p<\infty$, and let $k_p(V)$ be a K\"othe co-echelon algebra. Then the kernel of the multiplication map $\pi\colon k_p(V)\Ptens k_p(V)\to k_p(V)$ is a complemented subspace of $k_p(V)\Ptens k_p(V)$. As a consequence, the quotient $k_p(V)\Ptens k_p(V)/\ker\pi$ is complete. \end{lemma} \begin{proof} To begin with, let us show that the family $(e_i\otimes e_j)_{i,j\in\N}$ is a Schauder basis in $k_p(V)\Ptens k_p(V)$ with respect to the square ordering of $\N\times\N$ (see \cite[Section 4.3]{RR}). Indeed, we have $k_p(V)\Ptens k_p(V)=\ind_n \ell_p(v_n)\Ptens\ell_p(v_n)$ by \cite[Theorem 7]{EMM}. Hence if $u\in k_p(V)\Ptens k_p(V)$ then $u\in\ell_p(v_n)\Ptens\ell_p(v_n)$ for some $n\in\N$. Since $(e_j)_{j\in\N}$ is a Schauder basis in $\ell_p(v_n)$, it follows from \cite[Proposition 4.25]{RR} that $(e_i\otimes e_j)_{i,j\in\N}$ is a Schauder basis in $\ell_p(v_n)\Ptens\ell_p(v_n)$ with respect to the square ordering. Therefore $u=\sum_{i,j=1}^{\infty}u_{ij}e_i\otimes e_j$ in $\ell_p(v_n)\Ptens\ell_p(v_n)$ hence also in $k_p(V)\Ptens k_p(V)$. Consequently, $(e_i\otimes e_j)_{i,j\in\N}$ is a basis in $k_p(V)\Ptens k_p(V)$. Since the coefficient functionals $e_i^*\colon x\mapsto x_i$ on $k_p(V)$ are obviously continuous, so are the functionals $e_i^*\otimes e_j^*$ on $k_p(V)\Ptens k_p(V)$. 
Thus $(e_i\otimes e_j)_{i,j\in\N}$ is a Schauder basis. Given $u=\sum_{i,j} u_{ij} e_i\otimes e_j\in k_p(V)\Ptens k_p(V)$, we clearly have $\pi(u)=\sum_i u_{ii} e_i$. Hence \[ \ker\pi=\overline{\spn}\{ e_i\otimes e_j : i\ne j\}. \] Therefore, to complete the proof, it suffices to construct a continuous linear projection $P$ on $k_p(V)\Ptens k_p(V)$ such that $P(e_i\otimes e_j)=\delta_{ij} e_i\otimes e_j$ for all $i,j$, where $\delta_{ij}$ is the Kronecker delta. Given $n\in\N$, let $\ell_p^0(v_n)$ denote the subspace of $\ell_p(v_n)$ consisting of finite sequences. Consider the bilinear map \[ B_n\colon\ell_p^0(v_n)\times\ell_p^0(v_n)\to \ell_p(v_n)\Ptens\ell_p(v_n),\qquad B_n(x,y)=\sum_{j=1}^\infty x_j y_j e_j\otimes e_j. \] We claim that $B_n$ is bounded. Indeed, using \cite[Lemma 2.22]{RR}, we obtain \[ \sum_jx_jy_je_j\otimes e_j=\int_0^1\Big(\sum_jr_j(t)x_je_j\Big)\otimes\Big(\sum_jr_j(t)y_je_j\Big)dt, \] where $(r_j)$ are the Rademacher functions on $[0,1]$. Hence \begin{align*} \|B_n(x,y)\|_{\ell_p(v_n)\Ptens\ell_p(v_n)} & \le\sup_{0\le t\le 1}\Big\|\sum_jr_j(t)x_je_j\Big\|_{\ell_p(v_n)}\Big\|\sum_jr_j(t)y_je_j\Big\|_{\ell_p(v_n)} \\ & =\|x\|_{\ell_p(v_n)}\|y\|_{\ell_p(v_n)}. \end{align*} Therefore $B_n$ is bounded. Extending $B_n$ by continuity to $\ell_p(v_n)\times \ell_p(v_n)$ and then linearizing, we obtain a bounded linear operator $P_n$ on $\ell_p(v_n)\Ptens\ell_p(v_n)$. Finally, letting $P=\ind_n P_n$, we obtain a continuous linear operator $P$ on $k_p(V)\Ptens k_p(V)$ with the required properties. In view of the above remarks, this completes the proof. \end{proof} \begin{prop} \label{middle-step} Let $1\le p<\infty$ and let $k_p(V)$ be a K\"othe co-echelon algebra. Suppose that $k_p(V)$ is topologically amenable. Then: \begin{mycompactenum} \item $V$ is eventually bounded; \item the product map $\pi\c k_p(V)\pt k_p(V)\to k_p(V)$ is open, and there is a commutative diagram \[\begin{tikzcd} k_p(V)\pt k_p(V)\arrow[swap]{d}{\pi}\arrow{r}{q} & k_p(V)\pt k_p(V)/\ker\pi\arrow[shift left]{dl}{\hat{\pi}} \\ k_p(V)\arrow{ur}{\hat{\pi}^{-1}} & \end{tikzcd}\] where $q$ is the quotient map. Moreover, \begin{equation} \hat{\pi}^{-1}(a)=\sum_{j=1}^{\infty}a_je_j\otimes e_j+\ker\pi\hspace{25pt}(a\in k_p(V)); \label{inverse-formula} \end{equation} \item $k_p(V)$ is nuclear. \end{mycompactenum} \end{prop} \begin{proof} (i) Suppose towards a contradiction that all the weights $v_n$ are unbounded. This implies that there is a sequence $j_l\nearrow\infty$ such that $v_k(j_l)\ge1$ for all $l\in\N$ and all $k\le l$. Define a dense range homomorphism \[ \theta\c k_p(V)\to\ell_p,\quad\theta(a):=(a_{j_l})_{l\in\N}, \] where we consider $\ell_p$ with the coordinate-wise multiplication. For every $k\in\N$ we get \[ \|\theta(a)\|_{\ell_p}^p=\sum_{l\le k}|a_{j_l}|^p+\sum_{l>k}|a_{j_l}|^p\le C_k\|a\|_{k,p}^p \] with $C_k:=\max\{1/v_k(j_l)^p\c\,l\le k\}+1$. Consequently, $\theta$ indeed takes $k_p(V)$ to $\ell_p$ and is continuous. Since $k_p(V)$ is topologically amenable, it follows from Propositions~\ref{am-top-am-banach} and \ref{top-am-dense-range} that the Banach algebra $\ell_p$ is amenable. This leads to a contradiction since $\ell_p$ is known to be non-amenable (see, e.g., \cite[Example 4.1.42(iii)]{D}). Therefore $V$ is eventually bounded. (ii) To prove that $\pi$ is open, it suffices to show that $\hat\pi$ is a topological isomorphism. Taking into account Lemma~\ref{ker_pi}, we see that $\hat\pi$ acts between complete barrelled DF-spaces and, clearly, has dense range. 
By Lemma~\ref{adjoint-surjective}, the proof will be complete if we show that $\hat\pi'$ is surjective. Towards this goal, take $\psi\in (k_p(V)\Ptens k_p(V)/\ker\pi)'$ and let $\psi_0=\psi\circ q$. Since $\psi_0$ vanishes on $\ker\pi$, we have \begin{equation} \psi_0(a\otimes b)=\sum_{j=1}^{\infty}a_jb_j\psi_0(e_j\otimes e_j)\hspace{25pt}(a,b\in k_p(V)). \label{derivation} \end{equation} Define now a linear map \[\delta\c k_p(V)\to(k_p(V)\otimes\CC)',\qquad\langle b,\delta(a)\rangle:=\psi_0(a\otimes b).\] In other words, $\delta$ is the image of $\psi_0$ under \eqref{adjass} (where $X=Y=k_p(V)$ and $Z=\CC$). Hence $\delta$ is continuous. Using \eqref{derivation}, we see that \[\la c,\delta(ab)\ra=\la ab\otimes c,\psi_0\ra =\la a\otimes bc,\psi_0\ra=\la c,\delta(a)\cdot b\ra\hspace{25pt}(a,b,c\in k_p(V)).\] Since the left action of $k_p(V)$ on $(k_p(V)\otimes\CC)'$ is trivial, we conclude that $\delta$ is a derivation. By Theorem \ref{topam-der}, there is $\phi\in(k_p(V))'$ such that \[\delta(a)=\phi\cdot a\hspace{25pt}(a\in k_p(V)).\] Hence for all $a,b\in k_p(V)$ we have \[ \la a\otimes b+\ker\pi,\hat{\pi}'(\phi)\ra =\la ab,\phi\ra =\la b,\phi\cdot a\ra =\la b,\delta(a)\ra =\la a\otimes b,\psi_0\ra =\la a\otimes b+\ker\pi,\psi\ra, \] that is, $\hat{\pi}'(\phi)=\psi$. Therefore the map $\hat{\pi}'$ is surjective. In view of the above remarks, this implies that $\pi$ is open. To prove \eqref{inverse-formula}, observe that for every $j\in\N$ we have \[\hat{\pi}^{-1}(e_j)=\hat{\pi}^{-1}\circ\hat{\pi}(e_j\otimes e_j+\ker\pi)=e_j\otimes e_j+\ker\pi.\] Since $(e_j)_{j\in\N}$ is a Schauder basis in $k_p(V)$, this implies \eqref{inverse-formula}. (iii) To get the nuclearity of $k_p(V)$ we repeat exactly the proof of \cite[Theorem 5.1]{KP-amenable}. We can indeed do so, since $\hat{\pi}^{-1}$ is a topological isomorphism not only in the case of amenability (which was the assumption in \cite{KP-amenable}) but also under the weaker assumption of topological amenability. \end{proof} \begin{theorem} \label{finite-order} Let $1\le p<\infty$, and let $k_p(V)$ be a K\"othe co-echelon algebra. TFAE: \begin{mycompactenum} \item $k_p(V)$ is topologically amenable; \item $k_p(V)$ is amenable; \item $k_p(V)$ is contractible; \item $k_p(V)$ is unital; \item $V$ is eventually in $\ell_1$; \item $V$ is eventually bounded, and $k_p(V)$ is nuclear. \end{mycompactenum} \end{theorem} \begin{proof} $\mathrm{(ii)\Leftrightarrow (iii)\Leftrightarrow (iv)}$: see \cite[Theorem 5.1]{KP-amenable}. $\mathrm{(iv)\Leftrightarrow (v)\Leftrightarrow (vi)}$: see \cite[Proposition 2.5]{BonDom}. $\mathrm{(ii)\Rightarrow (i)}$ follows from Corollary \ref{am-top-am}. $\mathrm{(i)\Rightarrow (vi)}$ follows from Proposition \ref{middle-step}. \end{proof} It turns out that the cases of K\"othe co-echelon algebras of order zero and infinity can be treated simultaneously. \begin{theorem} \label{thm:kinf} Let $p\in\{0,\infty\}$, and let $k_p(V)$ be a K\"othe co-echelon algebra. TFAE: \begin{mycompactenum} \item $k_0(V)$ is topologically amenable; \item $k_{\infty}(V)$ is topologically amenable; \item $V$ is eventually bounded. \end{mycompactenum} \end{theorem} \begin{proof} $\mathrm{(ii)\Rightarrow (iii)}$. If $k_{\infty}(V)$ is topologically amenable then we can follow the proof of Proposition \ref{middle-step} to show that $V$ is eventually bounded. Indeed, suppose towards a contradiction that all the weights $v_n$ are unbounded. 
This implies that there is a sequence $j_l\nearrow\infty$ such that $v_k(j_l)\ge 2^l$ for all $l\in\N$ and all $k\le l$. Define a dense range homomorphism \[ \theta\c k_\infty(V)\to\ell_1,\quad\theta(a):=(a_{j_l})_{l\in\N}, \] where we consider $\ell_1$ with the coordinate-wise multiplication. For every $k\in\N$ we get \[ \|\theta(a)\|_{\ell_1}=\sum_{l\le k}|a_{j_l}|+\sum_{l>k}|a_{j_l}|\le C_k\|a\|_{k,\infty} \] with $C_k:=\sum_{l\le k} (1/v_k(j_l))+1$. Consequently, $\theta$ indeed takes $k_\infty(V)$ to $\ell_1$ and is continuous. Since $k_\infty(V)$ is topologically amenable, it follows from Propositions~\ref{am-top-am-banach} and \ref{top-am-dense-range} that the Banach algebra $\ell_1$ is amenable. This leads to a contradiction since $\ell_1$ is known to be non-amenable (see, e.g., \cite[Example 4.1.42(iii)]{D}). Therefore $V$ is eventually bounded. $\mathrm{(iii)\Rightarrow (ii)}$. Without loss of generality, we may assume that $v_1\in V$ is bounded. We then have $\ell_\infty\subset\ell_\infty(v_1)$, and the inclusion is clearly bounded. Composing with the inclusion of $\ell_\infty(v_1)$ into $k_\infty(V)$, we obtain a continuous homomorphism \begin{equation} \theta\c\ell_{\infty}\to k_{\infty}(V),\quad \theta(a):=a. \label{infinite-order-density} \end{equation} We claim that $\theta$ has dense range. To this end, let $a\in k_{\infty}(V)\setminus\{ 0\}$, i.e., $0<\|a\|_{n,\infty}<\infty$ for some $n\in\N$. Using (W3), find $m\in\N$ and $C>0$ such that \[ \forall\,j\in\N\quad v_m(j)\le C v_n(j)^2. \] Fix $\ve>0$ and denote $J_1:=\{j\in\N\c\,v_n(j)\ge\frac{\ve}{2C\|a\|_{n,\infty}}\}$ and $J_2:=\N\setminus J_1$. Define a scalar sequence $b^{\ve}=(b_j)_j$ as \[b_j:=\begin{cases} a_j,\,\,j\in J_1 \\ 0,\,\,\,j\in J_2. \end{cases}\] For each $j\in J_1$ we have \[|b_j|\le\frac{2C}{\ve}\|a\|_{n,\infty}|a_j|v_n(j)\le\frac{2C}{\ve}\|a\|_{n,\infty}^2.\] Consequently, $b^{\ve}\in\ell_{\infty}$ with $\|b^{\ve}\|_{\ell_{\infty}}\le \frac{2C}{\ve}\|a\|_{n,\infty}^2$. If $J_2$ is empty, we conclude that $a=b^\ve$ is in the range of $\theta$. Otherwise observe that \[\|a-b^{\ve}\|_{m,\infty}=\sup_{j\in J_2}|a_j|v_m(j).\] For any $j\in J_2$ we get \begin{align*} |a_j|v_m(j)\le C|a_j|v_n(j)^2 & \le C\|a\|_{n,\infty}v_n(j) \\ & <C\|a\|_{n,\infty}\frac{\ve}{2C\|a\|_{n,\infty}}=\frac{\ve}{2}<\ve. \end{align*} Thus $\|a-b^{\ve}\|_{m,\infty}<\ve$. This implies that for a sequence $\ve_k\searrow0$ we get another sequence $b^k:=b^{\ve_k}\in\ell_{\infty}$ such that \[\lim_{k\to\infty}b^k=a\quad\text{in}\,\,\ell_{\infty}(v_m).\] But the topology of $\ell_{\infty}(v_m)$ is stronger than that of $k_{\infty}(V)$, thus \[\lim_{k\to\infty}b^k=a\quad\text{in}\,\,k_{\infty}(V).\] Consequently, the homomorphism \eqref{infinite-order-density} has dense range. Since $\ell_\infty$ is amenable by \cite[Lemma 7.10]{BEJ} (see also \cite[Theorem 5.6.2]{D}, \cite[Theorem VII.2.42]{H2}), the topological amenability of $k_\infty(V)$ now follows from Propositions \ref{am-top-am-banach} and \ref{top-am-dense-range}. $\mathrm{(i)\Leftrightarrow(iii)}$. This part is even easier since $(e_j)_{j\in\N}$ is a common Schauder basis for both $c_0$ and $k_0(V)$, thus the density of the range of $\theta$ in \eqref{infinite-order-density} is immediate. \end{proof} \section{Examples} \label{sect:examples} Let us now give some concrete examples which illustrate Theorems~\ref{finite-order} and~\ref{thm:kinf}. 
\begin{example} \label{example:phi_amen} Applying Theorem \ref{finite-order}, we see that the algebra $\varphi$ of finite sequences (see Example~\ref{example:phi}) is not topologically amenable. \end{example} \begin{example} \label{example:dual_power_amen} Consider the dual power series space $D\Lambda_R^p(\alpha)$, where $1\le p\le\infty$ and $R\in\{ 0\}\cup [1,+\infty)$ (see Example~\ref{example:dual_power}). If $R\ge 1$, then the respective weights $(r^{\alpha_j})_{j\in\N}$ are clearly unbounded for all $r>R$, so $D\Lambda_R^p(\alpha)$ is not topologically amenable in this case (see Theorems~\ref{finite-order} and~\ref{thm:kinf}). On the other hand, $(r^{\alpha_j})_{j\in\N}$ is bounded for each $0<r\le 1$, and so $D\Lambda_0^\infty(\alpha)$ is topologically amenable by Theorem~\ref{thm:kinf}. In fact, more is true. Indeed, all dual power series spaces $D\Lambda_R^p(\alpha)$ are Schwartz spaces by \cite[Theorem 4.9]{BMS}. Also, it is clear that $D\Lambda_0^\infty(\alpha)$ is unital. Now \cite[Theorem 12]{KP-contractible} implies that $D\Lambda_0^\infty(\alpha)$ is contractible. Finally, if $p<\infty$, then $D\Lambda_0^p(\alpha)$ is topologically amenable iff it is contractible iff $\sum_j r^{\alpha_j}<\infty$ for some $r>0$ (see Theorem~\ref{finite-order}). \end{example} \begin{example} The algebra $s'$ of sequences of polynomial growth is contractible. This follows from \cite[Proposition 7.3]{JLT_fmwk} and is explicitly mentioned in \cite[Example 3.1]{Pir_Oulu}, \cite[Example 6.6]{ZL}. Since $s'=D\Lambda_0^p(\alpha)$, where $\alpha_j=\log j$ (see Example~\ref{example:s'}), we see that the contractibility of $s'$ is also a special case of Example~\ref{example:dual_power_amen}. \end{example} \begin{example} \label{example:germs_amen} As another special case of Example~\ref{example:dual_power_amen}, we see that the Hadamard algebra $\cH(\ol{\DD}_R)$ of germs of holomorphic functions on the disc $\ol{\DD}_R$ (see Example~\ref{example:germs}) is not topologically amenable for $R\ge 1$. On the other hand, letting $R=0$, we see that the Hadamard algebra $\cH_0$ of holomorphic germs at zero is contractible. \end{example} The reader may have noticed that for all the algebras mentioned in Examples~\ref{example:phi_amen}--\ref{example:germs_amen} topological amenability is equivalent to contractibility. On the other hand, there are two obvious examples of topologically amenable co-echelon algebras that are not contractible --- namely, $c_0$ and $\ell_\infty$. To construct more examples of the same kind, let us first observe that the direct sum of two co-echelon algebras of the same order is also a co-echelon algebra. More precisely, if $V=(v_n)_{n\in\N}$ and $W=(w_n)_{n\in\N}$ are sequences of weights on index sets $I$ and $J$, respectively, then we have $k_p(I,V)\mathop{\oplus} k_p(J,W)\cong k_p(I\sqcup J,U)$, where the sequence $U=(u_n)_{n\in\N}$ of weights on $I\sqcup J$ is given by $u_n(i)=v_n(i)$ if $i\in I$, and $u_n(j)=w_n(j)$ if $j\in J$. Conversely, each partition $I=S\sqcup T$ induces a direct sum decomposition $k_p(I,V)\cong k_p(S,V_S)\mathop{\oplus} k_p(T,V_T)$, where $V_S$ and $V_T$ consist of the restrictions to $S$ and $T$ of weights from $V$. \begin{example} \label{example:dirsum} Let $A_1=c_0\mathop{\oplus} D\Lambda_0^\infty(\alpha)$ and $A_2=\ell_\infty\mathop{\oplus} D\Lambda_0^\infty(\alpha)$. In view of the above discussion, $A_1$ and $A_2$ are co-echelon algebras of order $0$ and $\infty$, respectively. By Theorem~\ref{thm:kinf}, $A_1$ and $A_2$ are topologically amenable. 
On the other hand, $A_1$ and $A_2$ are not Montel spaces, so they are not contractible by \cite[Theorems 12 and 13]{KP-contractible} (moreover, $A_1$ is not unital, which already implies that it is not contractible). \end{example} Of course, the above example is degenerate in a sense. Our next goal is to construct a ``genuine'' example of a co-echelon algebra of order $\infty$ which is topologically amenable and unital, but is not contractible. By ``genuine'' we mean that the algebra we are going to construct is not reduced to a direct sum of $\ell_\infty$ with a contractible algebra of the form $k_\infty(V)$ in the sense explained before Example~\ref{example:dirsum}. \begin{example} \label{example:NN} We fix a sequence $(c_j)_{j\in\N}$ of positive numbers such that $c_j\le 1$ for all $j$, and such that $c_j\to 0$ as $j\to\infty$. For each $n\in\N$ define a weight $v_n$ on $\N^2$ by \begin{equation} \label{v_ij} v_n(i,j)= \begin{cases} c_j^n, & i<n,\\ 1, & i\ge n. \end{cases} \end{equation} Clearly, the sequence $V=(v_n)_{n\in\N}$ satisfies (W1) and (W2). Furthermore, we have $v_{2n}\le v_n^2$ for all $n\in\N$, whence $V$ satisfies (W3). Thus $k_p(\N^2,V)$ is a K\"othe co-echelon algebra for all $p$. Since $V$ is eventually bounded, we see that $k_0(\N^2,V)$ and $k_\infty(\N^2,V)$ are topologically amenable (see Theorem~\ref{thm:kinf}). Moreover, $k_\infty(\N^2,V)$ is clearly unital. \end{example} For each $i\in\N$, let $L_i=\{ (i,j) : j\in\N\}\subset\N^2$. \begin{lemma} \label{lemma:fin_inter} If $S\subset\N^2$, then $k_\infty(S,V_S)$ is a Banach space if and only if $S\cap L_n$ is finite for all $n\in\N$. \end{lemma} \begin{proof} We will use the well-known fact that an (LB)-space $E=\ind_n E_n$ (where $E_n$ are Banach spaces, and $E_n\to E_{n+1}$ are bounded linear injections) is a Banach space if and only if the sequence $(E_n)$ stabilizes in the sense that there exists $N\in\N$ such that $E_n\to E_{n+1}$ is a topological isomorphism for all $n\ge N$ (this follows, for example, from \cite[19.5.(4)]{K1}). If $S\cap L_n$ is finite for all $n\in\N$, then so is $S_n=\bigcup_{k\le n} (S\cap L_k)$. We clearly have $v_n=v_{n+1}=1$ outside $S_n$. Letting \[ C_n=\max_{(i,j)\in S_n} \frac{v_n(i,j)}{v_{n+1}(i,j)}, \] we obtain $v_n\le C_n v_{n+1}$ everywhere on $S$. This readily implies that $\ell_\infty(S,v_n)\to\ell_\infty(S,v_{n+1})$ is a topological isomorphism. Hence $k_\infty(S,V_S)$ is a Banach space. Conversely, suppose that $S\cap L_k$ is infinite for some $k$. Since for each $n\ge k+1$ we have $v_n(k,j)=c_j^n$, and since $c_j\to 0$ as $j\to\infty$, we see that there is no $C>0$ such that $v_n\le C v_{n+1}$ on $S\cap L_k$. Therefore $\ell_\infty(S,v_n)\to\ell_\infty(S,v_{n+1})$ is not a topological isomorphism, and so $k_\infty(S,V_S)$ is not a Banach space. \end{proof} \begin{lemma} \label{lemma:no_decomp} There is no decomposition $\N^2=S\sqcup T$ such that $k_\infty(S,V_S)$ is a Banach space and such that $k_\infty(T,V_T)$ is a Montel space. \end{lemma} \begin{proof} Suppose that $S\subset\N^2$ is a subset such that $k_\infty(S,V_S)$ is a Banach space, and let $T=\N^2\setminus S$. By Lemma~\ref{lemma:fin_inter}, for each $n\in\N$ there exists $j_n\in\N$ such that $(n,j_n)\in T$. We clearly have $v_m(n,j_n)=1$ for all $n\ge m$. Letting $R=\{ (n,j_n) : n\in\N\}$, we conclude that \[ \inf_{(i,j)\in R}\frac{v_m(i,j)}{v_1(i,j)} =\inf_{n\in\N} \frac{v_m(n,j_n)}{v_1(n,j_n)} >0 \qquad (m\in\N). 
\] The existence of an infinite set $R\subset T$ with the above property means precisely that $k_\infty(T,V_T)$ is not Montel \cite[Theorem 4.7]{BMS}. \end{proof} Essentially the same argument applies to $k_0(\N^2,V)$. However, more is true. \begin{lemma} \label{lemma:no_decomp_2} Let $V$ be the weight sequence on $\N^2$ given by \eqref{v_ij}. Then $k_0({\N^2},V)$ is not complete, and the underlying lcs of $k_0(\N^2,V)$ is not isomorphic to a direct sum of a normed space and a dense subspace of a reflexive space. \end{lemma} \begin{proof} Recall from \cite[Theorem 2.7]{BMS} that, for each set $I$ and each sequence $V=(v_n)_{n\in\N}$ of weights on $I$ satisfying (W1) and (W2), the strong dual of $k_0(I,V)$ is topologically isomorphic to the K\"othe echelon space \[ \lambda_1(I,A)=\Bigl\{ x=(x_i)\in\CC^I : \| x\|_n=\sum_{i\in I} |x_i| a_n(i)<\infty\;\forall n\in\N\Bigr\}, \] where $a_n(i)=v_n(i)^{-1}$ and $A=(a_n)_{n\in\N}$ is the corresponding K\"othe set. Let now $I=\N^2$, let $V$ be given by \eqref{v_ij}, and let $E=k_0(\N^2,V)$. Assume, towards a contradiction, that $E\cong E_0\mathop{\oplus} E_1$, where $E_0$ is a normed space and $E_1$ is a dense subspace of a reflexive space. Hence we have a topological isomorphism $E'\cong E'_0\mathop{\oplus} E'_1$. Moreover, $E'_0$ is a Banach space, and $E'_1$ is a reflexive Fr\'echet space (see, e.g., \cite[23.5.(5) and 29.3.(1)]{K1}). Now recall from \cite[Corollary 25.14]{MV} that all reflexive Fr\'echet spaces are distinguished, i.e., their strong duals are barrelled. Clearly, each normed space is distinguished, and a direct sum of two distinguished spaces is distinguished. Therefore $E'$ is distinguished. On the other hand, it is easily seen that the K\"othe set $A=(a_n)_{n\in\N}$ on $\N^2$, where $a_n(i,j)=v_n(i,j)^{-1}$, satisfies the conditions of \cite[Corollary 27.18]{MV}. Hence $\lambda_1(\N^2,A)$ is not distinguished. This is a contradiction since $E'\cong\lambda_1(\N^2,A)$ (see above). Applying now \cite[Corollary 3.5 and Theorem 3.7]{BMS}, we conclude that $k_0(\N^2,V)$ is not complete. \end{proof} \begin{prop} Let $V$ be the weight sequence on $\N^2$ given by \eqref{v_ij}. Then \begin{mycompactenum} \item $k_0(\N^2,V)$ and $k_\infty(\N^2,V)$ are topologically amenable K\"othe co-echelon algebras; \item $k_\infty(\N^2,V)$ is unital; \item $k_0(\N^2,V)$ is not complete; \item there is no decomposition $\N^2=S\sqcup T$ such that $k_\infty(S,V_S)$ is a Banach algebra and such that $k_\infty(T,V_T)$ is a contractible algebra; \item the underlying lcs of $k_0(\N^2,V)$ is not isomorphic to a direct sum of a normed algebra and a contractible K\"othe co-echelon algebra. \end{mycompactenum} \end{prop} \begin{proof} Properties (i) and (ii) are mentioned in Example~\ref{example:NN}, while (iii) is contained in Lemma~\ref{lemma:no_decomp_2}. To prove (iv) and (v), observe that each contractible co-echelon algebra of order $p>0$ is a Montel space (for $p<\infty$ this follows from Theorem~\ref{finite-order}, while for $p=\infty$ this is \cite[Theorem 12]{KP-contractible}). Also, if a co-echelon algebra of order $0$ is contractible, then its completion is a Montel space \cite[Theorem 13]{KP-contractible}. Now (iv) and (v) follow from Lemmas~\ref{lemma:no_decomp} and~\ref{lemma:no_decomp_2}, respectively. \end{proof} We conjecture that (v) holds for $k_\infty(\N^2,V)$ as well.
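\begin{rem} For completeness, let us verify the inequality $v_{2n}\le v_n^2$ claimed in Example~\ref{example:NN}. Fix $n\in\N$ and $(i,j)\in\N^2$. If $i<n$, then $v_{2n}(i,j)=c_j^{2n}=v_n(i,j)^2$. If $n\le i<2n$, then $v_{2n}(i,j)=c_j^{2n}\le 1=v_n(i,j)^2$, since $0<c_j\le 1$. Finally, if $i\ge 2n$, then $v_{2n}(i,j)=1=v_n(i,j)^2$. \end{rem}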
{ "timestamp": "2020-12-17T02:18:55", "yymm": "2012", "arxiv_id": "2012.08956", "language": "en", "url": "https://arxiv.org/abs/2012.08956" }
\section{Introduction} In 2008 the US government released its first-ever evidence-based guidance on physical activity (PA), which acknowledged very strong evidence that physically active people have higher levels of health-related fitness, a lower risk profile for developing cardiovascular diseases, and lower rates of various chronic diseases than people who are not active \cite{tudor-locke}. A simple and inexpensive option for monitoring steps and cardiovascular health is the physical activity monitor. Step count is a quantifiable metric: individuals taking $\geq$5,000 steps/day had a substantially lower prevalence of adverse cardio-metabolic health indicators than those taking fewer steps \cite{schmidt}. In addition, step count has been shown to be inversely related to the risk of progression to diabetes \cite{kraus}. Collectively, reduced physical activity as measured by step count is a consistent predictor of death in chronic heart failure, possibly surpassing traditional laboratory-based exercise tests. Recent advances in deep learning have enabled more sophisticated modelling of patterns in sensor data to classify human physical activities. LSTM-based architectures have shown their efficacy in accurately predicting step count \cite{edel,chen}. Herein we describe a personalized step count algorithm that adapts to a subject's unique gait pattern with very little new subject-specific data collected from a different wearable device. We consider an LSTM deep learning algorithm to model human steps from two different wrist-worn devices containing accelerometers and gyroscopes. We observed that a general LSTM model trained on publicly available data performed well for most subjects but exhibited lower step count accuracy for some individuals. We hypothesized that the observed decrease in step count accuracy may be due to individual subject gait differences and the change in devices. Therefore, we show that an adaptive step count algorithm can be used to train personalized models to obtain accurate step counts across devices and subjects. \section{Materials and Methods} In this paper, we use two datasets: (1) the ``Pedometer'' dataset \cite{ryan}, collected at Clemson University using a Shimmer3 device from 30 subjects, and (2) a fully independent, analogous ``AZ'' dataset, collected at AstraZeneca using an ActiGraph GT9X Link from 5 different subjects. The participants were asked to perform the 6-minute walk test in an indoor looped setting. Next, the steps and distance were manually annotated using video recordings from an iPhone XR. Moreover, the rationale for developing this dataset was to test the ability of an LSTM step count algorithm to learn features that generalize across different device manufacturers and subjects. Accelerometer and gyroscope data from the wrist-worn devices were used to model step count as a classification problem with left and right step as the two classes. The sensor data was resampled to 15 Hz and windowed into $0.4\overline{6}$s segments based on tuning experiments and literature \cite{pachi}. The deep learning model architecture consists of a many-to-one 256-unit LSTM followed by two fully connected layers of 512 and 256 units, respectively, with dropout and ReLU activations. Finally, a softmax layer was used to predict left or right step, and every transition between the two classes is counted as a step. 
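For illustration, a minimal sketch of this architecture in Keras is given below. The input shape follows from the $0.4\overline{6}$s windows sampled at 15 Hz (7 time steps of the 6 sensor channels described in the next paragraph); the dropout rates and training settings are our own assumptions and are not taken from the experiments reported here.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

# Each input window: 0.46s at 15 Hz -> 7 time steps x 6 channels
# (tri-axial accelerometer + tri-axial gyroscope).
model = tf.keras.Sequential([
    layers.LSTM(256, input_shape=(7, 6)),   # many-to-one LSTM
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),                    # dropout rate is an assumption
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                    # dropout rate is an assumption
    layers.Dense(2, activation="softmax"),  # left vs. right step
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}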
The input features to the deep learning model architecture consist of individual windows with 3 readings from a tri-axial accelerometer (g) and 3 from a tri-axial gyroscope (deg/sec). In order to personalize our model without the need to train from scratch, we investigated different variations of transfer learning. Domain adaptation is one such paradigm where the source and target probability distributions are different, i.e. given a source feature space $X_S$ and target feature space $X_T$, source label space $Y_S$ and target label space $Y_T$, we have $X_S=X_T$, $Y_S = Y_T$, and $P(X_S) \neq P(X_T)$ \cite{avi}. In the context of our problem, $X_G=X_P^S$, $Y_G=Y_P^S$, and $P(X_G) \neq P(X_P^S)$, where \textit{G} represents a general model trained using the Pedometer dataset (source) and \textit{P} represents a personalized model for every subject \textit{S} in the AZ dataset (target). The metrics chosen to evaluate our model are class accuracy and step count accuracy as shown in (1) and (2). \begin{equation} accuracy_{class} = \left(\frac{N_R + N_L}{N_T}\right) * 100 \end{equation} \begin{equation} accuracy_{steps} = \left(1 - \left|\frac{step\_count_{predicted}-step\_count_{ground truth}}{step\_count_{ground truth}}\right|\right) * 100 \end{equation} where $N_R$ and $N_L$ are the numbers of correctly classified right and left step windows, respectively, and $N_T$ is the total number of windows. $step\_count_{ground truth}$ is obtained using visually confirmed annotations from the datasets and $step\_count_{predicted}$ is the step count predicted by the model. \section{Results} To establish a baseline, we obtained data from the Pedometer dataset \cite{ryan} and implemented the Piece-wise Aggregate Approximation (PAA) + merge approach described in \cite{feng}. After establishing a baseline, the LSTM deep learning model was designed as described in the previous section. There are three major experimental setups to assess the performance of our model. Firstly, a general model was trained and tested using only the Pedometer dataset (Shimmer3) with leave-two-subject-out cross-validation (CV); we call this LSTM-General I. Next, this pre-trained general model was validated using the entire AZ dataset (ActiGraph GT9X Link) as a test set, which is documented as experiment LSTM-General II. Finally, a personalized model, LSTM-Personalized, was domain-adapted from the pre-trained general model using a 2-step training process with only 30 seconds of subject-specific data, and the test dataset was created by removing this information from each subject. As shown in Table 1, the median $accuracy_{class}$ and median $accuracy_{steps}$ among subjects were recorded for the three settings. Since LSTM-General I generates different models for each CV fold, we calculate the mean of medians and use the best pre-trained model. The model was tuned based on window size, sensor modalities, device location, and the number of units in each layer. In addition to training with different combinations of layers, we also examined different LSTM architectures including stacked, bidirectional, and stateful. 
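For clarity, the metrics in (1) and (2) and the transition-based step counting can be computed as in the following sketch (function and variable names are ours, for illustration only):
\begin{verbatim}
import numpy as np

def accuracy_class(y_true, y_pred):
    # Eq. (1): share of correctly classified left/right windows, in percent.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

def count_steps(y_pred):
    # Every transition between the two classes counts as one step.
    y_pred = np.asarray(y_pred)
    return int(np.sum(y_pred[1:] != y_pred[:-1]))

def accuracy_steps(predicted, ground_truth):
    # Eq. (2): 100 minus the relative step-count error, in percent.
    return (1.0 - abs(predicted - ground_truth) / ground_truth) * 100.0
\end{verbatim}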
\begin{table} \caption{Summary of $\mathbf{accuracy_{class}}$ and $\mathbf{accuracy_{steps}}$ for all experimental setups.}\label{tab1} \begin{tabular}{|l|l|l|l|l|} \hline {\bfseries Approach} & {\bfseries Training Data} & {\bfseries Testing Data} & { $\mathbf{accuracy_{class}}$} & {$\mathbf{accuracy_{steps}}$}\\ \hline PAA + Merge (Baseline) & N/A & Pedometer & N/A & 89.73 \\ PAA + Merge (Baseline) & N/A & AZ & N/A & 86.05 \\ LSTM-General I & Pedometer & Pedometer & 85.98 $\pm\,2.46$ & 98.63 $\pm\,2.37$\\ LSTM-General II & Pedometer & AZ & 60.01 & 94.49\\ LSTM-Personalized & Pedometer + $\sim$AZ & AZ & 91.76 & 98.81\\ \hline \end{tabular} \end{table} As shown in Table 1, LSTM-General I improves $accuracy_{steps}$ by $\sim$10\% over the baseline, and the standard deviation indicates a left-skewed distribution. Moreover, the baseline tested is an approach that does not classify steps, so the $accuracy_{class}$ metric is not applicable. Additionally, there is a decrease in the performance of LSTM-General II, indicated by a 25.97\% and 4.14\% drop in $accuracy_{class}$ and $accuracy_{steps}$, respectively, which shows a lack of robustness in handling different devices. Next, we observe that LSTM-Personalized outperforms LSTM-General II, as indicated by a 31.75\% and 4.32\% increase in $accuracy_{class}$ and $accuracy_{steps}$, respectively. This implies that only a small amount of labelled data (represented as $\sim$AZ in Table 1) is necessary to improve performance. \begin{figure} \centering \includegraphics[width=6cm]{accuracy_class_panel.PNG} \includegraphics[width=6cm]{accuracy_stepcount_panel.PNG}\\ \hspace{0.6cm}(a) \hspace{5.7cm}(b)\\ \caption{Comparison of experiments at subject level: (\textbf{a}) $accuracy_{class}$ box plot. (\textbf{b}) $accuracy_{steps}$ box plot.} \end{figure} The boxplots in Fig. 1 show that the drastic decreases in the performance of LSTM-General II on outlier subjects are also adequately handled by LSTM-Personalized. Additionally, the median performance of LSTM-Personalized and LSTM-General I is similar in terms of both metrics, which suggests that information from different devices and different subjects can be handled by using personalized models. \section{Conclusion} In this paper, we proposed a personalized step counting algorithm built using an LSTM network and tuned with subject-specific data through domain adaptation. This shows that data collected from different subjects and devices can be modelled using our algorithm. Additionally, the results suggest that very little labelled data is necessary to construct personalized models. This work suggests that deep transfer learning has significant potential to achieve clinically acceptable accuracy in the assessment of physical activity for multiple disease scenarios. Moreover, accurately identifying class labels is important as it helps in tasks like distance estimation, indoor localization, and human activity recognition. Therefore, in the future, we plan to conduct a large-scale study in patient populations to evaluate other wearable sensor related tasks, disease endpoints, and unsupervised domain adaptation.
{ "timestamp": "2020-12-17T02:19:18", "yymm": "2012", "arxiv_id": "2012.08975", "language": "en", "url": "https://arxiv.org/abs/2012.08975" }
\section{Introduction} Approaches for Session-based Recommendation (SR) aim to dynamically recommend items to a user based on the sequence of ongoing interactions (e.g. item clicks) in a session. Several existing deep learning (DL) approaches for SR are designed to maximize the immediate (short-term) reward for recommendations, e.g. \citep{hidasi2018recurrent,liu2018stamp,wu2018session}. More recently, deep reinforcement learning (DRL) approaches have been proposed that maximize the expected long-term cumulative reward by looking beyond the immediate user recommendation, e.g. \citep{chen2018generative,bai2019model,zhao2018deep}. Such approaches can, for instance, optimize recommendations for long-term user engagement instead of maintaining a myopic objective of optimizing the immediate user recommendation. Model-free RL approaches for SR, e.g. \citep{zheng2018drn,zhao2018recommendations,zhao2018deep}, rely on large-scale data collection by interacting with a population of users \citep{chen2018generative}. These interactions can be potentially costly as the initial recommendations from an untrained RL agent may leave a poor impression on users, and can lead to churn. On the other hand, model-based RL approaches, e.g. \citep{chen2018generative,bai2019model,zhao2017deep}, rely on learning user-feedback simulation models that probe for rewards in previously unexplored regions of the state and action space. These approaches are sample-efficient in comparison to model-free approaches but have to rely on user-behavior model approximation from inherently biased logs \citep{bai2019model,yang2018unbiased}. An alternative to the aforementioned approaches is learning an RL-based recommendation agent solely from historical logs obtained from a different sub-optimal recommendation policy. Once a batch of data from potentially sub-optimal policies is gathered, an RL agent can be learned from this fixed dataset, overcoming the need for further feedback from the costly real-world interactions, or the need for often biased data-driven user-behavior models. Data can be collected by deploying less costly, easy-to-train, and fast-to-deploy heuristics- or rule-driven sub-optimal behavior policies, e.g. those based on k-nearest-neighbors \citep{jannach2017recurrent,garg2019sequence}, and then further used to improve upon the behavior policies. Recently, several DRL approaches have been proposed for such a \textit{Batch RL} (a.k.a. Offline RL) setting where the agent is trained from a batch of data without access to further interactions, e.g. \citep{fujimoto2018off,fujimoto2019benchmarking,kumar2019stabilizing,levine2020offline}. In this work, we study some of the SR-specific challenges in model-free batch RL approaches, and propose ways to mitigate them. For instance, logs from sub-optimal policies have rewards for only a sparse set of state-action pairs, which is known to cause overestimation bias or errors in Q-learning \citep{fujimoto2018off,fujimoto2019benchmarking}. This bias can be particularly severe in the SR setting where the action space can be very large (of the order of the number of items in the catalog, e.g. 1000s). Furthermore, each user acts as a different version of the environment for the RL agent, lending intrinsic stochasticity to the environment \citep{bellemare2017distributional}. This stochasticity is even more apparent in the SR setting, where no past information or demographic details of the user (environment) are available. 
The effects of this stochasticity are amplified in the batch RL setting, where logs from sub-optimal policies are biased \citep{yang2018unbiased,bai2019model}, and do not depict the true user behavior characteristics. For these reasons, robust estimation of the reward distribution from the environment (user) can be challenging in the batch learning scenario. Addressing these challenges in the batch learning setup for SR, we propose Batch-Constrained Distributional RL for Session-based Recommendations, or \textit{BCD4Rec}. The key contributions of this work can be summarized as follows: 1. In line with recent results from other domains \citep{fujimoto2019benchmarking,kumar2019stabilizing}, we first observe that the standard off-policy Q-learning approaches such as Deep Q-Networks (DQN) suffer in the batch RL setting for SR as well. In most cases, deep Q-learning using Deep Q Networks (DQN, \citep{mnih2013playing,van2016deep}) and other state-of-the-art non-RL approaches fail to improve upon the behavior policy from which the batch data was generated in the first place. We observe that these approaches can, at best, mimic the behavior policy. \newline 2. We propose BCD4Rec: an approach for SR that is suited for the batch-constrained deep RL setup. We adapt and build upon the recent advancements in Batch RL \citep{fujimoto2018off,fujimoto2019benchmarking} and Distributional RL \citep{bellemare2017distributional,dabney2017distributional} (detailed in Section \ref{sec:approach_iqn}), and extend them for large state and action spaces frequently encountered in SR by using state and action embeddings. \newline 3. Through empirical evaluation on a real-world and a simulated dataset, we demonstrate that BCD4Rec can significantly improve upon the performance of the behavior policy as well as other strong RL and non-RL baselines. BCD4Rec further exhibits several desirable properties: i. reducing overestimation error as measured by its ability to suggest meaningful items from the correct latent categories, and ii. overcoming the bias in the offline logs and recommending relevant items with reduced popularity bias. \section{Related Work} In recent years, heuristics-driven nearest neighbor-based approaches (e.g. \citep{jannach2017recurrent,ludewig2018evaluation,garg2019sequence}) as well as several deep learning approaches based on recurrent neural networks (RNNs) (e.g. \citep{hidasi2018recurrent}), graph neural networks (e.g. \citep{wu2018session,gupta2019niser}), attention networks (e.g. \citep{liu2018stamp}), etc. have been proposed for SR. The DL approaches provide state-of-the-art performance in the next-step interaction prediction task but are myopic in their recommendations and do not take longer-term goals into account. Several RL-based approaches based on MDPs \citep{shani2005mdp}, factored MDPs \citep{tavakol2014factored}, and approximate deep RL \citep{zhao2018deep,zhao2017deep,zhao2018recommendations} have been proposed for SR. These methods aim to optimize the long-term cumulative reward from users instead of next-step prediction tasks. However, these methods have been primarily proposed for online RL settings, and require costly experience collection by interacting with a population of users. As we show empirically, vanilla deep Q-learning methods, as used in the above approaches, struggle in the batch RL setting. Several approaches for batch RL have been proposed to explore data-efficient RL approaches in the absence of access to the real environment, e.g. 
\citep{sutton2016emphatic,fujimoto2018off,fujimoto2019benchmarking,jaques2019way,kumar2019stabilizing,levine2020offline,agarwal2020optimistic}. A promising approach to avoid overestimation error is to estimate (mimic) the unknown behavior policy, and use it to guide and constrain the action space of the RL agent \citep{kumar2019stabilizing,fujimoto2018off,fujimoto2019benchmarking}. Very recently, these ideas have been used in deep Q-learning based batch RL for recommender systems in Generator Constrained Q-Learning (GCQ) \citep{wang2020off}. However, it explicitly leverages user information, which is not available in the SR setup considered in this work. Our work can be seen as an extension of GCQ to the SR setup. Moreover, GCQ considers constraining the action space of the RL agent by using an item-frequency-based approach. As we show in this work, frequency-based approaches are prone to popularity bias \citep{yang2018unbiased}, and can struggle to recommend relevant long-tail items. Furthermore, having access to user history, GCQ does not address the stochastic nature of user behavior, which can be critical in the SR setting. To the best of our knowledge, this is the first study to demonstrate the advantage of using distributional RL \citep{dabney2017distributional,bellemare2017distributional} in the SR setting. Many algorithms for off-policy evaluation from logged bandit feedback that utilize ideas from importance sampling, inverse propensity scoring, and counterfactual risk minimization have been proposed \citep{li2011unbiased,swaminathan2017off}. However, these approaches have not been considered for session-based recommendations, which involve large action spaces and sequential decision-making. In this work, we look at the batch learning problem for session-based recommendations within the RL setup. \section{BCD4Rec\label{sec:approach_iqn}} Consider a Markov Decision Process (MDP) \citep{sutton2011reinforcement} defined by the tuple of five elements ($\mathcal{S, A}, P, R, \gamma$), where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P(s'|s,a)$ is the transition probability from state $s$ to $s'$, $R(s,a)$ is the random variable reward function, $\gamma \in (0,1)$ is the discount factor, $s,s' \in \mathcal{S}$ and $a \in \mathcal{A}$. Given a policy $\pi$, the value function for the agent following the policy is given by the expected return of the agent $Q^{\pi}(s,a) := \mathbb{E} \left[Z^\pi(s,a)\right]= \mathbb{E}_{\pi}\left [\sum_{t=0}^{\infty}\gamma^t R(s_t,a_t)\right ], $ where $s_t\sim P(\cdot|s_{t-1},a_{t-1})$, $a_t \sim \pi(\cdot|s_t)$, $s_0=s$, $a_0=a$. The recommender agent (RA) in the SR setting interacts with a user (environment) by sequentially choosing the impression list of items (or the slate) to be recommended over a sequence of time steps, so as to maximize its cumulative reward while receiving feedback from the user. The state $s= \{s^1,s^2,\ldots,s^L\} \in \mathcal{S}$ corresponds to the browsing history of the user consisting of the most recent $L$ interactions in the current session. An action $a=\{a^1, a^2, \ldots, a^l\} \in \mathcal{A}$ corresponds to a slate or impression list of $l$ items chosen by the agent as a recommendation to the user based on the current state $s$, from a set $\mathcal{I}$ of currently available items not clicked by the user previously. The transition probability $P(s'|s,a)$ from the current state $s$ to the next state $s'$ depends on the response of the user to the action $a$ taken by the RA in state $s$. 
The immediate reward $r$ given state $s$ and action $a$ is determined by the response of the user, e.g. a click on an item results in $r=1$ while a skip results in $r=0$. The goal of training the RA is to obtain a policy $\pi(s,\mathcal{I})$ that chooses an action $a$ (an impression list of items) from the set $\mathcal{I}$ given the current state $s$ such that the long-term expected reward (e.g. number of buys) is maximized. We consider the scenario where a single item\footnote{This can be extended and generalized to multiple items in a slate \citep{ie2019reinforcement,chen2018generative}.} $i_t \in \mathcal{I}$ is recommended to the user at time $t$, and the response/choice of the user $c_t$ is available to the RA, where the choice is made from a pre-defined set of user choices such as click, skip, etc. The immediate reward $r_t$ depends on the choice $c_t$. In addition, we consider a ``target choice'': the user choice whose frequency, when maximized, maximizes the return, e.g. clicks for the click-through rate. For example, if the target choice is click, then rewards of 0 for skip and 1 for click can be considered. Here, skip is considered a negative interaction whereas click is considered a positive interaction. A session till time $t$ can thus be represented as $S_t=\{(i_1,c_1,r_1),\ldots,(i_t,c_t,r_t)\}$. For computational reasons, the last $L$ positive (non-skip) interactions in a session are used to determine the current state of the agent. \subsection{State and Action Embeddings \label{sub:state} } Typically, the item-catalog size $|\mathcal{I}|$ is large (of the order of thousands, or even millions), resulting in an extremely large action space for the RA. Furthermore, the state space consisting of sequences of item interactions grows combinatorially in $|\mathcal{I}|$. We represent items as trainable vectors or embeddings in a dense $d$-dimensional space such that the embeddings of all the items constitute a lookup matrix $\mathbf{I} \in \mathbb{R}^{|\mathcal{I}| \times d} $, where the $j$-th row of $\mathbf{I}$ corresponds to item $i_j$ represented as $\mathbf{i}_j \in \mathbb{R}^d $ ($j = 1 \dots |\mathcal{I}|$). Any action $a \in \mathcal{A}$ corresponds to an item; the corresponding action embedding is therefore $\mathbf{a} \in \mathbb{R}^d$. In practice, we find it very useful to initialize the item embeddings, i.e. the matrix $\mathbf{I}$, by pre-training a supervised model for next-item prediction (refer Section \ref{sec:exp}). The previously clicked or interacted items in a session are used to predict the next item using Session-based Recommendation with Graph Neural Networks (SRGNN) \citep{wu2018session}. The item embedding matrix after training SRGNN is used to initialize $\mathbf{I}$. Other alternatives include a simple word2vec-like approach \citep{goldberg2014word2vec,zhao2018deep} where items are analogous to words. The state $s=\{s^1,s^2,\ldots,s^L\}$ of the agent is obtained from the sequence of $L$ most recent non-skip interactions\footnote{As in other approaches \citep{ie2019reinforcement,chen2018generative,bai2019model}, we update the state of the agent only when the action is a non-skip action.} (e.g. clicked items) in a session $S_t$, as illustrated in the sketch below.
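The following minimal Python sketch summarizes this bookkeeping: the reward scheme, the session $S_t$, and the extraction of the state from the last $L$ non-skip interactions. The names (\texttt{Interaction}, \texttt{get\_state}) and the reward dictionary are illustrative assumptions rather than part of our implementation.
\begin{verbatim}
from dataclasses import dataclass

# Hypothetical reward scheme matching the DN setup: skip:0, click:1, buy:5.
REWARDS = {"skip": 0, "click": 1, "buy": 5}

@dataclass
class Interaction:
    item_id: int  # i_t: the recommended item
    choice: str   # c_t: user response, e.g. "skip", "click", "buy"
    reward: int   # r_t, determined by the choice c_t

def get_state(session, L):
    """State s: item ids of the L most recent positive (non-skip) interactions."""
    positives = [x.item_id for x in session if x.choice != "skip"]
    return positives[-L:]

# A session S_t = {(i_1,c_1,r_1), ..., (i_t,c_t,r_t)}:
session = [Interaction(3, "click", REWARDS["click"]),
           Interaction(7, "skip", REWARDS["skip"]),
           Interaction(5, "buy", REWARDS["buy"])]
print(get_state(session, L=2))  # -> [3, 5]
\end{verbatim}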
The corresponding state embedding $\mathbf{s}$ is computed from the item embedding vectors $\mathbf{s}^k \in \mathbf{I}$ ($k=1\ldots L$) via a bidirectional gated recurrent unit (BiGRU) network \citep{cho2014properties} with parameters $\boldsymbol{\theta}$ as $\mathbf{s} = \mathbf{W} \mathbf{h}_L + \mathbf{b}$, where $\mathbf{h}_L=BiGRU(\mathbf{s}^1,\ldots,\mathbf{s}^L;\boldsymbol{\theta})$ is the final hidden state of the BiGRU, and $\mathbf{W} \in \mathbb{R}^{d \times d}$ and $\mathbf{b} \in \mathbb{R}^d$ are the parameters of the final linear layer. These are eventually used to get the value estimates, as detailed later in Section \ref{ssec:distRL}. \subsection{Constraining the Action Space} Standard off-policy DRL algorithms like Double Q-learning (hereafter referred to as DQN) \citep{van2016deep} assume further interactions with the current policy while training with a history of experiences generated by previous iterations of the policy. In other words, the initial batch data $\mathcal{B}$ obtained from a behavior policy can be subsequently updated by gathering more experience by interacting with the environment or a model of the environment. In contrast, the batch RL setup additionally assumes that the data set $\mathcal{B}$ is fixed, and no further interactions with the environment are allowed while training. Due to this fixed and limited batch data $\mathcal{B}$, batch RL is not guaranteed to converge. When selecting an action $a$ such that $(s, a, s')$ is distant from data contained in the batch $\mathcal{B}$, the estimate $Q_{\boldsymbol{\theta}'}(s', a' )$ (from the target network with parameters $\boldsymbol{\theta}'$ in Double DQN \citep{van2016deep}) may be arbitrarily erroneous, affecting the learning process. This \textit{overestimation bias} \citep{fujimoto2018off}, resulting from a mismatch between the distribution of data induced by the current policy and the distribution of data contained in $\mathcal{B}$, implies slower convergence of learning due to the difficulty of learning a value function for a policy which selects actions not contained in the batch. To avoid the overestimation bias, we constrain the action space of the agent for a state $s$ such that it only chooses actions that are likely under the unknown behavior policy $\pi_b$ from which $\mathcal{B}$ is generated, as used in discrete batch-constrained Q-learning (BCQ) \citep{fujimoto2019benchmarking}. The action for the next state is selected under the guidance of a state-conditioned generative model $\mathcal{M}$ that approximates the policy $\pi_b$ such that the probability $p_{\mathcal{M}}(a|s)\approx \pi_b(a|s)$. Such a behavior cloning neural network is trained in a supervised learning fashion with a cross-entropy loss to solve the $|\mathcal{I}|$-way classification task, $\mathcal{L}_\omega(s,a) = - \text{log}(p_{\mathcal{M}}(a|s))$, over all pairs $(s,a)$ taken from tuples $(s,a,r,s')\in \mathcal{B}$, where $ p_{\mathcal{M}}(a|s;\omega)=\frac{\text{exp}(\mathbf{s}^T\mathbf{a})}{\sum_{i \in \mathcal{I}}\text{exp}( \mathbf{s}^T\mathbf{i})}, $ $\omega$ being the parameters of the neural network. The action space of the agent (recommendable items) is restricted to those actions that satisfy $p_\mathcal{M}(a'|s') > \beta$, $\beta\in[0,1)$, as detailed in the next subsection. The training of $\mathcal{M}$ is equivalent to training a deep neural network for SR in a supervised manner, e.g. \citep{liu2018stamp,wu2018session}, where the goal is to predict the next interaction item for a user given past interactions.
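A minimal PyTorch-style sketch of this behavior-cloning component follows; the function names and tensor shapes are our illustrative assumptions rather than a prescribed implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def behavior_cloning_loss(state_emb, action_idx, item_emb):
    # state_emb:  (batch, d) state embeddings s
    # action_idx: (batch,)   index of the logged action a in the catalog
    # item_emb:   (|I|, d)   item embedding lookup matrix I
    logits = state_emb @ item_emb.t()           # s^T i for every item i
    return F.cross_entropy(logits, action_idx)  # mean of -log p_M(a|s)

def allowed_actions(state_emb, item_emb, beta):
    # Boolean mask of actions satisfying p_M(a|s) > beta (batch-constraining).
    probs = torch.softmax(state_emb @ item_emb.t(), dim=-1)
    return probs > beta
\end{verbatim}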
The only difference is that while training $\mathcal{M}$, the interactions correspond not only to the positive-feedback items but also to the skipped items, or items with any response type for that matter. In this work, we choose SRGNN \citep{wu2018session} (a state-of-the-art graph neural networks based approach for SR) as the neural network architecture for $\mathcal{M}$. \subsection{Distributional RL Agent\label{ssec:distRL}} It has been recently shown that, in practice, better policies can be learned by estimating the value distribution $Z^\pi$ instead of estimating just the expectation of the value $Q^\pi$ \citep{morimura2010nonparametric,bellemare2017distributional,dabney2017distributional}, especially when the environment is stochastic. Learning the value distribution matters in the presence of approximations, which are common in deep RL approaches (e.g. neural networks as value function approximators) \citep{bellemare2017distributional}. This can be particularly important in the case of SR, where the environment is highly stochastic given the variety of users with varying interests and behaviors (refer Section \ref{sec:results}). BCD4Rec is, therefore, trained in a distributional RL fashion using Implicit Quantile Networks (IQN) \citep{dabney2018implicit}, where $K$ samples from a base distribution, e.g. $\tau \sim U([0,1])$, are reparameterized to $K$ quantile values of a target distribution. The estimate of the action-value for the $\tau$-quantile is given by $Q^\tau_{\boldsymbol{\theta}}(s,a) = \mathbf{s}_\tau^T \mathbf{a}$, where $\mathbf{s}_\tau =\mathbf{s} \odot\phi(\tau)$ ($\odot$ is the Hadamard product) for some differentiable function $\phi$ with $\phi: [0,1] \rightarrow \mathbb{R}^d$ computing the embedding for the quantile $\tau$. Note that this form of the value function allows us to efficiently compute the values for all actions (items) in parallel via multiplication of the item-embedding lookup matrix $\mathbf{I}$ and the vector $\mathbf{s}_\tau$, i.e. using $\mathbf{I}\mathbf{s}_\tau$, indicating the importance of considering latent state and action spaces to handle high-dimensional settings like SR. The $j$-th dimension of $\phi(\tau)$ is computed as: $\phi_j(\tau):=ReLU(\sum_{i=0}^{n-1}cos(\pi i \tau)w_{ij} + b_j)$ where $w_{ij}$ and $b_j$ for $i={0,\ldots,n-1}$ and $j={0,\ldots,d-1}$ are trainable parameters. The final loss for training BCD4Rec is computed over all $K^2$ pairs of quantiles based on $K$ estimates each from the current network with parameters $\boldsymbol{\theta}$ and the target network with parameters $\boldsymbol{\theta}'$, and by using $\mathcal{M}$ to constrain the action space as follows: \begin{equation}\label{eq:loss_bcd} \begin{aligned} \mathcal{L}_{BCD}(\boldsymbol{\theta}) &= \frac{1}{K^2}\mathbb{E}_{s,a,r,s'} \left [\sum_\tau \sum_{\tau'} l_\tau \left ( r + \gamma Q^{\tau'}_{\boldsymbol{\theta}'}(s',a') - Q^\tau_{\boldsymbol{\theta}}(s,a) \right )\right ], \\ a' &= \argmax_{ a'|p_\mathcal{M}(a'|s') > \beta} \frac{1}{K}\sum_\tau Q^{\tau}_{\boldsymbol{\theta}}(s',a'), \end{aligned} \end{equation} where $\tau$ and $\tau'$ are sampled from the uniform distribution $U([0,1])$, and $l_\tau$ is the quantile Huber loss $l_\tau(\delta) = |\tau - \mathbb{I}({\delta < 0})|L_\kappa(\delta)$ with Huber loss \citep{huber1964robust} $L_\kappa$: $L_\kappa(\delta) = 0.5 \delta^2$ if $|\delta| \leq \kappa$, and $\kappa(|\delta| - 0.5\kappa)$ otherwise.
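The quantile machinery above can be summarized in a short sketch (PyTorch assumed; helper names and shapes are illustrative): the cosine embedding $\phi(\tau)$, the parallel computation of $Q^\tau$ for all items via $\mathbf{I}\mathbf{s}_\tau$, and the batch-constrained greedy action selection used in Eq.~\eqref{eq:loss_bcd}.
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def quantile_embedding(tau, w, b):
    # phi_j(tau) = ReLU(sum_{i=0}^{n-1} cos(pi*i*tau) w_ij + b_j)
    # tau: (K,) quantile samples; w: (n, d); b: (d,). Returns (K, d).
    i = torch.arange(w.shape[0], dtype=tau.dtype)
    cos_feats = torch.cos(math.pi * i[None, :] * tau[:, None])  # (K, n)
    return F.relu(cos_feats @ w + b)

def quantile_q_values(state_emb, item_emb, tau, w, b):
    # Q^tau(s, a) = (s * phi(tau))^T a, for all items in parallel via I s_tau.
    s_tau = state_emb[None, :] * quantile_embedding(tau, w, b)  # (K, d)
    return s_tau @ item_emb.t()                                 # (K, |I|)

def constrained_greedy_action(state_emb, item_emb, tau, w, b, probs, beta):
    # argmax of the quantile-mean Q-value over actions with p_M(a|s') > beta.
    q_mean = quantile_q_values(state_emb, item_emb, tau, w, b).mean(dim=0)
    q_mean = q_mean.masked_fill(probs <= beta, float("-inf"))
    return int(q_mean.argmax())
\end{verbatim}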
An estimate of the value can be recovered through the mean over the quantiles, and the policy $\pi$ is defined by greedy selection over this value: $\pi(s) = \argmax_{a} \frac{1}{K} \sum_\tau Q^\tau_{\boldsymbol{\theta}}(s,a)$. Refer Appendix \ref{sec:app_a} for a summary of the training procedure for BCD4Rec. \section{Experimental Evaluation\label{sec:exp}} We evaluate our approach on two domains: \textbf{Diginetica (DN)} is a real-world offline dataset from the CIKM Cup 2016 data challenge. We pre-process the data in the same manner as in \citep{wu2018session}. It contains six months of user interaction logs. The details related to the items clicked, bought, and skipped in each session are available. We use the first 60 days of data to train a user-behavior simulation environment that we use as a proxy for online testing, and the next two disjoint sets of 30-day data each to i. train the recommender agents, and ii. test the policies in the offline setting. During training, the rewards considered for the different action types are skip:0, click:1, buy:5. \textbf{RecSim} \citep{ie2019recsim,ie2019reinforcement} is a recently proposed simulation environment for testing RL agents for SR. We consider batch learning on data obtained from three behavior policies with different click-through rates (CTR), referred to as RecSim-x (x=1,2,3); these policies correspond to checkpoints at different iterations of an IQN agent trained in the online setting using the simulator. \textbf{Baselines Considered \label{sec:baselines}} We consider several non-RL and RL baselines: i. \textbf{Heuristic-based approaches}: \textit{MostPop}, \textit{SKNN} \citep{jannach2017recurrent} and \textit{STAN} \citep{garg2019sequence} are popular baselines that recommend items in decreasing order of their popularity (MostPop), or based on nearest neighbors defined in terms of the similarity of the on-going session to sessions in historical logs (SKNN and STAN). ii. \textbf{Supervised Deep Neural Networks}: We consider two state-of-the-art SR approaches using supervised learning, namely \textit{GRU} \citep{hidasi2018recurrent} and \textit{SRGNN} \citep{wu2018session}, to predict the next positive interaction. iii. \textbf{Deep RL agents}: We consider double Q-learning \citep{van2016deep} (\textit{DQN}) and its extension \textit{BCQ} \citep{fujimoto2019benchmarking} that uses batch-constraining (BC). Among distributional RL methods, we consider QRDQN \citep{dabney2017distributional}, its batch-constrained version QRBCQ, and IQN \citep{dabney2018implicit}. Buy Rate (BR) and Click Through Rate (CTR) are the chosen performance metrics for online evaluation of agents in DN and RecSim, respectively. Coverage@3 (C@3) is used as an additional metric to study the diversity of the recommendations made. Refer Appendix \ref{sec:app_a}, \ref{sec:app_b}, \ref{sec:eval_metrics}, and \ref{sec:app_d} for more details on baselines, datasets, performance metrics, and hyperparameter settings, respectively. \subsection{Results and Observations\label{sec:results}} \begin{table*} \centering \footnotesize \caption{Comparison of various approaches with the proposed BCD4Rec on online evaluation metrics BR, CTR and Coverage (C@3). BCD4Rec significantly improves upon the behavior policy, and has performance closest to the best achievable online policy. (Though higher coverage indicates higher diversity in recommendations and less bias, it does not necessarily imply better performance; an exploration policy will have very poor CTR/BR while having very high coverage.)
Note: numbers are reported as average over 3 runs. \label{tab:online}} \scalebox{0.85}{ \centering \begin{tabular}{l|c c | c c| c c| c c} \hline & \multicolumn{2}{|c|}{\textbf{Diginetica (DN)}} & \multicolumn{2}{|c|}{\textbf{RecSim-1}} & \multicolumn{2}{|c|}{\textbf{RecSim-2}} & \multicolumn{2}{|c}{\textbf{RecSim-3}}\\ \hline \textbf{Method} & \textbf{BR} & \textbf{C@3} & \textbf{CTR} &\textbf{C@3} &\textbf{CTR} & \textbf{C@3} & \textbf{CTR} &\textbf{C@3} \\ \hline MostPop & 2.0 &\textbf{24.3} &39.5 & 30.0 &41.5 &30.0 &40.3 &30.0 \\ SKNN & 2.0 & 5.3 & 58.1 & 79.0 & 60.2 & 73.5 & 76.1 &66.0 \\ STAN &2.0 & 6.1 & 61.3 & 79.5 & 66.7 & 75.0 & 77.9 & 67.0 \\ \hline GRU &4.2 $\pm$ 0.6 & 11.8 $\pm$ 0.9 & 63.5 $\pm$ 1.7& 75.0 $\pm$ 1.8 & 69.5 $\pm$ 1.1& 77.3 $\pm$ 2.4 & 74.1 $\pm$ 0.9 &68.5 $\pm$ 2.3\\ SRGNN & 3.1 $\pm$ 0.9 &11.1 $\pm$ 0.2 &63.1 $\pm$ 1.3 &75.5 $\pm$ 4.4 &67.8 $\pm$ 2.0&77.2 $\pm$ 2.3 & 77.7 $\pm$ 2.0& 75.0 $\pm$ 2.8\\ \hline DQN & 4.8 $\pm$ 0.4 &7.8 $\pm$ 0.5 &50.7 $\pm$ 4.0 &66.2 $\pm$ 2.3 &52.1 $\pm$ 3.1 &65.8 $\pm$ 9.5 & 54.0 $\pm$ 2.7 & 77.5 $\pm$ 3.3\\ BCQ &13.3 $\pm$ 1.1 &10.7 $\pm$ 0.5 &65.9 $\pm$ 1.5& 74.5 $\pm$ 3.6 & 72.3 $\pm$ 2.3 &71.5 $\pm$ 0.4 &76.3 $\pm$ 1.1 &74.9 $\pm$ 2.0\\ QRDQN &14.9 $\pm$ 1.3 &\underline{16.7} $\pm$ \underline{1.2} & 62.9 $\pm$ 1.8 &74.3 $\pm$ 4.3 & 63.9 $\pm$ 1.5 & 73.2 $\pm$ 3.6 &78.2 $\pm$ 1.3 &68.7 $\pm$ 2.8\\ QRBCQ &15.3 $\pm$ 2.3 &14.9 $\pm$ 0.6 &70.8 $\pm$ 0.9 &74.5 $\pm$ 4.4 &74.7 $\pm$ 0.6 &73.8 $\pm$ 6.8 &81.4 $\pm$ 1.9 &74.8 $\pm$ 2.0 \\ IQN &\underline{23.5} $\pm$ \underline{2.0} &12.4 $\pm$ 1.4 &\underline{73.5} $\pm$ \underline{0.7} &\underline{79.5} $\pm$ \underline{4.0} &\underline{75.9} $\pm$ \underline{2.5} &\textbf{79.8 $\pm$ 2.8} &\underline{81.5} $\pm$ \underline{2.1} &\underline{78.3} $\pm$ \underline{2.8} \\ BCD4Rec &\textbf{24.1} $\pm$ \textbf{1.7} &14.2 $\pm$ 0.3 &\textbf{76.4 $\pm$ 0.8} &\textbf{79.7 $\pm$ 0.8} & \textbf{79.3 $\pm$ 1.2} &\underline{77.8} $\pm$ \underline{4.3} &\textbf{83.2 $\pm$ 1.7} &\textbf{82.1 $\pm$ 1.8}\\ \hline \textit{Behavior} & \textit{4.1} &\textit{23.0} &\textit{63.1} &\textit{97.0} &\textit{68.3} &\textit{90.0} &\textit{79.9} &\textit{84.0} \\ \textit{Online} & \textit{26.6}& \textit{21.1}& \textit{85.9}&\textit{100.0} & \textit{85.9}&\textit{100.0} &\textit{85.9} &\textit{100.0} \\ \hline \end{tabular}} \end{table*} \begin{figure*} \centering \subfigure[\scriptsize Category Prediction Acc. (DN)]{\includegraphics[width=0.32\textwidth]{figures/digi_extrapolation_category.pdf}} \subfigure[\scriptsize Pop. Bias (DN)]{\includegraphics[width=0.32\textwidth]{figures/digi_popularity_buy.pdf}} \subfigure[\scriptsize Pop. Bias (RecSim-2)]{\includegraphics[width=0.32\textwidth]{figures/recsim_popularity_click.pdf}} \caption{\textbf{(a)} Using the category of the first item in a session (episode) as the target, we evaluate the percentage accuracy of recommending items from the same category throughout the session. BCD4Rec almost perfectly learns to recommend items from the (latent) category of interest, indicating better handling of overestimation bias. (We depict the most popular 200 categories out of 825 for clarity.) \textbf{(b),(c)} Popularity Bias\label{fig:extrapolation}: BCD4Rec learns to recommend a diverse set of items and has an item recommendation distribution close to that of the online policy. DQN, BCQ and QRDQN show very high popularity bias, and in turn, poor diversity in recommended items for DN where the action space is large.
(For DN, we depict the most popular 850 items out of $\approx$6.7k for clarity.)} \end{figure*} 1. From Table \ref{tab:online}, we observe that across all tasks, DQN and all non-RL baselines (heuristics and supervised learning methods) struggle to improve upon the behavior policy. This observation is in line with results from other domains where DQN is shown to suffer in the ``truly'' off-policy batch-constrained learning setting \citep{fujimoto2018off,fujimoto2019benchmarking}, and even the heuristic or supervised methods can at best mimic the behavior policy. Importantly, \textit{BCD4Rec significantly improves upon the behavior policy in all cases}, substantially bridging the gap between the behavior policy and the online (best possible) results. By estimating the return distribution rather than the expected return, the distributional RL methods (QRDQN and IQN) improve upon DQN even without BC. (Refer Appendix \ref{sec:app_e} for more insights and results on the ability of various approaches to learn the return distribution.) This shows the efficacy of distributional RL methods in the batch learning setting for SR, previously unexplored in the literature. 2. We observe that BC improves the performance of all RL methods including DQN, QRDQN and IQN. The gains from batch-constraining are largest for BCQ vs DQN, and smallest for the distributional RL method BCD4Rec vs IQN, indicating that BC is especially critical when using non-distributional RL methods. Overall, our results indicate that it is \textit{possible to improve upon the behavior policy without having further access to the costly interactions in the real environment} even for the challenging large state and action space settings of SR. 3. Apart from the evaluation metrics, we study the following property of the learned agents as a proxy to evaluate overestimation bias: Each item in the DN dataset has a category associated with it (825 categories in total). In each session, the user interacts with items from only one category. Given one or more positive interactions in a session, a good RA should learn to discard items from irrelevant categories by assigning low value estimates to them. This is a challenging problem, as the action space is large ($\approx$6.7k items), and erroneously high value estimates even for one irrelevant item from a different category can lead to significant overestimation bias, due to a subsequent build-up of error in the absence of further corrections, which are feasible only via further exploration in the online setting. Figure \ref{fig:extrapolation}(a) demonstrates that the DQN, QRDQN and IQN agents are more prone to recommending items from non-relevant categories, and are, in turn, overestimating the values for irrelevant actions (items from wrong categories). In contrast, their counterparts using constrained action spaces (BC) can additionally rely on $\mathcal{M}$ during training to \textit{guide the agents to learn correct value estimates for the relevant actions by enforcing constraints on the action space, in turn tackling overestimation bias}. 4. From Figs. \ref{fig:extrapolation} (b) and (c), we observe that distributional RL-based agents not only improve the BR and CTR (i.e. make relevant recommendations), but also reduce the skewness (popularity bias) in the distribution of recommended items, thus depicting more diversity in the recommended items. DQN, BCQ and QRDQN depict high popularity bias in RecSim as well as DN.
In DN, where the action space is large, DQN, BCQ and QRDQN have significantly higher bias (more skew) than even the behavior policy. The distributional RL methods BCD4Rec, IQN and QRBCQ have low popularity bias, close to that of the online (best achievable) agent. \begin{figure*} \centering \subfigure[\scriptsize DN]{\includegraphics[width=0.23\textwidth]{figures/dn_pretrain_bar.pdf}} \subfigure[\scriptsize RecSim-2]{\includegraphics[width=0.23\textwidth]{figures/rec_pretrain_bar.pdf}} \subfigure[\scriptsize DN]{\includegraphics[width=0.22\textwidth]{figures/dn_iqn_bcq.pdf}} \subfigure[\scriptsize RecSim-2]{\includegraphics[width=0.23\textwidth]{figures/recsim2_iqn_bcq.pdf}} \caption{\textbf{(a),(b)} Pre-training of item embeddings leads to significant gains in BR and CTR. Gains are higher for DN where the action space is larger. \textbf{(c),(d)} Scatter plots depicting the correlation between online (BR / CTR) and offline (R@3 / $\Bar{Q}$) evaluation metrics for hyperparameter selection. We found the average Q-value estimate $\Bar{Q}$ to be a more reliable metric than the commonly used Recall ($R@3$). \label{fig:thresholding}} \end{figure*} 5. Pre-training item embeddings using SRGNN on the positive interaction data from the offline logs improves the performance of all the agents. The gains are higher in DN, where the action space is larger ($\approx$6.7k) compared to RecSim (200), suggesting the importance of pre-training the item embeddings when dealing with large action spaces, as shown in Figs. \ref{fig:thresholding} (a) and (b). 6. Offline hyper-parameter selection is an unsolved problem in the batch RL literature \citep{paine2020hyperparameter}. We find that the commonly used Recall metric \citep{bai2019model} is not reliable for hyperparameter selection. On the other hand, the average Q-value on a hold-out validation set ($\Bar{Q}$) from $\mathcal{B}$ correlates better with the online performance metrics (refer Appendix \ref{sec:eval_metrics} for a detailed explanation of the metrics). For varying hyperparameter values (the BC threshold $\beta$, the number of cosines $n$, and the number of quantiles $K$), we compare the performance of BCD4Rec in terms of both offline and online performance metrics, and observe that $\Bar{Q}$ is more strongly correlated with the online evaluation metrics CTR and BR than $R@3$, as shown in Figs. \ref{fig:thresholding} (c) and (d). We have similar observations for the BCQ and QRBCQ agents (refer Appendix \ref{sec:app_d}).
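For concreteness, a minimal sketch of computing $\Bar{Q}$ for offline model selection follows; the \texttt{agent.q\_values} interface is an assumption for illustration only, not our actual API.
\begin{verbatim}
import torch

@torch.no_grad()
def q_bar(agent, holdout):
    # Average Q-value estimate over a hold-out validation set from B.
    # `holdout` holds (s, a, r, s') transitions; `agent.q_values(s)` is an
    # assumed interface returning the quantile-mean Q-estimate per item.
    vals = [float(agent.q_values(s)[a]) for s, a, _, _ in holdout]
    return sum(vals) / max(len(vals), 1)

# Hypothetical offline model selection: keep the hyperparameter setting whose
# trained agent attains the highest Q-bar on the hold-out set.
# best_agent = max(candidate_agents, key=lambda ag: q_bar(ag, holdout))
\end{verbatim}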
\section{Conclusion and Future Work\label{sec:discussion}} In this paper, we have studied the problem of batch reinforcement learning (RL) for session-based recommender systems (SR). Building upon recent advances in distributional RL and batch RL, we have proposed a robust approach for batch-constrained distributional RL for SR that does not require exploration. We have demonstrated the efficacy of the proposed approach on a publicly available simulation environment and a real-world dataset. Our results suggest that: i. distributional RL and batch-constraining show significant improvements over vanilla Q-learning in the SR setting, ii. distributional RL is critical to overcome the popularity bias in the offline logs, iii. pre-training of item embeddings significantly improves the performance in the batch RL setting when the action space is large (of the order of thousands), iv. the commonly used Recall metric is not reliable for hyperparameter selection, whereas the average Q-value on a hold-out validation set from the offline logs correlates better with the online performance metrics. In the future, it will be interesting to i. explore model-based batch RL approaches for SR using recent advances, e.g. \citep{bai2019model,chen2018generative}, ii. evaluate on even larger action spaces, iii. explore ideas at the intersection of causal RL and batch RL \citep{bannon2020causality}, and evaluate the efficacy of other recent advances in batch RL, e.g. \citep{kumar2020conservative,wang2020critic,kidambi2020morel}. \bibliographystyle{abbrvnat}
{ "timestamp": "2020-12-17T02:19:35", "yymm": "2012", "arxiv_id": "2012.08984", "language": "en", "url": "https://arxiv.org/abs/2012.08984" }
\section{Introduction}\label{sec:intro} \subsection{Results} Scalar curvature is a microscopic concept. The scalar curvature at a point~$p$ of a Riemannian manifold~$M$ can be read off from the volumes of balls around~$p$ whose radii approach zero. A lower scalar curvature bound for~$M$ corresponds to an upper bound on the volumes of sufficiently small balls in $M$ or, equivalently, in $\widetilde M$, as the universal cover projection $\widetilde M\to M$ is locally isometric. How can we replace scalar curvature by a macroscopic concept? For instance, by replacing a lower scalar curvature bound by an upper bound on the volumes of balls of a fixed radius, say radius~$1$, in the universal cover. We form the \emph{macroscopic cousin} of a mathematical statement involving a lower scalar curvature bound for a Riemannian manifold~$M$ by replacing it with an upper bound on the volumes of $1$-balls in $\widetilde M$. Guth's ICM report~\cite{guth-metaphors} describes the analogies and connections that emerge from the macroscopic point of view. \medskip Our first main theorem is the macroscopic cousin of a conjecture by Gromov~\cite{gromov-large}*{Conjecture~3A} in which it is assumed that the scalar curvature is bounded from below by~$-1$. Via the Bishop-Gromov inequality one sees that it also generalizes Gromov's main inequality~\cite{gromovvbc}*{Section~0.5}, for which it is assumed that the Ricci curvature is bounded from below by~$-1$. \begin{theorem}\label{thm: main result} For every $V_1>0$ and $d\in{\mathbb N}$ there is a constant $\operatorname{const}(d, V_1)>0$ with the following property. If $M$ is a $d$-dimensional closed Riemannian manifold such that the volume of every $1$-ball in the universal cover of $M$ is at most~$V_1$, then \[ \sv M\le \operatorname{const}(d, V_1)\cdot \operatorname{vol}(M),\] where $\sv M$ denotes the simplicial volume of~$M$. \end{theorem} Theorem~\ref{thm: main result} generalizes another theorem, Guth's volume theorem, as the simplicial volume of a hyperbolic manifold coincides with its volume up to a dimensional constant by a result of Gromov and Thurston. Other generalizations of Guth's theorem can be found in~\citelist{\cite{balacheff+karam}*{Theorem~1.3}\cite{alpert-area}}. We adopt the following convention. Within a statement P about a manifold, a \emph{dimensional constant} just means a positive real constant that only depends on the dimension of the manifold. In other words, if the dimensional constant is denoted by $c(d)$, the statement P should be read with the preface: \emph{For every dimension $d$ there is a constant $c(d)>0$ such that the following holds true.} Let us denote the supremal volume of an $r$-ball in a Riemannian manifold $(M,g)$ by $V_{(M,g)}(r)$. The induced metric on the universal cover is denoted by~$\tilde g$. \begin{theorem}[Guth's volume theorem~\cite{guthvolume}]\label{thm: guth volume thm} Let $M$ be a $d$-dimensional closed hyperbolic manifold, and let $g$ be another metric on $M$. Suppose that \[ V_{(\widetilde M, \widetilde g)}(1)\le V_{\mathbb{H}^d}(1),\] where $\mathbb{H}^d$ is $d$-dimensional hyperbolic space. Then \[ \operatorname{vol}(M,g_\mathrm{hyp})\le \operatorname{const}(d)\cdot \operatorname{vol}(M, g)\] for a dimensional constant $\operatorname{const}(d)>0$.
\end{theorem} Guth's volume theorem is the macroscopic cousin of Schoen's conjecture~\cite{schoen}*{p.~127}, which says that $\operatorname{scal}_{M,g}\ge \operatorname{scal}_{M,g_\mathrm{hyp}}$ implies $\operatorname{vol}(M, g_\mathrm{hyp})\le \operatorname{vol}(M, g)$. More precisely, it is the \emph{non-sharp} macroscopic cousin because of the dimensional constant $\operatorname{const}(d)$. \medskip Our second main theorem for $R=1$ is the non-sharp macroscopic cousin of the conjecture that rationally essential manifolds do not admit a metric of positive scalar curvature (the statement for $R=1$ readily implies the one for all $R>0$ by scaling the metric). \begin{theorem}\label{thm: main result about essential manifolds} There is a dimensional constant $\epsilon(d)>0$ with the following property. For every rationally essential Riemannian manifold $(M, g)$ of dimension~$d$ and every $R>0$ we have \[ V_{(\widetilde M, \widetilde g)}(R)> \epsilon(d)\cdot R^d.\] \end{theorem} If $\epsilon(d)$ could be chosen to be the volume of a Euclidean $d$-ball, then the above conjecture would follow. A closed oriented manifold is \emph{rationally essential} if its classifying map sends the fundamental class to a non-zero class in rational homology. Guth proves the volume estimate in Theorem~\ref{thm: main result about essential manifolds} for Riemannian manifolds whose universal cover has infinite filling radius~\cite{guthvolume}*{Theorem~1}. Not every rationally essential manifold has a universal cover with infinite filling radius according to~\cite{brunnbauer+hanke}*{Theorem~1.4 and Proposition~2.8}. \medskip Our third main theorem is the macroscopic cousin of the combined conjectures~\citelist{\cite{gromov-large}*{Conjecture~3A}\cite{gromov-singularities}*{3.1 (e) on p.~769}} by Gromov (see also~\cite{gromov-asymptotic}*{p.~232}). It generalizes the main result in~\cite{sauerminvol}, where a lower Ricci curvature bound was assumed. See also~\cite{sauer-homologygrowth} for related results in the residually finite case. \begin{theorem}\label{thm: main l2 result} For every $V_1>0$ and $d\in{\mathbb N}$ there is a constant $\operatorname{const}(d, V_1)>0$ with the following properties. \begin{enumerate} \item If $(M, g)$ is a $d$-dimensional connected closed oriented Riemannian manifold with classifying map $c\colon M\to B\Gamma$ such that $V_{(\widetilde M, \widetilde g)}(1)\le V_1$, then the von Neumann rank of $c_\ast([M])\in H_d(B\Gamma)$, where $[M]$ is the fundamental class of~$M$, is bounded from above by $\operatorname{const}(d,V_1)\cdot \operatorname{vol}(M)$. \item If, in addition, the manifold $M$ is aspherical, then its $\ell^2$-Betti numbers satisfy \[ \beta^{(2)}_i(M)\le \operatorname{const}(d, V_1)\cdot \operatorname{vol}(M, g)\] for every $i\in{\mathbb N}$, and the Euler characteristic satisfies \[ |\chi(M)|\le \operatorname{const}(d,V_1)\cdot \operatorname{vol}(M,g).\] \end{enumerate} \end{theorem} Subsection~\ref{sub: l2 betti numbers} contains an overview of what we need from the theory of $\ell^2$-Betti numbers, including the definition of the von Neumann rank. We present a result which is an outcome of our methods but is of independent interest. See Remark~\ref{rem: relation to foliated simplicial volume} for the relevant notions.
\begin{theorem}\label{thm: main foliated simplicial volume} For every $V_1>0$ and $d\in{\mathbb N}$ there is a constant $\operatorname{const}(d, V_1)>0$ with the following properties: The integral foliated simplicial volume of every $d$-dimensional closed aspherical Riemannian manifold $(M,g)$ that satisfies $V_{(\widetilde M, \widetilde g)}(1)\le V_1$ is bounded from above by $\operatorname{const}(d,V_1)\cdot \operatorname{vol}(M,g)$. More precisely, the inequality $|M|^\alpha\le \operatorname{const}(d,V_1)\cdot \operatorname{vol}(M,g)$ holds for any free measurable pmp action $\alpha$ on a standard probability space. \end{theorem} Theorem~\ref{thm: main foliated simplicial volume} generalizes Corollary 1.2 of~\cite{fauser} since the assumption in~\cite{fauser}*{Corollary~1.2} implies the vanishing of the minimal volume~\cite{collapse}*{Theorem~3.1}. Theorem~\ref{thm: main foliated simplicial volume} shows the vanishing of the integral foliated simplicial volume (and its variants above) for $3$-dimensional graph manifolds since their minimal volume vanishes~\cite{collapse}*{Example~0.2 and Theorem~3.1}. This vanishing result is a special case of~\cite{fauser+friedl+loeh}*{Theorems~1.2 and~1.6}. \subsection{Comment on the proof} The proof of Theorem~\ref{thm: main result} involves an action of the fundamental group on the Cantor set. We will explain why. A close reading of Guth's proof of the volume theorem yields a proof of Theorem~\ref{thm: main result} for closed aspherical manifolds whose smallest non-contractible loop (\emph{systole}) is of length at least~$1$ (see the discussion in~\cite{alpert-area}*{Section~4}). If the fundamental group is residually finite, then we can pass to a finite cover whose systole is of length at least~$1$. Since the stated inequality between simplicial volume and Riemannian volume may be verified on every finite cover, Theorem~\ref{thm: main result} follows for closed aspherical manifolds with residually finite fundamental groups. We have to get rid of the assumptions of asphericity and residual finiteness. Residual finiteness is hard to verify beyond locally symmetric spaces; we do not even know whether fundamental groups of closed negatively curved manifolds are residually finite. The attempt to get rid of residual finiteness leads to actions on the Cantor set. If the fundamental group~$\Gamma$ of our manifold~$M$ is not residually finite, we may not have enough finite covers to enforce a large systole. Let us consider the inverse limit \[ \varprojlim_{i\in I} M_i\] of the directed system of all connected finite regular covers of $M$ -- even if there are none except $M$ itself in the most extreme case. By covering theory the system $M_i$, $i\in I$, corresponds to a directed system of finite index normal subgroups $\Gamma_i<\Gamma$, $i\in I$. Each $M_i$ is just the quotient $\Gamma_i\backslash\widetilde M$. The inverse limit $\widehat \Gamma=\varprojlim \Gamma/\Gamma_i$ is the \emph{profinite completion} of $\Gamma$. We have \[ \varprojlim_{i\in I}M_i\cong \varprojlim_{i\in I}\Gamma_i\backslash \widetilde M\cong \varprojlim_{i\in I}\Gamma/\Gamma_i\times_\Gamma\widetilde M\cong \widehat \Gamma\times_\Gamma\widetilde M.\] The $\Gamma$-quotient on the right is the quotient by the diagonal action. The profinite completion has an obvious action by translations. The $\Gamma$-space $\widehat \Gamma$ has three important properties: \begin{enumerate} \item It is homeomorphic to the Cantor set. \item It possesses an invariant probability measure (the Haar measure).
\item If $\Gamma$ is residually finite, then the $\Gamma$-action is free. \end{enumerate} Let $X$ be the Cantor set. In Theorem~\ref{thm: existence of action} we reprove an observation of Hjorth-Molberg that every countable group $\Gamma$ admits a free, continuous action on $X$ having a $\Gamma$-invariant probability measure. With regard to such an action we form the space \[X\times_\Gamma \widetilde M,\] which acts as a replacement for $\varprojlim M_i$ if the fundamental group is not residually finite. Guth's methods need the finite covers $M_i$. Our contribution is to generalize them so that we can work with the global object $X\times_\Gamma \widetilde M$ instead. Recent results of Liokumovich-Lishak-Nabutovsky-Rotman~\cite{fillingmetric} and Papasoglu~\cite{papasoglu} generalize Guth's theorem in~\cite{guthuryson}*{Theorem~0.1} on Uryson width. Papasoglu's proof is simpler than Guth's proof, which underlies our work. Can one combine the method in~\cite{papasoglu} with our ideas to obtain shorter proofs for our main results or to obtain explicit constants as in Nabutovsky's paper~\cite{explicitbound}? We do not think this is possible in the case of Theorems~\ref{thm: main result} and~\ref{thm: main l2 result}, but it might be possible in the case of Theorem~\ref{thm: main result about essential manifolds}. \subsection{Structure of the proof} We establish a framework of equivariant bundles over $X$ (\emph{Cantor bundles}). After Section~\ref{sec: topological preliminaries} on preliminaries we introduce the notion of a Cantor bundle in Section~\ref{sec: category cantor bundles}. The space $\xxm$ with its diagonal action of~$\Gamma=\pi_1(M)$ is a trivial example. A more interesting toy example is Example~\ref{exa: non trivial Cantor bundle}. Cantor bundles can be regarded as spaces with a groupoid action, namely the groupoid given by the orbit equivalence relation on $X$, endowed with additional geometric data. Spaces with groupoid actions are considered in many contexts. Our main inspiration came from Gaboriau's $\mathcal{R}$-simplicial complexes in the measurable world~\cite{gaboriau}. Another influence is Gromov's paper~\cite{gromov-foliated}. After discussing transverse Hausdorff measures on Cantor bundles in Section~\ref{sec: transverse measure} we introduce the rectangular nerve construction in the framework of Cantor bundles (Section~\ref{sec: rectangular cantor nerves}). The \emph{rectangular Cantor nerve} of an equivariant cover of $\xxm$ as described above is a non-trivial Cantor bundle. The toy example gives a good impression of how such a rectangular Cantor nerve might look. In Section~\ref{sec: volume estimates} we establish the existence of good covers in our framework and prove the analog of Guth's result on the exponential decay of the volume of the high multiplicity set in our framework. This is the main point about the auxiliary space~$X$: we cannot obtain a good, equivariant cover on $\widetilde M$, only on the Cantor bundle $\xxm$. We then bound the transverse volume of the image of the map to the rectangular Cantor nerve. In Section~\ref{sec: homotoping down} this map is homotoped as a Cantor bundle map to the $d$-skeleton, where $d$ is the dimension of $M$. In Section~\ref{sec: from volume to simplicial volume} we relate what we have done so far to the simplicial volume of $M$. Here we use tools from homological algebra and equivariant topology.
\section{Topological preliminaries}\label{sec: topological preliminaries} In Subsection~\ref{sub: action on cantor set} we present a short proof of the existence of suitable actions on the Cantor set which is a result of Hjorth-Molberg. In~\ref{sub: equivariant CW complexes} we review the notion of an equivariant CW-complex and of a classifying space. In~\ref{sub: rectangular complexes} and~\ref{sub: rectangular nerves} we give a detailed review of rectangular complexes and Guth's rectangular nerve since special care is needed in our equivariant context. We adhere to the following notation. Let $M$ be a closed $d$-dimensional Riemannian manifold with fundamental group $\Gamma$. Its universal cover is denoted by $\widetilde M$ and endowed with the Riemannian metric induced by $M$. The Cantor set is denoted by $X$. We fix a free continuous $\Gamma$-action on $X$ and a $\Gamma$-invariant Borel probability measure $\mu$ on $X$ whose existence is stated in Theorem~\ref{thm: existence of action}. If $B=B(p,r)\subset\widetilde{M}$ is the open ball of radius $r$ around $p$, then $aB$ is the concentric ball of radius $a\cdot r$ around $p$. \subsection{Free actions on the Cantor set}\label{sub: action on cantor set} The following observation is due to Hjorth and Molberg~\cite{hjorth+molberg}*{Theorem~0.1}. Based on the notion of co-induction we formulate a shorter proof here for the convenience of the reader. A stronger statement, which we only need in the proof of Theorem~\ref{thm: main foliated simplicial volume}, was obtained by Elek~\cite{elek}. \begin{theorem}\label{thm: existence of action} Let $\Gamma$ be a countable discrete group and let $X$ be the Cantor set. Then there is a free, continuous $\Gamma$-action on $X$ having a $\Gamma$-invariant probability measure. \end{theorem} \begin{proof} The case of finite groups is easy. We may and will assume that $\Gamma$ is infinite. For every element $\gamma\in\Gamma$ let $X_\gamma$ be the profinite completion of the cyclic subgroup $\langle\gamma\rangle$ endowed with the left translation action by $\langle\gamma\rangle$ and the normalized Haar measure~$\nu_\gamma$. Depending on the order of $\gamma$, $X_\gamma$ is either a finite set or homeomorphic to the profinite completion $\widehat{\mathbb Z}$ of ${\mathbb Z}$, which is a Cantor set. Let $Y_\gamma$ be the co-induction of the $\langle\gamma\rangle$-space $X_\gamma$, that is \[ Y_\gamma\mathrel{\mathop:}=\operatorname{map}(\Gamma, X_\gamma)^{\langle\gamma\rangle}=\bigl\{f\colon\Gamma\to X_\gamma\mid \forall_{x\in\Gamma} f(\gamma x)=\gamma\cdot f(x)\bigr\}\] endowed with the compact-open topology and the left $\Gamma$-action $(\lambda\cdot f)(x)=f(x\lambda)$ for $x\in \Gamma$ and $\lambda\in \Gamma$. Non-equivariantly, $Y_\gamma$ is homeomorphic to the product $\prod_{\langle\gamma\rangle\backslash \Gamma}X_\gamma$, which is a Cantor set. One easily verifies that the product measure $\mu_\gamma$ of the $\nu_\gamma$ is invariant under the $\Gamma$-action on $Y_\gamma$. Finally, we define $X$ to be the product \[ X\mathrel{\mathop:}= \prod_{\gamma\in \Gamma} Y_\gamma\] endowed with diagonal $\Gamma$-action and the product measure of the measures~$\mu_\gamma$. The product measure is clearly $\Gamma$-invariant. As a countable product of Cantor sets, $X$ is a Cantor set. It remains to show that the $\Gamma$-action on $X$ is free. Let $x=(y_\gamma)\in X$ and $\gamma_0\in\Gamma$. Assume that $\gamma_0\cdot x=(\gamma_0\cdot y_\gamma)_{\gamma\in\Gamma}=(y_\gamma)_{\gamma\in\Gamma}$. 
Since the $\langle\gamma_0\rangle$-action on $Y_{\gamma_0}$ is free, this implies that $\gamma_0=e$. \end{proof} \subsection{Equivariant CW-complexes}\label{sub: equivariant CW complexes} We recall some terminology concerning equivariant CW-complexes and classifying spaces. For the notion of an \emph{(equivariant) $\Gamma$-CW-complex} we refer to~\cite{tomdieck}*{Section~II.1}. The skeleta $N^{(n)}$ of a $\Gamma$-CW-complex $N$ are built inductively via $\Gamma$-pushouts of the form \[\begin{tikzcd} \coprod_{i\in I_n} \Gamma/H_i\times S^{n-1}\ar[r]\ar[d,hook] & N^{(n-1)}\ar[d, hook]\\ \coprod_{i\in I_n} \Gamma/H_i\times D^n\ar[r] & N^{(n)} \end{tikzcd} \] The conjugates of the subgroups $H_i$, $i\in I_n$, $n\ge 0$, are precisely the isotropy groups of the $\Gamma$-space~$N$. If all subgroups $H_i$ are trivial, then $N$ is a \emph{free} $\Gamma$-CW-complex. If all subgroups $H_i$ are finite, then $N$ is a \emph{proper} $\Gamma$-CW-complex. The universal cover of a CW-complex with fundamental group $\Gamma$ has a natural structure of a free $\Gamma$-CW-complex. A \emph{cellular action} of a discrete group $\Gamma$ on a CW-complex $W$ is a continuous action of $\Gamma$ on $W$ such that \begin{enumerate} \item\label{eq: cell permute} for every open cell $e$ and $\gamma\in\Gamma$ the translate $\gamma e$ is an open cell and \item\label{eq: pointwise stabilizer} if $\gamma\in\Gamma$ fixes an open cell set-wise then it does so point-wise. \end{enumerate} A CW-complex with a cellular $\Gamma$-action is a $\Gamma$-CW-complex in the sense of~\cite{tomdieck}*{p.~98} (see~\cite{tomdieck}*{Proposition~(1.15) on p.~101}), which means that it is obtained by glueing equivariant cells $\Gamma/H\times D^k$ along their boundaries $\Gamma/H\times S^{k-1}$, where $H<\Gamma$ is a subgroup. The equivariant homotopy category of free $\Gamma$-CW-complexes possesses a terminal object, which is denoted by $E\Gamma$. The space $E\Gamma$ is unique up to equivariant homotopy and called the \emph{classifying space of~$\Gamma$}. The quotient of $E\Gamma$ is commonly denoted by $B\Gamma$ and also called \emph{classifying space}. Each free $\Gamma$-CW-complex admits an equivariant map to the classifying space of~$\Gamma$. Any such map -- they are unique up to homotopy -- is called a \emph{classifying map}. \subsection{Rectangular complexes}\label{sub: rectangular complexes} A \emph{rectangular complex} is an $M_\kappa$-polyhedral complex with $\kappa=0$ in the sense of Bridson-Haefliger~\cite{bridson+haefliger}*{Definition~7.37 on p.~114} such that each cell is isometric to a Euclidean $d$-cuboid $[0,a_1]\times [0, a_2]\times\cdots\times [0, a_d]\subset {\mathbb R}^d$ and the intersection of two cells is either empty or a single face. We recall some terminology and basic facts from the book of Bridson-Haefliger~\cite{bridson+haefliger}*{Chapter~I.7}. The faces of $[0,a]$ are just $\{0\}, \{a\}$ and $[0,a]$. The \emph{faces} of a Euclidean $d$-cuboid $[0,a_1]\times\cdots\times [0,a_d]$ are the subsets given by $F_1\times\cdots\times F_d$ where each $F_i$ is a face of $[0,a_i]$. Faces of dimensions $0$ and $1$ are also called \emph{vertices} and \emph{edges}, respectively. The \emph{barycenter} of a Euclidean $d$-cuboid $C=[0,a_1]\times [0, a_2]\times\cdots\times [0, a_d]$ with $d>0$ is the point $(\frac{1}{2}a_1, \dots, \frac{1}{2}a_d)$. It lies in the interior of $C$ and is fixed by any isometry of $C$. The barycenter of a vertex is the vertex itself.
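For instance, the Euclidean $2$-cuboid $C=[0,2]\times[0,1]$ has nine faces: its four vertices, the four edges $\{0\}\times[0,1]$, $\{2\}\times[0,1]$, $[0,2]\times\{0\}$ and $[0,2]\times\{1\}$, and $C$ itself; its barycenter is the point $(1,\frac{1}{2})$.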
A rectangular complex has the structure of a CW-complex with the cells corresponding to the Euclidean cuboids. Depending on the context, we refer to the latter as cells or (Euclidean) cuboids or faces. A rectangular complex is endowed with the path metric that is induced by the Euclidean metric on each Euclidean cuboid. The second barycentric subdivision of a rectangular complex is a simplicial complex, even an $M_0$-simplicial complex~\cite{bridson+haefliger}*{Proposition~7.49 on p.~118}. Let $J$ be a, possibly countably infinite, index set. The real vector space with basis $J$ will be denoted by ${\mathbb E}^J$. We regard ${\mathbb E}^J$ as the vector space of real sequences indexed over $J$ that have only finitely many non-zero components. We endow ${\mathbb E}^J$ with the Euclidean norm and metric. For a family $(a_j)_{j\in J}$ of positive real numbers we will define a rectangular complex \[N\bigl((a_j)_{j\in J}\bigr)\subset{\mathbb E}^J\] as a subset of ${\mathbb E}^J$ in the following way. The vertices of $N$ are the sequences of ${\mathbb E}^J\backslash\{0\}$ whose $j$-component, $j\in J$, is either $0$ or $a_j$. Two vertices are \emph{adjacent} if they differ in exactly one component. A family of $2^k$ vertices spans a $k$-face (or $k$-cell, or $k$-cuboid) given by their convex hull if each vertex of the family is adjacent to exactly $k$ vertices of the family. We call $N\bigl((a_j)_{j\in J}\bigr)$ the \emph{rectangular complex associated to the family $(a_j)_{j\in J}$.} To see that the previous definition yields a rectangular complex we have to verify that the intersection of two faces is empty or a single face. To this end, we start with the following remark. \begin{remark}\label{rem: subsets of index set} A face $F$ in $N((a_j)_{j\in J})$ is a subset of ${\mathbb E}^J$ of the following type: There is a finite subset $J'\subset J$ and there are $c_j\in\{0,a_j\}$ for every $j\in J\backslash J'$ with $(c_j)_{j\in J\backslash J'}\ne 0$ such that \begin{equation}\label{eq: type of cuboid} F= \bigl\{ (b_j)_{j\in J}\mid \forall_{j\in J'} ~b_j\in [0,a_j]\wedge\forall_{j\in J\backslash J'}~ b_j=c_j\bigr\}. \end{equation} Vice versa, every such subset is a face in $N((a_j)_{j\in J})$, namely the convex hull of the following set of vertices of cardinality $2^{\# J'}$ \[ \bigl\{ (b_j)_{j\in J}\mid \forall_{j\in J'} ~b_j\in \{0,a_j\}\wedge\forall_{j\in J\backslash J'}~ b_j=c_j\bigr\}.\] Depending on $F$, we define the following four subsets of the index set $J$: \begin{align*} J_0(F) &\mathrel{\mathop:}= \bigl\{ j\in J\backslash J'\mid c_j=0\bigr\}& J_{\frac{1}{2}}(F)&\mathrel{\mathop:}= J'\\ J_1(F) &\mathrel{\mathop:}= \bigl\{j\in J\backslash J'\mid c_j=a_j\bigr\}& J_+(F)&\mathrel{\mathop:}= J_1(F)\cup J_{\frac{1}{2}}(F) \end{align*} Equivalently, we could define $J_0(F)$, $J_{\frac{1}{2}}(F)$, and $J_1(F)$ as the subsets of indices $j\in J$ for which the $j$-component of the barycenter of $F$ is $0$, $\frac{1}{2}a_j$, and $a_j$, respectively. \end{remark}
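For illustration, consider $J=\{1,2\}$ and $a_1=a_2=1$. The vertices of $N((a_j)_{j\in J})$ are $(1,0)$, $(0,1)$ and $(1,1)$; the vertex $(1,1)$ is adjacent to the other two, whereas $(1,0)$ and $(0,1)$ differ in two components and are not adjacent. Hence $N((a_j)_{j\in J})$ consists of the two edges $\{1\}\times [0,1]$ and $[0,1]\times\{1\}$ of the unit square, i.e.~precisely the faces of $[0,1]^2$ that do not contain the origin. For the edge $F=\{1\}\times [0,1]$ we have $J_{\frac{1}{2}}(F)=\{2\}$, $J_1(F)=\{1\}$, $J_0(F)=\emptyset$ and $J_+(F)=\{1,2\}$.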
Let us consider two faces \begin{align*} F&= \bigl\{ (b_j)_{j\in J}\mid \forall_{j\in J_{\frac{1}{2}}(F)} ~b_j\in [0,a_j]\wedge\forall_{j\in J\backslash J_{\frac{1}{2}}(F)}~ b_j=c_j\bigr\}\\ \tilde F&= \bigl\{ (b_j)_{j\in J}\mid \forall_{j\in J_{\frac{1}{2}} (\tilde F)} ~b_j\in [0,a_j]\wedge\forall_{j\in J\backslash J_{\frac{1}{2}}(\tilde F)}~ b_j=\tilde c_j\bigr\} \end{align*} The intersection \[ F\cap \tilde F= \bigl\{ (b_j)_{j\in J}\mid \forall_{j\in J_{\frac{1}{2}}(F)\cap J_{\frac{1}{2}}(\tilde F)} ~b_j\in [0,a_j]\wedge\forall_{j\not\in J_{\frac{1}{2}}(F)}~ b_j=c_j\wedge\forall_{j\not\in J_{\frac{1}{2}}(\tilde F)}~ b_j=\tilde c_j\bigr\} \] is empty or again of the type~\eqref{eq: type of cuboid} and thus a single face of $N((a_j)_{j\in J})$. We conclude that $N((a_j)_{j\in J})$ is indeed a rectangular complex. Let $F$ be a face in $N((a_j)_{j\in J})$. We denote the dimension of $F$ by $d(F)$. By definition of the rectangular complex we have $J_1(F)\ne\emptyset$. One has $d(F)=\# J_{\frac{1}{2}}(F)$. Further, $F$ is a cuboid with side lengths $a_j$, $j\in J_{\frac{1}{2}}(F)$. For every face $F$ we enumerate these side lengths by \begin{equation}\label{eq: enumeration of side lengths} r_1(F), \dots, r_{d(F)}(F)~\text{ such that }~r_1(F)\le \dots\le r_{d(F)}(F). \end{equation} \subsection{Rectangular nerves of covers}\label{sub: rectangular nerves} We recall the definition of the rectangular nerve of a cover by balls, which was introduced by Guth. \begin{definition} Let $\mathcal{V}=\{B_j\mid j\in J\}$ be a cover of a Riemannian manifold $W$ by open balls such that the balls $\frac{1}{2}B_j$ still cover $W$. Let $r_j$ be the radius of the ball $B_j$. The \emph{rectangular nerve} $N(\mathcal{V})$ of $\mathcal{V}$ is the subcomplex of $N((r_j)_{j\in J})$ whose faces $F$ are precisely the ones for which \[ \bigcap_{j\in J_+(F)} B_j \ne\emptyset\] and $J_1(F)\ne \emptyset$. \end{definition} We turn to the equivariant setting with regard to the action of the fundamental group $\Gamma=\pi_1(M)$ on the universal cover~$\widetilde M$. \begin{lemma}\label{lem: proper Gamma CW} Let $J$ be a free cofinite $\Gamma$-set, and let $\mathcal{V}=\{B_j\mid j\in J\}$ be an equivariant cover of $\widetilde M$ by balls $B_j$ of radius $r_j$ in the sense that $\gamma B_j=B_{\gamma j}$ for every $j\in J$ and $\gamma\in \Gamma$. Then $N(\mathcal{V})$ is a locally finite rectangular complex. The left shift action \[ \Gamma\curvearrowright \prod_{j\in J} [0,r_j],~\gamma\cdot\bigl(x_j\bigr)_{j\in J}=\bigl(x_{\gamma^{-1}j}\bigr)_{j\in J}\] restricts to a proper $\Gamma$-action on $N(\mathcal{V})$ that permutes cells. Further, the barycentric subdivision of $N(\mathcal{V})$ is a proper $\Gamma$-CW-complex. \end{lemma} \begin{proof} Since $J$ is cofinite and the $\Gamma$-action on $\widetilde M$ by deck transformations is proper, the cover $\mathcal{V}$ is locally finite. Hence $N(\mathcal{V})$ is a locally finite CW-complex. Clearly, the action permutes cells, thus the action satisfies property~\eqref{eq: cell permute} of a cellular action. Each stabilizer of a cell is contained in the set-stabilizer of a finite subset of $J$ and is thus a finite group. The CW-structure of $N(\mathcal{V})$ does not necessarily satisfy property~\eqref{eq: pointwise stabilizer} of a cellular action. Next we show that its barycentric subdivision does.
A $k$-face of the barycentric subdivision is given by the convex hull of the barycenters of a strictly ascending chain $F_0\subset F_1\subset\dots\subset F_k$ where $F_i$ is an $i$-face of $N(\mathcal{V})$. Let $\gamma\in\Gamma$ fix the $k$-face $C$ associated with $F_0\subset F_1\subset\dots\subset F_k$ as a set. Then $\gamma$ fixes each face $F_i$ as a set. Since $F_0$ is a vertex, $\gamma$ fixes $F_0$ pointwise. By induction we may assume that $\gamma$ fixes $F_i$ pointwise for $i<k$. Since $\gamma$ fixes the $(i+1)$-dimensional face $F_{i+1}$ as a set and its $i$-dimensional subface $F_i$ pointwise, it must fix $F_{i+1}$ pointwise. Hence $\gamma$ fixes $C$ pointwise. So the $\Gamma$-action on the barycentric subdivision is cellular. The stabilizers of cells are finite as discussed above, which means that the barycentric subdivision of $N(\mathcal{V})$ is a proper $\Gamma$-CW-complex. \end{proof} \section{Homological preliminaries}\label{sec: homological prelim} In~\ref{sub: measure homology} we review Thurston's measure homology, which is isomorphic and isometric to real singular homology on CW-complexes but has better functorial properties with regard to the Cantor bundles considered later. In~\ref{sub: normed chain complexes} we discuss normed abelian groups and chain complexes. There we review integral variants ($X$-parametrised integral simplicial volume, integral foliated simplicial volume) of the simplicial volume that take an action of the fundamental group on a Cantor set or a probability space into account. In~\ref{sub: l2 betti numbers} we collect what we need from L\"uck's approach to~$\ell^2$-Betti numbers. We prove a bound on the von Neumann rank (Definition~\ref{def: von Neumann rank}) via the $X$-parametrised integral simplicial norm, which slightly generalizes a bound on the $\ell^2$-Betti numbers of a closed manifold by the foliated simplicial volume due to Schmidt~\cite{schmidt}. We collect some notation. The space of real-valued and integer-valued continuous functions on $X$ is denoted by $C(X)$ and $C(X;{\mathbb Z})$, respectively. The action of $\Gamma$ on $X$ induces a (left) action on $C(X)$. Tensor products $M\otimes_{\mathbb Z} N$ over the ring ${\mathbb Z}$ are denoted by $M\otimes N$. The integral group ring of $\Gamma$ is denoted by ${\mathbb Z}[\Gamma]$. Modules over a (non-commutative) ring are assumed to be left modules unless said otherwise. Since ${\mathbb Z}[\Gamma]$ is a ring with involution -- induced by $\gamma\mapsto\gamma^{-1}$ -- we can turn any left ${\mathbb Z}[\Gamma]$-module into a right ${\mathbb Z}[\Gamma]$-module. We implicitly do so whenever we write $M\otimes_{{\mathbb Z}[\Gamma]} N$ for two left ${\mathbb Z}[\Gamma]$-modules. The singular chain complex of a space $Y$ is denoted by $C_\ast(Y)$. We write $C_\ast(Y;{\mathbb R})$ for the singular chain complex with real coefficients, and similarly for singular homology. If the group $\Gamma$ acts continuously on $Y$ and $M$ is a ${\mathbb Z}[\Gamma]$-module, then we write the equivariant homology and cohomology as \[ H^\Gamma_p(Y;M)\mathrel{\mathop:}= H_p\bigl(M\otimes_{{\mathbb Z}[\Gamma]}C_\ast(Y) \bigr)\text{ and }H_\Gamma^p(Y;M)\mathrel{\mathop:}= H^p\bigl(\hom_{{\mathbb Z}[\Gamma]}(C_\ast(Y), M)\bigr).\] For instance, for $Y=E\Gamma$ these groups recover the group homology and cohomology of~$\Gamma$ with coefficients in~$M$. The projection $\widetilde M\to M$ yields a canonical isomorphism $H_\ast^\Gamma(\widetilde M;{\mathbb Z})\xrightarrow{\cong} H_\ast(M;{\mathbb Z})$.
\subsection{Measure homology}\label{sub: measure homology} Measure homology replaces the finite linear combinations in singular homology by signed measures on the space of singular simplices. It was invented by Thurston. We recall its basic notions; for more details we refer to~\cite{loeh-measure}. For a topological space $N$ we endow the space of continuous maps $\operatorname{map}(\Delta^n, N)$ from the standard $n$-simplex to $N$ with the compact-open topology. We define $\mathcal{C}_n(N)$ as the ${\mathbb R}$-vector space of signed Borel measures on $\operatorname{map}(\Delta^n, N)$ that have compact support and finite variation. The elements of $\mathcal{C}_n(N)$ are called \emph{measure chains}. The alternating sum of pushforwards of face maps turns $\mathcal{C}_\ast(N)$ into a chain complex whose homology is called the \emph{measure homology} of $N$. The variation of measures induces a seminorm on measure homology. The map from the singular chain complex to the chain complex of measure chains $C_\ast(N)\to \mathcal{C}_\ast(N)$ that sends a singular $n$-simplex to the point measure on that simplex is a natural chain homomorphism, which induces an isometric isomorphism in homology~\cite{loeh-measure}. \subsection{Norms on abelian groups and chain complexes}\label{sub: normed chain complexes} We consider norms and seminorms on ${\mathbb R}$-modules, i.e.~real vector spaces, and on ${\mathbb Z}$-modules, i.e.~abelian groups. The defining properties of a (semi-)norm on an ${\mathbb R}$-module make sense for a ${\mathbb Z}$-module~$A$ and a function $|\_|\colon A\to {\mathbb R}^{\ge 0}$ with the slight modification that $|r\cdot a|_A=|r|\cdot |a|_A$ is only required for $r\in{\mathbb Z}$ and $a\in A$. \begin{theorem}[\cite{steprans}] An abelian group endowed with a norm that induces the discrete topology is free. \end{theorem} The supremum norm on the abelian group $C(X;{\mathbb Z})$ of integer-valued continuous functions on $X$ induces the discrete topology. We record the following consequence for later use. \begin{corollary}\label{cor: continuous function free} The abelian group $C(X;{\mathbb Z})$ is free. \end{corollary} A chain complex of ${\mathbb Z}$- or ${\mathbb R}$-modules equipped with a (semi-)norm on each chain group is called a \emph{(semi-)normed chain complex} provided the boundary maps are continuous. We endow the quotient of a (semi-)normed module with the quotient semi-norm. In general, a norm does not induce a norm on the quotient but only a semi-norm. In the context of semi-norms, being \emph{isometric} does not imply being \emph{injective}. The singular chain complexes $C_\ast(N)$ and $C_\ast(N;{\mathbb R})$ of a topological space $N$ with integer or real coefficients, respectively, are normed via the $\ell^1$-norm with respect to the basis given by singular simplices. They induce semi-norms on $H_\ast(N)$ and $H_\ast(N;{\mathbb R})$, respectively. The latter is denoted by $\norm{\_}$ and called the \emph{simplicial norm}. The induced chain homomorphism and homology homomorphism of a map of spaces do not increase the simplicial norms. Gromov and Thurston defined the \emph{simplicial volume} of a closed manifold~$M$ as the simplicial norm of its fundamental class~$[M]$. We denote it by $\norm{M}$. For instance, $\norm{S^d}=0$ for every $d\ge 1$, whereas closed hyperbolic manifolds have positive simplicial volume.
The $\ell^1$-norm on $C(X;{\mathbb Z})$ with respect to the measure~$\mu$ and the simplicial norm induce the following norm on each abelian group $C(X;{\mathbb Z})\otimes C_p(N)$, which we call the \emph{$X$-parametrised integral simplicial norm} and denote by $\norm{\_}_{\mathbb Z}^X$: For functions $f_1,\dots, f_k\in C(X;{\mathbb Z})$ and distinct singular $p$-simplices $\sigma_1,\dots,\sigma_k$ we set \[ \norm{f_1\otimes \sigma_1+\dots+f_k\otimes\sigma_k}_{\mathbb Z}^X\mathrel{\mathop:}=\int_X|f_1|d\mu+\dots+\int_X|f_k|d\mu.\] Let us now consider the situation where $N$ is a topological space endowed with the action of a group $\Gamma$. Then $C_\ast(N)$ is a chain complex over the group ring ${\mathbb Z}[\Gamma]$. We obtain an induced semi-norm on the quotient $C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]}C_p(N)$ of $C(X;{\mathbb Z})\otimes C_p(N)$, which we call by the same name and denote by the same symbol. \begin{definition}\label{def: inclusion into parametrised chains} Let $Y$ be a connected space with fundamental group~$\Gamma$ and universal cover~$\widetilde Y$. The composition of chain maps \[ C_\ast(Y)\xleftarrow{\cong} {\mathbb Z}\otimes_{{\mathbb Z}[\Gamma]}C_\ast(\widetilde Y)\hookrightarrow C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]}C_\ast(\widetilde Y)\] is denoted by $j_\ast^Y$. Here the right hand map is induced by the inclusion of constant functions. \end{definition} \begin{remark}\label{rem: comparision parametrised and real norm} Let $i_\ast^{\mathbb R}$ be the change of coefficients $C_\ast(Y)\to C_\ast(Y;{\mathbb R})$. We have \[\norm{j_\ast^Y(z)}_{\mathbb Z}^X\le \norm{i_\ast^{\mathbb R}(z)}\] for every chain $z$ in $C_\ast(Y)$ and thus a similar statement for every homology class. This follows from the fact that the invariant measure~$\mu$ on $X$ yields by integration a chain map \[ C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]}C_\ast(\widetilde Y)\to {\mathbb R}\otimes_{{\mathbb Z}[\Gamma]} C_\ast(\widetilde Y)\xrightarrow{\cong}C_\ast(Y;{\mathbb R})\] that does not increase norms. \end{remark} \begin{definition}\label{def: parametrised integral simplicial volume} The \emph{$X$-parametrised integral simplicial volume} of a connected closed oriented manifold $M$ with fundamental group~$\Gamma$ is defined as \[\norm{M}_{\mathbb Z}^X\mathrel{\mathop:}= \norm{j^M_\ast([M])}_{\mathbb Z}^X,\] where $[M]\in H_d(M)$ is the fundamental class. \end{definition} Note that we take the liberty of skipping the dependency on the measure~$\mu$ in the notation of the $X$-parametrised integral simplicial volume. \begin{remark}[Relation to integral foliated simplicial volume]\label{rem: relation to foliated simplicial volume} Let us denote the free and probability measure preserving (\emph{pmp}) action of $\Gamma$ on $(X,\mu)$ by $\alpha$. Then $\norm{M}_{\mathbb Z}^X$ only depends on the measure isomorphism class of $\alpha$, and $\norm{M}_{\mathbb Z}^X$ coincides with the $\alpha$-parametrised simplicial volume $|M|^\alpha$ as defined in~\cite{foliated}*{Definition~2.2}. The \emph{integral foliated simplicial volume} is defined as the infimum of $\alpha$-parametrised simplicial volumes over all free pmp actions $\alpha$ of~$\Gamma$, and is thus bounded from above by the $X$-parametrised integral simplicial volume. We refer to~\cites{foliated,schmidt} for more details. \end{remark} \subsection{$\ell^2$-Betti numbers}\label{sub: l2 betti numbers} We use L\"uck's approach to $\ell^2$-Betti numbers, which is based on the dimension function for modules over finite von Neumann algebras.
This is not just a matter of taste, as it is important in our context to work with singular chains and to be able to read off $\ell^2$-Betti numbers from the singular chain complex instead of the simplicial chain complex. L\"uck~\cite{lueck-dimension} defines a dimension function $\dim_A$ taking values in $[0,\infty]$ for arbitrary modules over a von Neumann algebra $A$ with a finite trace, where $A$ is regarded just as a ring, not as a functional-analytic object. Our most important example is the \emph{group von Neumann algebra} $L(\Gamma)$ with its canonical trace. The complex group ring ${\mathbb C}[\Gamma]$ is a subring of $L(\Gamma)$. The trace of an element in ${\mathbb C}[\Gamma]$ is the coefficient of $1_\Gamma$. The involution of ${\mathbb C}[\Gamma]$ induced by complex conjugation and taking inverses extends to an involution of $L(\Gamma)$ which corresponds to taking adjoint operators. In particular, we can turn any left $L(\Gamma)$-module into a right $L(\Gamma)$-module via this involution. The $p$-th \emph{$\ell^2$-Betti number of a $\Gamma$-space} $Y$ is then defined as \[\beta^{(2)}_p(Y;\Gamma)\mathrel{\mathop:}= \dim_{L(\Gamma)}H_p^\Gamma\bigl(Y;L(\Gamma)\bigr).\] In the case of the universal covering $\widetilde M\to M$ and $\Gamma=\pi_1(M)$ we simply write $\beta^{(2)}_p(M)$ instead of $\beta^{(2)}_p(\widetilde M; \Gamma)$ and call it the \emph{$p$-th $\ell^2$-Betti number} of~$M$. In the case of Riemannian manifolds and simplicial complexes the above definitions coincide with those by Atiyah and Dodziuk, respectively. For more information and proofs we refer to L\"uck's book~\cite{lueck-l2book}. Next we describe another von Neumann algebra whose relevance to $\ell^2$-Betti numbers became clear in the work of Gaboriau~\cite{gaboriau}. The probability space $(X,\mu)$ from Theorem~\ref{thm: existence of action} gives rise to the abelian von Neumann algebra $L^\infty(\mu)$ of essentially bounded complex-valued measurable functions on $X$ with the integral as finite trace. The measure preserving action of $\Gamma$ induces a unitary $\Gamma$-action on $L^\infty(\mu)$. One can then form the \emph{crossed product von Neumann algebra} $L^\infty(\mu)\bar\rtimes\Gamma$, which contains $L(\Gamma)$ and $L^\infty(\mu)$ as subalgebras and which possesses a (unique) finite trace that extends those of $L(\Gamma)$ and $L^\infty(\mu)$. For $\gamma\in\Gamma\subset{\mathbb C}[\Gamma]\subset L(\Gamma)$ and $f\in L^\infty(\mu)$ we have \[ \gamma\cdot f=f\bigl(\gamma^{-1}\_\bigr)\cdot\gamma\in L^\infty(\mu)\bar\rtimes\Gamma.\] The involution on $L^\infty(\mu)\bar\rtimes\Gamma$ extends the one of $L(\Gamma)$ and the complex conjugation on $L^\infty(\mu)$. We indicate the involution in all cases with a bar. We refer to~\cites{gaboriau,sauer-groupoid} for more information. The following theorem was suggested by ideas of Connes and Gromov and was proved in the PhD thesis of Schmidt~\cite{schmidt}. \begin{theorem}\label{thm: betti bound} Every $\ell^2$-Betti number of a closed oriented manifold is bounded from above by its \mbox{$X$-para}\-metrised integral simplicial volume. \end{theorem} We formulate a slightly more general version (Theorem~\ref{thm: bound von Neumann rank}) based on the notion of \emph{von Neumann rank}, which is defined below. The proof of Theorem~\ref{thm: bound von Neumann rank} can be extracted from Schmidt's proof of Theorem~\ref{thm: betti bound}. For the reader's convenience we present a proof of Theorem~\ref{thm: bound von Neumann rank} which is a streamlined version of Schmidt's method.
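As an elementary worked check of the trace conventions above (included only for illustration): for $a=2\cdot 1_\Gamma+3\gamma\in{\mathbb C}[\Gamma]$ with $\gamma\ne 1_\Gamma$ we have $\bar a=2\cdot 1_\Gamma+3\gamma^{-1}$ and
\[ \operatorname{tr}(\bar a a)=\operatorname{tr}\bigl(13\cdot 1_\Gamma+6\gamma+6\gamma^{-1}\bigr)=13=|2|^2+|3|^2,\]
in accordance with the general formula $\operatorname{tr}(\bar a a)=\sum_{\gamma\in\Gamma}\lvert a_\gamma\rvert^2$ for $a=\sum_{\gamma\in\Gamma}a_\gamma\gamma\in{\mathbb C}[\Gamma]$.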
Some preparations are in order. Let $C_\ast$ be a chain complex of left ${\mathbb Z}[\Gamma]$-modules. We denote by $C^{-\ast}$ the chain complex whose $p$-th chain module is $\hom_{{\mathbb Z}[\Gamma]}\bigl(C_{-p},{\mathbb Z}[\Gamma]\bigr)$ with the induced differential. We may extend chain complexes that are indexed over non-negative degrees, like the singular chain complex, to all degrees in ${\mathbb Z}$ by setting them to be zero in negative degrees. Since the group ring is a ring with involution, we may regard the complex $C^{-\ast}$, which is naturally a complex of right ${\mathbb Z}[\Gamma]$-modules, as a complex of left ${\mathbb Z}[\Gamma]$-modules. Let $D_\ast$ be another ${\mathbb Z}[\Gamma]$-chain complex. We consider the following commutative diagram of ${\mathbb Z}$-chain complexes: \begin{equation}\label{eq: chain complex square} \begin{tikzcd} {\mathbb Z}\otimes_{{\mathbb Z}[\Gamma]} \bigl(C_\ast\otimes_{{\mathbb Z}} D_\ast\bigr)\ar[d,hook]\ar[r]& \hom_{{\mathbb Z}[\Gamma]}\bigl(C^{-\ast}, D_\ast\bigr)\ar[d,hook]\\ L^\infty(\mu)\otimes_{{\mathbb Z}[\Gamma]}\bigl(C_\ast\otimes_{{\mathbb Z}} D_\ast\bigr)\ar[r] & \hom_{L^\infty(\mu)\bar\rtimes\Gamma}\bigl(L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]}C^{-\ast}, L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]}D_\ast\bigr) \end{tikzcd} \end{equation} The tensor product of chain complexes $C_\ast\otimes D_\ast$ is itself a ${\mathbb Z}[\Gamma]$-chain complex via the diagonal $\Gamma$-action. The complex on the upper right is the hom-complex; its $p$-th chain group consists of chain maps $C^{-\ast}\to D_\ast$ of degree $p$; its $p$-th homology is the group of chain homotopy classes of degree~$p$ chain maps, which we denote by $[C^{-\ast}, D_\ast]$. We refer to~\cite{brown}*{I.0} for a detailed description of these standard constructions of chain complexes. The left vertical map comes from the inclusion of constant functions. The right vertical map is the induction from ${\mathbb Z}[\Gamma]$ to $L^\infty(\mu)\bar\rtimes\Gamma$. The upper horizontal arrow sends $1\otimes x\otimes y$ to the map $g\mapsto\overline{g(x)}\cdot y$ for $g\in C^{-\ast}$. The lower horizontal arrow is the map \begin{equation}\label{eq: lower horizontal arrow} f\otimes x \otimes y\mapsto \Bigl(a\otimes g\mapsto a\cdot \overline{f\cdot g(x)}\otimes y \Bigr). \end{equation} To verify that this map is well defined we check that $f(\gamma\_)\otimes x\otimes y$ and $f\otimes \gamma x\otimes \gamma y$ have the same image. This follows from \begin{align*} a\overline{f(\gamma\_)g(x)}\otimes y= a\overline{f(\gamma\_)g(x)}\gamma^{-1}\otimes \gamma y= a\overline{\gamma f(\gamma\_)g(x)}\otimes \gamma y &= a\overline{f(\_)\gamma g(x)}\otimes \gamma y\\ &= a\overline{f(\_)g(\gamma x)}\otimes \gamma y. \end{align*} We leave the verification of the chain map property to the reader. Next let $Y$ be a topological space with a free $\Gamma$-action. Set $C_\ast=D_\ast=C_\ast(Y)$. Let $A_\ast\colon C_\ast(Y\times Y)\to C_\ast(Y)\otimes_{\mathbb Z} C_\ast (Y)$ be the Alexander-Whitney map, and let $\Delta_\ast\colon C_\ast(Y)\to C_\ast(Y\times Y)$ be the map induced by the diagonal embedding. If we compose the horizontal maps in the commutative square above with the chain maps $\operatorname{id}_{\mathbb Z}\otimes_{{\mathbb Z}[\Gamma]}(A_\ast\circ \Delta_\ast)$ and $\operatorname{id}_{L^\infty(\mu)}\otimes_{{\mathbb Z}[\Gamma]}(A_\ast\circ\Delta_\ast)$, respectively, and take homology, we obtain the following commutative square.
The left vertical map is induced by the inclusion of constant functions. \begin{equation}\label{eq: cup product} \begin{tikzcd} H_d(\Gamma\backslash Y)\cong H_d^\Gamma(Y; {\mathbb Z})\arrow[d]\arrow[r,"\cap\_"] & \bigl[C^{-\ast}(Y),C_{d+\ast}(Y)\bigr]\arrow[d]\\ H_d^\Gamma\bigl(Y; L^\infty(\mu)\bigr)\arrow[r]& \bigl[ L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]}C^{-\ast}(Y), L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]}C_{d+\ast}(Y)\bigr] \end{tikzcd} \end{equation} The upper and lower horizontal maps are variants of the cap product. \begin{definition}\label{def: von Neumann rank} Let $Z$ be a connected space with fundamental group~$\Gamma$. Let $x\in H_d(Z)$. The \emph{von Neumann rank} of $x$ is defined as \[\dim_{L(\Gamma)}\Bigl(\bigoplus_{n\ge 0} \operatorname{im}\Bigl(H^n_\Gamma\bigl(\widetilde Z;L(\Gamma)\bigr)\xrightarrow{x\cap\_} H_{d+n}^\Gamma\bigl(\widetilde Z;L(\Gamma)\bigr)\Bigr)\Bigr). \] \end{definition} \begin{remark}\label{rem: equivariant duality} Let $M$ be a closed oriented $d$-manifold. The sum of the $\ell^2$-Betti numbers of $M$ is the von Neumann rank of its fundamental class. This is a direct consequence of (equivariant) Poincar\'e duality, which says that the image of the fundamental class under the upper horizontal map in~\eqref{eq: cup product} is a chain homotopy equivalence. \end{remark} \begin{theorem}\label{thm: bound von Neumann rank} Let $Z$ be a connected space with fundamental group~$\Gamma$. Then the von Neumann rank of a homology class $[x]\in H_d(Z)$ is bounded from above by $d\cdot \norm{[j_\ast^Z(x)]}_{\mathbb Z}^X$. \end{theorem} \begin{proof} Suppose the image of $x$ under the map induced by the inclusion of constant functions is homologous to a cycle $\sum_{k=1}^m a_k\otimes \sigma_k$ where $a_k\in C(X;{\mathbb Z})$ and $\sigma_k$ is a singular $d$-simplex in $\widetilde Z$. Via the embedding $C(X;{\mathbb Z})\hookrightarrow L^\infty(\mu)$ we obtain a homology class $[\sum_{k=1}^m a_k\otimes \sigma_k]\in H_d^\Gamma(\widetilde Z;L^\infty(\mu))$. The cap product with $\sum_{k=1}^m a_k\otimes \sigma_k$, that is, the image of $\sum_{k=1}^m a_k\otimes \sigma_k$ under the lower horizontal map of~\eqref{eq: cup product}, is represented by the $L^\infty(\mu)\bar\rtimes\Gamma$-chain homomorphism whose degree $i$ part is \begin{equation*} L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]} C^{i}(\widetilde Z)\to L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]} C_{d-i}(\widetilde Z),~~ 1\otimes g\mapsto \sum_{k=1}^m \overline{a_k\cdot g(\sigma_k\rfloor_i)}\otimes \sigma_k\lfloor_{d-i}. \end{equation*} Here $\sigma\rfloor_i$ and $\sigma\lfloor_{d-i}$ denote the front $i$-face and the back $(d-i)$-face of $\sigma$, respectively. It clearly factorizes over the $L^\infty(\mu)\bar\rtimes\Gamma$-homomorphism \begin{equation*} L^\infty(\mu)\bar\rtimes\Gamma\otimes_{{\mathbb Z}[\Gamma]}C^i(\widetilde Z)\to \bigoplus_{k=1}^m L^\infty(\mu)\bar\rtimes\Gamma\cdot\chi_{\operatorname{supp}(a_k)},~~ y\otimes g\mapsto \bigl( y\overline{a_k g(\sigma_k\rfloor_i)}\bigr)_k=\bigl( y\overline{g(\sigma_k\rfloor_i)}a_k\bigr)_k. \end{equation*} Further, we have \[ \dim_{L^\infty(\mu)\bar\rtimes\Gamma}\Bigl( \bigoplus_{k=1}^m L^\infty(\mu)\bar\rtimes\Gamma\cdot\chi_{\operatorname{supp}(a_k)}\Bigr)=\sum_{k=1}^m\mu(\operatorname{supp}(a_k))\le \bigl\lVert \sum_{k=1}^m a_k\otimes \sigma_k\bigr\rVert_{\mathbb Z}^X.\] The last inequality uses the fact that $a_k$ is integer-valued.
Hence the $L^\infty(\mu)\bar\rtimes\Gamma$-dimension of the image of the cap product with $[\sum_{k=1}^m a_k\otimes \sigma_k]$, that is, of the module \begin{equation}\label{eq: induced image} \bigoplus_{n\ge 0}\operatorname{im}\Bigl(H^n_\Gamma(\widetilde Z;L^\infty(\mu)\bar\rtimes\Gamma)\to H^\Gamma_{d+n}(\widetilde Z;L^\infty(\mu)\bar\rtimes\Gamma)\Bigr), \end{equation} is bounded by~$\norm{[j_\ast^Z(x)]}_{\mathbb Z}^X$. It remains to verify that the $L^\infty(\mu)\bar\rtimes\Gamma$-dimension of~\eqref{eq: induced image} is the von Neumann rank of~$[x]$. Since $L(\Gamma)\subset L^\infty(\mu)\bar\rtimes\Gamma$ is a flat ring extension~\cite{sauer-groupoid}*{Theorem~4.3}, we obtain that \begin{multline*} \bigoplus_{n\ge 0}\operatorname{im}\Bigl(H^n_\Gamma(\widetilde Z;L^\infty(\mu)\bar\rtimes\Gamma)\to H^\Gamma_{d+n}(\widetilde Z;L^\infty(\mu)\bar\rtimes\Gamma)\Bigr)\\ \cong L^\infty(\mu)\bar\rtimes\Gamma\otimes_{L(\Gamma)}\bigoplus_{n\ge 0}\operatorname{im}\Bigl(H^n_\Gamma(\widetilde Z; L(\Gamma))\to H^\Gamma_{d+n}(\widetilde Z;L(\Gamma))\Bigr) \end{multline*} where the maps on the right hand side are induced by the cap product with $[x]$. Since the von Neumann dimension is compatible with induction~\cite{sauer-groupoid}*{Theorem~2.6}, the proof is finished. \end{proof} \section{The category of Cantor bundles}\label{sec: category cantor bundles} In~\ref{sub: cantor bundles} we introduce the central notion of a Cantor bundle. A Cantor bundle comes with a map to the Cantor set~$X$. In general, a Cantor bundle is not a locally trivial bundle over~$X$; see Example~\ref{exa: non trivial Cantor bundle}. It is, however, locally trivial when restricted to compacta (see Lemma~\ref{lem: decomposition into boxes}). We also introduce metric Cantor bundles, which are Cantor bundles whose fibers over $X$ come with the structure of a metric space. In~\ref{sub: cantor bundle maps} we define Cantor bundle maps. We also consider pushouts of Cantor bundles. In~\ref{sub: from chains to measure chains} we study the functoriality of certain chain complexes attached to Cantor bundles. \subsection{Cantor bundles}\label{sub: cantor bundles} We define the data of a product atlas for a space over $X$. \begin{definition} Let $W$ be a topological space and $\operatorname{pr}\colon W\to X$ a continuous map to the Cantor set. We denote the fiber over $x\in X$ by $W_x$. For $A\subset X$ we write $W\vert_A$ for $\operatorname{pr}^{-1}(A)\subset W$. We introduce the notion of a \emph{product atlas} for $W$: \begin{itemize} \item A \emph{product chart} for $W$ consists of a clopen subset $A\subset X$, an open subset $U\subset W$, a space $F$, and a homeomorphism $U\to A\times F$ over $A$. \item Two product charts $c_i\colon U_i\to A_i\times F_i$, $i\in\{1,2\}$, are \emph{compatible} if there are subspaces $F_i'\subset F_i$ such that $c_i(U_1\cap U_2)=(A_1\cap A_2)\times F_i'$ and the transition map \[c_2\circ c_1^{-1}\colon (A_1\cap A_2)\times F_1'\to (A_1\cap A_2)\times F_2'\] is a product of $\operatorname{id}_{A_1\cap A_2}$ and a homeomorphism $g\colon F_1'\to F_2'$. \item A \emph{product atlas} for $W$ consists of a family of compatible product charts whose domains cover $W$; it is \emph{maximal} if it contains every product chart that is compatible with the product charts of the atlas. \end{itemize} \end{definition} If in the definition of a product atlas we replaced the Cantor set by a connected space, then the existence of a product atlas for $W\to X$ would imply that $W\to X$ is trivial. This is in stark contrast to our situation.
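The following elementary construction illustrates the definition (recorded only as an example): for a clopen partition $X=A\cup A^c$ and two spaces $F_1$ and $F_2$, set
\[ W=\bigl(A\times F_1\bigr)\sqcup\bigl(A^c\times F_2\bigr)\]
with $\operatorname{pr}$ the projection to the first factor. The two tautological charts over $A$ and $A^c$ are compatible (their domains are disjoint) and cover $W$, hence they form a product atlas, although the fibers over $A$ and over $A^c$ need not be homeomorphic.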
\begin{definition} Let $\operatorname{pr}\colon W\to X$ be a topological space over $X$ endowed with a maximal product atlas. \begin{itemize} \item A relatively compact subset $K\subset W$ is called a \emph{box} if there is a product chart $c\colon U\to A\times F$ such that $K\subset U$ and $c(K)=A\times F'$ for a subspace $F'\subset F$. \item For a box $K$ and for all $x,y\in A\mathrel{\mathop:}=\operatorname{pr}(K)$ the map \[ \tau_{x,y}\colon K_x\xrightarrow{\cong} K_y\] defined by \[ \tau_{x,y}(p)=c^{-1}\bigl( y, \operatorname{pr}_2(c(p))\bigr)\] for a choice of product chart $K\subset U\xrightarrow{c} A\times F$ is independent of the choice of chart. We say that $\tau_{x,y}$ is the \emph{parallel transport} inside the box $K$. \end{itemize} \end{definition} \begin{lemma}\label{lem: decomposition into boxes} Let $\operatorname{pr}\colon W\to X$ be a locally compact Hausdorff space over $X$ endowed with a maximal product atlas. For every compact subset $K\subset W$ there is a relatively compact, open subset $L\subset W$ containing $K$ and a clopen partition $X=A_1\cup\dots\cup A_n$ such that $L\vert_{A_i}$ is a box for each $i\in\{1,\dots,n\}$. Further, if $K$ is a finite union of open boxes, we may choose $L=K$. \end{lemma} \begin{proof} Since $W$ is locally compact, each point lies in an open box. Since $K$ is compact it is covered by finitely many open boxes $B_1, \dots, B_n$. Let $L$ be the union of these boxes. Every box is relatively compact, and so is $L$. Since the clopen subsets of $X$ form a set algebra, there is a clopen partition $A_1, \dots, A_m$ of $\operatorname{pr}(L)$ that is subordinate to $\operatorname{pr}(B_1), \dots, \operatorname{pr}(B_n)$. We claim that $L\vert_{A_i}$ is a box: Pick $x_0\in A_i$. We construct a product chart \[ f_i\colon L\vert_{A_i}\to A_i\times (W_{x_0}\cap L)\] as follows. Every $p\in L_x\subset L\vert_{A_i}$ lies in a box $B_k$ with $A_i\subset \operatorname{pr}(B_k)$. We set $f_i(p)=(x, \tau_{x,x_0}^{B_k}(p))$ where $\tau_{x,x_0}^{B_k}$ is the parallel transport inside $B_k$. We have \[ \tau_{x,x_0}^{B_k}(p)=\tau_{x,x_0}^{B_l}(p)\text{ for $p\in W_x\cap B_k\cap B_l$ and $\{x,x_0\}\subset \operatorname{pr}(B_k\cap B_l)$}, \] hence $f_i$ is well defined. The map $f_i$ is a homeomorphism, and its inverse maps $(x, q)$ to $\tau_{x_0,x}^{B_k}(q)$ for any box $B_k$ with $q\in B_k$. Since $f_i$ is compatible with all the boxes $B_k$ and its domain is covered by them, $f_i$ lies in the maximal product atlas. So $L\vert_{A_i}$ is a box. Moreover, we can add the complement of $\operatorname{pr}(L)$ to the clopen partition above to get the statement of the lemma. \end{proof} \begin{definition}[Cantor bundle] A \emph{Cantor bundle} is a locally compact Hausdorff space~$W$ endowed with a continuous proper $\Gamma$-action, a continuous $\Gamma$-equivariant map $\operatorname{pr}\colon W\to X$, and a maximal product atlas such that the $\Gamma$-action on $W$ has a Borel fundamental domain that is a union of finitely many boxes. \end{definition} Note that the action on a Cantor bundle is automatically free since it lies over the free action on $X$. \begin{definition} Let $\operatorname{pr}\colon W\to X$ be a Cantor bundle, and let $V\subset W$ be a $\Gamma$-invariant subspace so that for every $p\in V$ there is a product chart $U\to A\times F$ such that $p\in U$ and $U\cap V$ is a box. Then we call $\operatorname{pr}\vert_V\colon V\to X$ a \emph{Cantor subbundle} of $\operatorname{pr}\colon W\to X$.
\end{definition} \begin{lemma} Let $\mathcal{A}$ be a maximal product atlas of a Cantor bundle $W\to X$. A~Cantor subbundle $V\subset W$ is a Cantor bundle with respect to the product atlas \[ \mathcal{A}_V\mathrel{\mathop:}= \bigl\{ U\xrightarrow{c} A\times F\mid c\in\mathcal{A}, U\cap V\text{ is a box}\bigr\}.\] \end{lemma} \begin{proof} The only non-obvious statement is the existence of a fundamental domain for $V$ consisting of finitely many boxes. Let $D\subset W$ be a fundamental domain for $W$ which is a union of boxes $B_1,\dots, B_n$. Note that $D$ is relatively compact as every $B_i$ is relatively compact. Then $V\cap D$ is a fundamental domain for $V$. At every point $p\in V\cap D$ we can choose a product chart with a domain $U_p\ni p$ such that $U_p\cap V$ is a box. By relative compactness we can cover $V\cap D$ with finitely many such product chart domains $U_{p_1}, \dots, U_{p_m}$. Every set \[U_{p_i}\cap V\cap D =\bigcup_{j=1}^n U_{p_i}\cap V\cap B_j\] is a box, hence $V\cap D$ is a finite union of boxes. \end{proof} \begin{example}\label{exa: cell} Let $A\subset X$ be a clopen subset and $Y$ any compact space. We consider the trivial Cantor bundle $X\times (\Gamma\times Y)$ with the projection to the first factor and endowed with the $\Gamma$-action \[ \gamma\cdot (x,\gamma', y)=(\gamma x, \gamma \gamma', y).\] Then $A\times \Gamma\times Y$ endowed with the projection \[ A\times\Gamma\times Y\to X,~(a,\gamma, y)\mapsto \gamma a\] and the left translation $\Gamma$-action on the second factor is a Cantor subbundle via the embedding \[ A\times\Gamma\times Y\hookrightarrow X\times\Gamma\times Y,~(a,\gamma, y)\mapsto (\gamma a, \gamma, y).\] \end{example} \begin{remark}[Finite isotropy disappears]\label{rem: finite isotropy} Let $H<\Gamma$ be a finite subgroup. Then $X\times \Gamma/H$ is a Cantor bundle with the projection to $X$ and the diagonal $\Gamma$-action. Let $A\subset X$ be a Borel fundamental domain for the $H$-action on~$X$. Then \[ X\times \Gamma/H\cong A\times \Gamma\] as Cantor bundles, where the latter is the Cantor bundle from the previous example. An isomorphism is given by \[ A\times\Gamma\to X\times \Gamma/H,~(a,\gamma)\mapsto (\gamma a, \gamma H).\] \end{remark} \begin{example}[Non-trivial Cantor bundle]\label{exa: non trivial Cantor bundle} We describe an example of a Cantor bundle whose fibers exhibit uncountably many homeomorphism types. In particular, it is not a trivial Cantor bundle. Let $\Gamma={\mathbb Z}$ and let $X$ be a minimal subshift of the shift action of ${\mathbb Z}$ on $\{0,1\}^{\mathbb Z}$ such that the ${\mathbb Z}$-action on the Cantor set~$X$ is free. Such a minimal subshift exists due to~\cite{glasner}*{Theorem~4.2}.
Let $L$ be the following infinite $1$-dimensional simplicial complex: \begin{figure}[h] \begin{tikzpicture} \node[circle, fill, scale=0.5] (s0) at (0,0) {}; \node[below right=1mm of s0] {-2}; \node[circle,fill, scale=0.5] (s1) at (2,0) {}; \node[below right=1mm of s1] {-1}; \node[circle,fill, scale=0.5] (s2) at (4,0) {}; \node[below right=1mm of s2] {0}; \node[circle,fill, scale=0.5] (s3) at (6,0) {}; \node[below right=1mm of s3] {1}; \node[circle,fill, scale=0.5] (s4) at (8,0) {}; \node[below right=1mm of s4] {2}; \path[thick] (s0) edge node[above] {} (s1) (s1) edge node[above] {} (s2) (s2) edge node[above] {} (s3) (s3) edge node[above] {} (s4); \draw (s0) edge (s0.south); \node [left=of s0] {} edge[thick, dashed] (s0); \node [right=of s4] {} edge[thick, dashed] (s4); \node [draw,circle, fill, scale=0.5] [below=4mm of s0] {} edge (s0); \node [draw,circle, fill, scale=0.5] [above right=4mm of s0] {} edge (s0); \node [draw,circle, fill, scale=0.5] [above left =4mm of s0] {} edge (s0); \node [draw,circle, fill, scale=0.5] [below=4mm of s1] {} edge (s1); \node [draw,circle, fill, scale=0.5] [above right=4mm of s1] {} edge (s1); \node [draw,circle, fill, scale=0.5] [above left =4mm of s1] {} edge (s1); \node [draw,circle, fill, scale=0.5] [below=4mm of s2] {} edge (s2); \node [draw,circle, fill, scale=0.5] [above right=4mm of s2] {} edge (s2); \node [draw,circle, fill, scale=0.5] [above left =4mm of s2] {} edge (s2); \node [draw,circle, fill, scale=0.5] [below=4mm of s3] {} edge (s3); \node [draw,circle, fill, scale=0.5] [above right=4mm of s3] {} edge (s3); \node [draw,circle, fill, scale=0.5] [above left =4mm of s3] {} edge (s3); \node [draw,circle, fill, scale=0.5] [below=4mm of s4] {} edge (s4); \node [draw,circle, fill, scale=0.5] [above right=4mm of s4] {} edge (s4); \node [draw,circle, fill, scale=0.5] [above left =4mm of s4] {} edge (s4); \end{tikzpicture} \caption{The simplicial complex~$L$.} \end{figure} We have an obvious ${\mathbb Z}$-action on $L$ by translation. For $x=(x_i)\in X\subset \{0,1\}^{\mathbb Z}$ let $L_x\subset L$ be the subcomplex that consists of the horizontal line and of an upward caret at each $n$ with $x_n=1$ and a downward segment at each $n$ with $x_n=0$. 
\begin{figure}[h] \begin{tikzpicture} \node[circle, fill, scale=0.5] (s0) at (0,0) {}; \node[below right=1mm of s0] {-2}; \node[circle,fill, scale=0.5] (s1) at (2,0) {}; \node[below right=1mm of s1] {-1}; \node[circle,fill, scale=0.5] (s2) at (4,0) {}; \node[below right=1mm of s2] {0}; \node[circle,fill, scale=0.5] (s3) at (6,0) {}; \node[below right=1mm of s3] {1}; \node[circle,fill, scale=0.5] (s4) at (8,0) {}; \node[below right=1mm of s4] {2}; \path[thick] (s0) edge node[above] {} (s1) (s1) edge node[above] {} (s2) (s2) edge node[above] {} (s3) (s3) edge node[above] {} (s4); \draw (s0) edge (s0.south); \node [left=of s0] {} edge[thick, dashed] (s0); \node [right=of s4] {} edge[thick, dashed] (s4); \node [draw,circle, fill, scale=0.5] [above right=4mm of s0] {} edge (s0); \node [draw,circle, fill, scale=0.5] [above left =4mm of s0] {} edge (s0); \node [draw,circle, fill, scale=0.5] [above right=4mm of s1] {} edge (s1); \node [draw,circle, fill, scale=0.5] [above left =4mm of s1] {} edge (s1); \node [draw,circle, fill, scale=0.5] [below=4mm of s2] {} edge (s2); \node [draw,circle, fill, scale=0.5] [above right=4mm of s3] {} edge (s3); \node [draw,circle, fill, scale=0.5] [above left =4mm of s3] {} edge (s3); \node [draw,circle, fill, scale=0.5] [below=4mm of s4] {} edge (s4); \end{tikzpicture} \caption{$L_x$ for $x=(\dots, 1, 1, \mathbf{0}, 1, 0,\dots)\in\{0,1\}^{\mathbb Z}$.} \end{figure} Then \[ W=\bigl\{ (x, p)\mid p\in L_x\bigr\}\subset X\times L\] is a Cantor subbundle of the trivial Cantor bundle $X\times L$: The diagonal ${\mathbb Z}$-action on $X\times L$ restricts to $W$. For every $x=(x_i)\in X$ we consider the clopen neighborhood of $x$ \[ A_x(n)=\bigl\{ (y_i)\in X\mid \text{ $y_i=x_i$ for $i\in\{-n, \dots, n\}$}\bigr\},\] and let $L_x(n)\subset L_x$ be the finite subgraph obtained from $L_x$ by cutting off the horizontal line at $-n$ and $n$. We then have \[ W\cap \bigl(A_x(n)\times L_x(n)\bigr)= A_x(n)\times L_x(n)\subset X\times L.\] Running through $x\in X$ and $n\in{\mathbb N}$ we cover all of $W$. So $W$ is a Cantor subbundle. Since the valency of $L_x$ at each vertex on the horizontal line is at least~$3$, two fibers $L_x$ and $L_y$ are homeomorphic if and only if they are simplicially isomorphic. Since $X$ is uncountable, $W$ has uncountably many homeomorphism types of fibers. \end{example} \begin{definition}[Metric and Riemannian Cantor bundles]\hfill\\ A Cantor bundle $\operatorname{pr}\colon W\to X$ is \emph{metric} if \begin{itemize} \item each fiber $W_x$ is endowed with a metric inducing the topology of $W_x$, \item the maps $W_x\to W_{\gamma\cdot x}$ induced by multiplication are isometries for every $x\in X$ and every $\gamma\in \Gamma$, and \item for each product chart $c\colon U\to A\times F$ there is a metric on $F$ such that $c$ is fiberwise an isometry. \end{itemize} If, in addition, each fiber $W_x$ is a $d$-dimensional Riemannian manifold with the induced Riemannian metric, we say that $\operatorname{pr}\colon W\to X$ is a \emph{$d$-dimensional Riemannian Cantor bundle}. Finally, \emph{metric Cantor subbundles} are defined similarly to Cantor subbundles. \end{definition} \begin{example}\label{exa: manifold example} The product space $\xxm$ with the diagonal $\Gamma$-action is a Riemannian Cantor bundle. Each fiber $\{x\}\times \widetilde{M}\cong\widetilde{M}$ carries the Riemannian metric lifted from the Riemannian metric on $M$. The maximal product atlas is defined to be the set of all product charts that are compatible with $\operatorname{id}\colon \xxm\to\xxm$.
The $\Gamma$-action possesses a relatively compact Borel fundamental domain $F\subset\widetilde M$. Then the box $X\times F$ is a Borel fundamental domain of the $\Gamma$-action on~$\xxm$. \end{example} \subsection{Cantor bundle maps}\label{sub: cantor bundle maps} To obtain a category of Cantor bundles we define the morphisms next. \begin{definition}\label{def: product-like map} Let $V$ and $W$ be topological spaces over $X$ endowed with maximal product atlases. Let $\Phi\colon V\to W$ be a continuous map over $X$. Let $c\colon U_V\to A_V\times F_V$ be a product chart of $V$. We say that $\Phi\vert_{U_V}$ is a \emph{product map} if there is a product chart $d\colon U_W\to A_W\times F_W$ of $W$ such that $d\circ \Phi\circ c^{-1}\colon A_V\times F_V\to A_W\times F_W$ is given by $(a,v)\mapsto (a,g(v))$ for a continuous map $g\colon F_V\to F_W$; in particular $A_V\subset A_W$. We say that $\Phi$ is \emph{locally product-like} if every point of $V$ lies in the domain of a product chart on which $\Phi$ is a product map. \end{definition} \begin{definition} A continuous map over $X$ between Cantor bundles is called a \emph{Cantor bundle map} if it is $\Gamma$-equivariant and locally product-like. A Cantor bundle map between metric Cantor bundles is called \emph{Lipschitz} if there is some $L>0$ such that it is $L$-Lipschitz on each fiber. \end{definition} \begin{remark}\label{rem: automatic properness} A Cantor bundle map $V\to W$ is automatically proper since the $\Gamma$-actions on $V$ and $W$ are proper and both actions possess relatively compact fundamental domains. \end{remark} The composition of (Lipschitz) Cantor bundle maps is a (Lipschitz) Cantor bundle map. So we obtain a category of Cantor bundles with Cantor bundle maps as morphisms. The notion of product map does not depend on the choices of product charts as we show next. \begin{lemma}\label{lem: box decomposition for maps} Let $V$ and $W$ be locally compact Hausdorff spaces over $X$ equipped with maximal product atlases. Let $\Phi\colon V\to W$ be locally product-like and proper. Then every compact subset $K\subset W$ is contained in a relatively compact, open subset $L\subset W$ such that there is a clopen partition $X=A_1\cup\dots\cup A_n$ with the following properties. \begin{itemize} \item $L\vert_{A_i}$ is a box for each $i\in\{1,\dots,n\}$. \item $\Phi^{-1}(L)\vert_{A_i}$ is a box for each $i\in\{1,\dots,n\}$. \item The restriction of $\Phi$ to \[\Phi^{-1}(L)\vert_{A_i}\to L\vert_{A_i}\] is a product map for each $i\in\{1,\dots,n\}$. \end{itemize} If $K$ is an open box, we may choose $L$ to be $K$. \end{lemma} \begin{proof} Every compact subset of $W$ is contained in a relatively compact, open subset $L\subset W$ with the properties as in Lemma~\ref{lem: decomposition into boxes}. We may assume that $L$ itself is a box. Since $\Phi$ is proper, $\Phi^{-1}(L)$ is relatively compact as well. We cover $\Phi^{-1}(L)$ by finitely many open boxes $B_i$, $i\in I$, such that $\Phi$ is a product map on each box. The intersection of two boxes is a box. The preimage of a box under $\Phi\vert_{B_i}$ is a box. Hence $\Phi^{-1}(L)$ is the union of boxes $B_i'\mathrel{\mathop:}= \Phi^{-1}(L)\cap B_i$, $i\in I$. On each $B_i'$ the map $\Phi$ is a product map. Let $X=A_1\cup \dots\cup A_n$ be a clopen partition subordinate to $\operatorname{pr}_V(B_i')$, $i\in I$. It exists since the clopen sets of $X$ form a set algebra. By the same argument as in the proof of Lemma~\ref{lem: decomposition into boxes} each $\Phi^{-1}(L)\vert_{A_j}$, $j\in\{1,\dots, n\}$, is a box.
As in the proof of Lemma~\ref{lem: decomposition into boxes} one sees that the parallel transport on the boxes $B_i'$ and the choice of some $x_0\in A_j$ yield a product chart \[ \Phi^{-1}(L)\vert_{A_j}\xrightarrow{\cong} A_j\times \bigl(\Phi^{-1}(L)\cap V_{x_0}\bigr).\] Similarly for $L\vert_{A_j}$. Since $\Phi$ is a product map on each $B_i'$, it is compatible with the parallel transport within each $B_i'$, and so the restriction of $\Phi$ to $\Phi^{-1}(L)\vert_{A_j}\to L\vert_{A_j}$ is a product map. \end{proof} \begin{lemma}\label{lem: box decomposition for maps - domain focus} Let $V$ and $W$ be locally compact Hausdorff spaces over $X$ equipped with maximal product atlases. Let $\Phi\colon V\to W$ be locally product-like and proper. Then every compact subset $K\subset V$ is contained in a relatively compact, open subset $L\subset V$ such that there is a clopen partition $X=A_1\cup\dots\cup A_n$ with the following properties. \begin{itemize} \item $L\vert_{A_i}$ is a box for each $i\in\{1,\dots,n\}$. \item The restriction of $\Phi$ to $L\vert_{A_i}$ is a product map for each $i\in\{1,\dots,n\}$. \end{itemize} If $K$ is an open box, we may choose $L$ to be $K$. \end{lemma} \begin{proof} This follows from applying Lemma~\ref{lem: box decomposition for maps} to the compact subset $\Phi(K)$. The last statement follows from the fact that the intersection of two boxes is a box. \end{proof} Next we discuss categorical pushouts in the category of Cantor bundles. \begin{lemma}\label{lem: pushouts of Cantor bundles} A commutative square of Cantor bundles and Cantor bundle maps that is a pushout of topological spaces is a pushout in the category of Cantor bundles. \end{lemma} \begin{proof} Let the following diagram \[\begin{tikzcd} A\ar[r, "f"]\ar[d,"i"] & C\ar[d, "j"]\\ B\ar[r, "g"] & D \end{tikzcd}\] be commutative and consist of Cantor bundles and Cantor bundle maps. We further assume that it is a pushout of topological spaces. Let $Z$ be a Cantor bundle, and let $r\colon B\to Z$ and $s\colon C\to Z$ be Cantor bundle maps that are compatible over~$A$, i.e.~$r\circ i=s\circ f$. By the pushout property there is a unique map $t\colon D\to Z$ that makes everything commute. It is obvious that $t$ is equivariant and lies over $X$. It remains to show that $t$ is locally product-like. Let $p\in D$ be a point over $x\in X$. Let $U\subset D$ be an open box around $p$. By Lemma~\ref{lem: box decomposition for maps} there are clopen subsets $X_C$ and $X_B$ of $X$ that contain $x$ such that $j^{-1}(U)\vert_{X_C}$ and $g^{-1}(U)\vert_{X_B}$ are boxes and the restrictions of $j$ and $g$ to $j^{-1}(U)\vert_{X_C}$ and $g^{-1}(U)\vert_{X_B}$ are product maps. By the same lemma we obtain a clopen neighborhood $X_A\subset X_B\cap X_C$ of $x$ such that $i^{-1}(g^{-1}(U))\vert_{X_A}=f^{-1}(j^{-1}(U))\vert_{X_A}$ is a box and the restrictions of $i$ and $f$ to $i^{-1}(g^{-1}(U))\vert_{X_A}$ are product maps. By applying Lemma~\ref{lem: box decomposition for maps - domain focus} to the boxes $g^{-1}(U)\vert_{X_A}$ and $j^{-1}(U)\vert_{X_A}$ we find a smaller clopen neighborhood $X_Z\subset X_A$ of $x$ such that $r$ and $s$ restrict to product maps on these boxes over $X_Z$. Now the box $U\vert_{X_Z}$ is a pushout of the boxes $g^{-1}(U)\vert_{X_Z}$ and $j^{-1}(U)\vert_{X_Z}$ along the box $i^{-1}(g^{-1}(U))\vert_{X_Z}$. All maps in the pushout square are product maps as well as the restrictions of $r$ and $s$ to the corners. Hence $t$ restricted to $U\vert_{X_Z}$ is a product map.
\end{proof} \begin{example}[Continuing Example~\ref{exa: non trivial Cantor bundle}] The Cantor bundle~$W$ in Example~\ref{exa: non trivial Cantor bundle} can be written as a pushout. Let \[A=\bigl\{x\in X\subset \{0,1\}^{\mathbb Z}\mid x_0=1\bigr\}\] and let $A^c\subset X$ be its complement. Let $X\times {\mathbb R}$ be the trivial Cantor bundle with the diagonal ${\mathbb Z}$-action. On ${\mathbb R}$ we consider the usual translation action of ${\mathbb Z}$. The pushout for $W$ can be written semi-formally as \[ \begin{tikzcd} A\times {\mathbb Z}\times\begin{tikzpicture} \node [draw,circle, color=gray,fill, scale=0.5] [above=.1mm]{}; \node [draw,opacity=0,circle, fill, scale=0.5] [above right=4mm of s0] {} edge[opacity=0] (s0); \node [draw,opacity=0,circle, fill, scale=0.5] [above left =4mm of s0] {} edge[opacity=0] (s0); \end{tikzpicture} ~\coprod~ A^c\times{\mathbb Z}\times \begin{tikzpicture} \node [draw,circle, opacity=0,fill, scale=0.5] [below=4mm of s0] {} edge[opacity=0] (s0); \node [draw,circle, color=gray, fill, scale=0.5] [above=.1mm]{}; \end{tikzpicture}\ar[d,hook,shorten <= -1.5em]\ar[r] &X\times{\mathbb R}\ar[d]\\ A\times {\mathbb Z}\times \begin{tikzpicture} \node [draw,circle, color=gray,fill, scale=0.5] {}; \node [draw,circle, fill, scale=0.5] [above right=4mm of s0] {} edge (s0); \node [draw,circle, fill, scale=0.5] [above left =4mm of s0] {} edge (s0); \end{tikzpicture} ~\coprod~ A^c\times{\mathbb Z}\times \begin{tikzpicture} \node [draw,circle, color=gray,fill, scale=0.5] {}; \node [draw,circle, fill, scale=0.5] [below=4mm of s0] {} edge (s0); \end{tikzpicture} \ar[r] & W. \end{tikzcd} \] Here we regard a space of the type $A\times{\mathbb Z}\times M$ as a Cantor bundle as in Example~\ref{exa: cell}. \end{example} \begin{lemma}\label{lem: map to classifying space} Let $N$ be a cocompact proper $\Gamma$-CW complex. Then $X\times N$ with the diagonal $\Gamma$-action is a Cantor bundle. Further, there is a locally product-like map $X\times N\to X\times E\Gamma$ over $X$ to the classifying space of~$\Gamma$ which is $\Gamma$-equivariant with respect to the diagonal actions. \end{lemma} \begin{proof} The $\Gamma$-CW complex $N$ is built via equivariant pushouts where we successively attach finitely many equivariant cells of the form $\Gamma/H\times D^n$ with finite $H<\Gamma$: \[ \begin{tikzcd} \coprod_{i\in I_n} \Gamma/H_i \times S^{n-1}\ar[d,hook]\ar[r] &N^{(n-1)}\ar[d, hook]\\ \coprod_{i\in I_n} \Gamma/H_i\times D^n\ar[r] & N^{(n)} \end{tikzcd} \] Taking a product with the compact Hausdorff space $X$ preserves pushouts. With Remark~\ref{rem: finite isotropy} we see that $X\times N$ is inductively built via finitely many pushouts of the form \[ \begin{tikzcd} \coprod_{i\in I_n} A_i\times \Gamma \times S^{n-1}\ar[d,hook]\ar[r] &X\times N^{(n-1)}\ar[d, hook]\\ \coprod_{i\in I_n} A_i\times \Gamma\times D^n\ar[r] & X\times N^{(n)}. \end{tikzcd} \] There is a Borel fundamental domain of $X\times N$, namely the finite union, over all $n$ and $i\in I_n$, of the products of $A_i$ with the open $n$-cell associated with~$i$. Hence $X\times N$ is a Cantor bundle. Next we repeatedly apply Lemma~\ref{lem: pushouts of Cantor bundles} to the above pushout for $n=1,2,\dots, \dim(N)$ and to the target $X\times E \Gamma$ to construct equivariant, locally product-like maps $X\times N^{(n)}\to X\times E\Gamma$. Strictly speaking, the target $X\times E\Gamma$ is not necessarily a Cantor bundle as required in the lemma since $E\Gamma$ might not be a finite $\Gamma$-CW complex.
However, the images of the maps below lie in the product of $X$ and a finite $\Gamma$-CW subcomplex, which allows us to use Lemma~\ref{lem: pushouts of Cantor bundles}. On the $0$-skeleton $\coprod_{i\in I_0} A_i\times\Gamma$ we define an equivariant and locally product-like map to $X\times E\Gamma$ as the equivariant extension that maps $(a,1)$ to $(a,p)$ for some chosen point $p\in E\Gamma$. To proceed inductively via the pushout property, we need to extend a continuous locally product-like equivariant map $A_i\times \Gamma\times S^{n-1}\to X\times E\Gamma$ to $A_i\times \Gamma\times D^n$. By decomposing each $A_i$ into a suitable clopen partition we may assume that the restriction $A_i\times\{1\}\times S^{n-1} \to X\times E\Gamma$ is a product of the inclusion $A_i\hookrightarrow X$ and a continuous map $S^{n-1}\to E\Gamma$. Since $E\Gamma$ is contractible, we can extend the map $A_i\times\{1\}\times S^{n-1} \to E\Gamma$ to $A_i\times\{1\}\times D^n$ and then extend further to $A_i\times\Gamma\times D^n$ by equivariance. The resulting map is locally product-like. \end{proof} \subsection{Chains and norms of chains in the context of Cantor bundles}\label{sub: from chains to measure chains} We consider various chain complexes involving singular chains, locally finite chains and measure chains for Cantor bundles or spaces over $X$. The abelian group $C(X;{\mathbb Z})$ carries a left ${\mathbb Z}[\Gamma]$-module structure and, via the involution on the group ring, also a right ${\mathbb Z}[\Gamma]$-module structure. Let $N$ be a $\Gamma$-space. For every $x\in X$ the map \begin{gather*} \operatorname{ev}_x\colon C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]} C_\ast(N)\to C_\ast^\mathrm{lf}(N)\\ \sum_i f_i\otimes\sigma_i\mapsto \sum_{\gamma\in\Gamma} \sum_if_i(\gamma^{-1}x)\gamma \sigma_i \end{gather*} to locally finite chains is a chain map~\cite{foliated}*{Lemma~2.5}. We consider the map \[ \operatorname{map}\bigl(\Delta^n, N\bigr)\to \mathcal{C}_n\bigl(X\times N\bigr),~~\sigma\mapsto\sigma^X\] that sends a singular $n$-simplex $\sigma$ in $N$ to the finite, compactly supported Borel measure $\sigma^X$ on $\operatorname{map}(\Delta^n, X\times N)$ characterized by the property \[ \int_{\operatorname{map}(\Delta^n, X\times N)}g\,d\sigma^X=\int_X g\bigl(\Delta^n\xrightarrow{\sigma}\{x\}\times N\hookrightarrow X\times N\bigr)d\mu(x)\] for every compactly supported continuous function $g$ on $\operatorname{map}(\Delta^n, X\times N)$. If $f\in C(X)$, then $f\ast\sigma^X$ denotes the measure with \[ \int_{\operatorname{map}(\Delta^n, X\times N)}g\,d(f\ast\sigma^X)=\int_X f(x)g\bigl(\Delta^n\xrightarrow{\sigma}\{x\}\times N\hookrightarrow X\times N\bigr)d\mu(x).\] The map \begin{gather*} C(X;{\mathbb Z})\otimes C_\ast(N)\hookrightarrow \mathcal{C}_\ast(X\times N)\\ \sum_i f_i\otimes \sigma_i\mapsto \sum_i f_i\ast \sigma_i^X \end{gather*} is a $\Gamma$-equivariant injective chain map. Here $\Gamma$ acts diagonally on the left hand side, and the left action on the right hand side is induced by the diagonal action on $X\times N$. \begin{definition}\label{def: diffusion} The chain map $C(X;{\mathbb Z})\otimes C_\ast(N)\hookrightarrow \mathcal{C}_\ast(X\times N)$ is called the \emph{diffusion embedding}. \end{definition} \begin{lemma}\label{lem: induced map in smaller subcomplex} Let $U$ and $V$ be topological Hausdorff spaces. We endow $X\times U$ and $X\times V$ with the obvious maximal product atlases. Let $\Phi\colon X\times U\to X\times V$ be a locally product-like map over~$X$.
Then there is a chain map, indicated by the dashed arrow, such that the following diagram commutes. Furthermore, this chain map is non-increasing with respect to the $X$-parametrised integral simplicial norm. \begin{equation*} \begin{tikzcd} \mathcal{C}_\ast(X\times U)\arrow[r, "\Phi_\ast"] & \mathcal{C}_\ast(X\times V)\\ C(X; {\mathbb Z})\otimes C_\ast(U)\arrow[r,dashed]\arrow[u,hook] & C(X;{\mathbb Z})\otimes C_\ast(V)\arrow[u, hook] \end{tikzcd} \end{equation*} Here the upper horizontal map is the induced map in measure chains. The vertical maps are the diffusion embeddings. We will also denote the dashed arrow by~$\Phi_\ast$. \begin{proof} Let $\sigma\colon \Delta^p\to U$ be a singular simplex and let $f\in C(X;{\mathbb Z})$. We apply Lemma~\ref{lem: box decomposition for maps} to the map \[ X\times\Delta^p\xrightarrow{\operatorname{id}_X\times \sigma}X\times U\xrightarrow{\Phi} X\times V.\] The image of this map is a compact subspace of the Hausdorff space $X\times V$, hence we can apply Lemma~\ref{lem: box decomposition for maps} even if $V$ is not locally compact. As a result there is a finite clopen partition $X=A_1\cup \dots\cup A_n$ and there are continuous maps $g_i\colon U\to V$ for $i\in\{1,\dots, n\}$ such that \[ \Bigl(\Delta^p\xrightarrow{\sigma}\{x\}\times U\xrightarrow{\Phi_x} X\times V\xrightarrow{\operatorname{pr}} V\Bigr)=g_i\circ \sigma~\text{ for $x\in A_i$.}\] Hence~$\Phi_\ast$ maps $f\otimes\sigma$ to the measure that is the image of $\sum_{i=1}^n f\cdot\chi_{A_i}\otimes g_i\circ\sigma$ under the diffusion embedding. The statement about the $X$-parametrised integral simplicial norm follows directly from this description. \end{proof} \begin{remark} The advantage of measure homology is that $\Phi$ obviously induces a chain map. It automatically follows that its restriction to the complex $C(X;{\mathbb Z})\otimes C_\ast(\_)$ is a $\Gamma$-equivariant chain map provided $\Phi$ is $\Gamma$-equivariant, for example a Cantor bundle map. A direct verification of functoriality that avoids measure homology would be more cumbersome. \end{remark} \section{Transverse measure theory on Cantor bundles}\label{sec: transverse measure} In this short section we define the notion of a transverse measure, which gives, in particular, a finite measure on the $\Gamma$-invariant Borel subsets of $\xxm$. As before, $\mu$ denotes the $\Gamma$-invariant probability measure on the Cantor set $X$. \begin{definition} Let $W$ be a standard $\Gamma$-space endowed with a $\Gamma$-invariant Borel measure $\lambda$. For any choice of a measurable $\Gamma$-fundamental domain $F\subset W$ we define a Borel measure $\lambda^{\text{tr}}$ on the $\sigma$-algebra of $\Gamma$-invariant Borel subsets of $W$ by \[ \lambda^{\text{tr}}(A)=\lambda(A\cap F).\] We call $\lambda^{\text{tr}}$ the \emph{transverse measure} induced by $\lambda$. \end{definition} \begin{remark}The definition of the transverse measure does not depend on the choice of the Borel fundamental domain. This is proved similarly to the situation of a lattice in a locally compact group. Let $F'\subset W$ be another Borel $\Gamma$-fundamental domain. Let $A\subset W$ be a $\Gamma$-invariant Borel subset. Then \begin{align*} \lambda(A\cap F)=\lambda \Bigl(A\cap \bigcup_{\gamma\in\Gamma}\gamma F'\cap F\Bigr) &=\lambda \Bigl(\bigcup_{\gamma\in\Gamma} A\cap \gamma F'\cap F\Bigr)\\ &=\lambda \Bigl(\bigcup_{\gamma\in\Gamma} \gamma^{-1}A\cap F'\cap \gamma^{-1} F\Bigr)\\ &=\lambda \Bigl(A\cap F'\cap\bigcup_{\gamma\in\Gamma}\gamma^{-1}F\Bigr)\\ &=\lambda (A\cap F').
\end{align*} \end{remark} \begin{definition} Retaining the setting of the previous definition, the pushforward of $\lambda^{\text{tr}}$ under the restriction of the quotient map $F\to \Gamma\backslash W$ is a Borel measure on $\Gamma\backslash W$ that we also call the \emph{transverse measure} induced by $\lambda$ and denote by the same symbol. \end{definition} \begin{definition} Let $\operatorname{pr}\colon W\to X$ be a metric Cantor bundle. Let $\mathcal{H}^d_x$ be the $d$-dimensional Hausdorff measure on $W_x$. Then \[ \int_X\mathcal{H}^d_x\,d\mu(x)\] defines a $\Gamma$-invariant Borel measure on $W$ that we call the \emph{$d$-dimensional Hausdorff measure} of the Cantor bundle $\operatorname{pr}$. We denote it generically by $\operatorname{vol}_d$. The induced transverse measure is denoted by $\operatorname{vol}^{\mathrm{tr}}_d$. \end{definition} The fact that $x\mapsto \mathcal{H}^d_x(A\cap W_x)$ is Borel measurable for every Borel subset $A\subset W$ follows from the existence of a product atlas. The $\Gamma$-invariance follows from the fact that $\mu$ is $\Gamma$-invariant and the multiplication with an element $\gamma\in\Gamma$ is fiberwise an isometry. So the above definition is justified. \begin{lemma} Let $\Phi\colon V\to W$ be a Lipschitz Cantor bundle map, and let $\phi$ be the induced map on $\Gamma$-quotients. Then the map \[ \Gamma\backslash W\to {\mathbb N}\cup\{\infty\},~w\mapsto \# \phi^{-1}(\{w\})\] is $\operatorname{vol}^{\mathrm{tr}}_d$-measurable, and we define the \emph{transverse $d$-volume} of $\Phi$ as \[ \operatorname{vol}^{\mathrm{tr}}_d(\Phi)=\int_{\Gamma\backslash W} \# \phi^{-1}(\{w\})d\operatorname{vol}^{\mathrm{tr}}_d(w).\] \end{lemma} \begin{proof} Let $F_W\subset W$ be a Borel fundamental domain of $\Gamma\curvearrowright W$ that is a finite union of boxes. Then $F_V=\Phi^{-1}(F_W)$ is a fundamental domain of $\Gamma\curvearrowright V$. By Lemma~\ref{lem: box decomposition for maps} there is a clopen partition $X=A_1\cup\dots\cup A_n$ such that each $V\vert_{A_i}\cap F_V$ is a box, each $W\vert_{A_i}\cap F_W$ is a box and the restriction \[ \Phi\colon V\vert_{A_i}\cap F_V\to W\vert_{A_i}\cap F_W\] is a product $\operatorname{id}_{A_i}\times h_i$ (in product charts) with $h_i$ being Lipschitz. The statement is equivalent to the measurability of the map \[ f\colon F_W\to {\mathbb N}\cup\{\infty\},~~w\mapsto \#\bigl(\Phi^{-1}(\{w\})\cap F_V\bigr).\] In product chart coordinates $(x,p)\in A_i\times F\cong (F_W)\vert_{A_i}$ we have $f(x,p)=\#h_i^{-1}(\{p\})$. So the measurability over $A_i$, and hence everywhere, follows from~\cite{federer}*{2.10.9}. \end{proof} Let $f\colon M\to N$ be a Lipschitz map between Riemannian manifolds. By Rade\-macher's theorem $f$ is differentiable almost everywhere. Let $J_df$ be the almost everywhere defined \emph{$d$-dimensional Jacobian} of~$f$. If $f$ is smooth and $M$ and $N$ are $d$-dimensional, then $J_df(m)$ is the absolute value of the determinant of the differential at $m\in M$ with respect to orthonormal bases. Let $\Phi\colon V\to W$ be a Lipschitz Cantor bundle map between metric Cantor bundles for which every fiber is a Riemannian manifold. We consider the quotient map $\phi\colon \Gamma\backslash V\to\Gamma\backslash W$. By equivariance of $\Phi$ the $d$-dimensional Jacobian of $\phi$ is well defined on a conull set by \[ J_d\phi([p])=J_d\Phi(p).\] We prove the following version of the area formula for Cantor bundles.
\begin{theorem}[Area formula]\label{thm: Area formula} Let $\Phi\colon V\to W$ be a Lipschitz Cantor bundle map between Riemannian Cantor bundles, where each fiber of $V$ is $d$-dimensional. Then the $d$-dimensional Jacobian $J_d\phi$ is $\operatorname{vol}^{\mathrm{tr}}_d$-measurable and \[ \operatorname{vol}^{\mathrm{tr}}_d(\Phi)=\int_{\Gamma\backslash W} \# \phi^{-1}(\{w\})d\operatorname{vol}^{\mathrm{tr}}_d(w)=\int_{\Gamma\backslash V} J_d\phi(v) d\operatorname{vol}^{\mathrm{tr}}_d(v).\] \end{theorem} \begin{proof} Let $F_W\subset W$ be a Borel fundamental domain consisting of finitely many boxes. Let $F_V$ be its $\Phi$-preimage. The statement is equivalent to \begin{align}\label{eq: area formula} \int_{F_W}\#\Phi^{-1}(\{w\})d\operatorname{vol}_d(w)=\int_{F_V} J_d\Phi(v) d\operatorname{vol}_d(v). \end{align} Let $X=A_1\cup\dots\cup A_n$ be a clopen partition as in Lemma~\ref{lem: box decomposition for maps} such that $(F_V)\vert_{A_i}$ and $(F_W)\vert_{A_i}$ are boxes. Choose homeomorphisms $A_i\times F_i\cong (F_V)\vert_{A_i}$ and $A_i\times F_i'\cong (F_W)\vert_{A_i}$ such that under these homeomorphisms the map $\Phi$ is of the form $\operatorname{id}_{A_i}\times h_i$. The left hand side of~\eqref{eq: area formula} becomes \[ \sum_{i=1}^n\mu(A_i)\int_{F_i'}\#h_i^{-1}(\{w\})d\mathcal{H}^d(w).\] The right hand side of~\eqref{eq: area formula} becomes \[ \sum_{i=1}^n\mu(A_i)\int_{F_i} J_dh_i(v) d\mathcal{H}^d(v).\] The $i$-th summands of the left and right hand sides coincide by the classical area formula~\cite{federer}*{3.2.3 on p.~243}. \end{proof} \section{Rectangular Cantor nerves and Cantor covers}\label{sec: rectangular cantor nerves} In Section~\ref{subsec: cantor covers} we introduce the notion of a Cantor cover on $\xxm$, which is an analog of a $\Gamma$-equivariant cover by balls on $\widetilde M$. Then we define a good Cantor cover. The proof of its existence is postponed to Section~\ref{sub: vitali layering}. In Section~\ref{sub: Cantor nerve} we introduce the analog of Guth's rectangular nerve in our setting. In Section~\ref{sub: map to Cantor nerve} we introduce the nerve map as a Cantor bundle map. \subsection{Cantor covers}\label{subsec: cantor covers} A \emph{Cantor cover} of $\xxm$ is an open cover \[\mathcal{U}=\{A_j\times B_j\mid j\in J\}\] of $\xxm$ by product sets of clopen sets $A_j\subset X$ and open balls $B_j\subset \widetilde M$ indexed over a free cofinite $\Gamma$-set $J$ such that $A_{\gamma j}=\gamma A_j$ and $B_{\gamma j}=\gamma B_j$ for all $\gamma\in\Gamma$ and $j\in J$. We further require that $\{A_j\times\frac{1}{2}B_j\mid j\in J\}$ still covers $\xxm$. If we replace the property of being a cover by requiring that the elements of $\mathcal{U}$ are pairwise disjoint, then we call $\mathcal{U}$ a \emph{Cantor packing}. Since the index set is cofinite, i.e.~consists of finitely many orbits, and the $\Gamma$-action on $\widetilde M$ is proper, a Cantor cover is always locally finite. Let $\mathcal{U}=\{A_j\times B_j\mid j\in J\}$ be a Cantor cover or Cantor packing of $\xxm$. Let $\mathcal{V}$ be an arbitrary family of subsets of a space. \begin{itemize} \item We denote the union of the elements of $\mathcal{V}$ by $\bigcup\mathcal{V}$. \item For $x\in X$ we denote by \[\mathcal{U}_x=\{B_j\mid j\in J, x\in A_j\}\] the induced open cover (packing, respectively) of $\widetilde M\cong \{x\}\times \widetilde M$.
\item We say that $\mathcal{U}$ has \emph{no self-intersections} if $\bigl(\gamma A_j\times \gamma B_j\bigr)\cap \bigl(A_j\times B_j\bigr)\ne\emptyset$ implies $\gamma=1$ for every $j\in J$ and $\gamma\in \Gamma$. \item For $a>0$ we write $a\,\mathcal{U}\mathrel{\mathop:}= \{A_j\times aB_j\mid j\in J\}$. \end{itemize} We produce a suitable Cantor cover of $\xxm$ that consists of good balls in every fiber. The notion of goodness goes back to Gromov. We refer to~\cite{guthvolume}*{Section~1} for this notion. A cover by good balls will be called a good cover, which is a bit unfortunate since this terminology is also used for covers all of whose elements and their intersections are contractible. Let $N$ be a $d$-dimensional Riemannian manifold and $V_N(1)$ be the supremal volume of $1$-balls in $N$. The ball $B(p,r)\subseteq N$ of radius $r$ around a point $p\in N$ is called a \emph{good ball} if the following conditions are satisfied. \begin{enumerate} \item Reasonable growth: $\operatorname{vol}(B(p,100r))\leqslant 10^{4(d+3)}\operatorname{vol}(B(p,\frac{1}{100}r))$. \item Volume bound: $\operatorname{vol}(B(p,r))\leqslant 10^{2(d+3)}V_N(1)r^{d+3}$. \item Small radius: $r\leqslant \frac{1}{100}$. \end{enumerate} A \emph{good cover} of a Riemannian manifold is an open cover by good balls where the concentric $\frac{1}{6}$-balls are disjoint and the $\frac{1}{2}$-balls provide a cover of the manifold as well. A Cantor cover $\mathcal{U}$ of $X\times\widetilde M$ is called \emph{good} if $\mathcal{U}_x$ is a good cover of $\widetilde M$ for every $x\in X$. Guth showed that any closed Riemannian manifold has a good cover \mbox{\cite{guthvolume}*{Lemma 2}}. At the end of Section~\ref{sub: vitali layering} we will be able to give the proof of the following equivariant statement: \begin{theorem}\label{thm:EquivariantCover} There exists a good Cantor cover on $\xxm$ that has no self-intersections. \end{theorem} \subsection{The rectangular Cantor nerve of a Cantor cover}\label{sub: Cantor nerve} In the sequel we consider a Cantor cover \[\mathcal{U}=\{A_j\times B_j\mid j\in J\}\] of $\xxm$. We adhere to the following notation. \begin{itemize} \item By picking a set of $\Gamma$-representatives we write the $\Gamma$-set $J$ as $\Gamma\times I$ with finite~$I$. \item Let $r_j$ denote the radius of the ball $B_j$ and $m_j$ the center of $B_j$. \item Let $\mathcal{V}\mathrel{\mathop:}=\{B_j\mid j\in J\}$. This is a locally finite cover of~$\widetilde M$ since $\Gamma$ acts freely and properly on $\widetilde M$. \end{itemize} The nerve $N(\mathcal{V})$ satisfies the requirements of Lemma~\ref{lem: proper Gamma CW}. In particular, its barycentric subdivision is a proper $\Gamma$-CW-complex. By properness and cofiniteness of $J$ the maximal multiplicity of $\mathcal{V}$ is finite, hence $N(\mathcal{V})$ is cocompact. Since $\mathcal{U}_x$ is a subcover of $\mathcal{V}$, $N(\mathcal{U}_x)$ is a subcomplex of $N(\mathcal{V})$ for every $x\in X$. \begin{definition}\label{def: Cantor nerve} The \emph{rectangular Cantor nerve} $N^{\mathrm{Ca}}(\mathcal{U})$ of the Cantor cover $\mathcal{U}$ is the subset \[ \bigl\{(x,p)\mid p\in N(\mathcal{U}_x), x\in X\bigr\}\subset X\times N(\mathcal{V}).\] \end{definition} Clearly, $N^{\mathrm{Ca}}(\mathcal{U})$ is a $\Gamma$-invariant subset of $X\times N(\mathcal{V})$. We restrict the metric of $N((r_j)_{j\in J})$ to $N(\mathcal{V})$ and, further, to each $N(\mathcal{U}_x)$.
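A degenerate special case may illustrate the definition (recorded only as an example): if $A_j=X$ for every $j\in J$, then $\mathcal{U}_x=\mathcal{V}$ and hence $N(\mathcal{U}_x)=N(\mathcal{V})$ for every $x\in X$, so that
\[ N^{\mathrm{Ca}}(\mathcal{U})=X\times N(\mathcal{V})\]
is the trivial bundle. In general the subcomplexes $N(\mathcal{U}_x)$ vary with $x\in X$, and the rectangular Cantor nerve need not be trivial.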
\begin{lemma}\label{lem: nerve as cantor subbundle} The rectangular Cantor nerve $N^{\mathrm{Ca}}(\mathcal{U})$ is a metric Cantor subbundle of the trivial metric Cantor bundle $X\times N(\mathcal{V})$ endowed with its diagonal $\Gamma$-action. \end{lemma} \begin{proof} It suffices to construct an open box neighborhood around each point \[(x,p)\in N^{\mathrm{Ca}}(\mathcal{U})\subset X\times N(\mathcal{V})\subset X\times N\bigl((r_j)_{j\in J}\bigr).\] Let $F$ be an open face in $N(\mathcal{U}_x)$ that contains $p$. The star of $F$ within $N\bigl((r_j)_{j\in J}\bigr)$ consists of all open faces in $N\bigl((r_j)_{j\in J}\bigr)$ that contain $F$ in their closure. A face $E$ of $N\bigl((r_j)_{j\in J}\bigr)$ is in the star of $F$ if and only if $J_+(F)\subset J_+(E)$. For a face $E$ which lies in the star of $F$ and in $N(\mathcal{V})$ the set $J_+(E)$ is contained in the subset \[J_F\mathrel{\mathop:}=\bigl\{ j\in J\mid B_j\cap\bigcap_{i\in J_+(F)}B_i\ne\emptyset\bigr\},\] which is finite since $\mathcal{V}$ is locally finite. As a finite intersection of clopen sets the set \[ C\mathrel{\mathop:}= \bigcap_{\substack{j\in J_F\\x\in A_j}} A_j\cap \bigcap_{\substack{j\in J_F\\x\not\in A_j}} X\backslash A_j\] is a clopen neighborhood of~$x$. Let $S$ be the star of $F$ within $N(\mathcal{V})$. Let $S'$ be the star of $F$ within $N(\mathcal{U}_x)$. We claim that every face $E$ of $S'$ lies in $N(\mathcal{U}_y)$ for all $y\in C$. For such $E$ we have $J_1(E)\ne \emptyset$ and $\bigcap_{j\in J_+(E)}B_j\ne\emptyset$. Since $E$ lies in $N(\mathcal{U}_x)$ we have $x\in\bigcap_{j\in J_+(E)}A_j$. The inclusion $J_+(E)\subset J_F$ implies that $C\subset\bigcap_{j\in J_+(E)}A_j$. Thus $E$ lies in $N(\mathcal{U}_y)$ for every $y\in C$. Therefore \[ N^{\mathrm{Ca}}(\mathcal{U})\cap \bigl(C\times S'\bigr)=C\times S',\] where the left hand intersection is taken within $X\times N(\mathcal{V})$. Next we show that $C\times S'$ is open in $N^{\mathrm{Ca}}(\mathcal{U})$. To this end, we show that $S'=N(\mathcal{U}_y)\cap S$ for all $y\in C$; then $C\times S'=\bigl(C\times S\bigr)\cap N^{\mathrm{Ca}}(\mathcal{U})$, and since $S$ is open in $N(\mathcal{V})$ this proves that $C\times S'$ is open. Let $E$ be a face in $N(\mathcal{U}_y)\cap S$. In particular, $y\in \bigcap_{j\in J_+(E)}A_j$. Then $E$ lies in $S'$ if $x\in\bigcap_{j\in J_+(E)}A_j$. Suppose there is $j_0\in J_+(E)$ with $x\not\in A_{j_0}$. Since $y\in C$, this would imply $y\in X\backslash A_{j_0}$ and contradict $y\in \bigcap_{j\in J_+(E)}A_j$. Hence $E$ lies in $S'$. So $C\times S'$ is a box neighborhood containing $(x,p)\in N^{\mathrm{Ca}}(\mathcal{U})$ with respect to the global product chart on $X\times N(\mathcal{V})$. \end{proof} Next we describe the rectangular Cantor nerve via pushouts. \begin{lemma}\label{lem: cantor nerve as pushout} We assume that $\mathcal{U}$ has no self-intersections. Let $C_n$ be a complete set of representatives of the $\Gamma$-orbits of the $n$-dimensional faces of $N(\mathcal{V})$.
The $n$-skeleton $N^{\mathrm{Ca}}(\mathcal{U})^{(n)}=N^{\mathrm{Ca}}(\mathcal{U})\cap \bigl(X\times N(\mathcal{V})^{(n)}\bigr)$ of $N^{\mathrm{Ca}}(\mathcal{U})$ arises from the $(n-1)$-skeleton as a pushout \begin{equation}\label{eq: cantornerve as pushout} \begin{tikzcd} \coprod\limits_{F\in C_n} \Bigl(\bigcap\limits_{j\in J_+(F)} A_j\Bigr)\times \Gamma\times\partial\Bigl( \prod\limits_{k=1}^{n} [0, r_k(F)]\Bigr)\arrow[r,hook]\arrow[d,hook] & N^{\mathrm{Ca}}(\mathcal{U})^{(n-1)}\arrow[d,hook]\\ \coprod\limits_{F\in C_n} \Bigl(\bigcap\limits_{j\in J_+(F)} A_j\Bigr)\times \Gamma\times\Bigl(\prod\limits_{k=1}^n [0, r_k(F)]\Bigr)\arrow[r,hook] & N^{\mathrm{Ca}}(\mathcal{U})^{(n)} \end{tikzcd} \end{equation} whose maps are Cantor bundle maps. The lower horizontal map is fiberwise an isometric embedding of $ \prod_{k=1}^n [0, r_k(F)]$. \end{lemma} \begin{proof} The map \begin{gather*}\coprod\limits_{F\in C_n} \Bigl(\bigcap\limits_{j\in J_+(F)} A_j\Bigr)\times \Gamma\times\Bigl(\prod\limits_{j\in J_\frac{1}{2}(F)}[0,r_j]\Bigr)\xrightarrow{\Psi} X\times N\bigl((r_j)_{j\in J}\bigr)\\ \bigl(a,\gamma, (w_j)_{j\in J_\frac{1}{2}(F)}\bigr)\mapsto \bigl(\gamma a, \gamma\cdot (\bar w_{j})_{j\in J}\bigr) \end{gather*} where \[ \bar w_j=\begin{cases} w_j & \text{if $j\in J_\frac{1}{2}(F)$,}\\ 0 & \text{if $j\in J_0(F)$,}\\ r_j & \text{if $j\in J_1(F)$,} \end{cases} \] lands in $N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ and is a Cantor bundle map. Next we verify that the restriction $\Psi_0$ of $\Psi$ \[ \coprod\limits_{F\in C_n} \Bigl(\bigcap\limits_{j\in J_+(F)} A_j\Bigr)\times \Gamma\times\prod\limits_{j\in J_\frac{1}{2}(F)}(0,r_j)\xrightarrow{\Psi_0} N^{\mathrm{Ca}}(\mathcal{U})^{(n)}\backslash N^{\mathrm{Ca}}(\mathcal{U})^{(n-1)}\] is bijective. Suppose $\Psi_0$ maps two points $(a,\gamma, (w_j))$ and $(a',\gamma', (w_j'))$ in the left hand summands associated with the $n$-faces $F$ and $F'$ to the same point. By equivariance it suffices to consider the case $\gamma'=1$. The open faces $\gamma F$ and $F'$ intersect, hence coincide as subsets $\gamma F=F'$\footnote{Since $N(\mathcal{V})$ is not a $\Gamma$-CW complex (only after barycentric subdivision) one might have $F=\gamma F$ as subsets but not pointwise.}. Since $F,F'$ are from a complete set of $\Gamma$-representatives $C_n$ we obtain that $F'=F$ and $\gamma F=F$ as subsets. Let $x\mathrel{\mathop:}= \gamma a=a'$. The $n$-faces of $N(\mathcal{U}_x)$ are exactly the $n$-faces $E$ with $x\in \bigcap_{j\in J_+(E)}A_j$ and $\bigcap_{j\in J_+(E)}B_j\ne\emptyset$ and $J_1(E)\ne\emptyset$. From $\gamma F=F$ we obtain that $\emptyset\ne\bigcap_{j\in J_+(F)}B_j=\bigcap_{j\in J_+(F)}\gamma B_j$ and $x\in \bigcap_{j\in J_+(\gamma F)}A_j=\bigcap_{j\in J_+(F)}\gamma A_j$. In particular, $B_j\cap\gamma B_j\ne\emptyset$ and $A_j\cap \gamma A_j\ne\emptyset$ for every $j\in J_+(F)$. Since $\mathcal{U}$ has no self-intersections, this implies $\gamma=1$ and proves injectivity. By~\cite{tomdieck-topology}*{Proposition~8.3.1 on p.~203} and rewriting $\prod_{j\in J_\frac{1}{2}(F)}[0,r_j]$ as $\prod_{k=1}^n[0,r_k(F)]$ the pushout property of~\eqref{eq: cantornerve as pushout} follows from the bijectivity of $\Psi_0$ and the fact that $\Psi$ is a quotient map onto its closed image. The image of $\Psi$ is a union of closed $n$-faces, which is closed. Let $C\subset \operatorname{im}(\Psi)$ be a subset such that $\Psi^{-1}(C)$ is closed. Let $K$ be a compact subset of the domain of $\Psi$ whose $\Gamma$-translates cover the domain.
Then \[C=\Psi\bigl(\bigcup_{\gamma\in\Gamma} \gamma K\cap \Psi^{-1}(C)\bigr)=\bigcup_{\gamma\in\Gamma}\gamma\cdot\Psi\bigl( K\cap \gamma^{-1}\Psi^{-1}(C)\bigr).\] Moreover, each subset $\Psi\bigl( K\cap \gamma^{-1}\Psi^{-1}(C)\bigr)$ is compact and lies in the compact subset $\Psi(K)$. Since the $\Gamma$-action on $N(\mathcal{V})$ is proper, the family of compact subsets $\gamma\cdot\Psi(K)$, $\gamma\in\Gamma$, is locally finite. Hence $C$ is closed as a locally finite union of compact sets, and $\Psi$ is a quotient map onto its image. \end{proof} \subsection{The map to the rectangular Cantor nerve}\label{sub: map to Cantor nerve} A Cantor bundle map $\Phi\colon\xxm\to N^{\mathrm{Ca}}(\mathcal{U})\subset X\times N((r_j)_{j\in J})$ is \emph{subordinate to $\mathcal{U}$} if each component $\Phi_j\colon \xxm\to [0,r_j]$, $j\in J$, of the map $\Phi$ is supported in $A_j\times B_j$. Next we construct a specific Cantor bundle map subordinate to $\mathcal{U}$. We define the continuous component map $\Phi_j$ for each $j\in J$ by \begin{gather*} \Phi_j\colon \xxm\to [0,r_j]\\ \Phi_j(x,p)=\begin{cases} 0 & \text{ if $d_{\widetilde M}(p, m_j)\ge r_j$ or $x\not\in A_j$,}\\ 2(r_j-d_{\widetilde M}(p, m_j)) & \text{ if $\frac{r_j}{2}\le d_{\widetilde M}(p, m_j)\le r_j$ and $x\in A_j$,}\\ r_j & \text{ if $d_{\widetilde M}(p, m_j)<\frac{r_j}{2}$ and $x\in A_j$.} \end{cases} \end{gather*} \begin{definition} The \emph{Cantor nerve map} $\Phi$ associated with $\mathcal{U}$ is the product of the maps $\Phi_j$ \[ \Phi\colon \xxm\to N^{\mathrm{Ca}}(\mathcal{U}),~\Phi(x,p)=\bigl( x, (\Phi_j(x,p))_{j\in J}\bigr).\] \end{definition} One sees immediately that $\Phi$ is subordinate to $\mathcal{U}$. \begin{lemma}\label{lem: cantor nerve map as cantor bundle map} The Cantor nerve map associated with $\mathcal{U}$ is a Lipschitz Cantor bundle map. \end{lemma} \begin{proof} Since each $A_j$ is clopen and $\Phi_j$ is clearly continuous when restricted to $A_j\times\widetilde M$ or $(X\backslash A_j)\times\widetilde M$, $\Phi_j$ is continuous. Thus $\Phi$ is continuous. For each $j\in J$ and $\gamma\in \Gamma$ we have \[ \Phi_{\gamma j}(\gamma x, \gamma p)=\Phi_j(x, p),\] which implies that $\Phi$ is $\Gamma$-equivariant. Let $(x,p)\in \xxm$. Let $F$ be an open face in $N(\mathcal{U}_x)$, the fiber of $N^{\mathrm{Ca}}(\mathcal{U})$ over $x$, that contains $\Phi(x,p)$. Then $x\in \bigcap_{j\in J_+(F)}A_j$ and $p\in \bigcap_{j\in J_+(F)}B_j$. We consider the following sets \begin{align*} J_F &\mathrel{\mathop:}=\bigl\{ j\in J\mid B_j\cap\bigcap_{i\in J_+(F)}B_i\ne\emptyset\bigr\},\\ C &\mathrel{\mathop:}=\bigcap_{\substack{j\in J_F\\x\in A_j}} A_j\cap \bigcap_{\substack{j\in J_F\\x\not\in A_j}} X\backslash A_j. \end{align*} Let $S$ be the star of $F$ within $N(\mathcal{U}_x)$. In the proof of Lemma~\ref{lem: nerve as cantor subbundle} we showed that $C\times S$ is an open box ($S$ was denoted $S'$ in the proof), that is, \[ N^{\mathrm{Ca}}(\mathcal{U})\cap \bigl(C\times S\bigr)=C\times S.\] Let $y\in C$ and $q\in \bigcap_{j\in J_+(F)}B_j$. Next we show that $\Phi_j(y,q)=\Phi_j(x,q)$ for every $j\in J$, which implies that $\Phi$ is a product of maps on $C\times \bigcap_{j\in J_+(F)}B_j$. First assume that $\Phi_j(y,q)=0$. If $q\not\in B_j$, then $\Phi_j(x,q)=0$. If $q\in B_j$, then $j\in J_F$, thus $y\not\in A_j$. This implies that $y\not\in C$ or $x\not\in A_j$. Because of $y\in C$ we must have $x\not\in A_j$. Hence $\Phi_j(y,q)=\Phi_j(x,q)=0$. Second assume that $\Phi_j(y,q)>0$. Then $q\in B_j$ and $j\in J_F$ and $y\in A_j$. If $x\not\in A_j$ then $y\in C$ would imply that $y\not\in A_j$. Hence $x\in A_j$.
Therefore \[\Phi_j(y,q)=\Phi_j(x,q)=\begin{cases} 2(r_j-d_{\widetilde M}(q, m_j)) & \text{ if $\frac{r_j}{2}\le d_{\widetilde M}(q, m_j)\le r_j$,}\\ r_j & \text{ if $d_{\widetilde M}(q, m_j)<\frac{r_j}{2}$.} \end{cases} \] It remains to show that $\Phi$ is Lipschitz. Each $\Phi_j$ has Lipschitz constant~$2$. Hence $\Phi$ has local Lipschitz constant at $(x,p)$ bounded by $2m_x(p)^{1/2}$ where $m_x(p)$ is the multiplicity at $p$ of the cover $\mathcal{U}_x$. The multiplicity is uniformly bounded by the multiplicity of the cover $\mathcal{V}$ (albeit not by a dimensional constant). Hence $\Phi$ is a Lipschitz Cantor bundle map. \end{proof} \begin{remark}\label{rem: fiberwise is Guth} Restricted to a fiber $x\in X$ the map $\Phi_x\colon \widetilde M\to N(\mathcal{U}_x)$ is exactly Guth's nerve map~\cite{guthvolume}*{Section~3} associated with the cover $\mathcal{U}_x$. \end{remark} \section{Volume estimates}\label{sec: volume estimates} The goal of this section is to prove the existence of a good Cantor cover and to prove the analog of Lemma~4 in~\cite{guthvolume} in the Cantor setting. In Section~\ref{sub: vitali layering} we define layers of an ordinary cover and of a Cantor cover. The most important technical result is the existence of layers for Cantor covers with no self-intersections. After that, the proof of the analog of Guth's Lemma~4 in Sections~\ref{sub: exponential decay} and~\ref{sub: bounding the volume of image} proceeds similarly to Guth's proof. \subsection{Cantor-Vitali layerings of equivariant covers}\label{sub: vitali layering} Let $\mathcal{V}$ be a cover of a Riemannian manifold by balls. A \emph{Vitali layering} of $\mathcal{V}$ consists of a finite sequence of subsets $\mathcal{V}(1), \dots, \mathcal{V}(n)$ of $\mathcal{V}$, called \emph{layers}, with the following property: \begin{enumerate} \item The balls within each layer are pairwise disjoint. \item For every pair $i<j$ in $\{1,\dots,n\}$ and every ball $B\in\mathcal{V}(j)$ there is a ball in $\mathcal{V}(i)$ that meets $B$ and whose radius is at least that of $B$. \item Every ball of $\mathcal{V}$ appears in precisely one of the layers. \end{enumerate} We say that a layer $\mathcal{V}(j)$ is \emph{lower} than a layer $\mathcal{V}(i)$ if $i<j$. The relation $<$ on $\mathcal{V}(i)$ \emph{associated to the Vitali layering} is defined as the smallest partial order $<$ on the layer $\mathcal{V}(i)$ such that $B<B'$ whenever there is a ball $B''$ from a lower layer that meets both $B$ and $B'$ and whose radius $r''$ satisfies \[ 2r\le r''\le r',\] where $r$ and $r'$ denote the radii of $B$ and $B'$, respectively. The \emph{core} of a layer is the union of all balls $\frac{1}{10}B$ where $B$ is maximal with respect to the relation $<$ on that layer. \begin{lemma}\label{lem: layering non-equivariantly} Let $\mathcal{V}=\{B_j\mid j\in J\}$ be a good cover of a Riemannian manifold by balls. Let $\mathcal{V}(1), \mathcal{V}(2), \dots,\mathcal{V}(n)$ be a Vitali layering of $\mathcal{V}$ with cores $\mathcal{V}^c(1),\dots, \mathcal{V}^c(n)$. The following holds for every integer $l\in\{1,\dots,n\}$. \begin{enumerate} \item Every point in $\bigcup\mathcal{V}^c(l)$ is contained in at most $10^{8(d+3)}$ balls from lower layers. \item For $l'\ge l$ we have $\bigcup\mathcal{V}(l')\subset \bigcup 3\mathcal{V}(l)$. \item $\bigcup\mathcal{V}(l)\subset \bigcup 10\mathcal{V}^c(l)$. \end{enumerate} \end{lemma} \begin{proof} Both statements (1) and~(3) are extracted from the proof of Lemma~4 in Guth's paper~\cite{guthvolume}*{p.~60}.
Concerning (1), Guth shows that the radii of balls in layers $\ge l$ that contain a point $p\in \frac{1}{10}B_j$ in the $l$-th core, where $B_j\in \mathcal{V}(l)$ is a maximal ball of radius $r_j$, are pinched in the interval $[\frac{1}{15}r_j, 2r_j]$. The number of such balls is bounded by a dimensional constant~\cite{guthvolume}*{Lemma~3} which can be taken to be $10^{8(d+3)}$. Ad (2): Let $l'\ge l$. Let $B_j\in \mathcal{V}(l')$. Then there is a ball $B_k\in \mathcal{V}(l)$ that meets $B_j$ and has $r_k\ge r_j$. Hence $B_j\subset 3B_k\subset \bigcup 3\mathcal{V}(l)$. \end{proof} A \emph{Cantor-Vitali layering} of a Cantor cover $\mathcal{U}$ consists of a finite sequence of Cantor packings $\mathcal{U}(1),\dots, \mathcal{U}(n)$ such that $\mathcal{U}(1)_x, \dots, \mathcal{U}(n)_x$ is a Vitali layering of $\mathcal{U}_x$ for every $x\in X$. Further, the \emph{core} of the $l$-th layer $\mathcal{U}(l)$ is defined to be the union of the cores of $\mathcal{U}(l)_x$ over all $x\in X$. \begin{lemma}\label{lem: expansion of cover} Let $\mathcal{U}=\{A_j\times B_j\mid j\in J\}$ be a Cantor packing of~$\xxm$ such that each ball $B_j$ is good. Then \[ \operatorname{vol}^{\mathrm{tr}}_d\Bigl(\bigcup 10\,\mathcal{U}\Bigr)\le 10^{4(d+3)}\operatorname{vol}^{\mathrm{tr}}_d\Bigl(\bigcup\mathcal{U}\Bigr).\] \end{lemma} \begin{proof} We write $J=\Gamma\times I$ as $\Gamma$-sets. Let $\mathcal{U}_0=\{ A_i\times B_i\mid i\in I\}$. Since the $\Gamma$-translates of $A_i\times 10B_i$, $i\in I$, cover the set $\bigcup 10\,\mathcal{U}$ and $\bigcup\mathcal{U}_0$ is a $\Gamma$-fundamental domain of $\bigcup\mathcal{U}$ by the packing property, we obtain for the transverse measure that \begin{align*} \operatorname{vol}^{\mathrm{tr}}_d\Bigl(\bigcup 10\,\mathcal{U}\Bigr)\le \operatorname{vol}_d\Bigl( \bigcup 10\,\mathcal{U}_0\Bigr) &\le \sum_{i\in I} \mu(A_i)\operatorname{vol}_d(10B_i)\\ &\le \sum_{i\in I}10^{4(d+3)}\mu(A_i)\operatorname{vol}_d(B_i)\qquad\text{(since $B_i$ is good)}\\ &=10^{4(d+3)}\operatorname{vol}^{\mathrm{tr}}_d\Bigl(\bigcup\mathcal{U}\Bigr). \qedhere \end{align*} \end{proof} \begin{lemma}\label{lem: layering} Let $\mathcal{U}$ be a good Cantor cover of $\xxm$. Let $\mathcal{U}(1), \mathcal{U}(2), \dots$ be a Cantor-Vitali layering of $\mathcal{U}$. Further, let $\mathcal{U}^c(l)$ denote the core of $\mathcal{U}(l)$. Then the following hold. \begin{enumerate} \item For every $x\in X$ and $p\in \bigcup\mathcal{U}^c(l)_x$ the number of balls that contain $p$ and lie in $\mathcal{U}(l')_x$ for some $l'\ge l$ is bounded by $10^{8(d+3)}$. \label{eq: layering lemma; well insulated} \item For $l'\ge l$ we have $\bigcup\mathcal{U}(l')\subset \bigcup 3\,\mathcal{U}(l)$. \label{eq: layering lemma; reasonable growth} \item For $l\ge 1$ we have $\operatorname{vol}^{\mathrm{tr}}_d\bigl(\bigcup \mathcal{U}(l)\bigr)\le 10^{4(d+3)}\cdot\operatorname{vol}^{\mathrm{tr}}_d\bigl(\bigcup\mathcal{U}^c(l)\bigr)$. \label{eq: layering lemma substantial volume of core} \end{enumerate} \end{lemma} \begin{proof} (1) and~(2) follow directly from Lemma~\ref{lem: layering non-equivariantly}. By applying Lemma~\ref{lem: layering non-equivariantly}~(3) fiberwise to the packings $\mathcal{U}(l)_x$ and then taking unions over $x\in X$ we obtain that \[ \bigcup\mathcal{U}(l)\subset \bigcup 10\,\mathcal{U}^c(l).\] Statement (3) now follows from this inclusion and Lemma~\ref{lem: expansion of cover}.
\end{proof} \begin{theorem}\label{thm: existence of equivariant layers} Every Cantor cover of $\xxm$ without self-intersections possesses a Cantor-Vitali layering. \end{theorem} \begin{proof} Let $\mathcal{U}=\{\gamma\cdot A_j\times \gamma\cdot B_j\mid (\gamma,j)\in \Gamma\times \{1,\dots,n\}\}$ be a Cantor cover of $\xxm$ without self-intersections. Let $r_j$ be the radius of $B_j$. We assume that the enumeration of balls is such that $r_1\ge r_2\ge\dots\ge r_n$. For the purpose of this proof, we say that a set of subsets of $\xxm$ is of \emph{product type} if it is of the form $\{\gamma\cdot Y_j\times \gamma\cdot B_j\mid (\gamma,j)\in \Gamma\times \{1,\dots,n\}\}$ where each $Y_j$ is a (possibly empty) clopen subset of $X$. We construct the layers by a double induction. For every $l\in{\mathbb N}$ and $s\in\{1,\dots,n\}$ we define Cantor packings $\mathcal{U}^s(l)$ and $\mathcal{U}(l)$ depending on $\mathcal{U}(1), \dots, \mathcal{U}(l-1)$ and on $\mathcal{U}^{s-1}(l)$ in the following way. \begin{align*} \mathcal{U}(0)&\mathrel{\mathop:}=\emptyset\\ \mathcal{U}^0(l)&\mathrel{\mathop:}=\emptyset~\text{ for every $l\in\{1,\dots, n\}$}\\ Y^s(l)&\mathrel{\mathop:}=\Bigl\{x\in X \mid\text{ $B_s\in \mathcal{U}_x$, $B_s$ is disjoint from all balls in $\mathcal{U}^{s-1}(l)_x$,}\\ &~\qquad\qquad\qquad\text{and $B_s$ is not contained in $\mathcal{U}(1)_x,\dots, \mathcal{U}(l-1)_x$}\Bigr\}\\ \mathcal{U}^s(l)&\mathrel{\mathop:}= \mathcal{U}^{s-1}(l)\cup\bigl\{ \gamma Y^s(l)\times \gamma B_s\mid\gamma\in\Gamma\bigr\}\\ \mathcal{U}(l)&\mathrel{\mathop:}=\mathcal{U}^n(l) \end{align*} \noindent\emph{Each set $\mathcal{U}^s(l)$ is of product type}: It suffices to verify that each set $Y^s(l)$ is clopen. By induction we may assume that $\mathcal{U}^{s-1}(l)$ and $\mathcal{U}(i)$ are of product type for every $i\in\{1,\dots, l-1\}$, that is, \begin{align*} \mathcal{U}^{s-1}(l) &= \bigl\{\gamma\cdot Y^{s-1}_j(l)\times \gamma\cdot B_j\mid (\gamma,j)\in \Gamma\times \{1,\dots,n\}\bigr\},\\ \mathcal{U}(i) &= \bigl\{\gamma\cdot Y_j(i)\times \gamma\cdot B_j\mid (\gamma,j)\in \Gamma\times \{1,\dots,n\}\bigr\} \end{align*} for suitable clopen subsets $Y_j^{s-1}(l)\subset X$ and $Y_j(i)\subset X$. For every $j\in\{1,\dots,n\}$ let \[ F_j^s\mathrel{\mathop:}= \bigl\{\gamma\in\Gamma\mid B_s\cap \gamma B_j\ne\emptyset\bigr\}.\] The subset $F_j^s$ is finite. Then we have \[ Y^s(l)=A_s\cap \Bigg( X\backslash~\Bigl(\bigcup_{\mathclap{\substack{j\in \{1,\dots,n\}\\\gamma\in F_j^s}}} \gamma Y^{s-1}_j(l)~\cup~\bigcup_{\mathclap{\substack{j\in\{1,\dots,n\}\\ i\in\{1,\dots, l-1\}\\\gamma\in F^s_j}}} \gamma Y_j(i)\Bigr)\Bigg), \] which is clearly a clopen subset. \smallskip\\ \noindent\emph{Each $\mathcal{U}^s(l)$ is a Cantor packing}: Equivalently, we may show that $\mathcal{U}^s(l)_x$ is a packing for every $x\in X$. Since $Y^s(l)\subset A_s$ and $\mathcal{U}$ has no self-intersections, $\bigl\{ \gamma Y^s(l)\times \gamma B_s\mid\gamma\in\Gamma\bigr\}$ is a Cantor packing for every $s\in \{1,\dots, n\}$ and $l\in{\mathbb N}$. By induction we may assume that $\mathcal{U}^{s-1}(l)$ is a Cantor packing or, equivalently, that $\mathcal{U}^{s-1}(l)_x$ is a packing for every $x\in X$. Hence if for some $x\in X$ two balls in $\mathcal{U}^s(l)_x$ have a non-empty intersection, then a ball $\gamma B_s$ in $\mathcal{U}^s(l)_x$ intersects a ball in $\mathcal{U}^{s-1}(l)_x$. In particular, $x\in \gamma Y^s(l)$.
By $\Gamma$-equivariance the ball $B_s$ lies in $\mathcal{U}^s(l)_{\gamma^{-1}x}$ and intersects a ball in $\mathcal{U}^{s-1}(l)_{\gamma^{-1} x}$. This contradicts $\gamma^{-1}x\in Y^s(l)$. So $\mathcal{U}^s(l)_x$ is indeed a packing for every $x\in X$. \smallskip\\ \emph{The sequence $\mathcal{U}(1), \mathcal{U}(2), \dots, \mathcal{U}(n)$ is a Cantor-Vitali layering of $\mathcal{U}$}: Let $x\in X$. We show that $\mathcal{U}(1)_x, \mathcal{U}(2)_x, \dots, \mathcal{U}(n)_x$ is a Vitali layering of $\mathcal{U}_x$. Let $i,j\in \{1,\dots, n\}$ with $i<j$. Consider a ball $\gamma B_s$ in $\mathcal{U}(j)_x$. In particular, $x\in\gamma Y^s(j)$. We have to find a ball in $\mathcal{U}(i)_x$ that meets $\gamma B_s$ and has radius at least $r_s$. Equivalently, we have to find a ball in $\mathcal{U}(i)_{\gamma^{-1}x}$ that meets $B_s\in \mathcal{U}(j)_{\gamma^{-1}x}$ and has radius at least~$r_s$. If such a ball did not exist, then $B_s$ would lie in $\mathcal{U}^s(i)_{\gamma^{-1}x}\subset \mathcal{U}(i)_{\gamma^{-1}x}$ or $B_s$ would lie in one of $\mathcal{U}(1)_{\gamma^{-1}x},\dots,\mathcal{U}(i-1)_{\gamma^{-1}x}$. Both possibilities are absurd. Let $x\in X$. If $s\in\{1,\dots,n\}$ is the smallest number so that $B_s\in \{B_1,\dots, B_n\}\cap\mathcal{U}_x$ but $B_s$ is not in one of the layers $\mathcal{U}(1)_x,\dots, \mathcal{U}(n-1)_x$, then $B_s\in \mathcal{U}^s(n)_x\subset \mathcal{U}(n)_x$. Hence every ball of $\{B_1,\dots, B_n\}\cap \mathcal{U}_x$ is in one of the layers $\mathcal{U}(1)_x, \dots, \mathcal{U}(n)_x$. By equivariance each ball of $\mathcal{U}_x$ appears in one of the layers $\mathcal{U}(1)_x, \dots, \mathcal{U}(n)_x$. It is clear from the construction that each ball also appears in at most one layer. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:EquivariantCover}] By \cite{guthvolume}*{Lemma 1}, around every point of $\widetilde{M}$ there is a good ball. We choose a $\Gamma$-fundamental domain of $\widetilde M$ and a good ball for every point in the fundamental domain. Since $M$ is compact we can select a finite subset $B_1,\dots, B_n$ of these balls such that the projections of $\frac{1}{12}B_1,\dots, \frac{1}{12}B_n$ cover all of $M$. Hence the translates of $X\times \tfrac{1}{12}B_1,\dots, X\times\tfrac{1}{12}B_n$ form a Cantor cover of~$\xxm$. By properness of the $\Gamma$-action on $\widetilde M$ the set \[F\mathrel{\mathop:}=\bigl\{\gamma\in\Gamma\mid \exists_{i\in\{1,\dots,n\}} B_i\cap\gamma B_i\ne\emptyset \bigr\}\] is finite. Next we show that there is a clopen partition $X=A_1\cup \dots\cup A_r$ of $X$ such that $\gamma A_i\cap A_i=\emptyset$ for every $\gamma\in F$ and every $i\in\{1,\dots, r\}$. To this end, choose a metric $d$ on $X$ that induces the topology on $X$. For every $\gamma\in F$ the continuous map \[ X\to [0,\infty),~~x\mapsto d(\gamma x, x)\] takes on a minimum $\epsilon_\gamma$ which is strictly positive as the $\Gamma$-action on $X$ is free. Let $\epsilon\mathrel{\mathop:}=\min_{\gamma\in F}\epsilon_\gamma>0$. Now we pick a cover of $X$ by clopen subsets of diameter at most $\epsilon/2$. Then there is a subordinate clopen partition $X=A_1\cup \dots\cup A_r$ of $X$. Since the diameter of each $A_i$ is at most $\epsilon/2$ we have $\gamma A_i\cap A_i=\emptyset$ for every $\gamma\in F$ and every $i\in\{1,\dots, r\}$. Then the translates of the sets $A_i\times \tfrac{1}{12}B_j$, $i=1,\dots, r$, $j=1,\dots, n$, form a Cantor cover $\mathcal{U}'$ indexed over $\Gamma\times\{1,\dots,r\}\times\{1,\dots,n\}$ without self-intersections.
Also the Cantor cover $6\;\mathcal{U}'$ has no self-intersections. According to Theorem~\ref{thm: existence of equivariant layers} the Cantor cover $\mathcal{U}'$ has a Cantor-Vitali layering. Let $\mathcal{U}'(1)$ be the top layer. We claim that $\mathcal{U}\mathrel{\mathop:}= 6\;\mathcal{U}'(1)$ is a good Cantor cover without self-intersections: Since the top layer is always a Cantor packing, $\tfrac{1}{6}\,\mathcal{U}=\mathcal{U}'(1)$ is a Cantor packing. Further, $\tfrac{1}{2}\,\mathcal{U}=3\,\mathcal{U}'(1)$ is a Cantor cover by~Lemma~\ref{lem: layering}~(2). Finally, since $6\;\mathcal{U}'$ has no self-intersections, $\mathcal{U}$ has no self-intersections either. \end{proof} \subsection{Exponential decay of the volume of the high multiplicity set}\label{sub: exponential decay} Remarks similar to those in Guth's paper apply to the multiplicity here: The (fiberwise) multiplicity of a Cantor cover is bounded, but not in terms of a universal constant. Therefore we cannot later bound the Lipschitz constant of the Cantor nerve map universally. However, the volume of the high multiplicity set decays exponentially. The argument is essentially the same as the one in~\cite{guthvolume}*{p.~61/62}, with volume replaced by transverse measure. \begin{theorem}\label{thm: volume of high multiplicity set} Let $\mathcal{U}$ be a good Cantor cover of $\xxm$ with no self-intersections. For $(x,p)\in\xxm$ let $m_x(p)$ be the multiplicity of the point $p$ with respect to the cover $\mathcal{U}_x$ of $\widetilde M$. There are dimensional constants $\alpha(d)>0$ and $\beta(d)>0$ such that for every $\lambda\ge 1$ \[\operatorname{vol}^{\mathrm{tr}}_d\Bigl(\bigl\{(x,p)\mid m_x(p)\ge \lambda+\beta(d)\bigr\}\Bigr)\le e^{-\alpha(d)\lambda}\cdot\operatorname{vol}(M).\] \end{theorem} \begin{remark} In the above statement we can choose $\alpha=-\log(1-10^{-16(d+3)})$ and $\beta=10^{8(d+3)}$. This is a consequence of the proof below. \end{remark} \begin{proof} According to Theorem~\ref{thm: existence of equivariant layers} we pick a Cantor-Vitali layering $\mathcal{U}(1), \mathcal{U}(2), \dots$ of $\mathcal{U}$. Let $\mathcal{U}^c(l)\subset\mathcal{U}(l)$ be the associated core of $\mathcal{U}(l)$. Consider the subsets \[ L^{\theta}(\lambda)=\bigl\{ (x,p)\in\xxm\mid (x,p)\in \bigcup\mathcal{U}(l)\text{ for at least $\theta$ values of $l$ in the range $l\ge \lambda$}\bigr\}.\] By Lemma~\ref{lem: layering}~\eqref{eq: layering lemma; reasonable growth} and Lemma~\ref{lem: expansion of cover} we obtain that \begin{equation}\label{eq: estimate for L1 layers} \operatorname{vol}^{\mathrm{tr}}_d\bigl(L^1(\lambda)\bigr)\le \operatorname{vol}^{\mathrm{tr}}_d\bigl(\bigcup 3\mathcal{U}(\lambda)\bigr)\le 10^{4(d+3)}\operatorname{vol}^{\mathrm{tr}}_d\bigl(\bigcup\mathcal{U}(\lambda)\bigr). \end{equation} With the constant $\beta(d)=10^{8(d+3)}$ from Lemma~\ref{lem: layering}~\eqref{eq: layering lemma; well insulated} we define $T(\lambda)$ as the average volume \[ T(\lambda)\mathrel{\mathop:}= \frac{1}{\beta(d)}\sum_{\theta=1}^{\beta(d)}\operatorname{vol}^{\mathrm{tr}}_d\bigl(L^\theta(\lambda)\bigr).\] An element $(x,p)\in L^\theta(\lambda)\backslash L^\theta(\lambda+1)$ lies in $\bigcup\mathcal{U}(\lambda)$ and in exactly $\theta-1$ different layers lower than~$\lambda$.
With Lemma~\ref{lem: layering}~\eqref{eq: layering lemma; well insulated} this implies that \[\bigcup \mathcal{U}^c(\lambda)\subset \bigcup_{\theta=1}^{\beta(d)}\bigl( L^\theta(\lambda)\backslash L^\theta(\lambda+1)\bigr).\] Note that $\mathcal{U}^c(\lambda)$ and $L^\theta(\lambda)$ are $\Gamma$-invariant subsets to which we can apply the measure $\operatorname{vol}^{\mathrm{tr}}_d$. The above inclusion yields \begin{align*} \operatorname{vol}^{\mathrm{tr}}_d\bigl(\bigcup\mathcal{U}^c(\lambda)\bigr) \le \operatorname{vol}^{\mathrm{tr}}_d\Bigl( \bigcup_{\theta=1}^{\beta(d)}\bigl( L^\theta(\lambda)\backslash L^\theta(\lambda+1) \bigr)\Bigr) &\le \sum_{\theta=1}^{\beta(d)} \operatorname{vol}^{\mathrm{tr}}_d\bigl (L^\theta(\lambda)\backslash L^\theta(\lambda+1)\bigr)\\ &=\sum_{\theta=1}^{\beta(d)} \Bigl(\operatorname{vol}^{\mathrm{tr}}_d\bigl( L^\theta(\lambda)\bigr)-\operatorname{vol}^{\mathrm{tr}}_d\bigl( L^\theta(\lambda+1)\bigr)\Bigr)\\ &\le \beta(d)\bigl( T(\lambda)-T(\lambda+1)\bigr). \end{align*} We conclude further that \begin{align*} T(\lambda)-T(\lambda+1) \ge \frac{1}{\beta(d)}\operatorname{vol}^{\mathrm{tr}}_d\bigl( \bigcup\mathcal{U}^c(\lambda)\bigr) &\ge \frac{10^{-4(d+3)}}{\beta(d)}\operatorname{vol}^{\mathrm{tr}}_d\bigl( \bigcup\mathcal{U}(\lambda)\bigr)~\text{ (Lemma~\ref{lem: layering}~\eqref{eq: layering lemma substantial volume of core})} \\ &\ge \frac{10^{-8(d+3)}}{\beta(d)}\operatorname{vol}^{\mathrm{tr}}_d\bigl( L^1(\lambda)\bigr)~~~\text{ (using~\eqref{eq: estimate for L1 layers})} \\ &= 10^{-16(d+3)}\operatorname{vol}^{\mathrm{tr}}_d\bigl( L^1(\lambda)\bigr)\\ &\ge 10^{-16(d+3)} T(\lambda). \end{align*} Hence $T(\lambda+1)\le (1-10^{-16(d+3)})T(\lambda)$. So $T$ decays exponentially. More precisely, we obtain that for $\lambda\ge 1$ \begin{equation}\label{eq: exponential decay of T} T(\lambda)\le e^{-\alpha(d)\lambda}\cdot T(1)\le e^{-\alpha(d)\lambda}\cdot\operatorname{vol}(M), \end{equation} where $\alpha=\alpha(d)=-\log(1-10^{-16(d+3)})$. Finally, we relate the function $T$ to the volume of the high multiplicity subset. Let $(x,p)\in\xxm$ be a point with $m_x(p)\ge\lambda+\beta(d)$. Since the balls in each layer $\mathcal{U}(l)_x$ are pairwise disjoint, the point $(x,p)$ lies in at most $\lambda$ many balls from the layers $\mathcal{U}(1)_x, \dots, \mathcal{U}(\lambda)_x$. Hence $(x,p)$ lies in balls from at least $\beta(d)$ layers of index greater than $\lambda$, so $(x,p)\in L^{\beta(d)}(\lambda)$. We conclude that \begin{equation*} \operatorname{vol}^{\mathrm{tr}}_d\Bigl(\bigl\{(x,p)\mid m_x(p)\ge \lambda+\beta(d)\bigr\}\Bigr)\le \operatorname{vol}^{\mathrm{tr}}_d\bigl(L^{\beta(d)}(\lambda)\bigr) \le T(\lambda) \le e^{-\alpha(d)\lambda}\operatorname{vol}(M).\qedhere \end{equation*} \end{proof} \subsection{Bounding the transverse volume of the image of the nerve map}\label{sub: bounding the volume of image} In the sequel let $\mathcal{U}=\{A_j\times B_j\mid j\in J\}$ be a good Cantor cover of $\xxm$ with no self-intersections. Let $\Phi\colon\xxm\to N^{\mathrm{Ca}}(\mathcal{U})$ be the Cantor nerve map. The following (non-equivariant) statement only concerns the fiberwise nerve map $\Phi_x\colon\widetilde M\to N(\mathcal{U}_x)$. In view of Remark~\ref{rem: fiberwise is Guth} we can cite Theorem~\ref{thm: volume restricted to star} below from Guth's paper. Recall that the constant $V_1$ denotes an upper bound on the volume of $1$-balls of $\widetilde M$ (see Theorem~\ref{thm: main result}) and that $d(F)$ denotes the dimension of a face~$F$.
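\begin{remark} The proofs below repeatedly use the following elementary consequence of the area formula (Theorem~\ref{thm: Area formula}); we record it as a sketch in the generality needed here. If a Lipschitz Cantor bundle map $\Phi$ is fiberwise Lipschitz with constant at most $L$ on a $\Gamma$-invariant Borel subset $S\subset\xxm$, then $J_d\Phi\le L^d$ holds almost everywhere on $S$, and hence, using $\operatorname{vol}^{\mathrm{tr}}_d(\xxm)=\operatorname{vol}(M)$, \[ \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi\vert_S\bigr)=\int_{\Gamma\backslash S} J_d\Phi\, d\operatorname{vol}^{\mathrm{tr}}_d\le L^d\cdot \operatorname{vol}^{\mathrm{tr}}_d(S)\le L^d\cdot\operatorname{vol}(M).\] \end{remark}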
\begin{theorem}[\cite{guthvolume}*{Lemma~5}] \label{thm: volume restricted to star} There are dimensional constants $C(d)>0$ and $\beta(d)>0$ so that for every $x\in X$ and every open face $F$ of $N(\mathcal{U}_x)$ we have \[ \operatorname{vol}_d\Bigl(\Phi\vert_{\Phi^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)<C(d)\cdot V_1\cdot r_1(F)^{d+1}\cdot e^{-\beta(d)\cdot d(F)}.\] \end{theorem} We now fix a dimensional constant $\beta(d)>0$ that satisfies the conclusions of Theorems~\ref{thm: volume of high multiplicity set} and~\ref{thm: volume restricted to star}. \begin{theorem}\label{thm: volume of nerve map} There is a dimensional constant $C(d)>0$ such that \[\operatorname{vol}^{\mathrm{tr}}_d\bigl( \Phi \bigr) \le C(d)\cdot\operatorname{vol}(M).\] \end{theorem} \begin{proof} Let $n$ be the maximal multiplicity of the cover $\{B_j\mid j\in J\}$ of $\widetilde M$. For $i\in{\mathbb N}_0$ we define the $\Gamma$-invariant subsets \begin{align*} S_i&\mathrel{\mathop:}= \bigl\{(x,p)\in\xxm\mid i+\beta(d)\le m_x(p)< 1+i+\beta(d)\bigr\},\\ S&\mathrel{\mathop:}= \bigl\{(x,p)\in\xxm\mid m_x(p)<\beta(d)\bigr\}. \end{align*} Restricted to $S_i$ or $S$ the map $\Phi$ is fiberwise Lipschitz with Lipschitz constant at most $2(1+i+\beta(d))^{1/2}$ or $2\beta(d)^{1/2}$, respectively (cf.~the proof of Lemma~\ref{lem: cantor nerve map as cantor bundle map}). Therefore we have \begin{align*} \operatorname{vol}^{\mathrm{tr}}_d(\Phi)&\le \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi\vert_S\bigr)+\sum_{i=0}^{n}\operatorname{vol}^{\mathrm{tr}}_d\bigl( \Phi\vert_{S_i}\bigr)\\ &\le\bigl( 2\beta(d)^{1/2}\bigr)^d\cdot \operatorname{vol}(M)+\sum_{i=0}^n \bigl( 2(1+i+\beta(d))^{1/2}\bigr)^d\cdot\operatorname{vol}^{\mathrm{tr}}_d(S_i)\\ &\le \Bigl(2^d\beta(d)^{d/2}+\sum_{i=0}^n 2^d(1+i+\beta(d))^{d/2}e^{-\alpha(d)\cdot i}\Bigr)\cdot\operatorname{vol}(M)\qquad\text{ (Theorem~\ref{thm: volume of high multiplicity set})}\\ &\le C(d)\cdot \operatorname{vol}(M), \end{align*} where we set $C(d)$ to be the value of the convergent series \[ C(d)\mathrel{\mathop:}= \Bigl(2^d\beta(d)^{d/2}+\sum_{i=0}^\infty 2^d(1+i+\beta(d))^{d/2}e^{-\alpha(d)\cdot i}\Bigr).\] Since $\alpha(d), \beta(d)$ are dimensional constants, so is $C(d)$. \end{proof} We now fix a dimensional constant $C(d)>0$ that satisfies the conclusions of Theorems~\ref{thm: volume restricted to star} and~\ref{thm: volume of nerve map}. \section{Pushing the equivariant nerve map down to the \texorpdfstring{$d$}{d}-skeleton}\label{sec: homotoping down} In this section we deform the Cantor nerve map of a Cantor cover $\mathcal{U}$ to the $d$-skeleton with $d=\dim(M)$. The non-equivariant counterpart in Guth's paper~\cite{guthvolume} is the one where tools from geometric measure theory enter. An essential tool is the following result. \begin{theorem}[Pushout lemma~\cite{guthuryson}*{Lemma~0.6}]\label{thm: pushout lemma} For each dimension $d\ge 2$ there is a constant $\sigma(d)>0$ so that the following holds. Suppose that $N$ is a compact piecewise smooth $d$-dimensional manifold with boundary. Suppose that $K\subset {\mathbb R}^n$ is a convex set, and $\phi\colon (N,\partial N)\to (K,\partial K)$ is a piecewise smooth map. Then $\phi$ may be homotoped to a map $\phi'$ so that the following holds. \begin{itemize} \item The map $\phi'$ agrees with $\phi$ on $\partial N$. \item $\operatorname{vol}_d(\phi')\le \operatorname{vol}_d(\phi)$. \item The image $\phi'(N)$ lies in the $\sigma(d)\cdot\operatorname{vol}_d(\phi)^{1/d}$-neighborhood of $\partial K$.
\end{itemize} \end{theorem} Here is a list of dimensional constants to be used below. \begin{center} \begin{tabular}{l|l} $\beta(d)$ & defined after Theorem~\ref{thm: volume restricted to star}.\\ $C(d)$ & defined after Theorem~\ref{thm: volume of nerve map}.\\ $\sigma(d)$ & see the Pushout lemma above.\\ \end{tabular} \end{center} Next we recall the definition of thin and thick faces from Guth's paper. To this end, we choose $\epsilon>0$ small enough so that \begin{equation}\label{eq: epsilon constant} \prod_{k=d+1}^\infty \Bigl( 1-2\bigl(3\cdot\epsilon\cdot\sigma(d)^d\cdot e^{-\beta(d)\cdot k}\bigr)^{1/d}\Bigr)^{-d}<2~\text{ and }~ 2\cdot\epsilon \cdot e^{\beta(d)\cdot d}<1. \end{equation} The infinite product converges because of the exponential decay of the terms $e^{-\beta(d)\cdot k}$. Since the value of $\epsilon$ only depends on $d$ and the dimensional constants $\beta(d)$ and $\sigma(d)$, it is a dimensional constant and we write $\epsilon=\epsilon(d)$. Let $F$ be an open $k$-face with side lengths $r_1(F)\le \dots \le r_k(F)$. We call the face $F$ \emph{thin} if \begin{equation}\label{eq: thin face} C(d)\cdot V_1\cdot r_1(F)<\epsilon(d). \end{equation} Otherwise it is called \emph{thick}. Next we use the framework developed in Sections~\ref{sec: category cantor bundles} and~\ref{sec: rectangular cantor nerves} to transfer Guth's methods to our setting. \subsection{Compression map} Let $\delta\in (0,\frac{1}{2})$. The \emph{$\delta$-truncation} $N^{\mathrm{Ca}}(\mathcal{U})_\delta^{(n)}$ of the $n$-skeleton $N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ is obtained from $N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ by removing a smaller cuboid inside each $n$-dimensional face. Referring to the pushout~\eqref{eq: cantornerve as pushout}, we obtain $N^{\mathrm{Ca}}(\mathcal{U})_\delta^{(n)}$ by removing \[ \coprod\limits_{F\in C_n} \Bigl(\bigcap\limits_{j\in J_+(F)} A_j\Bigr)\times \Gamma\times\Bigl(\prod\limits_{k=1}^n [\delta r_k(F), (1-\delta) r_k(F)]\Bigr).\] The self-map $R_\delta$ of the cuboid given by $F$ linearly stretches the interval $[\delta r_k(F), (1-\delta)r_k(F)]$ to $[0, r_k(F)]$ and sends $[0,\delta r_k(F)]$ to $0$ and $[(1-\delta)r_k(F),r_k(F)]$ to $r_k(F)$ in each coordinate. The \emph{$\delta$-compression map} on the $n$-skeleton is the map $\mathrm{P}_\delta\colon N^{\mathrm{Ca}}(\mathcal{U})^{(n)}\to N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ such that $\mathrm{P}_\delta$ is the identity on the $(n-1)$-skeleton and on every summand of the left lower corner of the pushout~\eqref{eq: cantornerve as pushout} it is the equivariant extension of \[\Bigl(\bigcap\limits_{j\in J_+(F)} A_j\Bigr)\times \{1\}\times\prod\limits_{k=1}^n [0, r_k(F)]\xrightarrow{\operatorname{id}\times R_\delta} N^{\mathrm{Ca}}(\mathcal{U})^{(n)}. \] By Lemma~\ref{lem: pushouts of Cantor bundles} the map $\mathrm{P}_\delta$ is a Cantor bundle map. \begin{remark}\label{rem: compression map} Obviously, we have \[ \mathrm{P}_\delta\bigl( N^{\mathrm{Ca}}(\mathcal{U})_\delta^{(n)}\bigr)\subset N^{\mathrm{Ca}}(\mathcal{U})^{(n-1)}.\] The map $\mathrm{P}_\delta$ is a Lipschitz Cantor bundle map with Lipschitz constant $(1-2\delta)^{-1}$. \end{remark} \subsection{Federer-Fleming deformation in thick faces}\label{sub: federer-fleming} Let $n>d$. Let \[\Phi\colon\xxm\to N^{\mathrm{Ca}}(\mathcal{U})^{(n)}\] be a Lipschitz Cantor bundle map which is subordinate to $\mathcal{U}$.
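\begin{remark} Before carrying out the Federer-Fleming step, we record an explicit formula for the map $R_\delta$ from the previous subsection in a single coordinate; this is only a sketch (any piecewise linear map with the stated stretching behavior serves equally well), but it makes the Lipschitz constant in Remark~\ref{rem: compression map} evident. Writing $r=r_k(F)$, one may take \[ R_\delta(t)=\begin{cases} 0 & \text{if $t\in[0,\delta r]$,}\\ \frac{t-\delta r}{1-2\delta} & \text{if $t\in[\delta r,(1-\delta) r]$,}\\ r & \text{if $t\in[(1-\delta) r, r]$.} \end{cases}\] Each branch has slope at most $(1-2\delta)^{-1}$. \end{remark}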
Referring to the pushout~\eqref{eq: cantornerve as pushout}, we consider the subset \[ L_F\mathrel{\mathop:}= \Bigl(\bigcap_{j\in J_+(F)} A_j\Bigr)\times \{1\}\times\prod_{k=1}^n [0, r_k(F)]\] of $N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$. Let $L_F^\circ\subset L_F$ be defined like $L_F$, but with the interior of the cuboid as the right hand factor. By applying Lemma~\ref{lem: box decomposition for maps} to each box $L_F$ and taking a common refinement we obtain a clopen partition $X=B_1\cup \dots \cup B_m$ such that the following holds. \begin{itemize} \item For every $i\in\{1,\dots,m\}$ and every $F\in C_n$ we have either $B_i\subset \bigcap_{j\in J_+(F)}A_j$ or $B_i\cap \bigcap_{j\in J_+(F)}A_j=\emptyset$. \item $\Phi^{-1}(L_F)\vert_{B_i}$ is a box (possibly empty). So we have $\Phi^{-1}(L_F)\vert_{B_i}=B_i\times W_{i,F}$ for some subset $W_{i,F}\subset\widetilde M$. \item If $B_i\subset \bigcap_{j\in J_+(F)}A_j$, then $\Phi\vert_{B_i\times W_{i,F}}=\operatorname{id}_{B_i}\times h_{i,F}$ for some Lipschitz map $h_{i,F}\colon W_{i,F}\to \prod_{k=1}^n [0,r_k(F)]$. \end{itemize} Let us denote the restriction of $h_{i, F}$ to the $h_{i,F}$-preimage of the interior of the cuboid by $h_{i,F}^\circ$. We apply the Federer-Fleming deformation theorem to $h_{i,F}^\circ$ for each thick $F\in C_n$ in the same way as in~\cite{guthvolume}*{p.~70}. It gives us points $p_{i, F}$ in the interior of the cuboid $\prod_{k=1}^n [0,r_k(F)]$ such that for the radial projections $\operatorname{pr}_{i,F}$ from the interior of the cuboid minus the point $p_{i,F}$ to the boundary of the cuboid we have \begin{equation}\label{eq: federer fleming volume} \int J_{\operatorname{pr}_{i,F}\circ h_{i,F}^\circ}d\operatorname{vol}_d^{\widetilde M}\le G(V_1,d)\cdot \int J_{h_{i,F}^\circ}d\operatorname{vol}_d^{\widetilde M} \end{equation} for a constant $G(V_1,d)\ge 1$ only depending on $V_1$ and $d$. Here $\operatorname{vol}_d^{\widetilde M}$ is the Riemannian volume measure on $\widetilde M$ induced by $M$. The same two remarks as in~\cite{guthvolume}*{p.~70} apply here: First, the stretching factor $G(V_1,d)$ depends on the dimension $d(F)$ of the face. However, the maximal dimension of a thick face only depends on $V_1$ and $d$ as noted before. Second, the usual Federer-Fleming construction takes place in a cube rather than a cuboid. The fact that the face is thick puts a limit on how distorted it is in comparison to a cube. By properness of $\Phi$ the infimal distance $\epsilon_0$ of $p_{i,F}$ to $\operatorname{im} h_{i,F}$ over all thick $F\in C_n$ and $i\in\{1,\dots,m\}$ is strictly positive. Next we describe two Cantor subbundles $Z_1^{(n)}$ and $Z_2^{(n)}$ of $N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$. The first one $Z_1^{(n)}$ is obtained by removing $\epsilon_0$-balls around the $\Gamma$-translates of the points $p_{i,F}$, more precisely, by removing \[ \coprod\limits_{\substack{\text{$F\in C_n$ thick}\\i=1,\dots,m}} \Bigl(\bigcap\limits_{j\in J_+(F)} A_j\cap B_i\Bigr)\times \Gamma\times B(p_{i,F},\epsilon_0). \] The second one $Z_2^{(n)}$ is obtained by removing all thick $n$-faces, that is, $Z_2^{(n)}$ is given by a pushout similar to~\eqref{eq: cantornerve as pushout} with the coproduct in the lower left corner running only over thin $F\in C_n$.
By equivariance, $\operatorname{im}\Phi\subset Z_1^{(n)}$, so the map $\Phi$ factors as \[\xxm\to Z_1^{(n)}\hookrightarrow N^{\mathrm{Ca}}(\mathcal{U})^{(n)}.\] The maps $\operatorname{pr}_{i,F}$ together with the identity on the $(n-1)$-skeleton and on the thin faces yield, by the pushout property (see Lemma~\ref{lem: pushouts of Cantor bundles}), a Cantor bundle map $Z_1^{(n)}\to Z_2^{(n)}$. It depends on the choice of the points $p_{i,F}$. For every such choice the composition of $\Phi\colon \xxm\to Z_1^{(n)}$ with the radial projection map $Z_1^{(n)}\to Z_2^{(n)}\hookrightarrow N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ is called a \emph{Federer-Fleming deformation} of $\Phi$. Since each $\operatorname{pr}_{i,F}$ is Lipschitz when restricted to the complement of a small ball around the center $p_{i,F}$, a Federer-Fleming deformation of $\Phi$ is still Lipschitz. We cannot bound the Lipschitz constant by a dimensional constant, though, as we cannot control the above quantity~$\epsilon_0$. \begin{lemma}\label{lem: federer-fleming volume} Let $\Phi\colon \xxm\to N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ be a Lipschitz Cantor bundle map which is subordinate to $\mathcal{U}$. Let $\Phi'$ be a Federer-Fleming deformation of $\Phi$. Then $\Phi'$ is a Lipschitz Cantor bundle map subordinate to $\mathcal{U}$ and \[\operatorname{vol}^{\mathrm{tr}}_d(\Phi')\le G(V_1,d)\cdot \operatorname{vol}^{\mathrm{tr}}_d(\Phi).\] \end{lemma} \begin{proof} Let $E\subset N^{\mathrm{Ca}}(\mathcal{U})^{(n-1)}$ be a Borel $\Gamma$-fundamental domain of the $(n-1)$-skeleton. Then \[\Phi^{-1}(E)\cup \bigcup_{F\in C_n} \Phi^{-1}\bigl(L_F^\circ\bigr)\] is a Borel $\Gamma$-fundamental domain of $\xxm$. The above union is disjoint. By~\eqref{eq: area formula} we obtain that \begin{equation*} \operatorname{vol}^{\mathrm{tr}}_d(\Phi') = \int_{\Phi^{-1}(E)} J_{\Phi'} d\operatorname{vol}_d+\sum_{F\in C_n} \int_{\Phi^{-1}(L_F^\circ)}J_{\Phi'}d\operatorname{vol}_d. \end{equation*} On the $\Phi$-preimage of the $(n-1)$-skeleton the maps $\Phi$ and $\Phi'$ coincide. The maps $\Phi$ and $\Phi'$ also coincide on $\Phi^{-1}(L_F^\circ)$ for each thin $F\in C_n$. Hence \[ \operatorname{vol}^{\mathrm{tr}}_d(\Phi') =\int_{\Phi^{-1}(E)} J_{\Phi} d\operatorname{vol}_d+\sum_{\substack{F\in C_n\\\text{$F$ thin}}} \int_{\Phi^{-1}(L_F^\circ)}J_{\Phi}d\operatorname{vol}_d+\sum_{\substack{F\in C_n\\\text{$F$ thick}}} \int_{\Phi^{-1}(L_F^\circ)}J_{\Phi'}d\operatorname{vol}_d. \] For every $F\in C_n$ the set $\Phi^{-1}(L_F^\circ)$ is the disjoint union of the products of $B_i\cap \bigcap_{j\in J_+(F)}A_j$ with the domain of $h_{i, F}^\circ$, where $i$ runs over $1,\ldots, m$. Recall that each $B_i$ is either disjoint from or contained in $\bigcap_{j\in J_+(F)}A_j$. For thick $F\in C_n$ we obtain from~\eqref{eq: federer fleming volume} that \begin{align*} \int_{\Phi^{-1}(L_F^\circ)}J_{\Phi'}d\operatorname{vol}_d &= \sum_{\substack{i=1,\ldots,m\\B_i\subset \bigcap_{j\in J_+(F)}A_j}} \mu(B_i)\int J_{\operatorname{pr}_{i,F}\circ h_{i,F}^\circ}d\operatorname{vol}_d^{\widetilde M}\\ &\le G(V_1,d)\cdot\sum_{\substack{i=1,\ldots,m\\B_i\subset \bigcap_{j\in J_+(F)}A_j}} \mu(B_i)\int J_{h_{i,F}^\circ}d\operatorname{vol}_d^{\widetilde M}\\ &=G(V_1,d)\cdot\int_{\Phi^{-1}(L_F^\circ)}J_{\Phi}d\operatorname{vol}_d. \end{align*} The claimed inequality follows from Theorem~\ref{thm: Area formula}.
\end{proof} \subsection{Guth's pushout lemma for thin faces} While the Federer-Fleming deformation allows us to deform the nerve map away from thick faces, the pushout deformation of this subsection, in combination with the compression map, allows us to deform the nerve map away from thin faces. We retain the setup from the beginning of Section~\ref{sub: federer-fleming}. We additionally require that $\Phi$ is piecewise smooth on each fiber. We apply exactly the same argument as in~\cite{guthuryson}*{p.~206} to each thin face and the maps $h_{i,F}$; we only have to take care that everything fits together into a Cantor bundle map in the end. For each open thin face $F\in C_n$ and each $i\in\{1,\dots,m\}$ one chooses a convex subset $K_{i, F}$ of $F$ containing almost all of $F$ but in general position with respect to $h_{i,F}$: The $h_{i,F}$-preimage of $K_{i,F}$ is a piecewise smooth submanifold $S_{i, F}$ of $\widetilde M$ with boundary $\partial S_{i, F}$, which is the $h_{i,F}$-preimage of $\partial K_{i,F}$. We apply the Pushout Lemma~\ref{thm: pushout lemma} to \[h_{i,F}\vert_{S_{i,F}}\colon (S_{i,F}, \partial S_{i,F})\to (K_{i,F},\partial K_{i,F}).\] The result is a map $\tilde h_{i,F}$ so that $\tilde h_{i,F}$ coincides with $h_{i,F}$ on $\partial S_{i,F}$ and \begin{equation}\label{eq: pushout lemma volume} \operatorname{vol}_d(\tilde h_{i, F})\le \operatorname{vol}_d(h_{i,F}\vert_{S_{i,F}}), \end{equation} and the image of $\tilde h_{i, F}$ lies in the $w_{i,F}$-neighborhood of $\partial K_{i,F}$, where \[ w_{i,F}\mathrel{\mathop:}= \sigma(d)\cdot\operatorname{vol}_d\bigl(h_{i,F}\vert_{S_{i,F}}\bigr)^{1/d}.\] We modify the map $\Phi$ as follows. The Cantor bundle $\xxm$ contains the $\Gamma$-invariant subspace \begin{equation}\label{eq: subspace} \bigcup_{\substack{\gamma\in\Gamma\\F\in C_n\text{ thin}\\i=1,\dots,m}} \mkern-12mu \gamma\cdot\Bigl(\bigcap_{j\in J_+(F)} A_j\cap B_i\Bigr)\times \gamma\cdot S_{i, F}=\Phi^{-1}\Bigl(\mkern-12mu\bigcup_{\substack{\gamma\in\Gamma\\F\in C_n\text{ thin}\\i=1,\dots,m}}\Bigl(\bigcap_{j\in J_+(F)} A_j\cap B_i\Bigr)\times\Gamma\times K_{i,F}\Bigr) \end{equation} which is a disjoint union of subspaces and in each fiber a disjoint union of piecewise smooth $d$-dimensional submanifolds with boundary. We make the subspace~\eqref{eq: subspace} slightly smaller by replacing each $S_{i,F}$ with its interior and then consider the complement, which we denote by $R\subset\xxm$, of this smaller subspace. Then $\xxm$ can be expressed as the pushout of Cantor bundles \[\begin{tikzcd} \coprod\limits_{\substack{\gamma\in\Gamma\\F\in C_n\text{ thin}\\i=1,\dots,m}} \mkern-12mu \Bigl(\bigcap_{j\in J_+(F)} A_j\cap B_i\Bigr)\times\Gamma \times\partial S_{i, F}\ar[r,hook] \ar[d,hook,shorten <= -2.5em]& R\ar[d,hook]\\ \coprod\limits_{\substack{\gamma\in\Gamma\\F\in C_n\text{ thin}\\i=1,\dots,m}} \mkern-12mu \Bigl(\bigcap_{j\in J_+(F)} A_j\cap B_i\Bigr)\times\Gamma \times S_{i, F}\ar[r,hook] &\xxm. \end{tikzcd} \] By the pushout property (Lemma~\ref{lem: pushouts of Cantor bundles}) we obtain a new Cantor bundle map $\Phi'\colon \xxm\to N^{\mathrm{Ca}}(\mathcal{U})^{(n)}$ that coincides with $\Phi$ on $R$ and is the equivariant extension of $\operatorname{id}\times \tilde h_{i,F}$ on each summand of the left lower corner of the pushout. The map $\Phi'$ is still Lipschitz and piecewise smooth on each fiber. We say that the map $\Phi'$ is a \emph{pushout deformation} of~$\Phi$.
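\begin{remark} In outline (a sketch; the precise bookkeeping is the same as in the proof of Lemma~\ref{lem: federer-fleming volume}), the pushout deformation changes $\Phi$ only over the sets $S_{i,F}$, where it replaces $h_{i,F}$ by $\tilde h_{i,F}$. Computing transverse volumes over a fundamental domain with the area formula therefore gives \[ \operatorname{vol}^{\mathrm{tr}}_d(\Phi')-\operatorname{vol}^{\mathrm{tr}}_d(\Phi)=\sum_{\substack{F\in C_n\text{ thin}\\ B_i\subset \bigcap_{j\in J_+(F)}A_j}}\mu(B_i)\Bigl(\operatorname{vol}_d\bigl(\tilde h_{i,F}\bigr)-\operatorname{vol}_d\bigl(h_{i,F}\vert_{S_{i,F}}\bigr)\Bigr)\le 0, \] where the inequality is~\eqref{eq: pushout lemma volume}. \end{remark}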
As in the proof of Lemma~\ref{lem: federer-fleming volume}, we conclude the following statement from~\eqref{eq: pushout lemma volume}. \begin{lemma}\label{lem: pushout deformation volume} If $\Phi'$ is a pushout deformation of $\Phi$, then \[\operatorname{vol}^{\mathrm{tr}}_d(\Phi')\le \operatorname{vol}^{\mathrm{tr}}_d(\Phi)\] and \[\operatorname{vol}_d\Bigl(\Phi'_x\vert_{{\Phi'_x}^{-1}(\star(F))}\Bigr)\le \operatorname{vol}_d\Bigl(\Phi_x\vert_{\Phi_x^{-1}(\star(F))}\Bigr) \] for every thin face $F\in N(\mathcal{V})$ and every $x\in X$. \end{lemma} \subsection{Pushing down the skeleta} We consider the Cantor nerve map $\Phi$ of a Cantor cover $\mathcal{U}$ with no self-intersections. Let $\mathcal{V}$ be the locally finite cover of $\widetilde M$ formed by the ball factors of the elements of $\mathcal{U}$ (see Definition~\ref{def: Cantor nerve}). Let $N\in{\mathbb N}$ be such that the nerve map $\Phi\colon \xxm\to N^{\mathrm{Ca}}(\mathcal{U})$ lands in the $N$-skeleton. We define \[ \delta(k)\mathrel{\mathop:}=\Bigl( 3\cdot\epsilon(d)\cdot \sigma(d)^d\cdot e^{-\beta(d)\cdot k}\Bigr)^{1/d}.\] We set $\Phi_N\mathrel{\mathop:}= \Phi$ and construct, by a finite downward induction, a sequence of Lipschitz Cantor bundle maps subordinate to $\mathcal{U}$ \[\Phi_{i}\colon \xxm\to N^{\mathrm{Ca}}(\mathcal{U})^{(i)},~i=N, \dots, d,\] such that for every $i\in \{N, N-1, \dots, d+1\}$ \begin{equation}\label{eq: inductive global volume} \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi_{i-1}\bigr)\le \bigl(1-2\cdot\delta(i)\bigr)^{-d}\cdot G(V_1,d)\cdot \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi_{i}\bigr), \end{equation} and for every thin face $F\in N(\mathcal{V})$ and every $x\in X$ and $i\in \{N, N-1, \dots, d+1\}$ \begin{equation}\label{eq: inductive local volume} \operatorname{vol}_d\Bigl(\Phi_{i-1}\vert_{\Phi_{i-1}^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)\le \bigl(1-2\cdot\delta(i)\bigr)^{-d}\cdot \operatorname{vol}_d\Bigl(\Phi_{i}\vert_{\Phi_{i}^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr) \end{equation} and for every thin face $F\in N(\mathcal{V})$ and every $x\in X$ and $i\in \{N, N-1, \dots, d\}$ \begin{equation}\label{eq: any local volumes} \operatorname{vol}_d\Bigl(\Phi_{i}\vert_{\Phi_{i}^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr) \le 2\cdot\epsilon(d)\cdot r_1(F)^d\cdot e^{-\beta(d)\cdot d(F)}. \end{equation} We combine the deformation steps from the previous subsections to inductively deform the map $\Phi_i$ with $i>d$, which lands in the $i$-skeleton, to a map $\Phi_{i-1}$, which lands in the $(i-1)$-skeleton. The application of Guth's pushout lemma requires that the map to the nerve is fiberwise piecewise smooth. This is true for the original Cantor nerve map. All deformation steps preserve that property. The first map $\Phi_N=\Phi$ satisfies~\eqref{eq: any local volumes} by Theorem~\ref{thm: volume restricted to star} and~\eqref{eq: thin face}. Next we construct $\Phi_{i-1}$ from $\Phi_i$. First, let $\Phi_i'$ be a Federer-Fleming deformation of $\Phi_i$. The image of $\Phi_i'$ does not meet any open thick $i$-face; it lies in the Cantor subcomplex $Z_2^{(i)}$. By Lemma~\ref{lem: federer-fleming volume}, \begin{equation}\label{eq: G estimate of Federer Fleming} \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi_i'\bigr)\le G(V_1,d)\cdot \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi_i\bigr). \end{equation} Let $x\in X$ and let $F$ be an open thin face in $N^{\mathrm{Ca}}(\mathcal{U})_x\subset \{x\}\times N(\mathcal{V})$.
We have \[\Phi_{i}'\vert_{{\Phi_{i}'}^{-1}\bigl(\{x\}\times F\bigr)}= \Phi_{i}\vert_{\Phi_{i}^{-1}\bigl(\{x\}\times F\bigr)}.\] Since every face in the open star of a thin face is thin, we also obtain that \begin{equation}\label{eq: equality thin star} \Phi_{i}'\vert_{{\Phi_{i}'}^{-1}\bigl(\{x\}\times \star(F)\bigr)}= \Phi_{i}\vert_{\Phi_{i}^{-1}\bigl(\{x\}\times \star(F)\bigr)}. \end{equation} Next we assume that $F$ is $i$-dimensional and we consider a pushout deformation $\Phi_i''$ of $\Phi_i'$ which does not increase volumes according to Lemma~\ref{lem: pushout deformation volume}. Let \[w_{i,F}\mathrel{\mathop:}= \sigma(d)\cdot \operatorname{vol}_d\Bigl(\Phi_{i}'\vert_{\Phi_{i}'^{-1}\bigl(\{x\}\times F\bigr)}\Bigr)^{1/d}=\sigma(d)\cdot \operatorname{vol}_d\Bigl(\Phi_{i}\vert_{\Phi_{i}^{-1}\bigl(\{x\}\times F\bigr)}\Bigr)^{1/d}. \] The image of $(\Phi_i'')_x$ within~$F$ lies in the $w_{i,F}$-neighborhood of the boundary of a convex subset of $F$, which we can choose arbitrarily large within $F$. By~\eqref{eq: any local volumes} we have \begin{equation*} w_{i,F}\le \sigma(d)\cdot\operatorname{vol}_d\Bigl(\Phi_{i}\vert_{\Phi_{i}^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)^{1/d}\le \bigl(2\cdot\sigma(d)^d\cdot \epsilon(d)\cdot e^{-\beta(d)\cdot i}\bigr)^{1/d}\cdot r_1(F). \end{equation*} Hence we choose the convex subset so that $\operatorname{im} (\Phi_i'')_x\cap F$ lies in the $\delta(i)\cdot r_1(F)$-neighborhood of $\partial F$. Hence the composition with a suitable compression map \[\Phi_{i-1}\mathrel{\mathop:}= \mathrm{P}_{\delta(i)}\circ \Phi_i''\] lands in the $(i-1)$-skeleton. Now~\eqref{eq: inductive global volume} follows from~\eqref{eq: G estimate of Federer Fleming}, the fact that the pushout deformation does not increase volume, and $\mathrm{P}_{\delta(i)}$ having Lipschitz constant at most $\bigl(1-2\cdot\delta(i)\bigr)^{-1}$. Similarly, using~\eqref{eq: equality thin star} for every thin face $F$ and the inclusion $\mathrm{P}_{\delta(i)}^{-1}(\star(F))\subset \star(F)$ for every face $F$, we obtain~\eqref{eq: inductive local volume}. Note that~\eqref{eq: epsilon constant} implies that \[ \prod_{l=i}^N\bigl(1-2\cdot\delta(l)\bigr)^{-d}<2.\] Using the induction hypothesis, we obtain~\eqref{eq: any local volumes} from \begin{align*} \operatorname{vol}_d\Bigl(\Phi_{i-1}\vert_{\Phi_{i-1}^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)&\le \operatorname{vol}_d\Bigl(\Phi\vert_{\Phi^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)\cdot\prod_{l=i}^N \bigl(1-2\cdot\delta(l)\bigr)^{-d}\\ &\le 2\cdot\operatorname{vol}_d\Bigl(\Phi\vert_{\Phi^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)\\ &\le 2\cdot C(d)\cdot V_1\cdot r_1(F)^{d+1}\cdot e^{-\beta(d)\cdot d(F)}\quad\text{(Theorem~\ref{thm: volume restricted to star})}\\ &\le 2\cdot \epsilon(d)\cdot r_1(F)^d\cdot e^{-\beta(d)\cdot d(F)}\quad\text{(see~\eqref{eq: thin face})}.
\end{align*} \begin{theorem}\label{thm: homotoped down map} There is a constant $C(V_1, d)>0$ only depending on~$V_1$ and~$d$ such that the map $\Phi_d\colon \xxm\to N^{\mathrm{Ca}}(\mathcal{U})^{(d)}$ satisfies \[\operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi_d\bigr)\le C(V_1,d)\cdot\operatorname{vol}(M).\] Furthermore, for every thin $d$-face $F$ in $N(\mathcal{V})$ and every $x\in X$ we have \[\operatorname{vol}_d\Bigl(\Phi_d\vert_{\Phi_{d}^{-1}\bigl(\{x\}\times \star(F)\bigr)}\Bigr)< r_1(F)^d.\] \end{theorem} \begin{proof} By the same argument as in~\cite{guthvolume}*{Proof of Lemma~9} based on~\cite{guthvolume}*{Lemma~3}, there is a constant $D(V_1, d)>0$ only depending on $V_1$ and the dimension $d$ such that every thick face in $N^{\mathrm{Ca}}(\mathcal{U})$ is at most $D(V_1,d)$-dimensional. Hence we have to apply the Federer-Fleming deformation step at most $D(V_1, d)$ times. The constant $G(V_1,d)$ in~\eqref{eq: inductive global volume} only appears if a Federer-Fleming deformation was used when deforming $\Phi_i$ to $\Phi_{i-1}$. Therefore we obtain that \begin{align*} \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi_d\bigr)&\le \prod_{l=d+1}^\infty \bigl( 1-2\delta(l)\bigr)^{-d}\cdot G(V_1,d)^{D(V_1,d)}\cdot \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi\bigr)\\ &\le 2 \cdot G(V_1,d)^{D(V_1,d)}\cdot \operatorname{vol}^{\mathrm{tr}}_d\bigl(\Phi\bigr). \end{align*} The first assertion now follows from Theorem~\ref{thm: volume of nerve map}. The second assertion follows from~\eqref{eq: epsilon constant} and~\eqref{eq: any local volumes}. \end{proof} \section{From volume to simplicial volume}\label{sec: from volume to simplicial volume} In this section we complete the proofs of the main results in the introduction. Let us recall the common setup of the main theorems. Let $M$ be a closed connected $d$-dimensional Riemannian manifold with fundamental group~$\Gamma$. Let $X$ be a Cantor space endowed with a free continuous action of $\Gamma$ and a $\Gamma$-invariant Borel probability measure~$\mu$ (Theorem~\ref{thm: existence of action}). Let $V_1>0$ be an upper bound of the volumes of $1$-balls in the universal cover $\widetilde M$. We choose a good Cantor cover $\mathcal{U}$ on~$\xxm$ with no self-intersections (Theorem~\ref{thm:EquivariantCover}). The transverse volume of the associated Cantor nerve map is bounded by a dimensional constant times the volume of~$M$ (Theorem~\ref{thm: volume of nerve map}). According to the previous section, in particular Theorem~\ref{thm: homotoped down map}, we can deform the Cantor nerve map to the $d$-skeleton without losing control of its transverse volume. More precisely, we obtain a Cantor bundle map $\Phi\colon\xxm\to N^{\mathrm{Ca}}(\mathcal{U})^{(d)}$ subordinate to~$\mathcal{U}$ such that \begin{equation} \operatorname{vol}^{\mathrm{tr}}_d(\Phi)\le C(V_1,d)\cdot\operatorname{vol}(M)\label{eq: volume bound final map} \end{equation} for a constant $C(V_1,d)>0$ only depending on $V_1$ and the dimension $d$. The second assertion of Theorem~\ref{thm: homotoped down map} and the fact that $r_1(F)^d\le \operatorname{vol}_d(F)$ for an open $d$-face $F$ imply that for every $x\in X$ and every thin open $d$-face $F$ in $N(\mathcal{U}_x)$ the image of $\Phi_x$ misses at least one point of $F$. As before, we will denote by $\mathcal{V}$ the locally finite cover of $\widetilde M$ by all balls appearing as factors of elements of $\mathcal{U}$. For the rest of this section, let $N$ denote the $d$-skeleton of the nerve of $\mathcal{V}$.
In particular, we have \[N^{\mathrm{Ca}}(\mathcal{U})\subset X\times N.\] The remaining steps to complete the proofs of the main theorems are as follows. In~\ref{sub: modules over the group ring} we present an auxiliary result on the freeness of certain ${\mathbb Z}[\Gamma]$-modules. This is, for example, needed in the proof of Lemma~\ref{lem: htp equivalence with caterina}, where we invoke the fundamental theorem of homological algebra to show that a chain map between free ${\mathbb Z}[\Gamma]$-chain complexes which induces an isomorphism on homology is a ${\mathbb Z}[\Gamma]$-chain homotopy equivalence. In~\ref{sub: classifying maps} we discuss classifying maps to classifying spaces. In~\ref{sub: cellular chains vs volume} we see how to read off geometric information from the coefficients of a suitable representative of the image of the fundamental class under $\Phi_\ast$. We finish the proofs of the main theorems in~\ref{sub: conclusion}. \subsection{A result on modules over the group ring}\label{sub: modules over the group ring} If $H<\Gamma$ is a finite subgroup, then ${\mathbb Q}[\Gamma]\otimes_{{\mathbb Q}[H]}{\mathbb Q}$ is a projective ${\mathbb Q}[\Gamma]$-module. However, if $H$ is a non-trivial finite subgroup, ${\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]}{\mathbb Z}$ is not a projective ${\mathbb Z}[\Gamma]$-module. The following lemma shows how to remedy the situation for integral coefficients using the ${\mathbb Z}[\Gamma]$-module $C(X;{\mathbb Z})$. \begin{lemma}\label{lem: free module} The following statements hold true: \begin{enumerate} \item Let $H<\Gamma$ be a finite subgroup and $\chi\colon H\to \{\pm 1\}$ a character. Let ${\mathbb Z}^\chi$ denote the ${\mathbb Z}[H]$-module ${\mathbb Z}$ endowed with the action $h\cdot x=\chi(h)x$ for $h\in H$ and $x\in {\mathbb Z}$. Then the ${\mathbb Z}[\Gamma]$-module \[ C(X;{\mathbb Z})\otimes\Bigl( {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]} {\mathbb Z}^\chi\Bigr)\] with the module structure induced by the diagonal (left) $\Gamma$-action is free. \item Let $H<\Gamma$ be a finite subgroup. The ${\mathbb Z}[\Gamma]$-module $C(X;{\mathbb Z})\otimes{\mathbb Z}[\Gamma/H]$ with the module structure induced by the diagonal (left) $\Gamma$-action is free. \end{enumerate} \end{lemma} \begin{proof} Ad 1). Since $H$ is finite, there is a clopen fundamental domain $A$ of the $H$-action on $X$. Consider the ${\mathbb Z}$-homomorphism \[ C(A;{\mathbb Z})\to C(X;{\mathbb Z})\otimes\Bigl( {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]} {\mathbb Z}^\chi\Bigr)\] that maps $f\in C(A;{\mathbb Z})$ to $f\otimes 1\otimes 1$. Here we regard $C(A;{\mathbb Z})$ as a subgroup of $C(X;{\mathbb Z})$ by extending functions by zero. The above homomorphism extends to a ${\mathbb Z}[\Gamma]$-homomorphism \[ g\colon{\mathbb Z}[\Gamma]\otimes C(A;{\mathbb Z})\to C(X;{\mathbb Z})\otimes\Bigl( {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]} {\mathbb Z}^\chi\Bigr)\] from the induced module. Since $C(A;{\mathbb Z})$ is a free ${\mathbb Z}$-module by Corollary~\ref{cor: continuous function free}, it suffices to show that $g$ is bijective. Let $S\subset \Gamma$ be a set of representatives for the right $H$-cosets. We obtain a natural isomorphism as ${\mathbb Z}$-modules \[ {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]} {\mathbb Z}^\chi\cong \bigoplus_{\gamma\in S} {\mathbb Z}\] and thus \begin{equation}\label{eq: coset representation of target} C(X;{\mathbb Z})\otimes \Bigl( {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]} {\mathbb Z}^\chi\Bigr)\cong \bigoplus_{\gamma\in S} C(X;{\mathbb Z}).
\end{equation} The domain of~$g$ is in an obvious way isomorphic to \begin{equation}\label{eq: coset representation of domain} {\mathbb Z}[\Gamma]\otimes C(A;{\mathbb Z})\cong {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]} C(X;{\mathbb Z})\cong {\mathbb Z}[\Gamma/H]\otimes C(X;{\mathbb Z})\cong \bigoplus_{\gamma\in S}C(X;{\mathbb Z}). \end{equation} Both isomorphisms~\eqref{eq: coset representation of target} and~\eqref{eq: coset representation of domain} are compatible with $g$. Thus $g$ is an isomorphism. Ad 2). This is a special case of~(1) for the trivial character. \end{proof} \subsection{Classifying maps and the simplicial norm}\label{sub: classifying maps} Upon passing to the barycentric subdivision $N$ becomes a proper $\Gamma$-CW complex (Lemma~\ref{lem: proper Gamma CW}). We choose a map $\Psi\colon X\times N\to X\times E\Gamma$ as in Lemma~\ref{lem: map to classifying space}. We consider the following composition of maps of ${\mathbb Z}[\Gamma]$-chain complexes. \begin{equation}\label{eq: composition up to classifying space} C_\ast(\widetilde M)\to C(X;{\mathbb Z})\otimes C_\ast(\widetilde M)\xrightarrow{\Phi_\ast} C(X;{\mathbb Z})\otimes C_\ast(N)\xrightarrow{\Psi_\ast} C(X;{\mathbb Z})\otimes C_\ast(E\Gamma) \end{equation} The first map is induced by the inclusion of constant functions ${\mathbb Z}\hookrightarrow C(X;{\mathbb Z})$. The next two maps are induced by $\Phi$ and $\Psi$ according to Lemma~\ref{lem: induced map in smaller subcomplex}. Recall that the group $\Gamma$ acts diagonally on the tensor products. The abelian group $C(X;{\mathbb Z})$ is free, in particular flat, due to Corollary~\ref{cor: continuous function free}. Hence the right-hand chain complex $C(X;{\mathbb Z})\otimes C_\ast(E\Gamma)$ is a resolution of the ${\mathbb Z}[\Gamma]$-module $C(X;{\mathbb Z})$. Again by Lemma~\ref{lem: free module} it is a free ${\mathbb Z}[\Gamma]$-resolution. The singular chain complex $C_\ast(\widetilde M)$ is a free ${\mathbb Z}[\Gamma]$-chain complex, and on $0$-th homology the above composition is the inclusion of constant functions. Let $c\colon \widetilde M\to E\Gamma$ be the classifying map. Then \[C_\ast(\widetilde M)\xrightarrow{c_\ast} C_\ast(E\Gamma)\hookrightarrow C(X;{\mathbb Z})\otimes C_\ast(E\Gamma)\] is a ${\mathbb Z}[\Gamma]$-chain map with the same behaviour on $0$-th homology. By the fundamental theorem of homological algebra the two chain maps are chain homotopic, which we record for later use. \begin{remark}\label{rem: homotopic chain maps to classifying space} Let $c\colon \widetilde M\to E\Gamma$ be the classifying map. The map~\eqref{eq: composition up to classifying space} and the chain map $C_\ast(\widetilde M)\xrightarrow{c_\ast} C_\ast(E\Gamma)\hookrightarrow C(X;{\mathbb Z})\otimes C_\ast(E\Gamma)$ are chain homotopic as ${\mathbb Z}[\Gamma]$-chain maps. Further, the latter map is equal to the composition $C_\ast(\widetilde M)\hookrightarrow C(X;{\mathbb Z})\otimes C_\ast(\widetilde M)\xrightarrow{\operatorname{id}\otimes c_\ast} C(X;{\mathbb Z})\otimes C_\ast(E\Gamma)$. \end{remark} \subsection{Cellular chains and volume in the Cantor nerve}\label{sub: cellular chains vs volume} Let $S_n$ be a complete set of representatives of the $\Gamma$-orbits of $n$-faces of~$N$. For each $n$-face $F$ let $\Gamma_F<\Gamma$ be the finite subgroup of elements $\gamma\in \Gamma$ with $\gamma F=F$ as subsets of $N$.
After choosing an orientation for an $n$-face $F$ we obtain a character $\eta_F\colon\Gamma_F\to \{\pm 1\}$ which indicates whether $\gamma\in\Gamma_F$ preserves or reverses the orientation; the character is independent of the choice of orientation. Note that the character would be trivial if the CW-structure of $N$ were a $\Gamma$-CW structure. There is an isomorphism of ${\mathbb Z}[\Gamma]$-modules \begin{equation}\label{eq: cellular chain complex as Gamma module} C_n^\mathrm{cell}(N)\cong \bigoplus_{F\in S_n}{\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[\Gamma_F]}{\mathbb Z}^{\eta_F}. \end{equation} Lemma~\ref{lem: free module} now implies the first statement of the following lemma. The second statement follows similarly by noting that the singular chain group $C_n(N;{\mathbb Z})$ is a direct sum of ${\mathbb Z}[\Gamma]$-modules of the type ${\mathbb Z}[\Gamma/H]\cong {\mathbb Z}[\Gamma]\otimes_{{\mathbb Z}[H]}{\mathbb Z}$ with $H<\Gamma$ being finite. \begin{lemma}\label{lem: singular chains free} For every $n\in{\mathbb N}$ the ${\mathbb Z}[\Gamma]$-module $C(X;{\mathbb Z})\otimes_{\mathbb Z} C_n^\mathrm{cell}(N)$ endowed with the diagonal $\Gamma$-action is free. The same is true when $C_n^\mathrm{cell}(N)$ is replaced by $C_n(N;{\mathbb Z})$. \end{lemma} There is a natural chain map from the cellular chain complex of $N$ to the oriented singular chain complex of $N$ -- but not to the singular chain complex of $N$. Recall that the \emph{oriented singular chain complex} $C_\ast^o(Y)$ of a space $Y$ is the quotient complex of $C_\ast(Y)$ obtained by introducing the relation $g\sigma=\operatorname{sign}(g)\sigma$ for $\sigma\in C_p(Y)$, $g\in S(p+1)$ and the natural action of the symmetric group $S(p+1)$ on singular $p$-simplices, and the relation $\sigma=0$ if there is a transposition $t$ with $t\sigma=\sigma$. The barycentric subdivision of a (closed) $n$-face in $N$ consists of $2^n\cdot n!$ $n$-simplices. For each oriented $n$-face $F$ of $N$ the sum of the affine singular $n$-simplices matching the simplices of the barycentric subdivision with orientation is a chain $c_F\in C^o_n(N)$. We obtain an equivariant chain map \[ s_\ast\colon C^\mathrm{cell}_\ast(N)\to C^o_\ast(N)\] that maps an oriented $n$-face $F$ to $c_F$. We endow the oriented singular chain complex with the quotient norm. Since the integral foliated simplicial volume is defined in terms of the norm on singular chains, it is important to know that we do not lose too much by passing to oriented singular chains. The following result can be found in~\cite{campagnolo+sauer}*{Theorem~3.3 and Remark~3.4}. \begin{theorem}\label{thm: comparision non-oriented and oriented homology} Let $Y$ be a topological space. The projection $\operatorname{pr}_\ast\colon C_\ast(Y)\to C_\ast^o(Y)$ is a natural chain homotopy equivalence. The norm of the map $\operatorname{pr}_\ast$ is at most~$1$, and the norm of a suitable chain homotopy inverse is at most~$(p+1)!$ in degree~$p$. \end{theorem} \begin{remark}\label{rem: geometric form in paper with caterina} The natural chain homotopy inverse constructed in~\cite{campagnolo+sauer}*{Theorem~3.3 and Remark~3.4} takes the equivalence class of a singular simplex $\sigma$ and maps it to a linear combination of singular simplices with coefficients in $\{1,-1\}$ which corresponds to a barycentric subdivision of $\sigma$.
\end{remark} \begin{lemma}\label{lem: htp equivalence with caterina} The composition \[ C(X;{\mathbb Z})\otimes C^{\mathrm{cell}}_\ast(N)\xrightarrow{\operatorname{id}\otimes s_\ast} C(X;{\mathbb Z})\otimes C^o_\ast(N)\xrightarrow{\operatorname{id}\otimes q_\ast^N} C(X;{\mathbb Z})\otimes C_\ast(N),\] where $q_\ast^N$ is any natural chain homotopy inverse as in Theorem~\ref{thm: comparision non-oriented and oriented homology}, is a ${\mathbb Z}[\Gamma]$-chain homotopy equivalence (with regard to the diagonal $\Gamma$-actions). \end{lemma} \begin{proof} Both $s_\ast$ and $q_\ast^N$ are homology isomorphisms. Since $C(X;{\mathbb Z})$ is a free abelian group (Corollary~\ref{cor: continuous function free}), also $\operatorname{id}\otimes s_\ast$ and $\operatorname{id}\otimes q_\ast^N$ are homology isomorphisms. Both the domain and the codomain of the composition are free ${\mathbb Z}[\Gamma]$-modules by Lemma~\ref{lem: singular chains free}. By the fundamental theorem of homological algebra the composition is a ${\mathbb Z}[\Gamma]$-chain homotopy equivalence. \end{proof} In the following we consider the local degree of a map $f\colon \widetilde M\to S^d$ which is proper outside a fixed basepoint of $S^d$~\cite{dold}*{VIII.4}. Recall that the \emph{local degree} of $f$ at a point $z\in S^d$ different from the basepoint is the integer $\deg_z(f)$ such that the locally finite fundamental class is sent to $\deg_z(f)\cdot [S^d]$ under \[H_d^\mathrm{lf}\bigl(\widetilde M\bigr)\rightarrow H_d\bigl(\widetilde M, \widetilde M\backslash f^{-1}(\{z\})\bigr)\xrightarrow{f_\ast} H_d\bigl(S^d, S^d\backslash\{z\}\bigr)\xleftarrow{\cong}H_d(S^d). \] The local degree does not depend on the choice of $z$~\cite{dold}*{Proposition~4.4 on p.~267}; thus we denote it by $\deg(f)$. If $f$ is Lipschitz then \begin{equation}\label{eq: local degree as differentials} \deg(f)=\sum_{y\in f^{-1}(\{z\})} \operatorname{sign}\det Df_y \end{equation} for almost every $z\in S^d$ by~\cite{federer}*{Corollary~4.1.26 on p.~383}. For a $d$-face $F$ in $N$ we write $S(F)$ for the quotient of the closure of $F$ by its boundary, which is homeomorphic to $S^d$. We take the collapsed boundary as the basepoint of $S(F)$. Below we refer to the map $j_\ast^M$ from Definition~\ref{def: inclusion into parametrised chains}. \begin{lemma}\label{lem: jacobian formula in homology} Let $S_d$ and, for a $d$-face $F$, the group $\Gamma_F$ be as in~\eqref{eq: cellular chain complex as Gamma module}. Let $A_F\subset X$ be a fundamental domain for the $\Gamma_F$-action on $X$. Then there are $a_F\in C(X;{\mathbb Z})$ supported on $A_F$ and integral chains $e_F\in C_d(N)$ of $\ell^1$-norm at most $2^d\cdot d!\cdot (d+1)!$ such that the image of the fundamental class under \[ H_d(M)\xrightarrow{j_\ast^M} H_d^\Gamma\bigl(\widetilde M; C(X;{\mathbb Z})\bigr)\xrightarrow{\Phi_\ast} H_d^\Gamma\bigl(N; C(X;{\mathbb Z})\bigr) \] is represented by the cycle $\sum_{F\in S_d} a_F\otimes e_F$. For $x\in A_F$, we have \[\deg\bigl(\widetilde M\xrightarrow{\Phi_x} N\xrightarrow{\operatorname{pr}_F} S(F)\bigr)=\pm a_F(x)\] and \[ \abs{a_F(x)}\le\frac{1}{\operatorname{vol}_d(F)}\int_{\Phi_x^{-1}(F)}J_d\Phi(y)d\operatorname{vol}_d^{\widetilde M}(y). \] \end{lemma} \begin{proof} Every $d$-cycle in $C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]} C_\ast(N)$ is homologous to a cycle coming from $C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]} C_d^{\mathrm{cell}}(N)$ via the map in Lemma~\ref{lem: htp equivalence with caterina} (after passing to $\Gamma$-coinvariants).
Since $C_d^{\mathrm{cell}}(N)$ is an abelian group generated by $\Gamma$-translates of $F\in S_d$, it follows that every $d$-cycle is homologous to a $d$-cycle of the form \[ \sum_{F\in S_d} b_F\otimes q_\ast^N\bigl([c_F]\bigr)\in C(X;{\mathbb Z})\otimes_{{\mathbb Z}[\Gamma]}C_d(N) \] for some $b_F\in C(X;{\mathbb Z})$. Set $e_F\mathrel{\mathop:}= q_\ast^N([c_F])$. The statement about the norm of $e_F$ follows from the fact that $c_F$ consists of $2^d \cdot d!$ singular simplices and Theorem~\ref{thm: comparision non-oriented and oriented homology}. Next we rewrite the tensor products to obtain functions supported on $A_F$: Let $b_F'=\chi_{A_F}\cdot b_F$ where $\chi_{A_F}$ is the characteristic function of $A_F$. We have \[ b_F\otimes e_F=\sum_{h\in \Gamma_F}b_F'(h^{-1}\_)\otimes e_F=\sum_{h\in \Gamma_F}b_F'\otimes \eta_F(h)\cdot e_F=\Bigl(\sum_{h\in\Gamma_F} \eta_F(h)b_F'\Bigr)\otimes e_F\] where $\eta_F\colon \Gamma_F\to \{\pm 1\}$ is the character as in~\eqref{eq: cellular chain complex as Gamma module}. Therefore every homology class is of the form $\bigl[\sum_{F\in S_d} a_F\otimes e_F\bigr]$ where the $a_F$ are functions supported on~$A_F$ and the $e_F$ are integral chains of $\ell^1$-norm at most $2^d\cdot d!\cdot (d+1)!$. We represent $z_M\mathrel{\mathop:}= \Phi_\ast\circ j_\ast^M([M])$ in this form with suitable $a_F$ and $e_F$. The image of $z_M$ under the map $\operatorname{ev}_x$, $x\in X$, from Section~\ref{sub: from chains to measure chains}, which is a locally finite homology class of $N$, is denoted by $(z_M)_x^\mathrm{lf}$. We obtain that \[ (\Phi_x)_\ast\bigl([\widetilde M]^\mathrm{lf}\bigr)=(z_M)_x^\mathrm{lf}=\bigl[\sum_{\gamma\in\Gamma}\sum_{F\in S_d} a_F(\gamma^{-1}x)\cdot \gamma\cdot e_F \bigr ] \] as elements in the locally finite homology of $N$. Let $F_0$ be an open $d$-face in $N$ and $z_0$ a point in $F_0$. We consider the image of $(z_M)_x^\mathrm{lf}$ under the homomorphism \[ H_d^\mathrm{lf}\bigl(N\bigr)\to H_d\bigl(N, N\backslash \operatorname{pr}_{F_0}^{-1}(\{z_0\})\bigr)\xrightarrow{H_d(\operatorname{pr}_{F_0})} H_d\bigl(S(F_0), S(F_0)\backslash\{z_0\}\bigr)\xleftarrow{\cong} H_d\bigl(S(F_0)\bigr). \] For $\gamma\not\in \Gamma_{F_0}$ or $F\ne F_0$ the chain $\gamma\cdot e_F$ is mapped to zero under the chain map $C_d^\mathrm{lf}(N)\to C_d(N, N\backslash\operatorname{pr}_{F_0}^{-1}(\{z_0\}))\to C_d(S(F_0), S(F_0)\backslash\{z_0\})$. Therefore only the terms $a_F(\gamma^{-1}x)\gamma\cdot e_F$ with $\gamma\in \Gamma_{F_0}$ and $F=F_0$ contribute potentially non-zero terms to the image of the homology class. But for $x\in A_{F_0}$ and $\gamma\in \Gamma_{F_0}\backslash\{1\}$ we have $a_{F_0}(\gamma^{-1}x)=0$. Hence $(z_M)_x^\mathrm{lf}$ is mapped to $a_{F_0}(x)$ times the generator, which implies the statement about the local degree. The bound for $\abs{a_F(x)}$ is now a direct consequence of the area formula~\cite{federer}*{Theorem~3.2.5 on p.~244 and the remark before 3.2.47 on p.~282} and the characterization~\eqref{eq: local degree as differentials} of the local degree. \end{proof} \subsection{Conclusion of proofs of main results}\label{sub: conclusion} For the next result we refer the reader to the overview of dimensional constants after Theorem~\ref{thm: pushout lemma}. \begin{theorem}\label{thm: main auxiliary statement} For every $V_1>0$ and $d\in{\mathbb N}$ there are constants $\operatorname{const}(d, V_1)>0$ and $\epsilon(d)>0$ with the following properties. Let $(M,g)$ be a $d$-dimensional closed Riemannian manifold with $V_{(\widetilde M,\tilde g)}(1)<V_1$.
Let $\Gamma=\pi_1(M)$, and let $c\colon M\to B\Gamma$ be the classifying map. Then \[ \norm{i_\ast^{\mathbb R}\circ c_\ast([M])}\le\norm{j_\ast^{B\Gamma}\circ c_\ast([M])}_{\mathbb Z}^X\le \operatorname{const}(d, V_1)\cdot \operatorname{vol}(M). \] Furthermore, if $V_{(\widetilde M,\tilde g)}(1)<C(d)^{-1}\cdot \epsilon(d)$, then \[i_\ast^{\mathbb R}\circ c_\ast([M])=0\in H_d(B\Gamma;{\mathbb R}).\] \end{theorem} \begin{proof} The following diagram contains all the maps we have to consider. \[ \begin{tikzcd} & & H_d^\Gamma(N;C(X;{\mathbb Z}))\ar[d, "\Psi_\ast"] & &\\ H_d(M)\ar[rr, bend right=20, "j_\ast^{B\Gamma}\circ c_\ast"]\ar[rrr, bend right=25,"i_\ast^{\mathbb R}\circ c_\ast"]\ar[r, "j_\ast^M"] &H_d^\Gamma(\widetilde M; C(X;{\mathbb Z}))\ar[ru, "\Phi_\ast"]\ar[r]& H_d^\Gamma(E\Gamma; C(X;{\mathbb Z}))\ar[r] & H_d(B\Gamma;{\mathbb R}) \end{tikzcd} \] The middle horizontal map is induced by the (equivariant) classifying map $\widetilde M\to E\Gamma$ and the identity on $C(X;{\mathbb Z})$. The right-hand horizontal map is induced by integration $C(X;{\mathbb Z})\to {\mathbb R}$ (see Remark~\ref{rem: comparision parametrised and real norm}). The upper triangle is commutative by Remark~\ref{rem: homotopic chain maps to classifying space}. That the lower part commutes is straightforward. Let $z_M$ be the image of $[M]\in H_d(M)$ in $H^\Gamma_d(N; C(X;{\mathbb Z}))$. According to Lemma~\ref{lem: jacobian formula in homology} the homology class $z_M$ is represented by a cycle of the form \[\sum_{F\in S_d} a_F\otimes e_F\] where the function $a_F$ is supported on~$A_F$ and $a_F(x)$ is the local degree of $\Phi_x$ followed by the projection to $S(F)$ for $x\in A_F$. If $F\in S_d$ is thin and $x\in X$, then there is at least one point in the interior of $F$ that is not in the image of $\Phi_x$ according to the volume estimate of Theorem~\ref{thm: homotoped down map}. In this case $a_F(x)=0$. Under the assumption $V_{(\widetilde M,\tilde g)}(1)<C(d)^{-1}\cdot \epsilon(d)$ every $d$-face is thin. Hence $z_M=0$. The commutativity of the diagram implies the second statement. The smallest side length of a thick $d$-face in $N$ and hence its volume are bounded from below by a constant that only depends on the dimension $d$ and $V_1$. Let $\operatorname{const}'(d,V_1)>0$ be such that $1/\operatorname{const}'(d, V_1)$ is a lower volume bound of thick $d$-faces. We now set $\operatorname{const}(d,V_1)\mathrel{\mathop:}= 2^d\cdot d!\cdot (d+1)!\cdot\operatorname{const}'(d, V_1)\cdot C(d,V_1)$, where $C(d, V_1)$ is the constant in~\eqref{eq: volume bound final map}. Since $\Psi_\ast$ does not increase the integral foliated norm, since the above diagram commutes, and because of Remark~\ref{rem: comparision parametrised and real norm}, it suffices to show that $\norm{z_M}_{\mathbb Z}^X\le\operatorname{const}(d,V_1)\cdot\operatorname{vol}(M)$ to obtain the first statement of the theorem. Let $T_d\subset S_d$ be the subset of thick $d$-faces.
With the norm bound on $e_F$ from Lemma~\ref{lem: jacobian formula in homology} and the above argument for thin $d$-faces we obtain that \begin{equation*} \norm{z_M}_{\mathbb Z}^X \le 2^d\cdot d!\cdot (d+1)!\cdot\sum_{F\in T_d}\int_{A_F} \abs{a_F(x)}d\mu(x). \end{equation*} Again with Lemma~\ref{lem: jacobian formula in homology} we conclude that \begin{equation*} \norm{z_M}_{\mathbb Z}^X \le 2^d\cdot d!\cdot (d+1)!\cdot\operatorname{const}'(d,V_1)\sum_{F\in T_d}\int_{A_F}\int_{\Phi_x^{-1}(F)}\bigl\vert J_d\Phi_x(y)\bigr\vert d\operatorname{vol}_d^{\widetilde M}(y)\,d\mu(x). \end{equation*} The subset \[ \Bigl\{ (x,y)\mid F\in T_d, x\in A_F, y\in \Phi_x^{-1}(F)\Bigr\} \] is contained in a $\Gamma$-fundamental domain of~$\xxm$. Hence the Area Formula (Theorem~\ref{thm: Area formula}) and the definition of $\operatorname{const}(d,V_1)$ imply that \begin{equation*} \norm{z_M}_{\mathbb Z}^X \le 2^d\cdot d!\cdot (d+1)!\cdot\operatorname{const}'(d,V_1)\cdot\operatorname{vol}^{\mathrm{tr}}_d(\Phi)\le \operatorname{const}(d,V_1)\cdot\operatorname{vol}(M).\qedhere \end{equation*} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: main result}] By Gromov's mapping theorem~\cite{gromovvbc}*{Section~3.1. on p.~248} we have $\norm{i_\ast^{\mathbb R}\circ c_\ast([M])}=\norm{M}$. Therefore Theorem~\ref{thm: main result} is implied by Theorem~\ref{thm: main auxiliary statement}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: main result about essential manifolds}] By scaling the metric, it is enough to prove the case $R=1$, which is precisely the second statement of Theorem~\ref{thm: main auxiliary statement}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: main l2 result}] According to Theorem~\ref{thm: main auxiliary statement} the $X$-parametrised integral simplicial norm of $c_\ast([M])\in H_d(B\Gamma)$ is bounded from above by $\operatorname{const}(d, V_1)\cdot\operatorname{vol}(M)$. By Theorem~\ref{thm: bound von Neumann rank} the von Neumann rank of $c_\ast([M])$ is bounded by $d\cdot C(d,V_1)\cdot \operatorname{vol}(M)$. If $M$ is, in addition, aspherical, then $c$ is a homotopy equivalence and the von Neumann rank of $c_\ast([M])$ is the von Neumann rank of $[M]$, which is the sum of the $\ell^2$-Betti numbers of $M$ according to Remark~\ref{rem: equivariant duality}. This implies the second statement of Theorem~\ref{thm: main l2 result}. The statement about the Euler characteristic is an immediate consequence since the alternating sum of $\ell^2$-Betti numbers equals the Euler characteristic. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: main foliated simplicial volume}] Let $\alpha$ be a free measurable pmp action of $\Gamma$ on a standard probability space $(Y,\mu)$. By~\cite{elek}*{Theorem~2} there is a free continuous action of $\Gamma$ on the Cantor set~$X$ and an equivariant Borel embedding $X\hookrightarrow Y$ such that $\mu(X)=1$. This means that we can realize every free measurable pmp action by a free continuous action on the Cantor set. Therefore the $\alpha$-parametrised simplicial volume $|M|^\alpha$ coincides with the $X$-parametrised simplicial volume (with regard to $\mu$). According to Theorem~\ref{thm: main auxiliary statement} the $X$-parametrised integral simplicial norm of $c_\ast([M])\in H_d(B\Gamma)$ is bounded from above by $\operatorname{const}(d, V_1)\cdot\operatorname{vol}(M)$. Since $c$ is a homotopy equivalence, the $X$-parametrised integral simplicial norm of $c_\ast([M])$ is the $X$-parametrised integral simplicial volume of $M$.
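In summary, writing $\vert M\vert^X$ for the $X$-parametrised simplicial volume of $M$ (with regard to $\mu$), the preceding observations combine to the chain \[ \vert M\vert^\alpha=\vert M\vert^X=\bigl\Vert j_\ast^{B\Gamma}\circ c_\ast([M])\bigr\Vert_{\mathbb Z}^X\le \operatorname{const}(d, V_1)\cdot \operatorname{vol}(M). \]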
\end{proof} \begin{bibdiv} \begin{biblist} \bib{alpert-area}{article}{ author={Alpert, Hannah}, author={Funano, Kei}, title={Macroscopic scalar curvature and areas of cycles}, journal={Geom. Funct. Anal.}, volume={27}, date={2017}, number={4}, pages={727--743}, } \bib{balacheff+karam}{article}{ author={Balacheff, F.}, author={Karam, S.}, title={Macroscopic Schoen conjecture for manifolds with nonzero simplicial volume}, journal={Trans. Amer. Math. Soc.}, volume={372}, date={2019}, number={10}, pages={7071--7086}, } \bib{braunphd}{thesis}{ author={Braun, Sabine}, title={Simplicial Volume and Macroscopic Scalar Curvature}, date={2018}, organization={Karlsruher Institut f\"ur Technologie (KIT)}, type={Ph.D. thesis}, eprint={https://publikationen.bibliothek.kit.edu/1000086838}, } \bib{bridson+haefliger}{book}{ author={Bridson, Martin R.}, author={Haefliger, Andr\'{e}}, title={Metric spaces of non-positive curvature}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={319}, publisher={Springer-Verlag, Berlin}, date={1999}, pages={xxii+643}, isbn={3-540-64324-9}, } \bib{brown}{book}{ author={Brown, Kenneth S.}, title={Cohomology of groups}, series={Graduate Texts in Mathematics}, volume={87}, note={Corrected reprint of the 1982 original}, publisher={Springer-Verlag, New York}, date={1994}, pages={x+306}, } \bib{brunnbauer+hanke}{article}{ author={Brunnbauer, Michael}, author={Hanke, Bernhard}, title={Large and small group homology}, journal={J. Topol.}, volume={3}, date={2010}, number={2}, pages={463--486}, } \bib{campagnolo+sauer}{article}{ author={Campagnolo, Caterina}, author={Sauer, Roman}, title={Counting maximally broken Morse trajectories on aspherical manifolds}, journal={Geom. Dedicata}, volume={202}, date={2019}, pages={387--399}, } \bib{collapse}{article}{ author={Cheeger, Jeff}, author={Gromov, Mikhael}, title={Collapsing Riemannian manifolds while keeping their curvature bounded. I}, journal={J. Differential Geom.}, volume={23}, date={1986}, number={3}, } \bib{dold}{book}{ author={Dold, Albrecht}, title={Lectures on algebraic topology}, series={Classics in Mathematics}, note={Reprint of the 1972 edition}, publisher={Springer-Verlag, Berlin}, date={1995}, pages={xii+377}, } \bib{elek}{article}{ author={Elek, G\'{a}bor}, title={Free minimal actions of countable groups with invariant probability measures}, journal={Ergodic Theory Dynam. Systems}, volume={41}, date={2021}, number={5}, pages={1369--1389}, } \bib{fauser}{article}{ author={Fauser, Daniel}, title={Integral foliated simplicial volume and $S^1$-actions}, journal={Forum Math.}, volume={33}, date={2021}, number={3}, pages={773--788}, } \bib{fauser+friedl+loeh}{article}{ author={Fauser, Daniel}, author={Friedl, Stefan}, author={L\"{o}h, Clara}, title={Integral approximation of simplicial volume of graph manifolds}, journal={Bull. Lond. Math. Soc.}, volume={51}, date={2019}, number={4}, pages={715--731}, } \bib{federer}{book}{ author={Federer, Herbert}, title={Geometric measure theory}, series={Die Grundlehren der mathematischen Wissenschaften, Band 153}, publisher={Springer-Verlag New York Inc., New York}, date={1969}, pages={xiv+676}, } \bib{foliated}{article}{ author={Frigerio, Roberto}, author={L\"{o}h, Clara}, author={Pagliantini, Cristina}, author={Sauer, Roman}, title={Integral foliated simplicial volume of aspherical manifolds}, journal={Israel J. 
Math.}, volume={216}, date={2016}, number={2}, pages={707--751}, } \bib{gaboriau}{article}{ author={Gaboriau, Damien}, title={Invariants $l^2$ de relations d'\'{e}quivalence et de groupes}, language={French}, journal={Publ. Math. Inst. Hautes \'{E}tudes Sci.}, number={95}, date={2002}, pages={93--150}, } \bib{glasner}{article}{ author={Glasner, Eli}, author={Uspenskij, Vladimir V.}, title={Effective minimal subflows of Bernoulli flows}, journal={Proc. Amer. Math. Soc.}, volume={137}, date={2009}, number={9}, pages={3147--3154}, } \bib{gromov-foliated}{article}{ author={Gromov, M.}, title={Foliated Plateau problem. I. Minimal varieties}, journal={Geom. Funct. Anal.}, volume={1}, date={1991}, number={1}, pages={14--79}, } \bib{gromovvbc}{article}{ author={Gromov, Michael}, title={Volume and bounded cohomology}, journal={Inst. Hautes \'{E}tudes Sci. Publ. Math.}, number={56}, date={1982}, pages={5--99 (1983)}, issn={0073-8301}, } \bib{gromov-large}{article}{ author={Gromov, M.}, title={Large Riemannian manifolds}, conference={ title={Curvature and topology of Riemannian manifolds}, address={Katata}, date={1985}, }, book={ series={Lecture Notes in Math.}, volume={1201}, publisher={Springer, Berlin}, }, date={1986}, pages={108--121}, doi={10.1007/BFb0075649}, } \bib{gromov-asymptotic}{article}{ author={Gromov, M.}, title={Asymptotic invariants of infinite groups}, conference={ title={Geometric group theory, Vol. 2}, address={Sussex}, date={1991}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={182}, publisher={Cambridge Univ. Press, Cambridge}, }, date={1993}, pages={1--295}, } \bib{gromov-singularities}{article}{ author={Gromov, Mikhail}, title={Singularities, expanders and topology of maps. I. Homology versus volume in the spaces of cycles}, journal={Geom. Funct. Anal.}, volume={19}, date={2009}, number={3}, pages={743--841}, } \bib{guth-metaphors}{article}{ author={Guth, Larry}, title={Metaphors in systolic geometry}, conference={ title={Proceedings of the International Congress of Mathematicians. Volume II}, }, book={ publisher={Hindustan Book Agency, New Delhi}, }, date={2010}, pages={745--768}, } \bib{guthvolume}{article}{ author={Guth, Larry}, title={Volumes of balls in large Riemannian manifolds}, journal={Ann. of Math. (2)}, volume={173}, date={2011}, number={1}, pages={51--76}, issn={0003-486X}, } \bib{guthuryson}{article}{ author={Guth, Larry}, title={Volumes of balls in Riemannian manifolds and Uryson width}, journal={J. Topol. Anal.}, volume={9}, date={2017}, number={2}, pages={195--219}, issn={1793-5253}, } \bib{hjorth+molberg}{article}{ author={Hjorth, Greg}, author={Molberg, Mats}, title={Free continuous actions on zero-dimensional spaces}, journal={Topology Appl.}, volume={153}, date={2006}, number={7}, pages={1116--1131}, issn={0166-8641}, } \bib{fillingmetric}{article}{ author={Liokumovich, Yevgeny}, author={Lishak, Boris}, author={Nabutovsky, Alexander}, author={Rotman, Regina}, title={Filling metric spaces}, eprint={arXiv:1704.08538}, } \bib{loeh-measure}{article}{ author={L\"{o}h, Clara}, title={Measure homology and singular homology are isometrically isomorphic}, journal={Math. Z.}, volume={253}, date={2006}, number={1}, pages={197--218}, issn={0025-5874}, } \bib{lueck-dimension}{article}{ author={L\"{u}ck, Wolfgang}, title={Dimension theory of arbitrary modules over finite von Neumann algebras and $L^2$-Betti numbers. I. Foundations}, journal={J. Reine Angew. 
Math.}, volume={495}, date={1998}, pages={135--162}, } \bib{lueck-l2book}{book}{ author={L\"{u}ck, Wolfgang}, title={$L^2$-invariants: theory and applications to geometry and $K$-theory}, series={Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]}, volume={44}, publisher={Springer-Verlag, Berlin}, date={2002}, pages={xvi+595}, } \bib{explicitbound}{article}{ author={Nabutovsky, Alexander}, title={Linear bounds for constants in Gromov's systolic inequality and related results}, eprint={arXiv:1909.12225}, } \bib{papasoglu}{article}{ author={Papasoglu, Panos}, title={Uryson width and volume}, journal={Geom. Funct. Anal.}, volume={30}, date={2020}, number={2}, pages={574--587}, } \bib{sauer-groupoid}{article}{ author={Sauer, Roman}, title={$L^2$-Betti numbers of discrete measured groupoids}, journal={Internat. J. Algebra Comput.}, volume={15}, date={2005}, number={5-6}, pages={1169--1188}, } \bib{sauerminvol}{article}{ author={Sauer, Roman}, title={Amenable covers, volume and $L^2$-Betti numbers of aspherical manifolds}, journal={J. Reine Angew. Math.}, volume={636}, date={2009}, pages={47--92}, issn={0075-4102}, } \bib{sauer-homologygrowth}{article}{ author={Sauer, Roman}, title={Volume and homology growth of aspherical manifolds}, journal={Geom. Topol.}, volume={20}, date={2016}, number={2}, pages={1035--1059}, } \bib{schmidt}{thesis}{ author={Schmidt, Marco}, title={$L^2$-Betti numbers of $\mathcal{R}$-spaces and the integral foliated simplicial volume}, date={2005}, organization={WWU Münster}, type={Ph.D. thesis}, eprint={https://nbn-resolving.org/urn:nbn:de:hbz:6-05699458563}, } \bib{schoen}{article}{ author={Schoen, Richard M.}, title={Variational theory for the total scalar curvature functional for Riemannian metrics and related topics}, conference={ title={Topics in calculus of variations}, address={Montecatini Terme}, date={1987}, }, book={ series={Lecture Notes in Math.}, volume={1365}, publisher={Springer, Berlin}, }, date={1989}, pages={120--154}, } \bib{steprans}{article}{ author={Stepr\={a}ns, Juris}, title={A characterization of free abelian groups}, journal={Proc. Amer. Math. Soc.}, volume={93}, date={1985}, number={2}, pages={347--349}, } \bib{tomdieck-topology}{book}{ author={tom Dieck, Tammo}, title={Algebraic topology}, series={EMS Textbooks in Mathematics}, publisher={European Mathematical Society (EMS), Z\"{u}rich}, date={2008}, pages={xii+567}, } \bib{tomdieck}{book}{ author={tom Dieck, Tammo}, title={Transformation groups}, series={De Gruyter Studies in Mathematics}, volume={8}, publisher={Walter de Gruyter \& Co., Berlin}, date={1987}, pages={x+312}, isbn={3-11-009745-1}, } \end{biblist} \end{bibdiv} \end{document}
{ "timestamp": "2021-11-09T02:10:42", "yymm": "2012", "arxiv_id": "2012.08999", "language": "en", "url": "https://arxiv.org/abs/2012.08999" }
\section[Introduction]{Introduction} \label{sec:intro} Outflows in the form of winds are commonly associated with various astrophysical sources like AGNs, X-ray binaries, YSOs, etc. In radio-quiet AGNs, blue-shifted iron lines are frequently reported. This blue shift is believed to be generated by resonance absorption of Fe-xxv or Fe-xxvi in winds propagating away from the source. The speeds of these winds are found to be relativistic and may reach up to $0.4c$ \citep{CH02, CH03, Mrk06, DM05, C09, RJN09}. A similar blue shift is observed in X-ray binaries as well \citep{ML07, TB16, GP12}. These winds are observed in nearly half of these sources \citep{TF2010}, indicating that the feature is quite general. Further, their short variability timescales ($\sim100$ks) suggest that the winds might be launched from a region within 100 Schwarzschild radii ($r_{\rm \small S}$) of the central source. In X-ray binaries, the winds are observed only in the soft state, where the observed spectra are mostly dominated by thermal emission. In the soft state, the accretion discs are well described by the standard thin disc model, where the discs are optically thick but geometrically thin and emit thermally distributed radiation \citep{SS73}. Although these discs are theoretically highly stable against most perturbations, they are susceptible to magneto-rotational instabilities \citep{BH91, SUZ2009, Yuan12}, which, apart from providing an origin of shear viscosity, may also contribute to outflows. Independent of such instabilities, a magnetic field can remove energy and angular momentum from the accretion disc, such that a centrifugally driven outflow along the magnetic field is possible \citep{BP82}. Not only can magnetic fields drive winds; winds can also be generated by thermal and radiation pressure from the accretion discs \citep{BMS83}. It may be noted that outflows within the sub-Eddington limit from optically thick discs were also studied by performing MHD simulations \citep{OH2009,OH2011}. \cite{LD2019} also studied MHD outflows from accretion discs in the general relativistic limit. As the winds are found to be travelling at up to mildly relativistic speeds, they need driving agents. The radiation driving of outflows (jets or winds) has been studied by various authors through semi-analytic works \citep{F96, TF96,CC00a,CC00b,cc02, IC05,kcm14, VKMC15, VC18, VC19}, and radiation was shown to be an effective agent for accelerating jets and winds up to relativistic speeds. Similarly, simulations were also carried out to study the effects of radiation on outflows \citep{PR97, PSD98, Y18, PR00, NO2017, PR03}. \cite{PR03b} and \cite{NO2017} studied line-driven winds. However, the line force may not be that effective when the temperature of the wind exceeds the ionization temperature significantly ($>10^5$ K). Hence, for winds driven by the radiation from the inner region of the accretion disc, especially in microquasars where the temperatures are higher, one may need other mechanisms. In that case, the radiation drives the winds directly by depositing momentum and/or energy. \citet{Y18} studied winds driven from a hot corona by the radiation force of the underlying Keplerian disc (hereafter KD). It may be noted that \citet{Y18} did not consider the role of radiation drag, although they maintained the optically thin condition throughout their simulation. Moreover, \citet{Y18} considered the outflow from a hot corona rather than from the disc itself. In most of the previous attempts mentioned above, the radiation drag was not part of the analysis.
We would like to investigate whether the continuum emission of a thin accretion disc can radiatively drive matter to form a wind, in the presence of radiation drag. Apart from this, we investigate how much of the angular momentum of the accretion disc is transmitted to the winds above it. We would also like to study the effect of radiation drag on the wind solution, and will discuss the role of angular momentum removal in the winds due to radiation. As this is an exploratory study, we intend to examine how the accretion rate affects these aspects of wind generation. In section \ref{sec:assump} we discuss the underlying assumptions; we then present the set of governing equations and the radiation field in section \ref{sec_equations}. Afterwards we describe the simulation set-up, the initial and boundary conditions for the simulations, the method of solving the equations, and the numerical technique in section \ref{sec_numeri_approach}. We then proceed to the results (section \ref{sec_results}) and conclude the paper (section \ref{sec_conclusions}) with the significance of the analysis. \section{Assumptions} \label{sec:assump} We perform hydrodynamic simulations in the cylindrical coordinate system $r,\phi,z$. Axisymmetry in the system is assumed. In our simulation, the source of the wind is a KD around a $10 M_\odot$ black hole, which occupies the equatorial plane, and winds are launched from the KD in the $r-z$ plane. The radiation field above the KD interacts with the out-flowing wind through Thomson scattering, and drives the wind by depositing momentum onto the matter. We restrict ourselves to the non-relativistic regime and hence, while calculating the radiation field above the disc, relativistic transformations are ignored. However, to take care of strong gravity near the central source, we have assumed the Paczy\'nski \& Wiita potential \citep{PW}, which mimics the general relativistic effects. All ten independent components of the moments of the radiation field are calculated, hence the effect of radiation drag is also incorporated. Throughout the paper, distances are scaled and quoted in Schwarzschild units.
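To recall schematically why this potential mimics the Schwarzschild geometry (a standard property of the Paczy\'nski \& Wiita potential, sketched here only for orientation): for $\Phi_{\rm PW}(R)=-GM/(R-r_{\rm s})$, a circular orbit of radius $r$ on the equatorial plane satisfies \[ \frac{l^2}{r^3}=\frac{GM}{(r-r_{\rm s})^2}, \qquad\text{i.e.}\qquad l^2(r)=\frac{GM\,r^3}{(r-r_{\rm s})^2}, \] where $l$ is the specific angular momentum, and \[ \frac{dl^2}{dr}=GM\,\frac{r^2\,(r-3r_{\rm s})}{(r-r_{\rm s})^3}=0 \quad\Longrightarrow\quad r=3r_{\rm s}. \] The marginally stable circular orbit therefore sits at $3r_{\rm s}$, exactly as in the Schwarzschild metric, which is also why the inner edge of the disc is placed at $3$ (in units of $r_{\rm s}$) below.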
\section{Governing equations} \label{sec_equations} The equations of motion for a fluid in the radiation-hydrodynamic regime \citep[correct up to first order in $v_i$, see ][for details]{mm84,kfm98} with density $\rho$, pressure $p$, propagating with velocity components $v_i\equiv (v_r, v_\phi{~\rm and~} v_z)$, are given by \begin{equation} \frac{\partial{\rho}}{\partial t} + \frac{1}{r}\frac{\partial (r\rho v_r)}{\partial r} + \frac{\partial{(\rho v_z)}}{\partial z} = 0 \label{eq.cont} \end{equation} \begin{eqnarray} \label{eq.momentum_r} \frac{\partial{(\rho v_r)}}{\partial t}&+& \frac{1}{r}\frac{\partial (r\rho v_r^2)}{\partial r} + \frac{\partial p}{\partial r} + \frac{\partial{(\rho v_r v_z)}}{\partial z} \\ \nonumber &=& \rho v_{\phi}^2/r + \rho f_{{\rm g},r} + \frac{\rho k}{c} {\cal F}_r \end{eqnarray} \begin{eqnarray} \label{eq.momentum_phi} \frac{\partial{(\rho v_{\phi})}}{\partial t} &+& \frac{1}{r}\frac{\partial (r\rho v_r v_{\phi})}{\partial r} + \frac{\partial{(\rho v_z v_{\phi})}}{\partial z} \\ \nonumber &=& - \frac{\rho v_r v_{\phi}}{r} + \frac{\rho k}{c}{\cal F}_\phi \end{eqnarray} \begin{eqnarray} \label{eq.momentum_z} \frac{\partial{(\rho v_z)}}{\partial t} &+& \frac{1}{r}\frac{\partial (r\rho v_r v_z)}{\partial r} + \frac{\partial{(\rho v_z^2+p)}}{\partial z} \\ \nonumber &=& \rho f_{{\rm g},z} + \frac{\rho k}{c}{\cal F}_z \end{eqnarray} \begin{equation} \frac{\partial \cal E}{\partial t} + \frac{1}{r}\frac{\partial [r({\cal E}+p)v_r]}{\partial r} + \frac{\partial{[({\cal E}+p)v_z}]}{\partial z} = -\rho\left(\frac{k}{c}v_i{\cal F}_i+v_if_{{\rm g},i}\right) \label{eq.energy} \end{equation} In the above equations, ${\cal E}=\rho v^2/2+e$ is the energy density of the fluid and $e=p/(\Gamma -1)$ is the thermal energy density, where $\Gamma=5/3$ is the adiabatic index of the fluid. The local sound speed is defined as $c_s=(\Gamma p/\rho)^{1/2}$. The fluid is driven by the radiation field of the underlying thin accretion disc. The components of the radiative terms are: \begin{equation} {\cal F}_i= F_i - E v_i - v_r P_{ir} - v_{\phi} P_{i \phi} - v_z P_{iz}, \label{radcomp.eq} \end{equation} where $i\equiv (r,\phi,z)$. The moments of the radiation field are $E$, the $F_i$s and the $P_{ij}$s (here, $i,j \rightarrow r,\phi, z$), which are the radiation energy density, the components of the radiative flux and the components of the radiation pressure tensor, respectively. The scattering opacity is $k=\sigma_{\rm \small T}/m_{\rm p}$, with $\sigma_{\rm \small T}$ being the Thomson scattering cross-section and $m_{\rm p}$ the proton mass. Moreover, $f_{{\rm g},r}$ and $f_{{\rm g},z}$ are the $r$ and $z$ components of the gravitational force and are given by \begin{equation} f_{{\rm g},z} = \frac{GM}{r_{\rm s}^2} \frac{z}{R(R/r_{\rm s} - 1)^2}, \end{equation} \begin{equation} f_{{\rm g},r} = \frac{GM}{r_{\rm s}^2} \frac{r}{R(R/r_{\rm s} - 1)^2}, \end{equation} where $G$ and $M$ are the universal constant of gravity and the mass of the black hole, respectively. The Schwarzschild radius of the black hole is defined as $r_{\rm s}=2GM/c^2$. All the lengths mentioned in the paper are in terms of $r_{\rm s}$ and we will refer to them in dimensionless form afterwards.
Further, $R$ is the radial distance from the centre of the black hole, defined as \begin{equation} R = \sqrt{r^2 + z^2}. \end{equation} On the R.H.S. of the momentum equations (\ref{eq.momentum_r}, \ref{eq.momentum_phi} and \ref{eq.momentum_z}), the components of the radiative flux accelerate the flow, while the other velocity-dependent terms involving $E$ and $P_{ij}$ carry a negative sign and therefore decelerate it. These are called radiation-drag terms, and they show that radiation can also reduce the momentum of the flow. As the radiation drag depends upon the various components of the fluid velocity, it becomes effective as the flow speed increases. The radiative acceleration and deceleration depend upon the relative strengths of the various radiative moments and the components of the flow speed; therefore, the effect of radiation on the outflow can behave in a very nonlinear manner. We will show in section \ref{sec_results} that radiation acceleration drives the winds to infinity. However, the radiative drag terms reduce the outflow speed, to the extent that they can even disrupt the ejected winds. Below we discuss the accretion disc and the radiative moments computed from its radiation field. \subsection{Accretion disc properties} An accretion disc around a black hole supplies matter to the black hole on the one hand, and on the other hand also supplies the matter flowing out as outflow. In the present case, the outflow is driven by the disc radiation. Since the KD is confined to the equatorial plane, its coordinates are represented by $R_{\rm d}\equiv (r_{\rm d},\phi, 0)$. From the mass conservation equation, we have the expression for the accretion rate, \begin{equation} \label{eqn:masscon} \dot{M} = 2\pi r_{\rm d} \rho v_{r{\rm K}} (2H), \end{equation} where $H$ is the height of the disc above the equatorial plane and $v_{r{\rm K}}$ is the radial inflow speed due to accretion. The KD rotation velocity is \citep{PW,kfm98} \begin{equation} v_{\rm K} = \sqrt{\frac{GMr_{\rm d}}{(r_{\rm d}-r_{\rm s})^2}}. \label{vfi.eq} \end{equation} In a KD, $v_{\rm K}\gg v_{r{\rm K}}$, and the radial velocity distribution is given by \citep{kfm98} \begin{equation} v_{r{\rm K}} = 3.1 \times 10^6 \alpha^{\frac{4}{5}} \dot{m}^{\frac{2}{5}} m^{-\frac{1}{5}}x^{-\frac{2}{5}} \left(1-\sqrt{\frac{3}{x}}\right)^{-\frac{3}{5}}, \label{radvel.eq} \end{equation} where $x=r_{\rm d}/r_{\rm s}$ and $\alpha$ is the viscosity parameter. Now, the distribution of the equatorial density along $r_{\rm d}$ is obtained to be \citep{SS73} \begin{equation} \rho = 4.423 \times 10^4 m^{-1} \dot{m} x^{-1} v_{r{\rm K}}^{-1}. \label{dens.eq} \end{equation} The distribution of density along $z$ is $\tilde{\rho}=\rho e^{-{(z/r_{\rm s})}^2}$ \citep{SS73}. It may be noted that at high accretion rates the inner part of the disc may become radiation-pressure dominated, and in such cases the disc thickness is controlled by the vertical radiative pressure rather than by the gas pressure; the density profile would then change and instability might set in. However, since radiation pressure drives winds away from the inner region, we assume that the density profile of the disc does not depart significantly from equation~(\ref{dens.eq}). For a KD, viscosity is required for angular momentum transport in a manner such that the matter occupies subsequent Keplerian orbits. Viscosity heats up the matter, and the dissipated heat is locally radiated as blackbody emission at each radius of the disc.
Assuming the surface temperature ($T_{\rm disc}$) as the temperature of each annulus, its radial distribution is given by \begin{equation} \sigma {T_{\rm disc}}^4 = \frac{3GM\dot{M}}{8\pi r_{\rm d}^3}\left(1-\sqrt{\frac{r_{\rm in}}{r_{\rm d}}}\right), \label{temp1.eq} \end{equation} where $\sigma$ is the Stefan--Boltzmann constant, $r_{\rm in} =3$ (in units of $r_{\rm \small S}$) is the inner radius of the disc, and the disc extends up to an outer boundary $r_{\rm o}=512$. Expressing the accretion rate in units of the Eddington accretion rate and the mass of the black hole in units of $M_\odot$, the previous equation becomes \begin{equation} T_{\rm disc} = 4.35 \times 10^7 \dot{m}^\frac{1}{4} m^{-\frac{1}{4}} x^{-\frac{3}{4}}\left(1-\sqrt{\frac{3}{x}}\right)^{1/4}, \label{temp2.eq} \end{equation} where ${\dot m}={\dot M}/{\dot M}_{\rm Edd}$ and $M=m~M_\odot$; moreover, the Eddington accretion rate is ${\dot M}_{\rm Edd}= 1.44\times 10^{17}m$ (gm s$^{-1}$) and $M_\odot=2\times10^{33}$gm. \subsection{Radiation field above a thin accretion disc} \begin{figure*} {\caption{Contours of radiative moments computed at each point in the $r-z$ plane around the black hole, which resides at the origin with the disc on the equatorial plane. Radiation energy density (a); radiative flux terms, $F_r$, $F_z$ and $F_\phi$ (b)-(d), respectively; and the components of the radiation pressure tensor, $P_{rr}$, $P_{r\phi}$, $P_{rz}$, $P_{\phi\phi}$, $P_{\phi z}$, $P_{zz}$ from (e) to (j), respectively. Only the inner $51.2r_{\rm s} \times 51.2r_{\rm s}$ region is shown.} \label{lab_rad_field}} {\includegraphics[height=21cm,width=10cm]{fig1.eps}} \end{figure*} In the following we present the expressions of the various radiative moments. For the convenience of representation, we define the radiative moments in the following forms, $$ \frac{kE}{c} = {E}_{0}{\varepsilon};~~~~\frac{kF_i}{c} = F_{0} f_i~~~{\rm and}~~~\frac{kP_{ij}}{c} = P_{0} p_{ij} $$ with $$ {E}_{0}=F_{0}=P_{0}= \frac{3GM\dot{M} \sigma_{\rm \small T}}{8 \pi^2 r_{\rm s}^3 m_{\rm p} c}. $$ The dimensionless radiation energy density ($\varepsilon$), the three components of the radiative flux ($f_i$), as well as the six components of the pressure tensor ($p_{ij}$) are given following \citet{IC05}: \begin{eqnarray} {\varepsilon} = {\int}^{r_{\rm o}}_{r_{\rm in}} {\int}^{2 \pi}_0\frac{z(r^{-2}_{\rm d}-{\sqrt {3}}r^{-5/2}_{\rm d})d{\phi}^{\prime} }{(r^2+z^2+r^2_{\rm d}-2rr_{\rm d}{\rm cos}{\phi}^{\prime})^{3/2}(1-v_il_i)^4 }dr_{\rm d} \label{eq:radeng} \end{eqnarray} \begin{eqnarray} f_i = {\int}^{r_{\rm o}}_{r_{\rm in}} {\int}^{2 \pi}_0 \frac{z(r^{-2}_{\rm d}-{\sqrt {3}}r^{-5/2}_{\rm d}){\hskip 0.1cm} l_id{\phi}^{\prime}} {(r^2+z^2+r^2_{\rm d}-2rr_{\rm d}{\rm cos}{\phi}^{\prime})^{3/2}(1-v_il_i)^4 } dr_{\rm d} \label{eq:radflux} \end{eqnarray} \begin{eqnarray} p_{ij} = {\int}^{r_{\rm o}}_{r_{\rm in}} {\int}^{2 \pi}_0 \frac{z(r^{-2}_{\rm d}-{\sqrt {3}}r^{-5/2}_{\rm d}){\hskip 0.1cm} l_i{\hskip 0.1cm}l_jd{\phi}^{\prime}} {(r^2+z^2+r^2_{\rm d}-2rr_{\rm d}{\rm cos}{\phi}^{\prime})^{3/2}(1-v_il_i)^4 }dr_{\rm d}, \label{eq:radpres} \end{eqnarray} where the $l_i$s are the direction cosines from the disc to the field point. Since the accretion disc is not a static radiator but the disc matter is in motion, the radiation field is Doppler beamed by this disc motion.
It can be shown that the frequency-integrated radiation intensity measured by the comoving observer ($I_0$) has the following transformation relation with that measured by an inertial observer ($I$) \citep{kfm98} \begin{equation} \frac{I_0}{I}=\gamma^4(1-v_il_i)^4 \approx (1-v_il_i)^4. \label{eq:dopfac} \end{equation} The Lorentz factor $\gamma \approx 1$ for a KD. This factor appears in the expression of the moment equations and affects the radiation field. In particular, the disc motion along the $\phi$ direction generates a non-zero $f_\phi$ and also various components of $p_{i \phi}$. The coordinates of the thin Keplerian disc are ($r_{\rm d},\phi^\prime$) and the integration limits of the accretion disc are $r_{\rm in}$= $3$ and $r_{\rm o}= 512$. We plot the radiative moments, {\em i.e. } $E$ (Figure \ref{lab_rad_field}a), the radiative fluxes $F_r$ (Figure \ref{lab_rad_field}b), $F_z$ (Figure \ref{lab_rad_field}c), $F_\phi$ (Figure \ref{lab_rad_field}d) and the 6 independent components of the radiative pressure $P_{rr}$ (Figure \ref{lab_rad_field}e), $P_{r\phi}$ (Figure \ref{lab_rad_field}f), $P_{rz}$ (Figure \ref{lab_rad_field}g), $P_{\phi \phi}$ (Figure \ref{lab_rad_field}h), $P_{\phi z}$ (Figure \ref{lab_rad_field}i) and $P_{zz}$ (Figure \ref{lab_rad_field}j). The moments are plotted in the $r-z$ plane. Each panel zooms into the inner $51.2~\times ~51.2$ region in order to resolve the contours of the radiative moments. The radiative moments are distinctly anisotropic, especially close to the black hole. Since the KD extends only down to $3$, and the KD flux maximizes at $r\sim 4$, the radiative moments maximize at around $4-5$. $E$, or the radiative energy density, is by far the most dominant of all the moments. Close to the axis $F_r\approx F_\phi \approx 0$, while $F_z$ is very important. In general, $|F_z| \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} |F_r|$ and dominates $F_\phi$. In addition, $P_{\phi \phi}$ is quite strong, hence the azimuthal velocity gained by the wind due to $F_\phi$ will also be reduced by the radiation drag along the $\phi$ direction. Moreover, none of the components of the radiative pressure is greater than all the radiative flux components. This limits the effect of radiative drag and augurs well for the wind: it can be driven away from the KD, but will not spread over a very large angle due to the angular momentum gained from the radiation field. We have plotted the radiative moments in a region very close to the horizon ($\leq 51.2$), and in that region the radiation field is that of an extended source (the KD), so the moments show a complicated spatial dependence. At large distances, the radiation field falls off as $\sim R^{-2}$, although this regime lies outside the computational domain we have chosen. \section{Numerical approach} \label{sec_numeri_approach} \subsection{The numerical scheme and simulation set up} The hydrodynamic equations (\ref{eq.cont}-\ref{eq.energy}) are solved in this paper using the Total Variation Diminishing (TVD) scheme, introduced and developed by \cite{AH83}. The scheme (or modified versions of it) is applicable to hydrodynamic problems and has been used extensively in relevant astrophysical applications \citep{DR93,rbol95,rjf95,lrc11,IC12,lckhr16}. The TVD scheme is an Eulerian, second-order accurate, nonlinear, finite difference scheme, which accurately captures shocks.
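To convey the basic mechanism, the following is a minimal sketch (our own illustration, with hypothetical function names, and much simpler than the multi-dimensional Roe-type solver actually employed below) of a TVD update for the model problem of scalar advection, $u_t+au_x=0$: a minmod-limited reconstruction keeps the scheme second-order accurate in smooth regions while preventing the growth of total variation, i.e.\ spurious oscillations, near sharp fronts.
\begin{verbatim}
import numpy as np

def minmod(a, b):
    # Minmod limiter: smallest slope when signs agree, zero at extrema.
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def tvd_advect(u, a, dx, dt, nsteps):
    # Advance u_t + a u_x = 0 (a > 0) on a periodic grid.
    # Second-order in smooth regions, non-oscillatory at discontinuities.
    c = a * dt / dx                       # CFL number, needs 0 < c <= 1
    for _ in range(nsteps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        u_face = u + 0.5 * (1.0 - c) * s  # limited upwind interface state
        flux = a * u_face                 # flux through cell interfaces
        u = u - (dt / dx) * (flux - np.roll(flux, 1))
    return u

# Usage: advect a top-hat profile once around a periodic domain.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
u1 = tvd_advect(u0.copy(), a=1.0, dx=x[1] - x[0],
                dt=0.4 * (x[1] - x[0]), nsteps=500)
\end{verbatim}
With the limited slope set to zero everywhere, the update reduces to the diffusive first-order upwind scheme, whereas an unlimited one-sided slope reproduces the oscillatory Lax--Wendroff scheme; the limiter blends between the two.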
The temporal and spatial evolution of the conserved quantities $\rho$, $\rho v_i$, and ${\cal E}$ is computed using an approximate Roe-type Riemann solver to solve the differential equations, followed by the application of a non-oscillatory, first-order accurate scheme to the modified flux functions to achieve second-order accuracy \citep[see,][]{ROE81, DR93, AH83}. The equations of motion (\ref{eq.cont}-\ref{eq.energy}) are similar to those solved in \citet{IC12}. In \citet{IC12} the galactic outflow was powered by the radiation from the galactic disc, while being decelerated by the gravity of the galactic disc, the halo and the bulge matter. In contrast, in this paper the accretion disc outflow is powered by the radiative fluxes and the centrifugal force from the KD, and is decelerated by the radiative drag terms as well as the gravity of the central black hole. To solve the equations of motion (\ref{eq.cont}-\ref{eq.energy}), we considered the TVD scheme \citep[see,][for details]{IC12} at a resolution of $512 \times 512$. A schematic representation of the computational arrangement is presented in Figure \ref{lab_grid}, which marks the ghost cells where the boundary conditions are implemented and also the computational domain. We employed a continuous boundary condition at the $z=0$ boundary, and an outflow boundary condition at the outer $r$ and $z$ boundaries (i.e., no inflow, but continuous if ${\bf v}>0$). At $r=0$, the axis of symmetry, a reflection boundary condition has been employed. The types of boundary conditions employed are also mentioned in Figure (\ref{lab_grid}). We simulate a region of $512$ from the black hole in each of the $r$ and $z$ directions; therefore the size of each cell is $1$. The gravity of the black hole is described by the Paczy\'nski \& Wiita potential \citep{PW}. In order to avoid the coordinate singularity on the horizon, the black hole is covered by a sink region of radius $3$ around the origin, which does not affect the physics since the inner edge of the KD is at $3$. The KD is on the equatorial plane, ranging from $3$ to $512$; the density, pressure and velocity distributions given by equations (\ref{eqn:masscon}-\ref{temp2.eq}) are maintained in the region described by $r\rightarrow 3$---$512$ and $z\rightarrow 0$---$3$. We supply the dynamical variables of the KD at every time step within a height of $3$ above the equatorial plane. Therefore, the KD acts as a boundary condition and is not dynamically evolved, so one may call it a quasi-KD. The moments of the radiation field are computed in the region outside the KD. We compute the outflowing winds due to the action of the radiation field of the KD. The speed of light in vacuum, $c$, is the unit of velocity in the code, and the unit of length is $r_{\rm o}=512$. Since the KD flux used in the code is for $m=10$, the unit of time is $5.12\times 10^{-2}$ s. The reference density is $\rho_{\rm ref}=10^{-5}$ gm cm$^{-3}$. The ambient medium filling the computational domain is kept initially ($t=0$) tenuous with respect to the accretion disc, with constant parameters: the ambient density is as low as $10^{-8}$ and the pressure $10^{-9}$ in code units, so the outflow from the disc is not suppressed artificially by the initial distribution of matter right above the disc. \begin{figure} \centering \includegraphics[width=80mm]{fig2.eps} \caption{Uniform grids in the computational domain, with two ghost cells at each boundary. The respective boundary conditions are mentioned accordingly.
Grids drawn are not to scale.} \label{lab_grid} \end{figure} \begin{figure*} \begin{minipage}[c]{\textwidth} \begin{center} \includegraphics[width=180mm]{fig3.eps} \caption{Contours of density, $\log_{10}(\rho)$, overplotted with the respective net velocity vector arrows. These profiles are for $\dot{m} = 3$. Panels correspond to snapshots at run times $t=2, 6, 62, 72, 82$ and $92$ from (a) to (f).} \label{lab_den_md3} \end{center} \end{minipage} \end{figure*} \section{Results} \label{sec_results} \subsection{Wind propagation above the disc: density and velocity evolution} In Figure \ref{lab_den_md3}(a)-(f), we overplot velocity vectors ($v_{\rm p} \equiv \sqrt{v_r^2+v_z^2}$) on the density contours of radiatively driven winds from a KD, for ${\dot m}=3$ and at different time steps $t=2$ (a), $t=6$ (b), $t=62$ (c), $t=72$ (d), $t=82$ (e), and $t=92$ (f). Arrows represent velocity vectors in the $r-z$ plane, where the magnitude of the velocity ($v_{\rm p}$) is proportional to the length of the arrows. All densities in this paper are scaled to $\rho_{\rm ref}$. The KD is hotter and denser near the inner edge, and the radiative flux maximizes at around $4$. Therefore, both the thermal gradient force and the radiative force drive matter in the form of a wind from the inner parts of the disc. Very little matter is ejected from the region $r_{\rm d} > 100$, even if the simulation is run for a longer time. As the wind emerges from the inner regions of the disc, the general direction of motion is away from the axis of symmetry [Figure \ref{lab_den_md3}(b)]. However, at a later time, a part of the wind moves towards the axis of symmetry [Figure \ref{lab_den_md3}(d)], but the wind again moves away from the axis. The entire wind-fan oscillates as a whole, somewhat dancing like a flame in a breeze. Not all the ejected matter flows out: a tiny fraction falls back and hits the wind base, which causes a perturbation propagating along the wind. Moreover, $F_r$ near the wind base is directed towards the axis, but higher up it is directed away from the axis. The inner radius of the KD is $r=3$, so there is no source of radiation for $r < 3$. Hence, close to the axis of symmetry and just above the disc, the $r$ component of the radiative flux points inward, {\em i.e. } $F_r<0$. The centrifugal force is always directed away from the axis. $F_\phi$, which is weaker than the fluxes in the other two directions, will spin up the wind, but the stronger pressure components boost the drag in the $\phi$ direction. Additionally, the radiative force along $z$ powers the wind upwards, and finally gravity attracts every part of the wind towards the black hole. All these factors together interact with the ejected matter and generate a wind which originates from the inner region of the KD, but fans out in the $r-z$ plane. This effect is quite clearly presented in various panels (Figure \ref{lab_den_md3}c-f). It may also be noted that not all the matter coming out of the KD becomes a wind; some of it sits above the KD. \subsection{Angular momentum transport} \label{sec_results_angular_momentum} \begin{figure} \centering \includegraphics[width=80mm]{fig4.eps} \caption{Radial variation of $v_\phi$ at a height $z=6$ from the equatorial plane, for three run times, $30$, $50$ and $80$, as shown in the legend, for $\dot{m} = 3$.
The disc $v_\phi$ is plotted for comparison.} \label{lab_v_phi} \end{figure} \begin{figure*} \begin{minipage}[c]{\textwidth} \begin{center} \includegraphics[width=150mm]{fig5.eps} \caption{Contours of $v_\phi$. Time elapsed is $t = 92$, for winds generated from accretion discs with (a) $\dot{m}=3$ and (b) $\dot{m}=4$.} \label{lab_angmom} \end{center} \end{minipage} \end{figure*} Due to the high rotational speed of the inner region of the disc, winds produced from this region propagate with a fraction of the rotational speed of the disc. Hence matter ejected from the KD carries a part of the disc angular momentum along with it. This can be seen as one of the ways through which the disc removes its angular momentum. In Figure (\ref{lab_v_phi}), we plot $v_{\phi}$ as a function of $r$ at a height of $z=6$ from the equatorial plane, measured at three different run times $t=30$ (solid), $50$ (dashed) and $80$ (dashed-dotted). The disc $v_\phi$ (dotted) is also included for comparison. It can be seen that near the disc inner edge, almost $64\%$ of the azimuthal component of velocity is effectively removed by the wind, which would therefore reduce the angular momentum too. All the terms with a negative sign in the last term of Equation (\ref{eq.momentum_phi}) resist rotation, and thereby remove angular momentum (and $v_\phi$) from the wind. The rotational speed of the wind can be as high as $0.3$ near the axis of symmetry, but is much less than the disc rotational velocity. Since the radiative moments are weak above the outer part of the disc, the rotation velocity just above the disc is similar to that on the disc at the same $r$. Since all the curves (solid, dashed, dash-dotted) almost overlap each other in Figure (\ref{lab_v_phi}), we conclude that the $v_\phi$ distribution close to the disc is almost steady. Further, we plot the contours of $v_\phi$ of winds generated from accretion discs with accretion rates $\dot{m}=3$ and $4$ (Figures \ref{lab_angmom}a \& \ref{lab_angmom}b, respectively). As the winds are stronger for higher accretion rates, the radiation from a disc with higher $\dot{m}$ drives outflowing matter with higher $v_\phi$. Winds from an accretion disc with $\dot{m}=4$ possess higher values of azimuthal velocity in a larger region above the accretion disc, compared to the winds from a disc of lower ${\dot m}$. It may be noted that a fraction of the outflowing matter near the axis of symmetry falls back and, at some height above the disc, interacts with the outflowing wind and makes it bend away. KDs with higher $\dot{m}$, which eject faster and more rapidly rotating matter, trap this relatively high-$v_\phi$ matter in the region where the inner boundary of the wind bends away. As the matter moves further away, $v_\phi$ is reduced by radiation drag. \begin{figure*} \begin{minipage}[c]{\textwidth} \begin{center} \includegraphics[width=150mm]{fig6.eps} \caption{Density distribution for $\dot{m}$ $\approx$ 1.3 (a \& b), $\dot{m}$ $\approx$ 2.5 (c \& d) and $\dot{m}$ $\approx$ 3 (e \& f); snapshots are at run time t = 72. Frames in the left column are generated considering the radiation drag, and the frames in the right column are without the drag effect.
It is evident that for a low mass accretion rate (here, $\dot{m}\approx 1.3$), the winds cannot be driven away, due to the presence of the radiation drag.} \label{lab_rad_drag1} \end{center} \end{minipage} \end{figure*} \subsection{Effect of radiation drag} In Section (\ref{sec_equations}), the expression for the radiation term (equation \ref{radcomp.eq}) contains both positive and negative terms. The flux terms ($F_i$) are positive and therefore would accelerate the flow along its direction. However, the terms containing the radiation energy density and pressure components appear with a negative sign and are also proportional to various velocity components. The negative terms cause deceleration and would reduce the relevant components of the momentum density. These negative terms are called radiative drag terms. For example, the radial component of the momentum density equation (\ref{eq.momentum_r}) will be increased by $F_r$, but will be reduced provided any or all of the terms containing $E$, $P_{rr}$, $P_{r\phi}$, $P_{rz}$ are dominant. It may be noted that the radiative drag terms are highly non-linear; for example, $P_{\phi r}$ will couple with $v_r$ and hinder the growth of the azimuthal momentum density ($\rho v_\phi$), but will also couple with $v_\phi$ and oppose the growth of $\rho v_r$ (refer to equation \ref{radcomp.eq} for the radiative terms and the equations of motion \ref{eq.momentum_r}-\ref{eq.momentum_phi}). To show the impact of radiation drag on the dynamics of the winds, we compare solutions with and without drag terms; a schematic sketch of this force decomposition is given below. We plot the density contours and velocity field of wind solutions with drag terms in the left panels (a, c, e) of Figure (\ref{lab_rad_drag1}), while solutions without drag terms are plotted in the right panels (b, d, f) of the same figure. The accretion rates for each pair of comparable panels are ${\dot m}=1.3$ (Figures \ref{lab_rad_drag1}a \& b), ${\dot m}=2.5$ (Figures \ref{lab_rad_drag1}c \& d) and ${\dot m}=3$ (Figures \ref{lab_rad_drag1}e \& f). All the plots are obtained at run time $t=72$. For lower ${\dot m}$ the wind is launched, but as it accelerates to higher velocities the drag terms suppress the wind. However, in the absence of radiation drag the wind freely propagates outwards. For a slightly higher $\dot{m}$ ($=2.5$), a weaker wind is generated in the presence of drag terms (Figure \ref{lab_rad_drag1}c); without drag terms, however, the wind is relatively stronger (Figure \ref{lab_rad_drag1}d). A similar effect can be seen for $\dot{m}=3$, in which the wind in the presence of drag terms (Figure \ref{lab_rad_drag1}e) is weaker than the one in the absence of drag terms ({\em i.e. } Figure \ref{lab_rad_drag1}f). It may be noted that, in the absence of the radiation drag terms, a lower-luminosity disc will produce stronger winds than a luminous disc in which drag terms are considered (compare Figures \ref{lab_rad_drag1}d \& \ref{lab_rad_drag1}e). Here Figure \ref{lab_rad_drag1}(e) is identical to Figure \ref{lab_den_md3}(d). Radiation driving ($F_r,~F_z$) is weaker above low-luminosity accretion discs. As the wind is launched from such a disc, it has a low poloidal velocity ($v_p=\sqrt{v_r^2+v_z^2}$) to start with, but high $v_\phi$. This means radiation drag is not important along the $r,~z$ directions but, $v_\phi$ being high, it boosts the drag terms in all three directions. Therefore, matter ejected from the disc will not flow out as a wind but will be smothered by the drag terms.
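To make the sign structure of these source terms concrete, a minimal sketch follows (Python; purely schematic with invented numbers in code units, and not the routine used in the simulations; the exact prefactors and moment combinations are those of equation \ref{radcomp.eq}):
\begin{verbatim}
import numpy as np

# Schematic sign structure of the radial momentum source (code units).
# Illustration only, NOT the production code: exact prefactors follow
# the radiative term of the equations of motion in the text.
def radial_momentum_source(F_r, E, P_rr, P_rphi, P_rz, v_r, v_phi, v_z):
    drive = F_r                              # flux term: accelerates
    drag = (E + P_rr) * v_r + P_rphi * v_phi + P_rz * v_z  # decelerates
    return drive - drag

# A slow but rapidly rotating element near the wind base (invented
# numbers): high v_phi boosts the drag terms.
print(radial_momentum_source(F_r=0.12, E=0.50, P_rr=0.20, P_rphi=0.15,
                             P_rz=0.05, v_r=0.05, v_phi=0.45, v_z=0.05))
\end{verbatim}
With these illustrative numbers the drag nearly cancels the driving, mimicking the suppressed launch above a faint disc.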
Above a luminous disc, however, the winds are strongly driven in the $r,~z$ directions and can overcome the radiation drag due to high $v_\phi$. At larger distances, where the poloidal velocity increases, the drag becomes more effective and limits the terminal speed of the outflow. This figure illustrates the effect of radiative drag. \subsection{Terminal speeds of the winds} \begin{figure} \centering \includegraphics[width=8.5cm]{fig7.eps} \caption{Terminal speeds as a function of $\dot{m}$ for winds with radiation drag terms (solid curve) and without radiation drag terms (dashed) at run time $t=100$.} \label{lab_rad_speeds} \end{figure} Once the winds leave the computational domain and escape to infinity, the maximum speed they acquire while escaping is what we call the terminal speed ($v_{\rm \small T}$). These are the speeds that correspond to the observed blue shift in the spectra from these sources. In this paper, we consider the maximum speed at the outer boundary as the $v_{\rm \small T}$, i.e., $v_{\rm \small T}=v_p(r,512)$. In Figure (\ref{lab_rad_speeds}), we plot the terminal speeds as a function of $\dot{m}$ at run time $t={100}$. The solid curve corresponds to $v_{\rm \small T}$ when all the radiative terms are effective (including drag terms). To show the effect of radiation drag, we overplot the corresponding terminal speeds without considering the radiation drag terms (dashed). The winds are faster for higher accretion rates and reach mildly relativistic values. In the absence of radiation drag, the terminal speeds are overestimated by about an order of magnitude. \subsection{Mass outflow properties} \label{sec_outflow} \begin{figure} \centering \includegraphics[width=90mm]{fig8a.eps} \includegraphics[width=90mm]{fig8b.eps} \includegraphics[width=90mm]{fig8c.eps} \caption{Radial variation of the mass outflow rate $d\dot{M}_{\rm out}(r)$, (a) just above the accretion disc and (b) at the outer $z$ boundary. (c) Variation of the mass outflow ${\dot M}$ at the outer $z$ boundary, w.r.t.\ ${\dot m}$, with and without considering the effect of radiation drag on the outflows. All the outflows are measured in Eddington units, at run time $t = 100$.} \label{lab_mdout} \end{figure} \begin{figure} \centering \includegraphics[width=90mm]{fig9.eps} \caption{The fraction of the total outflow which is transonic, ${\dot R}_{\rm out}$, as a function of time, for two accretion rates $\dot m=3.0$ (dotted, blue) and ${\dot m}=3.5$ (solid, red).} \label{lab_transwind} \end{figure} As the matter is ejected from the upper disc surface, the mass flux due to the emission can be represented in differential form as \begin{equation} d\dot{M}_{\rm out} (r) = 2\pi r \rho v_z dr. \end{equation} The total integrated value of $\dot{M}_{\rm out}$ along the radial direction can be written as \begin{equation} \dot{M}_{\rm out} = \int_{r_{i}}^{r_{o}} 2\pi r \rho v_z dr. \end{equation} Here the spatial resolution is $dr=1$. We calculate the radial variation of the outflow at a given height $z$ above the disc, as well as the net integrated outflow along $r$ at that specific $z$, for a particular time step. In Figure \ref{lab_mdout}(a), we plot $d{\dot M}_{\rm out}$ (in Eddington units) with $r$, calculated just above the disc for $\dot{m} = 2.5, 3$ \& $4$ at run time $t=100$, signifying the outflow from the launching point of the wind. Further, in Figure \ref{lab_mdout}(b), we estimate the radial variation of the outflow rate at the outer $z$ boundary of the computational domain.
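The discretised counterpart of the integral above is elementary; the following sketch (Python/NumPy; the density and velocity profiles are invented placeholders, and restricting the sum to cells with $v_z>0$ is an assumption made here for illustration) shows how $\dot{M}_{\rm out}$ is accumulated along $r$ at fixed $z$ with $dr=1$:
\begin{verbatim}
import numpy as np

def mass_outflow_rate(r, rho, v_z, dr=1.0):
    # Discretised  Mdot_out = int 2*pi*r*rho*v_z dr  at fixed z.
    # Counting only outflowing cells (v_z > 0) is an illustrative choice.
    integrand = 2.0 * np.pi * r * rho * np.where(v_z > 0.0, v_z, 0.0)
    return np.sum(integrand) * dr

# Invented profiles in code units: outflow concentrated at small r.
r = np.arange(3.0, 513.0)              # radial cell centres, dr = 1
rho = 1e-4 * np.exp(-r / 30.0)         # placeholder density slice
v_z = 0.05 * np.exp(-r / 50.0)         # placeholder vertical velocity
print(mass_outflow_rate(r, rho, v_z))  # Mdot_out in code units
\end{verbatim}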
The outflow at the ejection base mostly comes out from the inner regions of the accretion disc, and with time it gradually covers the entire numerical domain diagonally, leaving the domain through the outer boundaries. Further, the outflow rates at the outer boundary are significantly less than the outflow at launching, indicating that not all the matter ejected from the disc leaves the domain; only a fraction of it escapes. In Fig. \ref{lab_mdout}(c), we show the integrated outflow rates ${\dot M}_{\rm out}$ (black dashed) from the disc calculated at the outer $z$ boundary ($z=512$). These values are obtained by integrating the outflow rates along $r$, at run time $t=100$. For comparison, we show the outflow rates without drag (blue solid) and observe that, as expected, the radiation drag has a significant effect in suppressing the matter ejection in the form of winds. Furthermore, the very small magnitudes of the outflow rates compared to the accretion rates justify our assumption that the disc can remain mostly in a steady state and is not affected by the matter ejection. In other words, the accretion rates remain time-independent. The computational domain in this paper is just $512\times 512$, so it is intriguing to wonder what fraction of the computed outflow will actually escape the gravity of the black hole. Since the wind is a fluid, if the wind is transonic then it will definitely escape the black hole's gravity. In Fig. \ref{lab_transwind} we plot the fraction of the calculated mass outflow rate at $z=512$ which is transonic, i.e., $v_p(r,512)/c_s(r,512) > 1$, measured as $${\dot R}_{\rm out}=\frac{{\dot M}_{\rm out}(\mbox{trans})}{{\dot M}_{\rm out}(\mbox{total})},$$ where both ${\dot M}_{\rm out}(\mbox{trans})$ and ${\dot M}_{\rm out}(\mbox{total})$ are measured at $z=512$, the upper boundary of the computational box. Figure \ref{lab_transwind} shows that of all the matter which is ejected from the computational domain, only a fraction is transonic and can therefore actually leave the gravitational attraction of the central black hole. For $\dot{m}=3$ (dotted, blue) the transonic mass outflow rate is about 10\% of the total mass leaving the computational domain as winds. For $\dot{m}=3.5$ (solid, red) it varies between 30 and 70\% of the total outflow. We compute the wind flowing out of the computational domain only through the upper $z$ boundary, because the mass-outflow rate through the outer $r$ boundary is about an order of magnitude smaller than that through the upper $z$ boundary. Moreover, the mass flowing out through the outer $r$-boundary is subsonic and would not contribute significantly to the net outflow rate. The wind outflow rate is also variable. \section{Conclusions} \label{sec_conclusions} In this paper, we have studied the generation mechanism and properties of the winds around black hole accretion discs. The winds are generated by a Shakura--Sunyaev Keplerian accretion disc, which is steady in nature and acts as the source of the wind, and by the radiation field which drives them. The radiation field is controlled by $\dot{m}$. We computed all components of the radiative moments numerically. The radiation field generated by the steady KD is also steady. It may be noted that our assumption of the optically thin nature of the medium above the KD is justified, since the cumulative optical depth along the $z$-direction is much less than 1 (see Appendix A).
Although we keep our analysis in the non-relativistic regime, we have used a pseudo-Newtonian gravitational potential to take care of the strong gravity near the black hole. We show that the radiation pressure inside the disc, along with the thermal pressure, is able to push the matter out of the disc. The winds are mostly generated from the inner region of the accretion disc ($r<30$). The ejected matter not only carries mass with it but also removes angular momentum from the disc. Highly rotating winds are driven by a combined effect of thermal pressure, radiation field and centrifugal force, and typically for $\dot{m}>1.8$ the winds escape to infinity. For smaller accretion rates, we showed that the winds fall back to the disc, as they do not have sufficient radiation drive to push them to escape. One curious fact which these simulations showed is that a part of the matter ejected does not become wind but may accumulate above the disc. We showed that the radiation drag limits the jet speed. In fact, below a certain luminosity, the wind is destroyed by the drag terms. Only for a luminous disc can radiation generate a wind against gravity and its own drag. So radiation drag is a significant factor in determining the dynamical properties of the winds. The $\phi$ component of the radiation drag is also capable of reducing the angular momentum of the wind. The work of \citet{Y18} is similar to ours, except that they considered outflows from a corona and did not consider radiation drag. These authors considered a more luminous disc (up to 0.75 Eddington luminosity), while we considered only up to 0.66 Eddington luminosity ($\equiv 4 {\dot M}_{\rm Edd}$). However, the maximum terminal speeds are somewhat similar for a luminous disc, although we predict a lower cutoff of disc luminosity to drive a wind from the KD. We analyzed the terminal properties of the winds and found that the terminal velocities of the disc winds are sub-relativistic, and that a higher accretion rate leads to higher magnitudes of the wind speed. The wind speeds are found to be mildly relativistic, which is consistent with observations. We show that if radiation drag is ignored, the terminal speeds are overestimated significantly. A detailed study of the mass outflow rate shows that the mass loss from the disc is indeed a very small fraction of the disc mass, and hence we may conclude that the radiative properties of the KD will not be significantly affected by radiatively driven winds. Inclusion of radiation drag significantly suppresses the mass outflow rates at the outer boundary of our computational domain. So one needs to take care of radiation drag effects while carrying out the analysis of radiation driving in the winds. This is a non-relativistic study of the disc wind dynamics under the impact of the radiation field in the Thomson scattering regime. In upcoming works, we will examine the role of Compton scattering in driving such winds. \section*{Acknowledgments} The authors would like to thank the anonymous reviewer for insightful comments and suggestions that helped us improve the manuscript. SR acknowledges the hospitality extended by ARIES during her many academic visits. MKV acknowledges his brief postdoc tenure in ARIES where this work was initiated. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author.
{ "timestamp": "2020-12-21T02:08:32", "yymm": "2012", "arxiv_id": "2012.08886", "language": "en", "url": "https://arxiv.org/abs/2012.08886" }
\section{\label{sec:level1}Introduction} The emergence of 4\textsuperscript{th} generation synchrotron sources based on low-emittance rings \cite{MAXIV18} and of spatially coherent X-ray Free Electron Lasers (XFELs) \cite{EUXFEL20,LCLS10} opens a new era in the study of high coherence materials, such as perfect crystals, using high coherence probes \cite{Vaclav04}. Recent years have seen an increase in the development and use of inverse microscopy techniques based on coherent X-rays \cite{Thibault379,Ulvestad2014, Singer2018, Godard11, Hill18, Hruszkewycz2017, Chamard10}, and other types of speckle analysis \cite{Zhang18, Jacques16}, for the study of crystalline materials, as well as for the characterization of coherent X-ray beams \cite{Schropp2010, Vila2011, Schropp2013,Bjorling2020,Thibault379}. These methods are based on the assumption of a Fourier transform relation between the sample's electron density and the X-ray scattered field, derived from the so-called kinematical approximation \cite{Zacha1945}. On the other hand, large single crystals are among the primary optical elements for X-ray experiments. For these, more complex interactions including multiple diffraction, absorption and refraction effects must be considered, which are well described by the dynamical diffraction theory \cite{Zacha1945, Batter64,Authi01}. Dynamical diffraction effects have already proven very useful for applications in ultrafast X-ray optics \cite{Amann2012}. Nevertheless, dynamical diffraction can arise already from samples of modest dimensions ($\sim\SI{500}{\nano\meter}$) and, in conjunction with the use of coherent nanobeams, has already been shown to be not only measurable \cite{Pateras18,Civita18} but also a hindrance to standard inversion algorithms for digital microscopies \cite{Shabalin17}, requiring adaptive solutions \cite{Gorobtsov16}. Here, we show that the exceptional coherence properties of the X-ray beam produced at MAX IV Laboratory can be exploited to understand dynamical diffraction effects in crystalline samples and how the wave-front changes in the presence of strain, and we propose an approach to analyse them. The effect under study, presented theoretically in Ref.~\cite{Shvy12}, consists of the appearance of multiple monochromatic beams in the diffracted and forward directions, which present a transverse displacement of a few microns between each other. These beams, also known as echoes, are delayed with respect to each other by a few femtoseconds, depending on the thickness of the crystal and the energy of the X-ray beam. This ultra-fast effect is used in X-ray optics at XFELs for self-seeding radiation production \cite{Amann2012}. We have previously demonstrated the transverse displacement of the echoes in diamond plates \cite{ARF18} and Si wafers \cite{ARF20} using a $\SI{2}{\micro\meter}$ beam focused onto an optically coupled X-ray detector with sub-$\SI{}{\micro\meter}$ resolution. In this Letter, we improve the resolution of our previous works and demonstrate that localised surface strain in a Si wafer can produce a spatial modulation of the echoes that translates into the tuning of their time delay. A nanobeam is used to select regions of a Si sample where a residual strain field was created by nano-indentation. The coherence of the X-rays is exploited to measure the finely structured echoes, strongly dependent on the local strain field, with a sub-$\SI{100}{\nano\meter}$ resolution only achievable with inverse microscopy.
The technique used, known as X-ray tele-ptychography \cite{Tsai26}, is capable of measuring the X-ray diffracted field, using as probe a small pinhole placed \emph{after} the sample \footnote{We note that conventional ptychography, in which the sample is scanned with respect to the incoming beam, would not work in the presence of dynamical diffraction, due to its assumption of a factorization of the illumination and the sample transmissivity, ref. \cite{Thibault379}}. In this way, no assumption is made on the interaction of X-rays with the sample, and this approach can be extended to study dynamical diffraction effects. A similar configuration, using a large coherent beam, with the sample in diffraction condition for strain sensitivity, has been recently shown in \cite{Verezhak18, verezhak2020}. The results presented are corroborated by simulations based on dynamical diffraction theory. This work provides a very fine tool to understand the role of dynamical diffraction in the study of ultrafast processes in perfect and strained thin crystals. With this knowledge, new X-ray optics for high coherence sources can be designed when better control of the temporal dependence of the diffracted signal is needed. Finally, this work proposes an approach for the study of strained micro-crystals and time-resolved lattice deformation \cite{Crysta_top} in highly ordered materials when dynamical effects are dominant, using the echoes as a probe. \begin{figure} \centering \includegraphics[scale = 0.15]{Fig1geometry.png} \caption{ Schematics of the diffraction setup. The focused X-ray beam with wavelength $\lambda$ and wave vector $\mathbf{k_0}$, $|\mathbf{k_0}| = 2 \pi/\lambda$, impinges on the Si sample oriented with a family of atomic planes (with spacing $d$) in diffraction conditions, $\mathbf{k_0}+\mathbf{H} = \mathbf{k_H}$, with $\lvert\mathbf{H}\rvert=2\pi /d$. The photons are diffracted at an angle $2\theta$ along $\mathbf{k_H}$. Echoes in the forward (diffraction) direction are indicated with their wave-vectors $\mathbf{k_{0i}}$ ($\mathbf{k_{Hi}}$). The position of the indents on the sample, the pinhole used to scan the wave-front and the two pixel detectors, one in the forward and one in the diffraction direction, are shown. }\label{fig:geometry} \end{figure} The samples used in this study are two Si$(100)$ wafers of $\SI{100}{\micro\meter}$ thickness. On the surface of one of them, a series of nano-indents is performed using an Alemnis nanoindenter inside a Zeiss Leo Ultra 55 FEG scanning electron microscope (SEM), as shown in Fig.~S1 of the SM \cite{SM20}. Each indent produces a strain field that propagates radially from the indent with an amplitude that decreases exponentially with distance \cite{Reuber14, Liu14}. The Si samples are measured with X-rays using a Laue geometry, with the Si(111) planes in diffraction conditions, so that the forward diffraction (FD) signal can be measured, as described in our previous work \cite{ARF18,ARF20}. A schematic of the sample geometry is shown in Fig.~\ref{fig:geometry}. The FD signal can only be described using dynamical diffraction theory \cite{Batter64,Zacha1945,Shvy12}, which includes all X-ray interactions within the crystal, beyond the diffraction of the incident beam by the directly illuminated volume. Due to the crystal's long-range order, the diffracted photons see more lattice planes also in diffraction condition. Therefore, the photons can be diffracted multiple times, both in the diffracted and the forward directions.
The diffraction inside the crystal occurs in the volume enclosed by the beam size in the $y$ direction and the gray triangle in Fig.~\ref{fig:geometry} in the $x$-$z$ plane. At the crystal exit surface all diffracted waves interfere, generating the transversely displaced echoes \cite{Shvy12,ARF18}, both in the forward and diffracted directions, with wave vectors $\mathbf{k_{0i}}$ and $\mathbf{k_{Hi}}$, respectively, where $i = 1, 2, \ldots$ indexes the X-ray beams generated. Due to the different paths the waves follow inside the crystal, they accumulate a temporal delay with respect to each other \cite{ARF18,Shvy12}. The time delay $\Delta t$ and the transverse displacement $\Delta x$ associated to each beam are related linearly \cite{Shvy12} by \begin{equation} \label{eq:deltax_deltat} \Delta x= c\cot(\theta)\Delta t \end{equation} where $c$ is the speed of light and $\theta$ is the Bragg angle \footnote{From Bragg's law $2d \sin \theta=n\lambda$}. The measurements are performed at the beamline NanoMAX using a photon energy of $\SI{8}{\kilo\electronvolt}$ ($\lambda=\SI{0.155}{\nano\meter}$) selected by a Si$(111)$ monochromator. At this energy the flux at the sample is $10^{11}$~ph/s and the Kirkpatrick-Baez (KB) mirrors produce a highly coherent X-ray focused beam with $\sim\SI{110}{\nano\meter}$ waist and \SI{250}{\micro\meter} focal depth, with a divergence of $\sim\SI{1.2}{\milli\radian}$ in both the horizontal and vertical directions \cite{Bjorling2020}. The Si samples are mounted on a scanning stage in the KB focal plane. A $\SI{3}{\micro\meter}$ diameter pinhole is mounted on a second piezo scanning stage $\SI{3.68}{\milli\meter}$ downstream of the sample along the beam propagation axis, as shown schematically in Fig.~\ref{fig:geometry}. The asymmetric Si$(111)$ reflection, $\theta_{111}=\SI{14.3}{\degree}$, is chosen to match the $\sim\SI{1}{\electronvolt}$ monochromator bandwidth. For the symmetric Laue geometry, the extinction length of Si$(111)$ at $\SI{8}{\kilo\electronvolt}$ is $\SI{18.54}{\micro\meter}$ and the absorption depth is $\SI{220.38}{\micro\meter}$. The detector, a photon counting Merlin system with a 512$\times$512 array of pixels with $\SI{55}{\micro\meter}$ edge size, is placed $\SI{4.5}{\meter}$ downstream of the sample, to satisfy the ptychographic sampling requirement from the pinhole at the energy used. A He-filled flight tube is used to minimise air scattering between the pinhole and the Merlin detector. A Pilatus 100K detector is placed in the horizontal scattering plane to optimise the Si(111) reflection. \begin{figure} \centering \includegraphics[scale = 0.23]{Fig2propagation.png} \caption{ Intensity of the FD wave-front from the strain-free Si wafer at two angles: (left) $1$ degree away from the diffraction angle, where echoes are absent, and (right) at the diffraction angle. (a,a') Reconstruction at the pinhole plane. (b,b') Reconstruction propagated to the KB focus. (c,c') Simulation at the focus. (d,d') Line-cuts through (a,a') and (b,b'). The simulated signal, lower line, is offset for clarity. (e') Propagation of (a') along the beam direction z up to the sample position, illustrating the divergence of the transmitted beam and the parallel echoes. } \label{fig:Reconst_prop_sim} \end{figure} The intensity of the forward wave-front is measured with tele-ptychography scans \cite{Tsai26}, for both samples and both in and out of the diffraction condition.
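Before describing the scan procedure in detail, we note that equation (\ref{eq:deltax_deltat}) makes the conversion between a measured transverse displacement and the corresponding time delay elementary; a short sketch (Python; illustrative numbers only) inverts it for the Si$(111)$ geometry used here, $\theta=14.3^\circ$:
\begin{verbatim}
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light, m/s

def delay_from_displacement(dx_m, theta_deg):
    # Invert  dx = c * cot(theta) * dt  for the echo time delay dt.
    return dx_m * np.tan(np.radians(theta_deg)) / C_LIGHT

dt = delay_from_displacement(2e-6, 14.3)   # 2 micron displacement
print(f"time delay = {dt * 1e15:.2f} fs")  # ~1.7 fs
\end{verbatim}
A displacement of $\SI{2}{\micro\meter}$ thus maps to a delay of roughly 1.7 fs, consistent with the femtosecond scale quoted in the introduction.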
Tele-ptychography measurements are performed by scanning the pinhole in the x-y plane, over an area of 18$\times$$\SI{36}{\micro\meter}$$^2$ for the strain-free sample and 18$\times$$\SI{27}{\micro\meter}$$^2$ for the nano-indented sample, with a step size of $\SI{0.5}{\micro\meter}$. In order to counteract possible drifts of the setup, the data-set is acquired over eight separate scans, each covering an area of 10$\times$$\SI{10}{\micro\meter}$$^2$, with $\SI{1}{\micro\meter}$ overlap between adjacent areas, as shown in Fig.~S3 of the SM \cite{SM20}. Data reconstruction is performed with a tailored routine \cite{Wakonig20} that combines 500 iterations of the difference map algorithm \cite{Thibault09}, without updating the reconstruction of the pinhole for the first 499 iterations, with 1500 iterations of a maximum likelihood refinement \cite{Thibault12} updating both the wavefront and the pinhole. This routine applies a common refinement of the wavefront in the overlapping areas, while the refinement of the pinhole is independent for the 8 scans \cite{Guizar14}. In the reconstruction we use the pinhole obtained with a test specimen as initial guess, see Fig.~S4 of the SM \cite{SM20}. The tele-ptychography reconstructions return the amplitude and phase of the forward diffracted wave field and the transmitted beam at the pinhole position (cf.~Fig.~S5 of the SM \cite{SM20}). Figures \ref{fig:Reconst_prop_sim}(a,a') show the forward diffracted intensity (the square of the retrieved amplitude) from the strain-free sample out of and in diffraction conditions, respectively. Echoes are indeed only seen in diffraction. These measurements reproduce previous results obtained with direct measurements \cite{ARF18}, validating our approach. The divergence of the focused beam is not propagated through the echoes in the diffraction plane, but is only visible in the direction perpendicular to it. In the horizontal plane, indeed, the crystal acts as an X-ray beam filter, diffracting only the photons with a divergence comparable to the Darwin width of the selected diffraction peak, which in our case is $\SI{100}{\nano\radian}$ \cite{Zacha1945}. In the vertical plane, this filter effect does not apply (cf.~Fig.~\ref{fig:Reconst_prop_sim}(a')). Nevertheless, the transmitted beam preserves the full divergence of the incoming beam, overlapping with the echoes at large distance from the sample. Figure \ref{fig:Reconst_prop_sim}(b,b') shows the diffracted intensities of (a,a') at the sample plane, obtained by the propagation of the retrieved wave-field \cite{Tsai26,verezhak2020}. The sample plane coincides with the location of the X-ray beam focus, and therefore provides the best possible resolution of the forward diffracted field. Here, we note that there is no strict correspondence, in real space, between the position of the sample in the focus of the beam, its thickness (incidentally comparable to the beam focal depth), and the propagation of the echoes in the focal plane. Indeed, the diffracted field measured is produced at the sample exit surface, and the back-propagation of the wavefield beyond this point does not reflect any physical reality. The pixel size obtained for the reconstruction is $\SI{32}{\nano\meter}$, while the resolution is estimated to be $\SI{55}{\nano\meter}$, using Fourier ring correlation \cite{Heel05} for two different scans of the forward beam out of diffraction conditions (see Fig.~S6 in SM \cite{SM20}).
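For reference, a compact, textbook-style implementation of the Fourier ring correlation used for this resolution estimate might look as follows (Python/NumPy; a generic sketch under the standard definition, not the exact routine used for Fig.~S6):
\begin{verbatim}
import numpy as np

def fourier_ring_correlation(img1, img2, n_bins=64):
    # FRC(q) = sum_ring F1*conj(F2) /
    #          sqrt(sum_ring |F1|^2 * sum_ring |F2|^2),
    # accumulated over concentric rings of spatial frequency q.
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    y, x = np.indices((ny, nx))
    q = np.hypot(x - nx // 2, y - ny // 2)
    bins = np.linspace(0.0, q.max(), n_bins + 1)
    idx = np.digitize(q.ravel(), bins) - 1
    num = np.bincount(idx, ((F1 * np.conj(F2)).real).ravel(), n_bins)
    d1 = np.bincount(idx, (np.abs(F1) ** 2).ravel(), n_bins)
    d2 = np.bincount(idx, (np.abs(F2) ** 2).ravel(), n_bins)
    return num[:n_bins] / np.sqrt(d1[:n_bins] * d2[:n_bins] + 1e-12)

# The resolution is read off where the FRC curve crosses a chosen
# threshold (e.g. the 1/2-bit criterion) for two independent scans.
\end{verbatim}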
Figures~\ref{fig:Reconst_prop_sim}(c,c') show the simulation of the intensity of the forward diffracted field at the sample position for the configuration used, i.e.~a $\SI{100}{\micro\meter}$ thick Si crystal at the asymmetric $(111)$ reflection, using a horizontal diffraction geometry and a strain-free model, as described in \cite{ARF18}. The agreement with simulations is more clearly seen in the line-cuts shown in Fig.~\ref{fig:Reconst_prop_sim}(d,d'), where the data are offset for clarity. Figure \ref{fig:Reconst_prop_sim}(e') shows a map of the intensity of the echoes along the beam propagation direction. Due to the beam divergence, already at a distance of $\SI{2}{\milli\meter}$ from the sample surface the echoes overlap with the divergent transmitted beam. Moreover, the echoes are parallel to the beam propagation direction, as expected. \begin{figure} \centering \includegraphics[scale = 0.3]{Fig3Strain.png} \caption{ Line-cuts through the retrieved (empty dots) and simulated (filled dots) echoes from the indented Si sample in regions with different strain state, cf. text for details. Intensities are offset for clarity. } \label{fig:Strain-profile} \end{figure} From the indented sample, data are collected at different positions to compare the FD wave-fronts produced by different strain fields. The dynamical diffraction extends far beyond the volume affected by the indent-induced strain. As a consequence, the echoes are still produced with a large contribution from the strain-free part of the Si crystal. Simulations of the expected echo distribution are performed with our code \cite{ARF18,ARF20}, which we have adapted to model strained crystals following the work in Refs.~\cite{Lings06,Takagi62,Kato59}. The model assumes that the indent produces an isotropic strain field of amplitude proportional to the load, which decreases exponentially with distance from the indent. Numerically this is achieved by slicing the crystal into $\SI{20}{\nano\meter}$ layers parallel to the surface, each with a different value of the lattice parameter following the exponential decay. This numerical step, necessary to reduce the complexity of the calculations, approximates the radial decay by a linear one. Although this approximation introduces a variation by a factor between 1 and 2 in the effective decay length X-rays experience along all possible paths defined by the dynamical diffraction process, it still provides good correspondence between data and simulations. We expect that improving the model to reflect a true radial decay will increase this correspondence. Finally, our model also disregards the strain anisotropy expected along different crystallographic directions, due to different elastic constants. This should be included in a future model. Figures \ref{fig:Strain-profile}(a), (b) and (c) show the results from the inversion of the data collected, respectively, at \SI{1}{\milli\meter} from the indented area and close to two nano-indents with loads $\SI{25}{\milli\newton}$ and $\SI{75}{\milli\newton}$. The retrieved FD signal shows an increase of the number of echoes and a decrease of their mutual distance for increasing indentation strength. At higher strain, the echoes tend to merge into a continuum. Simulations of the dynamical diffraction signal using models with different strain fields provide a rather good agreement with the data. Tele-ptychography reconstructions from this sample and the respective simulations are shown in Fig.~S7-S10 of the SM.
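The layered strain model described above reduces to a few lines of code; the sketch below (Python/NumPy) builds the per-layer fractional lattice-parameter change fed to a dynamical-diffraction calculation, with the $\SI{20}{\nano\meter}$ layer thickness from the text and placeholder values for the strain amplitude and decay length (the fitted values are quoted further below):
\begin{verbatim}
import numpy as np

def layered_strain_profile(thickness_um=100.0, layer_nm=20.0,
                           eps0=1e-4, decay_um=7.0):
    # Slice the wafer into layers parallel to the surface and assign
    # each a fractional lattice-parameter change
    #   eps(z) = eps0 * exp(-z / decay),
    # with z measured from the indented surface (placeholder values).
    n_layers = int(thickness_um * 1000.0 / layer_nm)
    z_um = (np.arange(n_layers) + 0.5) * layer_nm / 1000.0
    return eps0 * np.exp(-z_um / decay_um)

eps = layered_strain_profile()      # 5000 layers for a 100 um wafer
print(len(eps), eps[0], eps[-1])    # strain at entrance/exit surfaces
\end{verbatim}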
More accurate fitting routines will be implemented in the future. However, the positions of the echoes are easily extracted via algorithms that find local maxima, as shown in Fig.~S11. New echoes with shorter time delays arise in the strained areas, confirming this experimental approach to observe the strain-tailored FD maxima involved in this ultra-fast process. Their time delay, visualised in Fig.~S12 of the SM, can be calculated using Eq.~(\ref{eq:deltax_deltat}) and is shown in Fig.~\ref{fig:table_time}. The strain amplitude (expressed as the fractional variation of the lattice parameter) and the decay length of the exponential that best reproduces the data for the three regions (see Fig.~S9 in SM at \cite{SM20}) are: \begin{itemize}[itemsep=-0.3em] \item $5\cdot10^{-5}$ over $\SI{2}{\micro\meter}$, profile (a) \item $10^{-4}$ over $\SI{7}{\micro\meter}$, profile (b) \item $10^{-4}$ over $\SI{10}{\micro\meter}$, profile (c) \end{itemize} These findings are in agreement with results from the literature \cite{Li2015}. \begin{figure} \centering \includegraphics[scale = 0.30]{Fig4_temporal.png} \caption{ (Left) Transverse displacement of the echoes along the first $\SI{8}{\micro\meter}$ of the reconstruction with respect to the transmitted beam, as extracted from Fig.~S11 in SM at \cite{SM20}, and (right) delays calculated from equation (\ref{eq:deltax_deltat}), for the strain-free sample and the indented sample at the positions (a), (b) and (c) as presented in Fig. \ref{fig:Strain-profile}. The bars correspond to the half width at half maximum of the respective echoes. }\label{fig:table_time} \end{figure} These pioneering results demonstrate the extreme sensitivity of dynamical diffraction effects to localised strain fields of small amplitude. Echoes are produced from a volume extending throughout the full sample thickness, of which the strained area represents a mere $10\%$. Further improvements of the model will include the implementation of a radial decay of the strain field and eventually the crystal elastic anisotropy. A natural progression of this research is measuring echoes in the diffraction direction, where the spatio-temporal signal does not suffer from absorption effects, as in FD. As presented in Fig.~S13 in SM at \cite{SM20}, the echoes then probe the crystal in a more homogeneous manner, which allows the information from lattice deformations to be retrieved more clearly. We have presented an experimental study of the dynamical diffraction produced by a Si wafer in transmission geometry, in the presence of a localised strain. We have shown that strain can be used to finely tune the time delay of the echoes, in a way we are able to model. We achieve a high resolution in both sample and detection space with the use of a focused beam combined with a tele-ptychography approach. With the use of a nanobeam, we effectively increase the sensitivity of our measurement to strain gradients and sample heterogeneity by reducing the illuminated volume. Our approach provides a sensitive and efficient way to analyse the impact of strain of different amplitudes on dynamical diffraction effects from perfect crystals. Conversely, understanding how dynamical diffraction is affected by the presence of strain in otherwise perfectly ordered crystals offers a new and markedly more sensitive tool for the characterisation of strain and defects in crystalline materials.
This work provides insights on the dynamical effects arising in coherent imaging experiments involving high atomic-number crystals, such as Ni, Au, InSb or CdTe, for which the extinction length reduces to the \SI{}{\micro\meter} and sub-\SI{}{\micro\meter} level, and contributes to the landscape of emerging experimental and analysis approaches developed to model them. It is important to mention that the presence of the echoes in the diffracted signal could be distorting the results in temporal ultrafast studies. Finally, being able to predict and control the behaviour of dynamical diffraction in the presence of strain opens the fascinating possibility of strain-tailoring ultrafast optics. This might include the generation of multi-bunch sources from single ultrafast pulses, or the production of tailored crystals for split-and-delay lines with full control of the timing and width of the generated pulses with respect to the incoming pulse. We acknowledge MAX IV Laboratory for beamtime under Proposal 20180253. MAX IV is supported by the Swedish Research Council, Vinnova and Formas under contracts 2018-07152, 2018-04969 and 2019-02496, respectively. The NanoMAX staff is acknowledged for support during the preparation and the execution of the experiment. M.V. acknowledges funding by the European Union's Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie grant agreement No 701647, and the SNSF grant No 200021L\_169753. We thank Zdenek Matej, Manuel Guizar-Sicairos, Bill Pedrini, Virginie Chamard and Kenneth Finkelstein for useful discussions. A.R.F. and D.C. conceptualized the work; H.S.I. and M.H.C. prepared the samples; A.R.F., D.C., M.H.C., M.V. and A.D. planned the experiment; A.R.F., D.C. and A.D. performed the experiment; A.R.F. analysed the data with contributions of A.D., K.W. and M.V.; A.R.F. performed the simulations; D.C. and A.R.F. wrote the manuscript with the contribution of all authors.
{ "timestamp": "2021-10-12T02:31:29", "yymm": "2012", "arxiv_id": "2012.08893", "language": "en", "url": "https://arxiv.org/abs/2012.08893" }
\section{Introduction} The reproduction number, $R_t \equiv R$, gives the average number of new infections caused by a single infected person throughout the infectious period. In contrast to the basic reproduction number $R_0$, which describes the reproduction of the virus in a na\"ive, unmitigated population, $R$ (sometimes called the \emph{effective} reproduction number) varies through time as the epidemic develops and the opportunities for transmission change due to, for example, behavioral response, seasonality, and changes in the relative size of the susceptible population. In every population, some individuals will cause considerably more infections than others, a phenomenon known as \emph{superspreading}. It can be quantified using a framework provided by \citet{lloyd2005superspreading}. In this paper, we extend the model of \citet{cori2013new} to include the phenomenon of superspreading. Our goal is to better quantify the uncertainty inherent in this type of estimate of $R$, \emph{not} to derive a more accurate estimate. Ultimately we are interested in the estimation of $R$ and specifically in the question whether, given current case numbers, we can claim with statistical guarantees that $R \leq 1$ or $R > 1$. Given the growing body of evidence about the existence and importance of superspreaders \citep{Adam2020clustering, Liu2020secondary}, we incorporate this feature into our models. We observe two important effects: first, it becomes increasingly difficult to accurately estimate the reproduction number $R$ in the presence of superspreading; second, models with superspreading produce prediction intervals for new cases that have improved coverage compared to those without superspreading. Both of these are demonstrated in our Austrian case-study in Section \ref{sec:austria}. In particular, it becomes infeasible even in early May to support the claim that $R<1$ using our methods. This is a critical period of time, as it coincides with the removal of lockdown restrictions in Austria. More specifically, the width of a credible interval for $R$ should decrease as a function of the total number of cases used during estimation and increase with the extent of superspreading. Let $S$ be the set of days used to estimate $R$ in the nowcasting framework presented in Section \ref{sec:nowcasting} and assume that the (average) reproduction number does not change over time. One would then expect a $100(1-\alpha)\%$ credible interval to have width approximately equal to \begin{IEEEeqnarray}{rCl} \frac{2z_{1-\alpha/2}}{\sqrt{k \sum_{s \in S} I_s}}, \end{IEEEeqnarray} where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution; this approximation holds for values of the dispersion parameter $k$ much smaller than 1, which corresponds to scenarios with high superspreading. We derive this exact functional form in a simplified model introduced in Section \ref{sec:generation}. \subsection{Nowcasting} \label{sec:nowcasting} The goal of nowcasting is to get accurate estimates of the current state of an epidemic. Given that our observed infections are random observations from an underlying process, our goal is to understand the parameters of that process, particularly with respect to the reproduction number. In addition, we define a time-varying parameter we call the ``momentum'' of an epidemic, which is a \emph{random} realization of population infectiousness at a time point that accounts for superspreading. This is introduced formally in Section \ref{sec:model}.
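As a quick numerical illustration of the interval-width heuristic displayed above (Python/SciPy; the value $k=0.072$ anticipates the dispersion parameter adopted in Section \ref{sec:model}):
\begin{verbatim}
from scipy.stats import norm

def ci_width(k, total_cases, alpha=0.05):
    # Heuristic width  2 * z_{1 - alpha/2} / sqrt(k * sum_s I_s).
    z = norm.ppf(1.0 - alpha / 2.0)
    return 2.0 * z / (k * total_cases) ** 0.5

print(ci_width(k=0.072, total_cases=1000))  # ~0.46: wide despite 1000 cases
print(ci_width(k=10.0, total_cases=1000))   # ~0.04: little superspreading
\end{verbatim}
Even with 1000 cases in the estimation window, strong superspreading leaves an interval of width roughly $0.46$ around $R$, too wide to distinguish $R\leq 1$ from $R>1$ near criticality.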
Benchmark methods for estimating the reproduction number $R$ include those of \citet{cori2013new} and \citet{WallingaT04}. The method of \citet{cori2013new} provides near real-time estimation of $R$ and is implemented in the R software package `EpiEstim'. An improvement of this framework is given in \citet{thompson2019improved}, which accounts for variability in the generation interval (defined below). A substantial extension of the EpiEstim-package (`EpiNow') was developed by a group of researchers at the London School of Hygiene and Tropical Medicine \citep{abbott2020estimating}. The method of \citet{WallingaT04} provides an alternative estimate for historical values of $R$. Contrary to the methods discussed in this paper, it requires observations from both before and after the time point at which an estimate for $R$ is desired. An important overview of other estimation methods and challenges due to COVID-19 is given in \citet{gostic2020practical}, and a comparative analysis of statistical methods to estimate $R$ is given in \citet{o2020comparative}. If the epidemic is at an early stage, the reproduction number $R$ and the rate of exponential growth are connected by the Euler-Lotka equation \citep{wallinga2007generation, ma2020estimating}. As we follow the framework of \citet{cori2013new}, we briefly describe their basic model. Let $I_0$ be the number of initial infections and $I_1, I_2, \ldots$ be the numbers of new infections on days $1,2,\ldots$. By $(w_n)_{n\geq 1}$ we denote the \emph{generation interval distribution}. If $J_m$ denotes the number of people infected by a specific person on the $m$-th day after this person got infected, then we have for $m\in \mathbb{N}$ $$w_m=\frac{\mathbb{E}[J_m]}{\sum_{l=1}^\infty \mathbb{E}[J_l]}.$$ We assume that a newly infected individual does not cause secondary cases on the same day, corresponding to $w_0=0$. The generation interval can be interpreted as the infectiousness profile of infected persons. The basic model of \citet{cori2013new} assumes that the stochastic process of total new infections on day $t$, $(I_t)_{t\in \mathbb{N}}$, satisfies \begin{align} \label{Cori13model} I_t \sim \text{Poisson}\left(R_t \sum_{m=1}^{t}I_{t-m}w_m\right), \end{align} for a sequence of numbers $R_t,\,t\in\mathbb{N}$. In practice it is often assumed that the generation interval distribution is given as a Gamma distribution that has been discretized in such a way that $w_m=0$ for all $m$ larger than some cut-off number $\nu$ \citep{gostic2020practical}. As a result, the sum in \eqref{Cori13model} will only have $\nu \in \mathbb{N}$ summands, and to make assertions about $I_t$ we only have to consider the case numbers $I_{t-\nu}, \ldots, I_{t-1}$. As $\nu$ is a parameter that can vary between diseases, this term is kept and used throughout our model description in Section \ref{sec:model}. When estimating the time-varying reproduction number, \citet{cori2013new} assume that the reproduction number has stayed constant over a window of $\tau$ days. In this case, for $s \in \{t-\tau+1,\ldots,t\}$, equation \eqref{Cori13model} simplifies to \begin{align} \label{Cori13model2} I_s \sim \text{Poisson}\left(R \sum_{m=1}^{\nu}I_{s-m}w_m\right). \end{align} In order to treat $R$ as fixed in the above expression, it is necessary to only explicitly model a subset of time points, lest $R$ be assumed constant over all time points.
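To make the renewal structure of \eqref{Cori13model} and \eqref{Cori13model2} concrete, a minimal simulation sketch follows (Python/NumPy and SciPy; the discretization of the gamma generation interval is a simple binning chosen for illustration, with shape and scale chosen to match the mean 4.46 and standard deviation 2.63 used later in the paper):
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)

# Discretized gamma generation interval w_1..w_nu (w_0 = 0 by
# construction); shape/scale give mean ~4.46 and std ~2.63.
nu = 13
edges = np.arange(nu + 1)
w = np.diff(gamma.cdf(edges, a=2.87, scale=1.55))
w /= w.sum()

def simulate(R, I0=50, days=60):
    # I_t ~ Poisson(R * sum_{m=1}^{nu} I_{t-m} w_m), the basic model.
    I = [I0]
    for t in range(1, days):
        past = np.array(I[max(0, t - nu):t])[::-1]  # I_{t-1}, I_{t-2}, ...
        I.append(rng.poisson(R * np.sum(past * w[:len(past)])))
    return np.array(I)

print(simulate(R=1.2)[-5:])  # supercritical: counts drift upward
\end{verbatim}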
Note that the reproduction number in the sense of \eqref{Cori13model2} does not denote the number of people that actually have been infected by a given individual, but rather describes what one would expect in an ``average'' evolution of the epidemic. Furthermore, while $R=R_t$ is assumed to be constant over the window of width $\tau$, as this window moves through time the method produces \emph{estimates} of $R$ that slowly vary over time. \subsection{Heterogeneity in Reproduction Numbers} \label{sec:heterogeneity} The motivation for our hierarchical Bayesian approach follows the framework of superspreading provided in \citet{lloyd2005superspreading}. Even if the reproduction number $R$ is constant over a small window of time, it might vary between individuals. We consider the reproduction number of a specific person with index $x$ to be drawn randomly as \begin{IEEEeqnarray}{rCl} r_x \sim \text{Gamma}(k,\, \text{rate} = k/R). \label{eqn:r_x} \end{IEEEeqnarray} This distribution has mean $R$ and variance $R^2/k$. Note that the above gamma distribution will also be referred to as having dispersion parameter $k$. The degenerate case $k=\infty$ corresponds to the deterministic case where $r_x= R$ for all individuals and leads to the model in \eqref{Cori13model2}. Given $r_x=r$, this person causes Poisson($r$) new infections. If one integrates out the Poisson parameter $r$, one is left with the unconditional number of descendants, which follows a negative binomial distribution with mean $R$ and variance $R+R^2 / k$. This negative binomial model is further analyzed in Section \ref{sec:generation}. A basic extension of \eqref{Cori13model2} that follows the concept of random individual reproduction numbers in the sense of \citet{lloyd2005superspreading} is to assign, on day $t$, the individual reproduction numbers $r_1^t, \ldots, r_{I_t}^t$ to the $I_t$ individuals that got infected on this day. This leads to the recursion \begin{align}\label{Kory20} I_t \sim \text{Poisson}\left(\sum_{m=1}^\nu w_m \sum_{x=1}^{I_{t-m}} r_x^{t-m}\right), \end{align} where the individual reproduction numbers $r_x^{t-m}$ are drawn i.i.d.\ according to \eqref{eqn:r_x}. Note that for the degenerate case $k=\infty$, \eqref{Kory20} recovers \eqref{Cori13model2}. This forms the foundation of the model explained in detail in Section \ref{sec:model}. The theme of the present paper is close to that of \citet{donnat2020modeling}, in which heterogeneity in $R$ between \emph{groups} is explicitly modeled. While the high-level descriptions of these models sound nearly identical, those models are meaningfully different from ours. In particular, \citet{donnat2020modeling} are interested in estimating group-specific or time-varying reproduction numbers for different geographical regions and age groups. On one hand, with sufficient group-specific data, this provides tools of a much broader scope than we present here; on the other hand, it is assumed that within-group variability is negligibly small. Instead, we focus on aggregate data from a \emph{single} geographical region but do \emph{not} assume that individual variability is negligible. Rather, this is precisely the variability we are interested in modeling. Furthermore, our critique of the estimability of the reproduction number transfers to their setting as well: if within-group variability exists, group-specific reproduction numbers are more difficult to estimate than previously acknowledged. \section{Methods} This section introduces two methods.
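Before detailing them, a short simulation sketch (Python/NumPy; arbitrary illustrative parameters) makes the gamma-Poisson mixture of \eqref{eqn:r_x} tangible: individual reproduction numbers are gamma distributed, offspring counts are conditionally Poisson, and the marginal offspring distribution is negative binomial with mean $R$ and variance $R+R^2/k$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
R, k, n = 1.2, 0.072, 200_000  # arbitrary illustrative parameters

# Individual reproduction numbers: Gamma(shape=k, rate=k/R) has mean R
# and variance R^2/k (numpy parameterizes by scale = R/k).
r_x = rng.gamma(shape=k, scale=R / k, size=n)
offspring = rng.poisson(r_x)   # marginally negative binomial

print(offspring.mean(), offspring.var())  # ~ R and R + R^2/k = 21.2
print((offspring == 0).mean())            # most individuals infect no one
\end{verbatim}
Running it shows that with $k=0.072$ the large majority of individuals infect no one, while rare individuals account for most of the transmission.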
First, the ``momentum'' model formulates the estimation problem as a Bayesian Poisson regression. Second, the ``generation'' model is a simplification which provides a fast approximation to the momentum model, as well as an explicit formula for the dependence of the credible interval width on $k$. Both are of interest beyond COVID modeling and aim to address different goals: precise estimation (momentum) and valuable speed and heuristics (generation). \subsection{The ``Momentum'' Model} \label{sec:model} As mentioned in the introduction, we identify an unobserved random variable which we term the ``momentum'' of the epidemic. This follows from a simple notational change in \eqref{Kory20}, based on the observation that a sum of i.i.d.\ Gamma random variables is also Gamma distributed, with the same rate parameter and the shape parameters added. We rewrite \eqref{Kory20} as \begin{align}\label{Kory20.2} I_t \sim \text{Poisson}\left(\sum_{m=1}^\nu w_m \theta_{t-m}\right), \end{align} where \begin{IEEEeqnarray}{rCcCl} \label{Kory20.2b} \theta_{t} & = & \sum_{x=1}^{I_t} r_x^t & \sim & \text{Gamma}(I_tk,\, \text{rate} = k/R). \end{IEEEeqnarray} The terms $(\theta_t)_{t\geq 0}$ are collectively referred to as the ``momentum'' of the disease. They will be treated as a set of nuisance parameters of the offspring distribution, as our primary interest lies in estimating the reproduction number $R$. In our Bayesian framework introduced below, $R$ is a hyperparameter of the prior distribution for $(\theta_t)_{t\geq 0}$. Equation \eqref{Kory20.2} describes the distribution of $I_t$ conditioned on its whole past, i.e., $I_{s}, \theta_{s}$, $s < t$. Analogously, equation \eqref{Kory20.2b} describes $\theta_s$ given its history. The difference in what we understand as the relative past originates from $\theta_t$ being conceptually determined ``after'' $I_t$. For increased clarity of the form of the model and the estimation methods required, we recast our model as a Bayesian Poisson regression using vector notation. This is made painfully explicit by using an arrow as in $\vec{I}$ for vectors. Following \citet{cori2013new}, we estimate $R$ by explicitly modeling a set of $\tau$ days over which we assume $R$ to be constant. We specify the regression function for each observation in this estimation window. To condense notation, we use $[l]$, for $l\in\mathbb{N}$, to denote the vector $(1,2,\ldots,l)$. Similarly, $[l,\,m]$ for $l,\,m\in\mathbb{N}$ is shorthand for the vector $(l,l+1,\ldots,m)$, i.e., $[l]=[1,l]$. This notation will primarily be used for vector indices. Furthermore, the indices of our vectors increase in time. As such, our generation interval truncated to $\nu$ days can be condensely written as $\vec{w}_{[\nu]} = (w_1, \ldots, w_\nu)$. Similarly, the $\tau$ observations we model are given by $\vec{I}_{[t-\tau+1,\,t]} = (I_{t-\tau+1}, \ldots, I_t)$.
As a regression model for $\vec{I}_{[t-\tau+1,\,t]}$, equation \eqref{Kory20.2} can be written as \begin{IEEEeqnarray*}{rCl} \vec{I}_{[t-\tau+1,t]} & \sim & \text{Poisson}(W \vec{\theta}_{[t-\nu-\tau+1,t-1]})\quad \text{where} \IEEEyesnumber\label{eq:reg}\\ W &=& \begin{pmatrix} w_\nu & w_{\nu-1} & \ldots & w_1 & 0 & 0 & \cdots & 0 \\ 0 & w_\nu & w_{\nu-1} & \ldots & w_1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \cdots &\vdots \\ 0 & \cdots & 0 & w_\nu & w_{\nu-1} & \ldots & w_1 & 0\\ 0 & \cdots & 0 & 0 & w_\nu & w_{\nu-1} & \ldots & w_1 \end{pmatrix}\\ \end{IEEEeqnarray*} In the above expression, we have a fixed covariate matrix $W$ which is a function of the generation interval $\vec w_{[\nu]}$; a small construction sketch of $W$ is given at the end of this subsection. The momentum parameters $\vec{\theta}_{[t-\nu-\tau+1,\,t-1]}$ are seen to be the regression parameters to be estimated. Note that the expressions in the previous display suppress the notation for conditioning on all observations before time $t-\tau+1$. Furthermore, given $\vec{\theta}_{[t-1]}$, $I_t$ is independent of $\vec{I}_{[t-1]}$. We place a prior distribution on $\vec\theta$ which depends on $R$ as in equation \eqref{Kory20.2b}, as well as a hyperprior on $R$ to account for the previously identified uncertainty in the distribution of $R$ as reported in \citet{abbott2020estimating}. As we have parameterized the gamma prior on $\theta_t$ to have mean $I_tR$, the conjugate hyperprior for $R$ is the inverse-gamma distribution. This is transparent in the posterior distribution given by equation \eqref{eq:post} below. Hence we use an inverse-gamma hyperprior on $R$, where the hyperparameters are set to match the results of \citet{abbott2020estimating}. As such, we assume that $R$ has mean 2.6 and standard deviation 2, yielding shape parameter $3.69$ and rate parameter $6.994$: \begin{IEEEeqnarray*}{rCl} R & \sim & \text{Inv-Gamma(3.69, rate = 6.994)}. \end{IEEEeqnarray*} An a priori distribution for $R$ is itself uncertain, and one could theoretically place additional hyperpriors on the parameters of this inverse-gamma distribution. That being said, the change would increase computational complexity while introducing hyper-hyperparameters that would be difficult to estimate. Hence, this proposal distribution for $R$ is treated as fixed. This regression formulation is important as it highlights the latent variables $\vec{\theta}$ that are required to fully determine the generative model. It also focuses attention on which observations are conditioned upon and which are treated as random, i.e., the $\tau$ observations to which we fit the model are treated as random. This is relevant as more than $\tau$ nuisance parameters are present, namely $\nu+\tau-1$. Observe that the earliest data point is $I_{t-\tau+1}$, which itself requires a history of $\nu$ momentum values of $\vec{\theta}$ to determine. While we also think of individual reproduction numbers as changing over time due to factors such as changes in social restrictions, the assumption of constant $R$ over a period renders this moot. Likewise, we set $k$ to be a constant for the results presented in Section \ref{sec:austria}, as $k$ is best estimated with contact tracing data instead of case count data. We set $k=0.072$, in line with the results of \citet{Lax+20}, which estimated the extent of superspreading for COVID-19 from Indian data. This is also within the range of parameter values identified in \citet{Endo+20}. Alternatively, it is possible to consider an independently estimated distribution for $k$.
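As promised above, here is a small construction sketch of the design matrix (Python/NumPy; a uniform placeholder generation interval and no inference, dimensions only):
\begin{verbatim}
import numpy as np

def design_matrix(w, tau):
    # Banded (tau x (nu + tau - 1)) matrix with the reversed generation
    # interval (w_nu, ..., w_1) on each row, shifted right by one per row.
    nu = len(w)
    W = np.zeros((tau, nu + tau - 1))
    for row in range(tau):
        W[row, row:row + nu] = w[::-1]
    return W

nu = tau = 13
w = np.full(nu, 1.0 / nu)          # uniform placeholder interval
W = design_matrix(w, tau)
theta = np.ones(nu + tau - 1)      # momentum nuisance parameters
print(W.shape)                     # (13, 25)
print(W @ theta)                   # Poisson means for I_{t-tau+1}, ..., I_t
\end{verbatim}
The shape $(\tau,\,\nu+\tau-1)$, here $(13, 25)$, makes visible the 25 nuisance parameters behind the 13 modeled observations discussed below. We now return to the treatment of $k$.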
To estimate the momentum model with random $k$, one can merely draw $k$ from a proposal distribution and estimate the momentum model with this fixed value. This process is repeated for many sampled values of $k$, and the posterior samples for $R$ and $I_t$ from all $k$ are combined. This follows the same methodology as \citet{thompson2019improved}, where the generation interval was estimated with a separate data set before fitting model \eqref{Cori13model2} without superspreading. Brief results for this case are presented in \ref{sec:validation}, as none of the results change significantly. The joint estimation of $k$ and $R$ within the momentum model appears infeasible, as $k$ is the dispersion parameter of the nuisance parameter distribution. This makes learning about $k$ using this data highly challenging. A full derivation of the posterior distribution of the pair $R,\vec{\theta}_{[t]}$ given $\vec{I}_{[t]}$ is given in \ref{sec:derivation}. We obtain as posterior \begin{IEEEeqnarray*}{rCl} \multicol{3}{l}{p(R,\vec{\theta}_{[t - \tau -\nu + 1, \, t-1]}|\vec{I}_{[t - \tau - \nu + 1, \, t]})}\\ & \propto & p(\vec{I}_{[t-\tau+1,\,t]}, \vec{\theta}_{[t - \tau + 1,\, t-1]}|\vec{\theta}_{[t - \tau - \nu + 1, \, t - \tau]},\vec{I}_{[t - \tau - \nu + 1, \, t-\tau]},R) p(\vec{\theta}_{[t - \tau]},R|\vec{I}_{[t-\tau]})\\ & \propto & \left(\prod_{s = t - \tau + 1}^t \Big(\sum_{m < s} w_{s - m} \theta_m\Big)^{I_{s}} e^{- \sum_{m < s} w_{s - m} \theta_m } \right)\\ & & \cdot \left(\prod_{s = t - \tau + 1}^{t-1} \frac{k^{I_sk}}{\Gamma(I_sk)R^{I_sk}} \theta_s^{I_sk-1}e^{-\frac{k}{R} \theta_s} \right) \left(\prod_{s=t-\nu-\tau+1}^{t - \tau} \frac{k^{I_sk}}{\Gamma(I_sk)R^{I_sk}} \theta_s^{I_sk-1}e^{-\frac{k}{R} \theta_s}\right)\\ & & \cdot \left(R^{-3.69-1} e^{-6.994/R}\right).\IEEEyesnumber\label{eq:post} \end{IEEEeqnarray*} The first line of \eqref{eq:post} specifies the distribution of the observations given all other parameters, and the third line gives the inverse-gamma prior for $R$. The second line describes the distribution of $\vec\theta$, where we have explicitly partitioned the indices into two sets. The values $\theta_s$ in the first index set $[t-\tau+1,t-1]$ require no special discussion, as they depend on values $I_s$ which are being explicitly modeled. The values of $\theta_s$ in the second index set $[t-\nu-\tau+1,t-\tau]$, however, treat the corresponding $I_s$ values as \emph{fixed} and \emph{constant}. This is done so that we do not need to specify further nuisance parameters before time $t - \tau - \nu + 1$. Doing so would create an infinite recursion in historical observations, requiring us to treat $R_t$ as fixed for all $t$. Hence we need not only a prior for $R$, but also for $\vec{\theta}_{[t - \tau - \nu +1, \, t - \tau]}$. More details are provided in \ref{sec:derivation}. In order to condense notation for summations in exponents, let $S$ be the combined index set of the two products over $\theta_s$; i.e., $S = \{t-\nu-\tau+1,t-\nu-\tau+2,\ldots,t-1\}$. The additional shorthand below drops ``$s\in$'' from $s\in S$. With this notation, the posterior distribution of $R$ given $\vec{\theta}$ and $\vec{I}$ is $$p(R|\vec{\theta}_{[t-1]},\vec{I}_{[t]}) \propto R^{-k \sum_{S}I_s-3.69-1} e^{(-k\sum_{S}\theta_s - 6.994)R^{-1}},$$ which is Inv-Gamma($k \sum_{S}I_s+3.69$, $k\sum_{S}\theta_s + 6.994$). A perhaps counter-intuitive observation is that the posterior distribution of $R$ does not depend on the generation interval $\vec w_{[\nu]}$.
A perhaps counter-intuitive observation is that the posterior distribution of $R$ does not depend on the generation interval $\vec w_{[\nu]}$. This is the result of conditioning on $\vec{\theta}$ versus integrating it out as done in \citet{lloyd2005superspreading}. In our case, it is infeasible to integrate out $\vec\theta$, as the dependence is too complex. If we truly know the population infectiousness, i.e., the epidemic momentum at all points in time, then $\vec w_{[\nu]}$ is irrelevant for estimating $R$, because $\vec w_{[\nu]}$ just determines \emph{how we learn} about $\vec{\theta}$ via \eqref{eq:reg}. More concretely, there are no terms in \eqref{eq:post} that include all of $R$, $\vec{\theta}$, and $w_{[\nu]}$.

The posterior expectation and variance of $R$ are
\begin{IEEEeqnarray*}{rCl't}
\mathbb{E}[R|\vec{\theta},\vec{I}] & = & \frac{k\sum_{S}\theta_s + 6.994}{k \sum_{S}I_s+3.69-1} & and\\
\text{Var}[R|\vec{\theta},\vec{I}] & = & \frac{(k\sum_{S}\theta_s + 6.994)^2}{(k \sum_{S}I_s+3.69-1)^2(k \sum_{S}I_s+3.69-2)}.
\end{IEEEeqnarray*}
The denominator of the variance picks up an additional $k$ term, making credible intervals wider when $k$ is small. The dependence on $\vec\theta$ is difficult to remove in this general setting. Section \ref{sec:generation} considers a simpler setting in which $\vec\theta$ can be integrated out in order to derive a transparent function for credible interval width.

To estimate this model, we alternate between a Gibbs step to sample $R$ and a Metropolis-Hastings step to sample $\vec{\theta}$. As $\mathbb{E}[\theta_s|I_s,R] = I_sR$, we can initialize reasonable starting values for $\vec{\theta}$ using various values of $R$, such that we require little burn-in. We find the total chain length to be the more important tuning parameter for valid prediction and credible intervals. In all models presented in this paper, we set $\nu=\tau=13$ to make valid comparisons with results from the EpiEstim framework \citep{cori2013new}. We set $\vec w_{[\nu]}$ to be a discretized gamma distribution with mean 4.46 and standard deviation 2.63 per the results of \citet{agesInterval} for Austria, which are similar to values determined elsewhere \citep{Knight20, Ganyani2020}. Inference is conducted using the $10^6$ samples that remain after a burn-in of 1,000 and thinning by 5.

While the majority of the model validation and supporting graphs are relegated to \ref{sec:validation}, we address here the particular concern that we have 25 nuisance parameters in $\vec\theta$ for modeling 13 observations. Our simulation evidence indicates that all nuisance parameters are well-estimated, even those far in the past: coverage of $\vec\theta$ by credible intervals in simulated data is nearly exact. Furthermore, we see approximate coverage when predicting new cases in Section \ref{sec:austria}. As such, we do not believe that we are over-fitting the data with a larger number of nuisance parameters. This is in part due to the role of the prior distribution for $\theta_s$. For example, the first nuisance parameter $\theta_{t-\nu-\tau+1}$ only appears in a single observation term in the posterior \eqref{eq:post}: the distribution of $I_{t-\tau+1}$. Similarly, $\theta_{t-\nu-\tau+2}$ only appears in two, etc. The prior therefore plays a larger role in determining the values of these parameters.
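For completeness, here is one way to construct the discretized generation interval used above (a sketch under our own assumption that the daily weights are the gamma probability masses per day, renormalized over the $\nu$-day support; the paper does not spell out its discretization scheme):
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def discretized_gamma_gi(mean=4.46, sd=2.63, nu=13) -> np.ndarray:
    """Discretize a gamma distribution into daily weights w_1..w_nu.

    Each w_d is the gamma probability mass on (d-1, d], renormalized so
    the weights sum to one over the nu-day support.
    """
    shape = (mean / sd) ** 2      # gamma shape from mean and sd
    scale = sd ** 2 / mean        # gamma scale from mean and sd
    edges = np.arange(nu + 1)     # day boundaries 0, 1, ..., nu
    w = np.diff(gamma.cdf(edges, a=shape, scale=scale))
    return w / w.sum()
\end{verbatim}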
\subsection{Generation Model}
\label{sec:generation}
In order to directly relate the dispersion parameter $k$ to the width of the credible interval and to provide a fast approximation to the momentum model, we consider the trivial generation interval in which an infected person is only infectious for a single day. For real data, this assumption is obviously inaccurate. Therefore, we switch to modeling infections per generation instead of infections per day. While we model generations spanning multiple days, we estimate and forecast cases for conventional days.

When the generation interval is of this trivial form, $\vec{w}_{[1]} = (1)$, the model is purely Markovian and the data follow a Galton-Watson process. Recall that a Poisson$(\lambda)$-distributed random variable $Y$, where $\lambda$ is distributed according to Gamma$(\alpha,\beta)$, follows a negative binomial distribution \citep{lloyd2005superspreading}:
\begin{equation} \label{eq:NB}
Y \sim NB\left(\alpha,\frac{1}{1 + \beta}\right), \quad p(Y) = \frac{\Gamma(Y + \alpha)}{Y!\Gamma(\alpha)}\left(\frac{\beta}{ 1 + \beta}\right)^\alpha \left(\frac{1}{1 + \beta}\right)^Y.
\end{equation}
Applying \eqref{eq:NB} and $\vec{w}_{[1]} = (1)$ to the momentum model \eqref{Kory20.2} yields the following distribution for the infections $I_t$:
\begin{IEEEeqnarray}{rCl}
\label{eq:infections trivial 1}
I_t | \vec{I}_{[t-1]},R,k & \sim & NB \left( k I_{t-1}, \frac{R}{R + k} \right), \\
\label{eq:infections trivial 2}
p(I_t| \vec{I}_{[t-1]}, R, k) & = &\frac{\Gamma(I_t + kI_{t-1})}{I_t!\Gamma(kI_{t-1})} \left(\frac{k}{R + k}\right)^{k I_{t-1}} \left(\frac{R}{R+k}\right)^{I_t}.
\end{IEEEeqnarray}
In \ref{sec:meta-derivation}, we reparameterize this model in terms of $\frac{R}{R+k}$ in order to place a suitable prior which mimics that of the momentum model. After transforming the resulting posterior back to a distribution for $R$ and using standard normal approximation techniques \citep{gelmanBDA04}, we derive a normal approximation to the posterior:
\begin{IEEEeqnarray*}{rCl}
p(R|\vec{I}_{[t]},k) & \approx & N\left(\frac{k(\alpha - 1)}{\beta + 1},\frac{k^2(\alpha + \beta)(\alpha -1)}{(\beta + 1)^3} \right),
\end{IEEEeqnarray*}
where
\begin{IEEEeqnarray*}{rCl't'rCl}
\alpha & = & 98.82 + \sum_{ s = t - \tau + 1}^t I_s & and & \beta & = & 3.74 + k \sum_{s = t - \tau}^{t-1} I_s.
\end{IEEEeqnarray*}
We are interested in the setting in which $R\approx 1$ and $\beta \approx k\cdot\alpha$. Note that $\sum_{s = t - \tau + 1}^t I_s$ and $k \sum_{s = t - \tau}^{t - 1} I_s$ are of this approximate ratio: the terms in these two sums almost entirely overlap. Furthermore, while the hyperparameters (98.82 and 3.74) are of moderate size, they also approximately satisfy the desired ratio. This yields the following simplification of the variance of the normal approximation:
\[ \frac{k^2(\alpha + \beta)(\alpha - 1)}{(\beta + 1)^3} \approx \frac{k^2\alpha^2(k+1)}{k^3\alpha^3} = \frac{k+1}{k\alpha} \approx \frac{1}{k \sum_{s = t - \tau + 1}^t I_s}. \]
Hence, the approximate length of a credible interval for $R$ behaves like
\[ \frac{2z_{1-\alpha/2}}{\sqrt{k \sum_{s = t - \tau + 1}^tI_s}}. \]

It is clear that the assumption $\nu=1$ is highly unrealistic for COVID-19 and most other diseases. In order to bridge this gap, we estimate the model for non-overlapping generations instead of conventional days. The length of a generation is set equal to the mean of the generation interval, i.e.,
\[ D_{g} := \sum_{t = 0}^{\nu} t\, w_t. \]
Given the modeling assumptions we have made for COVID-19, a generation comprises approximately 4.87 conventional days. The first 4.87 days after infection also account for 64\% of the assumed infectiousness given by the generation interval. This helps explain why partitioning the data into generations produces reasonable results.
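A minimal simulation sketch of this generation-level process (our illustration; \texttt{numpy}'s \texttt{negative\_binomial(n, p)} counts failures before the $n$-th success, so the distribution above corresponds to $n = kI_{t-1}$ and $p = k/(R+k)$):
\begin{verbatim}
import numpy as np

def simulate_generations(I0: int, R: float, k: float, n_gen: int,
                         rng: np.random.Generator) -> np.ndarray:
    """Simulate the Galton-Watson model
    I_t | I_{t-1} ~ NB(k * I_{t-1}, R / (R + k))."""
    I = np.zeros(n_gen + 1, dtype=int)
    I[0] = I0
    for t in range(1, n_gen + 1):
        if I[t - 1] == 0:
            break  # the epidemic has died out
        I[t] = rng.negative_binomial(k * I[t - 1], k / (R + k))
    return I

rng = np.random.default_rng(0)
print(simulate_generations(I0=100, R=1.2, k=0.072, n_gen=10, rng=rng))
\end{verbatim}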
When a model is defined over generations, setting $\nu=1$ is equivalent to assuming that someone is equally infectious over $D_{g}$ days. The negative binomial model estimated using generations is thus approximately equivalent to the momentum model estimated using conventional days. In order to account for non-integer-valued generations, consider $D_{g} = \lfloor D_{g} \rfloor + D_{frac}$, where $D_{frac} \in [0,1)$. For simplicity, we assume that new infections are uniformly distributed during the day, so that we may use standard data with records of new daily cases. To avoid confusing subscripts indexing days with those indexing generations, times in the generation model will be indicated by $\tilde t$ instead of $t$.

Lastly, as we are interested in using the most recent data, we care about matching the right endpoint of our time series. As such, we compute the generations \emph{backwards} from a reference day $t$. Let day $t$ be the most recent day in our data set. We define the corresponding generation incidence, $\tilde I_{\tilde t}$, to be
\begin{IEEEeqnarray*}{rCl}
\tilde I_{\tilde t} & = & \sum_{s = 0} ^{\lfloor D_{g} \rfloor - 1} I_{t-s} + D_{frac} \cdot I_{t - \lfloor D_{g} \rfloor}.
\end{IEEEeqnarray*}
This is merely the sum over $\lfloor D_{g} \rfloor$ full days and a proportion of the remaining day. Infections for previous generations then sum similarly over the historical data, such that the generations form a partition of the days in our data set. As before, some mathematical details are moved to \ref{sec:meta-derivation}. With simple notational changes, however, we derive a model for generations which looks functionally identical to \eqref{eq:infections trivial 1}, i.e.,
\[ \tilde I_{\tilde t} | R, \tilde I_{\tilde t-1} \sim NB\left( \tilde I_{\tilde t-1}k, \frac{R}{k+R}\right). \]
This formula can then be used to forecast the cumulative incidence over several generations, as described in \ref{sec:meta-derivation}. This yields a simple, closed-form approximation of the momentum model without resorting to costly Bayesian computation methods.
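A sketch of the backwards aggregation of daily counts into generations (our illustration; \texttt{daily} holds daily incidence ending at the reference day $t$, and days are split fractionally under the within-day uniformity assumption):
\begin{verbatim}
import numpy as np

def aggregate_generations(daily: np.ndarray, D_g: float) -> np.ndarray:
    """Aggregate daily incidence into generations of length D_g days,
    computed backwards from the most recent day (daily[-1]), so that the
    generations partition the daily series."""
    gens = []
    right = float(len(daily))          # continuous time pointer (in days)
    while right >= D_g:
        left = right - D_g
        total = 0.0
        for d in range(int(np.floor(left)), int(np.ceil(right))):
            # overlap of day [d, d+1) with the window [left, right)
            overlap = min(d + 1, right) - max(d, left)
            total += overlap * daily[d]
        gens.append(total)
        right = left
    return np.array(gens[::-1])        # oldest generation first

daily = np.array([12, 15, 18, 22, 25, 30, 33, 35, 40, 47, 52, 60.0])
print(aggregate_generations(daily, D_g=4.87))
\end{verbatim}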
\section{Results}
\label{sec:austria}
This section focuses on understanding the evolution of the reproduction number in Austria between April 1 and October 31, 2020. As the momentum model effectively needs $\tau+\nu$ observations to be fit, this is approximately as early as estimates can be provided for Austria. Our goals are three-fold: to demonstrate the increase in estimated variability of $R$ due to superspreading, to provide valid prediction intervals for new cases, and to compare to similar models without superspreading. Some results will be shown for Croatia and Czechia as well to help establish the validity of our method, but the focus is on Austrian data. Other supporting graphs for Croatia and Czechia are given in \ref{sec:results2}.

An important component of estimating the reproduction number on a given date is to account for the delay distribution between date of infection and date of confirmation, as discussed in \citet{gostic2020practical}. If a delay of length $d$ occurs between infection and confirmation, then an infection observed at time $t$ actually occurred on day $t-d$. In this case, we have a ``true infection history'' that is distinct from the reported case numbers. In reality, the delay $d$ is random. \citet{abbott2020estimating} estimate and sample true potential infection histories given observed case numbers by sampling possible delays $d$. As our primary goal is to understand the uncertainty in estimating $R$, as opposed to providing best-in-class predictions of $R$ for a given date, we ignore this complication. This allows us to take as model input the historical 7-day moving average of reported cases and to compare methods with simple, transparent input. As a result, however, we are not attempting to predict the number of true infections on a given date. Instead, we are predicting the number of reported or confirmed cases on this date. In order to highlight this, axes are explicitly labeled with ``Reported Cases'' and ``Confirmation Date''.

Data on the progression of COVID-19 in Austria are shown in Figure \ref{fig:austria-history}. This graph includes curves for the raw infection data as reported by the European Centre for Disease Prevention and Control (Raw), the 7-day moving average of Raw (Raw (MA)), each sampled infection history (Sampled Inf.), and the daily median of the sampled infection histories (Sampled Inf.\ (M)). Observe that the boundary of the ``band'' created by the sampled infection histories is not smooth, as it is created from 1,000 distinct faded lines. Note that using sampled infection histories effectively shifts the time series backward in time. In order for the infection histories to approximately match the reported case numbers, we have aligned them in time.

\begin{figure}
\centerline{\includegraphics[width=1\textwidth]{austria-history.png}}
\caption{Summary of new cases of COVID-19 in Austria: raw infection data (Raw), the 7-day moving average of Raw (Raw (MA)), each sampled infection history (Sampled Inf.), and the daily median of the sampled infection histories (Sampled Inf.\ (M)).}
\label{fig:austria-history}
\end{figure}

As mentioned in Section \ref{sec:model}, we draw one million samples of $R$ and the momentum vector $\vec{\theta}$. To forecast future cases, we use an individual sample of parameters and run the momentum model for a specified period of time. Our graphs show results for the average number of new cases over the following week. As such, they are on the scale of daily reported cases. There is no additional smoothing of the raw data or predictions. As our input is the 7-day moving average, our prediction is the 7-day-ahead forecast of this moving average.
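A sketch of this forecasting step (our illustration): given one posterior draw of $R$ and the $\nu$ most recent momentum values, future days are simulated from the renewal equation, with each new momentum value drawn from its gamma prior.
\begin{verbatim}
import numpy as np

def forecast_momentum(theta_hist, w, R, k, horizon, rng):
    """Forecast `horizon` new days from one posterior draw.

    theta_hist: the nu most recent momentum values, oldest first;
    w: generation interval weights w_1, ..., w_nu. Each new day draws
    I ~ Poisson(sum_d w_d * theta_{t-d}) and then a new momentum value
    theta ~ Gamma(shape I*k, scale R/k), matching the model's prior.
    """
    theta = list(theta_hist)
    cases = []
    for _ in range(horizon):
        recent = np.array(theta[-len(w):])[::-1]  # theta_{t-1}, theta_{t-2}, ...
        lam = float(np.dot(w, recent))            # w_1*theta_{t-1} + w_2*theta_{t-2} + ...
        I_new = rng.poisson(lam)
        theta.append(rng.gamma(I_new * k, R / k) if I_new > 0 else 0.0)
        cases.append(I_new)
    return np.array(cases)
\end{verbatim}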
In all of the following graphs, we plot predictions and intervals from three models: the momentum model with $k=0.072$, the generation model of Section \ref{sec:generation} with $k=0.072$, and the EpiEstim model of \citet{cori2013new}. As mentioned previously and shown in \ref{sec:results2}, treating $k$ as random within a relevant region does not alter our results. We label the EpiEstim model ``Epi*'', as the estimates are produced directly via equation \eqref{eq:epistar} below instead of using the EpiEstim R package. As in \citet{cori2013new}, we fix a generation interval, as opposed to taking samples of a generation interval estimated from a separate data source as in \citet{thompson2019improved}. As a result, we are not comparing to the best-in-class model within the EpiEstim/EpiNow framework, but with a model of corresponding complexity to the momentum model. Other improvements to the modeling framework could then be built on top of the momentum model, as they have been for the model of \citet{cori2013new}.

To estimate the model of \citet{cori2013new}, we compute the parameters of its posterior distribution directly from the infection data:
\begin{IEEEeqnarray}{rCl}
\label{eq:epistar}
R_t|I_{[t]} & \sim & \text{Gamma}\left(a + \sum_{s=t-\tau+1}^t I_s,\, \text{rate} = b + \sum_{s=t-\tau+1}^t \sum_{m = 1}^\nu w_{m}I_{s - m}\right),
\end{IEEEeqnarray}
where $a$ and $b$ are the shape and rate parameters of the gamma prior distribution on $R$. We estimate this posterior distribution, draw one million samples for $R$, and run the corresponding data generating process \eqref{Cori13model} for the required number of days.
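A sketch of this Epi* computation (our illustration; the prior shape $a$ and rate $b$ are left as user inputs):
\begin{verbatim}
import numpy as np

def epistar_samples(I, w, tau, a, b, n, rng):
    """Posterior samples of R_t under Epi*:
    Gamma(shape a + sum of the last tau cases,
          rate  b + sum over those days of sum_m w_m * I_{s-m}).

    Requires len(I) >= tau + len(w); I[-1] is day t.
    """
    nu = len(w)
    shape = a + I[-tau:].sum()
    rate = b + sum(float(np.dot(w, I[s - nu:s][::-1]))  # w_1 with I_{s-1}, etc.
                   for s in range(len(I) - tau, len(I)))
    return rng.gamma(shape, 1.0 / rate, size=n)  # numpy gamma takes scale = 1/rate
\end{verbatim}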
\begin{figure}
\centerline{\includegraphics[width=1\textwidth, trim=0 0 0 0, clip]{austriaPI.png}}
\caption{Predictions between April 1 and October 31, 2020, and 90\% prediction intervals between two significant dates: June 15 and September 7, 2020. Predictions and intervals are for the 7-day average of new cases in the following week in Austria. Relevant event dates are given as vertical, dashed lines and are described in Table \ref{tab:austria-dates}. The Epi* predictions consistently lag behind the observed values, whereas the other methods overshoot in the peaks due to momentum. Models with superspreading produce prediction intervals 2-3 times as wide as those without and achieve better coverage.}
\label{fig:austriaPI}
\end{figure}

\begin{table}
\begin{center}
\caption{Dates of important events related to COVID-19 in Austria. Changes which occur in large parts of the country but not uniformly are listed as occurring in ``some regions''.}
\label{tab:austria-dates}
\begin{tabular}{lll}
\toprule
Label & Date & Event\\\midrule
NA & 2020-03-16 & Start of general lock down \\
1 & 2020-05-01 & Begin relaxation of movement restrictions\\
2 & 2020-05-15 & Bars and restaurants can open\\
3 & 2020-05-29 & Hotels and cultural sites can open\\
4 & 2020-06-15 & Near complete removal of COVID restrictions\\
5 & 2020-07-24 & Face masks mandatory in essential businesses\\
6 & 2020-09-07 & Start of school year in some regions\\
7 & 2020-09-14 & Face masks mandatory\\
8 & 2020-09-25 & Bars and restaurants close early in some regions \\
NA & 2020-11-03 & Start of general soft lock down \\\bottomrule
\end{tabular}
\end{center}
\end{table}

Figure \ref{fig:austriaPI} shows the difference between models with and without superspreading on Austrian data. In order to show a long time period, the data must be plotted on a logarithmic scale, such that the low case numbers in the summer months are visible. As this distorts the plotting of prediction intervals in the same graph, the comparison of prediction intervals is given separately, by focusing attention on the summer months between the effective end of COVID restrictions and the start of the school year. For reference, we marked the dates of important changes in COVID-19 restrictions in Austria as vertical, dashed lines. A complete list is available at \href{https://regiowiki.at/wiki/Chronologie_der_Corona-Krise_in_Österreich}{https://regiowiki.at} (in German). The events are described in Table \ref{tab:austria-dates}. When comparing the events to both reported cases and the estimated reproduction number in Figure \ref{fig:austriaCI}, it is necessary to keep the delay distribution in mind; i.e., the effect of an intervention will not be visible in confirmed cases, and thereby in the estimated reproduction number, for roughly two weeks \citep{abbott2020estimating}.

Prior to the removal of any lockdown restrictions, reported case numbers were decaying exponentially. This is visible as a linear decrease given the logarithmic scaling of the y-axis. The slope of this line changed substantially around the time that Austria began to reopen in May and June. From approximately July through the end of October, case numbers fluctuate between growing exponentially and brief periods of relative stability. These fluctuations are not modeled and reflect both noise and features which we do not include in our analysis, e.g., common holiday periods, changes in testing, etc. Throughout this period, some restrictions are brought back into effect without apparent substantial impact. Lockdown measures were reinstated at the end of the plotted window of time.

While all of the prediction curves track the observed cases, there are subtle but significant differences in behavior. If one looks closely, one can see that the Epi* model predictions lag behind the observed 7-day moving average: the model fails to accurately estimate the rapid changes in case numbers. On the other hand, the momentum and generation model predictions ``overshoot'' the peaks in the time series. As the name suggests, there appears to be excess ``momentum'' in the process around these change points, and the model anticipates cases to continue rising as in the previous days.

The various models produce prediction intervals with drastically different widths. Most notably, the intervals for the momentum model with $k=0.072$ are much wider than those of Epi*. The generation variant of this model produces intervals which are wider still. The momentum intervals are, on average, approximately three times as wide as those of Epi*. While the generation model provides a computationally cheap and fast estimate, it is clear that it suffers relative to the momentum model in terms of interval length. The ratio between the prediction interval lengths visible during the summer months is approximately the same throughout the entire prediction period.

To assess the validity of the prediction intervals, Table \ref{tab:coverage} shows, for each method, the proportion of true weekly new cases that fall within the prediction intervals over the prediction period. Coverage is shown for the 50\% and 90\% prediction intervals for the raw infection data. When cases are steadily increasing (or decreasing), prediction intervals become narrower, and when the behavior changes they become considerably wider. The prediction intervals of the momentum model cover the true values during periods of growth, while those of Epi* often fail to do so over the entire growth period. Clearly, coverage is still not exact, and all models perform worse on the Czech data (see \ref{sec:results2}). It is still notable that the momentum models provide approximate coverage in these cases, even with the inherent messiness of the COVID-19 case data. For example, Czechia had a much higher test positivity rate than Austria and Croatia during the majority of the prediction period, which is ignored in our model.

\begin{table}
\begin{center}
\caption{Coverage of the 50\% and 90\% prediction intervals (PI) for 7-day-ahead predictions of the 7-day moving average.
Models with superspreading improve coverage significantly over that of Epi*.}
\label{tab:coverage}
\begin{tabular}{llcc}
\toprule
Country & Model & Coverage, 50\% PI & Coverage, 90\% PI\\\midrule
Austria & Momentum, k = 0.072 & 0.46 & 0.79 \\
& Generation, k = 0.072 & 0.47 & 0.73 \\
& Epi*, k $\rightarrow \infty$ & 0.16 & 0.38 \\\midrule
Croatia & Momentum, k = 0.072 & 0.48 & 0.85 \\
& Generation, k = 0.072 & 0.49 & 0.77 \\
& Epi*, k $\rightarrow \infty$ & 0.18 & 0.47 \\\midrule
Czechia & Momentum, k = 0.072 & 0.40 & 0.69 \\
& Generation, k = 0.072 & 0.39 & 0.66 \\
& Epi*, k $\rightarrow \infty$ & 0.12 & 0.32 \\\bottomrule
\end{tabular}
\end{center}
\end{table}

As the reproduction number is unobserved, we are unable to compare our estimates of $R$ in the supervised manner in which we compared our model forecasts. Given the previous discussion, though, we see that the additional variability provided by the momentum model is needed to provide prediction intervals with approximate coverage. Figure \ref{fig:austriaCI} shows the median predictions and 90\% credible intervals for $R$ given by the momentum, generation, and Epi* models. Intervals are, in general, asymmetric and skewed toward higher values. The figure clearly demonstrates that the intervals for $R$ are drastically different: with superspreading, intervals for $R$ are roughly 2-3 \emph{times} as wide as those without. This could have potentially large implications for policy making, as we know that relatively small changes in the size of $R$ can lead to large differences in the number of new cases if the disease is allowed to progress unchecked.

\begin{figure}
\centerline{\includegraphics[width=1\textwidth, trim=0 0 0 0, clip]{austriaCI.png}}
\caption{Credible intervals for R in Austria. The momentum and generation model predictions are consistently slightly higher than those of Epi*. They also produce credible intervals that are 2-3 times as wide. Relevant event dates are given as vertical, dashed lines and are described in Table \ref{tab:austria-dates}. Observe that $R$ becomes indistinguishable from 1 using our models around the time when lockdown restrictions begin to be removed.}
\label{fig:austriaCI}
\end{figure}

Near the beginning of our estimation period, around the time when restrictions were being relaxed in Austria, it quickly becomes infeasible to claim that the reproduction number is below 1; i.e., the credible intervals estimated during May and June include the value 1. Beginning in July and August, however, we observe long periods with reproduction numbers significantly greater than 1, even with our comparatively wide credible intervals. As before, there is a delay of approximately two weeks between when these interventions occur and when any change in the reproduction number can be observed. Hence any discussion of dates should be interpreted loosely.

As we see a clear improvement in coverage from switching to a model with superspreading, it is useful to have a clearer understanding of the degree of heterogeneity implied by our models. To do so, we consider the posterior samples of $R$ from October 31, 2020. According to equation \eqref{eqn:r_x}, each individual has a separate reproduction number, $r_x$, given the population reproduction number $R$. For each posterior sample of $R$, we therefore draw an individual $r_x$ and secondary infections $I_x$. The Epi* models of \citet{cori2013new} set $r_x=R$ for all individuals.
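A sketch of this individual-level sampling (our illustration; we assume, consistently with the gamma prior on $\theta_s$, that $r_x \sim \text{Gamma}(k, \text{rate} = k/R)$, which has mean $R$):
\begin{verbatim}
import numpy as np

def individual_draws(R_samples, k, rng):
    """For each posterior sample of R, draw one individual reproduction
    number r_x ~ Gamma(shape k, rate k/R) (mean R) and the corresponding
    secondary infections I_x ~ Poisson(r_x)."""
    r_x = rng.gamma(k, R_samples / k)   # numpy gamma takes scale = R/k
    I_x = rng.poisson(r_x)
    return r_x, I_x

def top_share(r_x, q=0.1):
    """Share of expected new infections from the most infectious fraction
    q of individuals (one point on the Lorenz curve)."""
    r_sorted = np.sort(r_x)[::-1]
    top = int(np.ceil(q * len(r_sorted)))
    return r_sorted[:top].sum() / r_sorted.sum()
\end{verbatim}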
Hence, it is possible to compare the degree of heterogeneity by considering a Lorenz curve of the population of values of $r_x$ or $I_x$ \citep{Lorenz}. The Lorenz curve is typically used to demonstrate income inequality by showing the proportion of overall income or wealth held by the bottom x\% of the people. Here we consider this to be ``infectiousness inequality''. The distribution of $R$ estimated for October 31, 2020, as well as the implied Lorenz curve, are shown in Figure \ref{fig:lorenz}. The Lorenz curve is a representation of the cumulative distribution function of the number of new expected infections. It allows us to visualize the degree of heterogeneity by seeing which proportion of individuals contribute to new infections. One can draw the Lorenz curve with $I_x$ instead of $r_x$, which only results in a slightly rougher image with no qualitative differences.

While the population reproduction number is moderately high, this is largely driven by superspreading. The momentum model implies that the top 10\% of individuals contribute 84.6\% of new infections, while the top 20\% contribute 98\%. The usefulness of Figure \ref{fig:lorenz-sub} is that it shows this entire distribution instead of these two common quantiles. We can clearly see that essentially no new cases are produced by nearly 75\% of infected individuals. These statistics closely match the observed values reported in \citet{Ari+2020}. The figures can also be drawn for the estimation setting in which $k$ is assumed to be randomly drawn from an appropriate gamma distribution. The resulting graphs look essentially identical. As such, treating $k$ as fixed at $0.072$ or fluctuating in the approximate range $[.04,\,.2]$ makes little difference in the infectiousness inequality implied by the momentum model.

\begin{figure}
\centering
\begin{subfigure}{.49\textwidth}
\centerline{\includegraphics[width=1\textwidth, trim=90 0 90 0, clip]{R_dens.png}}
\caption{Estimated distribution of $R$}
\label{fig:R_dens}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centerline{\includegraphics[width=1\textwidth, trim=90 0 90 0, clip]{lorenz.png}}
\caption{Lorenz curve of $r_x$}
\label{fig:lorenz-sub}
\end{subfigure}
\caption{Momentum model estimates of $R$ and individual heterogeneity for October 31, 2020. 10\% of individuals are expected to contribute approximately 84.6\% of new infections. The dashed curve in (b) corresponds to a model without superspreading (Epi*).}
\label{fig:lorenz}
\end{figure}

\section{Conclusion}
In this paper, we provide a simple extension of the \citet{cori2013new} model to account for superspreading. While we explicitly use this to model the COVID-19 pandemic, the methods are easily adaptable to other diseases where superspreading is present. This ``momentum'' model incorporates unobserved random variables which drive the process of new infections. Even if case numbers and $R$ are relatively small, the presence of superspreaders can increase the momentum of the disease beyond what would be expected if all individuals had the same infectiousness. We observe that this appears necessary to properly track the steep increases and decreases in reported COVID-19 cases. The momentum model produces credible intervals and posterior predictive intervals that are approximately 2-3 times as wide as those that neglect superspreading. We find that these wider intervals significantly improve the coverage of the prediction intervals.
The heterogeneity in infectiousness implied by the momentum model is extremely high: 10\% of individuals contribute approximately 84.6\% of new infections.

As Bayesian models are time- and resource-intensive to estimate, we also derive a simplified model in which infected individuals are only infectious for a single day. In order to improve the fit to real data, we partition disease incidence into generations, each of which spans multiple days. The length of each generation corresponds to the generation time of the disease, and within this period an infected person is assumed to be equally infectious. This yields two main benefits. First, estimation of $R$ and predictions of new cases are immediately available through an explicit approximation of the posterior distribution of $R$. Second, this model allows us to derive a simple equation relating the width of credible intervals to the degree of superspreading. Hence, we have rigorous analysis which supports the heuristic that the approximate length of a credible interval for $R$ behaves like
\[\frac{2z_{1-\alpha/2}}{\sqrt{k \sum_{s = t - \tau + 1}^tI_s}},\]
where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution; this approximation holds for values of the dispersion parameter $k$ much smaller than 1, which correspond to scenarios with strong superspreading. The model assumes that $R$ has been constant for the preceding $\tau$ days.
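As a closing illustration (ours, not from the original analysis), this heuristic is simple enough to evaluate directly:
\begin{verbatim}
from scipy.stats import norm

def ci_width(k: float, total_cases: float, level: float = 0.9) -> float:
    """Approximate credible interval width for R:
    2 * z_{1-alpha/2} / sqrt(k * sum of cases in the window)."""
    z = norm.ppf(0.5 + level / 2)
    return 2 * z / (k * total_cases) ** 0.5

# With strong superspreading (k = 0.072), 2,000 cases in the window still
# leave an interval roughly 0.27 wide; with k = 1, about 0.07.
print(ci_width(0.072, 2000), ci_width(1.0, 2000))
\end{verbatim}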
{ "timestamp": "2021-03-16T01:34:13", "yymm": "2012", "arxiv_id": "2012.08843", "language": "en", "url": "https://arxiv.org/abs/2012.08843" }
\section{Introduction}
\IEEEPARstart{C}loud computing demands high portability. Containerisation ensures compatibility of applications and their environment by encapsulating applications with their libraries and configuration files \cite{8360359}, thus enabling users to move and deploy programs easily among clusters. Containerisation is a virtualisation technology \cite{DBLP:journals/spe/RodriguezB19}. Rather than starting a holistically simulated OS on top of the host kernel as in a Virtual Machine (VM), a container only shares the host kernel. This feature makes containers more lightweight than VMs. Containers are dedicated to running micro-services, and one container mostly hosts one application. Nevertheless, containerised applications can become complex, \textit{e.g.} thousands of separate containers may be required in production. Production can benefit from container orchestrators that provide efficient environment provisioning and auto-scaling.

High Performance Computing (HPC) systems are traditionally applied to perform large-scale financial and engineering simulation, which demands low latency and high throughput. Typical HPC jobs are large workloads that are often host-specific and hardware-specific. HPC systems are typically equipped with workload managers. A \textit{workload manager} is composed of a \textit{resource manager} and a \textit{job scheduler}. A resource manager \cite{Hovestadt2003} allocates resources (\textit{e.g.} CPU, memory), schedules jobs and guarantees no interference from other user processes. A job scheduler determines the job priorities, enforces resource limits and dispatches jobs to available nodes \cite{Klusacek2015}. Two mainstream workload managers are TORQUE \cite{Staples2006} and Slurm \cite{Jette02slurm:simple}. Slurm includes both a resource manager and a job scheduler, while Torque originally incorporated only a resource manager and was later extended with job schedulers. Overall, HPC workload managers lack micro-service support and the deeply-integrated container management capabilities in which container orchestrators manifest their efficiency.

We herein describe a plugin named \textit{Torque-Operator}. It serves as a bridge between the HPC workload manager \textit{Torque} and the container orchestrator \textit{Kubernetes} \cite{8457916}. Kubernetes has been widely adopted, as it has a rapidly growing community and ecosystem with plenty of platforms being developed upon it. Furthermore, we propose a testbed architecture composed of an HPC cluster and a big data cluster, where Torque-Operator enables scheduling container jobs from the big data cluster to the HPC cluster.

The rest of the paper is organised as follows. Firstly, Section~\ref{sec:related_work} briefly reviews related work. Next, we describe the proposed architecture of our testbed and Torque-Operator in Section~\ref{sec:architecture}. Then, preliminary results are given in Section~\ref{sec:results}. Lastly, Section~\ref{sec:future_work} concludes this paper and proposes future work.

\section{Related Work}\label{sec:related_work}
Torque-Operator extends WLM-Operator \cite{wlmoperator2019} with Torque support. Both operators share similar mechanisms, \textit{i.e.} they schedule container jobs from cloud clusters to HPC clusters; nevertheless, their implementations vary significantly, as Torque and Slurm have different structures and parameters. WLM-Operator only allows submission of Slurm batch jobs wrapped in a Kubernetes \textit{yaml} file from a cluster managed by Kubernetes.
It invokes the Slurm binaries, \textit{i.e.} \textit{sbatch}, \textit{scancel}, \textit{sacct} and \textit{scontrol}, to submit and manage Slurm jobs on a Slurm cluster. The operator creates \textit{virtual nodes}, \textit{i.e.} one virtual node corresponds to one Slurm partition and contains the information of that partition. A virtual node is a Kubernetes concept. It is not a real worker node; rather, it enables users to connect Kubernetes to other APIs and allows developers to deploy \textit{pods} (a Kubernetes term) and containers with their own APIs. Jobs on the virtual node can be scheduled to the worker nodes. WLM-Operator creates a \textit{dummy pod} on the virtual node in order to transfer the Slurm batch job to a specific Slurm partition. When the batch job completes, another dummy pod is generated to transfer the results to the directory specified in the submitted yaml file. In Kubernetes terminology, WLM-Operator creates a new \textit{object kind}, \textit{i.e.} \textit{SlurmJob}.

The operator includes a service program, \textit{red-box}, that builds a gRPC proxy between Slurm and Kubernetes. A gRPC proxy defines a service and implements a server and clients. The service defines the methods and the message types of their requests and responses in a \textit{.proto} file. The server 1) implements the interfaces and 2) runs a gRPC server that listens to requests from clients and dispatches them to the right services. The client defines the same methods as the server.

\section{Torque-Operator and Platform Description}\label{sec:architecture}
We firstly illustrate the design of our platform architecture, then describe the structure of Torque-Operator. Torque-Operator is written in the Golang programming language. \textit{Singularity} \cite{Kurtzer2017SingularitySC} is the container runtime of our choice. Singularity is starting to be applied in many HPC centres \cite{Hu2019}, as it provides a secure means to capture and distribute software and computing environments. For example, execution of a Singularity container only demands user privileges, while Docker \cite{10.5555/2898929}, a container runtime widely adopted in cloud systems, requires root permission. Kubernetes supports Docker by default, though it can be adjusted to perform services for Singularity by adding Singularity-CRI \cite{Singularitycri}. Table~\ref{table:tool_list} lists the core applications that constitute the testbed.

\begin{table}[ht]
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{| l| l|}
\hline
Orchestrator \& workload manager & Kubernetes, Torque\\ \hline
Container runtime \& its support & Singularity, Singularity-CRI\\ \hline
Operator & Torque-Operator \\ \hline
Compiler & Golang compiler\\ \hline
\end{tabular}
}
\caption[]{The list of core applications for the testbed.}\label{table:tool_list}
\end{table}

\subsection{Platform Architecture}
The architecture of our platform is designed to serve as the testbed for the EU research project CYBELE\footnote{CYBELE: Fostering Precision Agriculture and Livestock Farming through Secure Access to Large-Scale HPC-Enabled Virtual Industrial Experimentation Environment Empowering Scalable Big Data Analytics https://www.cybele-project.eu/}. The platform is composed of an HPC cluster with Torque as its workload manager and a big data cluster with Kubernetes as its orchestrator. Its architecture is illustrated in Fig.~\ref{fig:arch_testbed}.
Note that Fig.~\ref{fig:arch_testbed} is for illustration purposes; the number of nodes and the queues can vary in the testbed. In Torque, nodes are grouped into queues. Each queue is associated with resource limits such as walltime and job size. One node can be included in multiple queues. The HPC cluster is composed of a head node, which controls the whole cluster, and compute nodes, which perform the computation. The Torque login node in Fig.~\ref{fig:arch_testbed} also serves as one of the worker nodes in the Kubernetes cluster. The Kubernetes cluster incorporates a master node, which schedules the jobs, and worker nodes, which execute the jobs. The virtual node indicated in Fig.~\ref{fig:arch_torque_operator} transfers Torque jobs to the Torque cluster. A Torque job submitted from the Kubernetes login node is scheduled by the Kubernetes master node to the virtual node. The virtual node transfers the abstracted Torque jobs to the Torque queue through the Torque login node. The merits of this architecture are: 1) it provides users with the flexibility to run containerised and non-containerised jobs, and 2) containerised applications can be better scheduled to the Torque cluster by taking advantage of the scheduling policies of Kubernetes.

\begin{figure}[!t]
\centering
\includegraphics[width=0.50\textwidth]{images/cluster_arch.pdf}
\caption[]{Architecture of the testbed. The login node belongs to both the Kubernetes and Torque clusters. One queue (named batch) is shown in the Torque cluster.}
\label{fig:arch_testbed}
\end{figure}

\subsection{Structure of Torque-Operator}
The Torque job script is encapsulated into a Kubernetes yaml job script. The yaml script is submitted from a Kubernetes login node (in our case, the login node is also the master node). The PBS script part is processed by Torque-Operator. A dummy pod is generated to transfer the Torque job specification to a scheduling queue (\textit{e.g.} a waiting or test queue, a concept of the job scheduler). Torque-Operator invokes the Torque binary \textit{qsub}, which submits the PBS job to the Torque cluster. When the Torque job completes, Torque-Operator creates a Kubernetes pod which redirects the results to the directory that the user specifies in the yaml file.

\begin{figure}[!t]
\centering
\includegraphics[width=0.20\textwidth]{images/Torque_login_node.pdf}
\caption[]{Architecture of Torque-Operator. This is the internal architecture of the login node in Fig.~\ref{fig:arch_testbed}. The virtual node corresponds to the Torque queue (named batch) in Fig.~\ref{fig:arch_testbed}.}
\label{fig:arch_torque_operator}
\end{figure}

As in WLM-Operator (Section~\ref{sec:related_work}), Torque-Operator includes the service program red-box. Red-box generates a Unix socket which allows data exchange between the Kubernetes and Torque processes. Torque-Operator introduces a new \textit{object kind}, \textit{i.e.} \textit{TorqueJob} (\textit{SlurmJob} in WLM-Operator), and sets it as a \textit{Kubernetes deployment}. Torque-Operator builds four Singularity containers which are deployed by Kubernetes on its worker nodes to perform the corresponding services, \textit{e.g.} creating a dummy pod to transfer the results from Torque to Kubernetes.

\section{Test Case}\label{sec:results}
Simple experiments have been conducted to validate Torque-Operator. Fig.~\ref{fig:yaml_script} presents a Kubernetes yaml job script ($cow\_job.yaml$). More specifically, inside the yaml script, the Torque script requests 30 minutes of walltime and one compute node.
The error file and output file are stored in $low.err$ and $low.out$, which are located in the {\footnotesize $\$HOME/$} directory. The script appends the path $/usr/local/bin$, where the Singularity binary resides. The Singularity container image $lolcow\_latest.sif$ is executed. The results are given in Fig.~\ref{fig:result_cow_job}. The user can easily view the status of the job from the Kubernetes login node, as shown in Fig.~\ref{fig:job_status}. Additionally, the status of the PBS job can be output using the Torque commands on the Torque login node.

\begin{figure}
\begin{lstlisting}
apiVersion: wlm.sylabs.io/v1alpha1
kind: TorqueJob
metadata:
  name: cow
spec:
  batch: |
    #!/bin/sh
    #PBS -l walltime=00:30:00
    #PBS -l nodes=1
    #PBS -e $HOME/low.err
    #PBS -o $HOME/low.out
    export PATH=$PATH:/usr/local/bin
    singularity run lolcow_latest.sif
  results:
    from: $HOME/low.out
    mount:
      name: data
      hostPath:
        path: $HOME/
        type: DirectoryOrCreate
\end{lstlisting}
\vspace{-6pt}
{\footnotesize
\begin{verbatim}
$kubectl apply -f $HOME/cow_job.yaml
\end{verbatim}
}
\caption{An example of the yaml script and its submission command. The script encloses a PBS script.}\label{fig:yaml_script}
\end{figure}

\begin{figure}
{\footnotesize
\begin{verbatim}
$kubectl get torquejob
NAME   AGE   STATUS
cow    2s    running
\end{verbatim}
\caption{The command to view the status of the yaml job.}\label{fig:job_status}
}
\end{figure}

\begin{figure}
{\footnotesize
\begin{verbatim}
 _________________________________________
/ If one cannot enjoy reading a book over \
| and over again, there is no use in      |
| reading it at all.                      |
|                                         |
\ -- Oscar Wilde                          /
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
\end{verbatim}
\caption{A result of the Singularity job}\label{fig:result_cow_job}
}
\end{figure}

\section{Conclusion and Future Work}\label{sec:future_work}
We described the testbed architecture for the EU research project CYBELE and introduced the structure of Torque-Operator, which extends WLM-Operator with Torque support. This testbed architecture creates a connection between HPC and cloud clusters. Moreover, it provides users with the flexibility to run containerised and non-containerised jobs and may enhance the capability of container scheduling on HPC. Future work will focus on optimizing Torque-Operator to offer more stable deployments. A performance evaluation will be carried out to compare the efficiency of scheduling container jobs with Kubernetes and Torque. The pilots of the CYBELE project will be adopted as the benchmarks.

\section*{Acknowledgment}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No.~825355.

\ifCLASSOPTIONcaptionsoff \newpage \fi
{ "timestamp": "2021-01-14T02:19:37", "yymm": "2012", "arxiv_id": "2012.08866", "language": "en", "url": "https://arxiv.org/abs/2012.08866" }
\section{Introduction}
Although deep neural networks have recently been contributing to state-of-the-art advances in various areas \cite{Krizhevsky:2012:ICD:2999134.2999257, hinton2012deep, DBLP:journals/corr/SutskeverVL14}, such black-box models may not be deemed reliable in situations where safety needs to be guaranteed, such as legal judgement prediction and medical diagnosis. Interpretable deep neural networks are a promising way to increase the reliability of neural models~\cite{sabour2017dynamic}. To this end, extractive rationales, i.e., subsets of features of instances on which models rely for their predictions on the instances, can be used as evidence for humans to decide whether or not to trust a predicted result and, more generally, to trust a~model.

Previous works mainly use selector-predictor types of neural models to provide extractive rationales, i.e., models composed of two modules: (i) a \textit{selector} that selects a subset of important features, and (ii) a \textit{predictor} that makes a prediction based solely on the selected features. For example, \newcite{yoon2018invase} and \newcite{lei2016rationalizing} use a selector network to calculate a selection probability for each token in a sequence, then sample a set of tokens that is exclusively passed to the predictor. An additional typical desideratum in natural language pro\-cessing (NLP) tasks is that the selected tokens form a semantically fluent rationale. To achieve this, \newcite{lei2016rationalizing} added a non-differentiable regularizer that encourages any two adjacent tokens to be simultaneously selected or unselected. \newcite{bastings2019interpretable} further improved the quality of the rationales by using a Hard Kuma regularizer that also encourages any two adjacent tokens to be selected or unselected together.

\begin{figure}[!t]
\begin{center}
\includegraphics[width=\linewidth]{./graphics/rational_intro.pdf}\\[0.75ex]
\caption{A sample rationale in legal judgement prediction. The human-provided rationale is shown in bold in Sample~1. In Sample 2, the selector missed the key information ``he stole a VIVO X9'', but the predictor only tells the selector that the whole extracted rationale (in bold) is not so informative, by producing a low probability of the correct crime.}
\label{fig:intro}
\end{center}
\end{figure}

One drawback of previous works is that the learning signal for both the selector and the predictor comes mainly from comparing the prediction of the selector-predictor model with the ground-truth answer. Therefore, the exploration space to get to the correct rationale is large, decreasing the chances of converging to the optimal rationales and predictions. Moreover, in NLP applications, the regularizers commonly used for achieving fluency of rationales treat all adjacent token pairs in the same way. This often leads to the selection of unnecessary tokens due to their adjacency to informative~ones.

In this work, we first propose an alternative method to rationalize the predictions of a neural model. Our method aims to squeeze more information from the predictor in order to guide the selector in selecting the rationales. Our method trains two models: a ``guider'' model that solves the task at hand in an accurate but black-box manner, and a selector-predictor model that solves the task while also providing rationales. We use an adversarial-based method to encourage the final information vectors generated by the two models to encode the same information.
We use an information bottleneck technique in two places: (i)~to encourage the features selected by the selector to be the least-but-enough features, and (ii)~to encourage the final information vector of the guider model to also contain the least-but-enough information for the prediction.

Secondly, we propose using language models as regularizers for rationales in natural language understanding tasks. A language model (LM) regularizer encourages rationales to be fluent subphrases, which means that the rationales are formed by consecutive tokens, while avoiding the selection of unnecessary tokens simply due to their adjacency to informative tokens. The effectiveness of our LM-based regularizer is proved by both mathematical derivation and experiments. All further details are given in the Appendix of the extended (ArXiv) paper.

Our contributions are briefly summarized as follows:
\begin{itemize}
\item We introduce an adversarial approach to rationale extraction for neural predictions, which calibrates the information between a guider and a selector-predictor model, such that the selector-predictor model learns to mimic a typical neural model while additionally providing rationales.
\item We propose a language-model-based regularizer to encourage the sampled tokens to form fluent rationales.
\item We experimentally evaluate our method on a sentiment analysis dataset with ground-truth rationale annotations, and on three tasks of a legal judgement prediction dataset, for which we conducted human evaluations of the extracted rationales. The results show that our method improves over the previous state-of-the-art models in precision and recall of rationale extraction without sacrificing prediction performance.
\end{itemize}

\section{Approach}
Our approach is composed of a selector-predictor architecture, in which we use an information bottleneck technique to restrict the number of selected features, and a guider model, for which we again use the information bottleneck technique to restrict the information in the final feature vector. Then, we use an adversarial method to make the guider model guide the selector to select the least-but-enough features. Finally, we use an LM regularizer to make the selected rationale semantically fluent.

\subsection{InfoCal: Selector-Predictor-Guider with Information Bottleneck}
The high-level architecture of our model, called InfoCal, is shown in Fig.~\ref{fig:arch}. Below, we detail each of its components.

\subsubsection{Selector.}
For a given instance $(\textbf{x},y)$, $\textbf{x}$ is the input with $n$ features $\textbf{x} = (x_1, x_2, \ldots, x_n)$, and $y$ is the corresponding ground-truth label. The selector network $\text{Sel}(\tilde{\textbf{z}}_\text{sym}|\textbf{x})$ takes $\textbf{x}$ as input and outputs $p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})$, which is a sequence of probabilities $(p_i)_{i=1,\ldots,n}$ representing the probability of choosing each feature $x_i$ as part of the rationale. Given the sampling probabilities, a subset of features is sampled using the Gumbel softmax~\cite{jang2016categorical}, which provides a differentiable sampling process:
\begin{align}
u_i&\sim U(0,1),\quad g_i=-\log(-\log(u_i))\\
m_i&=\frac{\exp((\log(p_i)+g_i)/\tau)}{\sum_j\exp((\log(p_j)+g_j)/\tau)},\label{eq:maski}
\end{align}
where $U(0,1)$ represents the uniform distribution between $0$ and $1$, and $\tau$ is a temperature hyperparameter. Hence, we obtain the sampled mask $m_i$ for each feature $x_i$, and the vector symbolizing the rationale $\tilde{\textbf{z}}_\text{sym}=(m_1x_1,\ldots,m_nx_n)$. Thus, $\tilde{\textbf{z}}_\text{sym}$ is the sequence of discrete selected symbolic features forming the rationale.
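A minimal sketch of this sampling step (our illustration; a real implementation would operate on batched logits):
\begin{verbatim}
import torch

def gumbel_softmax_mask(p, tau):
    """Relaxed, differentiable sampling of the mask from the selection
    probabilities p: perturb log-probabilities with Gumbel noise
    g_i = -log(-log(u_i)) and apply a temperature-tau softmax."""
    u = torch.rand_like(p).clamp(1e-9, 1 - 1e-9)
    g = -torch.log(-torch.log(u))
    return torch.softmax((torch.log(p) + g) / tau, dim=-1)

p = torch.tensor([0.1, 0.7, 0.2])
m = gumbel_softmax_mask(p, tau=0.5)   # soft mask, sums to one
\end{verbatim}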
\subsubsection{Predictor.}
The predictor takes as input the rationale $\tilde{\textbf{z}}_\text{sym}$ given by the selector, and outputs the prediction $\hat{y}_{sp}$. In the selector-predictor part of InfoCal, the input to the predictor is each feature $x_i$ multiplied by its sampled mask $m_i$. The predictor first calculates a dense feature vector $\tilde{\textbf{z}}_\text{nero}$,\footnote{Here, ``nero'' stands for neural feature (i.e., a neural vector representation) as opposed to a symbolic input feature.} then uses one feed-forward layer and a softmax layer to calculate the probability distribution over the possible predictions:
\begin{align}
\tilde{\textbf{z}}_\text{nero}&=\text{Pred}(\tilde{\textbf{z}}_\text{sym})\\
p(\hat{y}_{sp}|\tilde{\textbf{z}}_\text{sym}) &= \text{Softmax}(W_p\tilde{\textbf{z}}_\text{nero}+b_p).\label{eq:pyn}
\end{align}
As the input is masked by $m_i$, the prediction $\hat{y}_{sp}$ is based exclusively on the features selected by the selector. The loss of the selector-predictor model is the cross-entropy loss:
\begin{equation}\label{eq:lsp}
\begin{small}
\begin{aligned}
L_{sp}&=-\frac{1}{K}\sum_k\log p( y^{(k)}_\text{sp}|\textbf{x}^{(k)})\\
&=-\frac{1}{K}\sum_k\log \mathbb E_{\text{Sel}(\tilde{\textbf{z}}_\text{sym}^{(k)}|\textbf{x}^{(k)})}p(y^{(k)}_\text{sp}|\tilde{\textbf{z}}_\text{sym}^{(k)})\\
&\le -\frac{1}{K}\sum_k\mathbb E_{\text{Sel}(\tilde{\textbf{z}}_\text{sym}^{(k)}|\textbf{x}^{(k)})}\log p(y^{(k)}_\text{sp}|\tilde{\textbf{z}}_\text{sym}^{(k)}),
\end{aligned}
\end{small}
\end{equation}
where $K$ represents the size of the training set, the superscript $(k)$ denotes the $k$-th instance in the training set, and the inequality follows from Jensen's inequality.

\subsubsection{Guider.}
To guide the rationale selection of the selector-predictor model, we train a \textit{guider} model, denoted Pred$_G$, which receives the full original input $\textbf{x}$ and transforms it into a dense feature vector $\textbf{z}_\text{nero}$, using the same predictor architecture but different weights, as shown in Fig.~\ref{fig:arch}. We generate the dense feature vector in a variational way, which means that we first generate a Gaussian distribution according to the input $\textbf{x}$, from which we sample a vector $\textbf{z}_\text{nero}$:
\begin{align}
h&=\text{Pred}_G(\textbf{x}),\quad\mu=W_mh+b_m,\quad\sigma=W_sh+b_s \\
u&\sim \mathcal N(0,1), \quad \textbf{z}_\text{nero} = u\sigma + \mu\\
p&(\hat y_\text{guide}|\textbf{z}_\text{nero}) = \text{Softmax}(W_p\textbf{z}_\text{nero}+b_p).
\end{align}
We use the reparameterization trick for Gaussian distributions to make the sampling process differentiable~\cite{kingma2013auto}. Note that we share the parameters $W_p$ and $b_p$ with those in Eq.~\ref{eq:pyn}. The guider model's loss $L_\text{guide}$ is as follows:
\begin{equation}\label{eq:full}
\begin{aligned}
L_\text{guide}&=-\frac{1}{K}\sum_k\log p(y^{(k)}_\text{guide}|\textbf{x}^{(k)})\\
&\le-\frac{1}{K}\sum_k\mathbb E_{p(\textbf{z}_\text{nero}|\textbf{x}^{(k)})}\log p(y^{(k)}_\text{guide}|\textbf{z}_\text{nero}^{(k)}),
\end{aligned}
\end{equation}
where the inequality again follows from Jensen's inequality. The guider and the selector-predictor are trained jointly.
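A sketch of the guider's reparameterized sampling, together with the standard closed-form KL divergence to $\mathcal N(0,1)$ that the information bottleneck below relies on (our illustration; making $\sigma$ positive via a softplus is a practical detail the equations leave implicit):
\begin{verbatim}
import torch

def guider_sample(h, W_m, b_m, W_s, b_s):
    """Reparameterized draw z = u * sigma + mu with u ~ N(0, 1),
    so gradients flow through mu and sigma."""
    mu = h @ W_m + b_m
    sigma = torch.nn.functional.softplus(h @ W_s + b_s)
    u = torch.randn_like(mu)
    return u * sigma + mu, mu, sigma

def gaussian_kl(mu, sigma):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)) per dimension:
    0.5 * (mu^2 + sigma^2 - 1 - 2*log(sigma)), matching L_mi in the text."""
    return 0.5 * (mu ** 2 + sigma ** 2 - 1 - 2 * torch.log(sigma))
\end{verbatim}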
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\linewidth]{./graphics/arch.eps}\\
\caption{Architecture of InfoCal: the grey round boxes stand for the losses, and the red arrows indicate the data required for the calculation of the losses. FFL is an abbreviation for feed-forward layer.}
\label{fig:arch}
\end{center}
\end{figure}

\subsubsection{Information Bottleneck.}
To guide the model to select the least-but-enough information, we employ an information bottleneck technique \cite{li-eisner-2019-specializing}. We aim to minimize $I(\textbf{x}, \tilde{\textbf{z}}_\text{sym}) - I(\tilde{\textbf{z}}_\text{sym},y)$\footnote{$I(a,b) = \int_a\int_bp(a,b)\log\frac{p(a,b)}{p(a)p(b)} \,{=}\, \mathbb E_{a,b}\big[\log\frac{p(a|b)}{p(a)}\big]$ denotes the mutual information between the variables $a$ and $b$.}, where the former term encourages the selection of few features, and the latter term encourages the selection of the necessary features. As $I(\tilde{\textbf{z}}_\text{sym},y)$ is implemented by $L_{sp}$ (the proof is given in Appendix \ref{the:0} in the extended paper), we only need to minimize the mutual information $I(\textbf{x}, \tilde{\textbf{z}}_\text{sym})$:
\begin{align}\label{eq:Isym}
I(\textbf{x}, \tilde{\textbf{z}}_\text{sym})=\mathbb E_{\textbf{x}, \tilde{\textbf{z}}_\text{sym}}\Big[\log\frac{p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})}{p(\tilde{\textbf{z}}_\text{sym})}\Big].
\end{align}
However, there is a time-consuming term $p(\tilde{\textbf{z}}_\text{sym})=\sum_{\textbf{x}}p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})p(\textbf{x})$, which needs to be calculated by a loop over all the instances $\textbf{x}$ in the training set. Inspired by \newcite{li-eisner-2019-specializing}, we replace this term with a variational distribution $r_\phi(z)$ and obtain an upper bound of Eq.~\ref{eq:Isym}: $I(\textbf{x}, \tilde{\textbf{z}}_\text{sym}) \le \mathbb E_{\textbf{x}, \tilde{\textbf{z}}_\text{sym}}\Big[\log\frac{p(\tilde{\textbf{z}}_\text{sym}|\textbf{x})}{r_\phi(z)}\Big]$. Since $\tilde{\textbf{z}}_\text{sym}$ is a sequence of binary-selected features, we sum up the mutual information term of each element of $\tilde{\textbf{z}}_\text{sym}$ as the information bottleneck loss:
\begin{align}
L_\text{ib}=\sum_i\sum_{\tilde{z}_i} p(\tilde{z}_i|\textbf{x})\log\frac{p(\tilde{z}_i|\textbf{x})}{r_\phi(z_i)},
\end{align}
where $\tilde{z}_i$ represents whether to select the $i$-th feature: $1$ for selected, $0$ for not selected.

To encourage $\textbf{z}_\text{nero}$ to contain the least-but-enough information in the guider model, we again use the information bottleneck technique. Here, we minimize $I(\textbf{x}, \textbf{z}_\text{nero}) - I(\textbf{z}_\text{nero},y)$. Again, $I(\textbf{z}_\text{nero},y)$ can be implemented by $L_\text{guide}$. Due to the fact that $\textbf{z}_\text{nero}$ is sampled from a Gaussian distribution, the mutual information has a closed-form upper bound:
\begin{equation}\label{eq:mi}
\begin{aligned}
L_\text{mi}&=I(\textbf{x}, \textbf{z}_\text{nero})\le \mathbb E_{\textbf{z}_\text{nero}}\Big[\log\frac{p(\textbf{z}_\text{nero}|\textbf{x})}{p(\textbf{z}_\text{nero})}\Big]\\
&=0.5(\mu^2+\sigma^2-1-2\log\sigma).
\end{aligned}
\end{equation}
The derivation is in Appendix~\ref{proof:mi} in the extended paper.

\subsection{Calibrating Key Features via Adversarial Training}
We would like to tell the selector what kind of information is still missing or has been wrongly selected.
Since we already use the information bottleneck principle to encourage $\textbf{z}_\text{nero}$ to encode the information from the least-but-enough features, if we also require $\tilde{\textbf{z}}_\text{nero}$ and $\textbf{z}_\text{nero}$ to encode the same information, then we encourage the selector to select the least-but-enough discrete features. To achieve this, we use an adversarial training method. Thus, we employ an additional discriminator neural module, called~$D$, which takes as input either $\tilde{\textbf{z}}_\text{nero}$ or $\textbf{z}_\text{nero}$ and outputs 0 or 1, respectively. The discriminator can be any differentiable neural network. The generator in our model is formed by the selector-predictor that outputs $\tilde{\textbf{z}}_\text{nero}$. The losses associated with the generator and discriminator are:
\begin{align}
L_d&=-\log D(\textbf{z}_\text{nero}) + \log D(\tilde{\textbf{z}}_\text{nero})\label{eq:D}\\
L_g&=- \log D(\tilde{\textbf{z}}_\text{nero}).
\end{align}

\subsection{Regularizing Rationales with Language Models}
For NLP tasks, it is often desirable that a rationale is formed of fluent subphrases \cite{lei2016rationalizing}. To this end, previous works propose regularizers that bind adjacent tokens so that they are simultaneously sampled or not. For example, \newcite{lei2016rationalizing} proposed a non-differentiable regularizer trained using REINFORCE~\cite{williams1992simple}. To make the method differentiable, \newcite{bastings2019interpretable} applied the Kuma distribution to the regularizer. However, these regularizers treat all pairs of adjacent tokens in the same way, although some adjacent tokens have more priority to be bound than others, such as ``He stole'' or ``the victim'' rather than ``. He'' or ``) in'' in Fig.~\ref{fig:intro}.

We propose a novel differentiable regularizer for extractive rationales that is based on a pre-trained language model, thus encouraging both consecutiveness and fluency of the tokens in the extracted rationale. The LM-based regularizer is implemented as follows:
\begin{equation}\label{eq:lm}
L_\text{lm} = -\sum_im_{i-1}\log p_{lm}(m_ix_i|\textbf{x}_{<i}),
\end{equation}
where the $m_i$'s are the masks obtained in Eq.~\ref{eq:maski}. Note that non-selected tokens are masked instead of deleted in this regularizer. The language model can have any architecture.

First, we note that $L_\text{lm}$ is differentiable. Second, the following theorem guarantees that $L_\text{lm}$ encourages consecutiveness of the selected tokens.
\begin{theorem}\label{the:1}
If the following is satisfied for all $i,j$:
\begin{itemize}
\item $m'_i<\epsilon \ll 1-\epsilon< m_i$, \,$0<\epsilon<1$, and
\item $\big|p(m'_ix_i|x_{<i})-p(m'_jx_j|x_{<j})\big|<\epsilon$,
\end{itemize}
then the following two inequalities hold:\\
(1) $L_\text{lm}(\ldots,m_k,\ldots, m'_{n})<L_\text{lm}(\ldots,m'_k,\ldots, m_{n})$.\\
(2) $L_\text{lm}(m_1, \ldots,m'_k,\ldots)>L_\text{lm}(m'_1,\ldots,m_k,\ldots)$.
\end{theorem}
The theorem says that, for the same number of selected tokens, consecutive selections attain a lower $L_\text{lm}$ value. Its proof is given in Appendix \ref{proof:1} in the extended paper.
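A sketch of how this regularizer can be computed from token-level log-probabilities (our illustration; obtaining $\log p_{lm}$ of each masked token given its prefix from a concrete language model is left abstract here):
\begin{verbatim}
import torch

def lm_regularizer(masks, logprobs):
    """L_lm = -sum_i m_{i-1} * log p_lm(m_i * x_i | x_{<i}).

    `masks` holds m_1..m_n and `logprobs` the language-model
    log-probabilities of the corresponding (masked) tokens given their
    prefixes; a selected token whose predecessor is unselected
    contributes little, so consecutive selections are rewarded.
    """
    return -(masks[:-1] * logprobs[1:]).sum()
\end{verbatim}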
The pre-training procedure of the language model is shown in Appendix \ref{lmvec} in the extended paper. \begin{table*}[!t] \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Appearance} & \multicolumn{4}{c|}{Smell}& \multicolumn{4}{c|}{Palate}\\\cline{2-13} & P& R& F & \% selected & P& R& F & \% selected & P& R& F & \% selected \\\hline Attention &80.6 &35.6 & 49.4 &13 &88.4 &20.6 &33.4 &7 &65.3 &35.8 &46.2 &7 \\\hline Bernoulli &96.3 &56.5 &71.2 &14 &95.1 &38.2 &54.5 &7 &80.2 &53.6 &64.3 &7\\\hline HardKuma &98.1 &65.1 &78.3 &13 &\textbf{96.8} &31.5 &47.5 &7 &\textbf{89.8} &48.6 &63.1 &7\\\hline\hline InfoCal &\textbf{98.5} &\textbf{73.2} &\textbf{84.0} &13 &95.6 &\textbf{45.6} &\textbf{61.7} &7 &89.6 &\textbf{59.8} &\textbf{71.7} &7\\\hline InfoCal(HK) &97.9 &71.7 & 82.8 &13 &94.8 &42.3 & 58.5 &7 &89.4 &56.9 & 69.5 &7\\\hline InfoCal$-L_\text{adv}$ &97.3 &67.8 & 79.9 &13 &94.3 &34.5 &50.5 &7 &89.6 &51.2 &65.2 &7\\\hline InfoCal$-L_\text{lm}$ &79.8 &54.9 &65.0 &13 &87.1 &32.3 &47.1 &7 &83.1 &47.4 &60.4 &7\\\hline \end{tabular} } \smallskip \caption{Precision, recall, and F1-score of selected rationales for the three aspects of BeerAdvocate. In bold, the best performance. ``\% selected'' means the average percentage of tokens selected out of the total number of tokens per instance.} \label{tab:beer_rational} \end{table*}% \begin{table*}[!t] \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{|c|m{25cm}|} \hline Gold & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance and leaves a small to moderate amount of lace in sheets when it eventually departs} \textcolor{green}{the nose is sweet and spicy and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes .} there ...... alcohol . \textcolor{blue}{the mouthfeel is exemplary ; full and rich , very creamy . mouthfilling with some mouthcoating as well .} drinkability is high ...... \\\hline Bernoulli & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance and} leaves a small to moderate amount of lace in sheets when it eventually departs the nose is \textcolor{green}{sweet and spicy} and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . the mouthfeel \textcolor{blue}{is exemplary ; full and rich , very creamy . mouthfilling with} some mouthcoating as well . drinkability is high ......\\\hline HardKuma & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance and leaves a small} to moderate amount of lace in sheets when it eventually departs the nose is \textcolor{green}{sweet and spicy} and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . the mouthfeel \textcolor{blue}{is exemplary ; full and rich , very creamy . mouthfilling with} some mouthcoating as well . drinkability is high ...... \\\hline InfoCal & \textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance} and leaves a small to moderate amount of lace in sheets when it eventually departs \textcolor{green}{the nose is sweet and spicy and the flavor is malty sweet} , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . 
\textcolor{blue}{the mouthfeel is exemplary ; full and rich , very creamy . mouthfilling with some mouthcoating }as well . drinkability is high ...... \\\hline InfoCal$-L_\text{adv}$ &\textcolor{red}{clear , burnished copper-brown topped by a large beige head that displays impressive persistance} and leaves a small to moderate amount of lace in sheets when it eventually departs the nose is \textcolor{green}{sweet and spicy} and the flavor is malty sweet , accented nicely by honey and by abundant caramel/toffee notes . there ...... alcohol . \textcolor{blue}{the mouthfeel is exemplary }; full and rich , very creamy . mouthfilling with some mouthcoating as well . drinkability is high ...... \\\hline InfoCal$-L_\text{lm}$ & \textcolor{red}{clear , burnished} copper-brown topped by a large beige head that displays \textcolor{red}{impressive persistance} and leaves a small to \textcolor{red}{moderate amount of lace} in sheets when it eventually departs the nose is \textcolor{green}{sweet} and \textcolor{green}{spicy} and the flavor is \textcolor{green}{malty sweet} , accented nicely by \textcolor{green}{honey} and by abundant caramel/toffee notes . there ...... alcohol . the mouthfeel is \textcolor{blue}{exemplary} ; \textcolor{blue}{full} and rich , very \textcolor{blue}{creamy} . mouthfilling with some mouthcoating as well . drinkability is high ...... \\ \hline \end{tabular} } \smallskip \caption{One example of extracted rationales by different methods. Different colors correspond to different aspects: \textcolor{red}{red}: appearance, \textcolor{green}{green}: smell, and \textcolor{blue}{blue}: palate.} \label{tab:case} \end{table*}% \subsection{Training and Inference} The total loss function of our model, which takes the generator's role in adversarial training, is shown in Eq.~\ref{eq:G}. The adversarial-related losses are denoted by $L_\text{adv}$. The discriminator is trained by $L_d$ from Eq.~\ref{eq:D}. \begin{align} L_\text{adv} &= \lambda_\text{g}L_g + L_\text{guide} + \lambda_\text{mi}L_\text{mi}\\ J_\text{total}&=L_{sp} +\lambda_\text{ib}L_\text{ib}+L_\text{adv}+\lambda_\text{lm}L_\text{lm}, \label{eq:G} \end{align} where $\lambda_\text{ib},\lambda_\text{g},\lambda_\text{mi}$, and $\lambda_\text{lm}$ are hyperparameters. At training time, we optimize the generator loss $J_\text{total}$ and the discriminator loss $L_d$ alternately until convergence. The detailed algorithm for training is given in Appendix~\ref{training} in the extended paper. At inference time, we run the selector-predictor model to obtain the prediction and the rationale~$\tilde{\textbf{z}}_\text{sym}$. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{./graphics/mse.png}\\ \caption{MSE of all aspects of BeerAdvocate. The blue dashed line represents the full-text baseline (all tokens are selected).} \label{tab:mse}\vspace*{1ex} \end{figure} \section{Experiments} We performed experiments on two NLP applications: multi-aspect sentiment analysis and legal judgement prediction. \subsection{Beer Reviews} \subsubsection{Data.} To provide a quantitative analysis for the extracted rationales, we use the BeerAdvocate\footnote{\url{https://www.beeradvocate.com/}} dataset~\cite{mcauley2012learning}. This dataset contains instances of human-written multi-aspect reviews on beers. Similarly to \citet{lei2016rationalizing}, we consider the following three aspects: appearance, smell, and palate. \newcite{mcauley2012learning} provide manually annotated rationales for 994 reviews for all aspects, which we use as the test set. 
The detailed data preprocessing and experimental settings are given in Appendix \ref{expsetting} in the extended paper. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{./graphics/PR.png}\\ \caption{The precision (left) and recall (right) for rationales on the smell aspect of the BeerAdvocate test set. } \label{tab:pr}\vspace*{1ex} \end{figure} \subsubsection{Evaluation Metrics and Baselines.} For the evaluation of the selected tokens as rationales, we use precision, recall, and F1-score. Precision is defined as the percentage of selected tokens that also belong to the human-annotated rationale. Recall is the percentage of human-annotated rationale tokens that are selected by our model. The predictions made from the selected rationale tokens are evaluated using the mean squared error (MSE). We compare our method with the following baselines: \begin{itemize} \item Attention~\cite{lei2016rationalizing}: This method calculates attention scores over the tokens and selects the top-$k$ percent of tokens as the rationale. \item Bernoulli~\cite{lei2016rationalizing}: This method uses a selector network to calculate a Bernoulli distribution for each token, and then samples the tokens from these distributions as the rationale. \item HardKuma~\cite{bastings2019interpretable}: This method replaces the Bernoulli distribution by a Hard Kumaraswamy distribution to facilitate differentiability. \end{itemize} The details of the choice of neural architecture for each module of our model, as well as the training setup, are given in Appendix~\ref{expsetting} in the extended paper. \subsubsection{Results.} The rationale extraction results are shown in Table~\ref{tab:beer_rational}. The precision values for the baselines are directly taken from \cite{bastings2019interpretable}. We use their source code for the Bernoulli\footnote{\url{https://github.com/taolei87/rcnn}} and HardKuma\footnote{\url{https://github.com/bastings/interpretable_predictions}} baselines. We trained these baselines for 50 epochs and selected the models with the best recall on the dev set whenever the precision was equal to or larger than the reported dev precision. For a fair comparison, we used the same stopping criterion for InfoCal (for which we fixed a threshold for the precision at 2\% lower than the previous state of the art). We also conducted ablation studies: (1) we removed the adversarial loss and report the results in the line InfoCal$-L_\text{adv}$, and (2)~we removed the LM regularizer and report the results in the line InfoCal$-L_\text{lm}$. In Table~\ref{tab:beer_rational}, we see that, although Bernoulli and HardKuma achieve very high precision, their recall scores are considerably lower. In comparison, our method InfoCal significantly outperforms the previous methods in recall for all three aspects of the BeerAdvocate dataset (we performed Student's t-test, $p<0.01$). Moreover, the F1-scores of InfoCal constitute a new state of the art for all three aspects. In the ablation studies, we see that when we remove the adversarial information calibrating structure, namely, for InfoCal$-L_\text{adv}$, the recall scores decrease significantly for all three aspects. This shows that our guider model is critical for the increased performance. Moreover, when we remove the LM regularizer, we find a significant drop in both precision and recall, in the line InfoCal$-L_\text{lm}$. This highlights the importance of the semantic fluency of rationales, which is encouraged by our LM regularizer. 
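As a concrete reference for the token-level metrics used above, the following is a small sketch (a hypothetical helper of ours, not part of any of the cited codebases) computing precision, recall, and F1 from binary rationale masks.
\begin{verbatim}
import numpy as np

def rationale_prf(selected, gold):
    # selected, gold: binary token masks (1 = token is in the rationale).
    selected, gold = np.asarray(selected), np.asarray(gold)
    tp = float(np.sum(selected * gold))
    precision = tp / max(selected.sum(), 1)
    recall = tp / max(gold.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
\end{verbatim}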
We also replace the LM regularizer with the regularizer used in the HardKuma method, keeping all other parts of the model unchanged; this variant is denoted InfoCal(HK) in Table~\ref{tab:beer_rational}. The recall and F1-score of InfoCal outperform those of InfoCal(HK), which shows the effectiveness of our LM regularizer. \begin{table*}[!t] \begin{center} \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l l |ccccc| ccccc |ccccc} \toprule[1.0pt] \multirow{2}{*}{Small}&Tasks & \multicolumn{5}{c|}{Law Articles} & \multicolumn{5}{c|}{Charges} & \multicolumn{5}{c}{Terms of Penalty}\\ \cmidrule[0.5pt]{2-17} &Metrics& Acc & MP & MR & F1 & \%S & Acc & MP & MR & F1 &\%S & Acc & MP & MR & F1 &\%S \\ \midrule[0.5pt] \multirow{8}{*}{Single} &Bernoulli &0.812 & 0.726 & 0.765 & 0.756 &100 & 0.810 & 0.788 & 0.760 & 0.777 & 100 & 0.331 & 0.323 & 0.297 & 0.306 & 100\\ &Bernoulli &0.755 & 0.701 & 0.737 & 0.728 &14 & 0.761 & 0.753 & 0.739 & 0.754 & 14 & 0.323 & 0.308 & 0.265 & 0.278 & 30\\ &HardKuma & 0.807 & 0.704 & 0.757 & 0.739 &100&0.811 & 0.776 & 0.763 & 0.776 &100 & 0.345 & 0.355 & 0.307 & 0.319& 100\\ &HardKuma & 0.783 & 0.706 & 0.735 & 0.729 &14&0.778 & 0.757 &0.714 &0.736 &14 & 0.340 & 0.328 &0.296 & 0.309 & 30\\ \cline{2-17} &InfoCal & \textcolor{red}{0.834} &\textcolor{red}{0.744} &\textcolor{red}{0.776} &\textcolor{red}{0.786} &14 &\textcolor{red}{0.849} &\textcolor{red}{0.817} &\textcolor{red}{0.798} &\textcolor{red}{0.813} &14 &\textcolor{red}{0.358} &\textcolor{red}{0.372} & \textcolor{red}{0.335}&\textcolor{red}{0.337} &30\\ &InfoCal$-L_\text{adv}$ & 0.826 &0.739 &0.774 &0.777 &14 &0.845 &0.804 &0.781 &0.797 &14 &0.351 &0.374 & 0.329&0.330 &30\\ &InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ & \textbf{0.841} & \textbf{0.759}& \textbf{0.785}&\textbf{0.793} &100 & \textbf{0.850} & \textbf{0.820}& \textbf{0.801}&\textbf{0.814} &100 & \textbf{0.368}&\textbf{0.378} &\textbf{0.341} & \textbf{0.346}&100\\ &InfoCal$-L_\text{lm}$ & 0.822 &0.723 &0.768 &0.773 &14 &0.843 &0.796 &0.770 &0.772 &14 &0.347 &0.361 & 0.318&0.320 &30\\ \midrule[0.5pt] \multirow{3}{*}{Multi} &FLA & 0.803& 0.724& 0.720 &0.714 &$-$&0.767& 0.758& 0.738& 0.732&$-$ &0.371& 0.310 &0.300 &0.299&$-$\\ &TOPJUDGE & 0.872 & 0.819 &0.808 &0.800 &$-$&0.871 &0.864& 0.851 &0.846&$-$& 0.380 &0.350 &0.353&0.346&$-$\\ &MPBFN-WCA &\underline{0.883} &\underline{0.832}& \underline{0.824} &\underline{0.822}&$-$ &\underline{0.887} &\underline{0.875} &\underline{0.857} &\underline{0.859}&$-$&\underline{ 0.414} &\underline{0.406} &\underline{0.369}& \underline{0.392}&$-$\\ \midrule[1.0pt] \multirow{2}{*}{Big}&Tasks & \multicolumn{5}{c|}{Law Articles} & \multicolumn{5}{c|}{Charges} & \multicolumn{5}{c}{Terms of Penalty}\\ \cmidrule[0.5pt]{2-17} &Metrics& Acc & MP & MR & F1 &\%S& Acc & MP & MR & F1&\%S & Acc & MP & MR & F1 &\%S\\ \midrule[0.5pt] \multirow{8}{*}{Single} &Bernoulli & 0.876 &0.636 & 0.388 &0.625 &100 & 0.857 &0.643 &0.410 &0.569 &100 & 0.509 &0.511 &0.304 &0.312 & 100\\ &Bernoulli & 0.857 &0.632 & 0.374 &0.621 &14 & 0.848 &0.635 &0.402 &0.543 &14 & 0.496 &0.505 &0.289 &0.306 & 30\\ &HardKuma & 0.907 & 0.664 & 0.397 & 0.627 & 100& 0.907 & 0.689 & 0.438 & 0.608&100 & 0.555 & 0.547 & 0.335 & 0.356&100\\ &HardKuma & 0.876 & 0.645 & 0.384 & 0.609 & 14& 0.892 & 0.676 & 0.425 & 0.587&14 & 0.534 & 0.535 & 0.310 & 0.334&30\\ \cline{2-17} &InfoCal & \textcolor{red}{0.956}&\textcolor{red}{0.852}& \textcolor{red}{0.742}& \textbf{0.805} & 20 &\textcolor{red}{0.955}&\textcolor{red}{0.868}& \textbf{0.788}&\textbf{0.820} &20 & 0.556&\textbf{0.519}&0.362&0.372 &30 \\ 
&InfoCal$-L_\text{adv}$ & 0.953 & 0.844& 0.711&0.782&20 &0.954& 0.857 & 0.772 &0.806& 20&0.552&0.490& 0.353& 0.356 &30\\ &InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ & \textbf{0.959} & \textbf{0.862} & \textbf{0.751}&0.791&100 &\textbf{0.957}&\textbf{0.878}&0.776&0.807&100 &\textbf{0.584}& \textbf{0.519}& \textbf{0.411}&\textbf{0.427} &30\\ &InfoCal$-L_\text{lm}$ & 0.953 & 0.851& 0.730 & 0.775& 20 & 0.950& 0.857&0.756& 0.789 &20 &0.563& 0.486& 0.374& 0.367& 30\\ \midrule[0.5pt] \multirow{3}{*}{Multi}&FLA & 0.942&0.763&0.695&0.746&$-$&0.931&0.798&0.747&0.780&$-$&0.531&0.437&0.331&0.370&$-$\\ &TOPJUDGE&0.963&0.870&0.778&0.802&$-$&0.960&0.906&0.824&0.853&$-$&0.569&0.480&0.398&0.426&$-$\\ &MPBFN-WCA&\underline{0.978}&\underline{0.872}&\underline{0.789}&\underline{0.820}&$-$&\underline{0.977}&\underline{0.914}&\underline{0.836}&\underline{0.867}&$-$&\underline{0.604}&\underline{0.534}&\underline{0.430}&\underline{0.464}&$-$\\ \bottomrule[1.0pt] \end{tabular} } \end{center} \caption{The overall performance on the CAIL2018 dataset (Small and Big). The results from previous works are directly quoted from \newcite{yang2019legal}, since we share the same experimental settings and can hence make direct comparisons. \%S represents the selection percentage (which is determined by the model). ``Single'' represents single-task models, and ``Multi'' represents multi-task models. The best performance is in bold. Red numbers are below the best performance by no more than $0.01$. The underlined numbers are the state-of-the-art performances, all of which are obtained by multi-task models.} \label{tab:overall_law} \end{table*}% We further show the relation between a model's performance on predicting the final answer and the rationale selection percentage (which is determined by the model) in Fig.~\ref{tab:mse}, as well as the relation between precision/recall and training epochs in Fig.~\ref{tab:pr}. The rationale selection percentage is influenced by $\lambda_\text{ib}$. According to Fig.~\ref{tab:mse}, our method InfoCal achieves a prediction performance similar to previous works, and does slightly better than HardKuma for some selection percentages. Fig.~\ref{tab:pr} shows the changes in precision and recall with training epochs. We can see that our model achieves a similar precision after several training epochs, while significantly outperforming the previous methods in recall, which demonstrates the effectiveness of our proposed method. Table~\ref{tab:case} shows an example of rationale extraction. Compared to the rationales extracted by Bernoulli and HardKuma, our method provides more fluent rationales for each aspect. For example, unimportant tokens like ``and'' (after ``persistance'', in the Bernoulli method) and ``with'' (after ``mouthfilling'', in the HardKuma method) were selected just because they are adjacent to important ones. \subsection{Legal Judgement Prediction} \subsubsection{Datasets and Preprocessing.} We use the CAIL2018 data\-set\footnote{ \url{https://cail.oss-cn-qingdao.aliyuncs.com/CAIL2018_ALL_DATA.zip}}~\cite{zhong-etal-2018-legal} for three tasks on legal judgement prediction. The dataset consists of criminal cases published by the Supreme People's Court of China.\footnote{\url{http://cail.cipsc.org.cn/index.html}} To be consistent with previous works, we used two versions of CAIL2018, namely, CAIL-small (the exercise stage data) and CAIL-big (the first stage data). 
The instances in CAIL2018 consist of a \textit{fact description} and three kinds of annotations: \textit{applicable law articles}, \textit{charges}, and \textit{terms of penalty}. Therefore, our three tasks on this dataset consist of predicting (1)~law articles, (2)~charges, and (3)~terms of penalty according to the given fact description. The detailed experimental settings are given in Appendix~\ref{expsetting} in the extended paper. \subsubsection{Overall Performance.} We again compare our method with the Bernoulli~\cite{lei2016rationalizing} and the HardKuma~\cite{bastings2019interpretable} methods on rationale extraction. These two methods are both single-task models, which means that we train a model separately for each task. We also compare our method with the following three multi-task methods: \begin{itemize} \item FLA~\cite{luo-etal-2017-learning} uses an attention mechanism to capture the interaction between fact descriptions and applicable law articles. \item TOPJUDGE~\cite{zhong-etal-2018-legal} uses a topological architecture to link different legal prediction tasks together, including the prediction of law articles, charges, and terms of penalty. \item MPBFN-WCA~\cite{yang2019legal} uses a backward verification to verify upstream tasks given the results of downstream tasks. \end{itemize} The results are listed in Table~\ref{tab:overall_law}. On CAIL-small, we observe that it is more difficult for the single-task models to outperform multi-task methods. This is likely due to the fact that the tasks are related, and learning them together can help a model to achieve better performance on each task separately. After removing the restriction of the information bottleneck, InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ achieves the best performance in all tasks; however, it selects all the tokens of the input fact description. When we restrict the number of selected tokens to $14\%$ (by tuning the hyperparameter $\lambda_\text{ib}$), InfoCal (in red) drops only slightly on all evaluation metrics, and it already outperforms Bernoulli and HardKuma, even though the latter use all tokens. This means that the $14\%$ selected tokens are highly important for the predictions. We observe a similar phenomenon for CAIL-big. Specifically, InfoCal outperforms InfoCal$-L_\text{adv}\!\!-\!\!L_\text{ib}$ on some evaluation metrics, such as the F1-scores of the law article and charge prediction tasks. \subsubsection{Rationales.} The CAIL2018 dataset does not contain annotations of rationales. Therefore, we conducted a human evaluation of the extracted rationales. Due to a limited budget and resources, we sampled 300 examples for each task. We randomly shuffled the rationales for each task and asked six undergraduate students from Peking University to evaluate them. The human evaluation is based on three metrics: usefulness (U), completeness (C), and fluency (F), each scored from $1$ (lowest) to~$5$ (highest). The scoring standard for human annotators is given in Appendix \ref{human} in the extended paper. The human evaluation results are shown in Table~\ref{tab:he}. We can see that our proposed method outperforms the previous methods in all metrics. The inter-rater agreement is acceptable according to Krippendorff's~(\citeyear{krippendorff2004content}) criterion; the alpha values are shown in Table~\ref{tab:he}. A sample case of extracted rationales in legal judgement prediction is shown in Fig.~\ref{fig:lawcase}. We observe that our method selects all the useful information for the charge prediction task, and the selected rationales are formed of continuous and fluent sub-phrases.
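For reference, inter-rater agreement of this kind can be computed with the third-party \texttt{krippendorff} Python package; the snippet below is our own illustration with made-up ratings, not our actual evaluation script.
\begin{verbatim}
# pip install krippendorff
import krippendorff
import numpy as np

# Rows = raters, columns = rated items; scores on the 1-5 scale,
# np.nan marks a missing rating.
ratings = np.array([[4, 5, 3, 4, np.nan],
                    [4, 4, 3, 5, 2],
                    [5, 4, 3, 4, 2]], dtype=float)

print(krippendorff.alpha(reliability_data=ratings,
                         level_of_measurement='ordinal'))
\end{verbatim}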
\begin{table}[!t] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Law} & \multicolumn{3}{|c|}{Charges} & \multicolumn{3}{|c|}{ToP} \\\cline{2-10} & U & C &F & U & C & F&U & C & F \\\hline Bernoulli &4.71 &2.46 &3.45 & 3.67 &2.35 &3.45 &3.35 &2.76&3.55 \\\hline HardKuma &4.65 &3.21 &3.78 & 4.01 &3.26&3.44 &3.84 &2.97&3.76\\\hline InfoCal &\textbf{4.72} &\textbf{3.78} &\textbf{4.02} &\textbf{4.65} & \textbf{3.89}&\textbf{4.23} &\textbf{4.21} &\textbf{3.43}&\textbf{3.97}\\\hline\hline $\alpha$&0.81 & 0.79&0.83&0.92&0.85&0.87&0.82&0.83&0.94\\\hline \end{tabular} } \smallskip \caption{Human evaluation on the CAIL2018 dataset. ``ToP'' is the abbreviation of ``Terms of Penalty''. The metrics are: usefulness (U), completeness (C), and fluency (F), each scored from $1$ to~$5$. Best performance is in bold. $\alpha$ represents Krippendorff's alpha values. } \label{tab:he}\vspace*{1ex} \end{table}% \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{./graphics/lawsmall.pdf}\\ \smallskip \caption{An example of extracted rationale for charge prediction. The correct charge is ``Rape''. The original fact description is in Chinese; we have translated it into English. It is easy to see that the extracted rationales are very helpful in making the charge prediction.} \label{fig:lawcase}\vspace*{1ex} \end{figure} \section{Related Work} Explainability is currently a key bottleneck of deep-lear\-ning-based approaches. The model proposed in this work belongs to the class of self-explanatory models, which contain an explainable structure in the model architecture, thus providing explanations for their predictions. Self-explanatory models can use different types of explanations, such as feature-based explanations \cite{lei2016rationalizing, yoon2018invase,chen2018learning,yu-etal-2019-rethinking,carton-etal-2018-extractive} and natural language explanations \cite{DBLP:conf/eccv/HendricksARDSD16, esnli, zeynep, cars}. Our model uses feature-based explanations. Self-explanatory models with feature-based explanations can be further divided into two branches. The first branch is formed of representation-inter\-pretable approaches, which map specific features into latent spaces and then use the latent variables to control~the outcomes of the model, such as disentangling methods~\cite{chen2016infogan,sha2021multi}, information bottleneck methods~\cite{tishby2000information}, and constrained generation~\cite{sha-2020-gradient}. The second branch consists of architecture-in\-ter\-pretable models, such as attention-based models \cite{zhang2018top,sha2016reading,sha2018order,sha2018multi,liu2017table}, neural Turing machines~\cite{collier2018im,xia-etal-2017-progressive,sha2020estimating}, capsule networks~\cite{sabour2017dynamic}, and energy-based models~\cite{grathwohl2019your}. Among them, attention-based models have an important extension, that of sparse feature learning, i.e., learning to extract a subset of features that is most informative for each example. Most of the sparse feature learning methods use a selector-predictor architecture. Among them, L2X \cite{chen2018learning} and INVASE~\cite{yoon2018invase} make use of information theory for feature selection, while CAR~\cite{chang2019game} extracts useful features in a game-theoretic approach. 
In addition, rationale extraction for NLP usually raises one desideratum for the extracted subset of tokens: rationales need to be fluent subphrases instead of separate tokens. To this end, \newcite{lei2016rationalizing} proposed a non-differentiable regularizer to encourage selected tokens to be consecutive, which can be optimized by REINFORCE-style methods~\cite{williams1992simple}. \newcite{bastings2019interpretable} proposed a differentiable regularizer using the Hard Kumaraswamy distribution; however, this still does not consider the difference in importance between different adjacent token pairs. Our adversarial calibration method is inspired by distilling methods~\cite{hinton2015distilling}. Distilling methods are usually applied to compress large models into small models while keeping a comparable performance. For example, TinyBERT~\cite{jiao2019tinybert} is a distillation of BERT~\cite{devlin-etal-2019-bert}. Our method is different from distilling methods, because we calibrate the final feature vector instead of the softmax prediction. The information bottleneck (IB) principle~\cite{tishby2000information} originated in information theory and has been widely used as a theoretical framework for analyzing deep neural networks~\cite{tishby2015deep}. For example, \newcite{li-eisner-2019-specializing} used IB to compress word embeddings in order to make them contain only specialized information, which leads to a much better performance in parsing tasks. Adversarial methods, which have been widely applied in image generation~\cite{chen2016infogan} and text generation~\cite{yu2017seqgan}, usually comprise a discriminator and a generator. The discriminator receives pairs of instances from the real distribution and from the distribution generated by the generator, and it is trained to differentiate between the two. The generator is trained to fool the discriminator~\cite{goodfellow2014generative}. Our information calibration method generates a dense feature vector using selected symbolic features, and the discriminator is used to measure the extent of the calibration. \section{Summary and Outlook} In this work, we proposed a novel method to extract rationales for neural predictions. Our method uses an adversarial-based technique to make a selector-predictor model learn from a guider model. In addition, we proposed a novel regularizer based on language models, which makes the extracted rationales semantically fluent. The experimental results showed that our method improves the selection of rationales by a large margin. As future work, the main architecture of our model can be directly applied to other domains, e.g., images or tabular data. However, it remains an open question what would be a good regularizer for these domains. \section*{Acknowledgments} This work was supported by the EPSRC grant ``Unlocking the Potential of AI for English Law'', a JP Morgan PhD Fellowship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund. We also acknowledge the use of Oxford's Advanced Research Computing (ARC) facility, of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1), and of GPU computing support by Scan Computers International Ltd.
{ "timestamp": "2020-12-21T02:12:16", "yymm": "2012", "arxiv_id": "2012.08884", "language": "en", "url": "https://arxiv.org/abs/2012.08884" }
\section{Introduction and main results} \label{sec:introduction} \subsection*{Harmonic functions for the standard Laplacian} Let us first briefly recall some facts about the continuous Laplacian in $\mathbb R^2$ \begin{equation} \label{eq:continuous_Laplacian} \Delta=\sigma_1\frac{\partial^2}{\partial x^2}+2\sigma_{1,2}\frac{\partial^2}{\partial x\partial y}+\sigma_2\frac{\partial^2}{\partial y^2}. \end{equation} Given a domain $D\subset \mathbb R^2$ with boundary $\partial D$, solving the Dirichlet problem on $D$ for the operator $\Delta$ amounts to finding the set $H(D)$ of functions $h:D\rightarrow \mathbb{R}$ which are harmonic on $D$, continuous on the closure $\overline{D}$ of $D$, and zero on $\partial D$. This problem admits the unique solution $h=0$ when $D$ is bounded, but the situation is much richer when $D$ is unbounded. Assume first that $\Delta$ is the standard Laplacian (i.e., $\sigma_1=\sigma_2=1$ and $\sigma_{1,2}=0$ in \eqref{eq:continuous_Laplacian}) and that $D$ is the upper half-plane $\mathcal{H}=\{x+iy:y>0\}$, and look at the associated Dirichlet problem. Defining $h(z)=-h(\overline{z})$ for $z$ in the lower half-plane, the classical Schwarz reflection principle implies that $h$ can be extended to a harmonic function on $\mathbb{C}$. Hence $h$ is a solution to this Dirichlet problem if and only if there exists an analytic function $f$ on $\mathbb{C}$ such that $h=\Im f$, and the Schwarz reflection implies that $f$ is self-conjugate. The set $H(\mathcal{H})$ therefore admits the explicit description \begin{align*} H(\mathcal{H})&=\bigl\{\Im f_{\vert\mathcal{H}}: f \text{ is analytic on $\mathbb C$ and } f(\overline{z})=\overline{f(z)}\bigr\},\\ &=\bigl\{\sum_{n\geq 1} a_n \Im(z^{n}): a_n\in\mathbb{R} \text{ and } \vert a_n\vert^{1/n}\rightarrow 0\bigr\}. \end{align*} A similar description holds for other cones (not necessarily the half-plane $\mathcal{H}$) and Laplacian operators \eqref{eq:continuous_Laplacian} with general covariance matrix \begin{equation} \label{eq:covariance_matrix} \sigma=\left(\begin{array}{ll}\sigma_1&\sigma_{1,2}\\\sigma_{1,2}&\sigma_{2}\end{array}\right). \end{equation} In particular, for future use, when $D$ is the positive quadrant $\mathcal Q$, we have \begin{equation} \label{eq:basis_continuous_case} H(\mathcal Q)=\bigl\{\sum_{n\geq1} a_n h_n^\sigma: a_n\in\mathbb{R} \text{ and } \vert a_n\vert^{1/n}\rightarrow 0\bigr\}, \end{equation} where the functions $h_n^\sigma$ will be introduced later (see \eqref{eq:h_n^sigma}) and $\sigma$ denotes the covariance matrix \eqref{eq:covariance_matrix}. \subsection*{A glimpse of our results} Our main objective in the present paper is to prove that, in the discrete setting, a surprisingly simple equality of sets holds, analogous to \eqref{eq:basis_continuous_case}; see Theorem \ref{thm:main_intro-2} for a precise statement. A function $h:\mathbb Z^2\to\mathbb R$ is discrete harmonic (or preharmonic) in a domain $D\subset\mathbb Z^2$ with respect to the discrete Laplacian operator \begin{equation} \label{eq:def_Laplacian} \Delta(h)(i,j) = \sum_{k,\ell} p_{k,\ell} h(i+k,j+\ell)- h(i,j), \end{equation} the set of weights $\{p_{k,\ell}\}$ being fixed, if $\Delta(h)(i,j)=0$ for all $(i,j)\in D$. In this article, we start from the analytic approach of \cite{CoBo-83} and propose a systematic construction of discrete harmonic functions in the quarter plane (i.e., $D=\mathbb N^2=\{1,2,3,\ldots\}^2$) which vanish on the boundary axes. 
We go beyond the existing literature, in the sense that our construction works: \begin{enumerate}[label=(\roman{*}),ref={\rm(\roman{*})}] \item\label{item:neg_jumps}for walks with arbitrarily big (negative) jumps (see Figure \ref{fig:step_sets}), \item\label{item:neg_values}not only for positive, but also for signed discrete harmonic functions. \end{enumerate} The two features above illustrate the robustness of our theory. The constructive aspect follows from the fact that we obtain exact expressions for the generating functions of harmonic functions in terms of certain conformal mappings. \subsection*{Construction of preharmonic functions} First of all, we would like to review some results in the literature dedicated to constructions of discrete harmonic functions. Let us first mention elementary constructions of preharmonic functions on $\mathbb Z^d$. Discrete polynomials and discrete exponential functions are constructed (mostly iteratively) in \cite{Fe-44,He-49,Is-52,Du-56}; see also \cite{Mu-64} for a construction of preharmonic polynomials in terms of well-signed multinomials. Further examples arise when preharmonic functions are defined on sets having certain rigid structures. Picardello and Woess \cite{PiWo-92} prove that discrete harmonic functions defined on Cartesian products of Markov chains have a product form. In the case of Weyl chambers of type A, Eichelsbacher and K\"onig \cite{EiKo-08} prove that preharmonic functions can be expressed in terms of Vandermonde determinants. K\"onig and Schmid \cite{KoSc-10} demonstrate similar results for Weyl chambers of other types. Also in the presence of a Weyl chamber structure, Biane, Bougerol and O'Connell \cite{BiBoOC-05} compute the (continuous) harmonic function (namely, the survival probability) with the help of the reflection principle. Using the theory of analytic combinatorics in several variables \cite{PeWi-13}, Courtiel et al.\ \cite{CoMeMiRa-17} study drifted simple random walks in various (two-dimensional) wedges and show that the harmonic functions obey a rigid construction: namely, they are all obtained from a single function (again related to the reflection principle, see \cite[Eq.~(20)]{CoMeMiRa-17}), after some elementary operations (differentiation, division, evaluation). However, this technique seems to work only for random walks satisfying (a certain algebraic version of) the reflection principle. Roughly speaking, Martin boundary theory aims to describe the set of all positive harmonic functions, given a Laplacian operator. See \cite{KuMa-98,IR-08,IRLo-10,Ra-11,Ra-14,LeRa-16} for examples when the Laplacian is related to random walks killed on the boundary of a half-plane or quadrant. This theory only rarely yields a construction of harmonic functions. In the previously cited articles, the computation of the Martin boundary relies on an asymptotic study of quotients of Green functions. Let us underline, however, that Ignatiouk-Robert and Loree \cite{IR-08,IRLo-10} find an explicit formula for the exponential growth of all positive preharmonic functions in the Martin boundary. In another direction, it is natural to ask whether discrete and classical harmonic functions are related; see for instance the papers \cite{Va-09,Va-15} by Varopoulos. In \cite{DeWa-15}, Denisov and Wachtel prove that a certain natural preharmonic function (appearing as a prefactor in the persistence probability asymptotics) can be constructed by compensating the classical harmonic function, see \cite[Eq.~(5)]{DeWa-15}. 
Representation theory provides us with many examples of Markov processes for which explicit expressions of harmonic functions exist. For instance, preharmonic functions are expressed in terms of dimensions of irreducible representations in \cite{Bi-91,Bi-92}. Such results are quite powerful, but intrinsically limited to a few step sets. We mention here one final framework, which will be at the heart of the present work, and which relies on complex analysis techniques. As we shall see, the generating functions of preharmonic functions satisfy certain functional equations and, after further analysis, boundary value problems (BVPs). Solving these BVPs eventually yields exact expressions for the generating functions in terms of conformal mappings. The link between harmonic functions and conformal mappings is already illustrated in \cite{Ra-14,LeRa-16}, but we shall go much further here, by considering random walks with arbitrarily big negative jumps (see item \ref{item:neg_jumps} above), and by constructing not only positive preharmonic functions (item \ref{item:neg_values}); recall that in \cite{Ra-14,LeRa-16}, the attention is restricted to positive harmonic functions for small step walks. Obviously, the list of constructive approaches to preharmonic functions could be continued, as the latter are omnipresent in probability theory and statistical physics; see the book \cite{Pr-12} for recent illustrations. \begin{figure} \begin{center} \begin{tikzpicture}[scale=.7] \draw[->,white] (1,2) -- (-1,-2); \draw[->,white] (1,-2) -- (-1,1); \draw[->] (0,0) -- (0,-1); \draw[->] (0,0) -- (0,1); \draw[->] (0,0) -- (-1,0); \draw[->] (0,0) -- (1,0); \end{tikzpicture} \begin{tikzpicture}[scale=.7] \draw[->,white] (2,2) -- (-2,-2); \draw[->,white] (2,-2) -- (-2,2); \draw[->] (0,0) -- (0,-1); \draw[->] (0,0) -- (0,1); \draw[->] (0,0) -- (1,1); \draw[->] (0,0) -- (-1,-1); \draw[->] (0,0) -- (1,-1); \draw[->] (0,0) -- (-1,1); \draw[->] (0,0) -- (-1,0); \draw[->] (0,0) -- (1,0); \end{tikzpicture} \begin{tikzpicture}[scale=.7] \draw[->,white] (1.5,2) -- (-2,-2); \draw[->,white] (1.5,-2) -- (-2,2); \draw[->] (0,0) -- (1,1); \draw[->] (0,0) -- (0,1); \draw[->] (0,0) -- (1,0); \draw[->] (0,0) -- (-2,-1); \draw[->] (0,0) -- (-2,-2); \draw[->] (0,0) -- (-2,0); \draw[->] (0,0) -- (-1,-2); \draw[->] (0,0) -- (0,-2); \draw[->] (0,0) -- (1,-2); \draw[->] (0,0) -- (-2,1); \draw[->] (0,0) -- (-1,1); \draw[->] (0,0) -- (1,-1); \end{tikzpicture} \begin{tikzpicture}[scale=.7] \draw[->,white] (2.5,2) -- (-1.5,-2); \draw[->,white] (2,-2) -- (-1.5,2); \draw[->] (0,0) -- (-1,-1); \draw[->] (0,0) -- (0,-1); \draw[->] (0,0) -- (-1,0); \draw[->] (0,0) -- (2,1); \draw[->] (0,0) -- (2,2); \draw[->] (0,0) -- (2,0); \draw[->] (0,0) -- (1,2); \draw[->] (0,0) -- (0,2); \draw[->] (0,0) -- (-1,2); \draw[->] (0,0) -- (2,-1); \draw[->] (0,0) -- (1,-1); \draw[->] (0,0) -- (-1,1); \end{tikzpicture} \begin{tikzpicture}[scale=.7] \draw[->,white] (2,2) -- (-2.5,-2); \draw[->,white] (2,-2) -- (-2,2); \draw[->] (0,0) -- (0,-2); \draw[->] (0,0) -- (-2,2); \draw[->] (0,0) -- (2,1); \draw[->] (0,0) -- (-1,0); \draw[->] (0,0) -- (2,-1); \end{tikzpicture} \end{center} \caption{Various step sets. 
From left to right: the simple random walk; the king walk (with jumps to all eight nearest neighbors); an example with big negative jumps; an example with big positive jumps; an arbitrary model.} \label{fig:step_sets} \end{figure} \subsection*{Various analytic approaches} As we have seen above, some results on harmonic functions (structure of the Martin boundary, exact expressions, see \cite{Ra-14,LeRa-16}) have already been obtained using the analytic approach of \cite{FaIaMa-17}. Let us now say a few words about this method. It was initially developed by Malyshev \cite{Ma-72} and by Fayolle and Iasnogorodski \cite{FaIa-79} to study the stationary distribution of random walks in the quarter plane reflected at the boundary. The stationary distribution generating functions are shown to satisfy BVPs, from which explicit expressions (typically, contour integrals involving special functions) are deduced. See \cite[Chap.~5]{FaIaMa-17} for full details. Since then, the analytic approach of \cite{FaIaMa-17} has been applied in various contexts: queueing networks \cite{KuSu-03}, potential theory \cite{Ra-11,Ra-14,LeRa-16}, enumerative combinatorics (counting walks in the quarter plane) \cite{BMMi-10,KuRa-12,DrHaRoSi-18}. This approach is thus particularly fruitful and leads to precise results (both exact and asymptotic). However, the construction in \cite{FaIaMa-17} is restricted to the case of random walks with jumps to the eight nearest neighbors (see Figure \ref{fig:step_sets}). Its generalization to bigger jumps would require a precise understanding of the location of the branch points on a certain Riemann surface, see \cite{FaRa-15}. This is certainly possible for a few given examples, but the variety of possible behaviors makes us pessimistic about pursuing this direction to develop a general theory. We prefer here the alternative analytic approach of \cite{CoBo-83} by Cohen and Boxma (see also \cite{Co-92} by Cohen). The starting point is essentially the same (writing functional equations and BVPs for the generating functions) but, remarkably, the construction can be carried out for arbitrarily big (negative) jumps without increasing the level of complexity. The analytic approaches of \cite{FaIaMa-17} and \cite{CoBo-83} will be compared in Section \ref{sec:examples}. \subsection*{Signed harmonic functions} In most of the literature cited above, the focus is on positive harmonic functions, for clear probabilistic reasons, for example in relation to the Doob transform, or because many probabilistic estimates use positive harmonic functions. In this paper, we go further and look at signed harmonic functions. Our motivation is fourfold: first, this will allow us to study the structure of the set of harmonic functions (for instance, we shall see that the vector space of harmonic functions having bounded polynomial growth is finite-dimensional, and we will exhibit a basis). Our second motivation is that signed harmonic functions appear in various complete asymptotic expansions of relevant probabilistic or combinatorial quantities. To give a concrete example (related to this paper), let $K$ be a given cone of $\mathbb R^d$ and $\{Z(n)\}_{n\geq0}$ be a zero-mean random walk (with some moment assumptions). 
Then it is shown in \cite{DeWa-15} that, as $n\to \infty$, the survival probability is asymptotically equivalent to \begin{equation} \label{eq:one-term_asymp} \mathbb P_x(\tau_K>n)\sim h(x) n^{-\alpha}, \end{equation} where $x\in K$ is the starting point, $\tau_K$ is the first exit time of $\{Z(n)\}_{n\geq0}$ from the cone $K$, $h(x)$ is a harmonic function and $\alpha$ is a critical exponent. Assuming that \eqref{eq:one-term_asymp} may be refined as \begin{equation} \label{eq:many-terms_asymp} \mathbb P_x(\tau_K>n)\sim \sum_{i}h_i(x) n^{-\alpha_i}, \end{equation} it is shown in \cite{ChFuRa-20} that the $h_i(x)$ may be constructed from certain polyharmonic functions as well as from signed harmonic functions. Our third motivation comes from potential theory, where non-positive discrete harmonic functions turn out to be more difficult to study, compared to positive harmonic functions. Indeed, such classical tools as the Harnack inequality (heavily used in \cite{Mu-06,Va-99}, for instance) no longer hold. This makes any construction of such functions all the more relevant. Finally, there might be some interesting features regarding the nodal domains of these non-positive harmonic functions. Indeed, within the connected components of the complement of the nodal lines (the nodal domains), harmonic functions take a constant sign (by definition). Consider an example: the function $h(i,j)=ij(i-j)(i+j)$ is discrete harmonic for the usual Laplacian in dimension $2$ (probability $\frac{1}{4}$ to each of the four nearest neighbors): \begin{equation} \label{eq:def_Laplacian_usual} \Delta(h)(i,j) = \frac{1}{4}\bigl(h(i+1,j)+h(i,j-1)+h(i-1,j)+h(i,j+1)\bigr)- h(i,j). \end{equation} Its nodal lines are given by the two axes, the diagonal $\{i-j=0\}$ and the anti-diagonal $\{i+j=0\}$, see Figure \ref{fig:nodal_lines}; a symbolic verification of this example is sketched below. The same function $h$ is also a positive harmonic function with Dirichlet boundary condition in the octant $\{(i,j)\in\mathbb Z^2 : 0\leq i\leq j\}$. What would be the general structure of these nodal lines? \begin{figure}[ht] \includegraphics[width=0.2\textwidth]{NodalDomain2}\qquad\qquad \includegraphics[width=0.2\textwidth]{NodalDomain3} \caption{Nodal domains associated to the functions $ij(i-j)(i+j)$ and $ij(3i^4-10i^2j^2+3j^4-5i^2-5j^2+14)$, which are discrete harmonic for the operator \eqref{eq:def_Laplacian_usual}. On the right picture, the blue cone is tangent to the nodal lines at infinity.} \label{fig:nodal_lines} \end{figure} \subsection*{Preharmonic functions and their generating functions} Let $\{p_{k,\ell}\}_{(k,\ell)\in\mathbb Z^2}$ be non-negative weights (or transition probabilities) summing to $1$, such that: \begin{enumerate}[label=(H\arabic{*}),ref={\rm (H\arabic{*})}] \item\label{H1:jumps}The weights are symmetric, i.e., for all $k$ and $\ell$, $p_{k,\ell}=p_{\ell,k}$; \item\label{H2:jumps}Weights in the positive directions are small, i.e., $p_{k,\ell}=0$ if $k\geq2$ or $\ell\geq2$, while weights in the negative directions may be arbitrarily large, see Figure \ref{fig:step_sets}; \item\label{H3:jumps}If $\big\vert \sum p_{k,\ell}x^ky^\ell \big\vert=1$ and $\vert x\vert=\vert y\vert=1$, then $x=y=1$ (equivalently, the random walk on $\mathbb Z^2$ with increment distribution given by the $p_{k,\ell}$ is irreducible); \item\label{H4:jumps} The drift is zero, meaning that $\sum k p_{k,\ell}=\sum \ell p_{k,\ell}=0$. \end{enumerate} Consider now the associated discrete Laplacian operator \eqref{eq:def_Laplacian}, acting on complex-valued functions $h=\{h(i,j)\}_{(i,j)\in\mathbb Z^2}$. 
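As announced above, here is a quick symbolic check (ours, using SymPy) that the nodal-line example $h(i,j)=ij(i-j)(i+j)$ is indeed preharmonic for the operator \eqref{eq:def_Laplacian_usual}:
\begin{verbatim}
import sympy as sp

i, j = sp.symbols('i j')
h = i*j*(i - j)*(i + j)

# Discrete Laplacian with probability 1/4 to each nearest neighbor.
lap = (h.subs(i, i + 1) + h.subs(i, i - 1)
       + h.subs(j, j + 1) + h.subs(j, j - 1))/4 - h
print(sp.expand(lap))  # prints 0: h is preharmonic on the whole of Z^2
\end{verbatim}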
Our aim is to describe the functions $h$ which satisfy: \begin{enumerate}[label=(H\arabic{*}),ref={\rm (H\arabic{*})}]\setcounter{enumi}{4} \item\label{H1:harmonic}For all $(i,j)\in\mathbb Z^2$ with $i\leq0$ and/or $j\leq0$, $h(i,j)=0$; \item\label{H2:harmonic}For all $i,j\geq 1$, $\Delta(h)(i,j)=0$. \end{enumerate} Such functions are harmonic for the random walk on $\mathbb Z^2$ killed when exiting the quadrant. The generating function of the harmonic function $h$ is \begin{equation} \label{eq:generating_functions_harmonic_functions} H(x,y)= \sum_{i,j\geq 1} h(i,j) x^{i-1}y^{j-1}, \end{equation} and its sections are \begin{equation} \label{eq:generating_functions_harmonic_functions_uni} H(x,0)= \sum_{i\geq 1} h(i,1) x^{i-1} \quad\text{and}\quad H(0,y)= \sum_{j\geq 1} h(1,j) y^{j-1}. \end{equation} Finally, the kernel is \begin{equation} \label{eq:def_K} K(x,y)= xy\left(1-\sum p_{k,\ell}x^{-k}y^{-\ell}\right) =xy-\sum p_{k,\ell}x^{-k+1}y^{-\ell+1}. \end{equation} The kernel is a polynomial due to hypothesis \ref{H2:jumps} and is fully characterized by the jumps $\{p_{k,\ell}\}$. The function $H(x,y)$ satisfies the functional equation (which simply reflects the harmonicity relations) \begin{equation} \label{eq:functional_equation} K(x,y)H(x,y) = K(x,0) H(x,0) +K(0,y) H(0,y)-K(0,0) H(0,0). \end{equation} \subsection*{Main results} Equation \eqref{eq:functional_equation} implies that any generating series $H(x,y)$ of a harmonic function has the form \begin{equation} \label{eq:form_H} H(x,y)=\frac{F(x)+G(y)}{K(x,y)} \end{equation} for some power series $F,G\in\mathbb{C}[[t]]$. On the other hand, not all power series $F$ and $G$ are such that the right-hand side of \eqref{eq:form_H} is a bivariate power series (indeed, in the case $p_{1,1}=0$, notice that $K(0,0)=0$). From that point of view, we will answer the following questions: \begin{itemize} \item For which power series $F$ and $G$ is the function $H$ in \eqref{eq:form_H} a power series? (Remark that in the case $p_{1,1}\neq 0$, this is always the case, as $K(0,0)\neq0$.) \item In case the function $H$ is a bivariate power series, is it analytic in a neighbourhood of $(0,0)$? What is the associated radius of convergence? \item Which choice of $F,G\in\mathbb{C}(t)$ guarantees that the generating function $H$ has positive coefficients? \end{itemize} In order to state our main results, we need to introduce a certain curve as well as two related conformal mappings. This step is the basis of our entire analysis and is inspired by the books \cite{CoBo-83,Co-92}. To do so, we first introduce the domain \begin{equation} \label{eq:main_domain} \mathcal{K}=\{(x,y)\in\mathbb C^2 : K(x,y)=0 \text{ and } \vert x\vert = \vert y\vert \leq 1\}. \end{equation} As it turns out (more details are to come in Section \ref{sec:func_eq}), its projection along the first coordinate defines a curve $\mathcal S_1$ which is closed, non-self-intersecting, symmetric with respect to the real axis and contains $1$; see Figure \ref{fig:some_curves} for a few examples. The bounded (resp.\ unbounded) domain whose boundary is $\mathcal S_1$ is denoted by $\mathcal S_1^+$ (resp.\ $\mathcal S_1^-$). Let also $\mathcal{H}^+$ (resp.\ $\mathcal{H}^-$) denote the interior of the right (resp.\ left) half-plane. Let $\psi_{1}$ be a conformal mapping from $\mathcal S_1^+$ to $\mathcal{H}^+$ with the conditions that $\psi_{1}(0)=p_{1,1}$ and $\psi_{1}'(0)>0$. 
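Before turning to the main results, here is a small symbolic sanity check of the functional equation \eqref{eq:functional_equation} (our sketch, using SymPy): for the simple random walk, $h(i,j)=ij$ is a classical positive harmonic function vanishing on the axes, and its generating function is $H(x,y)=(1-x)^{-2}(1-y)^{-2}$.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

# Simple random walk: p_{1,0} = p_{-1,0} = p_{0,1} = p_{0,-1} = 1/4.
K = sp.expand(x*y*(1 - (x + 1/x + y + 1/y)/4))

H   = 1/((1 - x)**2*(1 - y)**2)   # generating function of h(i,j) = i*j
Hx0 = 1/(1 - x)**2                # section H(x,0)
H0y = 1/(1 - y)**2                # section H(0,y)

lhs = K*H
# H(0,0) = h(1,1) = 1 in the last term of the functional equation.
rhs = K.subs(y, 0)*Hx0 + K.subs(x, 0)*H0y - K.subs([(x, 0), (y, 0)])*1
print(sp.simplify(lhs - rhs))  # prints 0
\end{verbatim}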
Finally, introduce for $n\geq 0$ the polynomial \begin{equation} \label{eq:def_P_n} P_{n}=\bigl(X^2-p_{1,1}^2\bigr)^{\lfloor n/2\rfloor}X^{n\,[2]}, \end{equation} where $n\,[2]$ stands for $n$ modulo $2$. The family $\{ P_{n}\}_{n\geq 0}$ is a basis of $\mathbb{R}[X]$. The first few polynomials $P_n$ are then: \begin{equation*} P_0=1,\quad P_1=X,\quad P_2=X^2-p_{1,1}^2,\quad P_{3}=X^3-p_{1,1}^2X,\quad \text{etc.} \end{equation*} \begin{figure} \begin{center} \includegraphics[height=4.0cm]{SRW1}\quad \includegraphics[height=4.0cm]{SRW}\quad \includegraphics[height=4.0cm]{Tandem} \end{center} \caption{The curve $\mathcal S_1$ for the simple random walk (left). It is included in the parametric curve \eqref{eq:expression_S1_SRW} with self-intersection (middle). On the right, the parametric curve for the (non-symmetric) walk with jumps $(-1,1),(1,0),(0,-1)$. The curve $\mathcal S_1$ is the intersection with the unit disk.} \label{fig:some_curves} \end{figure} \begin{theorem} \label{thm:main_intro-1} For any $n\geq 1$, the function $\frac{P_n(\psi_{1}(x))-P_n(-\psi_{1}(y))}{K(x,y)}$ defines a bivariate power series \begin{equation} \label{eq:expression_H_n} H_{n}(x,y)=\sum_{i,j\geq 1}h_n(i,j) x^{i-1}y^{j-1}=\frac{P_{n}(\psi_{1}(x))-P_{n}(-\psi_{1}(y))}{K(x,y)}, \end{equation} which satisfies the following properties: \begin{enumerate}[label=\textnormal{(\arabic{*})},ref=\textnormal{(\arabic{*})}] \item\label{thm:main_intro-1:it1}Its Taylor coefficients $\{h_n(i,j)\}_{i,j\geq 1}$ form a discrete harmonic function for the Laplacian operator \eqref{eq:def_Laplacian}. \item\label{thm:main_intro-1:it2}$H_n$ is analytic for $x,y\in \mathcal{S}_1^+$. \item\label{thm:main_intro-1:it3}Denote by $h_{n}^{\sigma}$ the continuous harmonic function as in \eqref{eq:basis_continuous_case} or \eqref{eq:h_n^sigma}, with $\sigma$ being the covariance matrix \eqref{eq:covariance_matrix} associated to the transition probabilities $p_{k,\ell}$, and set \begin{equation} \label{eq:angle_at_1} \theta = \arccos\frac{-\sigma_{12}}{\sqrt{\sigma_{1}\sigma_{2}}}= \arccos\frac{-\sum k\ell p_{k,\ell}}{\sqrt{\sum k^2p_{k,\ell}}\sqrt{\sum \ell^2p_{k,\ell}}}. \end{equation} One has $\frac{1}{m^{n\pi/\theta+1}}h_{n}(\lfloor mx\rfloor ,\lfloor my\rfloor)\to h_{n}^{\sigma}(x,y)$ as $m$ goes to infinity, in the sense of their Laplace transforms. \end{enumerate} \end{theorem} Our second main result shows that the harmonic functions $h_n$ introduced in Theorem~\ref{thm:main_intro-1} actually describe the whole space of discrete harmonic functions. \begin{theorem} \label{thm:main_intro-2} The space of discrete harmonic functions is isomorphic to the vector space of formal power series $\mathbb{R}_{0}[[t]]$ with vanishing constant term, through the isomorphism \begin{equation*} \Phi:\left\lbrace\begin{matrix} \mathbb{R}_0[[t]]&\longrightarrow&H(\mathbb N^2)\\ \sum_{n\geq 1}a_{n}t^n&\longmapsto& \sum_{n\geq 1}a_{n}h_n. \end{matrix}\right. \end{equation*} \end{theorem} Various examples illustrating Theorems \ref{thm:main_intro-1} and \ref{thm:main_intro-2} will be given in Section \ref{sec:examples}. Let us now describe the main features of these theorems: \begin{itemize} \item In the case $p_{1,1}= 0$, Theorems \ref{thm:main_intro-1} and \ref{thm:main_intro-2} imply that a function $\{h(i,j)\}_{i,j\geq 1}$ with generating function $H(x,y)$ is harmonic if and only if there exists a formal power series $F=\Phi^{-1}(h)$ such that \begin{equation} \label{eq:main-intro-2} H(x,y)=\frac{F(\psi_{1}(x))-F(-\psi_{1}(y))}{K(x,y)}. 
\end{equation} In the case $p_{1,1}\neq 0$, the expression of $H$ in terms of $\Phi^{-1}(h)$ is not so direct, and \eqref{eq:main-intro-2} should be replaced by \eqref{eq:form_H} for some functions $F$ and $G$ depending on $\Phi^{-1}(h)$ in a non-trivial way. In the framework of this article, we say that a function $F$ characterizes a harmonic function $h$ if its generating function can be written as \eqref{eq:main-intro-2}. \item Part of the content of Theorem \ref{thm:main_intro-2} is that for any formal power series $F$ (even with zero radius of convergence), formula \eqref{eq:main-intro-2} defines a formal power series (see our Lemmas \ref{lemma:triangle_lemma} and \ref{lemma:rectangle_lemma}) and, ultimately, a discrete harmonic function. \item This correspondence between harmonic functions and formal power series may be refined as follows: writing $h(i,j)=\frac{h(i,j)+h(j,i)}{2}+\frac{h(i,j)-h(j,i)}{2}$, one may decompose a harmonic function as the sum of a symmetric harmonic function and an anti-symmetric harmonic function. Then, in Theorem \ref{thm:main_intro-2}, symmetric (resp.\ anti-symmetric) harmonic functions correspond to even (resp.\ odd) power series $F(t)=\sum_{n\geq 1}a_{n}t^n$, see Section~\ref{sec:sym_antisym}. \item In Equation \eqref{eq:main-intro-2}, the only dependence on the model $\{p_{k,\ell}\}$ lies in the conformal mapping $\psi_{1}$. The fact that we may take any power series $F$ is independent of the model and therefore appears as a universal feature. \item It is known from \cite{BoMuSi-15,IR-20,DuRaTaWa-20} that there exists a unique function which is both positive and harmonic for the Laplacian operator \eqref{eq:def_Laplacian}. We conjecture (and provide some evidence) that this unique positive harmonic function corresponds to $F(t)=t$ in \eqref{eq:main-intro-2}. In this sense, positivity of the harmonic function corresponds to minimality in terms of polynomial degree. \begin{conj} \label{conj:pos} The unique (up to multiplicative factors) positive harmonic function associated to a Laplacian operator \eqref{eq:def_Laplacian} satisfying \ref{H1:jumps}--\ref{H4:jumps} is $h_1$ in Theorem \ref{thm:main_intro-1}. \end{conj} \item The polynomial growth of the unique positive harmonic function can be read off from the corner point of the curve $\mathcal S_1$ at $1$ (see Figure \ref{fig:some_curves}), or equivalently from the singularity of the conformal mapping $\psi_{1}$ at $1$. \item The vector space of harmonic functions with growth bounded by some constant $g$ is finite-dimensional, and describing it amounts to taking $F\in \mathbb C_n[t]$ for some $n$ related to $g$. \item A much weaker statement of Theorem \ref{thm:main_intro-1} (small step random walks and only $F(t)=t$) is given in \cite{Ra-14}; see Section \ref{sec:uniformization} for a more detailed comparison between our approach and the one in \cite{Ra-14}. \item The conformal mapping $\psi_{1}$ is uniquely characterized by the Riemann mapping theorem. We show how to obtain its explicit expression in the small step case, as well as for a few models with larger steps. However, in general, despite the particular structure of the set \eqref{eq:main_domain}, there is little hope of deriving concrete expressions for these conformal mappings. \item We will also explain how, given a (finite or infinite) number of boundary values, we may construct (and give a formula for) a harmonic function $h$ having these values. 
\item Interestingly, by Theorem \ref{thm:main_intro-2}, one can transfer the algebra structure of $\mathbb{R}_0[[t]]$ into an algebra structure on the space of harmonic functions $H(\mathbb{N}^2)$. Formally, the multiplication is defined on $H(\mathbb{N}^2)$ by the formula \begin{equation*} h\cdot h'=\Phi\bigl(\Phi^{-1}(h)\Phi^{-1}(h')\bigr),\quad \forall h,h'\in H(\mathbb{N}^2). \end{equation*} This implies in particular that $h_n\cdot h_m=h_{n+m}$, and that $H(\mathbb{N}^2)$ is generated as an algebra by the (conjecturally) unique positive harmonic function $h_1$. \end{itemize} \subsection*{Acknowledgments} We would like to thank \'Eric Fusy for enlightening discussions concerning Section~\ref{sec:SRW}. We further thank Gerold Alsmeyer, Onno Boxma and Fran\c cois Chapon for interesting discussions. \section{Study of the kernel} \label{sec:func_eq} The main objective of this section is the description of the set $\mathcal{K}$ introduced in \eqref{eq:main_domain}. To this end, in Section~\ref{subsec:preliminary}, we first recall from \cite{CoBo-83} some key properties of the two-variable polynomial $K$ defined in \eqref{eq:def_K}. We also give a few elementary properties of the curve \eqref{eq:main_domain}. In Section~\ref{subsec:corner}, we prove a new result on the existence of a corner point of the curve \eqref{eq:main_domain} and compute the associated angle. This angle turns out to be very important, as we will prove later that it is strongly related to the growth of harmonic functions, as stated in Theorem \ref{thm:main_intro-1} \ref{thm:main_intro-1:it3}. Throughout the manuscript, $\mathcal{C}$ will denote the unit circle, and $\mathcal{C}^+$ (resp.\ $\mathcal{C}^-$, $\overline{\mathcal{C}^{+}}$) will stand for the interior (resp.\ exterior, resp.\ closed interior) domain, i.e., the open unit disk (resp.\ the complex plane deprived of the closed unit disk, resp.\ the closed unit disk). We shall also denote the bidisk by $\mathcal{U}=\mathcal{C}^+\times\mathcal{C}^+=\lbrace (x,y)\in \mathbb{C}^2:\vert x\vert< 1,\vert y\vert< 1\rbrace$. \subsection{Preliminary results on the solutions of the kernel equation} \label{subsec:preliminary} To describe the domain $\mathcal K$ in \eqref{eq:main_domain}, we write \begin{equation} \label{eq:x_y_eta_s} x = \eta s\quad \text{and}\quad y = \eta s^{-1}, \end{equation} with $\eta\in\mathbb{C}$ and $\vert s\vert=1$. By \eqref{eq:def_K}, we have \begin{equation} \label{eq:kernel_eq_eta_s} K(\eta s,\eta s^{-1}) = \eta ^2-\sum p_{k,\ell}\eta ^{-k-\ell+2}s^{-k+\ell}. \end{equation} The lemma hereafter presents some properties of the roots of the kernel \eqref{eq:kernel_eq_eta_s}. Remark that in Lemma \ref{lemma:kernel} below, and in many other statements as well, two cases will be considered separately, according to whether $p_{1,1}=0$ or not. \begin{lemma} \label{lemma:kernel} Assume \ref{H1:jumps}--\ref{H4:jumps}. For $\vert s\vert=1$, the equation $K(\eta s ,\eta s^{-1})=0$ admits exactly two solutions in $\overline{\mathcal{C}^+}$. Moreover, the following assertions hold: \begin{enumerate}[label=\textnormal{(\roman{*})},ref=\textnormal{(\roman{*})}] \item\label{item:kernel_p11=0}If $p_{1,1}= 0$, one solution is identically zero and the other one, denoted by $\eta(s)$, is real and varies in $[-1,1]$. Furthermore, $\eta(1)=1$, $\eta(-1)=-1$ and $\vert\eta(s)\vert<1$ for all $\vert s\vert=1$ with $s\neq \pm 1$. 
\item\label{item:kernel_p11_not=0} If $p_{1,1}\neq 0$, the two solutions, denoted by $\eta_1(s)$ and $\eta_2(s)$, are real and satisfy, for all $\vert s\vert =1$, $\eta_2(s) = -\eta_1(-s)$. Further, $\eta_1(1)=1$ and $\eta_1(s)\in(0,1)$ for all $\vert s\vert=1$ with $s\neq 1$. \end{enumerate} \end{lemma} Although the proof of Lemma \ref{lemma:kernel} may be found in the book \cite{CoBo-83}, we recall here the details, for the sake of completeness. \begin{proof} We first present the proof of \ref{item:kernel_p11=0}. When $p_{1,1}=0$, a term $\eta$ may be factorised out of the kernel \eqref{eq:kernel_eq_eta_s}, so one of the roots is identically equal to zero. More precisely, we have $K(\eta s,\eta s^{-1})=\eta \widetilde{K}(\eta,s)$, where \begin{equation} \label{eq:K-tilde} \widetilde{K}(\eta,s)=\eta -\sum p_{k,\ell}\eta^{-k-\ell+1}s^{-k+\ell}. \end{equation} By Assumption \ref{H3:jumps}, one knows that if $(\eta s,\eta s^{-1})$ is a root of ${K}$ with $\vert\eta\vert=\vert s\vert =1$, then $\eta=s=1$ or $\eta=s=-1$. Therefore, for all $\vert s\vert =1$ with $s\neq \pm 1$, one has the following strict inequality: \begin{equation} \label{eq:before_Rouche} \left\vert\sum p_{k,\ell} \eta^{-k-\ell+1} s^{-k+\ell}\right\vert < 1 = \vert \eta \vert. \end{equation} It readily follows from Rouch\'e's theorem that, as a function of $\eta$, $\widetilde{K}(\eta,s)$ has a unique zero in $\mathcal{C}^+$, denoted by $\eta(s)$. Since $\widetilde{K}(1,s)\geq 0$ and $\widetilde{K}(-1,s)\leq 0$, the function $\widetilde{K}(\cdot,s)$ has a real zero in $[-1,1]$; by uniqueness, this zero is $\eta(s)$, which is therefore real. The previous argument does not work for $s=\pm 1$, because \eqref{eq:before_Rouche} becomes an equality at this point. We now consider $s=1$. Fix $r\in (0,1)$ and consider the polynomial \begin{equation*} \widetilde{K}_r(\eta)=\eta -r\sum p_{k,\ell}\eta^{-k-\ell+1}. \end{equation*} Rouch\'e's theorem yields that $\widetilde{K}_r$ has a unique root $\eta_r(1)\in\mathcal{C}^+$. Further, Assumption~\ref{H4:jumps} ensures that $\widetilde{K}_r$ is increasing on $[0,1]$, with $\widetilde{K}_r(0)=-r(p_{1,0}+p_{0,1})<0$ and $\widetilde{K}_r(1)=1-r>0$. Moreover, $\eta_r(1)\in(0,1)$ and converges to $1$ as $r\to 1$. By continuity arguments, one has $\eta(1)=1$. At the point $s=-1$, notice that for all $\eta$ and $s$, one has $\widetilde{K}(-\eta,-s) = -\widetilde{K}(\eta,s)$. This implies that $\eta(-1)=-1$. We move to the proof of \ref{item:kernel_p11_not=0}. Fix $\vert s\vert=1$ with $s\neq \pm 1$; the case $s=-1$ will be handled at the end of the proof. Since for all $\vert\eta\vert=1$, one has \begin{equation*} \left\vert\sum p_{k,\ell} \eta^{-k-\ell+2} s^{-k+\ell} \right\vert < 1 = \vert \eta^2 \vert, \end{equation*} Rouch\'e's theorem shows that $K(\eta s,\eta s^{-1})$ admits two zeros $\eta_1(s)$ and $\eta_2(s)$ in $\mathcal{C}^+$. These quantities are real, as $K(\eta s,\eta s^{-1})$ is non-negative at $1$ and $-1$, but negative at $0$. So one of them (say $\eta_1(s)$) is positive, and the other one ($\eta_2(s)$) is negative. Looking now at the point $s=1$, we set for $r\in(0,1)$ \begin{equation*} K_r(\eta) = \eta^2 - r\sum p_{k,\ell}\eta^{-k-\ell+2}. \end{equation*} Rouch\'e's theorem implies that $K_r$ has two solutions in $\mathcal{C}^+$, say $\eta_{1,r}(1)$ and $\eta_{2,r}(1)$. Notice that $K_r$ is positive at $\pm 1$, negative at $0$, so $\eta_{1,r}(1)\in (0,1)$ and $\eta_{2,r}(1)\in (-1,0)$. Since $K_r'$ is concave on $[0,1]$, with $K_r'(0)=-r(p_{1,0}+p_{0,1})\leq 0$ and $K_r'(1)=2(1-r)>0$, there exists $\eta_0\in[0,1)$ such that $K_r'(\eta_0)=0$.
Hence, $K_r$ is decreasing on $[0,\eta_0]$, increasing on $[\eta_0,1]$, and $\eta_{1,r}(1)\in (\eta_0,1)$. As $r\to 1$, $\eta_{1,r}(1)\to 1$, while $\eta_{2,r}(1)$ goes to a point in $(-1,0)$. Hence, $K_1$ has two solutions in $\overline{\mathcal{C}^+}$, which are $\eta_1(1)=1$ and $\eta_2(1)\in (-1,0)$. To conclude, we prove that for all $\vert s\vert=1$, $\eta_2(s)=-\eta_1(-s)$. Equation $K(\eta s,\eta s^{-1})=0$ may be rewritten as \begin{equation} \label{eq:quadratic_eq_kernel} \left(1-\sum_{k+\ell\leq 0}p_{k,\ell}\eta^{-k-\ell}s^{-k+\ell}\right)\eta^2-(p_{1,0}s^{-1}+p_{0,1}s)\eta - p_{1,1}=0. \end{equation} Notice that if $\eta(s)$ is a solution of \eqref{eq:quadratic_eq_kernel}, then $-\eta(-s)$ is the other solution. Hence, our claim follows; in particular, the case $s=-1$ reduces, through this relation, to the case $s=1$. \end{proof} It is worth mentioning that in the case $p_{1,1}\neq 0$, the domain \eqref{eq:main_domain} does not depend on the choice of $\eta_1(s)$ and $\eta_2(s)$. Hereafter, we shall write $\eta(s)$ for $\eta_1(s)$ in the case $p_{1,1}\not=0$. Based on Lemma \ref{lemma:kernel} and on the notation just introduced, we may now properly define the domain \eqref{eq:main_domain}: \begin{equation*} \mathcal{K}=\{ (\eta(s)s,\eta(s)s^{-1}):\vert s\vert=1 \}=\{ (\eta(s)s,\overline{\eta(s)s}):\vert s\vert=1 \}, \end{equation*} with the bar denoting the complex conjugation. Introduce the curves \begin{equation*} \mathcal{S}_1=\{\eta(s)s : \vert s\vert=1\} \quad \text{and}\quad \mathcal{S}_2=\{\eta(s)s^{-1}:\vert s\vert=1\}, \end{equation*} which are obtained by projecting $\mathcal{K}$ onto the first and second coordinates, respectively. See Figure \ref{fig:some_curves} for a few examples. Obviously, an equivalent description of $\mathcal{S}_1$ (as a parametrized curve) is \begin{equation*} \mathcal{S}_1=\{(\eta(e^{it})\cos t,\eta(e^{it})\sin t) : t\in[0,2\pi)\}, \end{equation*} where we recall that $\eta(e^{it})$ is real, and that the interval $[0,2\pi)$ may be replaced by $[0,\pi)$ in the case $p_{1,1}= 0$. For example, in the case of the simple random walk, one has \begin{equation} \label{eq:expression_S1_SRW} \mathcal{S}_1 = \{(1-\sin t,(1-\sin t)\tan t): t\in [0,\pi)\}, \end{equation} see Figures \ref{fig:some_curves} and \ref{fig:def_theta}. We also emphasize that if the support of the $p_{k,\ell}$ is bounded, then the function $\eta(s)$ is an algebraic function of $s$. \begin{lemma} \label{lemma: properties of S1 S2'} Assume \ref{H1:jumps}--\ref{H4:jumps}. The following assertions hold: \begin{enumerate}[label=\textnormal{(\roman{*})},ref=\textnormal{(\roman{*})}] \item\label{it:S1-i}If $p_{1,1}=0$, then $0\in\mathcal{S}_1$ and for all $\vert s\vert=1$, $\eta(-\overline{s})=-\eta(s)$. The maps $s\mapsto\eta(s)s$ and $s\mapsto \eta(s)s^{-1}$ are two-to-one from $\mathcal{C}$ to $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively. \item\label{it:S1-ii}If $p_{1,1}\neq 0$, then $0\in\mathcal{S}_1^+$ and for all $\vert s\vert=1$, $\eta(s)>0$ and $\eta(\overline{s})=\eta(s)$. The maps $s\mapsto\eta(s)s$ and $s\mapsto \eta(s)s^{-1}$ are one-to-one from $\mathcal{C}$ to $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively. \item\label{it:S1-iii}The curves $\mathcal{S}_1$ and $\mathcal{S}_2$ are equal, non-self-intersecting and symmetric with respect to the real axis. If $s$ traverses $\mathcal{C}$ counterclockwise, then $\eta(s)s$ traverses $\mathcal{S}_1$ counterclockwise and $\eta(s)s^{-1}$ traverses $\mathcal{S}_2$ clockwise.
\end{enumerate} \end{lemma} \begin{proof} Item \ref{it:S1-iii} is a direct consequence of \ref{it:S1-i} and \ref{it:S1-ii}, and we start with the proof of \ref{it:S1-i}. By Assumption \ref{H1:jumps} and our notation \eqref{eq:K-tilde}, we have for $s={e}^{{i}\lambda}$: \begin{equation*} \widetilde{K}(\eta, {e}^{{i}\lambda}) = \eta-\sum p_{k,\ell}\eta^{-k-\ell+1}\cos((-k+\ell)\lambda). \end{equation*} It is seen that if $\eta=\eta({e}^{{i}\lambda})$ is a root of the above polynomial, then so is $-\eta(-{e}^{{-i}\lambda})$. By uniqueness of the solution, we must have $\eta(s)=-\eta(-\overline{s})$. Moving to the proof of \ref{it:S1-ii}, we first have \begin{equation} \label{eq:formula_K_ii} {K}(\eta{e}^{{i}\lambda},\eta{e}^{{-i}\lambda})= \eta^2-\sum p_{k,\ell}\eta^{-k-\ell+2}\cos((-k+\ell)\lambda), \end{equation} see \eqref{eq:kernel_eq_eta_s}. It is seen that if $\eta({e}^{{i}\lambda})$ is a zero of \eqref{eq:formula_K_ii}, then so is $\eta({e}^{{-i}\lambda})$. Hence $\eta(\overline{s})=\eta(s)$, both quantities being positive (see Lemma \ref{lemma:kernel} and its proof). \end{proof} \subsection{Corner point of the curve $\mathcal{S}_1$ at $1$} \label{subsec:corner} As it will be crucial in the next sections, we study the precise shape of the curve at the point $1$. It should be noted that this single point $1$ contains all the information about the growth of harmonic functions. Throughout the paper, we write $V[K]$ for the set of zeros of $K$ in $\mathbb C^2$. We also recall here that a point of $V[K]$ is singular if and only if $\frac{\partial K}{\partial x}$ and $\frac{\partial K}{\partial y}$ simultaneously vanish at that point. \begin{lemma} \label{lemma:angle_at_1} Assume \ref{H1:jumps}--\ref{H4:jumps}. The curve $\mathcal{S}_1$ admits a corner point at $1$ with interior angle $\theta$ given by \eqref{eq:angle_at_1}, and is smooth elsewhere. Moreover, $\mathcal{K}\setminus\{(1,1)\}$ consists of non-singular points of $V[K]$. \end{lemma} In other words, the value of the angle at the corner point is given by the arccosine of the opposite of the correlation coefficient. See Figure~\ref{fig:def_theta} for an illustration of Lemma \ref{lemma:angle_at_1}.
\begin{figure} \hspace{-10mm} \begin{tikzpicture}[scale=2.5] \draw[gray,very thin] (-1.1,-1.1) grid (1.1,1.1) [step=0.25cm] (-1,-1) grid (1,1); \draw[->] (-1.1,0) -- (1.1,0) node[right] {$1$}; \draw[->] (0,-1.1) -- (0,1.1) node[above] {$1$}; \textcolor{red}{\qbezier(59,-10)(47,0)(59,10)} \draw[thick,variable=\t,domain=0:180,samples=50,blue] plot ({1-sin(\t)},{tan(\t)-sin(\t)*tan(\t)}); \put(45,5){$\theta$} \put(30,25){$\mathcal S_1$} \end{tikzpicture} \begin{tikzpicture}[scale=2.5] \draw[gray,very thin] (-1.1,-1.1) grid (1.1,1.1) [step=0.25cm] (-1,-1) grid (1,1); \draw[->] (-1.1,0) -- (1.1,0) node[right] {$1$}; \draw[->] (0,-1.1) -- (0,1.1) node[above] {$1$}; \textcolor{red}{\qbezier(59,-10)(47,0)(59,10)} \draw[thick,variable=\t,domain=0:180,samples=50,blue] plot ({(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t))-sqrt((-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))*(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))-4))*cos(\t)/2},{(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t))-sqrt((-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))*(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))-4))*sin(\t)/2}); \draw[thick,variable=\t,domain=0:180,samples=50,blue] plot ({(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t))-sqrt((-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))*(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))-4))*cos(\t)/2},{-(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t))-sqrt((-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))*(-cos(\t)+sqrt(12-3*cos(\t)*cos(\t)))-4))*sin(\t)/2}); \put(45,5){$\theta$} \put(30,25){$\mathcal S_1$} \end{tikzpicture} \caption{The curve $\mathcal S_1$ for the simple walk (left) and for the king walk (right), and definition of the angle $\theta$. For the two models above, $\theta=\frac{\pi}{2}$.} \label{fig:def_theta} \end{figure}
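As a worked instance of \eqref{eq:angle_at_1}, consider the two models of Figure~\ref{fig:def_theta}. For the simple walk as well as for the king walk, the increments are uncorrelated, i.e., $\sum k\ell p_{k,\ell}=0$, so that \begin{equation*} \cos \theta = \frac{-\sum k\ell p_{k,\ell}}{\sqrt{\sum k^2p_{k,\ell}}\sqrt{\sum \ell^2p_{k,\ell}}}=0, \qquad \text{i.e.,}\qquad \theta=\frac{\pi}{2}, \end{equation*} in accordance with the right angles displayed on Figure~\ref{fig:def_theta}.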
\begin{proof} We start with the case $p_{1,1}=0$ and first prove that $\mathcal{K}\setminus\{(1,1)\}$ consists of smooth points of $V[K]$. Suppose that $\widetilde{K}(\eta_0,s_0)=0$, with $\vert\eta_0\vert<1$, $\vert s_0\vert=1$ and $s_0\not=\pm 1$. Then, by the Weierstrass preparation theorem for analytic functions in several variables (whose statement is recalled in Appendix~\ref{sec:app_W}, see also \cite{Gu-65}), there exist $r\geq 1$, a neighbourhood $V$ around $(\eta_0,s_0)$ (whose projection along the second coordinate is denoted by $P_s(V)$), $r$ analytic functions $g_0,\ldots,g_{r-1}$ on $P_s(V)$ vanishing only at $s_0$ and a non-vanishing analytic function $h$ on $V$ such that \begin{equation*} \widetilde{K}(\eta,s)=h(\eta,s)\bigl((\eta-\eta_0)^r+g_{r-1}(s)(\eta-\eta_0)^{r-1}+\cdots+g_{0}(s)\bigr). \end{equation*} Hence, for all $s$ close to but different from $s_0$, there are $r$ distinct solutions to the equation $\widetilde{K}(\eta,s)=0$. Since for $s\in\mathcal{C}$ in a neighbourhood of $s_0$, there is a unique solution to the equation $\widetilde{K}(\eta,s)=0$, we must have $r=1$. In particular, \begin{equation*} \frac{\partial\widetilde{K}}{\partial \eta}(\eta_0,s_0)=h(\eta_0,s_0)\not=0, \end{equation*} and for $\eta_0\neq 0$, \begin{equation*} \frac{\partial K}{\partial \eta}(\eta_0s_0,\eta_0s_0^{-1})=\eta_0\frac{\partial\widetilde{K}}{\partial \eta}(\eta_0,s_0) \neq 0. \end{equation*} Since \begin{equation*} \frac{\partial K}{\partial \eta}=\frac{\partial x}{\partial \eta}\frac{\partial K}{\partial x}+\frac{\partial y}{\partial \eta}\frac{\partial K}{\partial y} = s\frac{\partial K}{\partial x}+s^{-1}\frac{\partial K}{\partial y}, \end{equation*} we must have either $\frac{\partial K}{\partial x}$ or $\frac{\partial K}{\partial y}$ non-zero on $\mathcal{K}\setminus \{(1,1)\}$. For $\eta_0=0$, it is easily seen that $\frac{\partial K}{\partial x}(0,0)\neq 0$. The claim then follows. We move to the proof of the smoothness of the curve $\mathcal{S}_1\setminus\{1\}$. Differentiating the identity $\widetilde{K}(\eta(s),s)=0$, see \eqref{eq:K-tilde}, we obtain \begin{equation} \label{eq:derivative_kernel_1} \eta'(s)\frac{\partial\widetilde{K}}{\partial\eta}(\eta(s),s) - \frac{\partial\widetilde{K}}{\partial s}(\eta(s),s) = 0. \end{equation} It is seen that for $\vert s\vert =1$ and $s\neq \pm 1$, $\eta'(s)$ exists and is finite. Now we set $s={e}^{{i}\lambda}$, with $\lambda\in (0,\pi)$. Since both $\lambda$ and $\eta({e}^{{i}\lambda})$ are real, so is $(\eta({e}^{{i}\lambda}))'$. Moreover, $\eta({e}^{{i}\lambda})$ and $(\eta({e}^{{i}\lambda}))'$ are not simultaneously zero, which leads to \begin{equation*} (\eta({e}^{{i}\lambda}){e}^{{i}\lambda})'= {e}^{{i}\lambda}\bigl( (\eta({e}^{{i}\lambda}))' + {i}\eta({e}^{{i}\lambda}) \bigr)\neq 0. \end{equation*} In other words, $\mathcal{S}_1\setminus\{1\}$ is smooth. We now deal with the corner point $1$. We differentiate again \eqref{eq:derivative_kernel_1} and evaluate the new equation at $s=1$, which leads to \begin{equation} \label{eq:derivative_kernel_3} \left(\sum p_{k,\ell}(k+\ell)^2\right)\eta'(1)^2 + \sum p_{k,\ell}(k-\ell)^2 =0. \end{equation} Since the equation \eqref{eq:derivative_kernel_3} has two distinct roots, $\eta(s)$ is semi-differentiable at $s=1$. Further, the roots of \eqref{eq:derivative_kernel_3} represent the left and right derivatives of $\eta$ at $1$. Let $\partial_+\eta(1)$ denote the right derivative of $\eta$ at $1$, i.e., \begin{equation*} \partial_+\eta(1) = \lim_{\lambda\to 0^+} \frac{\eta({e}^{{i}\lambda})-\eta(1)}{{e}^{{i}\lambda}-1}. \end{equation*} From Lemma \ref{lemma: properties of S1 S2'}, we know that $\eta({e}^{{i}\lambda})\to 1^-$ as $\lambda\to 0^+$. Hence, by \eqref{eq:derivative_kernel_3}, one has \begin{equation*} \partial_+\eta(1) = {i}\sqrt{\frac{\sum p_{k,\ell}(k-\ell)^2}{\sum p_{k,\ell}(k+\ell)^2}}. \end{equation*} We then have \begin{equation*} (\partial_+\eta(s)s)\vert_{s=1}=\partial_+\eta(1) + \eta(1) = 1+{i}\sqrt{\frac{\sum p_{k,\ell}(k-\ell)^2}{\sum p_{k,\ell}(k+\ell)^2}}. \end{equation*} This allows us to derive the interior angle $\theta$ of $\mathcal{S}_1$ at $1$: \begin{equation*} \cos \theta = 2\sin^2\bigl((\arg \partial_+\eta(s)s)\vert_{s=1}\bigr)-1=\frac{-\sum k\ell p_{k,\ell}}{\sqrt{\sum k^2p_{k,\ell}}\sqrt{\sum \ell^2p_{k,\ell}}}. \end{equation*} This completes the proof in the case $p_{1,1}=0$. In the case $p_{1,1}\neq 0$, one may apply the same arguments directly on $K$ (instead of $\widetilde{K}$) and derive the smoothness of the curves. Differentiating the identity $K(\eta(s)s,\eta(s)s^{-1})=0$, we obtain \begin{multline*} \eta'(s) \left(2\eta(s)- \sum p_{k,\ell}(-k-\ell+2)\eta(s)^{-k-\ell+1}s^{-k+\ell}\right) \\ - \left(\sum p_{k,\ell}(-k+\ell)\eta(s)^{-k-\ell+2}s^{-k+\ell-1}\right)=0. \end{multline*} Equation \eqref{eq:derivative_kernel_3} and the rest of the proof follow in a similar way. \end{proof} \subsection{Non-existence of solutions to the kernel equation in $\mathcal{S}_1^+\times\mathcal{S}_1^+$} In this part, we show that $K(x,y)$ does not have any zero in the domain $\mathcal{S}_1^+\times\mathcal{S}_1^+$. This will allow us to deduce some regularity properties of $H(x,y)$ in the same domain, which will play an important role in the next sections. \begin{lemma} \label{lemma:no_solution} Assume \ref{H1:jumps}--\ref{H4:jumps}.
If $(x,y)\in V[K]$ is such that $\vert y\vert<\vert x\vert <1$ (resp.\ $\vert x\vert<\vert y\vert <1$), then $x\notin \mathcal{S}_1^+$ (resp.\ $y\notin \mathcal{S}_1^+$). As a consequence, $K(x,y)$ does not have any zero in $\mathcal{S}_1^+\times\mathcal{S}_1^+$. \end{lemma} \begin{proof} We first consider the case $p_{1,1}\not=0$ and set \begin{equation*} \mathcal{V}_x=\{(x,y)\in V[K]: \vert y\vert<\vert x\vert<1\}. \end{equation*} Reasoning by contradiction, let us suppose that there exists $(x,y)\in\mathcal{V}_x$ with $x\in \mathcal S_1^+$. Let $\gamma$ be a path from $x$ to $0$ which avoids $\mathcal S_1$ as well as the first-coordinate projections of the points of $V[K]$ at which $\frac{\partial K}{\partial x}$ vanishes (there are only finitely many such points). Then, there exists a path $\gamma':[0,1]\rightarrow V[K]$ such that $\gamma'(0)=(x,y)$ and $P_{x}\circ \gamma'=\gamma$. Hence, since $p_{1,1}\not=0$, $\gamma'(1)=(0,y')$ for some $y'\in\mathbb{C}\setminus \{0\}$. Since $P_x\circ\gamma'=\gamma$ never meets $\mathcal S_1$, $\gamma'$ never meets $\mathcal{K}$ and thus $\gamma'([0,1])\subset \mathcal{V}_x$. We should thus have $\vert y'\vert<0$, which is a contradiction. We move to the case $p_{1,1}=0$, for which $(0,0)\in \mathcal{K}$. By the Weierstrass preparation theorem (recalled here in Appendix~\ref{sec:app_W}) and coefficient identifications, we have \begin{equation*} K(x,y)=h(x,y)(y+f(x)), \end{equation*} for $(x,y)$ in a neighbourhood $V$ of $(0,0)$, with \begin{equation*} f(x)=x+\frac{1+2p_{2,0}-p_{0,1}}{p_{1,0}}x^2+o(x^2). \end{equation*} Since $\frac{1+2p_{2,0}-p_{0,1}}{p_{1,0}}>0$, for $x$ small enough with $x>0$, there is a unique $y\in\mathbb{C}$ such that $(x,y)\in V[K]\cap V$, and moreover for $x$ small enough \begin{equation*} \vert y\vert=\vert f(x)\vert>\vert x\vert. \end{equation*} Hence, suppose that $(x,y)\in\mathcal{V}_x$ with $x\in \mathcal S_1^+$, and let $\gamma$ be a path from $x$ to $0$ in $\mathcal S_1^+$ avoiding the first-coordinate projections of the singular points of $V[K]$ and such that $\gamma(t)$ is real for $t$ close to $1$. Then, there exists a path $\gamma':[0,1]\rightarrow V[K]$ such that $\gamma'(0)=(x,y)$ and $P_{x}\circ \gamma'=\gamma$. By the previous reasoning, for $t$ close to $1$ we have $\vert P_y\circ\gamma'(t)\vert>\vert P_x\circ\gamma'(t)\vert$. Since $\gamma'$ is continuous and for $t$ close to $0$ we have $\vert P_y\circ\gamma'(t)\vert<\vert P_x\circ\gamma'(t)\vert$, there exists $t\in[0,1)$ such that $\gamma'(t)\in\mathcal{K}$. This contradicts the fact that $P_x\circ\gamma'$ avoids $\mathcal S_1$. \end{proof} \section{Boundary value problems for the generating functions} \label{sec:BVP} The main objective of this section is to prove Proposition \ref{prop:BVP}, which states a BVP for the generating functions $H(x,0)$ and $H(0,y)$ defined in \eqref{eq:generating_functions_harmonic_functions_uni}. The polynomial solutions to this BVP will be analyzed in Corollary \ref{cor:solutions-poly_BVP}. We first need to introduce conformal mappings for the bounded domain delimited by $\mathcal S_1$. We begin with the following notation. Let $\mathcal A$ denote a non-self-intersecting curve separating the complex plane into two domains, $\mathcal A^+$ and $\mathcal A^-$. Let also $f$ be meromorphic in $\mathbb C\setminus\mathcal A$. Then for $t$ on the curve $\mathcal A$, $f^+(t)$ will denote $\lim_{x\to t, x\in\mathcal A^+} f(x)$, provided it exists. The notation $f^-(t)$ is defined similarly.
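To illustrate this notation on a toy example: if $\mathcal A={i}\mathbb{R}$ and $f$ equals $1$ on the right half-plane and $0$ on the left half-plane, then $f$ is sectionally analytic on $\mathbb C\setminus{i}\mathbb{R}$, with $f^+\equiv 1$ and $f^-\equiv 0$ on ${i}\mathbb{R}$. By contrast, the function $F$ constructed in \eqref{eq:def_f_BVP} below will satisfy $F^+=F^-$ on ${i}\mathbb{R}$, which is precisely the boundary condition \eqref{eq:boundary_condition}.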
\subsection{Conformal mappings} \begin{lemma}\label{lem:pi_{1}} For any $a\in\mathcal{S}_1^+\cap\mathbb{R}$, there exists a unique conformal mapping $\pi_{1}$ from $\mathcal{S}_1^+$ onto $\mathcal{C}^+$ such that $\pi_{1}(a)=0$ and $\pi_{1}'(a)>0$. Moreover, $\pi_{1}^+(1)=1$ and \begin{enumerate}[label=\textnormal{(\roman{*})},ref=\textnormal{(\roman{*})}] \item\label{lem:pi_{1}-1} In the case $p_{1,1}=0$, $\pi_{1}^+(0)=-1$; \item\label{lem:pi_{1}-2} In the case $p_{1,1}\neq 0$, $\pi_{1}(0)\in(-1,1)$. \end{enumerate} \end{lemma} \begin{proof} The existence and uniqueness of $\pi_{1}$ are an immediate consequence of the classical Riemann mapping theorem. We now prove that $\pi_{1}(x) = \overline{\pi_{1}(\overline{x})}$. Let $\chi(x)$ denote $\overline{\pi_{1}(\overline{x})}$ and $a_0$ denote the other intersection (not $1$) of $\mathcal{S}_1$ and $\mathbb{R}$. Since $\pi_{1}$ maps $\mathcal{S}_1^+$ one-to-one onto $\mathcal{C}^+$, and both domains are symmetric w.r.t.\ the real axis, $\chi$ also maps $\mathcal{S}_1^+$ one-to-one onto $\mathcal{C}^+$. Moreover, $\chi(a)=\pi_{1}(a)=0$ and $\chi'(a)=\overline{\pi_{1}'(\overline{a})}=\pi_{1}'(a)>0$ (as $a$ is real), so that $\chi$ is a conformal mapping from $\mathcal{S}_1^+$ onto $\mathcal{C}^+$ with the same normalization; by uniqueness, $\chi = \pi_{1}$. This implies that $\pi_{1}(x)\in (-1,1)$ for all $x\in (a_0,1)$. Since $\pi_{1}$ is univalent and $\pi_{1}'(a)>0$, it maps $(a_0,1)$ one-to-one onto $(-1,1)$, preserving the orientation. Hence, $\pi_{1}^+(1)=1$ and $\pi_{1}^+(a_0)=-1$. Since $0=a_0$ in the case $p_{1,1}=0$ and $0\in (a_0,1)$ in the case $p_{1,1}\neq 0$, items \ref{lem:pi_{1}-1} and \ref{lem:pi_{1}-2} follow. \end{proof} With $\pi_{1}$ as in Lemma \ref{lem:pi_{1}}, we introduce \begin{equation} \label{eq:pi_{2}} \pi_{2}(z)=\frac{1}{\pi_{1}(z)}, \end{equation} which maps $\mathcal{S}_1^+$ conformally onto $\mathcal{C}^-$, the exterior of the unit disk. Denote by $\mathcal{H}^+$ (resp.\ $\mathcal{H}^-$) the interior of the right (resp.\ left) half-plane and define the following mapping \begin{equation} \label{eq:expression_phi} \phi(z)=-\frac{z+1}{z-1}. \end{equation} It is well known that $\phi$ maps conformally $\mathcal{C}^+$ (resp.\ $\mathcal{C}^-$) onto $\mathcal{H}^+$ (resp.\ $\mathcal{H}^-$). We finally introduce a few further conformal mappings: \begin{itemize} \item $\psi_{1}=\phi\circ\pi_{1}:\mathcal S_1^+\to \mathcal{H}^+$; \item $\psi_{2}=\phi\circ\pi_{2}:\mathcal S_1^+\to \mathcal{H}^-$; \item $\pi_{10},\pi_{20},\phi_{0},\psi_{10},\psi_{20}$ denote respectively the inverses of $\pi_{1},\pi_{2},\phi,\psi_{1},\psi_{2}$. \end{itemize} The following lemma presents some crucial properties of $\psi_{10}$ and $\psi_{20}$. \begin{lemma}\label{lem:psi_{10}-psi_{20}} We have: \begin{enumerate}[label=\textnormal{(\roman{*})},ref=\textnormal{(\roman{*})}] \item\label{item:psi-0}$\psi_{10}(\infty)=1$ and $\psi_{20}(\infty)=1$; \item\label{item:psi-1}$\psi_{20}(t) = \psi_{10}(-t)$ for all $t\in\mathcal{H}^-$; \item\label{item:psi-2}$\psi_{10}(\overline{t})=\overline{\psi_{10}(t)}$ for all $t\in\mathcal{H}^+$; \item\label{item:psi-3}$\psi_{20}(\overline{t})=\overline{\psi_{20}(t)}$ for all $t\in\mathcal{H}^-$; \item\label{item:psi-4}$\psi_{20}^-(t)=\overline{\psi_{10}^+(t)}$ for all $t\in {i}\mathbb{R}$, i.e., $\psi_{10}^+\times\psi_{20}^-({i}\mathbb{R})=\mathcal{K}$. \end{enumerate} \end{lemma} \begin{proof} Item \ref{item:psi-0} follows by construction: $\psi_{10}(\infty)=\pi_{10}\circ \phi_{0}(\infty)=\pi_{10}(1)=1$, see \eqref{eq:expression_phi} and Lemma \ref{lem:pi_{1}}.
Similar computations hold for $\psi_{20}(\infty)$. In order to prove \ref{item:psi-1}, we show equivalently that for all $x\in\mathcal{S}_1^+$, $\psi_{2}(x) =- \psi_{1}(x)$. This comes from the construction of $\pi_{2}$ and $\phi$. Indeed, for all $x\in\mathcal{S}_1^+$, \begin{equation} \label{eq:psi_{2}=-psi_{1}} \psi_{2}(x) = \phi( \pi_{2}(x))= \phi\left(\frac{1}{\pi_{1}(x)}\right) = - \phi(\pi_{1}(x)) = -\psi_{1}(x), \end{equation} see \eqref{eq:pi_{2}} and \eqref{eq:expression_phi}. Items \ref{item:psi-2} and \ref{item:psi-3} follow directly from the symmetry $\pi_{1}(\overline{x})=\overline{\pi_{1}(x)}$ established in the proof of Lemma~\ref{lem:pi_{1}}. We conclude by proving \ref{item:psi-4}. We show equivalently that for all $x\in\mathcal{S}_1$, $\psi_{2}^+(x)=\psi_{1}^+(\overline{x})$. Notice that for $x\in\mathcal{S}_1$ one has $\vert\pi_{1}^+(x)\vert=1$, so that $\pi_{2}^+(x)=\frac{1}{\pi_{1}^+(x)}=\overline{\pi_{1}^+(x)}=\pi_{1}^+(\overline{x})$, and hence $\psi_{2}^+(x)=\psi_{1}^+(\overline{x})$. The proof is complete. \end{proof} \begin{lemma} \label{lem:psi_{1}(0)} The following assertions hold true: \begin{enumerate}[label=\textnormal{(\roman{*})},ref=\textnormal{(\roman{*})}] \item\label{lemma:psi_{1}(0)-0}The asymptotic behavior of $\psi_{1}$ around $1$ is \begin{equation*} \psi_{1}(x)\sim \frac{c}{(1-x)^{\pi/\theta}}, \end{equation*} for some non-zero constant $c$ and $\theta$ as in \eqref{eq:angle_at_1}. \item\label{lemma:psi_{1}(0)-1} In the case $p_{1,1}=0$, $\psi_{1}$ can be extended analytically around $0$, such that $\psi_{1}(0)=0$ and $\psi_{1}'(0)\neq 0$. \item\label{lemma:psi_{1}(0)-2} In the case $p_{1,1}\neq 0$, $\psi_{1}(0)>0$. Without loss of generality, we will assume that $\psi_{1}(0)=p_{1,1}$. \end{enumerate} \end{lemma} \begin{proof} We first prove \ref{lemma:psi_{1}(0)-0} about the asymptotic behavior of $\psi_{1}$ around $1$. Recall that $\pi_{10}$ maps conformally $\mathcal{C}^+$ onto $\mathcal{S}_1^+$ and the interior angle of $\mathcal{S}_1^+$ at $1$ is $\theta$. Then by \cite[Thm~3.11]{Po-92}, there exists a non-zero constant $c_1$ such that as $z\to 1$, \begin{equation*} \pi_{10}(z)= \pi_{10}(1) + (1-z)^{\theta/\pi}(c_1+o(1)). \end{equation*} Hence, as $x\to1$, there exists a non-zero constant $c_2$ such that \begin{equation*} \pi_{1}(x)=1 + (1-x)^{\pi/\theta}(c_2+o(1)), \quad \text{and therefore} \quad \psi_{1}(x) = \phi\circ \pi_{1}(x) \sim \frac{c}{ (1-x)^{\pi/\theta}} \end{equation*} for some non-zero constant $c$. We now move to the proof of \ref{lemma:psi_{1}(0)-1}. Carath\'eodory's theorem implies that $\pi_{1}^+$ maps one-to-one from $\mathcal{S}_1$ onto $\mathcal{C}$. Moreover, $\mathcal{S}_1$ is analytic around $0$ (because the parametrization of $\mathcal{S}_1$ is real-analytic at $0$). Hence, $\pi_{1}$ can be extended analytically around $0$, see \cite[p.~186]{Ne-52}, and by \cite[Thm~3.9]{Po-92}, $\pi_{1}'(0)\neq 0$. We now prove \ref{lemma:psi_{1}(0)-2}. Since $\pi_{1}(0)\in(-1,1)$, we have $\psi_{1}(0)>0$. It is seen that if in Lemma~\ref{lem:pi_{1}} we choose two different points $a_1$ and $a_2$, and consider the associated mappings $\psi_{1}$ and $\widetilde\psi_{1}$, then there exists a constant $c>0$ such that $\psi_{1}=c\widetilde\psi_{1}$. Indeed, since $\phi$ is a conformal mapping from $\mathcal{C}^+$ onto $\mathcal{H}^+$, so is $c\phi$ for any $c>0$. Hence, $c\widetilde\psi_{1}$ is a conformal mapping from $\mathcal{S}_1^+$ onto $\mathcal{H}^+$. One may choose $c=\frac{\psi_{1}(a_2)}{\widetilde\psi_{1}(a_2)}=\psi_{1}(a_2)>0$ (as $\widetilde\psi_{1}(a_2)=\phi(0)=1$), and the proof is complete. \end{proof} \subsection{Boundary value problems} We have now collected enough material to construct a BVP, whose solutions relate to the generating functions $H(x,0)$ and $H(0,y)$.
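Before setting up the BVP, let us illustrate Lemma~\ref{lem:psi_{1}(0)} on a concrete model, anticipating the explicit expression \eqref{eq:expression_psi_{1}_SRW} derived in Section~\ref{sec:SRW}. For the simple random walk, one has $p_{1,1}=0$, $\theta=\frac{\pi}{2}$ and $\psi_{1}(x)=\frac{x}{(1-x)^2}$, so that \begin{equation*} \psi_{1}(x)\sim\frac{1}{(1-x)^{2}}=\frac{1}{(1-x)^{\pi/\theta}}\quad (x\to 1),\qquad \psi_{1}(0)=0,\qquad \psi_{1}'(0)=1\neq 0, \end{equation*} in agreement with items \ref{lemma:psi_{1}(0)-0} and \ref{lemma:psi_{1}(0)-1} of the lemma.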
For the sake of brevity, we shall denote $KH(x,0)=K(x,0)\times H(x,0)$ and similarly $KH(0,y)=K(0,y)\times H(0,y)$. Define a function $F$ on $\mathbb C\setminus i\mathbb R$ by \begin{equation} \label{eq:def_f_BVP} F(t) = \left\{ \begin{array}{lc} KH(\psi_{10}(t),0)-\frac{KH(0,0)}{2} & \text{if } t\in\mathcal{H}^+,\medskip\\ -KH(0,\psi_{20}(t))+\frac{KH(0,0)}{2} & \text{if } t\in\mathcal{H}^-, \end{array}\right. \end{equation} and recall that the notation $F^\pm(t)$ has been introduced at the beginning of Section~\ref{sec:BVP}. \begin{proposition} \label{prop:BVP} Assume \ref{H1:jumps}--\ref{H2:harmonic}, and assume in addition that the radii of convergence of $H(x,0)$ and $H(0,y)$ are greater than or equal to one. Then $F$ in \eqref{eq:def_f_BVP} is sectionally analytic on $\mathbb{C} \setminus i\mathbb R$ and, more specifically, $F$ satisfies the following BVP: \begin{enumerate}[label=\textnormal{(\roman{*})},ref=\textnormal{(\roman{*})}] \item\label{item:BVP-1}$F$ is analytic on $\mathcal{H}^+$ and continuous on $\mathcal{H}^+\cup{i}\mathbb{R}$. \item\label{item:BVP-2} $F$ is analytic on $\mathcal{H}^-$ and continuous on $\mathcal{H}^-\cup{i}\mathbb{R}$. \item\label{item:BVP-3}For all $t\in{i}\mathbb{R}$, \begin{equation} \label{eq:boundary_condition} F^+(t) - F^-(t)=0. \end{equation} \item\label{item:BVP-4}If the associated harmonic function $h$ is non-zero, then $F(\infty)=\infty$. \end{enumerate} \end{proposition} \begin{proof} We start with the proof of \ref{item:BVP-1}. Since $H(x,0)$ is assumed to be analytic in the unit disk, which contains $\mathcal{S}_1^+$, $F$ is analytic on $\mathcal H^+$. Moreover, with the exception of the point $1$, the curve $\mathcal{S}_1$ is contained in the open unit disk, so the continuity on $\mathcal{H}^+\cup {i}\mathbb{R}$ follows. Item \ref{item:BVP-2} is proved along the same lines. We now prove \ref{item:BVP-3}. By Lemma \ref{lem:psi_{10}-psi_{20}} \ref{item:psi-4}, we have $K(\psi_{10}^+(t),\psi_{20}^-(t))=0$ for all $t\in {i}\mathbb{R}$. So the identity \eqref{eq:boundary_condition} is just a consequence of the functional equation \eqref{eq:functional_equation}. It remains to prove \ref{item:BVP-4}. If $F$ were bounded at infinity, then by Lemma \ref{lem:constant_functions} below, $F$ would be constant, and actually identically zero: evaluating the two branches of \eqref{eq:def_f_BVP} at the points corresponding to $x=0$ and $y=0$ yields the constants $\frac{KH(0,0)}{2}$ and $-\frac{KH(0,0)}{2}$, which must then coincide, so that $F\equiv 0$ and the associated harmonic function $h$ is zero. \end{proof} \begin{lemma} \label{lem:constant_functions} Let $F$ be sectionally analytic on $\mathbb C\setminus i\mathbb R$ and satisfy \ref{item:BVP-1}, \ref{item:BVP-2} and \ref{item:BVP-3} of Proposition~\ref{prop:BVP}. If in addition $F$ is bounded at infinity, then $F$ is a constant function. \end{lemma} \begin{proof} Since $F$ is sectionally analytic on $\mathbb{C}\setminus i\mathbb{R}$ and continuous on $\mathbb{C}$ ($F^+(t)=F^-(t)$ on ${i}\mathbb{R}$ by \eqref{eq:boundary_condition}), $F$ is an entire function. Moreover, since $F$ is bounded, Liouville's theorem implies that $F$ is constant. \end{proof} \subsection{Polynomial solutions to the boundary value problems} \label{sec:polynomial_solutions} As shown in~Proposition~\ref{prop:BVP}~\ref{item:BVP-4}, the solutions $F$ to the BVP cannot be bounded at infinity for non-trivial harmonic functions. It is thus natural to look at functions $F$ of (polynomial) order $n$ at infinity, i.e., such that $F(t)=O(t^n)$, for some $n\geq 1$. Such functions will be called polynomial solutions and are studied in this section.
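To anticipate the explicit computations of Section~\ref{sec:SRW} with a worked special case: for the simple random walk, choosing $F$ proportional to $t$ (a solution of order one at infinity; equivalently $G(t)=t$ in the notation \eqref{eq:expressions_SRW_XY} below) leads to the generating function $H(x,y)=\frac{1}{(1-x)^2(1-y)^2}$, that is, to the positive harmonic function $h(i,j)=ij$.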
\begin{cor} \label{cor:solutions-poly_BVP} Suppose that $F$ in \eqref{eq:def_f_BVP} has a pole of order $n>0$ at infinity. Then $F$ is a polynomial of degree $n$ satisfying the following conditions: \begin{align} \label{eq:condition_p11} &F(p_{1,1})=-F(-p_{1,1})=\frac{KH(0,0)}{2}=-\frac{1}{2}p_{1,1}h(1,1). \end{align} We then have, for $x,y\in\mathcal{S}_1^+$, \begin{align}\label{eq: KH(x,0),KH(0,y)} KH(x,0) &= F(\psi_{1}(x)) + \frac{KH(0,0)}{2},\\ KH(0,y) &= -F(-\psi_{1}(y)) + \frac{KH(0,0)}{2},\\ \label{eq: KH(x,y)}H(x,y) &= \frac{F(\psi_{1}(x))-F(-\psi_{1}(y))}{K(x,y)}. \end{align} \end{cor} \begin{proof} With a proof similar to that of Lemma \ref{lem:constant_functions}, we derive that $F$ is an entire function. Since $F$ has a pole of order $n$ at infinity, the extended version of Liouville's theorem implies that $F$ is a polynomial of degree $n$. The identity \eqref{eq:condition_p11} is derived as follows. First, in the case $p_{1,1}=0$, since $\psi_{10}^+(0)=0$ (see Lemma \ref{lem:psi_{1}(0)}) and $KH(0,0)=0$, one must have $F(0) = KH(\psi_{10}^+(0),0)=0$. In the case $p_{1,1}\neq 0$, since $\psi_{10}(p_{1,1})=\psi_{20}(-p_{1,1})=0$ (see again Lemma \ref{lem:psi_{1}(0)}), one has \begin{equation*} F(p_{1,1}) = -F(-p_{1,1})=KH(\psi_{10}(p_{1,1}),0)-\frac{KH(0,0)}{2}=\frac{KH(0,0)}{2}. \end{equation*} Applying the inverses of $\psi_{10}$ and $\psi_{20}$, namely $\psi_{1}$ and $\psi_{2}$, we derive \eqref{eq: KH(x,0),KH(0,y)}. Equation \eqref{eq: KH(x,y)} is deduced from \eqref{eq: KH(x,0),KH(0,y)} together with the main functional equation \eqref{eq:functional_equation}. \end{proof} \section{Proof of our main results (Theorems \ref{thm:main_intro-1} and \ref{thm:main_intro-2})} \label{sec:solution_BVP} This part is structured as follows: we successively prove Theorem~\ref{thm:main_intro-1}~\ref{thm:main_intro-1:it1}, Theorem~\ref{thm:main_intro-1}~\ref{thm:main_intro-1:it2}, Theorem~\ref{thm:main_intro-1}~\ref{thm:main_intro-1:it3} and Theorem~\ref{thm:main_intro-2}. Finally, in the independent Section~\ref{sec:sym_antisym}, we study some features of symmetric and anti-symmetric harmonic functions. \subsection{Proof of Theorem \ref{thm:main_intro-1} \ref{thm:main_intro-1:it1} and \ref{thm:main_intro-1:it2}} \begin{proof} Let $P_n$ be the family of polynomials introduced in \eqref{eq:def_P_n} and $H_n$ the associated bivariate function \eqref{eq:expression_H_n}. Let us first prove that $H_n$ defines a bivariate power series. This is obvious in the case $p_{1,1}\neq 0$, since $K(0,0)\neq 0$. We therefore assume that $p_{1,1}=0$. In this case $P_{n}(t)=t^n$, and one can rewrite \eqref{eq:expression_H_n} as \begin{equation*} H_n(x,y) = \frac{\psi_{1}(x)^n-(-\psi_{1}(y))^n}{K(x,y)} = \frac{\psi_{1}(x)+\psi_{1}(y)}{K(x,y)}\sum_{k=0}^{n-1}\psi_{1}(x)^k(-\psi_{1}(y))^{n-1-k}. \end{equation*} By Lemma \ref{lem:psi_{1}(0)}, $\psi_{1}$ is analytic in a neighbourhood of $0$. Since $\psi_{1}(x)=-\psi_{1}(y)$ on a neighbourhood of $(0,0)$ in the one-dimensional real variety $\mathcal{K}$ in \eqref{eq:main_domain} (see Lemma \ref{lem:psi_{10}-psi_{20}} \ref{item:psi-4} and its proof), by analyticity of $\psi_{1}$ we have $\psi_{1}(x)=-\psi_{1}(y)$ on a neighbourhood of $(0,0)$ in the one-dimensional complex variety where $K(x,y)=0$. Now recall that on a neighbourhood of $(0,0)$, \begin{equation*} \frac{\partial}{\partial y}(\psi_{1}(x)+\psi_{1}(y))\neq 0 \quad \text{and}\quad \frac{\partial}{\partial y}K(x,y)\neq 0.
\end{equation*} Hence, by the Weierstrass preparation theorem for analytic functions in several variables (see Appendix~\ref{sec:app_W}), there exist two functions $u(x,y)$ and $v(x,y)$ analytic in a neighbourhood of $(0,0)$ and not vanishing at $(0,0)$, as well as a function $g(x)$ analytic in a neighbourhood of $0$ and vanishing at $0$, such that \begin{equation*} \frac{\psi_{1}(x)+\psi_{1}(y)}{K(x,y)} = \frac{u(x,y)(y-g(x))}{v(x,y)(y-g(x))} = \frac{u(x,y)}{v(x,y)}. \end{equation*} This implies that $H_n(x,y)$ is analytic in a neighbourhood of $(0,0)$, i.e., $H_n(x,y)$ is the generating function of a function $h_n$ on the quadrant. Since $H_{n}$ satisfies the main functional equation \eqref{eq:functional_equation}, $h_n$ is a harmonic function. Since $\psi_{1}$ is defined on $\mathcal{S}_1^+$ and $K(x,y)$ does not have any zero in $\mathcal{S}_1^+\times\mathcal{S}_1^+$ by Lemma~\ref{lemma:no_solution}, $H_n(x,y)$ is analytic in $\mathcal{S}_1^+\times\mathcal{S}_1^+$. The proof is complete. \end{proof} \subsection{Proof of Theorem \ref{thm:main_intro-1} \ref{thm:main_intro-1:it3}} Our main result in this part will be stated as Proposition~\ref{prop:main_intro-1:it3}; it appears as a refinement of Theorem~\ref{thm:main_intro-1}~\ref{thm:main_intro-1:it3}. We first introduce some necessary notation. The Laplace transform of a discrete function $f$ on $\mathbb{N}^2$ is defined as \begin{equation*} \mathcal{L}f(x,y) = \sum_{u,v=0}^\infty f(u,v){e}^{-(ux+vy)}. \end{equation*} For a measurable function $f$ on the quadrant $\mathcal{Q}$, its Laplace transform is defined by \begin{equation*} \mathcal{L}f(x,y)=\int_{0}^\infty\int_{0}^\infty f(u,v)e^{-(ux+vy)}dudv. \end{equation*} Remark that the above Laplace transforms are well defined (and analytic) on $\mathcal{H}^+\times \mathcal{H}^+$ as soon as the growth of $f$ at infinity is at most polynomial. Finally, we introduce \begin{equation} \label{eq:h_n^sigma} h_n^\sigma (x,y) = \Im \bigl((x/\sin\theta+y\cot\theta+{i}y)^{n\pi/\theta}\bigr)= g_n\bigl(x/\sin\theta+y\cot\theta ,y\bigr), \end{equation} with $g_n(x,y) = \Im\bigl((x+{i}y)^{n\pi/\theta}\bigr)$. It is easily checked that setting \begin{equation} \label{eq:continuous_Laplacian_theta} \Delta = \frac{\partial^2}{\partial x^2} - 2\cos\theta\frac{\partial^2}{\partial x\partial y} +\frac{\partial^2}{\partial y^2}, \end{equation} one has $\Delta(h_n^\sigma)=0$. Notice that \eqref{eq:continuous_Laplacian_theta} is exactly \eqref{eq:continuous_Laplacian} with $\sigma_1=\sigma_2$ and $\theta = \arccos\frac{-\sigma_{12}}{\sqrt{\sigma_{1}\sigma_{2}}}$, see \eqref{eq:angle_at_1}. \begin{proposition} \label{prop:main_intro-1:it3} Let $h_n$ be the harmonic function with generating function $H_n$ defined by \eqref{eq:expression_H_n} and $h_n^\sigma$ be the continuous harmonic function defined in \eqref{eq:h_n^sigma}. Then there exists a positive constant $c$ such that for all $x,y>0$, \begin{equation*} \lim_{m\to\infty} \frac{c}{m^{n\pi/\theta +1}}\mathcal{L}h_n( mx, my) = \mathcal{L}h_n^\sigma (x,y). \end{equation*} \end{proposition} Before embarking on the proof of Proposition \ref{prop:main_intro-1:it3}, we provide a few remarks on the construction of the function $h_n^\sigma$ introduced in \eqref{eq:h_n^sigma}. We first consider a Dirichlet problem on the cone ${D}=\{(r\cos t,r\sin t):r> 0, t\in(0,\theta)\}$.
Recall from our introduction that the associated set of harmonic functions may be described as \begin{equation*} H(D)= \bigl\{\sum_{n\geq 1} a_ng_n(x,y): a_n\in\mathbb{R} \text{ and }\vert a_n\vert^{1/n}\to 0 \bigr\}, \end{equation*} with $g_n$ as above. By the linear transformation $(x,y)\mapsto (x/\sin\theta+y\cot\theta ,y)$ from the positive quadrant $\mathcal{Q}$ onto the cone $D$, one can transform the above problem into a problem on $\mathcal{Q}$ with corresponding Laplacian \eqref{eq:continuous_Laplacian_theta}. Naturally, the associated set of continuous harmonic functions is then described by \eqref{eq:basis_continuous_case}. \begin{proof}[Proof of Proposition \ref{prop:main_intro-1:it3}] By definition of Laplace transforms and generating functions, one has \begin{multline} \label{comp_h_n} \mathcal{L}h_n(mx,my) = \frac{{e}^{-(x+y)/m}}{m} H_n({e}^{-x/m},{e}^{-y/m})\\ = \frac{{e}^{-(x+y)/m}}{m} \frac{P_n\bigl(\psi_{1}({e}^{-x/m})\bigr)-P_n\bigl( -\psi_{1}({e}^{-y/m}) \bigr)}{K({e}^{-x/m},{e}^{-y/m})}. \end{multline} Using now Lemma~\ref{lem:psi_{1}(0)}~\ref{lemma:psi_{1}(0)-0}, one deduces that as $m\to\infty$, \begin{align*} \psi_{1}({e}^{-x/m})&\sim \frac{c_0}{(1-{e}^{-x/m})^{\pi/\theta}}\sim \frac{c_0}{(x/m)^{\pi/\theta}},\\ P_n\left(\psi_{1}({e}^{-\frac{x}{m}})\right) &\sim \left(c_0^2\left(\frac{m}{x}\right)^{\frac{2\pi}{\theta}}-p_{1,1}^2\right)^{\left\lfloor \frac{n}{2}\right\rfloor} \left(c_0\left(\frac{m}{x}\right)^{\frac{\pi}{\theta}}\right)^{n\,[2]}\sim c_1\left(\frac{m}{x}\right)^{\frac{n\pi}{\theta}}, \end{align*} where $c_0$ and $c_1$ are non-zero constants. On the other hand, \begin{align*} K({e}^{-x/m},{e}^{-y/m}) &= {e}^{-(x+y)/m} - \sum p_{k,\ell}{e}^{-(x(-k+1)+y(-\ell+1))/m}\\ &= -\frac{(x+y)\sum kp_{k,\ell}}{m} - \frac{x^2\sum k^2p_{k,\ell} +2xy\sum k\ell p_{k,\ell} +y^2\sum \ell^2p_{k,\ell}}{2m^2} +o\left(\frac{1}{m^2}\right)\\ & = -\frac{\sum k^2p_{k,\ell}}{2m^2}(x^2-2xy\cos\theta +y^2)+o\left(\frac{1}{m^2}\right). \end{align*} Going back to \eqref{comp_h_n}, we have for some non-zero constant $c_2$ that as $m\to \infty$, \begin{align*} \mathcal{L}h_n(mx,my) \sim c_2 m^{n\pi/\theta+1}\frac{(x^{-\pi/\theta})^n - (-y^{-\pi/\theta})^n}{x^2-2xy\cos\theta+y^2}. \end{align*} We now move to the Laplace transform of the continuous harmonic function $h_n^\sigma$: \begin{align*} \mathcal{L}h_n^\sigma(x,y) &= \int_0^\infty \int_0^\infty h_n^\sigma(u,v){e}^{-(ux+vy)}dudv\\ & = \int_0^\infty \int_0^\infty g_n(u/\sin\theta+v\cot\theta ,v){e}^{-(ux+vy)}dudv\\ & = \int_0^\infty \int_{v'\cot\theta}^\infty g_n(u',v'){e}^{-(u'\sin\theta - v'\cos\theta)x-v'y}\sin\theta du'dv'\\ & =\sin\theta \int_0^\infty\int_0^{\theta}g_n(r\cos t,r\sin t){e}^{-r(x\sin (\theta-t)+y\sin t)}rdtdr\\ & = \sin\theta \int_0^{\theta} \sin \left(n\frac{\pi}{\theta}t\right) \int_0^\infty r^{n\frac{\pi}{\theta} +1}{e}^{-r(x\sin (\theta-t)+y\sin t)}drdt\\ & = \Gamma(n\pi/\theta+2)\sin\theta \int_0^{\theta} \frac{\sin (n\frac{\pi}{\theta}t)}{(x\sin (\theta-t)+y\sin t)^{n\frac{\pi}{\theta} +2}}dt\\ & = \left.\frac{\frac{\Gamma(n\pi/\theta+2)\sin\theta}{n\pi/\theta+1}}{x^2-2xy\cos\theta+y^2} \frac{-x\sin (\theta-(n\frac{\pi}{\theta}+1)t)-y\sin((n\frac{\pi}{\theta}+1)t)}{(x\sin (\theta-t)+y\sin t)^{n\frac{\pi}{\theta} +1}}\right|_0^\theta\\ & = \frac{\Gamma(n\pi/\theta+2)}{(n\pi/\theta+1)(\sin\theta)^{n\pi/\theta-1}}\frac{(x^{-\pi/\theta})^n - (-y^{-\pi/\theta})^n}{x^2-2xy\cos\theta+y^2}. \end{align*} Comparing the two asymptotic expressions above completes the proof.
\end{proof} Remark that one cannot deduce from Proposition \ref{prop:main_intro-1:it3} the asymptotics of $h_n(i,j)$ as $n$ is fixed and $i+j\to\infty$. However, classically, Proposition \ref{prop:main_intro-1:it3} entails that, suitably rescaled, $h_n$ converges locally in the $L^1$-norm towards $h_n^\sigma$. \subsection{Proof of Theorem \ref{thm:main_intro-2}} As a first step, we need to prove that as $n$ increases, the harmonic function $h_n$ in \eqref{eq:expression_H_n} has more and more zero coefficients, in a sense which will be made precise in Lemma~\ref{lemma:triangle_lemma} (case $p_{1,1}=0$) and Lemma~\ref{lemma:rectangle_lemma} (case $p_{1,1}\neq 0$), see also Figure~\ref{fig:zeros_h_n}. Let us remark here that Lemma~\ref{lemma:rectangle_lemma} is an a posteriori justification of our choice of the polynomials $P_n$ in the definition \eqref{eq:def_P_n}. \begin{lemma}\label{lemma:triangle_lemma} Assume $p_{1,1}=0$. Then $h_n$ satisfies the following assertions: \begin{itemize} \item For all $i,j\geq 1$ such that $i+j\leq n$, we have $h_n(i,j)=0$; \item For all $i,j\geq 1$ such that $i+j= n+1$, we have $h_n(i,j)\neq 0$. \end{itemize} \end{lemma}
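Before turning to the proof, here is a worked illustration of Lemma~\ref{lemma:triangle_lemma} on the simple random walk (for which $p_{1,1}=0$), anticipating Section~\ref{sec:SRW}: there, up to a multiplicative constant, $h_1(i,j)=ij$, and a short computation with \eqref{eq:expressions_SRW_XY} and $G(t)=t^2$ gives, again up to a multiplicative constant, $h_2(i,j)=ij(i^2-j^2)$. With this normalization, $h_2$ vanishes at the unique interior point with $i+j\leq 2$, namely $(1,1)$, while on the diagonal $i+j=3$, \begin{equation*} h_2(2,1)=6\neq 0\quad \text{and}\quad h_2(1,2)=-6\neq 0, \end{equation*} as predicted by the lemma. The anti-symmetry of $h_2$ also agrees with Proposition~\ref{proposition: symmetric, anti-symmetric harmonic function} below, since $P_2(t)=t^2$ is even.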
\begin{figure} \hspace{-10mm} \begin{tikzpicture}[scale=2.5] \draw[gray,very thin] (-0.1,-0.1) grid (1.1,1.1) [step=0.25cm] (0,0) grid (1.1,1.1); \draw[->] (-0.1,0) -- (1.1,0); \draw[->] (0,-0.1) -- (0,1.1); \put(-2.6,-2.6){\textcolor{black}{$\bullet$}} \put(15.1,-2.6){\textcolor{black}{$\bullet$}} \put(15.1,15.1){\textcolor{blue}{$\bullet$}} \put(32.8,-2.6){\textcolor{black}{$\bullet$}} \put(32.8,15.1){\textcolor{blue}{$\bullet$}} \put(15.1,32.8){\textcolor{blue}{$\bullet$}} \put(50.5,-2.6){\textcolor{black}{$\bullet$}} \put(-2.6,15.1){\textcolor{black}{$\bullet$}} \put(-2.6,32.8){\textcolor{black}{$\bullet$}} \put(-2.6,50.5){\textcolor{black}{$\bullet$}} \put(-2.6,68.2){\textcolor{black}{$\bullet$}} \put(68.2,-2.6){\textcolor{black}{$\bullet$}} \put(15.1,68.2){\textcolor{red}{$\bullet$}} \put(32.8,32.8){\textcolor{blue}{$\bullet$}} \put(68.2,15.1){\textcolor{red}{$\bullet$}} \put(50.5,32.8){\textcolor{red}{$\bullet$}} \put(32.8,50.5){\textcolor{red}{$\bullet$}} \put(50.5,15.1){\textcolor{blue}{$\bullet$}} \put(15.1,50.5){\textcolor{blue}{$\bullet$}} \end{tikzpicture}\qquad\qquad\qquad \begin{tikzpicture}[scale=2.5] \draw[gray,very thin] (-0.1,-0.1) grid (1.1,1.1) [step=0.25cm] (0,0) grid (1.1,1.1); \draw[->] (-0.1,0) -- (1.1,0); \draw[->] (0,-0.1) -- (0,1.1); \put(-2.6,-2.6){\textcolor{black}{$\bullet$}} \put(15.1,-2.6){\textcolor{black}{$\bullet$}} \put(32.8,-2.6){\textcolor{black}{$\bullet$}} \put(50.5,-2.6){\textcolor{black}{$\bullet$}} \put(68.2,-2.6){\textcolor{black}{$\bullet$}} \put(-2.6,68.2){\textcolor{black}{$\bullet$}} \put(-2.6,15.1){\textcolor{black}{$\bullet$}} \put(15.1,15.1){\textcolor{blue}{$\bullet$}} \put(32.8,15.1){\textcolor{blue}{$\bullet$}} \put(50.5,15.1){\textcolor{blue}{$\bullet$}} \put(-2.6,32.8){\textcolor{black}{$\bullet$}} \put(15.1,32.8){\textcolor{blue}{$\bullet$}} \put(32.8,32.8){\textcolor{blue}{$\bullet$}} \put(50.5,32.8){\textcolor{blue}{$\bullet$}} \put(-2.6,50.5){\textcolor{black}{$\bullet$}} \put(15.1,50.5){\textcolor{blue}{$\bullet$}} \put(32.8,50.5){\textcolor{blue}{$\bullet$}} \put(50.5,50.5){\textcolor{blue}{$\bullet$}} \put(68.2,15.1){\textcolor{red}{$\bullet$}} \put(15.1,68.2){\textcolor{red}{$\bullet$}} \end{tikzpicture} \caption{Illustrations of Lemmas~\ref{lemma:triangle_lemma} and~\ref{lemma:rectangle_lemma}. The harmonic function $h_n$ is always zero on the axes. Left: in the case $p_{1,1}=0$, the coefficients of $h_n$ are zero at the blue points of the triangle, and non-zero just above (the red points). Right: in the case $p_{1,1}\neq 0$, the coefficients of $h_n$ are zero at all blue points of a square, non-zero at the extremal red points.} \label{fig:zeros_h_n} \end{figure} \begin{proof} As in \eqref{eq:expression_H_n}, rewrite $H_n$ as \begin{align*} H_n(x,y) = \frac{\psi_{1}(x)^n-(-\psi_{1}(y))^n}{K(x,y)} &= \frac{\psi_{1}(x)+\psi_{1}(y)}{K(x,y)}\sum_{k=0}^{n-1}\psi_{1}(x)^k(-\psi_{1}(y))^{n-1-k}\\ &=H_{1}(x,y)\sum_{k=0}^{n-1}\psi_{1}(x)^k(-\psi_{1}(y))^{n-1-k}. \end{align*} Recall from Lemma \ref{lem:psi_{1}(0)} that $\psi_{1}$ is analytic at $0$, with $\psi_{1}(0)=0$ and $\psi_{1}'(0)\not=0$. Hence, we may write \begin{equation*} \psi_{1}(x)=\sum_{i\geq 1}c_{i}x^i, \end{equation*} with $c_1\not=0$. We then obtain, using the bivariate expansion of $H_1$, that \begin{equation} \label{eq:expansion_H_n_vanishing} H_n(x,y) = \sum_{i,j\geq 1} h_1(i,j)x^{i-1}y^{j-1}\sum_{k=0}^{n-1}\left(\sum_{i\geq 1}c_i x^i\right)^k\left(-\sum_{j\geq 1}c_j y^j\right)^{n-1-k}. \end{equation} By L'Hospital's rule, $\frac{\psi_{1}(x)}{K(x,0)}\rightarrow \frac{c_1}{p_{1,0}}\not =0$ as $x\to 0$. Hence, $h_1(1,1)\neq 0$ and we deduce from \eqref{eq:expansion_H_n_vanishing} the proof of Lemma \ref{lemma:triangle_lemma}. \end{proof} \begin{lemma}\label{lemma:rectangle_lemma} If $p_{1,1}\neq 0$, then $h_{1}(1,1)\not=0$ and for $k\geq 1$ and $\epsilon\in\{0,1\}$, the harmonic function $h_{2k+\epsilon}$ satisfies $h_{2k+\epsilon}(i,j)=0$ for all $1\leq i,j\leq k$. Moreover, the following matrix is invertible: \begin{equation*} T_k=\begin{pmatrix} h_{2k}(k+1,1) &h_{2k+1}(k+1,1)\\h_{2k}(1,k+1)&h_{2k+1}(1,k+1) \end{pmatrix}. \end{equation*} \end{lemma} \begin{proof} First, the proof that $h_{1}(1,1)\not=0$ is similar to the end of the proof of Lemma~\ref{lemma:triangle_lemma}. Let $k\geq 1$ and $\epsilon\in\{0,1\}$. Recall from Theorem~\ref{thm:main_intro-1}~\ref{thm:main_intro-1:it1} that $h_{2k+\epsilon}$ admits the generating function \begin{equation*} H_{2k+\epsilon}(x,y)=\frac{P_{2k+\epsilon}\bigl(\psi_{1}(x)\bigr)-P_{2k+\epsilon}\bigl(-\psi_{1}(y)\bigr)}{K(x,y)}, \end{equation*} with $P_{2k+\epsilon}(t)=t^\epsilon(t^2-p_{1,1}^2)^k=t^\epsilon(t^2-\psi_{1}(0)^2)^k$, see \eqref{eq:def_P_n}. Hence we have \begin{multline*} H_{2k+\epsilon}(x,y) =\frac{1}{K(x,y)}\Bigl(\psi_{1}(x)^\epsilon\bigl(\psi_{1}(x)-{\psi_{1}(0)}\bigr)^k\bigl(\psi_{1}(x)+{\psi_{1}(0)}\bigr)^k \\ - \bigl(-\psi_{1}(y)\bigr)^\epsilon\bigl(-\psi_{1}(y)-{\psi_{1}(0)}\bigr)^k\bigl(-\psi_{1}(y)+\psi_{1}(0)\bigr)^k \Bigr). \end{multline*} Since $K(0,0)\neq 0$, $\frac{1}{K(x,y)}$ is analytic at $(0,0)$. Moreover, since $\frac{1}{K(x,y)}$ satisfies the functional equation \eqref{eq:functional_equation}, it is also the generating function of a harmonic function $h_0(i,j)$, i.e., \begin{equation*} \frac{1}{K(x,y)} = \sum_{i,j\geq 1} h_0(i,j)x^{i-1}y^{j-1}. \end{equation*} Since $\psi_{1}$ is analytic at $0$, which belongs to the domain $\mathcal{S}_1^+$, it has an expansion \begin{equation*} \psi_{1}(x) = \psi_{1}(0) + \sum_{i\geq 1}c_i x^i, \end{equation*} where $c_1\neq 0$ since $\psi_{1}$ is a conformal map.
Therefore, \begin{multline*} H_{2k+\epsilon}(x,y) = \left(\sum_{i,j\geq 1} h_0(i,j)x^{i-1}y^{j-1}\right)\times \\ \times\left[\left(\sum_{i\geq 1}c_i x^i\right)^k\left(\psi_{1}(0)+\sum_{i\geq 1}c_ix^i\right)^\epsilon\left(2{\psi_{1}(0)} + \sum_{i\geq 1}c_i x^i\right)^k\right.\\ -\left(-\sum_{j\geq 1}c_j y^j\right)^k\left.\left(-\psi_{1}(0)-\sum_{j\geq 1}c_jy^j\right)^\epsilon\left(-2{\psi_{1}(0)} - \sum_{j\geq 1}c_j y^j\right)^k\right]. \end{multline*} Since $h_0(1,1)\neq 0$ and $c_1\neq 0$, the bivariate power series expansion of the function above has zero coefficients for all monomials $x^iy^j$ with $i, j\leq k-1$, while the coefficient of the monomial $x^{k}$ is \begin{equation*} h_{2k+\epsilon}(k+1,1)=(2c_1)^kh_{0}(1,1)\psi_{1}(0)^{k+\epsilon}=(2c_1)^kh_{0}(1,1)p_{1,1}^{k+\epsilon} \end{equation*} and that of $y^{k}$ is \begin{equation*} h_{2k+\epsilon}(1,k+1)=(-1)^{\epsilon-1}(2c_1)^kh_{0}(1,1)p_{1,1}^{k+\epsilon}. \end{equation*} Hence, \begin{equation*} \begin{pmatrix} h_{2k}(k+1,1) &h_{2k+1}(k+1,1)\\h_{2k}(1,k+1)&h_{2k+1}(1,k+1) \end{pmatrix}=(2c_1)^kh_{0}(1,1)p_{1,1}^{k}\begin{pmatrix} \phantom{-}1& p_{1,1}\\-1&p_{1,1} \end{pmatrix}. \end{equation*} The determinant of $T_k$ therefore equals $2\bigl((2c_1)^{k}h_{0}(1,1)p_{1,1}^{k}\bigr)^{2}p_{1,1}\not=0$, which implies the second statement of Lemma \ref{lemma:rectangle_lemma}. \end{proof} \begin{remark} As the proof of Lemma \ref{lemma:rectangle_lemma} shows, choosing, instead of $P_n$ in \eqref{eq:def_P_n}, the family of polynomials \begin{equation*} P_{m,n}=(X-p_{1,1})^m(X+p_{1,1})^n \end{equation*} for $m,n\geq 1$ would yield harmonic functions vanishing on the rectangle $\{(i,j):1\leq i\leq m, 1\leq j\leq n\}$. In particular, there is no uniqueness in the choice of $P_n$. \end{remark} \begin{lemma} \label{lemma:consequence_triangle_lemma} Given $p_{1,1}=0$ and a sequence $\{c_n\}_{n\geq1}$, there exists a unique sequence $\{a_n\}_{n\geq1}$ such that the (harmonic) function $\sum_{n\geq1} a_{n}h_n$ satisfies \begin{equation} \label{cond:consequence_triangle_lemma} \sum_{n\geq1} a_{n}h_{n}(i,1)=c_i,\quad \forall i\geq 1. \end{equation} Moreover, $\{a_n\}_{n\geq1}$ can be deduced from the (infinite) linear system of equations \begin{equation*} M\cdot a = c, \end{equation*} where $a=(a_1,a_2,a_3,\ldots)^\top$, $c=(c_1,c_2,c_3,\ldots)^\top$ and $M$ is an infinite non-singular lower triangular matrix. \end{lemma} By construction, the generating function of the harmonic function given in Lemma~\ref{lemma:consequence_triangle_lemma} is the function \begin{equation*} H(x,y)=\frac{\sum_{n\geq1} a_{n}\psi_{1}(x)^n-\sum_{n\geq1} a_{n}(-\psi_{1}(y))^n}{K(x,y)}=\frac{F(\psi_{1}(x))-F(-\psi_{1}(y))}{K(x,y)}, \end{equation*} with $F(t)=\sum_{n\geq1} a_{n}t^n$, in accordance with \eqref{eq:main-intro-2}. \begin{proof} By Lemma \ref{lemma:triangle_lemma}, $h_{n}$ vanishes on $(i,1)$ for $1\leq i\leq n-1$ and $h_{n}(n,1)\not=0$. Hence, the infinite matrix \begin{equation*} L=\bigl(h_{j}(i,1)\bigr)_{1\leq i,j\leq \infty} \end{equation*} is lower triangular, with non-zero diagonal coefficients.
Hence, $L$ is invertible, and for any vector $c=(c_1,c_2,c_3,\ldots)^\top$, there exists a unique vector $a=(a_1,a_2,a_3,\ldots)^\top$ such that \begin{equation*} \begin{pmatrix} h_{1}(1,1) & 0 & 0 & \ldots & 0&\ldots\\ h_{1}(2,1) & h_{2}(2,1) & 0 & \ldots & 0&\ldots\\ \vdots & \vdots & \ddots & \ddots & \ddots &\ddots\\ h_{1}(n,1) & h_{2}(n,1) & \ldots & \ldots & h_{n}(n,1)&\ddots\\ \vdots&\vdots&\ddots&\ddots&\ddots&\ddots \end{pmatrix} \begin{pmatrix} a_1\\a_2\\ \vdots \\a_n\\\vdots \end{pmatrix} = \begin{pmatrix} c_1\\c_2\\ \vdots \\c_n\\\vdots \end{pmatrix}. \end{equation*} Thus, given $\{c_n\}_{n\geq1}$, there exists a unique sequence $\{a_n\}_{n\geq1}$ such that \eqref{cond:consequence_triangle_lemma} holds. \end{proof} \begin{lemma} \label{lemma:consequence_rectangle_lemma} Given $p_{1,1}\neq 0$ and two infinite sequences $\{c_n\}_{n\geq1}$ and $\{d_n\}_{n\geq2}$, there exists a unique sequence $\{a_n\}_{n\geq1}$ such that \eqref{cond:consequence_triangle_lemma} holds, as well as \begin{equation} \label{cond:consequence_rectangle_lemma} \sum_{n\geq1} a_{n}h_{n}(1,i)=d_i, \quad \forall i\geq 2. \end{equation} Moreover, $\{a_n\}_{n\geq1}$ can be deduced from the linear system \begin{equation*} M\cdot a= b, \end{equation*} where $a=(a_1,a_2,a_3,\ldots)^\top$, $b=(c_1,c_2,d_2,c_3,d_3,\ldots)^\top$ and $M$ is an infinite block lower triangular matrix with invertible blocks of size $1$ or $2$ on the diagonal. \end{lemma} \begin{proof} Let $\tau$ be the involution of $(\mathbb{N}\times \{1\})\cup (\{1\}\times \mathbb{N})$ switching coordinates. The only fixed point of $\tau$ is $(1,1)$. By Lemma \ref{lemma:rectangle_lemma}, for fixed values of $k\geq 1$ and $\epsilon\in\{ 0,1\}$, $h_{2k+\epsilon}$ vanishes at $(i,1)$ and $(1,i)$ for all $i\leq k$, and the matrix \begin{equation*} T_k=\bigl(h_{2k+j-1}(\tau^{i-1}(k+1,1))\bigr)_{1\leq i,j\leq 2} \end{equation*} is invertible. Likewise, $h_{1}(1,1)\not=0$. Hence, \begin{equation*} M=\bigl(h_{j}(\tau^{i\,[2]}(\lfloor i/2\rfloor+1,1))\bigr)_{1\leq i,j\leq\infty}=\begin{pmatrix} h_{1}(1,1) &\begin{array}{cc} 0&0\end{array} & 0 & \ldots \\ \begin{array}{c} h_1(2,1) \\ h_1(1,2) \end{array} & T_1 & 0 & \ldots \\ \vdots & \vdots & T_2 & \ldots \\ \vdots&\vdots&\vdots&\ddots \end{pmatrix}, \end{equation*} with each $T_i$ invertible. Thus $M$ is invertible, and for any vector $b=(c_1,c_2,d_2,c_3,d_3,\ldots)^\top$, there exists a unique vector $a=(a_1,a_2,a_3,\ldots)^\top$ such that $M\cdot a=b$. For such a vector $a$, conditions \eqref{cond:consequence_triangle_lemma} and \eqref{cond:consequence_rectangle_lemma} are satisfied. \end{proof} Putting all the latter lemmas together yields the proof of Theorem \ref{thm:main_intro-2}, as follows: \begin{proof}[Proof of Theorem \ref{thm:main_intro-2}] Let $\Phi$ be the function as in the statement of Theorem \ref{thm:main_intro-2}. First notice that by Lemma \ref{lemma:triangle_lemma} when $p_{1,1}=0$ and Lemma \ref{lemma:rectangle_lemma} for $p_{1,1}\not=0$, the map $\Phi$ is well defined, since for any sequence $\{a_{n}\}_{n\geq 1}$ and all $i,j\geq 1$, $\sum_{n\geq 1} a_{n}h_{n}(i,j)$ is a finite sum, and the harmonicity of $\sum a_n h_n$ is directly inherited from the harmonicity of each $h_n$. Suppose first that $p_{1,1}>0$.
First, by Lemma \ref{lemma:consequence_rectangle_lemma}, $\Phi$ is injective, and for any pair of formal power series $F,G$ with $F(0)=G(0)$, there exists a sequence $\{a_{n}\}_{n\geq 1}$ such that \begin{equation*} F(x)=\sum_{i\geq 1}\sum_{n\geq 1}a_nh_n(i,1)x^{i-1}\quad \text{and}\quad G(y)=\sum_{j\geq 1}\sum_{n\geq 1}a_nh_n(1,j)y^{j-1}. \end{equation*} Since any harmonic function $h$ is uniquely determined by its sectional generating functions \begin{equation} \label{eq:def_F-and-G} F(x)=\sum_{i\geq 1}h(i,1)x^{i-1} \quad \text{and}\quad G(y)=\sum_{j\geq 1}h(1,j)y^{j-1} \end{equation} through \eqref{eq:functional_equation}, this shows the surjectivity of $\Phi$. The proof is more involved in the case $p_{1,1}=0$. By Lemma \ref{lemma:consequence_triangle_lemma}, $\Phi$ is injective. It remains to show that $\Phi$ is surjective. Let $h$ be a harmonic function and set $F$ as in \eqref{eq:def_F-and-G}. Then, by Lemma~\ref{lemma:consequence_triangle_lemma}, there exists $\{a_n\}_{n\geq 1}$ such that $\sum a_nh_{n}(i,1)=h(i,1)$ for all $i\geq 1$. Hence, the formal power series $F(x)$ and \begin{equation*} \widetilde{F}(x)=K(x,0)\sum_{i\geq 1}\left(\sum_{n\geq1}a_nh_n(i,1)\right)x^{i-1} \end{equation*} coincide. Let $H$ be the generating series of $h$ and $\widetilde{H}$ the generating series of $\sum a_nh_{n}$. Then we have \begin{equation*} H(x,y)=\frac{F(x)+G(y)}{K(x,y)}\quad \text{and} \quad \widetilde{H}(x,y)=\frac{\widetilde{F}(x)+\widetilde{G}(y)}{K(x,y)}=\frac{F(x)+\widetilde{G}(y)}{K(x,y)}, \end{equation*} with $G, \widetilde{G}\in \mathbb{R}[[t]]$. Since $K(0,0)=0$ and $\frac{\partial}{\partial x}K(0,0)\not=0$, by the Weierstrass preparation theorem (see Appendix~\ref{sec:app_W}) applied to $K(x,y)$ with respect to $x$ in the ring of formal power series $\mathbb{R}[[x,y]]$, one can write \begin{equation*} K(x,y)=u(x,y)(x-v(y)), \end{equation*} with $u$ invertible in $\mathbb{R}[[x,y]]$ and $v\in \mathbb{R}[[y]]$ satisfying $v(0)=0$. In particular, $F(x)+G(y)$ and $F(x)+\widetilde{G}(y)$ are both divisible by $(x-v(y))$ in $\mathbb{R}[[x,y]]$. By the Weierstrass division theorem (see Appendix~\ref{sec:app_W}) applied to the division of $F(x)$ by the Weierstrass polynomial of degree one $x-v(y)$ in $\mathbb{R}[[x,y]]$, there is a unique Weierstrass polynomial $G$ of degree zero in $\mathbb{R}[[x,y]]$ (namely, $G\in \mathbb{R}[[y]]$ with $G(0)=0$) such that $G(y)+F(x)$ is divisible by $x-v(y)$. Hence, $G(y)=\widetilde{G}(y)$ and $H=\widetilde{H}$. This shows the surjectivity of $\Phi$. \end{proof} \subsection{Symmetric and anti-symmetric harmonic functions} \label{sec:sym_antisym} \begin{proposition}\label{proposition: symmetric, anti-symmetric harmonic function} The harmonic function $h(i,j)$ is symmetric (resp.\ anti-symmetric) if and only if its characterizing series $F(t)=\Phi^{-1}(h)$ is odd (resp.\ even). \end{proposition} \begin{proof} It is seen that $h(i,j)$ is symmetric (resp.\ anti-symmetric) if and only if $H(x,0)=H(0,x)$ (resp.\ $H(x,0)=-H(0,x)$). First, in the case where $H$ is of type \eqref{eq: KH(x,0),KH(0,y)}, the identity $H(x,0)=H(0,x)$ is equivalent to $F(\psi_{1}(x)) = -F(-\psi_{1}(x))$. This implies that $h(i,j)$ is symmetric if and only if $F(t)$ is an odd function. The statement in the anti-symmetric case is proved similarly. By \eqref{eq:def_P_n}, $P_n$ is odd for $n$ odd and even for $n$ even, thus by the previous reasoning $h_n$ is symmetric for $n$ odd and anti-symmetric for $n$ even.
We then deduce from Theorem \ref{thm:main_intro-2} that $\Phi(F)$ is symmetric if $F$ is odd and anti-symmetric if $F$ is even. Since the vector subspace of odd power series and the one of even power series are complementary in $\mathbb{R}_0[[t]]$, the result follows. \end{proof} \section{Various examples} \label{sec:examples} In this part, we provide a list of models for which one may compute the conformal map $\psi_{1}$ explicitly. We start with the example of simple random walks (Section~\ref{sec:SRW}). For this model, all generating functions happen to be rational functions, and most of the computations are rather easy to carry out. We then move to arbitrary small step random walks (Section~\ref{sec:uniformization}) and obtain an expression for the conformal mapping in terms of generalized Chebyshev polynomials. Finally, we construct a family of walks with arbitrarily large jumps, in relation to plane bipolar orientations, for which one has a simple (algebraic) formula for the conformal mapping (Section~\ref{sec:larger_jumps}). \subsection{The simple random walk} \label{sec:SRW} We present here a first example, which is the simple random walk with transition probabilities $p_{1,0}=p_{0,1}=p_{-1,0}=p_{0,-1}=1/4$, see Figure \ref{fig:step_sets} and \eqref{eq:def_Laplacian_usual} for the associated Laplacian operator. The kernel takes the form \begin{equation*} K(x,y) = xy\left(1-\frac{x+y+x^{-1}+y^{-1}}{4}\right)=-\frac{x(y-1)^2}{4}-\frac{y(x-1)^2}{4}. \end{equation*} \subsubsection*{Curve and conformal mappings} Setting $(x,y)=(\eta s,\eta s^{-1})$ and solving (in $\eta$) the equation $K(x,y)=0$, one easily obtains \begin{equation*} \mathcal{S}_1 = \biggl\{{i}s\frac{s-{i}}{s+{i}}: s={e}^{{i}t},t\in[0,\pi)\biggr\}, \end{equation*} which we may rewrite as \begin{equation*} \mathcal{S}_1 = \{(1-\sin t,(1-\sin t)\tan t): t\in [0,\pi)\}, \end{equation*} as announced in \eqref{eq:expression_S1_SRW}. See Figures \ref{fig:some_curves} and \ref{fig:def_theta}. One can choose a conformal mapping $\pi_{1}$ as \begin{equation*} \pi_{1}(x) = \frac{x-(x-1)^2}{x+(x-1)^2}. \end{equation*} Indeed, $\pi_{1}$ is analytic in $\mathcal{S}_1^+$ and for all $x=\eta(e^{it})e^{it}\in\mathcal{S}_1$, \begin{equation*} \pi_{1}(x) = \frac{-(x+\frac{1}{x})+3}{(x+\frac{1}{x})-1} = \frac{1+2i\frac{\sin^2t}{\cos t}}{1-2i\frac{\sin^2t}{\cos t}}\in\mathcal{C}. \end{equation*} With the conformal mapping $\phi$ defined in \eqref{eq:expression_phi}, we finally obtain \begin{equation} \label{eq:expression_psi_{1}_SRW} \psi_{1}(x) = \phi\circ\pi_{1}(x)= \frac{x}{(1-x)^2}. \end{equation} \subsubsection*{Polynomial harmonic functions} Using Equations \eqref{eq:main-intro-2} and \eqref{eq:expression_psi_{1}_SRW}, one has \begin{equation*} H(x,y)=\frac{F\bigl(\frac{x}{(1-x)^2}\bigr)-F\bigl(-\frac{y}{(1-y)^2}\bigr)}{K(x,y)}. \end{equation*} Writing $X=\frac{x}{(1-x)^2}$, $Y=\frac{y}{(1-y)^2}$ and $G(t)=-4F(t)$, one may rewrite the above equation more symmetrically, as \begin{equation} \label{eq:expressions_SRW_XY} H(x,y)=\frac{XY}{xy}\frac{G(X)-G(-Y)}{X+Y}. \end{equation} In particular, applying \eqref{eq:expressions_SRW_XY} with $G(t)= t$, we have \begin{equation*} H(x,y)= \frac{1}{(1-x)^2(1-y)^2}=\sum_{i,j\geq 1}ijx^{i-1}y^{j-1}. \end{equation*} Recall that $h(i,j) = ij$ is the unique positive harmonic function for this model (unique up to a multiplicative constant).
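As an elementary sanity check (not part of the original argument; the script below uses SymPy and hypothetical variable names), one may verify by computer both the harmonicity of $h(i,j)=ij$ for the simple random walk and the Taylor coefficients of $H(x,y)$ above:
\begin{verbatim}
import sympy as sp

i, j, x, y = sp.symbols('i j x y')

# h(i,j) = i*j is harmonic for the simple random walk: the mean of h over
# the four neighbours equals h (h vanishes on the axes, so the identity
# also covers the points adjacent to the boundary).
h = i * j
mean = ((i + 1)*j + (i - 1)*j + i*(j + 1) + i*(j - 1)) / 4
assert sp.simplify(mean - h) == 0

# Coefficients of H(x,y) = 1/((1-x)^2 (1-y)^2): [x^(a-1) y^(b-1)] H = a*b.
N = 6
Hx = sp.series(1/(1 - x)**2, x, 0, N).removeO()
Hy = sp.series(1/(1 - y)**2, y, 0, N).removeO()
P = sp.expand(Hx * Hy)
assert all(P.coeff(x, a - 1).coeff(y, b - 1) == a * b
           for a in range(1, N + 1) for b in range(1, N + 1))
\end{verbatim}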
\subsubsection*{Characterization of the positive harmonic function} As explained in Section \ref{sec:solution_BVP}, the general Martin boundary theory implies that in the framework of this paper, there is a unique positive harmonic function, which in our construction corresponds to taking $F$ as a degree-one polynomial in Theorem \ref{thm:main_intro-2}. However, we do not have any direct proof of this general fact, except precisely for the simple random walk, for which explicit, and in our opinion instructive, computations may be done. More specifically, the question we would like to address here is the following: to prove that for every polynomial $G(t)=\sum_{m=1}^{k}a_m t^m$ of degree $k\geq 2$, at least one coefficient of $H(x,y)$ in \eqref{eq:expressions_SRW_XY} above is negative (and in fact infinitely many coefficients are then negative). We first look at the case when $G$ is a monomial. If $G(t)=t^{2k}$, then one has $H(x,y)=-H(y,x)$ (see \eqref{eq:expressions_SRW_XY}) and thus $H$ must admit negative coefficients in its Taylor expansion, as the function itself takes negative values. The more interesting case is $G(t)=t^{2k+1}$. However, for future use, we look at general (not necessarily odd) exponents, i.e., $G(t)=t^{k}$, for some $k\geq 1$. Then with \eqref{eq:expressions_SRW_XY} one has \begin{equation*} H(x,y)=\frac{1}{xy}\sum_{\substack{i+j=k+1\\ i,j\geq 1}}(-1)^{j-1} X^i Y^j. \end{equation*} Moreover, as $n\to \infty$, one has \begin{equation*} [x^n]X^i \sim \frac{n^{2i-1}}{(2i-1)!}. \end{equation*} So for large values of $p$ and $q$, \begin{align*} [x^{p+1} y^{q+1}]H(x,y)&\sim \sum_{\substack{i+j=k+1\\ i,j\geq 1}}(-1)^{j-1} \frac{p^{2i-1}}{(2i-1)!}\frac{q^{2j-1}}{(2j-1)!} \\&=pq\sum_{\substack{i+j=k-1\\ i,j\geq 0}}(-1)^{j} \frac{p^{2i}}{(2i+1)!} \frac{q^{2j}}{(2j+1)!}. \end{align*} Consider now the general case where $G(t)$ is a polynomial $\sum_{m=1}^{k}a_m t^m$, with $a_k\neq 0$, and take $(p,q)=(r_1,r_2)n$, where $r_1,r_2$ are positive integers and $n$ tends to infinity. Using the above estimate, we have \begin{equation*} [x^{p+1} y^{q+1}]H(x,y)\sim n^{2k} r_1^{2k-1} r_2\sum_{\substack{i+j=k-1\\ i,j\geq 0}}(-1)^{j} \frac{(r_2/r_1)^{2j}}{(2i+1)!(2j+1)!}. \end{equation*} Replacing $r_2/r_1$ by $x$, our question is therefore equivalent to proving that for any fixed integer $k\geq 2$, there exists $x\in(0,\infty)$ such that \begin{equation*} \sum_{\substack{i+j=k-1\\ i,j\geq 0}}(-1)^{j} \frac{x^{2j}}{(2i+1)!(2j+1)!}<0. \end{equation*} To that end, we first observe that \begin{align*} \sum_{\substack{i+j=k-1\\ i,j\geq 0}}(-1)^{j} \frac{x^{2j}}{(2i+1)!(2j+1)!}&=\frac{(1+ix)^{2k}-(1-ix)^{2k}}{2(2k)!ix}\\&=\frac{1}{(2k)!x}\Im ((ix+1)^{2k})\\&=\frac{(1+x^2)^k}{(2k)!x}\sin(2k\arctan x). \end{align*} Clearly, given $k\geq 2$ one may fix $x\in(0,\infty)$ such that the above is negative. More precisely, this function admits $k-1$ sign changes. \subsubsection*{A related example: the king walk} We continue with the king walk, which (see the second example on Figure \ref{fig:step_sets}) by definition admits the kernel \begin{equation*} K(x,y) = xy\left(1-\frac{xy+x+xy^{-1}+y^{-1}+x^{-1}y^{-1}+x^{-1}+x^{-1}y+y}{8}\right). \end{equation*} As for the simple walk, its unique positive harmonic function is given by $h(i,j)=ij$. However, this example is a bit different as we now have $p_{1,1}\neq 0$.
A few computations starting from the kernel yield \begin{equation*} \mathcal{S}_1= \biggl\{\frac{u(t)-\sqrt{u(t)^2-4}}{2}{e}^{{i}t}:t\in[0,2\pi)\biggr\}, \end{equation*} where $u(t) = -\cos t+\sqrt{12-3\cos^2 t}$. This curve is drawn on Figure~\ref{fig:def_theta}. Computations similar to those for the simple walk lead to the following rather simple expressions for the conformal mappings: \begin{equation*} \pi_{1}(x) = \frac{3x}{x^2+x+1} \quad \text{and} \quad \psi_{1}(x) = \frac{x^2+4x+1}{(1-x)^2}. \end{equation*} Using Equation \eqref{eq:main-intro-2} with $F(t)=t/16$, one can recover the generating function of the positive harmonic function \begin{equation*} H(x,y) = \frac{F(\psi_{1}(x))-F(-\psi_{1}(y))}{K(x,y)}=\frac{1}{(1-x)^2(1-y)^2} = \sum_{i,j\geq 1} ijx^{i-1}y^{j-1}. \end{equation*} Similar computations would lead to other harmonic functions. \subsection{Symmetric small step random walks and comparison between the analytic approaches of \cite{CoBo-83} and \cite{FaIaMa-17,Ra-14}} \label{sec:uniformization} In this part, we look at the class of symmetric, small step random walks, meaning that $p_{i,j}=0$ as soon as $\vert i\vert\geq 2$ or $\vert j\vert\geq 2$. As illustrated by our bibliography, this class of models has been (and is still) widely studied in the literature, the main reason being that the zero set of the kernel is, in this case, a Riemann surface of genus $0$ or $1$, opening the way to explicit parametrizations in terms of rational or elliptic functions. Here our main objective is twofold: first, we will derive an expression of the conformal mapping $\psi_{1}$ for small step random walks, see Proposition~\ref{prop:expression_conformal_map_small_jumps}. Doing so, we will introduce some tools from complex analysis, which are close to the analytic approach developed in \cite{FaIaMa-17,Ra-14}. As a second step, we will compare the analytic approach used in this paper (inspired by \cite{CoBo-83}) with the one of \cite{FaIaMa-17,Ra-14}. \subsubsection*{Explicit expression for the conformal mapping $\psi_{1}$} We first introduce a function $T_a(x)$ that generalizes the classical Chebyshev polynomials of the first kind, obtained when $a\in\mathbb N$. This function is defined, for $x\in\mathbb C\setminus (-\infty,-1]$, by \begin{equation} \label{eq:Chebyshev_polynomial} T_a(x) ={_2F}_1 \Bigl(-a,a;\frac{1}{2};\frac{1-x}{2}\Bigr), \end{equation} where ${_2F}_1$ is the Gauss hypergeometric function. Then $T_a$ admits the following expansion, valid for $\vert x-1\vert<2$: \begin{equation*} T_a(x)= \sum_{n\geq 0} \frac a{a+n} {a+n \choose 2n} 2^n (x-1)^n. \end{equation*} When $a\in \mathbb N$, the above sum ranges from $0$ to $a$, and $T_a(\cos t)=\cos(at)$, by definition of the classical Chebyshev polynomial. Another useful formula, valid for $x$ in $\mathbb C\setminus (-\infty,-1)$, is \begin{equation*} T_a(x)=\frac{1}{2} \Bigl(\bigl(x+\sqrt{x^2-1}\bigr)^a+\bigl(x-\sqrt{x^2-1}\bigr)^a\Bigr).
\end{equation*} \begin{figure} \hspace{-10mm} \begin{tikzpicture}[scale=1.5] \draw[step=0.25cm,gray,very thin] (-1,-1) grid (1,1); \draw[->] (-1.1,0) -- (1.1,0) node[right] {$1$}; \draw[->] (0,-1.1) -- (0,1.1) node[above] {$1$}; \filldraw[thick,variable=\t,domain=0:180,samples=50,color=blue,fill=blue!5] plot ({1-sin(\t)},{tan(\t)-sin(\t)*tan(\t)}); \draw[thick,variable=\t,domain=0:360,samples=50,magenta] plot ({cos(\t)},{sin(\t)}); \draw[thick,variable=\t,domain=(3-2*sqrt(2)):1,samples=50,red] plot ({\t},{0}); \fill[red] (0.17157,0) circle (1pt); \put(20,15){$\mathcal S_1$}; \put(8,-10){$x_1$}; \end{tikzpicture} \begin{tikzpicture}[scale=2.5] \draw[step=0.25cm,gray,very thin] (0,0) grid (1,1); \draw[->] (-0.1,0) -- (1.1,0) node[right] {$\infty$}; \draw[->] (0,-0.1) -- (0,1.1) node[above] {$\infty$}; \fill[blue!5,] (0,0) -- (0.7,0.7) -- (1,0) -- cycle; \draw[thick,variable=\t,domain=0:0.8,samples=50,blue] plot ({\t},{\t}); \draw[thick,variable=\t,domain=0:1,samples=50,red] plot ({\t},{0}); \draw[thick,variable=\t,domain=0:1,samples=50,magenta] plot ({0},{\t}); \fill[red] (0.5,0) circle (0.75pt); \put(35,-13){$1$}; \draw[->] (-0.4,0.5) -- (-0.1,0.5); \draw[->] (-0.1,0.4) -- (-0.4,0.4); \draw[->] (1,0.5) -- (1.3,0.5); \put(70,40){$\omega(s)$}; \put(-30,40){$\sigma(x)$}; \put(-30,15){$x(s)$}; \end{tikzpicture} \begin{tikzpicture}[scale=1.5] \draw[step=0.25cm,gray,very thin] (0,-1) grid (1,1); \draw[->] (0,-1.1) -- (0,1.1) node[above] {$\infty$}; \draw[->] (-0.1,0) -- (1.1,0) node[right] {$\infty$}; \fill[blue!5,] (0,1) -- (0,-1) -- (1,-1) -- (1,1) -- cycle; \draw[thick,variable=\t,domain=-1:1,samples=50,blue] plot ({0},{\t}); \draw[thick,variable=\t,domain=0.4:1,samples=50,red] plot ({\t},{0}); \fill[red] (0.4,0) circle (0.75pt); \put(15,-10){$2$}; \end{tikzpicture} \caption{The conformal mapping for the simple random walk: $\sigma(x)$ maps conformally $\mathcal{S}_1^+\setminus [x_1,1]$ (the light blue domain) onto the cone, and $\omega(s)$ maps conformally the cone onto the right half plane $\mathcal{H}^+$.} \label{fig:uniformzation-simpleRW} \end{figure} \begin{proposition} \label{prop:expression_conformal_map_small_jumps} Assume \ref{H1:jumps}--\ref{H4:jumps} and the small step hypothesis. Let $\theta$ be as in \eqref{eq:angle_at_1} and \begin{equation*} \mu(x) = \frac{\mu_0x-\mu_1}{2(x-1)}, \end{equation*} with $\mu_0$ and $\mu_1$ defined by \eqref{eq:uniformization_points}. Then the conformal map $\psi_{1}$ may be chosen such that \begin{equation*} \psi_{1}(x)=2 T_{\pi/\theta}(\mu(x)), \end{equation*} where $T_{\pi/\theta}$ is the generalized Chebyshev polynomial \eqref{eq:Chebyshev_polynomial} with $a=\pi/\theta$. \end{proposition} We now prove Proposition \ref{prop:expression_conformal_map_small_jumps}. The main idea of the proof is borrowed from \cite[Sec.~6.5]{FaIaMa-17}, see also \cite{Ra-14}. For driftless, small step random walks, the zero set of the kernel \begin{equation*} \{(x,y)\in(\mathbb C\cup\{\infty\})^2: K(x,y)=0\} \end{equation*} is a Riemann surface of genus $0$, which can thus be parametrized with rational functions. As it turns out, the curves $\mathcal S_1$ and $\mathcal S_2$ become particularly simple in the uniformizing variable, from which we will deduce an expression for the conformal map. Before stating the rational uniformization of the above Riemann surface, we introduce some notation.
First, the kernel \eqref{eq:def_K} may be rewritten as $K(x,y) = a(x)y^2 + b(x)y + c(x)$, where \begin{equation*} \left\{\begin{array}{lcl} a(x) &=& -(p_{-1,-1}x^2 + p_{0,-1}x + p_{1,-1}),\\ b(x) &=& -(p_{-1,0}x^2 - x +p_{1,0}),\\ c(x) &=& -(p_{-1,1}x^2 + p_{0,1}x + p_{1,1}). \end{array}\right. \end{equation*} Let also $d=b^2-4ac$ denote the discriminant of $K(x,y)$ in $y$. It is seen that $d$ has degree $3$ or $4$, and that $1$ is a double root. If $d$ has degree $3$ (resp.\ $4$), the remaining root is denoted by $x_1$ (resp.\ the remaining roots are denoted by $x_1$ and $x_4$). It has been proved in \cite{FaIaMa-17} that $x_1\in [-1,1)$ and $x_4\in (1,\infty)\cup (-\infty,-1]$. When $d$ has degree $3$, we set $x_4=\infty$. Now put \begin{equation*} \left\{\begin{array}{lcl} s_0 &=&\frac{2-(x_1+x_4)+2\sqrt{(1-x_1)(1-x_4)}}{x_4-x_1},\\ s_1 &=& \frac{x_1+x_4-2x_1x_4+2\sqrt{x_1x_4(1-x_1)(1-x_4)}}{x_4-x_1}, \end{array}\right. \end{equation*} as well as (with $\theta$ defined in \eqref{eq:angle_at_1}) \begin{equation} \label{eq:uniformization_points} \mu_0 = s_0 +\frac{1}{s_0},\quad \mu_1 = s_1 +\frac{1}{s_1},\quad\rho = e^{-i\theta}. \end{equation} We may now state the rational uniformization; for a proof, we refer to \cite[Sec.~2.3]{FaRa-11}. \begin{lemma} \label{lem:unif} One has \begin{equation*} \{(x,y)\in(\mathbb C\cup\{\infty\})^2: K(x,y)=0\}=\{(x(s),y(s)) : s\in\mathbb C\cup\{\infty\}\}, \end{equation*} where \begin{equation*} x(s) = \frac{(s-s_1)(s-\frac{1}{s_1})}{(s-s_0)(s-\frac{1}{s_0})} \quad \text{and} \quad y(s) = \frac{(\rho s-s_1)(\rho s-\frac{1}{s_1})}{(\rho s-s_0)(\rho s-\frac{1}{s_0})}. \end{equation*} Moreover, the above rational functions admit the involutions $x(s)=x(1/s)$ and $y(s)=y(1/(\rho^2s))$. \end{lemma} Then the set of complex points where $\vert x\vert =\vert y\vert $ (of which $\mathcal K$ in \eqref{eq:main_domain} is a subset) is very simple: in the $s$ variable, it becomes the line ${e}^{{i}\frac{\theta}{2}}\mathbb{R}$. More precisely, defining the cone \begin{equation*} \mathcal{E}^+=\{r{e}^{{i}\chi}:r>0 \text{ and } \chi\in (0,\frac{\theta}{2})\}, \end{equation*} one has the following result, which is illustrated in Figure \ref{fig:uniformzation-simpleRW}: \begin{lemma} The function $x(s)$ is one-to-one from ${e}^{{i}\frac{\theta}{2}}\mathbb{R}_+$ onto $\mathcal{S}_1$. Moreover, $x(s)$ is a conformal mapping from $\mathcal{E}^+$ onto $\mathcal{S}_1^+\setminus [x_1,1]$. \end{lemma} \begin{proof} The crucial fact is that $x(s)$ maps $\mathcal{H}^+$ one-to-one onto the whole plane $\mathbb{C}$ cut along some segments. Depending on the value of $x_4$, $x(s)$ maps $\mathcal{H}^+$ one-to-one onto \begin{itemize} \item $\mathbb{C}\setminus [x_1,x_4]$ if $x_4>1$ or $x_4=\infty$; \item $\mathbb{C}\setminus ([x_1,\infty)\cup (-\infty,x_4])$ if $x_4<-1$. \end{itemize} We first prove that $x(s)$ maps ${e}^{{i}\theta/2}\mathbb{R}_+$ one-to-one onto $\mathcal{S}_1$. Indeed, for a point $s=r{e}^{{i}\frac{\theta}{2}}$ with $r>0$, one has \begin{equation*} \Im (\phi(x(s)))=\frac{2(r+\frac{1}{r})\cos\frac{\theta}{2}-(\mu_0+\mu_1)}{\mu_1-\mu_0}\geq \frac{4\cos\frac{\theta}{2}-(\mu_0+\mu_1)}{\mu_1-\mu_0} >0, \end{equation*} where the last inequality follows from a direct computation. This means that $\vert x(s)\vert <1$. Combining with the facts that $x(s)=\overline{y(s)}$ and $(x(s),y(s))$ is a solution of $K(x,y)=0$, we deduce that $x(s)\in\mathcal{S}_1$ for all $s\in {e}^{{i}\theta/2}\mathbb{R}_+$.
The one-to-one property between the two curves then follows. To conclude, it is easy to check that $x(s)$ also maps $[0,1]$ (resp.\ $[1,\infty)$) one-to-one onto $[x_1,1]$. Hence, $x(s)$ maps $\mathcal{E}^+$ one-to-one onto $\mathcal{S}_1^+\setminus [x_1,1]$. \end{proof} Let $\sigma$ denote the inverse mapping of $x(s)$: \begin{equation} \label{eq:inverse_mapping_of_uniformization} \sigma(x) = \frac{\mu_0x-\mu_1}{2(x-1)} +\sqrt{\left (\frac{\mu_0x-\mu_1}{2(x-1)}\right )^2-1}. \end{equation} The branch cut of the square root in \eqref{eq:inverse_mapping_of_uniformization} is chosen such that $\sigma$ maps $\mathcal{S}_1^+\setminus [x_1,1]$ conformally onto $\mathcal{E}^+$. Now consider the following mapping \begin{equation*} \omega(s)=s^{\pi/\theta} + s^{-\pi/\theta}, \end{equation*} which maps conformally $\mathcal{E}^+$ onto $\mathcal{H}^+\setminus [2,\infty)$. It is seen that $\omega\circ\sigma$ maps $\mathcal{S}_1^+\setminus [x_1,1]$ conformally onto $\mathcal{H}^+\setminus[2,\infty)$ (see Figure \ref{fig:uniformzation-simpleRW}). In fact, in the following lemma, a stronger statement can be deduced. \begin{lemma} \label{lem:bef_conc} The function $\omega\circ \sigma$ maps conformally $\mathcal{S}_1^+$ onto $\mathcal{H}^+$. \end{lemma} \begin{proof} By the specific form of $\omega\circ\sigma$, we know that $\omega\circ\sigma$ is analytic in $\mathcal{S}_1^+$. Moreover, since $\omega\circ\sigma$ is univalent on $\mathcal{S}_1^+\setminus [x_1,1]$, it remains to prove that $\omega\circ\sigma$ is injective on $[x_1,1)$. This is true since $\sigma$ (with suitable branch) maps $[x_1,1)$ one-to-one onto $[1,\infty)$ and $\omega$ maps $[1,\infty)$ one-to-one onto $[2,\infty)$. Therefore, $\omega\circ\sigma$ is univalent on $\mathcal{S}_1^+$. The proof is complete. \end{proof} \begin{proof}[End of the proof of Proposition \ref{prop:expression_conformal_map_small_jumps}] We use Lemma \ref{lem:bef_conc} together with the expressions for $\omega(s)$ and $\sigma(x)$. \end{proof} \subsubsection*{Comparison between the analytic approaches of \cite{CoBo-83} and \cite{FaIaMa-17,Ra-14}} Both approaches start with the same functional equation \eqref{eq:functional_equation}, and, as a second step, introduce subsets of $\mathbb C^2$ where this equation may be evaluated. The approaches differ in the choice of these subsets: \begin{itemize} \item In \cite{CoBo-83} (which we choose to follow in the present work), this subset is chosen to be $\mathcal K$ in \eqref{eq:main_domain} (where we recall that $\vert x\vert=\vert y\vert\leq 1$); \item On the other hand, the set in \cite{FaIaMa-17} is $x\in [x_1,1]$ (then $y=Y(x)$ is a solution to the kernel equation, and $x_1$ is the branch point introduced in the previous section). \end{itemize} In both cases, the method continues by stating (and solving) a BVP for the generating functions on curves obtained from the subsets above. This short recap shows that the single, but major, difference between the two approaches lies in the choice of the domain of evaluation. Both choices are equally natural for small step random walks. However, the main advantage of the choice of \cite{CoBo-83} is that the domain $\mathcal K$ may be defined without any difficulty for models admitting arbitrarily big negative jumps, as in our paper.
By contrast, we did not find any canonical way to extend the definition of the segment $[x_1,1]$ for large step models\footnote{For small step models, there is only one branch point interior to the unit disk (see again the previous section), so $x_1$ appears as the only possible choice. However, since the number of branch points increases with the amplitude of the big jumps, it is not at all clear what segment, or what union of segments, might replace $[x_1,1]$ in general.}. Let us also underline the $x\leftrightarrow y$ symmetry of the domain $\mathcal K$, while this symmetry is broken when taking $x\in [x_1,1]$ (indeed $x$ is then real and $y$ becomes non-real). Finally, as shown in the previous section, in the case of small steps, the two approaches are very similar: our curve $\mathcal S_1$ corresponds to the line $e^{i\theta/2}\mathbb R_+$, while the curve $[x_1,1]$ would correspond to $\mathbb R_+$; one passes from one curve to the other simply by multiplying by a complex number. \subsection{A family of random walks with larger steps} \label{sec:larger_jumps} Consider the model whose jumps and weights are given by \begin{equation} \label{eq:jumps_expression_mapping_fam_ex} p_{k,\ell}=\left\{\begin{array}{ll} z & \text{if } (k,\ell)=(1,1),\\ z_r & \text{if } k+\ell+r=0, \end{array}\right. \end{equation} where the $z_r$ satisfy $z+\sum_{r} (r+1)z_r=1$ (so that the $p_{k,\ell}$ are transition probabilities) and $z=\sum_{r} z_r \frac{r(r+1)}{2}$ (so as to have a zero drift, see our hypothesis \ref{H4:jumps} in Section~\ref{sec:introduction}). See Figure~\ref{fig:large_jumps_example}. For example, choosing $z=z_1=\frac{1}{3}$ and all other $z_r=0$ leads to Kreweras' step set $\{(1,1), (-1,0), (0,-1)\}$ with uniform weights. Let us remark that the model \eqref{eq:jumps_expression_mapping_fam_ex} is not always irreducible. More precisely, it is reducible if and only if all steps $(k,\ell)$ have even size (by which we mean that all coordinate sums $k+\ell$ are even). Although our irreducibility assumption \ref{H3:jumps} is not satisfied in general, we will show how to apply our main results in this slightly modified framework. Our motivation to look at this particular family of models comes from bipolar orientations on planar maps, which, as shown in the papers \cite{KeMiSh-19} and \cite{BMFuRa-20}, are in close correspondence with the model of walks confined to the first quadrant as on the right of Figure \ref{fig:large_jumps_example}. Then our model (on the left of the same figure) is simply obtained through a horizontal symmetry, so as in particular to have a model symmetric with respect to the first diagonal. Because of the connection with bipolar orientations, both models represented in Figure~\ref{fig:large_jumps_example} admit a very strong structure, which the results in this section will also illustrate.
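For concreteness (this is only an illustration, with the hypothetical helper name \texttt{is\_admissible}), the two constraints above on the weights are elementary to check for the concrete examples used in this section:
\begin{verbatim}
from fractions import Fraction as F

def is_admissible(z, zr):
    # Total transition probability one: z + sum_r (r+1) z_r = 1
    # (there are r+1 steps (k,l) with k+l = -r), and zero drift:
    # z = sum_r z_r r(r+1)/2.
    mass = z + sum((r + 1) * w for r, w in zr.items())
    drift = z - sum(w * F(r * (r + 1), 2) for r, w in zr.items())
    return mass == 1 and drift == 0

assert is_admissible(F(1, 3), {1: F(1, 3)})  # Kreweras' model
assert is_admissible(F(1, 2), {2: F(1, 6)})  # the example with z=1/2, z_2=1/6 below
\end{verbatim}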
\begin{figure} \begin{center} \begin{tikzpicture}[scale=.7] \draw[->,white] (-3,-3) -- (-3,3); \draw[->,white] (3,-3) -- (-3,3); \draw[->,black] (0,0) -- (1,1); \draw[->,violet] (0,0) -- (0,-1); \draw[->,violet] (0,0) -- (-1,0); \draw[->,blue] (0,0) -- (-2,0); \draw[->,blue] (0,0) -- (0,-2); \draw[->,blue] (0,0) -- (-1,-1); \draw[->,green] (0,0) -- (-3,0); \draw[->,green] (0,0) -- (-2,-1); \draw[->,green] (0,0) -- (-1,-2); \draw[->,green] (0,0) -- (-0,-3); \end{tikzpicture} \begin{tikzpicture}[scale=.7] \draw[->,white] (-3,-3) -- (-3,3); \draw[->,white] (3,-3) -- (-3,3); \draw[->,black] (0,0) -- (1,-1); \draw[->,violet] (0,0) -- (0,1); \draw[->,violet] (0,0) -- (-1,0); \draw[->,blue] (0,0) -- (-2,0); \draw[->,blue] (0,0) -- (0,2); \draw[->,blue] (0,0) -- (-1,1); \draw[->,green] (0,0) -- (-3,0); \draw[->,green] (0,0) -- (-2,1); \draw[->,green] (0,0) -- (-1,2); \draw[->,green] (0,0) -- (-0,3); \end{tikzpicture} \end{center} \caption{On the left: the model studied in Section \ref{sec:larger_jumps}. It is inspired by a similar (non-symmetric) model analyzed in \cite{KeMiSh-19,BMFuRa-20}, displayed on the right picture.} \label{fig:large_jumps_example} \end{figure} Introduce \begin{equation} \label{eq:expression_rho} \rho(x)=z\sqrt{\frac{I_0(x)-I_0(t)}{I_0(x)-I_0(1)}}, \end{equation} where the function $I_0$, introduced in \cite{BMFuRa-20}, takes the value \begin{equation*} I_0(x)=x+\frac{z}{x}-\sum z_r x^{r+1}, \end{equation*} where $t$ is the unique real point of $\mathcal S_1$ apart from $1$, and where the branch cut of the square root function in \eqref{eq:expression_rho} is chosen on $\mathbb{R}_-$. Equivalently, $t$ is characterized by \begin{equation} \label{eq:characterization_t} I_0'(t)=0 \quad \text{and} \quad t\in (-1,1), \end{equation} as it easily follows from \eqref{eq:K(x,x)}. Our main result is the following: \begin{proposition} \label{prop:expression_mapping_fam_ex} The map $\rho$ in \eqref{eq:expression_rho} is a conformal map from $\mathcal S_1^+$ to $\mathcal{H}^+$, sending $0$ to~$z$. \end{proposition} The remainder of this section has the following structure: we will first provide the proof of Proposition \ref{prop:expression_mapping_fam_ex}, and then give a few consequences of it. \subsubsection*{Proof of Proposition \ref{prop:expression_mapping_fam_ex}} We start by noticing that for $x\not=y$, \begin{align*} K(x,y)&=xy-z-xy\sum z_r\frac{y^{r+1}-x^{r+1}}{y-x}\\ &=\frac{xy}{y-x}\left(y-x-\frac{z}{x}+\frac{z}{y}-\sum z_r(y^{r+1}-x^{r+1})\right)\\&=\frac{xy}{y-x}\bigl(I_0(y)-I_0(x)\bigr), \end{align*} and for $x=y$ \begin{equation} \label{eq:K(x,x)} K(x,x)=x^2-z-\sum (r+1)z_rx^{r+2}=x^2I_0'(x). \end{equation} Denote $\mathbb{P}^1=\mathbb C\cup\{\infty\}$. \begin{lemma} \label{lem:prelim_prop:expression_mapping_fam_ex} If $(x,y)\in\mathcal{K}$, then $I_0(x)=I_0(y)\in\mathbb{R}$, and the map $(x,y)\mapsto I_0(x)$ is two-to-one from $\mathcal{K}$ to an interval $[a,b]$ of $\mathbb{R}$, with $a=I_0(t)<0$ and $b=I_0(1)>0$. Moreover, $I_0$ maps the curve $\mathcal S_1^+$ onto $\mathbb{P}^1\setminus [a,b]$. \end{lemma} \begin{proof} Note first that if $x=y$, then the condition $x=\overline{y}$ implied by the symmetry of $K$ yields $x=\pm1$, which implies $I_0(x)\in\mathbb{R}$. The point $(1,1)$ always belongs to $\mathcal{K}$, and so does $(-1,-1)$ if and only if the walk is reducible (which, we recall, happens when all steps have even size). If $x\not=y$, the above expression of $K$ yields $I_0(x)=I_0(y)$. 
Since $I_0$ has real coefficients and $y=\overline{x}$, we also have $I_0(x)=I_0(\overline{x})=\overline{I_0(x)}$, and $I_0(x)\in \mathbb{R}$. By the equality $K(x,x)=x^{2}I_0'(x)$, see \eqref{eq:K(x,x)}, we get that $I'_0(x)=0$ if and only if $K(x,x)=0$. By Rouch\'e's theorem, for $h\in(0,1)$, the map \begin{equation*} K_{h}(x)=x^2-h\bigl(z+\sum (r+1)z_rx^{r+2}\bigr) \end{equation*} has exactly two zeros in the unit disk, which are real. Hence, letting $h$ go to one yields that $x\mapsto K(x,x)$ has at most two real zeros in the unit disk, plus possible additional zeros on the unit circle. Moreover, $K(x,x)$ cannot vanish on the unit circle except at $-1,1$; this follows from \ref{H3:jumps} in the irreducible case, and would follow from a similar argument in the reducible case. By the study of $x\mapsto K(x,x)$ on $[-1,1]$ done in Section \ref{sec:func_eq}, there are only two zeros, at the two real points of $\mathcal S_1$, one being $1$ and the other one being strictly negative (since $p_{1,1}>0$). Hence, $I'_0(x)$ vanishes only at the two real points of $\mathcal S_1$. Since $I_0(x)=I_0(\overline{x})$, we deduce that $I_0$ is two-to-one from $\mathcal S_1$ to an interval $[a,b]$ of $\mathbb{R}$. Since $z>\sum z_r$, we deduce that $I_0(1)>0$ and $I_0(t)<0$, where $t$ is the unique real point of $\mathcal S_1$ apart from $1$ (recall that $t$ is negative by Lemma \ref{lemma: properties of S1 S2'}). Since $I_0$ is analytic on $\mathcal S_1^+$ and continuous on the closure $\overline{\mathcal S_1^+}$, \begin{equation*} \partial \bigl(I_0(\mathcal S_1^+)\bigr)\subset I_0(\partial \mathcal S_1^+)=I_0(\mathcal S_1)=[I_0(t),I_0(1)]=[a,b]. \end{equation*} Hence, $\mathbb{P}^1\setminus [a,b]\subset I_0(\mathcal S_1^+)$. Let $x\in \mathcal S_1^+\setminus \mathbb{R}$. Then, $I_0(x)\not\in[a,b]$, for otherwise $K(x,\overline{x})=0$ which would contradict the fact that $x\not\in \mathcal S_1$. On $\mathbb{R}\cap \mathcal S_1^+$, $x^2I_0'(x)=K(x,x)<0$, thus $I_0$ is decreasing. Since $I_0(0)=\infty$, we deduce that $I_0(\mathbb{R}\cap \mathcal S_1^+)=(\mathbb{R}\setminus[a,b])\cup\{\infty\}$, and thus $I_0(\mathcal S_1^+)=\mathbb{P}^1\setminus [a,b]$. \end{proof} \begin{proof}[End of the proof of Proposition \ref{prop:expression_mapping_fam_ex}] First, we directly check that $\rho(0)=z\sqrt{1}=z$. Further, since \begin{equation*} z\mapsto \sqrt{\frac{z-a}{z-b}} \end{equation*} maps conformally $\mathbb{P}^1\setminus[a,b]$ to the right half-plane $\mathcal{H}^+$, we deduce from Lemma \ref{lem:prelim_prop:expression_mapping_fam_ex} that $\rho$ in \eqref{eq:expression_rho} is analytic from $\mathcal S_1^+$ to $\mathcal{H}^+$. As a second step, we will prove that $\rho$ is continuous and one-to-one from $\mathcal S_1$ to $[a,b]$. As a direct consequence, $\rho$ will be conformal from $\mathcal S_1^+$ to $\mathcal{H}^+$, thereby concluding the proof of Proposition \ref{prop:expression_mapping_fam_ex}. We thus show that $\rho$ extends by continuity to an injective map from $\mathcal S_1$ to $[a,b]\subset\mathbb{R}$. Suppose that $\{x_n\}_{n\geq 1}$ converges to $\xi\in \mathcal S_1$ with $\Im\xi>0$. Then, by continuity of $I_0$, $I_0(x_n)\rightarrow I_0(\xi)$. Since $I_0'(x)<0$ for $x\in [t,1]$ and $I_0([t,1])\subset\mathbb{R}$, we deduce that $\Im I_0(x)<0$ for $x$ in a neighbourhood of $[t,1]$ in $i\mathcal{H}^+$.
Taking into account that $I_0(\mathcal S_1^+\cap i\mathcal{H}^+)\subset \mathbb{C}\setminus \mathbb{R}$ (for otherwise it would meet $\mathcal S_1$), we have $I_0(\mathcal S_1^+\cap i\mathcal{H}^+)\subset -i\mathcal{H}^+$. Hence, $I_0(x_n)\rightarrow I_0(\xi)\in [a,b]$ while staying in $-i\mathcal{H}^+$. This implies that \begin{equation*} \frac{I_0(x_n)-a}{I_0(x_n)-b} =1+\frac{b-a}{I_0(x_n)-b}\to \frac{I_0(\xi)-a}{I_0(\xi)-b}, \end{equation*} while lying in $i\mathcal{H}^+$ for $n$ large enough. Thus, $\rho(x_n)$ tends to $iz\sqrt{\frac{I_0(\xi)-a}{b-I_0(\xi)}}$, where $\sqrt{\frac{I_0(\xi)-a}{b-I_0(\xi)}}$ is the unique positive root of $X^2=\frac{I_0(\xi)-a}{b-I_0(\xi)}$. Similarly, if $\{x_n\}_{n\geq 1}$ converges to $\xi\in \mathcal S_1$ with $\Im\xi<0$, then $\rho(x_n)$ tends to $-iz\sqrt{\frac{I_0(\xi)-a}{b-I_0(\xi)}}$. Hence, we can extend $\rho$ by continuity to $\mathcal S_1$ with the value \begin{equation*} \rho(\xi)=\pm iz \sqrt{\frac{I_0(\xi)-a}{b-I_0(\xi)}}\qquad \text{if }\xi\in \pm i\mathcal{H}^+. \end{equation*} Notice that the above formula still holds when $\xi\in\mathbb{R}$, with $\rho(1)=\infty$ and $\rho(t)=0$. Since $I_0$ is two-to-one from $\mathcal S_1$ to $[a,b]$ except at $t$ and $1$, we deduce that $\rho$ is injective on $\mathcal S_1$. \end{proof} \subsubsection*{Applications of Proposition \ref{prop:expression_mapping_fam_ex}} Theorem \ref{thm:main_intro-1} directly yields the following explicit expression for the harmonic functions $h_n$ as introduced in \eqref{eq:expression_H_n}. \begin{proposition} Let $t$ be as in \eqref{eq:characterization_t}. Then, for each $n\geq 1$, the power series expansion at $(0,0)$ of the bivariate series \begin{equation*} H_n(x,y)=\frac{(y-x)z^n\left(\left(\frac{I_0(x)-I_0(t)}{I_0(x)-I_0(1)}\right)^{n/2}+(-1)^{n-1}\left(\frac{I_0(y)-I_0(t)}{I_0(y)-I_0(1)}\right)^{n/2}\right)}{xy\bigl(I_0(y)-I_0(x)\bigr)} \end{equation*} defines a harmonic function $h_n$ for the Laplacian operator associated to the model \eqref{eq:jumps_expression_mapping_fam_ex}. \end{proposition} Remark that for even values of $n$, $H_n$ is a rational function. For Kreweras' model, $I_0(x)=x+\frac{1}{3x}-\frac{x^2}{3}$. Hence, $I_0'(x)=1-\frac{1}{3x^2}-\frac{2x}{3}$, which admits the unique root $t=-\frac{1}{2}$ in $(-1,1)$. Then, after some computations, $\rho$ in \eqref{eq:expression_rho} simplifies into \begin{equation*} \rho(x)=\frac{1+2x}{3}\sqrt{\frac{1-x/4}{(1-x)^3}}, \end{equation*} and we have for example \begin{align*} H_1(x,y)&=\frac{(1+2x)\sqrt{\frac{1-x/4}{(1-x)^3}}+(1+2y)\sqrt{\frac{1-y/4}{(1-y)^3}}}{3xy\bigl(1-\frac{1}{3}(\frac{1}{xy}+x+y)\bigr)}\\&=-\frac{1}{18}\left(1+\frac{27}{16}x+\frac{27}{16}y+\frac{567}{256}x^2+3xy+\frac{567}{256}y^2+\cdots\right). \end{align*} We also have \begin{equation*} H_2(x,y)=-\frac{9}{4}\frac{x-y}{(1-x)^3(1-y)^3}=-\frac{9}{4}\left(x-y+3x^2-3y^2+\cdots\right). \end{equation*} As a second example, consider $z=\frac{1}{2}$, $z_2=\frac{1}{6}$ and all other $z_r=0$. Then the curve admits the parametrization \begin{equation} \label{eq:curve_S1_big_jumps_Example} \mathcal{S}_1=\left\lbrace\frac{1}{\sqrt{1+\frac{2}{\sqrt{3}}\sin t}}e^{it}: t\in[0,2\pi)\right\rbrace, \end{equation} see Figure \ref{fig:curve_S1_example}. In this case, the conformal mapping takes the form \begin{equation*} \rho(x)=\frac{1}{2}\sqrt{\frac{(3-x)(1+x)^3}{(3+x)(1-x)^3}}.
\end{equation*} \begin{figure} \begin{tikzpicture}[scale=2.5] \draw[gray,very thin] (-1.1,-1.1) grid (1.1,1.1) [step=0.25cm] (-1,-1) grid (1,1); \draw[->] (-1.1,0) -- (1.1,0) node[right] {$1$}; \draw[->] (0,-1.1) -- (0,1.1) node[above] {$1$}; \textcolor{red}{\qbezier(64,-11)(49,0)(64,11)} \draw[thick,variable=\t,domain=0:180,samples=50,blue] plot ({cos(\t)/sqrt(1+2/sqrt(3)*sin(\t))},{sin(\t)/sqrt(1+2/sqrt(3)*sin(\t))}); \draw[thick,variable=\t,domain=0:180,samples=50,blue] plot ({cos(\t)/sqrt(1+2/sqrt(3)*sin(\t))},{sin(\t)/sqrt(1+2/sqrt(3)*sin(\t))*(-1)}); \put(22,5){$\theta=\frac{2\pi}{3}$} \put(28,23){$\mathcal S_1$} \end{tikzpicture} \caption{The curve $\mathcal S_1$ in \eqref{eq:curve_S1_big_jumps_Example}, for the model \eqref{eq:jumps_expression_mapping_fam_ex} with jumps $z=\frac{1}{2}$, $z_2=\frac{1}{6}$ and all other $z_r=0$.} \label{fig:curve_S1_example} \end{figure} \bibliographystyle{plain}
{ "timestamp": "2020-12-17T02:18:35", "yymm": "2012", "arxiv_id": "2012.08947", "language": "en", "url": "https://arxiv.org/abs/2012.08947" }
\section{Introduction} \label{sec:intro} A \ac{GW} signal contains enough information to determine the luminosity distance to the binary \citep{1986Natur.323..310S}. The distance distributions of multiple observations can be combined to infer the redshift evolution of the merger rate \citep{2018ApJ...863L..41F, 2019ApJ...882L..24A, 2020arXiv200807014R, o3a_rnp}\footnote{We fix the cosmological parameters to their Planck 2015 values \citep{2016A&A...594A..13P}.}. This is an important aspect in the population analysis of merging black holes as it has close ties with the star formation history of the universe or the dynamical evolution of star clusters. At the least, it can probe the time delay between star formation and the merger of remnants. For isolated binaries, the time delay between the formation of a stellar-mass binary and the merger of the remnants is due to the evolution of the components, the common-envelope phase, and the orbital evolution caused by the emission of \ac{GW}s \citep{2014LRR....17....3P, 2015ApJ...806..263D}. Binaries are also formed in star clusters after components segregate to the core, where they form and tighten binaries through many-body dynamics \citep{2000ApJ...528L..17P}. The usual effect of the delay is a shallower evolution of the merger rate of remnants with redshift compared to the redshift evolution of the star formation rate \citep{2020ApJ...898..152S}. In this letter we reconstruct the evolution of the merger rate with redshift using observations in GWTC-2 occurring during the first two observing runs and the first half of the third run (O1, O2, and O3a) \citep{losc}. We extend the flexible mixture-model framework VAMANA \citep{vamana} to model the evolution as a power-law function independent of the other signal parameters. Our work differs from previous analyses in using a flexible Gaussian mixture model that admits chirp mass as a signal parameter. Additionally, and more importantly, we show that the uncertainty on the reconstruction can be reduced by imposing the homogeneity and isotropy of the universe. Moreover, by comparing with an independent analysis that does not make such an imposition, we probe the cosmological principle. We describe the method and analysis in section \ref{sec:method} and present results in section \ref{sec:results}. \section{Method and Analysis} \label{sec:method} The methodology to model population properties of merging compact binaries has been discussed in multiple publications \citep{2019MNRAS.486.1086M, 2018ApJ...868..140T, 2019PASA...36...10T}. Following \citep{2018ApJ...868..140T}, the posterior on model hyper-parameters is given by equation \ref{eq:pop_bayes}, \begin{eqnarray} p(\lambda | \{\bm{d}\}) &\propto& \prod_{i=1}^{N_\mathrm{obs}} \frac{\int \mathrm{d}\bm{\theta}\;p(\bm{d}_{i} | \bm{\theta} )\;p (\bm{\theta} | \lambda)}{\int \mathrm{d}\bm{\theta}\;p_{\mathrm{det}}(\bm{\theta})\; p(\bm{\theta}|\lambda)}\,p(\lambda) \nonumber \\ &\equiv& e^{\mathcal{L}}\,p(\lambda) \label{eq:pop_bayes} \end{eqnarray} where $\bm{d} \equiv \{ \bm{d}_1, \cdots, \bm{d}_{N_\mathrm{obs}}\}$ is the set of observations, $\lambda$ is the population model, $\bm{\theta}$ are the signal parameters admitted in the population analysis and $p_\mathrm{det}(\bm{\theta})$ encodes the probability that an event with signal parameters $\bm{\theta}$ is observed with confidence. $p(\lambda)$ is the prior probability on the model hyper-parameters and $\mathcal{L}$ is the log-likelihood.
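The following is a minimal sketch, with illustrative function and array names (not VAMANA's actual interface), of how $\mathcal{L}$ is estimated in practice from discrete samples; the ingredients are described in the next paragraph:
\begin{verbatim}
import numpy as np

def log_likelihood(pop_pdf, pe_samples, pe_priors, found_inj, inj_pdf, n_draw):
    """Monte Carlo estimate of the log-likelihood in equation (1).

    pop_pdf    : callable, target population density p(theta | lambda)
    pe_samples : list of arrays, PE samples theta_k^i for each event
    pe_priors  : list of arrays, p(theta_k^i | lambda_PE) for each event
    found_inj  : array, parameters of the injections that were detected
    inj_pdf    : array, drawing density p(theta | lambda_inj) of found_inj
    n_draw     : total number of injections drawn
    """
    # Denominator: Monte Carlo estimate of int p_det(theta) p(theta|lambda)
    log_sel = np.log(np.sum(pop_pdf(found_inj) / inj_pdf) / n_draw)
    # Numerators: per-event evidences by importance sampling over PE samples
    log_num = [np.log(np.mean(pop_pdf(s) / p))
               for s, p in zip(pe_samples, pe_priors)]
    return np.sum(log_num) - len(log_num) * log_sel
\end{verbatim}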
The analysis samples the posterior $p(\lambda | \{\bm{d}\})$ using a \ac{MCMC} method and thus does not require the normalisation constant for equation \ref{eq:pop_bayes}. In practice, equation \ref{eq:pop_bayes} is estimated using discrete samples. \ac{PE} analysis samples $p(\bm{d}_{i} | \bm{\theta} )$ for a population model $p (\bm{\theta} | \lambda_{\mathrm{PE}})$ \citep{2015PhRvD..91d2003V}, and large-scale injection campaigns are performed to estimate the sensitivity of the detector network for a population model $p(\bm{\theta} | \lambda_{\mathrm{inj}})$. Both the numerator and the denominator are then calculated for a target population $p (\bm{\theta} | \lambda)$ using importance sampling \citep{2018CQGra..35n5009T, 2019MNRAS.486.1086M}. The probability distribution in equation \ref{eq:pop_bayes} is extended to include the merger rate by incorporating the Poisson term \citep{extended_lkl} \begin{eqnarray} p(N_\mathrm{obs} | \lambda, \{\bm{d}\}) = \frac{\mu^{N_\mathrm{obs}}\mathrm{e}^{-\mu}}{N_\mathrm{obs}!} \\ \mu = \int R(z)\,p(\bm{\theta}|\lambda)\frac{1}{1+z}\frac{\mathrm{d}V_c}{dz}\, p_{\mathrm{det}}(\bm{\theta})\; \mathrm{d}\bm{\theta}, \label{eq:poisson} \end{eqnarray} where $\mathrm{d} V_c/\mathrm{d} z$ is the differential co-moving volume and $\mu$ is the expected number of observations at the population-averaged merger rate $R(z)$ for the population model $p(\bm{\theta}|\lambda)$ at redshift $z$. Often the prior on the merger rate is chosen to be uniform-in-log, which is scale invariant. On marginalising over the merger rate, the posteriors of the other hyper-parameters remain unaffected and are equivalent to an \emph{unextended} analysis \citep{uniforminlog}. Thus the Poisson term carries no information. The redshift evolution of the merger rate can be probed by including a phenomenological model for it. For example, some works have used a phenomenological distribution that is also used in modeling the star formation history \citep{2014ARA&A..52..415M} \begin{equation} R(z) = \frac{R\;(1 + z) ^ \kappa}{1 + [(1 + z)/(1 + z_p)]^\beta}, \label{eq:pz} \end{equation} where $R$ is the population averaged merger rate at $z = 0$ and $z_p$ is the redshift at which the peak star-formation occurs. The ground-based gravitational-wave detectors have not yet reached the sensitivity to observe mergers occurring near peak star formation \citep{2020LRR....23....3A}; therefore, current analyses only measure the leading-order term, which is the power law with exponent $\kappa$ in the numerator of equation \ref{eq:pz} \citep{2018ApJ...863L..41F, 2019ApJ...882L..24A, 2020ApJ...896L..32C, 2020arXiv200807014R, o3a_rnp}. In a previous article, we introduced VAMANA, a flexible population analysis framework \citep{vamana}. VAMANA models the chirp mass, mass-ratio, and aligned spin distribution using a mixture model. The analysis explores all distributions, expressible as a sum of weighted Gaussians, that are within a distance measure of the reference chirp mass distribution. The reference chirp mass distribution is a simple power law that maximises $\mathcal{L}$. We refer the interested reader to section IIIB of \cite{vamana} for a detailed description of the model. Equation \ref{eq:pz} is independent of the other signal parameters and we extend VAMANA to model the redshift evolution of the merger rate by multiplying the existing population model by the density \begin{equation} p(z\,|\,\kappa) \propto \frac{1}{1+z}\frac{\mathrm{d}V_c}{dz} (1 + z)^\kappa.
\label{eq:pz1} \end{equation} The normalisation constant in the previous equation can be ignored as it cancels from the numerator and the denominator of equation \ref{eq:pop_bayes}. Having incorporated this model, we also maximise on $\kappa$ when obtaining the reference population. The phenomenological model in equation \ref{eq:pz1} has only one hyper-parameter; in spite of that, the measured credible intervals on $\kappa$ are large \citep{o3a_rnp}, and thus there is a large uncertainty on how steep or shallow the evolution of the merger rate with redshift is. The primary source of uncertainty in $\kappa$ is the uncertainty in the measurement of the luminosity distance, which arises from the degeneracy between the luminosity distance and the inclination angle of a binary \citep{2019ApJ...877...82U}. Loosely speaking, there is not enough information in the analysis to precisely model the redshift evolution of the merger rate. However, this information can be collected over a large number of observations and a precise inference can be made. It is also possible to inject information into the analysis. In particular, nowhere in the population analysis do we demand that the universe be isotropic and homogeneous. In fact, we only supply this as prior information when estimating parameters from the observed signals \citep{2015PhRvD..91d2003V}. The cosmological principle is the notion that the universe is isotropic and homogeneous \citep{1972gcpa.book.....W, 1993ppc..book.....P}, and it is well supported by the data \citep{2005pfc..book.....M}. One way to enforce it is by binning the redshift and extending the likelihood in equation \ref{eq:pop_bayes} by the probability \begin{equation} p(\{N^1_\mathrm{obs}, N^2_\mathrm{obs}, \cdots, N^b_\mathrm{obs}\} | \lambda, \{\bm{d}\}) = \frac{\mu^{N_\mathrm{obs}}\mathrm{e}^{-\mu}}{\prod_{j = 1}^{j = b} N^j_\mathrm{obs}!}, \label{eq:bpoisson} \end{equation} where $b$ is the number of bins and $ N_\mathrm{obs} = \sum_{j = 1}^{j = b} N^j_\mathrm{obs}$. Thus, in addition to the total number of observations as predicted by the population model and population averaged merger rate, we are also interested in their distribution over the bins. The numerators in equations \ref{eq:poisson} and \ref{eq:bpoisson} are the same, but the denominator in equation \ref{eq:bpoisson} acts as a goodness of fit with an isotropic and homogeneous universe. We are not applying the cosmological principle in the strictest sense, as our bin sizes are of the order of a few hundred megaparsecs, and it is conceivable to have an anisotropic universe that reproduces the bin counts, $N^j_\mathrm{obs}$, as predicted by an isotropic universe. To impose isotropy strictly one has to create bins in the sky \citep{2020arXiv200302919S, 2020arXiv200611957P}. To count the number of observations in a bin we introduce the flag $g_{ij}$, which assigns the $i^{\mathrm{th}}$ observation to the $j^{\mathrm{th}}$ bin, \begin{equation} g_{ij}= \begin{cases} 1\quad\text{if the $i^{\mathrm{th}}$ observation lies in the $j^{\mathrm{th}}$ bin},\\ 0 \quad \mathrm{otherwise}, \end{cases} \end{equation} and the corresponding function, \begin{equation} g_{ij}(z)= \begin{cases} 1\quad\mathrm{for} \quad z^j_{\min} < z \leq z^j_{\max},\\ 0 \quad \mathrm{otherwise}. \end{cases} \end{equation} where $z^j_{\min}$ and $z^j_{\max}$ are the boundaries of the $j^{\mathrm{th}}$ bin.
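As an aside, the binned term in equation \ref{eq:bpoisson} is straightforward to evaluate numerically; the following is a minimal sketch (the function name and the example counts are illustrative, and in the actual analysis the bin counts are themselves probabilistic, as described next):
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def log_binned_poisson(mu, n_obs_bins):
    """Log of the binned Poisson probability: mu^(N_obs) e^(-mu) divided
    by the product of the factorials of the per-bin counts; gammaln
    handles non-integer (probabilistic) counts as well."""
    n = np.asarray(n_obs_bins, dtype=float)
    return n.sum() * np.log(mu) - mu - gammaln(n + 1.0).sum()

# e.g. 40 expected detections and observed bin counts [10, 9, 11, 9]
print(log_binned_poisson(40.0, [10, 9, 11, 9]))
\end{verbatim}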
The number of observations in the bins is then \begin{equation} \{N^1_\mathrm{obs}, N^2_\mathrm{obs}, \cdots, N^b_\mathrm{obs}\} = \{\sum_{i=1}^{N_\mathrm{obs}} g_{i1}, \cdots, \sum_{i=1}^{N_\mathrm{obs}} g_{ib}\}. \end{equation} Along with the probability in equation \ref{eq:bpoisson} we also need $p(g_{ij}|\lambda, \bm{d}_i)$, which is the probability that the $i^{\mathrm{th}}$ observation originated from the $j^{\mathrm{th}}$ bin. This can be easily calculated by re-weighting the redshift estimates \citep{2019MNRAS.486.1086M}, \begin{equation} p(g_{ij}|\lambda, \bm{d}_i) = \frac{\sum_k g_{ij}(z_k^i)\,p(\theta_k^i|\lambda) / p (\theta_k^i | \lambda_{\mathrm{PE}})}{\sum_k p(\theta_k^i|\lambda) / p (\theta_k^i | \lambda_{\mathrm{PE}})}, \label{eq:pgij} \end{equation} where $\theta_k^i$ are the sampled parameters for the $i^{\mathrm{th}}$ signal. We perform two analyses: one where we use one bin and calculate the Poisson probability using equation \ref{eq:poisson}, and the other where we create four redshift bins and calculate the Poisson probability using equations \ref{eq:bpoisson} and \ref{eq:pgij}. We chose [0, 0.19, 0.27, 0.41, 1.50] as the bin edges. With this choice the expected numbers of observations in the bins are approximately equal for the reference population. We use a uniform prior on $\kappa$ in the range [-6, 6]. For the second analysis, we use a uniform prior on $g_{ij}$. All the \ac{PE} samples and injection campaign data we have used in this analysis are publicly available \citep{losc}. All our settings, priors, and event selection remain as described in \citep{2020arXiv201104502T}. \section{Results} \label{sec:results} In this section, we compare the results obtained for the two analyses. We perform a few sanity checks to verify the correctness of the analysis. Each posterior sample in our analysis is equally probable to be the true mass, spin, and redshift distribution. For a reconstructed population, $p(\bm{\theta}|\lambda)$, corresponding to each posterior $\lambda$, we apply selection effects and generate multiple realisations of expected observations from this population model. Each prediction is chosen to have the same number of data points as the number of observations used in the analysis. Moreover, we can also generate multiple realisations of the observed data. We perform importance sampling on each observation using the population model corresponding to each posterior and randomly select one sample from each observation. Figure \ref{fig:sanity} plots the cumulative distributions for the predicted and the observed data. The 90\% credible interval of the observed data lies within the 90\% credible interval of the predicted data. \begin{figure*} \centering \includegraphics[width=\textwidth]{sanity_checks.jpg} \caption{Left) The light blue band is the 90\% confidence of the cumulative probability of the posterior predictive obtained after applying selection effects to the reconstructed redshift evolution of the merger rate. The dark blue band is the 90\% confidence obtained by bootstrapping various realisations of the observed data. Each realisation of the observed data is generated by first performing importance sampling on the redshift estimate of the observations using posterior values of $\kappa$ and then selecting one data point from each observation.
The observed data is enclosed within the 90\% confidence of the posterior's prediction. Right) The natural log of the likelihood defined in equation 2; the absence of any visible trend in the values indicates proper convergence of the sampler. Both plots correspond to the four-bin case.} \label{fig:sanity} \end{figure*} Figure \ref{fig:zevol} plots the posterior on $\kappa$. For the one-bin case we measure $\kappa$ to be $1.09^{+2.30}_{-2.46}$, but this changes to $1.15^{+1.84}_{-1.83}$ for the four-bin case. The width of the credible interval reduces by 1.1. The posterior on $\kappa$ is sensitive to the choice of prior on the mass distribution. VAMANA models the chirp mass as a sum of weighted Gaussians and we choose the locations of these Gaussians to have a uniform-in-log prior. However, if we change the prior on the locations of the Gaussians to be uniform, the mean value of $\kappa$ reduces to around 0.5. \begin{figure*} \centering \includegraphics[width=\textwidth]{zevolution.jpg} \caption{Left) The posterior on $\kappa$. The blue curve is for the four-bin case and the orange curve is for the one-bin case. The width of the credible interval shrinks from [-1.37, 3.40] to [-0.68, 2.97] due to redshift binning. Right) The redshift evolution of the merger rate. Binning reduces the width of the merger rate interval at redshift $z = 1.50$ by a factor of 2.} \label{fig:zevol} \end{figure*} In \cite{2020arXiv201104502T} we reported a merger rate of $25.8^{+11.5}_{-8.8}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$ for a rate non-evolving in redshift. After allowing a power-law evolution for the merger rate we obtain $20.80^{+16.00}_{-11.40}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$ for the one-bin case and $20.74^{+12.39}_{-9.09}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$ for the four-bin case. The mean value of $\kappa$ is positive, which results in an increase in the sensitive volume compared to a non-evolving merger rate ($\kappa = 0$) and hence a reduced merger rate. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{rate_posterior.jpg} \caption{The rate posterior at redshift zero. The orange curve is the posterior for the one-bin case and the blue curve for the four-bin case. Binning reduces the credible interval from [9.4, 36.8] to [11.6, 33.1] $\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$. } \label{fig:rate} \end{figure} \subsection{Cosmological Principle} The two analyses we perform differ in the imposition of the cosmological principle. We should stress again that we are not applying the cosmological principle in the strictest sense. In fact, our analysis is only sensitive to the relative distribution of observations in the redshift bins, which themselves have widths of many hundreds of megaparsecs. However, we can still probe any substantial departure from the homogeneous and isotropic distribution of the signals. In the event that the cosmological principle is violated, the four-bin analysis will not explain the data as well as the one-bin analysis. The usual way to perform model selection is by comparing the evidences from the two analyses. But as we use Metropolis--Hastings sampling \citep{10.1093/biomet/57.1.97}, we cannot calculate the evidence. However, as probably many hundreds of events are required to properly probe the cosmological principle, in this asymptotic limit we expect the mean of the reconstructed distribution to be very close to the true distribution \citep{judith2011}. We thus use a Bayes factor calculated using the mean of the reconstructed distribution.
\begin{eqnarray}\label{eq:odds_ratio} \mathfrak{R} \equiv \frac{p(\{\bm{d}\} | f_1)}{p(\{\bm{d}\} | f_2 )} = \left[\frac{V(f_1)}{V(f_2)}\right]^{-N} \nonumber\\\prod_{i=1}^{N} \left[\frac{ \int \mathrm{d} \bm{\theta} p(\bm{d}_i | \bm{\theta})\;p(\bm{\theta}|f_1) }{ \int \mathrm{d} \bm{\theta} p(\bm{d}_i | \bm{\theta})\;p(\bm{\theta}|f_2)}\right], \label{eq:bf} \end{eqnarray} where $V(f)=\int \mathrm{d}\bm{\theta}\; p_{\mathrm{det}}(\bm{\theta})\,p(\bm{\theta}|f)$ accounts for the selection effects (cf.\ equation \ref{eq:pop_bayes}), the probability distribution $p(\bm{\theta}|f)\equiv f(\bm{\theta})$ is the average of the population model over the hyper-parameter posterior, \begin{equation} f(\bm{\theta}) = \frac{1}{n}\sum_k^n p(\bm{\theta} | \lambda_k), \end{equation} and $n$ is the number of posterior samples. Here we are giving equal prior probability to the distributions $f_1$ and $f_2$. Using 39 observations we calculate $\mathfrak{R}$ to be 1.02. In figure \ref{fig:compare_mchirp} we show the reconstructed chirp mass distribution for the two analyses. The reconstructed means are nearly identical. The very small difference is most probably due to sampling error and to the use of different waveform models in correcting for the selection effects, in the calculation of the Poisson term, and in estimating the signal parameters. \begin{figure*} \centering \includegraphics[width=\textwidth]{compare_mchirp.jpg} \caption{The reconstructed chirp mass distribution for the two analyses and the 90\% credible interval. The orange plot is the reconstruction for the one-bin analysis and the blue plot is the reconstruction for the four-bin analysis.} \label{fig:compare_mchirp} \end{figure*} \section{Conclusion} \label{sec:conclusion} In the population analysis of merging binary black holes, the cosmological principle is not explicitly imposed. In this article, we showed that by creating multiple bins in redshift and counting the number of observations in each bin we can loosely demand the isotropy and homogeneity of the universe. We extended VAMANA, the flexible mixture-model framework, to include redshift as a population parameter. We applied our analysis to the publicly available data and showed that by imposing the cosmological principle we can substantially improve the inference on the redshift evolution of the merger rate. We can better discriminate between different distributions modeling the redshift evolution of the black hole merger rate. In particular, we plan to apply this increased discriminating power to probing the mass dependence of the redshift evolution of the merger rate, or the evolution of the mass distribution with redshift. Additionally, we suggest that the cosmological principle can be probed by comparing an analysis that imposes the isotropy and homogeneity of the universe with one that does not. \section*{Acknowledgement} This work was supported by the STFC grant ST/L000962/1. We are grateful for the computational resources provided by Cardiff University and funded by an STFC grant supporting UK Involvement in the Operation of Advanced LIGO. We are also grateful for computational resources provided by the Leonard E Parker Center for Gravitation, Cosmology, and Astrophysics at the University of Wisconsin-Milwaukee and supported by National Science Foundation Grants PHY-1626190 and PHY-1700765. This research has made use of data, software, and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org/), a service of LIGO Laboratory, the LIGO Scientific Collaboration, and the Virgo Collaboration.
LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain.
{ "timestamp": "2020-12-17T02:14:41", "yymm": "2012", "arxiv_id": "2012.08839", "language": "en", "url": "https://arxiv.org/abs/2012.08839" }
\section{Introduction and statement of results}\label{sec1} \setcounter{equation}{0} \par In this paper, we study the existence and concentration behavior of positive ground state solutions for the following critical Schr\"{o}dinger--Poisson system: \begin{equation}\label{11} \begin{cases} -\varepsilon^{2}\Delta u+h(x)\phi u+V(x)u=\sum\limits_{i=1}^{m} Q_i(x)|u|^{q_i-2}u+K(x)|u|^{4}u,&x\in\R^3,\\ -\varepsilon^{2}\Delta\phi=4\pi h(x)u^2,&x\in\R^3, \end{cases}\tag{$SP_\varepsilon$} \end{equation} where $\varepsilon>0$ is a small parameter, $4<q_1<q_2<\cdots<q_m<6$. $V(x)$ is the potential, $h(x)$ is the electronic potential, $Q_i(x)(i=1,\cdots,m)$ and $K(x)$ are weight potentials satisfying some competing conditions. Moreover, the nonlinear growth of $|u|^{4}u$ reaches the Sobolev critical exponent, since $2^*=6$ for three spatial dimensions, which is why we speak of a critical Schr\"{o}dinger--Poisson system in the title. \par Replacing the nonlinear term $\sum\limits_{i=1}^{m} Q_i(x)|u|^{q_i-2}u+K(x)|u|^{4}u$ by $f(x,u)$, \eqref{11} becomes the following system: \begin{equation}\label{112} \begin{cases} -\varepsilon^{2}\Delta u+h(x)\phi u+V(x)u=f(x,u),&x\in\R^3,\\ -\varepsilon^{2}\Delta\phi=4\pi h(x)u^2,&x\in\R^3, \end{cases} \end{equation} which is known as the nonlinear Schr\"{o}dinger--Maxwell system and has an interesting physical meaning, since it appears in quantum mechanics models \cite{RHE,IPL,ET} and in semiconductor theory \cite{VD2,PLL, PMCC}. In fact, systems like \eqref{112} were first proposed in \cite{VD} as a model describing the interaction between the electrostatic field and the solitary waves of the nonlinear Schr\"{o}dinger type equation \begin{equation}\label{111} i\hbar \frac{\partial\psi }{\partial t}=-\frac{\hbar^2}{2m}\Delta\psi+W(x)\psi-|\psi|^{p-2}\psi, \end{equation} where $i$ is the imaginary unit, $\hbar$ is the Planck constant, $m$ is the mass of the field $\psi$, and $W(x)$ is the time-independent potential of the particle at the position $x\in\R^3$. Then, looking for the standing waves of Eq. \eqref{111}, namely waves of the form $\psi(x,t)=u(x)e^{-i\omega t/\hbar},\hspace{1ex}x\in\R^3,\hspace{1ex}t\in\R,$ one is led to the system \eqref{112} with $\varepsilon^2=\frac{\hbar^2}{2m}$, $V(x)=W(x)-\omega$. For more details on the physical background, we refer the reader to \cite{VD,VD2} and the references therein. \par In recent years, the qualitative analysis of positive solutions for problem \eqref{112}, including the existence, nonexistence, concentration behavior, and multiplicity, has been widely investigated under various assumptions on the potentials. For more details, we refer the readers to previous studies \cite{APA,APA2,VD,AD,PMCC,LFZ,DR,DR2,HLR,HLR2,JLJF,JLJF2,MY,PHR,Cerami,Cerami2,TXH,CST,LHL,SJT,CGF}. In particular, if $\varepsilon=V(x)=h(x)=1$, Benci and Fortunato \cite{VD}, for the case $f(x,u)=0$, demonstrated the existence of infinitely many solutions of an eigenvalue problem on a bounded domain. Azzollini et al. \cite{APA}, when the nonlinearity satisfies Berestycki--Lions type assumptions, proved the existence of a nontrivial solution. The works \cite{AD,DR}, for the pure power nonlinearity $|u|^{p-2}u$ with $p\in(2,6)$, working in the subspace $H^1_r(\R^3)$ of radial functions of $H^1(\R^3)$, obtained multiple bound state solutions and existence and nonexistence results, respectively.
Zhao and Zhao \cite{LFZ}, for the case $f(x,u)= \mu Q(x)|u|^{q-2}u+K(x)|u|^4u$ with $q\in[4,6)$ and $\mu>0$, introduced the following assumption on the weight potential $K(x)$: \vskip2mm \par\noindent $(K_1)$ $|K(x)-K(x_0)|=o(|x-x_0|^{\alpha}),\hspace{1ex}\text{where}\hspace{1ex} 1\leq\alpha<3\hspace{1ex} \text{and}\hspace{1ex} K(x_0)=\max\limits_{x\in \R^3}K(x),$ \vskip2mm \par\noindent to estimate the critical energy level, and proved the existence of a positive solution based on the method of Brezis and Nirenberg and Lions' concentration-compactness principle. \par If the electronic potential $h(x)$ is not a constant, Cerami and Vaira \cite{Cerami}, for the case $f(x,u)=a(x)|u|^{p-1}u$ with $p\in(3,5)$ and without any symmetry assumptions, proved the existence of positive ground state and bound state solutions by using the Nehari manifold and establishing a compactness lemma. Huang et al. \cite{HLR}, for system \eqref{112} with $f(x,u)=a(x)|u|^{p-2}u+\mu K(x)u$, where $p\in (4,6)$, $K(x)\geq0$ and $a(x)$ can change sign, without any symmetry assumptions, proved the existence of at least two positive solutions via a comparison between $\mu$ and the first eigenvalue of $-\Delta+\mathrm{id}$. If $V(x)$ is not a constant, then $V(x)$ may fail to be radial, so one cannot work in $H^1_r(\R^3)$ directly. To overcome this difficulty, one may assume that $V(x)$ satisfies $$0<V_0=\inf\limits_{x\in\R^{3}}V(x)<V_\infty=\lim\limits_{|x|\to\infty}V(x),$$ a condition first introduced in \cite{PHR} (a simple model example is recorded below). Under this assumption, Azzollini et al. \cite{APA2}, for the case $f(x,u)=|u|^{p-1}u$, proved the existence of ground state solutions; they also studied the existence of solutions in the critical growth case via the concentration-compactness principle. By assuming that $ V(x)$, $a(x)$ and $h(x)$ satisfy some decay rates, Cerami and Molle \cite{Cerami2}, for the case $f(x,u)=a(x)|u|^{p-1}u$, proved the existence of a positive bound state solution via the Nehari manifold, which complements the results of \cite{Cerami} in some sense. \par When the parameter $\varepsilon\to0$, systems like \eqref{112} and their bound states are called semiclassical problems and semiclassical states, respectively. Semiclassical states can be used to describe a kind of transition between quantum mechanics and Newtonian mechanics. He and Zou \cite{hezou} proved the existence of ground state solutions in the critical growth case concentrating on the minima of $V(x)$. Ruiz \cite{DR2} showed that semiclassical states concentrate around a sphere when $V(x)$ satisfies some suitable assumptions. J. Wang et al. \cite{JLJF}, for the case $f(x,u)=b(x)g(u)+|u|^4u$, proved that there are two families of positive solutions concentrating on the maxima of $b(x)$ and the minima of $V(x)$, respectively. Yang \cite{MY}, for the case $f(x,u)=P(x)g(u)+Q(x)|u|^4u$ where $\min\limits_{x\in\R^3}V(x)>0$, $\inf\limits_{x\in\R^3}P(x)>0$ and $\min\limits_{x\in\R^3}Q(x)>0$, proved the existence of semiclassical solutions concentrating at a special set characterized by the potentials; in particular, if $\mathcal{V}\cap \mathcal{P}\cap\mathcal{Q}\neq\emptyset $, then the ground state solution of $-\Delta v +V_{min}v=P_{max}g(v)+Q_{max}|v|^4v$ is obtained, where $\mathcal{V}=\{y\in\R^3: V(y)=\min\limits_{x\in\R^3}V(x)\}$, $\mathcal{P}=\{y\in\R^3: P(y)=\max\limits_{x\in\R^3}P(x)\}$ and $\mathcal{Q}=\{y\in\R^3: Q(y)=\max\limits_{x\in\R^3}Q(x)\}$.
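\par As a simple model example of the condition from \cite{PHR} quoted above (given only for illustration, not taken from the cited works), one may consider $$V(x)=2-e^{-|x|^{2}},\qquad x\in\R^{3},$$ which is continuous and bounded, attains its infimum $V_0=V(0)=1>0$ at the origin, and satisfies $V(x)\to V_\infty=2$ as $|x|\to\infty$, so that $0<V_0<V_\infty$ holds; a constant potential, by contrast, fails the strict inequality, which is precisely the degenerate case this assumption excludes.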
As one can see, the interaction of several potentials and a nonlinearity with critical growth has a significant impact on the existence and concentration behavior of positive solutions for Schr\"{o}dinger--Poisson systems like \eqref{112}. However, existence and concentration results involving both a critical growth nonlinearity and more than four potentials have not previously been described. That is the main motivation of this work. \par Inspired by the facts mentioned above, the purpose of this paper is to study how the interaction of more than four potentials affects the existence and concentration of ground state solutions for the critical Schr\"{o}dinger--Poisson system \eqref{11} when $\varepsilon$ is small. It is quite natural to ask: can we obtain concentration results for the ground state solutions of \eqref{11}? If so, where do they concentrate? In the present paper, we give some answers to these questions. Furthermore, the aim of our work is twofold: (i) to study the existence of a positive ground state solution for \eqref{11} and its properties, such as concentration and exponential decay; (ii) to find some sufficient conditions for the nonexistence of a positive ground state solution. \par It is well known that system \eqref{11} can easily be transformed into a single Schr\"{o}dinger equation with a nonlocal term. In fact, as we shall see in Section 2, for every $u\in H^{1}(\R^3)$ and any fixed $\varepsilon>0$, applying the Lax--Milgram theorem, a unique $\phi_{u/\varepsilon}\in \mathcal{D}^{1,2}(\R^3)$ is obtained such that $ -\varepsilon^{2}\Delta\phi=4\pi h(x)u^2$, which, inserted into the first equation of \eqref{11}, gives \begin{equation}\label{1} -\varepsilon^{2}\Delta u+h(x)\phi_{u/\varepsilon} u+V(x)u=\sum\limits_{i=1}^{m} Q_i(x)|u|^{q_i-2}u+K(x)|u|^{4}u. \tag{$S_{\varepsilon}$} \end{equation} Hence, $v$ is a solution of \eqref{1} if and only if $(v, \phi_{v/\varepsilon})$ is a solution of \eqref{11}. For simplicity, in many cases we say that $v\in H^1(\R^3)$, instead of $(v, \phi_{v/\varepsilon})\in H^1(\R^3)\times \mathcal{D}^{1,2}(\R^3)$, is a weak solution of \eqref{11}. Then in the following we only need to study Eq. \eqref{1} (a quick formal check of the underlying scaling is recorded below). \par Before stating the main results of this paper, we introduce the precise assumptions on $h$, $V$, $K$, and $Q_i(i=1,2,\cdots,m)$: \vskip2mm \par\noindent $(f_1)$ $V(x),K(x),Q_i(x)(1\le i \le m)\in C^{1}(\R^{3},\R)$.\\ $(f_2)$ $(i-i_0)Q_i(x)\geq 0$ for $i \neq i_0$, while $Q_{i_0}(x)$ is allowed to change sign.\\ $(f_3)$ $0<V_0:=\inf\limits_{x\in\R^{3}}V(x)$, and $V(x)$ and $Q_i(x)(1\le i \le m)$ are bounded in $\R^3$.\\ $(f_4)$ $0\le K(x)\le K^{\infty }:=\limsup\limits_{|x|\to\infty }K(x)$, and there exists $x_0\in\R^{3}$ such that $K(x_0)=K^{\infty }$.\\ $(H)$ $h(x)\in C(\R^3,\R), 0< h_0=\inf\limits_{x\in\R^{3}}h(x)\le h(x)\le h_\infty:=\lim\limits_{|x|\to\infty }h(x)<\infty.$ \vskip2mm \par For every $s\in\R^3$, we consider the following equation with parameters: \begin{equation}\label{gs} -\Delta u+\left[\frac{1}{|x|}*u^2\right]h^2(s)u+V(s)u=\sum\limits_{i=1}^{m}Q_i(s)|u|^{q_i-2}u+K(s)|u|^{4}u,\quad x\in\R^3, \end{equation} where $s\in \R^3$ acts as a parameter instead of an independent variable. The ground energy function $G(s)$ is defined to be the ground energy associated with \eqref{gs}; it was first introduced in \cite{XB}. See Section \ref{scr} for more details.
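\par Let us record the quick formal check of the scaling behind the notation $\phi_{u/\varepsilon}$ (a sketch with $h\equiv1$ for clarity; the rigorous construction is carried out in Section \ref{sec2}). If $\phi_u$ denotes the Newtonian potential of $u^2$, i.e., $-\Delta\phi_u=4\pi u^2$, then the rescaled function $\varepsilon^{-2}\phi_u$ satisfies $$-\varepsilon^{2}\Delta\left(\varepsilon^{-2}\phi_u\right)=-\Delta\phi_u=4\pi u^2,$$ which is exactly the second equation of \eqref{11}; this explains the identity $\phi_{u/\varepsilon}=\varepsilon^{-2}\phi_u$ stated as \eqref{sp} below.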
Moreover, let $c_\infty$ be the ground energy associated with the ``limiting problem'' of \eqref{1}, which is given by \begin{equation}\label{limpro} -\Delta u+\left[\frac{1}{|x|}*u^2\right]h^2_{\infty}u+V_{\infty}u=\sum\limits_{i=1}^{m} Q_i^{\infty}|u|^{q_i-2}u+K^{\infty}|u|^{4}u,\quad x\in\R^3, \end{equation} where $ V_{\infty}:=\liminf\limits_{|x|\to\infty}V(x),\hspace{1ex} Q_i^{\infty}:=\limsup\limits_{|x|\to\infty}Q_i(x). $ \par We now state the main results of this work. \begin{theorem}\label{the1} Suppose that the potentials $V$, $K$, and $Q_i(1\le i\le m)$ satisfy conditions $(f_1)$--$(f_4)$, the electronic potential $h(x)\equiv1$, and \begin{equation}\label{14} c_\infty>c_0:=\inf\limits_{s\in \R^3}G(s). \end{equation} Then, for $\varepsilon>0$ small enough, \eqref{1} has a positive ground state solution $v_\varepsilon$. Moreover, we have \begin{tabular}{@{}rp{15.05cm}} \rm{(\romannumeral1)} & The positive ground state solution $v_\varepsilon $ possesses a maximum point $x_\varepsilon$ in $\R^3$ such that $\lim\limits_{\varepsilon\to 0}dist(x_\varepsilon,\mathcal{G})=0$. Setting $\eta_\varepsilon(x)=v_\varepsilon(\varepsilon x+x_\varepsilon)$, where $x_\varepsilon\to x_0$ as $\varepsilon\to0$, the rescaled function $\eta_\varepsilon$ converges in $H_\varepsilon$ to a positive ground state solution of\\ \rule[-15pt]{0pt}{35pt}&\multicolumn{1}{c}{$-\Delta u+\phi_u u+V(x_0)u=\sum\limits_{i=1}^{m} Q_i(x_0)|u|^{q_i-2}u+K(x_0)|u|^{4}u,\hspace{1ex}x\in\R^3,$}\\ \rule[0pt]{0pt}{0pt}& where $\mathcal{G} :=\{s\in\R^3;G(s)=c_0\}.$\\ \rm{(\romannumeral2)} &There exist constants $C>0$ and $\mu >0$ such that\\ \rule[20pt]{0pt}{0pt} &\multicolumn{1}{c}{$v_\varepsilon(x)\le C \exp\left({-\frac{\mu}{\varepsilon}|x-x_\varepsilon|}\right)$.}\\ \end{tabular} \end{theorem} \begin{remark}\label{rem1} \rm{}The existence of a solution can be obtained if the electronic potential $h$ satisfies condition $(H)$. However, the concentration analysis is then more complicated, since the nonlocal term $$\int_{ \R^3}h(\varepsilon x)\left[\frac{1}{|x|}*\left(h(\varepsilon x)u^2\right)\right] u^2dx= \int_{\R^3}\int_{\R^3}h(\varepsilon x)u^2(x)\frac{h(\varepsilon y)u^2(y)}{|x-y|}dydx$$ involves both $h(\varepsilon x)$ and $h(\varepsilon y)$. For the sake of simplicity, we assume that $h(x)\equiv1$. \end{remark} \begin{remark} \rm{}Conditions like $(f_1)$--$(f_4)$ on the nonlinear term were introduced in \cite{FHN}, where Fan obtained the existence of positive solutions of a Kirchhoff-type problem. Theorem \ref{the1} extends the main results of \cite{FHN} to Schr\"{o}dinger--Poisson systems. \end{remark} Now we explain our key idea for the proof of Theorem \ref{the1}. As we deal with problem \eqref{1} in $H^1(\R^3)$, the Sobolev embeddings $H^1(\R^3)\hookrightarrow L^q(\R^3)$, $q\in[2,6)$, are not compact, so the energy functional does not satisfy the $(PS)_c$ condition at every energy level $c$. To overcome this obstacle, we pull the energy level down below the critical energy level. Differently from the assumption $(K_1)$ on $K(x)$ and the methods in \cite{LFZ,HLR2}, we first prove that the least energy level defined in \eqref{lstene} is, in the limit $\varepsilon\to0$, no more than the infimum of $G(s)$. Then, under our conditions, we estimate the critical level for the infimum of $G(s)$ with the help of the technique of Brezis and Nirenberg \cite{Bre}; the elementary computation recorded below shows where the critical threshold comes from.
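\par The elementary computation behind the critical threshold $\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}$ used in Section \ref{scr} can be recorded here (a sketch; the rigorous estimate uses the truncated instanton $U_{\sigma,x_0}$ introduced there). For constants $A,B>0$, the function $g(t)=\frac{t^2}{2}A-\frac{t^6}{6}B$ attains its maximum at $t_*=(A/B)^{\frac{1}{4}}$, so that $$\sup\limits_{t>0}\left\{\frac{t^2}{2}A-\frac{t^6}{6}B\right\}=\frac{A}{2}\left(\frac{A}{B}\right)^{\frac{1}{2}}-\frac{A}{6}\left(\frac{A}{B}\right)^{\frac{1}{2}}=\frac{1}{3}A^{\frac{3}{2}}B^{-\frac{1}{2}}.$$ Taking $A=\int_{\R^3}|\nabla u|^2dx\approx S^{\frac{3}{2}}$ and $B=K^{\infty}\int_{\R^3}|u|^6dx\approx K^{\infty}S^{\frac{3}{2}}$, as for the extremals of the Sobolev embedding, yields precisely $\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}$, the level below which compactness can be restored.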
On the other hand, one can see that the methods used in \cite{hezou,JLJF,JLJF2, MY} to establish a concentration set for the ground state solutions of \eqref{1} do not work here, owing to the competing relationship among the multiple potentials. We succeed in doing so by introducing a new concentration set $\mathcal{G}$ for the ground state solutions, which does not rely on the condition \begin{equation}\label{15} \mathcal{M}=\bigcap_{i=1}^{m}\mathcal{Q}_i\cap\mathcal{V} \cap \mathcal{K}\neq\emptyset, \end{equation} where $\mathcal{V}=\{y\in\R^3: V(y)=\min\limits_{x\in\R^3}V(x)\}$, $\mathcal{Q}_i=\{y\in\R^3: Q_i(y)=\max\limits_{x\in\R^3}Q_i(x)\}$ and $\mathcal{K}=\{y\in\R^3: K(y)=\max\limits_{x\in\R^3}K(x)\}$. In particular, if condition \eqref{15} holds, it is easy to see that $\mathcal{M}=\mathcal{G}$. \par To overcome the difficulty mentioned in Remark \ref{rem1}, we assume that the electronic potential $h(x)$ satisfies \vskip2mm \par\noindent $(H_1)$ $h(x)\in C(\R^3,\R)$, $h(x)\geq0$, $\lim\limits_{|x|\to \infty }h(x)=0$ and $h(x)=0$ if $x\in \mathcal{G}.$ \vskip2mm \par Then we have the following result. \begin{theorem}\label{the2} Suppose that the potentials $V$, $K$, and $Q_i(1\le i\le m)$ satisfy conditions $(f_1)$--$(f_4)$, the electronic potential $h$ satisfies $(H_1)$, and \begin{equation*} c_\infty>c_0:=\inf\limits_{s\in \R^3}G(s). \end{equation*} Then, for $\varepsilon>0$ small enough, \eqref{1} has a positive ground state solution $v_\varepsilon$. Moreover, we have\\ \begin{tabular}{@{}rp{15.05cm}} \rm{(\romannumeral1)} & The positive ground state solution $v_\varepsilon $ possesses a maximum point $x_\varepsilon$ in $\R^3$ such that $\lim\limits_{\varepsilon\to 0}dist(x_\varepsilon,\mathcal{G})=0$. Setting $\eta_\varepsilon(x)=v_\varepsilon(\varepsilon x+x_\varepsilon)$, where $x_\varepsilon\to x_0$ as $\varepsilon\to0$, the rescaled function $\eta_\varepsilon$ converges in $H_\varepsilon$ to a positive ground state solution of\\ \rule[-15pt]{0pt}{35pt}&\multicolumn{1}{c}{$-\Delta u+V(x_0)u=\sum\limits_{i=1}^{m} Q_i(x_0)|u|^{q_i-2}u+K(x_0)|u|^{4}u,\hspace{1ex}x\in\R^3,$}\\ &where $\mathcal{G}:=\{s\in\R^3;G(s)=c_0\}.$\\ \rm{(\romannumeral2)} &There exist constants $C>0$ and $\mu >0$ such that\\ \rule[-8pt]{0pt}{25pt}&\multicolumn{1}{c}{$v_\varepsilon(x)\le C \exp\left({-\frac{\mu}{\varepsilon}|x-x_\varepsilon|}\right).$}\\ \end{tabular} \end{theorem} \begin{remark} \rm{}It is interesting to give some sufficient conditions, in terms of $V$, $K$ and $Q_i(i=1,2,\cdots,m)$, that guarantee \eqref{14}. For example, let us consider the following conditions:\\ \begin{tabular}{@{}rp{15.05cm}} \rm{(1)}&$V_\infty=\sup\limits_{x\in\R^3}V(x)$, $Q_i^\infty=\inf\limits_{x\in\R^3}Q_i(x)(i=1,2,\cdots,m)$, $K(x)\equiv C$.\\ \rm{(2)}& There exists $\hat{s}$ such that $$ V_\infty\geq V(\hat{s}),\hspace{1ex} Q_i^\infty \le Q_i(\hat{s})(i=1,2,\cdots,m),\hspace{1ex} K(x)\equiv C,$$ with one of the above inequalities being strict. \end{tabular} \end{remark} If $V$, $K$, and $Q_i(i=1,2,\cdots,m)$ are not all constants, then each of the previous conditions (1) and (2) guarantees $c_\infty>c_0$ (see \cite{SCM,XB}). \par Finally, to obtain the nonexistence of ground state solutions, we make the following assumption: \vskip2mm \par\noindent $(f_5)$ $V(x)\geq V_\infty =V_0$ and $Q_i(x)\le Q_i^{\infty}(1\le i \le m)$; moreover, $h(x)\equiv1$ or $h(x)$ satisfies $(H_1)$. \vskip2mm \par Our nonexistence result reads as follows. \begin{theorem}\label{the3} Suppose that conditions $(f_1)$--$(f_5)$ hold.
Then, for any $\varepsilon>0$, \eqref{1} has no ground state solution. \end{theorem} \par The remainder of this paper is organized as follows. In Section \ref{sec2}, we derive a variational setting for the problem and give some preliminaries. In Sections \ref{sec3} and \ref{sec4}, we prove the existence of positive ground state solutions for \eqref{22} together with some of their properties, such as concentration and exponential decay. The proofs of Theorems \ref{the1} and \ref{the2} are given in Section \ref{sec5}. Section \ref{sec6} is dedicated to the proof of Theorem \ref{the3}. \par Hereafter we use the following notations:\par $\bullet$ $H^1(\R^3)$ is the usual Sobolev space equipped with the inner product and norm $$(u,v)=\int_{\R^3}(\nabla u\cdot\nabla v+uv)dx;\hspace{2ex}||u||^2=\int_{ \R^3}(|\nabla u|^2+u^2)dx.$$\par $\bullet$ $\mathcal{D}^{1,2}(\R^3) $ is the completion of $C^{\infty}_0(\R^3)$ with respect to the norm $$||u||_{\mathcal{D}^{1,2}}^2=\int_{ \R^3}|\nabla u|^2dx.$$\par $\bullet$ $L^r(\Omega)$, $1\le r\le\infty$, $\Omega\subset\R^3$, denotes a Lebesgue space; its norm is denoted by\par \hspace{1.25ex} $|u|_{r,\Omega}$ when $\Omega$ is a proper subset of $\R^3$, and by $|u|_r$ when $\Omega=\R^3$.\par $\bullet$ Denote the best constants for the embeddings $H^1(\R^3)\hookrightarrow L^p(\R^3)(2\le p\le6)$ and\par \hspace{1.25ex} $\mathcal{D}^{1,2}(\R^3)\hookrightarrow$ $ L^6(\R^3)$ by $S_p$ and $S$, respectively. Then \begin{equation}\label{spine} |u|_p\le S_p^{-\frac{1}{2}}||u||\quad \forall u\in H^1(\R^3), \end{equation}\par \hspace{2ex} and \begin{equation} |u|_6\le S^{-\frac{1}{2}}||u||_{\mathcal{D}^{1,2}}\quad \forall u\in \mathcal{D}^{1,2}(\R^3). \end{equation}\par $\bullet$ For any $R>0$ and any $z\in\R^3$, $B_R(z)$ denotes the ball of radius $R$ centered at $z$.\par $\bullet$ $C$, $C_k(k=0,1,\cdots ,n+1)$ denote various positive constants which may differ from line to line.\par $\bullet$ $\rightarrow$ and $\rightharpoonup$ denote strong and weak convergence in the relevant function space, respectively.\par $\bullet$ $o_n(1)$ denotes any quantity which tends to zero as $n\to\infty$. \vskip4mm {\section{Preliminary results and variational framework}\label{sec2}} \setcounter{equation}{0} \vskip2mm In this section, we outline the variational framework for problem \eqref{11} and give some preliminary lemmas. From this section to Section \ref{subsec51}, we assume that the electronic potential $h\equiv1$. \par For every $u\in H^{1}(\R^3)$, we define the linear functional $L_u$ on $\mathcal{D}^{1,2}(\R^3)$ by \begin{equation} \begin{aligned} L_u(v)=\int_{ \R^3}u^2vdx. \end{aligned} \end{equation} It then follows from the Lax--Milgram theorem that there exists a unique $\phi_u\in\mathcal{D}^{1,2}(\R^3)$ such that \begin{equation}\label{pse} \begin{aligned} \int_{\R^3}\nabla \phi_u\nabla vdx=\int_{\R^3}u^2vdx,\hspace{1ex} \forall v\in\mathcal{D}^{1,2}(\R^3), \end{aligned} \end{equation} which is a weak solution of $- \Delta \phi=4\pi u^2$, and the following representation formula holds: \begin{equation*} \phi_u(x)=\int_{\R^3}\frac{u^{2}(y)}{|x-y|}dy=\frac{1}{|x|}*u^2. \end{equation*} Therefore, for any $\varepsilon>0$, we have \begin{equation}\label{sp} \begin{aligned} \phi_{u/\varepsilon}=\frac{1}{\varepsilon^2}\int_{\R^3}\frac{u^{2}(y)}{|x-y|}dy=\varepsilon^{-2}\phi_u. \end{aligned} \end{equation} \par Let us define the operator $\Phi:H^{1}(\R^3)\to\mathcal{D}^{1,2}(\R^3)$ as \begin{equation*} \Phi[u]=\phi_u.
\end{equation*} We next state some properties of $\Phi$, which will be useful in the following. \begin{lemma}\label{L21}\quad \vspace{0.5ex}\\ \begin{tabular}{@{}rl} \rm{(\romannumeral1)} &$\Phi$ is continuous;\\ \rm{(\romannumeral2)} &$\Phi$ maps bounded sets into bounded sets;\\ \rm{(\romannumeral3)} &$\Phi(tu) = t^{2}\Phi(u)$;\\ \rm{(\romannumeral4)} &$||\phi_{u}||_{\mathcal{D}^{1,2}}\le S^{-\frac{1}{2}}|u|_{12/5}^{2}$.\vspace{0.5ex}\\ \end{tabular} \end{lemma} \begin{proof} The proofs of (i) and (ii) can be found in \cite{Cerami}, and (iii) is clear from the definition of $\phi_u$. For (iv), replacing $v$ by $\phi_{u}$ in \eqref{pse} and using the H\"{o}lder inequality, we have $$||\phi_{u}||_{\mathcal{D}^{1,2}}^2=\int_{\R^3}\phi_{u}u^2 dx\le |\phi_{u}|_6 |u|_{12/5}^{2}\le S^{-\frac{1}{2}}||\phi_{u}||_{\mathcal{D}^{1,2}} |u|_{12/5}^{2},$$ and then (iv) holds. \end{proof} \begin{lemma}\label{L22} Assume that $u_n\rightharpoonup u$ in $H^{1}(\R^3)$. Then\vspace{0.5ex}\\ \begin{tabular}{@{}rl} \rm{(\romannumeral1)} &$\Phi(u_n)\to\Phi(u)$ in $\mathcal{D}^{1,2}(\R^3)$;\vspace{0.5ex}\\ \rm{(\romannumeral2)} &$\int_{ \R^3}\phi_{u_n}u_n^2 dx\to\int_{ \R^3}\phi_{u}u^2 dx$;\vspace{0.5ex}\\ \rm{(\romannumeral3)} &$\int_{ {\R^3}} \phi_{u_n}u_n\varphi dx\to\int_{ \R^3}\phi_{u}u\varphi dx$,\hspace{1ex} $\forall \varphi \in H^{1}( \R^3)$. \end{tabular} \end{lemma} \begin{proof} We borrow an idea from Cerami et al. \cite{Cerami2} to prove this lemma. \par (i) By the definition of $\Phi$ and $L_u$, we have $$||\Phi[u]||_{\mathcal{D}^{1,2}} = ||\phi_{u}||_{\mathcal{D}^{1,2}}=||L_u||_{\mathcal{L}(\mathcal{D}^{1,2},\R)},$$ so to prove (i) it is enough to show that $$||L_{u_n}-L_u||_{\mathcal{L}(\mathcal{D}^{1,2},\R)}\to 0, \quad n\to \infty.$$ For any $v\in \mathcal{D}^{1,2}(\R^3)$ we have $v\in L^{6}(\R^3)$; thus, for any $\sigma >0$, there exists $R>0$ large enough such that $|v|_{6, B_{R}^C(0)}<\sigma$. Hence, for all $v\in \mathcal{D}^{1,2}(\R^3)$, we obtain \begin{equation}\label{L221} \begin{aligned} 0\le|L_{u_n}(v)-L_u (v)|&=\left|\int_{\R^3}(u_n^2 -u^2)vdx\right|\\ &\le \int_{B_{R}^C(0)}|u_n^2 -u^2||v|dx+\int_{B_{R}(0)}|u_n^2 -u^2||v|dx\\ &\le|v|_{6, B_{R}^C(0)}|u_n -u|_{\frac{12}{5}}|u_n+u|_{\frac{12}{5}}+\int_{B_{R}(0)}|u_n^2 -u^2||v|dx\\ &\le C\sigma +\int_{B_{R}(0)}|u_n^2 -u^2||v|dx. \end{aligned} \end{equation} Furthermore, passing if necessary to a subsequence, we may assume that $u_n\to u$ in $L_{loc}^{\frac{12}{5}}(\R^3)$. Hence, we have \begin{equation}\label{L222} \int_{B_{R}(0)}|u_n^2 -u^2||v|dx\le\left( \int_{{B_R}(0)}|u_n -u|^{\frac{12}{5}}dx\right)^{\frac{5}{12}}\left( \int_{B_{R}(0)}|u_n +u|^{\frac{12}{5}}dx\right)^{\frac{5}{12}} |v|_6 =o_n(1). \end{equation} Combining \eqref{L221} and \eqref{L222}, the desired conclusion is obtained. \par (ii) First, replacing $v$ by $\phi_{u_n}$ and repeating the argument used in the proof of (i), we obtain $\int_{ \R^3}\phi_{u_n}(u_n^2 -u^2) dx\to 0$ as $n\to\infty$. On the other hand, we observe that $$\int_{ \R^3}(\phi_{u_n}-\phi_{u})u^2dx\le|u|_{\frac{12}{5}}^2|\phi_{u_n}-\phi_{u}|_6=o_n(1),$$ since the embedding $\mathcal{D}^{1,2}(\R^3)\hookrightarrow L^6(\R^3)$ is continuous and, by (i), $\Phi(u_n)\to\Phi(u)$ in $\mathcal{D}^{1,2}(\R^3)$. Hence, we obtain \begin{equation*} \left|\int_{ \R^3}(\phi_{u_n}u_n^2 -\phi_{u}u^2) dx\right|\le\left|\int_{ \R^3}\phi_{u_n}(u_n^2 -u^2) dx\right|+\left|\int_{ \R^3}(\phi_{u_n}-\phi_{u})u^2dx\right|=o_n(1), \end{equation*} as desired.
\par (iii) To show (iii), we first prove that \begin{equation}\label{26} \left|\int_{ \R^3}\phi_{u}(u_n -u)\varphi dx\right|=o_n(1). \end{equation} In fact, since $\varphi\in H^1(\R^3)$ and $\phi_{u}\in\mathcal{D}^{1,2}(\R^3)$, it follows from the H\"{o}lder inequality that $\phi_{u}\varphi\in L^2(\R^3)$. Since $u_n\rightharpoonup u$ in $H^{1}(\R^3)$, and hence in $L^2(\R^3)$, it is easy to see that \eqref{26} holds. On the other hand, we have \begin{equation}\label{27} \left|\int_{ \R^3}(\phi_{u_n}-\phi_{u})u_n \varphi dx\right|\le |u_n|_2|\phi_{u_n}-\phi_{u}|_6|\varphi|_3=o_n(1). \end{equation} Combining \eqref{26} and \eqref{27}, we get \begin{equation*} \begin{aligned} \left|\int_{ \R^3}(\phi_{u_n}u_n\varphi-\phi_{u}u\varphi)dx\right| &\le\left|\int_{ \R^3}\phi_{u}(u_n -u)\varphi dx\right|+\left|\int_{ \R^3}(\phi_{u_n}-\phi_{u})u_n \varphi dx\right|\\ &=o_n(1), \end{aligned} \end{equation*} as desired. \end{proof} \begin{remark} From Lemma \ref{L21} \text{(iv)} and \cite[Lemma 3.1]{Cerami}, we see that the functional $F$ given by \begin{equation*} F(u)=\int_{\R^3}\phi_{u}u^2 dx \end{equation*} is well-defined and $F\in C^{2}(H^1(\R^3),\R)$. Moreover, $F$ and its derivative $F'$ possess the BL-splitting property \cite[Lemma 2.2]{LFZ2}, which is similar to the Brezis--Lieb lemma \cite{Wi}. \end{remark} \par Substituting \eqref{sp} into \eqref{11} and making the change of variable $ x\mapsto \varepsilon x$, we can rewrite \eqref{1} as the following equivalent equation: \begin{equation}\label{22} -\Delta u+\phi_u u+V(\varepsilon x)u=\sum\limits_{i=1}^{m} Q_i(\varepsilon x)|u|^{q_i-2}u+K(\varepsilon x)|u|^{4}u,\hspace{1ex}x\in\R^3. \end{equation} Obviously, $v(x)$ is a solution of \eqref{1} if and only if $u(x)=v(\varepsilon x)$ is a solution of \eqref{22}. For any $\varepsilon >0$, we define the Hilbert space $H_{\varepsilon} =\{u\in H^{1}(\R^3):\int_{\R^3}V(\varepsilon x)|u|^2 dx<\infty\}$ equipped with the norm \begin{equation*} ||u||_\varepsilon ^2 = \int_{\R^3}(|\nabla u|^2 +V(\varepsilon x)|u|^2) dx . \end{equation*} Since $V(x)$ is positive and bounded on $\R^3$, we have $H_\varepsilon=H^1(\R^3)$, and the norm $||\cdot||_{\varepsilon}$ is equivalent to $||\cdot||$. At this point, under our assumptions it is standard to see that \eqref{22} is variational and that its solutions are the critical points of the functional $I_\varepsilon :H_{\varepsilon}\to \R$ given by \begin{equation} I_\varepsilon(u)=\frac{1}{2}||u||^2_\varepsilon +\frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx. \end{equation} Moreover, $I_{\varepsilon} \in C^{2}(H_{\varepsilon},\R)$. Next, we introduce the Nehari manifold associated to $I_\varepsilon$: \begin{equation*} \mathcal{N}_\varepsilon= \{u\in H_\varepsilon \backslash \{0\}:\langle I_{\varepsilon}'(u),u \rangle=0\}. \end{equation*} Clearly, $u\in \mathcal{N}_\varepsilon$ if and only if \begin{equation} ||u||^2_\varepsilon +\int_{ \R^3}\phi_{u}u^2 dx = \sum\limits_{i=1}^{m} \int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx + \int_{ \R^3} K(\varepsilon x)|u|^{6}dx. \end{equation} Moreover, $I_\varepsilon$ is bounded from below on $\mathcal{N}_\varepsilon$, so we can consider the following minimization problem: \begin{equation}\label{lstene} c_{\varepsilon}:=\inf\limits_{u\in \mathcal{N}_\varepsilon} I_{\varepsilon}(u). \end{equation} Now, we summarize some properties of $I_\varepsilon$ on $\mathcal{N}_\varepsilon$.
\begin{lemma}\label{L23} For any $u\in H_\varepsilon \backslash \{0\}$, the following statements hold true.\vspace{0.5ex}\\ \begin{tabular}{@{}rl} \rm{(\romannumeral1)} &There exists a unique $t_{\varepsilon}=t_{\varepsilon}(u)>0$ such that $t_{\varepsilon}u\in \mathcal{N}_\varepsilon$ and\\ \rule[-10pt]{0pt}{25pt}&\multicolumn{1}{c}{$I_{\varepsilon}(t_{\varepsilon}u):=\max\limits_{t\geq0} I_{\varepsilon}(tu)$.}\\ \rm{(\romannumeral2)} &There exist constants $0<\alpha_1 <\alpha_2$ independent of $\varepsilon >0$ such that $\alpha_1\le t_{\varepsilon}\le \alpha_2$.\\ \rm{(\romannumeral3)} &$I_\varepsilon$ is coercive and bounded from below on $\mathcal{N}_\varepsilon$. \\ \rm{(\romannumeral4)} &There exists $\kappa >0$ independent of $\varepsilon >0$ such that \\ \rule[-8pt]{0pt}{25pt}&\multicolumn{1}{c}{$||u||_\varepsilon\geq \kappa $,\quad $I_\varepsilon(u)\geq\frac{q_{i_0}-2}{2q_{i_0}}\kappa^2$, \quad $\forall u\in \mathcal{N}_\varepsilon$.}\\ \end{tabular} \end{lemma} \begin{proof} (i) For $t>0$, we set \begin{equation*} \begin{aligned} h(t):=I_\varepsilon(tu)=\frac{t^2}{2}||u||^2_\varepsilon +\frac{t^4}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m} \frac{t^{q_i}}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx-\frac{t^6}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx. \end{aligned} \end{equation*} From the Sobolev embedding inequalities \eqref{spine}, we obtain $$h(t)\geq \frac{t^2}{2}||u||^2_\varepsilon - \sum\limits_{i=1}^{m} \frac{t^{q_i}}{q_i}C_i||u||^{q_i}_\varepsilon-\frac{t^6}{6}C_{m+1}||u||^6_\varepsilon.$$ Since $4<q_i<6$, it is easy to see that $h(t)>0$ for small $t>0$. Moreover, it follows from Lemma \ref{L21} (iv) that $$h(t)\le\frac{t^2}{2}||u||^2_\varepsilon +\frac{t^4}{4}C||u||^4_\varepsilon +\sum\limits_{i=1}^{m}\frac{t^{q_i}}{q_i}C_i||u||^{q_i}_\varepsilon-\frac{t^6}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx \to-\infty,$$ as $t\to\infty$. Consequently, $\max\limits_{t\geq 0} h(t)$ is achieved at some $t_{\varepsilon}=t_{\varepsilon}(u)>0$; hence $h'(t_\varepsilon)=0$ and $t_{\varepsilon}u\in \mathcal{N}_\varepsilon$. \par Next, we show the uniqueness of $t_\varepsilon$. Arguing indirectly, suppose that there exist $0<t_1<t_2$ such that $t_1 u, t_2 u\in \mathcal{N}_\varepsilon$, and set $$ f(t):=-{t^2}\int_{ \R^3}\phi_{u}u^2 dx + \sum\limits_{i=1}^{m} {t^{q_i-2}}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx+{t^4}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx. $$ Taking account of $4<q_1<q_2<\cdots<q_m<6$ and $(f_2)$, $f(t)$ is strictly increasing on any interval where $f(t)>0$. Then, we deduce from $h'(t_1)=0$ and $h'(t_2)=0$ that $$f(t_1)=||u||_\varepsilon^2\hspace{1ex}\text{and}\hspace{1ex} f(t_2)=||u||_\varepsilon^2,$$ which is a contradiction. \par (ii) Since $t_\varepsilon u\in \mathcal{N}_\varepsilon$, we have $f(t_\varepsilon)=||u||_\varepsilon^2$, i.e., \begin{equation} -{t_\varepsilon^2}\int_{ \R^3}\phi_{u}u^2 dx + \sum\limits_{i=1}^{m} {t_\varepsilon^{q_i-2}}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx+{t_\varepsilon^4}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx=||u||_\varepsilon^2. \end{equation} It follows that \begin{equation}\label{213} ||u||_\varepsilon^2\le \sum\limits_{i=1}^{m} t_\varepsilon^{q_i-2}C_i||u||^{q_i}_\varepsilon+t_\varepsilon^4C_{m+1}||u||^6_\varepsilon, \end{equation} and $${t_\varepsilon^4}\int_{ \R^3} |u|^{6}dx\le ||u||_\varepsilon^2+ {t_\varepsilon^2}C||u||^4_\varepsilon +\sum\limits_{i=1}^{m}t_\varepsilon^{q_i-2}C_i||u||^{q_i}_\varepsilon.
$$ Then, it is easy to see that there exist constants $0<\alpha_1 <\alpha_2$ independent of $\varepsilon >0$ such that $\alpha_1\le t_{\varepsilon}\le \alpha_2$. \par (iii) For $u\in\mathcal{N}_\varepsilon$, we have \begin{equation}\label{bddlow} \begin{aligned} I_\varepsilon (u) = & \hspace{0.4em}I_\varepsilon (u)-\frac{1}{q_{i_0}}\langle I_{\varepsilon}'(u),u \rangle\\ =& \left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)||u||^2_\varepsilon +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{u}u^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K(\varepsilon x)|u|^{6}dx\\ \geq&\left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)||u||^2_\varepsilon>0, \end{aligned} \end{equation} which implies that $I_\varepsilon$ is coercive and bounded from below on $\mathcal{N}_\varepsilon$. \par (iv) The conclusion is immediate from (iii) and taking $t_\varepsilon=1$ in \eqref{213}. \end{proof} \begin{lemma}\label{L24} $I_\varepsilon$ has the mountain pass geometry structure.\\ \begin{tabular}{@{}rl} \rm{(\romannumeral1)} &There exist $\alpha, \rho>0$ independent of $\varepsilon$ such that $I_\varepsilon(u)\geq \alpha$ for $||u||_\varepsilon = \rho $;\\ \rm{(\romannumeral2)} &There exists an $e\in H_{\varepsilon}$ satisfying $||e||_\varepsilon \geq\rho $ such that $I_{\varepsilon}(e)<0$. \end{tabular} \end{lemma} \begin{proof} (i) For any $u\in H_\varepsilon \backslash\{0\}$, we deduce from the Sobolev embedding inequalities \eqref{spine} that \begin{equation*} \begin{aligned} I_\varepsilon(u)&=\frac{1}{2}||u||^2_\varepsilon +\frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m}\frac{1}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx\\ &\geq \frac{1}{2}||u||^2_\varepsilon - \sum\limits_{i=1}^{m} \frac{1}{q_i}C_i||u||^{q_i}_\varepsilon-\frac{1}{6}C_{m+1}||u||^6_\varepsilon. \end{aligned} \end{equation*} Choosing $\rho>0$ small enough, we get $I_\varepsilon(u)\geq \alpha>0$ for $||u||_\varepsilon=\rho$. \par (ii) For any $u\in H_\varepsilon \backslash\{0\}$, we have \begin{equation*} \begin{aligned} I_\varepsilon(tu)&=\frac{t^2}{2}||u||^2_\varepsilon +\frac{t^4}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m} \frac{t^{q_i}}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx-\frac{t^6}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx\\ &\le\frac{t^2}{2}||u||^2_\varepsilon +\frac{t^4}{4}C||u||^4_\varepsilon -\sum\limits_{i=1}^{m}\frac{t^{q_i}}{q_i}C_i||u||^{q_i}_\varepsilon-\frac{t^6}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx. \end{aligned} \end{equation*} Since $4<q_i<6$ and $K(x)\geq 0$, there exists $e:=t'u$ for some $t'>0$ large enough such that $||e||_\varepsilon \geq\rho $ and $I_{\varepsilon}(e)<0$. \end{proof} \begin{lemma}\label{L25} For any $\varepsilon>0$, we can define \begin{equation*} c_{\varepsilon}^{*}:=\inf\limits_{u\in H_\varepsilon \backslash \{0\}}\max\limits_{t\geq 0} I_{\varepsilon}(tu),\quad c_{\varepsilon}^{**}:=\inf\limits_{\gamma \in \Gamma }\sup\limits_{t\in[0,1]} I_{\varepsilon}(\gamma(t)), \end{equation*} where $\Gamma=\{\gamma\in C([0,1],H_\varepsilon):\gamma(0)=0,\ I_\varepsilon(\gamma(1))<0\}.$\\ Then, \begin{equation}\label{3eq} c_{\varepsilon}=c_{\varepsilon}^{*}=c_{\varepsilon}^{**}.
\end{equation} \end{lemma} \begin{proof} By Lemma \ref{L23} (i), we have $$c_{\varepsilon}=\inf\limits_{u\in \mathcal{N}_\varepsilon}I_\varepsilon(u)=\inf\limits_{u\in H_\varepsilon \backslash \{0\}}I_\varepsilon(t_\varepsilon u)=\inf\limits_{u\in H_\varepsilon \backslash \{0\}}\max\limits_{t\geq 0}I_\varepsilon(tu)=c_{\varepsilon}^{*}.$$ Moreover, by Lemma \ref{L24} (ii), for any $u\in H_\varepsilon \backslash \{0\}$ there exists $k>0$ large enough such that $I_\varepsilon(ku)<0$. Set $\beta(t)=tku$, $t\in[0,1]$; then $\beta\in\Gamma$, since $\beta(0)=0$, $\beta(1)=ku$ and $I_\varepsilon(\beta(1))<0$. Thus, $$\max\limits_{t\geq 0} I_{\varepsilon}(tu)=\sup\limits_{t\in[0,1]} I_{\varepsilon}(tku)=\sup\limits_{t\in[0,1]} I_{\varepsilon}(\beta(t))\geq c_{\varepsilon}^{**}.$$ It follows that $c_{\varepsilon}^{*}\geq c_{\varepsilon}^{**}$. On the other hand, the Nehari manifold $\mathcal{N}_\varepsilon$ separates $H_\varepsilon$ into two components $$H_\varepsilon^+ =\{u\in H_\varepsilon:\langle I_{\varepsilon}'(u),u \rangle >0\}\cup\{0\}$$ and $$H_\varepsilon^- =\{u\in H_\varepsilon:\langle I_{\varepsilon}'(u),u \rangle <0\}.$$ It follows from \eqref{bddlow} that $I_\varepsilon(u)\geq 0$ for $u\in H_\varepsilon^+$, while $\frac{1}{q_{i_0}}\langle I_{\varepsilon}'(\gamma(1)),\gamma(1)\rangle\leq I_\varepsilon(\gamma(1))<0$. Thus any $\gamma\in\Gamma$ has to cross $\mathcal{N}_\varepsilon$, since $ \gamma(0)\in H_\varepsilon^+ $ and $\gamma(1)\in H_\varepsilon^- $, and so $c_{\varepsilon}^{**}\geq c_{\varepsilon}$. The proof is complete. \end{proof} In order to study \eqref{22}, we need some results about the autonomous problem associated with \eqref{22}. For $a>0$, $b_{i_0}>0$, and $(j-i_0)b_j>0$ $(j=1,2,\cdots,m+1;\ j\neq i_0)$, consider the autonomous problem: \begin{equation}\label{ap} -\Delta u+\phi_u u+au=\sum\limits_{i=1}^{m} b_i|u|^{q_i-2}u+b_{m+1}|u|^{4}u,\quad x\in\R^3. \end{equation} The associated energy functional is \begin{equation*} \begin{aligned} I_{ab}(u)=\frac{1}{2}||u||^2_{a} + \frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m} \frac{b_i}{q_i}\int_{ \R^3}|u|^{q_i}dx-\frac{b_{m+1}}{6}\int_{ \R^3}|u|^{6}dx, \end{aligned} \end{equation*} where $||u||^2_{a}=\int_{\R^3}(|\nabla u|^2 +a|u|^2) dx$. By Lemma \ref{L25}, we have \begin{equation}\label{217} c_{ab}:=\inf\limits_{u\in \mathcal{N}_{ab}} I_{ab}(u)=\inf\limits_{u\in H_\varepsilon \backslash \{0\}}\max\limits_{t\geq 0} I_{ab}(tu)=\inf\limits_{\gamma \in \Gamma_{ab} }\sup\limits_{t\in[0,1]} I_{ab}(\gamma(t)), \end{equation} where $\mathcal{N}_{ab}= \{u\in H_\varepsilon \backslash \{0\}:\langle I_{ab}'(u),u \rangle=0\}$ and $\Gamma_{ab}=\{\gamma\in C([0,1],H_\varepsilon):\gamma(0)=0,\ I_{ab}(\gamma(1))<0\}.$ \begin{lemma}\label{L27} Problem \eqref{ap} has at least one positive ground state solution in $H_\varepsilon$. \end{lemma} \begin{proof} The idea of the proof is sketched as follows. First, similarly to Lemma \ref{L24}, one shows that $I_{ab}$ satisfies the mountain pass geometry structure; moreover, similarly to \eqref{bddlow}, the (PS) sequence is bounded in $H_\varepsilon$. Second, with the help of the technique of Brezis and Nirenberg \cite{Bre}, one estimates the mountain pass critical level under our conditions for \eqref{ap}, which recovers the compactness of the (PS) sequence. Finally, it follows from Lions' concentration-compactness principle \cite{PL1,PL2} and some standard arguments that \eqref{ap} has a positive ground state solution.
For the details of the proof, we refer the reader to \cite{hezou,HLR2}. \end{proof} The following lemma describes a comparison between the mountain pass values for different parameters, which will play a very important role in obtaining the existence results. \begin{lemma}\label{L277} Let $a,\tilde{a} >0$, $b_{i_0}, \tilde{b}_{i_0}>0$, $(j-i_0)b_j>0$, $(j-i_0)\tilde{b}_j>0$ $(j=1,2,\cdots,m+1;\ j\neq i_0)$. If $$\min\{a-\tilde{a},\ \tilde{b}_j-b_j\}\geq0,$$ then $c_{ab}\geq c_{\tilde{a}\tilde{b}}$. In particular, if $\max\{a-\tilde{a},\ \tilde{b}_j-b_j\}>0$ also holds, then $c_{ab} > c_{\tilde{a}\tilde{b}}$. \end{lemma} \begin{proof} By Lemma \ref{L27}, we choose $u_{ab}$ to be a solution of problem \eqref{ap} such that $c_{ab}=I_{ab}(u_{ab})$; then $$c_{ab}=\max\limits_{t\geq 0}I_{ab}(tu_{ab}).$$ By arguments similar to those of Lemma \ref{L23} (i), there exists $t_1>0$ such that $t_1u_{ab}\in \mathcal{N}_{\tilde{a}\tilde{b}}$. Let $\tilde{u}_{\tilde{a}\tilde{b}}=t_1u_{ab}$, so that $I_{\tilde{a}\tilde{b}}(\tilde{u}_{\tilde{a}\tilde{b}})=\max\limits_{t\geq 0}I_{\tilde{a}\tilde{b}}(tu_{ab})$. Then, we see that $$\begin{aligned} c_{ab}=I_{ab}(u_{ab})\geq& I_{ab}(\tilde{u}_{\tilde{a}\tilde{b}})\\ =&I_{\tilde{a}\tilde{b}}(\tilde{u}_{\tilde{a}\tilde{b}})+\frac{a-\tilde{a}}{2}\int_{ \R^3}|\tilde{u}_{\tilde{a}\tilde{b}}|^{2}dx\\ &- \sum\limits_{i=1}^{m} \frac{b_i-\tilde{b}_i}{q_i}\int_{ \R^3}|\tilde{u}_{\tilde{a}\tilde{b}}|^{q_i}dx-\frac{b_{m+1}-\tilde{b}_{m+1}}{6}\int_{ \R^3}|\tilde{u}_{\tilde{a}\tilde{b}}|^{6}dx\\ \geq &c_{\tilde{a}\tilde{b}}. \end{aligned}$$ In particular, if $\max\{a-\tilde{a},\ \tilde{b}_j-b_j\}>0$ also holds, then the above inequality becomes strict, so $c_{ab} > c_{\tilde{a}\tilde{b}}$. The proof is complete. \end{proof} \vskip2mm {\section{The existence of a ground state solution}\label{scr}\label{sec3}} \setcounter{equation}{0} \vskip2mm In this section, we establish a compactness lemma for $I_\varepsilon$ and prove the existence of a ground state solution to \eqref{22}. First, we introduce the energy functional associated with \eqref{limpro}: \begin{equation*} I_\infty(u)=\frac{1}{2}||u||^2_{\mathcal{D}^{1,2}}+\frac{1}{2}\int_{ \R^3}V_{\infty}u^2 dx +\frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i^{\infty}|u|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K^{\infty}|u|^{6}dx. \end{equation*} Consider the following minimization problem: \begin{equation*} c_{\infty}:=\inf\limits_{u\in \mathcal{N}_\infty} I_{\infty}(u), \end{equation*} where $\mathcal{N}_\infty= \{u\in H_\varepsilon \backslash \{0\}:\langle I_{\infty}'(u),u \rangle=0\}.$ Moreover, we denote the energy functional for \eqref{gs} by \begin{equation} \begin{aligned} I^s(u)=&\frac{1}{2}||u||^2_{\mathcal{D}^{1,2}}+\frac{1}{2}\int_{ \R^3}V(s)u^2 dx +\frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx\\ &- \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i(s)|u|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(s)|u|^{6}dx. \end{aligned} \end{equation} Finally, we define the ground energy function $$G(s):=\inf\limits_{u\in \mathcal{N}^s} I^s(u),$$ where $\mathcal{N}^s= \{u\in H_\varepsilon \backslash \{0\}:\langle (I^s)'(u),u \rangle=0\}$, which was mentioned in Section \ref{sec1} and will play an important role in estimating the critical energy level. \begin{lemma}\label{Lc} For $s\in \R^3$, the ground energy function $G(s)$ is locally Lipschitz continuous. \end{lemma} The proof of Lemma \ref{Lc} is essentially similar to that of \cite[Lemmas 2.3 and 2.4]{XB}, so we omit it here. \begin{lemma}\label{vnsh}(\cite[Lemma 1.21]{Wi}) Let $r>0$ and let $\{u_n\}$ be bounded in $H^{1}(\R^N)$.
If $$\sup\limits_{y\in\R^N}\int_{B_r (y)}|u_n|^2\to 0,\quad n\to \infty,$$ then $u_n \to 0$ in $L^s(\R^N) $ for $2<s<2^*$. \end{lemma} To estimate the critical energy level for the critical Schr\"{o}dinger--Poisson system involving more than four potentials, we establish an important upper bound for the least energy $c_\varepsilon$ defined in \eqref{lstene} via the ground energy function $G(s)$. \begin{lemma}\label{cric1} There exists $C>0$ independent of $\varepsilon$ such that $c_\varepsilon\geq C$. Furthermore, \begin{equation}\label{32} \limsup\limits_{\varepsilon\to 0}c_\varepsilon\le c_0:=\inf\limits_{s\in \R^3}G(s). \end{equation} \end{lemma} \begin{proof} Taking $a=\inf\limits_{x\in\R^{3}}V(x)$, $b_i=\sup\limits_{x\in\R^{3}}Q_i(x)(1\le i\le m)$, and $b_{m+1}=\sup\limits_{x\in\R^{3}}K(x)$ in \eqref{217} and using Lemma \ref{L277}, we obtain $c_\varepsilon\geq c_{ab}>0$. Hence we only need to show \eqref{32}. It follows from $c_0<c_\infty$ and Lemma \ref{Lc} that there exists $s_0\in\R^3$ such that $G(s_0)=c_0$. Furthermore, by Lemma \ref{L27}, there exists $u_{s_0}\in \mathcal{N}^{s_0}$, i.e., \begin{equation}\label{tto1} \begin{aligned} &||u_{s_0}||^2_{\mathcal{D}^{1,2}}+ \int_{ \R^3} V(s_0)|u_{s_0}|^{2}dx+\int_{ \R^3}\phi_{u_{s_0}}u_{s_0}^2 dx\\ = &\sum\limits_{i=1}^{m} \int_{ \R^3}Q_i (s_0)|u_{s_0}|^{q_i}dx + \int_{ \R^3} K(s_0)|u_{s_0}|^{6}dx, \end{aligned} \end{equation} such that $I^{s_0}(u_{s_0})=c_0$. Set $\omega_\varepsilon(x)=u_{s_0}\left(x-\frac{s_0}{\varepsilon}\right)$. From Lemma \ref{L23} (i) and (ii), we know that there exists a unique bounded $t_\varepsilon>0$ such that $t_\varepsilon\omega_\varepsilon\in\mathcal{N}_\varepsilon$, i.e., $$t_\varepsilon||\omega_\varepsilon||^2_\varepsilon +t_\varepsilon^3\int_{ \R^3}\phi_{\omega_\varepsilon}\omega_\varepsilon^2 dx = \sum\limits_{i=1}^{m}t_\varepsilon^{q_i-1} \int_{ \R^3}Q_i (\varepsilon x)|\omega_\varepsilon|^{q_i}dx + t_\varepsilon^6 \int_{ \R^3} K(\varepsilon x)|\omega_\varepsilon|^{6}dx,$$ and so \begin{equation} \begin{aligned} &t_\varepsilon||u_{s_0}||^2_{\mathcal{D}^{1,2}} +t_\varepsilon\int_{ \R^3}V(\varepsilon x+s_0)|u_{s_0}|^{2}dx+t_\varepsilon^3\int_{ \R^3}\phi_{u_{s_0}}u_{s_0}^2 dx \\ = &\sum\limits_{i=1}^{m} t_\varepsilon^{q_i-1}\int_{ \R^3} Q_i (\varepsilon x+s_0)|u_{s_0}|^{q_i}dx + t_\varepsilon^6 \int_{ \R^3} K(\varepsilon x+s_0)|u_{s_0}|^{6}dx. \end{aligned} \end{equation} \par Since $V(x)$ is bounded, there exists a constant $M$ such that $|V(\varepsilon x+s_0)-V(s_0)|\le M$. From \eqref{spine}, we have $u_{s_0}\in L^p(\R^3)(2\le p\le6)$, so for any $\sigma>0$ there exists $R>0$ such that $|u_{s_0}|_{2,B_R^C(0)}<\frac{\sigma}{2M}$. Thus, we have \begin{equation*} \int_{\R^3\backslash B_{R}(0)}(V(\varepsilon x+s_0)-V(s_0))|u_{s_0}|^2dx<\frac{\sigma}{2}, \end{equation*} and obviously \begin{equation*} \int_{B_{R}(0)}(V(\varepsilon x+s_0)-V(s_0))|u_{s_0}|^2dx<\frac{\sigma}{2} \end{equation*} as $\varepsilon\to 0$, which implies that \begin{equation} \int_{ \R^3}V(\varepsilon x+s_0)|u_{s_0}|^2dx\to\int_{ \R^3}V(s_0)|u_{s_0}|^2dx, \end{equation} as $\varepsilon\to 0$. Similarly, we can obtain that \begin{equation} \int_{ \R^3}K(\varepsilon x+s_0)|u_{s_0}|^6dx\to\int_{ \R^3}K(s_0)|u_{s_0}|^6dx, \end{equation} and \begin{equation}\label{to1} \int_{ \R^3}Q_i(\varepsilon x+s_0)|u_{s_0}|^{q_i}dx\to\int_{ \R^3}Q_i(s_0)|u_{s_0}|^{q_i}dx\quad(1\le i \le m), \end{equation} as $\varepsilon\to 0$. From \eqref{tto1}--\eqref{to1}, we deduce that $t_\varepsilon\to 1$ as $\varepsilon\to 0$.
Now observe that \begin{equation} \begin{aligned} c_\varepsilon=&\inf\limits_{u\in \mathcal{N}_\varepsilon}I_\varepsilon(u)\\ \le& I_\varepsilon(t_\varepsilon \omega_\varepsilon)\\ =&I^{s_0}(t_\varepsilon \omega_\varepsilon)+\frac{t_\varepsilon^2}{2}\int_{\R^3}(V(\varepsilon x)-V(s_0))| \omega_\varepsilon|^2dx\\ &-\sum\limits_{i=1}^{m}\frac{t_\varepsilon^{q_i}}{q_i} \int_{ \R^3} (Q_i (\varepsilon x)-Q_i(s_0))|\omega_\varepsilon|^{q_i}dx -\frac{t_\varepsilon^6}{6} \int_{ \R^3} (K(\varepsilon x)-K(s_0))|\omega_\varepsilon|^{6}dx\\ =&I^{s_0}(t_\varepsilon u_{s_0})+\frac{t_\varepsilon^2}{2}\int_{\R^3}(V(\varepsilon x+s_0)-V(s_0))| u_{s_0}|^2dx\\ &-\sum\limits_{i=1}^{m}\frac{t_\varepsilon^{q_i}}{q_i} \int_{ \R^3} (Q_i (\varepsilon x+s_0)-Q_i(s_0))|u_{s_0}|^{q_i}dx\\ &-\frac{t_\varepsilon^6}{6} \int_{ \R^3} (K(\varepsilon x+s_0)-K(s_0))|u_{s_0}|^{6}dx. \end{aligned} \end{equation} The property of $t_\varepsilon$ discussed above implies that $I^{s_0}(t_\varepsilon u_{s_0})\to I^{s_0}( u_{s_0})=c_0$ as $\varepsilon\to 0$, and the desired conclusion follows. \end{proof} For any $\sigma>0$, we consider the function $u_{\sigma,x_0}$ defined by $$u_{\sigma,x_0}(x)=\frac{(3\sigma^2)^\frac{1}{4}}{(\sigma^2+|x-x_0|^2)^{\frac{1}{2}}},$$ which is a solution of the critical problem $-\Delta u=u^5$ in $\R^3$ (see \cite{Wi,ms}), where $x_0$ is given in condition $(f_4)$. Let $U_{\sigma,x_0}=\xi(x-x_0)u_{\sigma,x_0}$, where $\xi(x)\in C^{\infty}_0(B_{2R}(0),[0,1])$ satisfies $\xi(x)\equiv 1$ on $B_R(0)$ and $R$ is a positive constant. As in \cite{Bre,ms}, the following estimates hold: \begin{equation} \int_{ \R^3} |\nabla U_{\sigma,x_0}|^2dx=S^{\frac{3}{2}}+O(\sigma)\hspace{1ex}\text{and} \hspace{1ex}\int_{ \R^3} |U_{\sigma,x_0}|^6dx=S^{\frac{3}{2}}+O(\sigma^3), \end{equation} and for any $t\in[2,6)$, \begin{equation}\label{cutest} |U_{\sigma,x_0}|^t_t=\begin{cases} O(\sigma^{\frac{t}{2}}),& t\in[2,3),\\ O(\sigma^{\frac{3}{2}}|\ln\sigma|),& t=3, \\ O(\sigma^{\frac{6-t}{2}}),&t\in(3,6). \end{cases} \end{equation} \par In the following lemma, we compare the minimum level of the ground energy function with a suitable number involving the best Sobolev embedding constant $S$. \begin{lemma}\label{cric2} $c_{0}=\inf\limits_{s\in\R^3}G(s)<\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}$. \end{lemma} \begin{proof} By condition $(f_4)$, there exists $x_0\in\R^3$ such that $K(x_0)=K^\infty$. We consider the equation $$-\Delta u+\phi_u u+V(x_0)u=\sum\limits_{i=1}^{m} Q_i(x_0)|u|^{q_i-2}u+K(x_0)|u|^{4}u,\quad x\in\R^3, $$ with the corresponding energy functional \begin{equation} \begin{aligned} I^{x_0}(u)=&\frac{1}{2}||u||^2_{\mathcal{D}^{1,2}}+\frac{1}{2}\int_{ \R^3}V(x_0)u^2 dx +\frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx\\ &- \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i(x_0)|u|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(x_0)|u|^{6}dx. \end{aligned} \end{equation} Moreover, by Lemma \ref{L27}, there exists $u_{x_0}\in \mathcal{N}^{x_0}$ such that $G(x_0)=I^{x_0}(u_{x_0})$. Taking account of Lemma \ref{L23} (i), (ii) and \eqref{3eq}, we have $G(x_0)\le\max\limits_{t\geq 0}I^{x_0}(tU_{\sigma,x_0}),$ and there exists a unique bounded $t_\sigma>0$ such that $I^{x_0}(t_\sigma U_{\sigma,x_0})=\max\limits_{t\geq 0}I^{x_0}(tU_{\sigma,x_0})$.
By Lemma \ref{L21} and \eqref{cutest}, we obtain \begin{equation} \begin{aligned} I^{x_0}(t_\sigma U_{\sigma,x_0})=&\frac{t_\sigma^2}{2}||U_{\sigma,x_0}||^2_{\mathcal{D}^{1,2}}+\frac{t_\sigma^2}{2}\int_{ \R^3}V(x_0)U_{\sigma,x_0}^2 dx+\frac{t_\sigma^4}{4}\int_{ \R^3}\phi_{ _{U_{\sigma,x_0}}}U_{\sigma,x_0}^2 dx\\ &- \sum\limits_{i=1}^{m} \frac{t_\sigma^{q_i}}{q_i}\int_{ \R^3}Q_i(x_0)|U_{\sigma,x_0}|^{q_i}dx-\frac{t_\sigma^6}{6}\int_{ \R^3} K(x_0)|U_{\sigma,x_0}|^{6}dx\\ \le&\frac{t_\sigma^2}{2}\int_{ \R^3}\left(|\nabla U_{\sigma,x_0}|^2+V(x_0)U_{\sigma,x_0}^2\right) dx+\frac{t_\sigma^4}{4S}\left(\int_{ \R^3}|U_{\sigma,x_0}|^{\frac{12}{5}} dx\right)^{\frac{5}{3}}\\ &- \sum\limits_{i=1}^{m} \frac{t_\sigma^{q_i}}{q_i}\int_{ \R^3}Q_i(x_0)|U_{\sigma,x_0}|^{q_i}dx-\frac{t_\sigma^6}{6}\int_{ \R^3} K(x_0)|U_{\sigma,x_0}|^{6}dx\\ \le&\sup\limits_{t_\sigma>0}\left\{\frac{t_\sigma^2}{2}\int_{ \R^3}|\nabla U_{\sigma,x_0}|^2dx -\frac{t_\sigma^6}{6}\int_{ \R^3} K(x_0)|U_{\sigma,x_0}|^{6}dx\right\}\\ &- \sum\limits_{i=1}^{m} \frac{t_\sigma^{q_i}}{q_i}\int_{ \R^3}Q_i(x_0)|U_{\sigma,x_0}|^{q_i}dx+CO(\sigma)+CS^{-1}O(\sigma^2). \end{aligned} \end{equation} Using condition $(f_2)$ and $4<q_1<q_2<\cdots<q_m<6$, one has \begin{equation} \begin{aligned} I^{x_0}(t_\sigma U_{\sigma,x_0})\le&\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}+O(\sigma)+CO(\sigma)+CS^{-1}O(\sigma^2)\\ &+\sum\limits_{i=1}^{i_0}C_iO(\sigma^\frac{6-q_i}{2})-\sum\limits_{i=i_0+1}^{m}C_iO(\sigma^\frac{6-q_i}{2})\\ <&\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}, \end{aligned} \end{equation} for $\sigma>0$ small enough. Consequently, we deduce from the definition of $c_0$ that $$c_0\le G(x_0)\le I^{x_0}(t_\sigma U_{\sigma,x_0})<\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}.$$ The desired conclusion is obtained. \end{proof} \begin{remark} By Lemma \ref{cric1} and Lemma \ref{cric2}, we see that $c_\varepsilon<\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}$ for $\varepsilon>0$ small enough. Moreover, it follows from Lemma \ref{cric1} and \eqref{14} that $c_\varepsilon<c_\infty$. \end{remark} \begin{lemma}\label{psbdd} If $\{u_n\}\subset H_\varepsilon$ is a $(PS)_c$ sequence for $I_\varepsilon$, then $\{u_n\}$ is bounded in $H_\varepsilon$. \end{lemma} \begin{proof} Let $\{u_n\}\subset H_\varepsilon$ be a $(PS)_c$ sequence for $I_\varepsilon$. For $n$ large enough, we have \begin{equation} \begin{aligned} c+1+||u_n||_\varepsilon \geq & \hspace{0.4em}I_\varepsilon (u_n)-\frac{1}{q_{i_0}}\langle I_{\varepsilon}'(u_n),u_n \rangle\\ =&\left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)||u_n||^2_\varepsilon +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{u_n}u_n^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i (\varepsilon x)|u_n|^{q_i}dx\\ &+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i (\varepsilon x)|u_n|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K(\varepsilon x)|u_n|^{6}dx\\ \geq&\left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)||u_n||^2_\varepsilon. \end{aligned} \end{equation} It follows that $\{u_n\}$ is bounded in $H_\varepsilon $. \end{proof} \begin{lemma}\label{2chose1} Let $\{u_n\}\subset H_\varepsilon$ be a $(PS)_c$ sequence for $I_\varepsilon$ with $0<c<\min\left\{ c_\infty,\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}} \right \} $ and such that $u_n\rightharpoonup 0$, as $n\to\infty$.
Then, one of the following conclusions holds.\vspace{0.5ex}\\ \begin{tabular}{rl} \rm{(\romannumeral1)}& $u_n\to 0$ in $H_\varepsilon$, as $n\to\infty$;\\ \rm{(\romannumeral2)}& there exist a sequence $\{y_n\}\subset \R^3$ and constants $r,\beta>0$ such that \end{tabular} $$\liminf\limits_{n\to \infty }\int_{B_r (y_n)} u_n^2 dx\geq\beta>0.$$ \end{lemma} \begin{proof} Suppose that (ii) does not occur; then, for any $r>0$, $$\limsup\limits_{n\to\infty}\sup\limits_{y\in\R^3}\int_{B_r (y)} u_n^2 dx=0.$$ By Lemma \ref{vnsh}, we have $u_n \to 0$ in $L^s(\R^3) $ for $2<s<6$, as $n\to\infty$. Hence, we see that \begin{equation}\label{qto0} \int_{\R^3} Q_i (\varepsilon x)|u_n|^{q_i}dx\to 0, \end{equation} as $n\to\infty$. Moreover, from Lemma \ref{L21} (iv), we have \begin{equation} \int_{ \R^3}\phi_{u_n}u_n^2 dx\le S^{-1}|u_n|^4_{12/5}\to0, \end{equation} as $n\to\infty$. Recalling that $\langle I_{\varepsilon}'(u_n),u_n \rangle=o_n(1)$, we have $$||u_n||^2_\varepsilon=\int_{ \R^3} K(\varepsilon x)|u_n|^{6}dx+o_n(1).$$ It follows from Lemma \ref{psbdd} that $\{u_n\}$ is bounded in $H_\varepsilon$; up to a subsequence, we may assume that there exists $l\geq 0$ such that \begin{equation}\label{psto0} ||u_n||^2_\varepsilon\to l,\quad\int_{ \R^3} K(\varepsilon x)|u_n|^{6}dx\to l, \end{equation} as $n\to\infty$. If $l=0$, the proof is complete. If $l>0$, using \eqref{qto0}--\eqref{psto0} and $I_\varepsilon(u_n)=c+o_n(1)$, we get \begin{equation}\label{psc} \begin{aligned} c+o_n(1) = &\frac{1}{2}||u_n||^2_\varepsilon +\frac{1}{4}\int_{ \R^3}\phi_{u_n}u_n^2 dx\\ & -\sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u_n|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(\varepsilon x)|u_n|^{6}dx\\ = &\frac{1}{2}||u_n||^2_\varepsilon-\frac{1}{6}\int_{ \R^3} K(\varepsilon x)|u_n|^{6}dx+o_n(1)\\ = &\frac{1}{3}l+o_n(1). \end{aligned} \end{equation} By the Sobolev inequality and $(f_4)$, we have $$||u_n||^2_\varepsilon\geq\int_{ \R^3}|\nabla u_n|^{2}dx\geq S \left(\int_{\R^3} |u_n|^{6}dx\right)^{\frac{1}{3}}\geq S|K|^{-\frac{1}{3}}_{\infty}\left(\int_{\R^3}K(\varepsilon x) |u_n|^{6}dx\right)^{\frac{1}{3}}.$$ Taking the limit as $n\to\infty$ in the last inequality, we obtain \begin{equation}\label{lc} l\geq S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}. \end{equation} From \eqref{psc} and \eqref{lc}, we get $c\geq\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}$, which contradicts the assumption on $c$. Therefore, $l=0$ and the desired conclusion is obtained. \end{proof} \begin{lemma}\label{ps} Let $\{u_n\}\subset H_\varepsilon$ be a $(PS)_c$ sequence for $I_\varepsilon$ with $0<c<\min\left\{ c_\infty,\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}} \right \} $ and $u_n\rightharpoonup 0$ in $ H_\varepsilon$. Then $u_n\to 0$ in $ H_\varepsilon.$ \end{lemma} \begin{proof} Assume by contradiction that $u_n\nrightarrow 0$ in $H_\varepsilon$. From Lemma \ref{L23} (i) and (ii), there exists a positive bounded sequence $\{t_n\}$ such that $\{t_nu_n\}\subset\mathcal{N}_\infty$. We claim that $\limsup\limits_{n\to \infty }t_n\le1$. Arguing indirectly, suppose that there exist $\delta>0$ and a subsequence, still denoted by $\{t_n\} $, such that $t_n\geq1+\delta$ for all $n\in\mathbb{N}$. Since $\langle I_{\varepsilon}'(u_n),u_n \rangle=o_n(1)$, we get \begin{equation}\label{str0} \begin{aligned} &||u_n||^2_{\mathcal{D}^{1,2}} +\int_{ \R^3} V(\varepsilon x)|u_n|^{2}dx+\int_{ \R^3}\phi_{u_n}u_n^2 dx \\ = &\sum\limits_{i=1}^{m} \int_{ \R^3}Q_i (\varepsilon x)|u_n|^{q_i}dx + \int_{ \R^3} K(\varepsilon x)|u_n|^{6}dx+o_n(1).
\end{aligned} \end{equation} From $t_nu_n\in\mathcal{N}_\infty$, we have \begin{equation}\label{str01} \begin{aligned} &t_n^2||u_n||^2_{\mathcal{D}^{1,2}} +t_n^2\int_{ \R^3} V_\infty|u_n|^{2}dx+t_n^4\int_{ \R^3}\phi_{u_n}u_n^2 dx\\ =&\sum\limits_{i=1}^{m}t_n^{q_i} \int_{ \R^3}Q_i^\infty|u_n|^{q_i}dx + t_n^6 \int_{ \R^3} K^\infty|u_n|^{6}dx. \end{aligned} \end{equation} It follows from \eqref{str0} and \eqref{str01} that \begin{equation}\label{str02} \begin{aligned} &o_n(1)+\left(t_n^{2-q_{i_0}}-1\right)||u_n||^2_{\mathcal{D}^{1,2}} +\int_{ \R^3}\left( t_n^{2-q_{i_0}}V_\infty-V(\varepsilon x)\right)|u_n|^{2}dx\\ &+\left(t_n^{4-q_{i_0}}-1\right)\int_{ \R^3}\phi_{u_n}u_n^2 dx -\sum\limits_{i=1}^{i_0-1}\int_{ \R^3}\left( t_n^{q_i-q_{i_0}}Q_i^\infty-Q_i(\varepsilon x)\right)|u_n|^{q_i}dx\\ =&\sum\limits_{i=i_0}^{m}\int_{ \R^3}\left( t_n^{q_i-q_{i_0}}Q_i^\infty-Q_i(\varepsilon x)\right)|u_n|^{q_i}dx + \int_{ \R^3}\left(t_n^{6-q_{i_0}} K^\infty-K(\varepsilon x)\right)|u_n|^{6}dx. \end{aligned} \end{equation} By using condition $(f_2)$, $4<q_1<q_2<\cdots<q_m<6$, $t_n>1$, and the definitions of $ V_\infty$ and $Q_i^\infty$, for any $\sigma>0$ there exists $R=R(\sigma)>0$ such that \begin{equation}\label{case21} V(\varepsilon x)\geq V_\infty-\sigma>t_n^{2-q_{i_0}}V_\infty-\sigma \end{equation} and \begin{equation} t_n^{q_i-q_{i_0}}Q_i^\infty+\sigma>Q_i^\infty+\sigma\geq Q_i(\varepsilon x),\quad 1\le i\le i_0-1, \end{equation} and \begin{equation} t_n^{q_i-q_{i_0}}Q_i^\infty+\sigma>Q_i^\infty+\sigma\geq Q_i(\varepsilon x),\quad i_0\le i\le m, \end{equation} for any $|\varepsilon x|\geq R$. Moreover, it follows from $(f_4)$ that \begin{equation}\label{case22} t_n^{6-q_{i_0}}K^\infty\geq K^\infty> K(\varepsilon x), \end{equation} for any $x\in\R^3$. Since $u_n\rightharpoonup 0$ in $H_\varepsilon$, we get \begin{equation}\label{str03} u_n\to0\hspace{1ex}\text{in}\hspace{1ex}L^q_{loc}(\R^3),\hspace{1ex}q\in[2,6). \end{equation} Thus, noting that $\{u_n\}$ is bounded in $L^p(\R^3)(2\le p\le 6)$, we deduce from \eqref{str02}--\eqref{str03} that \begin{equation}\label{327} \int_{ \R^3}|u_n|^{6}dx<C\sigma. \end{equation} On the other hand, Lemma \ref{2chose1} shows that there exist a sequence $\{y_n\}\subset \R^3$ and constants $r,\beta>0$ such that \begin{equation}\label{str04} \liminf\limits_{n\to \infty }\int_{B_r (y_n)} u_n^2 dx\geq\beta>0. \end{equation} If we set $v_n(x)=u_n(x+y_n)$, then there exists a non-zero function $v(x)$ such that, up to a subsequence, $v_n\rightharpoonup v$ in $H_\varepsilon$, $v_n\to v$ in $L_{loc}^q(\R^3)$, $q\in[2,6)$, and $v_n\to v$ a.e. in $\R^3$. Moreover, by \eqref{str04}, there exists a subset $\Lambda\subset B_r(0)$ with positive measure such that $v\neq0$ a.e. in $\Lambda$. It then follows from Fatou's lemma that $$\liminf\limits_{n\to \infty }\int_{ \R^3}|u_n|^{6}dx=\liminf\limits_{n\to \infty }\int_{ \R^3}|v_n|^{6}dx\geq\int_{ \Lambda}|v|^{6}dx=:\beta_0>0,$$ which contradicts \eqref{327} for $\sigma$ small; hence the claim is true. \par We next divide the proof into two separate cases. \par Case 1: $\limsup\limits_{n\to \infty }t_n=1.$ In this case, there exists a subsequence, still denoted by $\{t_n\}$, such that $t_n\to1$ as $n\to\infty$.
Observe that \begin{equation} \begin{aligned} &I_\infty(t_nu_n)-I_\varepsilon(u_n)\\ =\hspace{1ex}&\frac{t_n^2-1}{2}||u_n||^2_{\mathcal{D}^{1,2}}+\frac{1}{2}\int_{ \R^3}\left( t_n^{2}V_\infty-V(\varepsilon x)\right)u_n^2 dx +\frac{t_n^4-1}{4}\int_{ \R^3}\phi_{u_n}u_n^2 dx\\ &- \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}\left( t_n^{q_i}Q_i^\infty-Q_i(\varepsilon x)\right)|u_n|^{q_i}dx-\frac{1}{6}\int_{ \R^3}\left( t_n^{6}K^\infty-K(\varepsilon x)\right)|u_n|^{6}dx. \end{aligned} \end{equation} Arguing as in the proof of the claim above, for $n$ large enough and any $\sigma>0$, there exists $R=R(\sigma)>0$ such that \begin{equation}\label{str1} V(\varepsilon x)\geq t_n^{2}V_\infty-\sigma,\hspace{1ex}t_n^{q_i}Q_i^\infty+\sigma\geq Q_i(\varepsilon x)\quad(1\le i\le m) \end{equation} for any $|\varepsilon x|\geq R$, and \begin{equation}\label{str2} t_n^{6}K^\infty\geq K(\varepsilon x), \end{equation} for any $x\in\R^3$. Thus, we deduce from \eqref{str03}, \eqref{str1} and \eqref{str2} that $$I_\infty(t_nu_n)-I_\varepsilon(u_n)\le o_n(1)+C\sigma.$$ Since $t_nu_n\in\mathcal{N}_\infty$, this yields $$ \begin{aligned}c+o_n(1)=I_\varepsilon(u_n)\geq& I_\infty(t_nu_n)-C\sigma-o_n(1)\\ \geq&c_\infty-C\sigma-o_n(1). \end{aligned}$$ Letting $n\to\infty$ and then $\sigma\to0$, we obtain $c\geq c_\infty$, which is a contradiction. \par Case 2: $\limsup\limits_{n\to \infty }t_n<1.$ In this case, we may suppose, without loss of generality, that $t_n<1$ for all $n\in \mathbb{N}$. From \eqref{case21}--\eqref{str03} and the boundedness of $\{u_n\}$ in $L^p(\R^3)(2\le p\le 6)$, for any $\sigma>0$ we have \begin{equation}\label{case23} \int_{ \R^3}\left( V_\infty-V(\varepsilon x)\right)u_n^2 dx\le C\sigma+o_n(1), \end{equation} \begin{equation} \int_{ \R^3}\left( Q_i^\infty-Q_i(\varepsilon x)\right)|u_n|^{q_i}dx\geq -C\sigma+o_n(1), \end{equation} \begin{equation}\label{case24} \int_{ \R^3}\left( K^\infty-K(\varepsilon x)\right)|u_n|^{6}dx\geq -C\sigma+o_n(1). \end{equation} Then, \eqref{case23}--\eqref{case24} imply that $$ \begin{aligned} c_\infty\le & I_\infty(t_nu_n)\\ = &I_\varepsilon(t_nu_n)+\frac{t_n^{2}}{2}\int_{ \R^3}\left( V_\infty-V(\varepsilon x)\right)u_n^2 dx \\ &-\sum\limits_{i=1}^{m} \frac{t_n^{q_i}}{q_i}\int_{ \R^3}\left( Q_i^\infty-Q_i(\varepsilon x)\right)|u_n|^{q_i}dx-\frac{t_n^{6}}{6}\int_{ \R^3}\left( K^\infty-K(\varepsilon x)\right)|u_n|^{6}dx\\ \le& I_\varepsilon(t_nu_n)+C\sigma +o_n(1)\\ \le& I_\varepsilon(u_n)+C\sigma+o_n(1)\\ =&c+C\sigma+o_n(1). \end{aligned}$$ Letting $\sigma\to0$ and $n\to\infty$, we obtain $c_\infty\le c$, a contradiction, which ends the proof. \end{proof} \begin{lemma}\label{pscc} $I_\varepsilon$ satisfies the $(PS)_c$ condition in $H_\varepsilon$ for $0<c<\min\left\{c_\infty, \frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}} \right\}$. \end{lemma} \begin{proof} Let $\{u_n\}\subset H_\varepsilon$ be a $(PS)_c$ sequence for $I_\varepsilon$ with $0<c<\min\left\{c_\infty, \frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}} \right\}$. By Lemma \ref{psbdd}, $\{u_n\}$ is bounded in $H_\varepsilon$. Then, there exists $u\in H_\varepsilon$ such that $u_n\rightharpoonup u$ in $H_\varepsilon$. By a standard argument, we get $I_\varepsilon'(u_n)\to I_\varepsilon'(u)=0$, i.e., $u$ is a critical point of $I_\varepsilon$. Let $\omega_n=u_n-u$. By Lemma \ref{L22} and the Brezis--Lieb lemma, it is not difficult to see that \begin{equation}\label{psc1} I_\varepsilon(\omega_n)=I_\varepsilon(u_n)-I_\varepsilon(u)+o_n(1)=c-I_\varepsilon(u)+o_n(1).
\end{equation} Under our conditions, we see that \begin{equation}\label{psc2} \begin{aligned} I_\varepsilon(u) = & \hspace{0.4em}I_\varepsilon (u)-\frac{1}{q_{i_0}}\langle I_{\varepsilon}'(u),u \rangle\\ =& \left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)||u||^2_\varepsilon +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{u}u^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K(\varepsilon x)|u|^{6}dx\\ \geq&\left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)||u||^2_\varepsilon\\ \geq&0. \end{aligned} \end{equation} Then, \eqref{psc1} and \eqref{psc2} imply that $$I_\varepsilon(\omega_n)\le c+o_n(1).$$ It follows from Lemma \ref{ps} that $\omega_n\to 0$ in $H_\varepsilon$; hence $u_n\to u$ in $H_\varepsilon$ and the proof is complete. \end{proof} \begin{lemma}\label{L39} Problem \eqref{22} has at least one positive ground state solution in $H_\varepsilon$. \end{lemma} \begin{proof} From Lemma \ref{L24}, we have that $I_{\varepsilon}$ satisfies the mountain pass geometry. Moreover, using a version of the mountain pass theorem without the $(PS)$ condition, there exists a $(PS)_{c_\varepsilon}$ sequence $\{u_n\}\subset H_\varepsilon$ for $I_\varepsilon$. Lemma \ref{cric1} and Lemma \ref{cric2} imply that $0<c_\varepsilon<\min\left\{c_\infty, \frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}} \right\}$ for $\varepsilon>0$ small enough. Therefore, with the aid of Lemma \ref{pscc}, we conclude that there exists $u_\varepsilon\in H_\varepsilon$ such that $$I_\varepsilon(u_\varepsilon)=c_\varepsilon,\hspace{1ex} \text{and} \hspace{1ex}I_\varepsilon'(u_\varepsilon)=0.$$ It follows from \eqref{3eq} that $u_\varepsilon$ is a ground state solution of \eqref{22}. Now denote $u_\varepsilon^{\pm}=\max\{\pm u_\varepsilon, 0\}$ and replace $I_\varepsilon$ by the functional $$ I^{+}_\varepsilon(u)=\frac{1}{2}||u||^2_\varepsilon +\frac{1}{4}\int_{ \R^3}\phi_{u}u^2 dx - \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u^+|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(\varepsilon x)|u^+|^{6}dx.$$ Repeating the above proof and calculations, we obtain $$0=\langle (I^{+}_\varepsilon)'(u_\varepsilon), u_\varepsilon^-\rangle=||u_\varepsilon^-||^2_\varepsilon +\int_{ \R^3}\phi_{u_\varepsilon^-}(u_\varepsilon^-)^2 dx\geq ||u_\varepsilon^-||^2_\varepsilon,$$ which implies $u_\varepsilon\geq 0$ in $\R^3$. The strong maximum principle then implies that $u_\varepsilon(x)>0$ for all $x\in\R^3$. The proof is complete. \end{proof} \vskip2mm {\section{Concentration of positive ground state solutions}\label{sec4}} \setcounter{equation}{0} \vskip2mm In this section, in order to discuss the concentration behavior of the ground state solutions $v_\varepsilon$ of \eqref{1} as $\varepsilon\to0$, we consider the family $u_\varepsilon(x)=v_\varepsilon(\varepsilon x)$, which is a family of positive ground state solutions of \eqref{22}. \begin{lemma}\label{L41} There exist $\varepsilon_*>0$, $\{y_\varepsilon\}\subset\R^3$ and $r$, $\beta>0$, such that $$\int_{B_r (y_\varepsilon)} |u_\varepsilon|^2 dx\geq\beta,\hspace{1ex}\text{for all}\hspace{1ex}\varepsilon\in(0,\varepsilon_*).$$ \end{lemma} \begin{proof} Arguing by contradiction, suppose that the lemma does not hold.
Then, there exists a sequence $\varepsilon_n\to0$ such that, for all $r>0$, $$\lim\limits_{n\to\infty}\sup\limits_{y\in \R^3}\int_{B_r (y)}|u_{\varepsilon_n}|^2dx=0.$$ Then, repeating the arguments employed in the proof of Lemma \ref{2chose1}, we obtain that $$c_{\varepsilon_n}\geq \frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}},$$ which contradicts $c_\varepsilon<\frac{1}{3}S^{\frac{3}{2}}|K|_{\infty}^{-\frac{1}{2}}.$ The proof is complete. \end{proof} \begin{lemma}\label{L42} The family $\{\varepsilon y_\varepsilon\}$ is bounded as $\varepsilon\to0$. Moreover, if $\varepsilon_n y_{\varepsilon_n}\to x_0$ along a subsequence, then $x_0\in \mathcal{G}$, i.e., $$G(x_0)=\inf\limits_{s\in \R^3}G(s).$$ \end{lemma} \begin{proof} For the sake of simplicity, we denote $y_n=y_{\varepsilon_n}$ and $u_n(x)=u_{\varepsilon_n}(x)$. Arguing by contradiction, suppose that there is a sequence $\varepsilon_n\to 0$ such that $|\varepsilon_n y_n|\to\infty$ as $n\to\infty$. Set $\widetilde{u}_n(x)=u_n(x+y_n)$, that is, $\widetilde{u}_n=\widetilde{u}_{\varepsilon_n}$. It follows from Lemma \ref{L41} that \begin{equation}\label{ineq41} \int_{B_r (0)} |\widetilde{u}_n|^2 dx\geq\beta,\hspace{1ex}\text{for all}\hspace{1ex}n\in \mathbb{N}. \end{equation} Then, $\widetilde{u}_n$ is a ground state solution of \begin{equation*} -\Delta\widetilde{u}_n+\phi_{\widetilde{u}_n} \widetilde{u}_n+V(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n=\sum\limits_{i=1}^{m} Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i-2}\widetilde{u}_n+K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{4}\widetilde{u}_n, \end{equation*} and $||\widetilde{u}_n||_{\varepsilon_n}=||u_n||_{\varepsilon_n}$ is bounded. Moreover, up to a subsequence, we may assume that $\widetilde{u}_n\rightharpoonup\widetilde{u}$ in $H_\varepsilon$ with $\widetilde{u}\neq0$ and $\widetilde{u}\geq0$. It follows that \begin{equation}\label{41} \begin{aligned} &\int_{ \R^3} \nabla \widetilde{u}_n \nabla \varphi dx +\int_{ \R^3} V(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\varphi dx+\int_{ \R^3}\phi_{\widetilde{u}_n}\widetilde{u}_n\varphi dx \\ = &\sum\limits_{i=1}^{m} \int_{ \R^3}Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i-1}\varphi dx + \int_{ \R^3}K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{5}\varphi dx, \end{aligned} \end{equation} for any $ \varphi\in H^1(\R^3)$. Without loss of generality, we may assume that $V(\varepsilon_ny_n)\to V_\infty$, $Q_i(\varepsilon_ny_n)\to Q_i^\infty$ $(1\le i\le m)$ and $K(\varepsilon_ny_n)\to K^\infty$, as $n\to\infty$. Under our assumptions, $V$, $K$, and $Q_i$ $(1\le i\le m)$ are uniformly continuous, which implies that \begin{equation} V(\varepsilon_n x+\varepsilon_ny_n)\to V_\infty\quad\text{and}\quad K(\varepsilon_n x+\varepsilon_ny_n)\to K^\infty, \end{equation} and \begin{equation}\label{43} Q_i(\varepsilon_n x+\varepsilon_ny_n)\to Q_i^\infty\quad(1\le i\le m), \end{equation} as $n\to\infty$ uniformly on bounded sets of $\R^3$.
Using the weak limit of $\widetilde{u}_n$ and \eqref{41}--\eqref{43}, we get \begin{equation*} \begin{aligned} &\int_{ \R^3} \nabla \widetilde{u} \nabla \varphi dx +\int_{ \R^3} V_\infty \widetilde{u}\varphi dx+\int_{ \R^3}\phi_{\widetilde{u}}\widetilde{u}\varphi dx \\ = &\sum\limits_{i=1}^{m} \int_{ \R^3}Q_i^\infty|\widetilde{u}|^{q_i-1}\varphi dx + \int_{ \R^3}K^\infty|\widetilde{u}|^{5}\varphi dx, \end{aligned} \end{equation*} for any $ \varphi\in H^1(\R^3)$, which implies that $\widetilde{u}\in \mathcal{N}_\infty$, i.e., $\langle I_{\infty}'(\widetilde{u}),\widetilde{u} \rangle=0.$ We deduce from Lemma \ref{cric1} and Fatou's lemma that \begin{equation} \begin{aligned} c_\infty\le& I_{\infty}(\widetilde{u})-\frac{1}{q_{i_0}}\langle I_{\infty}'(\widetilde{u}),\widetilde{u} \rangle\\ =& \left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\left(|\nabla \widetilde{u}|^2+V_\infty|\widetilde{u}|^{2}\right) dx +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{\widetilde{u}}\widetilde{u}^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i^\infty|\widetilde{u}|^{q_i}dx+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i^\infty|\widetilde{u}|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K^\infty|\widetilde{u}|^{6}dx\\ \le&\liminf\limits_{n\to \infty } \left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\left(|\nabla \widetilde{u}_n|^2+V(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{2}\right) dx +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{\widetilde{u}_n}\widetilde{u}_n^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i}dx\\ &+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{6}dx\\ \le&\limsup\limits_{n\to \infty } \left(I_{\varepsilon_n}(\widetilde{u}_n)-\frac{1}{q_{i_0}}\langle I_{\varepsilon_n}'(\widetilde{u}_n),\widetilde{u}_n \rangle\right)\\ =&\limsup\limits_{n\to \infty }c_{\varepsilon_n} \le c_0, \end{aligned} \end{equation} which contradicts the fact that $c_0<c_\infty$. Hence, $\{\varepsilon y_\varepsilon\}$ is bounded as $\varepsilon\to0$. Now suppose that $\varepsilon_n y_n\to x_0$ along a subsequence. Since $V$, $Q_i$ $(1\le i\le m)$ and $K$ are uniformly continuous, we have \begin{equation}\label{46} V(\varepsilon_n x+\varepsilon_ny_n)\to V(x_0)\quad\text{and}\quad K(\varepsilon_n x+\varepsilon_ny_n)\to K(x_0), \end{equation} and \begin{equation}\label{477} Q_i(\varepsilon_n x+\varepsilon_ny_n)\to Q_i(x_0)\quad(1\le i\le m), \end{equation} as $n\to\infty$ uniformly on bounded sets of $\R^3$. Taking $\varphi=\widetilde{u}$ and passing to the limit as $n\to\infty$ in
\eqref{41}, we get \begin{equation*} \begin{aligned} ||\widetilde{u}||^2_{\mathcal{D}^{1,2}} +\int_{ \R^3} V(x_0)|\widetilde{u}|^{2}dx+\int_{ \R^3}\phi_{\widetilde{u}}\widetilde{u}^2 dx = \sum\limits_{i=1}^{m} \int_{ \R^3}Q_i(x_0)|\widetilde{u}|^{q_i}dx + \int_{ \R^3} K(x_0)|\widetilde{u}|^{6}dx, \end{aligned} \end{equation*} which implies that $\widetilde{u}\in \mathcal{N}^{x_0}$, i.e., $\langle (I^{x_0})'(\widetilde{u}),\widetilde{u} \rangle=0.$ We deduce from Lemma \ref{cric1} and Fatou's lemma that \begin{equation}\label{47} \begin{aligned} c_0=& \inf\limits_{s\in\R^3}G(s)\le G(x_0)\\ \le& I^{x_0}(\widetilde{u} )-\frac{1}{q_{i_0}}\langle (I^{x_0})'(\widetilde{u}),\widetilde{u} \rangle\\ =& \left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\left(|\nabla \widetilde{u}|^2+V(x_0)|\widetilde{u}|^{2}\right) dx +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{\widetilde{u}}\widetilde{u}^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i(x_0)|\widetilde{u}|^{q_i}dx+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i(x_0)|\widetilde{u}|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K(x_0)|\widetilde{u}|^{6}dx\\ \le&\liminf\limits_{n\to \infty } \left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\left(|\nabla \widetilde{u}_n|^2+V(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{2}\right) dx +\left(\frac{1}{4}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}\phi_{\widetilde{u}_n}\widetilde{u}_n^2 dx \\ &- \sum\limits_{i=1}^{i_0-1} \left(\frac{1}{q_i}-\frac{1}{q_{i_0}}\right)\int_{ \R^3}Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i}dx\\ &+\sum\limits_{i=i_0+1}^{m}\left (\frac{1}{q_{i_0}}-\frac{1}{q_i}\right)\int_{ \R^3}Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i}dx\\ &+\left({\frac{1}{q_{i_0}}-\frac{1}{6}}\right)\int_{ \R^3} K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{6}dx\\ \le&\limsup\limits_{n\to \infty } \left(I_{\varepsilon_n}(\widetilde{u}_n)-\frac{1}{q_{i_0}}\langle I_{\varepsilon_n}'(\widetilde{u}_n),\widetilde{u}_n \rangle\right)\\ =&\limsup\limits_{n\to \infty }c_{\varepsilon_n}\\ \le&c_0, \end{aligned} \end{equation} whence $G(x_0)=\inf\limits_{s\in \R^3}G(s)$. The proof is complete. \end{proof} \begin{lemma}\label{L43} $\widetilde{u}_n \to \widetilde{u}$ in $H_\varepsilon$ as $n\to\infty$. Furthermore, there exist $C>0$ and $\varepsilon_*>0$ such that $|\widetilde{u}_n|_\infty\le C$ and $$ \lim\limits_{|x|\to\infty}\widetilde{u}_{\varepsilon}(x)=0 \hspace{1ex}\text{uniformly in} \hspace{1ex}\varepsilon\in (0,\varepsilon_*).$$ \end{lemma} \begin{proof} Due to the Brezis--Lieb lemma and \eqref{46}--\eqref{477}, we get $$I_{\varepsilon_n}(\widetilde{u}_n - \widetilde{u})=I_{\varepsilon_n}(\widetilde{u}_n)-I^{x_0}(\widetilde{u})+o_n(1).$$ Since $\varepsilon_n y_{\varepsilon_n}\to x_0$, it follows from \eqref{47} that $$\lim\limits_{n\to\infty}I_{\varepsilon_n}(\widetilde{u}_n - \widetilde{u})=0.
$$ Similarly, we also get $$\lim\limits_{n\to\infty}\langle I_{\varepsilon_n}'(\widetilde{u}_n - \widetilde{u}),\widetilde{u}_n - \widetilde{u} \rangle=0.$$ Consequently, $$\left(\frac{1}{2}-\frac{1}{q_{i_0}}\right)\limsup\limits_{n\to \infty }||\widetilde{u}_n - \widetilde{u}||^2 \le\lim\limits_{n\to \infty } \left(I_{\varepsilon_n}(\widetilde{u}_n- \widetilde{u})-\frac{1}{q_{i_0}}\langle I_{\varepsilon_n}'(\widetilde{u}_n- \widetilde{u}),\widetilde{u}_n- \widetilde{u} \rangle\right)=0,$$ which implies that $\widetilde{u}_n \to \widetilde{u}$ in $H_\varepsilon$ as $n\to\infty$. From Lemma \ref{L42}, we know that the sequence $\widetilde{u}_n$ satisfies \begin{equation}\label{49} -\Delta\widetilde{u}_n+\left(\phi_{\widetilde{u}_n}+V(\varepsilon_n x+\varepsilon_ny_n)-K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{4}\right) \widetilde{u}_n=\sum\limits_{i=1}^{m} Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i-2}\widetilde{u}_n. \end{equation} By Lemma \ref{L21} (iv) and \eqref{ineq41}, we obtain that $0<\phi_{\widetilde{u}_n} <C$, so that $\phi_{\widetilde{u}_n}+V(\varepsilon_n x+\varepsilon_ny_n)\in L^{\infty}_{loc}(\R^3)$. Moreover, since $K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{4}\in L^\frac{3}{2}(\R^3)$ and $4<q_1<q_2<\cdots<q_m<6$, using a result in \cite[Proposition 3.3]{hezou} or \cite{BK}, we have $\widetilde{u}_n\in L^{t}(\R^3)$ for all $t\geq 2$. Furthermore, $\widetilde{u}_n$ satisfies \begin{equation} \begin{aligned} -\Delta\widetilde{u}_n&\le -\Delta\widetilde{u}_n+\left(\phi_{\widetilde{u}_n}+V(\varepsilon_n x+\varepsilon_ny_n)\right) \widetilde{u}_n\\ &=g_n(x):=\sum\limits_{i=1}^{m} Q_i(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{q_i-2}\widetilde{u}_n+K(\varepsilon_n x+\varepsilon_ny_n)|\widetilde{u}_n|^{4} \widetilde{u}_n, \end{aligned} \end{equation} where $g_n(x)\in L^{\frac{s}{2}}(\R^3)$ for some $s>3$. Applying a result of \cite{NS} or \cite[Proposition 3.4]{hezou}, we can obtain that $$\sup\limits_{x\in B_r(y)}\widetilde{u}_n(x)\le C\left(|\widetilde{u}_n|_{L^2(B_{2r}(y))}+|\widetilde{u}_n|_{L^{\frac{s}{2}}(B_{2r}(y))}\right),\hspace{1ex}\text{for any}\hspace{1ex} y\in \R^3,$$ which implies that $|\widetilde{u}_n|_{\infty}\le C$ and $$ \lim\limits_{|x|\to\infty}\widetilde{u}_n(x)=0 \hspace{1ex}\text{uniformly in} \hspace{1ex}n\in \mathbb{N}.$$ Consequently, there exists $\varepsilon_*>0$ such that \begin{equation}\label{411} \lim\limits_{|x|\to\infty}\widetilde{u}_{\varepsilon}(x)=0 \hspace{1ex}\text{uniformly in} \hspace{1ex}\varepsilon\in (0,\varepsilon_*). \end{equation} The proof is complete. \end{proof} In order to obtain the exponential decay of the solutions $u_\varepsilon$, it is enough to show the following result about $\widetilde{u}_\varepsilon$. \begin{lemma}\label{L44} There exist constants $C>0$ and $\mu >0$ such that $$\widetilde{u}_\varepsilon\le C e^{-\mu|x|} \hspace{1ex}\text{for all}\hspace{1ex}x\in\R^3.$$ \end{lemma} \begin{proof} We borrow an idea from \cite[Lemma 3.11]{hezou}. It follows from \eqref{411} that there exists $R>0$ such that \begin{equation} \sum\limits_{i=1}^{m} Q_i(x_0)|\widetilde{u}_\varepsilon|^{q_i-2}+K(x_0)|\widetilde{u}_\varepsilon|^{4}\le \frac{V(x_0)}{2}\hspace{1ex}\text{for all} \hspace{1ex} |x|> R\hspace{1ex}\text{and}\hspace{1ex}\forall\varepsilon\in(0,\varepsilon_*). \end{equation} Set $\tau (x) =C e^{-\mu|x|}$ with $\mu^2<\frac{V(x_0)}{2}$ and with $C$ chosen so that $\tau(x)\geq \widetilde{u}_\varepsilon(x)$ for $|x|=R$. A direct computation in radial coordinates gives \begin{equation}\label{tau} \Delta\tau(x)=\left(\mu^2-\frac{2\mu}{|x|}\right)\tau(x)\le\mu^2\tau(x),\quad x\neq0.
\end{equation} By Lemma \ref{L21} (iv) and \eqref{ineq41}, we can get \begin{equation}\label{tau1} \begin{aligned} -\Delta\widetilde{u}_\varepsilon+V(x_0)\widetilde{u}_\varepsilon &<-\Delta\widetilde{u}_\varepsilon+V(x_0)\widetilde{u}_\varepsilon+\phi_{\widetilde{u}_\varepsilon}\widetilde{u}_\varepsilon\\ &=\sum\limits_{i=1}^{m} Q_i(x_0)|\widetilde{u}_\varepsilon|^{q_i-2}\widetilde{u}_\varepsilon+K(x_0)|\widetilde{u}_\varepsilon|^{4}\widetilde{u}_\varepsilon\\ &\le \frac{V(x_0)}{2}\widetilde{u}_\varepsilon \hspace{1ex}\text{for} \hspace{1ex} |x|> R. \end{aligned} \end{equation} Let $\tau_\varepsilon=\tau-\widetilde{u}_\varepsilon$; combining \eqref{tau} and \eqref{tau1}, we have \begin{equation} \begin{cases} -\Delta\tau_\varepsilon +\frac{V(x_0)}{2}\tau_\varepsilon>0,& |x|>R,\\ \tau_\varepsilon\geq0,&|x|=R,\\ \lim\limits_{|x|\to\infty}\tau_\varepsilon(x)=0. \end{cases} \end{equation} The maximum principle implies that $\tau_\varepsilon\geq0$ for $|x|\geq R$. It follows that $$\widetilde{u}_\varepsilon\le C e^{-\mu|x|}\hspace{1ex}\text{for all} \hspace{1ex} |x|\geq R\hspace{1ex}\text{and}\hspace{1ex}\forall\varepsilon\in(0,\varepsilon_*).$$ The proof is complete. \end{proof} {\section{Proof of Theorems \ref{the1}--\ref{the2}}\label{sec5}} \setcounter{equation}{0} \vskip2mm In this section, we prove the existence, concentration and exponential decay of $v_\varepsilon$ stated in Theorems \ref{the1}--\ref{the2}. \vskip4mm {\subsection{Proof of Theorem \ref{the1}}\label{subsec51}} \vskip2mm By Lemma \ref{L39}, $u_\varepsilon$ is a positive ground state solution of \eqref{22}. Then, $v_\varepsilon(x)=u_\varepsilon(\frac{x}{\varepsilon})$ is a positive ground state solution of \eqref{1}. Let $z_n$ denote a maximum point of $\widetilde{u}_n$; it follows from Lemma \ref{L43} that $\{z_n\}$ is a bounded sequence in $\R^3$. Then, there exists $R>0$ such that $z_n\in B_R(0)$. Thus, the global maximum point of $u_n$ is $z_n+y_n$. Using the boundedness of $\{z_n\}$ and Lemma \ref{L42}, we have $$\lim\limits_{n\to \infty }\varepsilon_n(z_n+y_n)=x_0.$$ Then, noting the relations among $\widetilde{u}_\varepsilon$, $u_\varepsilon$ and $v_\varepsilon$, we have \begin{equation}\label{51} v_\varepsilon(\varepsilon x+\varepsilon z_\varepsilon +\varepsilon y_\varepsilon)=u_\varepsilon( x+ z_\varepsilon + y_\varepsilon)=\widetilde{u}_\varepsilon( x+ z_\varepsilon). \end{equation} Set $\eta_\varepsilon(x)=v_\varepsilon(\varepsilon x+x_\varepsilon)$, where $x_\varepsilon\to x_0$ as $\varepsilon\to0$. It follows from Lemma \ref{L41} and \eqref{51} that $\eta\neq0$. By arguments similar to those of Lemmas \ref{L42} and \ref{L43}, we can obtain that $\eta_\varepsilon\to \eta$ in $H_\varepsilon$ and that $\eta$ is a ground state solution of \begin{equation*} \begin{aligned} -\Delta u+\phi_u u+V(x_0)u=\sum\limits_{i=1}^{m} Q_i(x_0)|u|^{q_i-2}u+K(x_0)|u|^{4}u,\hspace{1ex}x\in\R^3. \end{aligned} \end{equation*} From Lemma \ref{L44}, we have $$v_\varepsilon(x)=u_\varepsilon(\frac{x}{\varepsilon})=\widetilde{u}_\varepsilon(\frac{x-\varepsilon y_\varepsilon}{\varepsilon})\le C \exp\left(-\frac{\mu}{\varepsilon}|x-x_\varepsilon|\right), $$ for all $x\in\R^3.$ The proof is complete. \hspace{9.15cm}$\Box$ \vskip4mm {\subsection{Proof of Theorem \ref{the2}}} \vskip2mm In this subsection, we consider the case where the electronic potential $h(x)$ satisfies $(H_1)$. By a strategy similar to that of Lemma \ref{L39}, we can obtain that problem \eqref{1} has a positive ground state solution, still denoted by $v_\varepsilon$.
Then, the following energy functional \begin{equation} \begin{aligned} \mathcal{I}_\varepsilon(u)=&\frac{1}{2}||u||^2_\varepsilon +\frac{1}{4}\int_{ \R^3}\left[\frac{1}{|x|}*h(\varepsilon x)u^2\right]h(\varepsilon x)u^2 dx\\ &- \sum\limits_{i=1}^{m} \frac{1}{q_i}\int_{ \R^3}Q_i (\varepsilon x)|u|^{q_i}dx-\frac{1}{6}\int_{ \R^3} K(\varepsilon x)|u|^{6}dx, \end{aligned} \end{equation} has a critical point $u_\varepsilon$. Next, we prove the following claims. \vskip2mm \par\noindent \begin{claim}\ Suppose that $|\varepsilon_n y_n|\to \infty$ as $n\to\infty$. Then $$\int_{\R^3}\left[\frac{1}{|x|}*h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n^2\right]h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\varphi dx\to0,$$ for any $ \varphi\in H^1(\R^3)$, as $n\to\infty.$ \end{claim} \begin{proof} From assumption $(H_1)$, we know that $h(x)$ is bounded in $\R^3$. Since $\{\widetilde{u}_n\}$ is bounded in $H_\varepsilon$, by the H\"{o}lder inequality we have \begin{equation}\label{53} \begin{aligned} \left|\frac{1}{|x|}*h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n^2\right| & =\left|\int_{\R^3}h(\varepsilon_n y+\varepsilon_ny_n)\frac{\widetilde{u}_n^{2}(y)}{|x-y|}dy\right|\\ &\le C_1\int_{\R^3}\frac{\widetilde{u}_n^{2}(y)}{|x-y|}dy\\ &=C_1\int_{|x-y|\le1}\frac{\widetilde{u}_n^{2}(y)}{|x-y|}dy+C_1\int_{|x-y|\geq1}\frac{\widetilde{u}_n^{2}(y)}{|x-y|}dy\\ &\le C_1\int_{|x-y|\le1}\frac{\widetilde{u}_n^{2}(y)}{|x-y|}dy+C_2\\ &\le C_1\left(\int_{|x-y|\le1}\widetilde{u}_n^{4}(y)dy\right)^{\frac{1}{2}} \left(\int_{|x-y|\le1}\frac{1}{|x-y|^2}dy\right)^{\frac{1}{2}}+C_2\\ &\le C_3\left(\int_{|z|\le1}\frac{dz}{|z|^{2}}\right)^{\frac{1}{2}}+C_2\\ &\le C. \end{aligned} \end{equation} Since $|\varepsilon_n y_n|\to \infty$ as $n\to\infty$, it follows from $(H_1)$ that $h(\varepsilon_n x+\varepsilon_ny_n)\to0$ as $n\to\infty$. Note that \begin{equation}\label{54} \begin{aligned} \widetilde{u}_n\rightharpoonup\widetilde{u} &\hspace{2ex}\text{in} \hspace{1ex}H_\varepsilon;\\ \widetilde{u}_n\rightharpoonup\widetilde{u} &\hspace{2ex}\text{in}\hspace{1ex} L^p(\R^3),\hspace{1ex}2\le p\le 6;\\ \widetilde{u}_n\to\widetilde{u}&\hspace{2ex}\text{a.e.}\hspace{1ex}\text{on}\hspace{1ex}\R^3. \end{aligned} \end{equation} We can obtain that $h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\to0$ a.e. on $\R^3$ and that $h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n$ is bounded in $L^2(\R^3)$. It follows that $h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\rightharpoonup0$ in $L^2(\R^3)$. Then, we have \begin{equation}\label{55} \begin{aligned} \int_{\R^3}h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\varphi dx\to0, \end{aligned} \end{equation} as $n\to\infty$. Combining \eqref{53} and \eqref{55}, the desired conclusion is obtained. \end{proof} \begin{claim} Suppose that $\varepsilon_n y_n\to x_0$ as $n\to\infty$. Then $$\begin{aligned} &\int_{\R^3}\left[\frac{1}{|x|}*h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n^2\right]h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\varphi dx\\ =&h(x_0)^2\int_{ \R^3}\left[\frac{1}{|x|}*\widetilde{u}^2\right]\widetilde{u}\varphi dx+o_n(1)\\ =&o_n(1), \end{aligned}$$ for any $ \varphi\in H^1(\R^3)$. \end{claim} \begin{proof} For simplicity, we denote $h(\varepsilon_n x+\varepsilon_ny_n)$ by $\widetilde{h}_n(x)$. Then, from assumption $(H_1)$, we have \begin{equation}\label{56} \widetilde{h}_n(x)\to h(x_0), \end{equation} as $n\to\infty$ uniformly on bounded sets of $\R^3$.
Combining \eqref{54} and \eqref{56}, we have $\widetilde{h}_n(x)\widetilde{u}_n\to h(x_0)\widetilde{u}$ a.e. on $\R^3$ and $\widetilde{h}_n(x)\widetilde{u}_n$ is bounded in $L^2(\R^3)$. It follows that $\widetilde{h}_n(x)\widetilde{u}_n\rightharpoonup h(x_0)\widetilde{u}$ in $L^2(\R^3)$. Then, we have \begin{equation}\label{57} \left|\int_{\R^3}\left(\widetilde{h}_n(x)\widetilde{u}_n-h(x_0) \widetilde{u}\right)\varphi dx\right|\to0, \end{equation} for any $ \varphi\in H^1(\R^3)$, as $n\to\infty.$ By \eqref{53} and \eqref{57}, we have \begin{equation}\label{58} \left|\int_{ \R^3}\left[\frac{1}{|x|}*\widetilde{h}_n(x)\widetilde{u}_n^2\right]\left(\widetilde{h}_n(x)\widetilde{u}_n-h(x_0) \widetilde{u} \right)\varphi dx\right|\to0, \end{equation} for any $ \varphi\in H^1(\R^3)$, as $n\to\infty$. On the other hand, using \eqref{54} and \eqref{56}, we can deduce that $\widetilde{h}_n(x)\widetilde{u}_n^2\rightharpoonup h(x_0)\widetilde{u}^2$ in $L^{\frac{12}{5}}(\R^3)$. It follows from Lemma \ref{L21} (iv) that \begin{equation*} \frac{1}{|x|}*\widetilde{h}_n(x)\widetilde{u}_n^2\rightharpoonup\frac{1}{|x|}*h(x_0)\widetilde{u}^2 \hspace{2ex}\text{in} \hspace{1ex}\mathcal{D}^{1,2}(\R^3) \end{equation*} and hence in $L^6(\R^3)$. Then, we have \begin{equation} \left|\int_{ \R^3}\left[\frac{1}{|x|}*\left(\widetilde{h}_n(x)\widetilde{u}_n^2-h(x_0)\widetilde{u}^2\right) \right]\widetilde{u}\varphi dx\right|\to0, \end{equation} for any $ \varphi\in H^1(\R^3)$, as $n\to\infty.$ Note that \begin{equation}\label{510} \begin{aligned} &\left|\int_{\R^3}\left[\frac{1}{|x|}*\widetilde{h}_n(x)\widetilde{u}_n^2\right]\widetilde{h}_n(x)\widetilde{u}_n\varphi dx-h(x_0)^2\int_{ \R^3}\left[\frac{1}{|x|}*\widetilde{u}^2\right]\widetilde{u}\varphi dx\right|\\ \le&\left|\int_{ \R^3}\left[\frac{1}{|x|}*\widetilde{h}_n(x)\widetilde{u}_n^2\right]\left(\widetilde{h}_n(x)\widetilde{u}_n-h(x_0) \widetilde{u} \right)\varphi dx\right|\\ &+\left|h(x_0)\int_{\R^3}\left[\frac{1}{|x|}*\left(\widetilde{h}_n(x)\widetilde{u}_n^2-h(x_0)\widetilde{u}^2\right) \right]\widetilde{u}\varphi dx\right|. \end{aligned} \end{equation} Thus, using \eqref{58}--\eqref{510} and $(H_1)$, we have $$\begin{aligned} &\int_{\R^3}\left[\frac{1}{|x|}*h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n^2\right]h(\varepsilon_n x+\varepsilon_ny_n)\widetilde{u}_n\varphi dx\\ =&h(x_0)^2\int_{\R^3}\left[\frac{1}{|x|}*\widetilde{u}^2\right]\widetilde{u}\varphi dx+o_n(1)\\ =&o_n(1), \end{aligned}$$ as desired. The rest of the proof closely follows that of Theorem \ref{the1}. \end{proof} \vskip2mm {\section{Nonexistence of ground state solutions}\label{sec6}} \setcounter{equation}{0} \vskip2mm In this section, we study the nonexistence of ground state solutions for problem \eqref{1}. The proof for the case $h(x)\equiv 1$ is similar to that for $h(x)$ satisfying condition $(H_1)$, so we only give the proof for the latter case. \vskip2mm \begin{lemma}\label{L61} Assume that $(f_1)$--$(f_5)$ hold. Then, $c_\varepsilon=c_\infty$ for any $\varepsilon>0$. \end{lemma} \begin{proof} By Lemma \ref{L27}, we see that there exists $u_\infty\in \mathcal{N}_{\infty}$ such that $\mathcal{I}_{\infty}(u_\infty)=c_\infty$. Let us define $u_\infty^{\zeta _n}=u_\infty(x-\zeta_n)$, where $\zeta_n\in\R^3$ and $|\zeta_n|\to\infty$ as $n\to\infty$. From Lemma \ref{L23} (i) and (ii), there exists a positive bounded sequence $\{t_n\}$ such that $\{t_nu_\infty^{\zeta _n}\}\subset\mathcal{N}_\varepsilon$.
Then, we have \begin{equation*} \begin{aligned} &\hspace{1ex}t_n^2\left(||u_\infty||_{\mathcal{D}^{1,2}}^2+\int_{ \R^3}V(\varepsilon(x+\zeta_n))|u_\infty|^2dx\right)\\ &+t_n^4\int_{ \R^3}\left[\frac{1}{|x|}*h(\varepsilon(x+\zeta_n))|u_\infty|^2\right]h(\varepsilon(x+\zeta_n))|u_\infty|^2 dx\\ =& \sum\limits_{i=1}^{m} t_n^{q_i}\int_{\R^3}Q_i (\varepsilon(x+\zeta_n))|u_\infty|^{q_i}dx + t_n^6 \int_{ \R^3} K(\varepsilon(x+\zeta_n))|u_\infty|^{6}dx. \end{aligned} \end{equation*} Up to a subsequence, we may assume that $t_n\to {t}_0 >0$ as $n\to\infty$. Taking the limit as $n\to\infty$ in the above equality and using condition $(f_5)$, we get \begin{equation}\label{61} \begin{aligned} t_0^2\left(||u_\infty||_{\mathcal{D}^{1,2}}^2+\int_{ \R^3}V_\infty|u_\infty|^2dx\right) = \sum\limits_{i=1}^{m} t_0^{q_i}\int_{ \R^3}Q_i ^\infty|u_\infty|^{q_i}dx + t_0^6 \int_{ \R^3} K^\infty|u_\infty|^{6}dx. \end{aligned} \end{equation} Note that $u_\infty\in \mathcal{N}_{\infty}$, so we have \begin{equation}\label{62} \left(||u_\infty||_{\mathcal{D}^{1,2}}^2+\int_{ \R^3}V_\infty|u_\infty|^2dx\right) = \sum\limits_{i=1}^{m}\int_{\R^3}Q_i ^\infty|u_\infty|^{q_i}dx + \int_{ \R^3} K^\infty|u_\infty|^{6}dx. \end{equation} Combining \eqref{61} and \eqref{62}, we can obtain that $\lim\limits_{n\to\infty}t_n=t_0=1$. Since $V(x)$ is bounded, there exists a constant $M$ such that $|V(\varepsilon(x+\zeta_n))-V_\infty|\le M$. Since $u_\infty\in H_\varepsilon$, for any $\sigma>0$ there exists $R>0$ such that $\int_{\R^3\backslash B_{R}(0)}u_{\infty}^2dx<\frac{\sigma}{2M}$. Thus, we have \begin{equation*} \int_{\R^3\backslash B_{R}(0)}|V(\varepsilon(x+\zeta_n))-V_\infty|u_{\infty}^2dx<\frac{\sigma}{2}, \end{equation*} and, using condition $(f_5)$ and $|\zeta_n|\to\infty$ as $n\to\infty$, we have \begin{equation*} \int_{B_{R}(0)}|V(\varepsilon(x+\zeta_n))-V_\infty|u_{\infty}^2dx<\frac{\sigma}{2}, \end{equation*} for $n$ large enough. This implies that \begin{equation}\label{63} \int_{\R^3}V(\varepsilon(x+\zeta_n))u_{\infty}^2dx\to\int_{ \R^3}V_\infty u_{\infty}^2dx, \end{equation} as $n\to\infty$. Similarly, we can obtain that \begin{equation} \int_{ \R^3}K(\varepsilon(x+\zeta_n))|u_{\infty}|^6dx\to\int_{ \R^3}K^\infty|u_{\infty}|^6dx, \end{equation} and \begin{equation}\label{65} \int_{ \R^3}Q_i(\varepsilon(x+\zeta_n))|u_{\infty}|^{q_i}dx\to\int_{\R^3}Q_i^\infty|u_{\infty}|^{q_i}dx, \end{equation} as $n\to\infty$. From \eqref{63}--\eqref{65} and $\lim\limits_{n\to\infty}t_n=1$, we can deduce that \begin{equation}\label{66} \begin{aligned} c_\varepsilon=&\inf\limits_{u\in \mathcal{N}_\varepsilon} \mathcal{I}_\varepsilon(u)\\ \le& \mathcal{I}_\varepsilon(t_nu_\infty^{\zeta _n}) \\ =&\mathcal{I}_\infty(t_nu_\infty)+\frac{t_n^2}{2} \int_{ \R^3}\left(V(\varepsilon(x+\zeta_n))-V_\infty\right)u_{\infty}^2dx\\ &+\frac{t_n^4}{4}\int_{ \R^3}\left[\frac{1}{|x|}*h(\varepsilon(x+\zeta_n))u_\infty^2\right]h(\varepsilon(x+\zeta_n))u_\infty^2 dx\\ & - \sum\limits_{i=1}^{m}\frac{t_n^{q_i}}{q_i} \int_{\R^3}(Q_i (\varepsilon(x+\zeta_n))-Q_i^\infty)|u_\infty|^{q_i}dx - \frac{t_n^6}{6} \int_{ \R^3} (K(\varepsilon(x+\zeta_n))-K^\infty)|u_\infty|^{6}dx\\ \to&\mathcal{I}_{\infty}(u_\infty)=c_\infty, \end{aligned} \end{equation} as $n\to\infty$. Thus, we get $c_\varepsilon\le c_\infty$. On the other hand, it follows from condition $(f_5)$ that $\mathcal{I}_{\infty}(u)\le \mathcal{I}_{\varepsilon}(u)$ for any $u\in H_\varepsilon$.
Then, for any $u\in\mathcal{N}_\varepsilon$, by arguments similar to Lemma \ref{L23} (i) and (ii), we can obtain that there exists a unique $t_u>0$ such that $t_u u\in\mathcal{N}_\infty$, while $\mathcal{I}_{\varepsilon}(u)=\max\limits_{t\geq0} \mathcal{I}_{\varepsilon}(tu)$. So it is easy to see that \begin{equation}\label{67} c_\infty \le\inf\limits_{u\in \mathcal{N}_\varepsilon} \mathcal{I}_\infty(t_u u) \le\inf\limits_{u\in \mathcal{N}_\varepsilon} \mathcal{I}_\varepsilon(t_u u) \le\inf\limits_{u\in \mathcal{N}_\varepsilon} \mathcal{I}_\varepsilon(u)=c_\varepsilon. \end{equation} Combining \eqref{66} and \eqref{67}, the proof is complete. \end{proof} \begin{proof}[\rm{}\textbf{Proof of Theorem \ref{the3}}] Arguing by contradiction, suppose that problem \eqref{1} has a positive ground state solution, i.e., there exist $\varepsilon_0>0$ and $u_0\in\mathcal{N}_{\varepsilon_0}$ such that $\mathcal{I}_{\varepsilon_0}(u_0)=c_{\varepsilon_0}$. By arguments similar to Lemma \ref{L23} (i), there exists $t_{\varepsilon_0}>0$ such that $t_{\varepsilon_0} u_0\in\mathcal{N}_\infty$. From condition $(f_5)$, we get that $\mathcal{I}_{\infty}(u)\le \mathcal{I}_{\varepsilon_0}(u)$ for any $u\in H_{\varepsilon_0}$. Then, we have \begin{equation} c_\infty =\inf\limits_{u\in \mathcal{N}_\infty} \mathcal{I}_\infty(u) \le\mathcal{I}_\infty(t_{\varepsilon_0}u_0) \le \mathcal{I}_{\varepsilon_0}(t_{\varepsilon_0}u_0) \le\mathcal{I}_{\varepsilon_0}(u_0) =c_{\varepsilon_0}. \end{equation} It follows from Lemma \ref{L61} that $\mathcal{I}_{\varepsilon_0}(t_{\varepsilon_0}u_0) =\mathcal{I}_{\infty}(t_{\varepsilon_0}u_0)$. On the other hand, we have \begin{equation} \begin{aligned} \mathcal{I}_{\varepsilon_0}(t_{\varepsilon_0}u_0) =&\mathcal{I}_{\infty}(t_{\varepsilon_0}u_0)+\frac{t_{\varepsilon_0}^2}{2} \int_{ \R^3}\left(V(\varepsilon_0x)-V_\infty\right)u_{0}^2dx\\ &+\frac{t_{\varepsilon_0}^4}{4}\int_{ \R^3}\left[\frac{1}{|x|}*h(\varepsilon_0x)u_0^2\right]h(\varepsilon_0x)u_0^2 dx + \sum\limits_{i=1}^{m}\frac{t_{\varepsilon_0}^{q_i}}{q_i} \int_{ \R^3}(Q_i^\infty-Q_i (\varepsilon_0x))|u_0|^{q_i}dx\\ & + \frac{t_{\varepsilon_0}^6}{6} \int_{ \R^3} (K^\infty - K(\varepsilon_0x))|u_0|^{6}dx, \end{aligned} \end{equation} which implies that $\mathcal{I}_{\varepsilon_0}(t_{\varepsilon_0}u_0) >\mathcal{I}_{\infty}(t_{\varepsilon_0}u_0)$. This is a contradiction and ends the proof. \end{proof} \par\noindent \textbf{Acknowledgments} \vskip2mm This work is supported by the National Natural Science Foundation of China (No. 11671403). \vskip6mm
\section{Introduction} Molecular outflows are a commonly observed feature of the star-formation process, detected toward protostars that will ultimately form stars covering a wide range of main sequence masses up to spectral type B ($M_*\sim10$\solmass) and beyond. The driving force responsible for such bipolar molecular outflows is thought to be the accretion of matter onto the central protostellar object from a disk coupled with the requirement for angular momentum to be conserved in the process \citep{Konigl00,Pudritz07,Tan14,RosenOffner20}. The effects of limited resolution and obscuration impede observational tests of theories of the exact physical mechanism driving these outflows, be it X-wind or disk-wind driven \citep[see e.g.][]{PudritzBanerjee05, Pudritz07,Frank14}, and of the extent to which magnetic fields play a role. The breadth of the scales of molecular outflows, compared to the region from which they are driven, makes them an invaluable tool when testing star formation theories. For example, they can be used to indirectly probe the accretion rates of material onto forming protostars \citep{BontempsOutflow96,Ana13}. In addition to providing information on the protostars that drive them, outflows also have an impact on their natal clouds by entraining matter as they inject energy and momentum into the surrounding interstellar medium (ISM) \citep{Offner17}. This feedback may drive turbulence and ultimately help to disrupt the cloud. These effects have implications for the number and masses of stars (i.e. the initial mass function) which can form within a single molecular cloud \citep{Ana12, Plunkett13,Krumholz14, ZhangY15,Drabek-Maunder16}. As a result, the observation of molecular outflows toward candidate high-mass protostars provides valuable information on the poorly constrained formation processes of such objects and their effects on the environments in which they form. The Infrared Dark Cloud (IRDC) SDC335.579-0.292 \citep{Peretto09} (hereafter, SDC335) is an increasingly well-studied star-forming region \citep{Garay02, Peretto13, Avison15} harbouring one of the most massive millimetre cores observed in the Milky Way. Seen in absorption against the mid-infrared background, SDC335 covers approximately 2.4pc at its widest extent and displays six filamentary arms, which converge at the bright infrared source at its centre. Using the Atacama Large Millimeter/submillimeter Array (ALMA) Cycle 0 data, \citet{Peretto13} demonstrated that the whole SDC335 cloud is in the process of global collapse and that gas (traced by N$_2$H$^+$) is flowing along the filamentary arms towards this central region. ALMA continuum data at 3mm also highlighted two mm-cores, MM1 and MM2, with $M_{core}\sim$500 and 50\solmass, respectively \citep{Peretto13}. These data also included HNC observations indicating the presence of a molecular outflow from the MM1 core; however, that outflow has not been studied prior to this work. \defcitealias{Avison15}{Paper I} \citet{Avison15} (hereafter, Paper I) used radio continuum data from the Australia Telescope Compact Array (ATCA) (from 6 to 25 GHz) to reveal that the MM1 core houses two Hyper Compact H\textsc{ii}\ (HCH\textsc{ii}) regions, whilst MM2 contains a single source, which also exhibits characteristics of an HCH\textsc{ii}\ region.
Each HCH\textsc{ii}\ source was coincident with a Class II methanol (CH$_3$OH) maser, the pair of tracers clearly demonstrating SDC335 is in the process of forming three massive stars (each with $M_* > 9.0$\solmass, based on their calculated Lyman $\alpha$ flux). Using this constraint on the upper end of the mass range in SDC335, the authors estimate a final stellar population in SDC335 of $\sim$1400 ($M_* > 0.08$\solmass, assuming a \citealt{Kroupa02} IMF), suggesting that SDC335 could be a precursor to a massive cluster such as the Trapezium Cluster. In this paper, we report on the molecular outflows observed in SDC335, which are likely to have been launched by the massive protostars therein. We present the observed molecular species and observations used to study the molecular outflows in SDC335 in Section 2. Section 3 describes the detected outflow properties and how they were measured. In Section 4, we discuss the measured properties and how this relates to the evolutionary status of the SDC335 high-mass protostellar objects. In Section 5, we discuss evidence for outflow-filament interactions. We present our conclusions and summarise our findings in Section 6. \section{Observations and ancillary data} \begin{table*} \caption[]{Instrumental setup and image product properties used within this paper.} \begin{center} \small \begin{tabular}{p{2.0cm} p{1.4cm} c c c c p{1.8cm} p{1.8cm} p{1.8cm}} \hline \hline Molecule & Obs. Freq. & Beam & MRS$^{\star}$ & Chan. Width & Sensitivity$^{\star\star}$ & \multicolumn{3}{c}{Calibrators} \\ & [GHz] & [$^{\prime\prime}$\ $\times$ $^{\prime\prime}$]& [$^{\prime\prime}$] & [MHz]/[kms$^{-1}$] & [mJy/bm] / [K]& Amplitude & $\phi$ & Bandpass \\ \hline \multicolumn{9}{c}{~}\\ \multicolumn{9}{c}{Telescope: ATCA}\\ \multicolumn{9}{c}{~}\\ SiO (1$\rightarrow$0) & 43.42385 & 3.1 $\times$ 1.3 & 23.3 & 0.125 / 0.86 & 4.3 & PKS1934-638 & PKS1646-50$^{\ddagger}$ & PKS1253-055 \\ CH$_3$OH\ (7$\rightarrow$6) & 44.06941 & 2.0 $\times$ 1.1 & 23.0 & 0.0313 / 0.21 & 9.3 & PKS1934-638 & PKS1646-50 & PKS1253-055\\ \hline \multicolumn{9}{c}{~}\\ \multicolumn{9}{c}{Telescope: ALMA}\\ \multicolumn{9}{c}{~}\\ HNC (1$\rightarrow$0) & 90.66357 & 5.5 $\times$ 3.9 & 22.4 & 0.0665 / 0.22 & 14.0 & Neptune, & J1604-446 & J1517-2422 \\ & & & & & & Mercury & & \\ $^{13}$CO\ (3$\rightarrow$2)$^{\dagger}$ & 330.58797 & 0.82 $\times$ 0.58 & 10.5 & 0.243 /0.22 & 23.5 & Mars, & J1650-5044$^{\ddagger}$ & J1337-1257, \\ & & & & & & Titan, & & J1427-4206,\\ & & & & & & J1613-586 & & J1650-5044\\ & & & & & & & & J1924-2914\\ CO (3$\rightarrow$2)$^{\dagger}$ & 345.79599 & 0.72 $\times$ 0.54 & 11.8 & 0.254 /0.22 & 22.0 & Titan & J1650-5044, & J1427-4026,\\ & & & & & & J1613-586 & J1517-2422\\ \hline \multicolumn{9}{c}{~}\\ \multicolumn{9}{c}{Telescope: APEX}\\ \multicolumn{9}{c}{~}\\ CO (3$\rightarrow$2) & 345.7960 & 19.16 & - & 0.114 / 0.10 & 1.10 & - & - & - \\ CO (4$\rightarrow$3) & 461.0408 & 14.37 & - & 0.290 / 0.15 & 1.70 & - & - & - \\ CO (6$\rightarrow$5) & 691.4731 & 9.58 & - & 1.464 / 0.64 & 1.20 & - & - & - \\ CO (7$\rightarrow$6) & 806.6518 & 8.21 & - & 2.929 / 1.09 & 2.20 & - & - & - \\ \hline \end{tabular} \end{center} \vspace{0.05mm} {\tiny{\textbf{Notes:} $^{(\dagger)}$ALMA + ACA combined data.\\$^{(\star)}$ Maximum Recoverable Scale of emission, $MRS\sim0.6\frac{\lambda}{b_{min}}$, where ${b_{min}}$ is the shortest baseline in an array. 
\\ $^{(\ddagger)}$ PKS1646-50 and J1650-5044 are the same source under different naming conventions.\\ $^{(\star\star)}$ Unit mJy/bm for ATCA and ALMA, and K for APEX. Measured in a velocity width matching the channel width.}} \label{Obs_props:tab} \end{table*} Molecular line emission is key to understanding the morphologies and, more importantly, the kinematics and dynamic interactions of protostellar outflows. We present the results of SiO and Class I methanol (CH$_3$OH) maser observations made with ATCA toward SDC335 in combination with APEX\footnote{Atacama Pathfinder EXperiment, \citep{Gusten06}.} and archival ALMA data, observing lines of CO, $^{13}$CO\ (ALMA only), and HNC (ALMA only), which allow us to study the outflows and their disruptive effects. Silicon monoxide, SiO, is depleted in the gas phase of the ISM \citep{Walmsley99}, but it can be liberated from the grain mantles by shock fronts arising in molecular outflows \citep{Schilke97}, making it a useful tool for studying outflows \citep[e.g.][]{Cabral14}, particularly at outflow-cloud interaction points. Similarly, the Class-I CH$_3$OH\ maser (at 44\,GHz) is regularly used as a tracer of outflows \citep[e.g.][]{Cyganowski09,Cyganowski11}, as it is a collisionally excited maser species. Class-I CH$_3$OH\ masers are frequently seen as spatially offset from the local protostellar source, unlike the radiatively pumped Class-II CH$_3$OH\ maser species, lending support to the idea that the Class-I species is being excited in regions of interaction between outflowing material and the surrounding molecular material \citep{Plambeck90, Kurtz04, Vornokov10}. These lines and the characteristics of the observations are listed in Table \ref{Obs_props:tab}. Finally, CO, being ubiquitous in the ISM, provides sufficient molecular abundance to trace the high velocity gas entrained by the outflows. Thus, it is a good tracer of the large scale molecular outflow morphology, something that is not possible when observing shock-tracing species. \subsection{ATCA data} The ATCA data were taken in a single 12.5 hour observing block on 3 September 2015 under the project code C3023. These observations comprised two 2 GHz continuum bands within which four 64 MHz zoom bands were placed using the CABB correlator \citep{CABBpaper}. The primary target molecular lines used from these data are SiO (1-0) and the Class-I CH$_3$OH\ maser transition. These observations were taken with ATCA in the `750B' antenna configuration. During the data reduction, all baselines to the antenna at 6.0km from the array centre were flagged out, meaning that these data have maximum and minimum baselines of 765.3 and 61.2m, respectively. Data reduction was carried out in the software package \texttt{MIRIAD} using standard ATNF calibration and imaging strategies\footnote{See the MIRIAD user guide for more information, http://www.atnf.csiro.au/computing/software/miriad/userguide/.}. We note that poor atmospheric conditions meant that approximately the first four hours of the run were entirely flagged, limiting the total observing time on source to $\sim$ 4.7 hours and setting the theoretical sensitivity to $\sim$5.64mJy/beam per 31.25kHz channel. \subsection{ALMA Band 7 data} We use data taken from the ALMA archive, which forms part of the Cycle 1 observing program 2012.0.00781.S. For this work, we used the 12- and 7-m array observations at 330.64GHz and 345.874GHz, which cover the $^{13}$CO(3-2) and CO(3-2) transitions, respectively.
These observations comprise a 39 pointing Nyquist sampled rectangular mosaic of the central region of SDC335, which is sufficient to cover the three HCH\textsc{ii}\ regions observed in \citetalias{Avison15}, with the 12-m array. This mosaic has a spatial extent of 65$^{\prime\prime}$$\times$68$^{\prime\prime}$\, centred at RA = 16h30m58.550s, Dec = $-$48\degs43$^{\prime}$54.00$^{\prime\prime}$. The 7-m data had a complementary 14 point mosaic pattern covering 74.6$^{\prime\prime}$$\times$69$^{\prime\prime}$\ centred at the same position, such that the whole 12-m mosaic region was covered (and fractionally exceeded). The data were downloaded from the archive, reprocessed, and the 12- and 7-m data were combined as per ALMA recommendations\footnote{See e.g. https://casaguides.nrao.edu/index.php/M100\_Band3 for a CASA data combination tutorial.} in the data reduction package CASA \citep{CASAREF}, with a continuum subtraction applied. For the $^{13}$CO\ data, the array was configured with minimum and maximum baselines of 10.7 and 1284.1m, respectively\footnote{We note that in these $^{13}$CO\ data, only $\sim$6\% of baselines are in excess of 600m (the RMS length of all baselines being 325.65m), giving rise to a larger final synthesised beam in the combined image than would be expected from calculating the resolution using the maximum baseline.}. During the CO observations the minimum and maximum baselines were 9.1 and 437.8m, respectively. The synthesised beam sizes of these data are given in Table \ref{Obs_props:tab}. \subsection{ALMA Band 3 data} ALMA Cycle 0 project 2011.0.00474.S was an 11 pointing mosaic observation at 90 GHz, covering the whole IRDC cloud seen in extinction against the mid-infrared background at 8\mewm\ from \textit{Spitzer} (see Figure \ref{PressyFig:fig} and Figure 1 of \citealt{Peretto13}). The continuum and N$_2$H$^+$ from these data were originally published in \citet{Peretto13}, though the line of interest in this current work, HNC, was not. The data were taken with 16 12-m antennas with no 7-m data taken (as this was not available with ALMA at the time). The array had minimum and maximum baselines of 18.3 and 197.7m, respectively. \subsection{APEX CO data} Single dish observations were made with the APEX telescope covering CO transitions (3-2), (4-3), (6-5) and (7-6). The observations of CO J=4-3 (at 461.04077 GHz) and J=3-2 (at 345.79599 GHz) were taken with the FLASH+ dual-channel receiver \citep{2014ITTST...4..588K} on 3-4 June 2013. At the J=3-2 transition the average system temperature was $300$\,K, giving a typical root mean square (RMS) noise of 1.1\,K in 0.1\,km/s channels. For the J=4-3 transition, the average system temperature was 1276\,K, giving an average noise of 1.7 K in channels of 0.15 km/s velocity width. The data were sampled on a 3$^{\prime\prime}$\ grid and then regridded during the data reduction to resolutions of 19.2$^{\prime\prime}$\ and 14.4$^{\prime\prime}$\ full width at half maximum for the J=3-2 and J=4-3 observations, respectively. The emission in the (3-2) and (4-3) transitions is shown in Figure \ref{OUTFLOW_GRID:fig}-D. The CO\ transitions of J=6-5 (at 691.473076\,GHz) and J=7-6 (at 806.651806\,GHz) were observed simultaneously with the seven-pixel CHAMP+ \citep{2006SPIE.6275E..0NK, 2008SPIE.7020E..10G} dual channel receiver on 30 May 2013. The (7-6) transition data were convolved to a grid with a 9.6$^{\prime\prime}$\ resolution during reduction to match the resolution of the (6-5) transition, as shown in Figure \ref{OUTFLOW_GRID:fig}-E.
The RMS noise level across the maps was 1.2\,K in 0.64 km/s channels and 2.2\,K in 1.1 km/s channels for the lower and higher frequency transitions, respectively. \begin{figure*} \centering \includegraphics[scale=0.6]{SubmissionFigsPDFs/Final_Pressy_2019.pdf} \caption{Three colour image of SDC335 overlaid with ALMA CO contours highlighting three potential outflows within the cloud. Colour scale: \textit{Spitzer} GLIMPSE three colour image with red, green and blue using 8.0, 4.5 and 3.6\mewm, respectively. Cyan and red contours: ALMA CO(3-2) integrated images showing the extent and morphology of the three potential outflows in the region, with contours at $\sim$9\% of the peak to highlight their full extent. The white $\star$ symbols denote the locations of Class II CH$_3$OH\ masers associated with the MM1\textit{a}, MM1\textit{b}\ and MM2 compact radio cores \citepalias{Avison15}. Orange dashed lines: Nominal centroid positions of the six filaments seen in SDC335 \citep[c.f.][]{Peretto13}.} \label{PressyFig:fig}% \end{figure*} \section{Outflow identification} \label{Results:sec} To identify potential outflows within our data, a continuum subtracted data cube was created for each molecular line. These cubes have a velocity range from $-$100 to 0 kms$^{-1}$, which is sufficient to cover the whole kinematic range of the observed species given the $V_{lsr}$ of the target. The MM1 core has a $V_{lsr}$ of $-$46.6kms$^{-1}$\ and for MM2 $V_{lsr}$ = $-$46.5kms$^{-1}$\ \citep{Peretto13}. The $V_{lsr}$ values of the compact radio cores observed by \citetalias{Avison15} are consistent with those of the mm-cores, based on their associated Class II (6.7-GHz) methanol masers, MM1\textit{a}\ $V_{\rm{CH_3OH}}$=$-$48.0 to $-$45.0kms$^{-1}$, MM1\textit{b}\ $V_{\rm{CH_3OH}}$=$-$56.0 to $-$50.0kms$^{-1}$\ and MM2 $V_{\rm{CH_3OH}}$=$-$51.0 to $-$43.0kms$^{-1}$ \citep{MMB330to345}, and the observed modulus offsets between molecular gas and peak maser emission of $\sim$ 3 to 4 kms$^{-1}$\ \citep{Szymczak07, Pandian09, GreenMcClure11}. As such, we adopted a $V_{lsr}$ of $-$46.6kms$^{-1}$\ for MM1\textit{a}\ and MM1\textit{b}\ and $-$46.5kms$^{-1}$\ for MM2, in line with the mm-core values. Using these cubes, the velocity ranges of each identified outflow for each molecular species were found by visual inspection (defined as the absolute offset from $V_{lsr}$ to the last channel with a 3$\sigma$ detection) and are listed in Table \ref{OutflowCalc:tab}. Emission close to the $V_{lsr}$ of the system is complex, particularly in the ALMA CO maps. As such, we implemented velocity range limits for our interpretation of the structures in SDC335. For all species, except CO, we excluded emission $\pm5$kms$^{-1}$ from the $V_{lsr}$. For calculations of outflow properties with CO we use a larger velocity offset from the $V_{lsr}$, as described in Section \ref{DerivProps:sec}. The velocity ranges used for each outflow by species are given in Table \ref{OutflowCalc:tab}. We then generated integrated intensity maps over these velocity ranges, from which we measured the observed spatial extent and opening angles of each outflow wing. Within the multiple datasets presented in this paper, there are three distinct molecular outflows observed within SDC335. We label these A, B, and C, and describe their respective morphologies, along with the identification of their likely progenitor protostars, in Section \ref{OutDescript:sec}.
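For concreteness, the slab-and-integrate step described above can be sketched in code. The following is a minimal illustration using the community Python package \texttt{spectral-cube}; the file names are hypothetical, and this is an indicative sketch rather than the reduction pipeline (\texttt{MIRIAD}, CASA) actually used for these data.

\begin{verbatim}
# Minimal sketch: integrated-intensity (moment-0) map over one
# velocity range from the table of outflow velocity ranges.
# File names are hypothetical; illustrative only.
import astropy.units as u
from spectral_cube import SpectralCube

cube = SpectralCube.read("SDC335_CO32.fits")   # continuum-subtracted cube
vcube = cube.with_spectral_unit(u.km / u.s,
                                velocity_convention="radio",
                                rest_value=345.79599 * u.GHz)  # CO(3-2)

# Outflow A, blue lobe, CO: -82.4 to -59.6 km/s.
slab = vcube.spectral_slab(-82.4 * u.km / u.s, -59.6 * u.km / u.s)
mom0 = slab.moment(order=0)   # integrate intensity over the slab
mom0.write("outflowA_blue_mom0.fits", overwrite=True)
\end{verbatim}

The same pattern applies to each species, lobe and velocity range in Table \ref{OutflowCalc:tab}.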
Figure \ref{PressyFig:fig} shows components of each outflow superimposed on a \textit{Spitzer} IRAC three colour image of the SDC335 IRDC to highlight their spatial extent within the larger scale cloud. Figure \ref{OUTFLOW_GRID:fig} presents the morphology of each outflow as observed in CO, $^{13}$CO, HNC (from ALMA), SiO (from ATCA), and CO from APEX. In Figures \ref{OUTA_SPEC_blue:fig} to \ref{OUTC_SPEC:fig}, we present the spectra of each detected line from each outflow within the interferometric data. We also present spectra over the same velocity range as the outflows (-100 to 0 kms$^{-1}$) at the peak position of each of the three HCH\textsc{ii}\ regions at the CO, $^{13}$CO, HNC, and SiO tunings in Appendix \ref{HCHII_SPEC:app} as Figures \ref{MM1a_SPEC:fig} to \ref{MM2_SPEC:fig}. \begin{figure*} \centering \includegraphics[scale=0.64]{SubmissionFigsPDFs/OutFlowGrid3b2_2020FINAL.pdf} \caption{\small{Outflows in SDC335. Each panel includes the following data of the SDC335 region. \textit{Greyscale}: 8 GHz continuum emission from \citetalias{Avison15}. \textit{Orange dashed lines}: Filament centroids as per Figure \ref{PressyFig:fig}. \textit{Triangles}: Class-I CH$_3$OH\ maser positions (c.f. Figure \ref{MaserCO:fig} and Table \ref{Maser:tab}). The ellipse in the bottom left of each plot and at top left in panels \textit{(A)}, \textit{(D)} \& \textit{(E)} gives the observed beam shape; these are the synthesised beam widths for the interferometric observations and the HPBW for the single dish observations, respectively. Beam shape values are given in Table \ref{Obs_props:tab}. Each panel shows the integrated red- and blue-shifted emission from the detected outflows as red and blue contours, respectively, over the velocity ranges listed in Table \ref{OutflowCalc:tab}. Panel \textit{(A)} shows the outflows in ALMA CO(3-2), as unfilled contours, and $^{13}$CO(3-2), as filled contours; the dashed box around outflow A$_{blue}$ denotes the region shown in the adjacent panel. Panel \textit{(A zoom)}: zoom-in of the region bordered by the dashed box in Panel \textit{(A)}; here the $^{13}$CO(3-2) emission is shown as solid contours at 10, 30, 45, 60, 75 and 90\% of the peak integrated emission. The 10\% contour of CO(3-2) is shown as the dotted contour for comparison to Panel \textit{(A)}. Panel \textit{(B)}: ATCA SiO(1-0) emission. Panel \textit{(C)}: ALMA HNC(1-0) emission; this panel also includes, as the filled green contour, the 4.5\mewm\ emission above 12.5 MJy/sr for the EGO observed in SDC335 \citep{Cyganowski09}. Panel \textit{(D)}: APEX detected CO(4-3), as solid contours, and CO(3-2), as dashed contours, emission at 50, 60, 70, 80 and 99\% of the APEX data peak emission at each frequency. Panel \textit{(E)}: APEX detected CO(6-5), as solid contours, and CO(7-6), as dashed contours, emission at 50, 60, 70, 80 and 99\% of the APEX data peak emission at each frequency. For panels \textit{(A)} \& \textit{(B)} the contour levels are at 10, 30, 45, 60, 75 and 90\% of the peak emission per outflow; for panel \textit{(C)} the contour levels are at (5), 10, 20, 30, 40, 50, 70 and 90\% of the peak emission per outflow (5\% contour for red lobe only, to emphasise the curvature noted in the text).}} \label{OUTFLOW_GRID:fig}% \end{figure*} \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/OUTA_SPEC_blue.pdf} \caption{Averaged spectra measured toward the blue lobe of outflow A for species CO, $^{13}$CO, HNC and SiO.
The spectra are measured within the outer contour of the integrated intensity maps from Figure \ref{OUTFLOW_GRID:fig} for each species. The vertical dashed line gives the $V_{lsr}$ of the target and the vertical dotted lines mark the range in velocity from Table \ref{OutflowCalc:tab}, based on which the integrated intensity maps were produced, as were the CO outflow property calculations. The chosen limits are described in the main text. } \label{OUTA_SPEC_blue:fig}% \end{figure} \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/OUTA_SPEC_red.pdf} \caption{Averaged spectra measured toward the red lobe of outflow A for species CO, HNC, and SiO. Measurement regions and line markings as per Figure \ref{OUTA_SPEC_blue:fig}. } \label{OUTA_SPEC_red:fig}% \end{figure} \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/OUTB_SPEC.pdf} \caption{Averaged spectra measured toward the blue and red lobes of outflow B for species CO. Measurement regions and line markings as per Figure \ref{OUTA_SPEC_blue:fig}. } \label{OUTB_SPEC:fig}% \end{figure} \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/OUTC_SPEC.pdf} \caption{Averaged spectra measured toward the blue (and only detected) lobe of outflow C for species CO, HNC, and SiO. Measurement regions and line markings as per Figure \ref{OUTA_SPEC_blue:fig}. } \label{OUTC_SPEC:fig}% \end{figure} \subsection{Outflow descriptions} \label{OutDescript:sec} \subsubsection{A: North-south molecular outflow} The most apparent of the three outflows, outflow A, runs approximately from north to south through the MM1\footnote{A note on nomenclature: within this paper, we use MM1 to indicate the millimetre emission core from \citet{Peretto13}, while MM1\textit{a}\ and MM1\textit{b}\ refer to the smaller radio emission/HCH\textsc{ii}\ regions from \citetalias{Avison15}, which are both encompassed within the MM1 mm-core.} core. The outflow is detected in all tracers: SiO, HNC, CO (both ALMA and APEX), and $^{13}$CO\ (see Figure \ref{OUTFLOW_GRID:fig}). However, the red northern component of this outflow is not detected in the $^{13}$CO\ data. This is consistent with the trend, seen throughout all the presented data, that red-shifted emission is weaker within each outflow. We attribute this to the geometry of the system combined with the obscuring effect of self-absorption by the intervening material. It is not immediately clear which of the two HCH\textsc{ii}\ regions within MM1 is the progenitor of outflow A. As noted in \citetalias{Avison15}, the area of maximal overlap between the two wings of the HNC outflow (Figure \ref{OUTFLOW_GRID:fig}, panel C) is coincident with the position of the MM1\textit{a}\ compact radio source. At a higher resolution, the morphology of the red-shifted emission seems to show a slight curvature toward MM1\textit{b}\ within the central core region, as seen in CO (3-2) and SiO (Figures \ref{OUTFLOW_GRID:fig}-A and \ref{OUTFLOW_GRID:fig}-B), but this could be due to interactions between the two outflows A and B (\S \ref{DescB:sec}) originating from this region. Since the blue-shifted emission is elongated linearly with the major axis of emission aligned toward the MM1\textit{a}\ core, we treat MM1\textit{a}\ as the progenitor of this outflow. Using both the red and blue lobes, we measure an average length-to-width ratio for this outflow of 3.5 at the 10\%-of-peak integrated intensity contour in Figure \ref{OUTFLOW_GRID:fig}-A.
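As an aside, a length-to-width ratio of this kind can be estimated from the principal axes of the pixels enclosed by the 10\%-of-peak contour. The short Python sketch below illustrates one plausible approach; it is an assumption on our part, not necessarily the exact measurement method used for the values quoted in this paper.

\begin{verbatim}
import numpy as np

def length_to_width(mask):
    """Estimate the length-to-width ratio of an outflow lobe from a
    2-D boolean mask of pixels inside the 10%-of-peak contour."""
    y, x = np.nonzero(mask)                    # pixel coordinates
    coords = np.column_stack([x, y]).astype(float)
    coords -= coords.mean(axis=0)              # centre on the lobe
    # Eigenvalues of the coordinate covariance matrix give the
    # variances along the minor and major axes (ascending order).
    evals = np.linalg.eigvalsh(np.cov(coords, rowvar=False))
    return float(np.sqrt(evals[1] / evals[0]))

# Example usage, given a moment-0 image array mom0:
#   mask = mom0 > 0.1 * np.nanmax(mom0)
#   ratio = length_to_width(mask)
\end{verbatim}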
\subsubsection{Jet-like properties in outflow A} An inspection of the ALMA CO and $^{13}$CO\ data shows that the blue lobe of outflow A is highly collimated and exhibits a high velocity gradient. The length-to-width ratios of the ALMA CO and $^{13}$CO\ emission, at the 10\% of peak integrated intensity contours in Figure \ref{OUTFLOW_GRID:fig}-A$_{zoom}$, are 2.8 and 3.4, respectively, matching the fiducial lower bound of collimation for a jet structure ($\sim$ 3, e.g. \citealt{Arceetal06}) and comparable to other observed molecular jet sources, such as HH211 \citep{Gueth99} and Outflow C in IRAS 05358$+$3543 \citep{Beuther02}. Figure \ref{PVJet:fig} gives a position velocity (PV) diagram along the outflow axis as observed in CO, outward from the driving source and covering a physical scale of $\sim 0.09$pc. There is a clear linear velocity gradient (highlighted by the white dashed line) in both the CO emission (colour scale) and $^{13}$CO\ (displayed as 5 and 10$\sigma$ contours). From the CO data, we calculate a velocity gradient of 155 kms$^{-1}$\ pc$^{-1}$. This value is comparable to molecular outflow velocities seen in other high mass sources, such as K3-50A \citep{Klaassen13}. \begin{figure} \centering \includegraphics[scale=0.5]{SubmissionFigsPDFs/PVJet.pdf} \caption{Position velocity diagram along the A$_{Blue}$ outflow axis. The PV data cover a physical size of 0.09pc offset from the driving source (MM1\textit{a}) and stopping before the `knot-like' feature described in the text. \textit{Colourscale:} ALMA CO(3-2) data. \textit{Contours:} ALMA $^{13}$CO(3-2) data. The white dashed line shows the observed velocity gradient of $\sim$155 kms$^{-1}$\ pc$^{-1}$.} \label{PVJet:fig}% \end{figure} \subsubsection{B: East-west molecular outflow} \label{DescB:sec} Outflow B runs approximately east to west through the MM1 core and is seen primarily in CO (both ALMA and APEX), with the red lobe also seen in both HNC and SiO. The APEX CO data (at all four frequencies) are dominated by an east-west (EW) component (Figure \ref{OUTFLOW_GRID:fig}, D and E). The known extended green object (EGO) in SDC335 \citep{Cyganowski08} is orientated in this same EW direction (green filled contour in Figure \ref{OUTFLOW_GRID:fig}-C). The 4.5\mewm\ emission responsible for EGO features is thought to be created predominantly by shocked H$_2$ in outflowing material \citep[][and references therein]{Cyganowski11}. Outflow B displays an average length-to-width ratio, as measured in the ALMA CO data, of 3.0. From the ALMA CO(3-2) data (Fig. \ref{OUTFLOW_GRID:fig}-A), the morphology of this outflow would suggest that MM1\textit{b}\ is the likely progenitor. Interestingly, outflow B is approximately perpendicular, as projected on the sky, to outflow A. This suggests, if MM1\textit{a}\ and MM1\textit{b}\ represent individual protostars, that their respective axes of rotation are likely to have very different orientations. Reasons for this misalignment in SDC335 are not clear from these data, but a possible scenario is considered in Section \ref{RosenSims:sec}. \subsubsection{C: NW (SE) molecular outflow} Outflow C is located to the north-west (NW) of the MM2 core. Detected in the interferometric data in CO, HNC, and SiO, the molecular emission appears to come from a single outflow lobe, with its counterpart undetected, likely because it has a velocity similar to the systemic velocity of SDC335 and, therefore, falls within the excluded velocity range.
This is supported by red-shifted emission in the APEX CO data, suggestive of a counterpart lobe. This outflow hosts the brightest shock-tracing SiO emission of the three outflows and the strongest Class-I CH$_3$OH\ maser detected here. Due to the lack of a counterpart red lobe, we do not attempt to measure an average length-to-width ratio for this outflow. \subsubsection{Considering other possible outflows} The detection of the three outflows described above does not preclude the existence of additional outflows within SDC335. Using the ALMA band 7 CO data (the dataset with both the highest sensitivity and spatial resolution), we see that, close to the $V_{lsr}$ of the system, the potential for additional structures still remains. However, the current data in this region suffer from two issues arising from interferometric artefacts: firstly, the bright CO emission and, therefore, the associated interferometric side-lobe artefacts are complex at these velocities; and secondly, owing to a lack of zero-spacing or single dish data, the ALMA image suffers from negative `bowling' that comes from resolved-out extended emission in the source\footnote{see e.g. \citet{Braun85} and Chapter 8 of \citet{SynthesisImaging} for an explanation of bowling as distinct from interferometric side-lobes.}, thereby further complicating the assessment of features at these velocities. These issues, combined with a lack of evidence for other outflows in our data at other frequencies, lead us to disregard some tentative structures which may appear to be additional outflows. \begin{table} \caption[]{Velocity ranges used in calculating outflow properties and integrated intensity maps, by observed species, for the ALMA and ATCA observed transitions. } \begin{center} \begin{tabular}{c c c c} \hline \hline Outflow & Molecular & Lobe & $V_{range}$\\ & species & & [kms$^{-1}$] \\ \hline \multirow{7}{*}{A} & \multirow{2}{*}{CO} & Blue & $-$59.6 to $-$82.4 \\ & & Red & $-$11.6 to $-$33.6\\ &$^{13}$CO & Blue & $-$51.6 to $-$67.9 \\ &\multirow{2}{*}{SiO} & Blue & $-$51.6 to $-$74.0 \\ & & Red & $-$23.9 to $-$41.6 \\ &\multirow{2}{*}{HNC} & Blue & $-$51.6 to $-$70.3 \\ & & Red & $-$24.1 to $-$41.6\\ \hline \multirow{2}{*}{B} & \multirow{2}{*}{CO} & Blue & $-$59.6 to $-$74.0 \\ & & Red & $-$10.0 to $-$33.6\\ \hline \multirow{3}{*}{C} & CO & Blue & $-$56.6 to $-$65.6 \\ &SiO & Blue & $-$51.5 to $-$60.2 \\ &HNC & Blue & $-$51.5 to $-$57.5 \\ \hline \end{tabular} \end{center} \label{OutflowCalc:tab} \end{table} \subsection{Outflow properties} \begin{table*} \caption[]{Measured and derived outflow properties.
} \begin{center} \begin{tabular}{c c c l c | c c | c c c | c} \hline \hline & & & & & \multicolumn{2}{ c |}{Measured} & \multicolumn{3}{c |}{Corrected for inclination$^{\dagger\dagger}$} & \\ Outflow & V$_{lsr}$$^{\dagger}$ & Inclination$^{\dagger\dagger}$ & Molecular & Wing & $l$ & $|$V$_{max}$$|$ & $l$ & $|$V$_{max}$$|$ & t$_{dyn}$ & Associated\\ & [kms$^{-1}$] & Angle [$^{\circ}$] & Species & & [pc] & [kms$^{-1}$] & [pc] & [kms$^{-1}$] & [10$^3$ years] & Masers$^{\ast}$ \\ \hline \multirow{8}{*}{A} &\multirow{8}{*}{$-$46.6} & \multirow{8}{*}{53-76} & CO (3-2)& Red & 0.30 & 35.0 & 0.34 & 87.6 & 3.8 & \multirow{8}{*}{1, 2, 8} \\ & & &$^{13}$CO (3-2) & Red & $-$ & $-$ & $-$ & $-$ & $-$ &\\ & & &SiO (1-0) & Red & 0.29 & 22.7 & 0.32 & 56.7 & 5.5 &\\ & & &HNC (1-0) & Red & 0.30 & 22.5 & 0.33 & 56.3 & 5.8 &\\ & & &CO (3-2)& Blue & 0.16 & 35.8 & 0.18 & 89.5 & 1.9 &\\ & & &$^{13}$CO (3-2) & Blue & 0.12 & 21.3 & 0.14 & 53.2 & 2.6 & \\ & & &SiO (1-0)& Blue & 0.17 & 27.4 & 0.19 & 68.5 & 2.7&\\ & & &HNC (1-0)& Blue & 0.16 & 23.7 & 0.18 & 59.5 & 3.0 &\\ \hline \multirow{2}{*}{B} &\multirow{2}{*}{$-$46.6} &\multirow{2}{*}{59-79} & CO& Red & 0.30 & 36.6 & 0.33 & 110.9 & 2.9 & \multirow{2}{*}{3, 4, 5, 6} \\ & & &CO & Blue & 0.25 & 27.4 & 0.25 & 83.1& 3.2 & \\ \hline \multirow{3}{*}{C} &\multirow{3}{*}{$-$46.5} &\multirow{3}{*}{57.3} & CO (3-2)& Blue & 0.19 & 19.1 & 0.22 & 35.4 & 6.1 & \multirow{3}{*}{7} \\ & & &SiO (1-0) & Blue & 0.15 & 13.7 & 0.18 & 25.3 & 7.1 & \\ & & &HNC (1-0) & Blue & 0.16 & 11.0 & 0.19 & 20.4 & 9.2 &\\ \hline \end{tabular} \end{center} \vspace{0.05mm} {\tiny{\textbf{Notes:}\\$^{(\dagger)}$ $V_{lsr}$ is the local standard of rest velocity of the outflow driving source, see text for details.\\ $^{(\dagger\dagger)}$ For outflows A and B, we present values of $l$, $|V_{max}|$ and t$_{dyn}$ calculated using the average value of the correction factor function over the inclination angle range in column 3; the values of the correction factors are given in Table \ref{CorrectionFactors:tab}. For outflow C, we use the single value of 57.3$^{\circ}$. See text for definitions and inclination correction factors.\\ $^{(\ast)}$ Associated Class I methanol masers as numbered in Table \ref{Maser:tab} and Figure \ref{MaserCO:fig}}.\\} \label{outflowprop:tab} \end{table*} \subsubsection{Inclination angles} \label{incangle:sec} To estimate the properties of the detected outflows, we must consider their inclination to the plane of the sky, measured as the angle $i$, with $i=0$$^{\circ}$\ indicating that the axis of the outflow is oriented along the observer's line of sight and $i=90$$^{\circ}$\ indicating that it is perpendicular to the line of sight. To assess this, we used the ALMA CO integrated intensity maps (shown in Figure \ref{OUTFLOW_GRID:fig}-A) as this dataset has the highest spatial resolution and signal-to-noise ratio. In the first instance, we assume each outflow has a symmetric biconical geometry, following \citet{CabritBertout86}, and we define the opening angle, $\theta_{max}$, as the angle between the cone's axis of symmetry and the outer emission contour at 3$\sigma$ (see Figure \ref{outflowIncExplain:fig}).
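The four \citet{CabritBertout86} cases summarised below reduce to two inequalities on $i$ and $\theta_{max}$; as an aid to the reader, a minimal sketch of this classification (our restatement of the conditions, not code from \citealt{CabritBertout86}) is:
\begin{verbatim}
# Restatement of the Cabrit & Bertout (1986) morphology cases as
# conditions on the inclination angle i and opening angle theta_max
# (both in degrees, with i = 0 along the line of sight).
def cb86_case(i, theta_max):
    crossing = i < theta_max         # a lobe cone contains the line of sight
    spanning = i > 90.0 - theta_max  # lobe emission spans the source V_lsr
    if crossing and not spanning:
        return 1  # overlapping lobes, blue in front of red
    if not crossing and not spanning:
        return 2  # two distinct, single-colour lobes (outflows A and B)
    if not crossing and spanning:
        return 3  # distinct lobes, each with red- and blue-shifted emission
    return 4      # overlapping lobes that also span V_lsr
\end{verbatim}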
\begin{figure*} \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.6, trim = 220 160 160 120, clip]{SubmissionFigsPDFs/iGTtheta_size} \caption{Limiting case A: $i \geq \theta_{max}$, here $i\sim\theta_{max}$.} \label{iGTtheta:fig} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.6, trim = 220 160 160 120, clip]{SubmissionFigsPDFs/iGT90mintheta} \caption{Limiting case B: $i \leq 90-\theta_{max}$, here $i\sim90-\theta_{max}$.} \label{iGT90mintheta:fig} \end{subfigure} \caption{Two limiting cases of the \citet{CabritBertout86} `case 2' outflow morphology ascribed to outflows A and B. The \citet{CabritBertout86} `case 3' outflow occurs when $i$ is sufficiently large to make the rear of the blue lobe and the front of the red lobe cross the dotted $V_{lsr}$ line, as shown, for example, in (b). } \label{outflowIncExplain:fig}% \end{figure*} Next, we compare the measured $\theta_{max}$ and the observed positions and morphology of the red and blue outflow lobes on the plane of the sky to the model outflows described by \citet{CabritBertout86}. The work of \citet{CabritBertout86} includes four model `cases' of differing inclination angle, $i$, and opening angle (see their Table 1) to describe the morphology of the red and blue lobes an observer would expect to see on the sky for each given case. In brief\footnote{For more details, please refer to the original text in \citet{CabritBertout86}.} the four cases are as follows: Case 1 describes an outflow with both a red-shifted and a blue-shifted lobe on the line of sight, namely, with the blue lobe in front of the red lobe. The inclination angle conditions for this case are $i < \theta_{max}$ and $i \leq 90^{\circ} - \theta_{max}$. Observationally, this yields an outflowing red-blue lobe pair which overlap spatially on the line of sight. Case 2 describes a case with two observed outflow lobes, again one red-shifted and one blue-shifted, but with the inclination angle conditions $i \geq \theta_{max}$ and $i \leq 90^{\circ} -\theta_{max}$ (labelled A and B in Figure \ref{outflowIncExplain:fig}). Observationally, there will be no point in the plane of the sky where the red and blue lobes spatially (or kinematically) overlap. Case 3 has outflow lobes which do not overlap spatially (as per case 2) but with the emission from each lobe in the pair spanning the source $V_{lsr}$, resulting in mixed red-and-blue-shifted emission from each lobe. The \citet{CabritBertout86} model here requires inclination angles of $i \geq \theta_{max}$ and $i > 90^{\circ} -\theta_{max}$. Case 4 requires a very large opening angle ($\theta_{max}$) and inclination angles obeying the conditions $i < \theta_{max}$ and $i > 90^{\circ} -\theta_{max}$. This results in lobes that overlap spatially (as per Case 1) and also span the $V_{lsr}$ and, as such, exhibit both red-shifted and blue-shifted emission (as per Case 3). We note that for the purposes of the current study, only cases 2 and 3 are relevant. Outflow A has two spatially and kinematically non-overlapping lobes in the plane of the sky. This is indicative of a \citet{CabritBertout86} case 2 source. We find that the opening angle, $\theta_{max}$, of the blue-shifted part of outflow A is $\sim 14^{\circ}$, putting loose limits on $i$ of between 14 and 76$^{\circ}$. Similarly, outflow B has two spatially and kinematically non-overlapping lobes, again indicating a \citet{CabritBertout86} case 2 source.
We measure $\theta_{max}$ as between $\sim$9.4 and 12.6$^{\circ}$, and use the average value of 11$^{\circ}$\ in calculations, giving $i$ between 11 and 79$^{\circ}$. Determining the inclination of outflow C is complicated by the fact that only one lobe is reliably detected to the north-west of MM2. We could assume that C is a \citet{CabritBertout86} case 3 source ($i\geq \theta_{max}$ and $i > 90^{\circ} - \theta_{max}$) given that the observed velocity range of the one observed lobe spans the $V_{lsr}$ of MM2. This would give an inclination angle of $>77^{\circ}$ given our measured $\theta_{max}$ of 13.4$^{\circ}$. However, as the red lobe is not reliably detected in this source, we cannot confirm that the red lobe would similarly demonstrate emission spanning the $V_{lsr}$ and, therefore, we use, instead, the commonly adopted statistical average value of $i =57.3^{\circ}$. This average inclination angle, $\langle \theta \rangle$, is calculated as: \begin{equation} \langle \theta \rangle = \frac{\int_{0}^{\pi/2} \theta \sin \theta \,\rm{d}\theta}{\int_{0}^{\pi/2} \sin \theta \,\rm{d}\theta} = 1\,\rm{rad} \simeq 57.3^{\circ} \label{AvAngle:eqn}.\end{equation} \subsubsection{Further consideration of the outflow inclination angles for outflows A and B.} \label{angConsider:sec} Based solely on the \citet{CabritBertout86} models, the potential inclination angles for outflows A and B cover ranges of 62$^{\circ}$ and 68$^{\circ}$, respectively. Such broad ranges introduce significantly different correction factors when deriving outflow properties. These correction factors range between 1.0 and 5.2 for the measured length, velocity, and momentum, between 1.0 and 27.5 for energy, and between 0.2 and 27.0 for momentum flux (see Section \ref{DerivProps:sec} for a description of these derived properties and their respective correction factors). To further limit the possible range of inclination angles, we now outline an additional consideration, beyond the \citet{CabritBertout86} models, based on the observed length-to-width ratios ($LWR_O$) and the observed distance from the driving source to the maximum width ($x_O$) of the observed outflows. We present in Appendix \ref{incAngle:app} the full geometric derivation of this morphology and our associated results. Outflows A and B have observed length-to-width ratios of 3.5 and 3.0, respectively (the average of both red and blue lobes for each). Given also these outflows' observed opening angles, and assuming the biconical outflow morphology of \citet{CabritBertout86}, we find that it is not possible to recover such observed length-to-width ratios without an external influence on the outflow. With their respective opening angles and a bicone morphology, the maximum recoverable $LWR_O$ for each outflow reaches an asymptote at angles of 76 and 79$^{\circ}$, with $LWR_O$ values of 2.1 and 2.6 for A and B, respectively. This is because, under the biconical outflow morphology, the end of the outflow at the observed length, $L_O$, is also the point along the outflow of maximum observed width. In observations and modelling of molecular outflows, the idealised biconical morphology is seldom, if ever, seen (see e.g. \citealt[and references therein]{Frank14}). This is due to the influence of the environment on the outflow. In particular, modelling of the effects of collapsing clumps, precession, turbulence, and magnetic fields \citep[see e.g.][]{Rosen20} shows that the outflow morphologies are narrower along the outflow axis than the bicone morphology would suggest.
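As a rough numerical check of the bicone argument above, one can adopt the simplest projection assumptions (not the full geometry of Appendix \ref{incAngle:app}): the projected lobe length scales as $\sin i$, while the maximum width, $2L\tan\theta_{max}$ at the lobe end, is unaffected by inclination:
\begin{verbatim}
# Approximate observed length-to-width ratio of an inclined bicone.
# This simplified projection gives values close to, though not exactly,
# the asymptotic 2.1 and 2.6 quoted above, which come from the full
# geometric treatment in the Appendix.
import numpy as np

def bicone_lwr(i_deg, theta_max_deg):
    i, t = np.radians(i_deg), np.radians(theta_max_deg)
    return np.sin(i) / (2.0 * np.tan(t))

print(bicone_lwr(76.0, 14.0))  # ~1.9 for outflow A (observed LWR 3.5)
print(bicone_lwr(79.0, 11.0))  # ~2.5 for outflow B (observed LWR 3.0)
\end{verbatim}
In either treatment, the maximum recoverable ratio falls well short of the observed values of 3.5 and 3.0, motivating the alternate morphology below.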
Given these effects and the fact that we cannot reproduce the observed \textit{LWR} for outflows A and B using the biconical morphology, we instead propose an alternate morphology to limit the inclination angle ranges of these outflows. Our proposed `pencil-like' morphology allows the widest point to lie at some arbitrary point, $x$, along the length (as is observed for outflows A and B). Using this second morphology (as detailed in Appendix \ref{incAngle:app} and Figure \ref{NewOutflow:fig}), we find inclination angle ranges of between 53 and 76$^{\circ}$\ for outflow A and between 59 and 79$^{\circ}$\ for outflow B which can provide the observed $LWR_O$. An alternate scenario by which the observed $LWR_O$ for outflows A and B could be achieved would consist of observing these outflows at low inclination angles, where the observed emission primarily arises from the outflow cavity rather than the outflow edge (i.e. Figure \ref{iGTtheta:fig}, as opposed to Figure \ref{iGT90mintheta:fig} or similar for our alternate morphology). In such a case, the observed length-to-width ratios would require shaping of the outflow cavity away from symmetry about the outflow direction by the surrounding medium, compressing and elongating the cross-section of the cavity in a single dimension. In the case of SDC335, to observe two outflows from driving sources separated by $<$9000 AU which display similar levels of compression of their outflow cavities but in orthogonal directions seems implausible. We therefore consider it more likely that the outflows are inclined closer to the plane of the sky than along the line of sight. As such, we consider the lower bounds from the \citet{CabritBertout86} models as extreme values and base our analysis instead on the values allowed by the geometric arguments presented in Appendix \ref{incAngle:app}, of 53 to 76$^{\circ}$\ for outflow A and 59 to 79$^{\circ}$\ for outflow B. Table \ref{CorrectionFactors:tab} provides the correction factors applied to the derived values for each outflow to create the inclination angle corrected values given in Tables \ref{outflowprop:tab} and \ref{CO_outflowprop:tab}. For outflow C we use the correction factor at 57.3$^{\circ}$, and for outflows A and B we use the average value of the correction function over the angle range given in column 3 of Table \ref{outflowprop:tab}. For completeness, we also include in Table \ref{CorrectionFactors:tab} the correction factors which would apply if the inclination angle ranges were based solely on the \citet{CabritBertout86} cases (given in italics) and, in Appendix B, a brief discussion of the implications of using these correction factors. The proper assignment of inclination angles to observed outflows is a difficult task and has significant implications for derived properties. With current leading observatories, for example ALMA, providing a wealth of observed molecular outflows, it is timely to investigate a robust method of assigning these angles. We hope that the use of an alternate morphology in this work will prompt further discussion of this matter. \begin{table*} \caption[]{Factors used to correct the observed outflow properties for the inclination angle, $i$. The average value of the function over the angle range is given for outflows A and B, and for outflow C the value at 57.3$^{\circ}$\ is presented.
Additionally, for outflows A and B, we provide in italics the correction factors which would apply if the considerations made in Section \ref{angConsider:sec} and Appendix \ref{incAngle:app} were discounted. } \begin{center} \begin{tabular}{c c c c c} \hline \hline Derived & Correction & Average in range 53-76$^{\circ}$\ \textit{(14-76$^{\circ}$)}& Average in range 59-79$^{\circ}$\ \textit{(11-79$^{\circ}$)}& Value at 57.3$^{\circ}$\ \\ property & factor & (Outflow A) & (Outflow B) & (Outflow C)\\ \hline Length, $l$ & $\frac{1}{\sin i}$ & 1.12 \textit{(1.71)} & 1.08 \textit{(1.81)} & 1.19\\ Velocity, $|V_{max}|$, & \multirow{2}{*}{$\frac{1}{\cos i}$} & \multirow{2}{*}{2.50 \textit{(1.71)}} & \multirow{2}{*}{3.03 \textit{(1.81)}} & \multirow{2}{*}{1.85}\\ Momentum, $P$ & & & &\\ Energy, $E$ & $\frac{1}{\cos ^2 i}$& 6.69 \textit{(3.47)} & 9.97 \textit{(4.17)}& 3.43\\ Momentum Flux, $F$ & $\frac{\sin i}{\cos ^2 i}$& 6.16 \textit{(2.86)} & 9.45 \textit{(3.56)} & 2.88\\ \hline \end{tabular} \end{center} \label{CorrectionFactors:tab} \end{table*} \subsubsection{Temperature estimation} \label{Radex:sec} The four observed transitions of CO within the APEX data allow us to estimate the kinetic temperature of the outflowing gas in SDC335. To achieve this, we first regridded the data from each observation to the poorest spatial and velocity resolution, those being APEX CO (3-2) spatially and APEX CO (7-6) in velocity. We then created integrated intensity maps at 5\,kms$^{-1}$\ intervals from each cube. For each observed outflow, we measured the integrated intensity (in K\,kms$^{-1}$) at two positions along the outflow, creating a spectral line energy distribution (SLED) at each position. We then followed the approach of \citet{Liu18} by generating a large grid over kinetic temperature, number density, and column density with the radiative transfer code RADEX \citep{VandertakRADEX} and searching for the modelled intensities that best fit our observed SLED intensities, giving a temperature at a given position. The best fit is defined as $\chi^2\sim1$, following the $\chi^2$ approach of \citet{vanderTak2000}. We then take all models with $\chi^2=1\pm0.05$ and use the average properties of these models. Figure \ref{SLED:fig} gives an example of the APEX CO intensities and best fit model at a given position and velocity along outflow A. Using this technique we find $T_{kin}$ values of 62/53K, 54/61K, and 55K for the blue and red components of outflows A, B, and C, respectively (see Table \ref{CO_outflowprop:tab}). \begin{figure} \centering \includegraphics[scale=0.5]{SubmissionFigsPDFs/SLED.pdf} \caption{Example spectral line energy distribution used for fitting the temperature toward the blue lobe of outflow A. The data (black circles with errorbars) are at CO transitions (3-2), (4-3), (6-5), and (7-6). The model fits (dotted and solid lines) from RADEX also include CO (5-4). The dotted lines represent the best fit ($\chi^2=1\pm0.05$) SLEDs from the RADEX grid. The solid line represents the average of these best-fit model SLEDs for this example. This fit gives T$_{kin}$ = 66K.} \label{SLED:fig} \end{figure} \subsubsection{Derived properties} \label{DerivProps:sec} Using the inclination angles discussed in Section \ref{angConsider:sec}, we were able to calculate inclination corrected physical properties for each outflow. For each observed species, we provide in Table \ref{outflowprop:tab} both the measured and the inclination angle corrected outflow lobe length, $l$, and $|V_{max}|$.
We use the inclination-corrected values to calculate a dynamical time, $t_{dyn}$, for each outflow lobe. We define these properties and their inclination angle correction factors following \citet[and references therein]{Cunningham16}, as follows: The lobe length, $l$, is the maximum distance from the progenitor peak to the $3\sigma$ emission contour over the wing's velocity range. The corrected length is calculated as $l_{corr}=l_{measured}/\sin(i)$ for inclination angle $i$. The absolute velocity difference, $|V_{max}|$, is defined as the difference between the outflow progenitor's $V_{lsr}$ and the final velocity channel in the image cube to show $3\sigma$ emission from the outflow, with the correction for $i$ calculated as $|V_{max_{corr}}|=|V_{max_{meas}}|/\cos(i)$. Finally, the dynamical age of the outflow is calculated as $t_{dyn}=l_{corr}/|V_{max_{corr}}|$. For the ALMA CO observations, we also provide in Table \ref{CO_outflowprop:tab} the mass, momentum, and energy of each outflow lobe, following the methods of \citet{Ana12}. As part of this analysis, we note that, as may be expected for a high mass star forming molecular cloud, emission close to the systemic velocity of the cloud becomes increasingly complex, with features of self-absorption and missing large scale flux in the interferometric data, which can lead to uncertainties in values derived from emission observed within these velocity ranges. In order to avoid basing our findings on velocity ranges within the data affected by self-absorption or imaging artefacts, we exclude data close to the systemic velocity in the following way. For the MM1 core, we take the spectrum of the CO (3-2) data from APEX over the whole SDC335 velocity range. We fit a Gaussian to this spectrum, centred at the $V_{lsr}$, and exclude emission from channels within $\pm2.5\times$ the standard deviation, $\sigma$, of this Gaussian fit. The fitted $\sigma$ was found to be 5.2kms$^{-1}$. The factor of 2.5 was chosen to ensure that all self-absorption and missing flux features in the interferometric data were excluded for the MM1 core outflows (A and B), as can be seen in Figures \ref{OUTA_SPEC_blue:fig}, \ref{OUTA_SPEC_red:fig}, and \ref{OUTB_SPEC:fig}. Excluding these velocity channels means our derived properties are lower limits for outflows A and B. We repeat the same process for the MM2 core, taking the APEX CO (3-2) spectrum and fitting a Gaussian. Here, $\sigma$ is found to be 6.7kms$^{-1}$. The velocity range of the observed outflow lobe C from source MM2 is so close to the $V_{lsr}$ of MM2 that excluding emission from the $\pm2.5\sigma$ velocity channels would remove too much emission to provide a realistic determination of the values. Instead, we use $\pm1.5\sigma$, which is sufficient to avoid self-absorption artefacts in the CO data whilst retaining enough data to obtain a meaningful result. The velocity ranges used for calculating outflow properties and generating the integrated intensity images of each outflow lobe shown in Figure \ref{OUTFLOW_GRID:fig} are given in Table \ref{OutflowCalc:tab}. Although other molecular species are present in the current data, we do not carry out the following calculations with them, for the following reasons.
In particular, SiO is a shock-tracing species and therefore cannot reliably trace the bulk outflow properties, since it will be enhanced in regions of shocks and less abundant elsewhere; HNC is currently a poorly studied outflow tracer, so it is not clear whether (like CO) it traces the bulk outflow properties, and further studies of this species in multiple targets, beyond the scope of this paper, would be required to assess its nature in outflows. For our calculations with the CO ALMA data, we define the mass within the outflow, $M_{out}$, in units of \solmass, as: \begin{equation} \begin{split} M_{out}=\left(\frac{10^3}{M_{\odot}}\right)\left(\frac{8Q\pi k \nu^2}{A_{ul}hg_uc^3} \right)\exp\left(\frac{E_{ul}}{T}+\frac{h\nu}{kT}\right)\\\times\left(\frac{\mu/N_{A}}{M_{\odot}}\right)D^2 \sum \limits_{x,y,v} \tau_{corr} T_{mb} \rm{d}V \end{split} \label{Mout:eqn} ,\end{equation} \noindent where $\mu$ is the mean molecular mass, $N_{A}$ is Avogadro's constant, $Q$ the partition function, and $A_{ul}$ the Einstein coefficient. $E_{ul}$ and $g_{u}$ are the energy of the transition in K and the statistical weight of the upper energy level (2J+1), respectively, and $D$ is the distance to SDC335 (3.25kpc). The value given by $ \sum \limits_{x,y,v}{T_{mb} \rm{d}}V$ is equivalent to the integrated intensity and is calculated from the data cubes within a polygon region around each lobe. The sum is made per pixel, $x, y$, per velocity channel, $v$, over each outflow's respective velocity range for all pixels with an intensity greater than or equal to three times the RMS per channel. Within this sum, the data units are converted from the native units of Jy/beam to Kelvin. The factor $\tau_{corr}$ gives the optical depth correction factor of the form $\tau_{corr}=\frac{\tau}{1-e^{-\tau}}$ and d$V$ gives the channel velocity width in kms$^{-1}$. We calculate the mass both with a lower bound temperature $T$ of 20K and a literature value for $\tau_{corr}$ of 3.5 \citep{CabritBertout92}, as well as with the temperature values recovered from our best fit radiative transfer models from \S \ref{Radex:sec} (Table \ref{CO_outflowprop:tab}) and a $\tau_{corr}$ factor of 8.2 based upon our estimated $^{12}$CO optical depth. To estimate an optical depth for $^{12}$CO in the outflows, we use the $^{13}$CO\ to $^{12}$CO line ratio observed in the blue wing of outflow A. Following the procedure of \citet{Myers83}, we compare the $^{13}$CO/$^{12}$CO line ratios observed at multiple points (at peak emission) along the outflow to the ratio of optical depths given by $\frac{1-e^{-\tau_{^{13}CO}}}{1-e^{-\tau_{^{12}CO}}} $ and use the assumption that $\tau_{^{12}CO}$ is equal to $f\times \tau_{^{13}CO}$. Here, $f$ is the isotopic abundance ratio, in the range $f=50-100$ typical of star-forming regions \citep[e.g.][]{Pineda10,Szucs14}. Given this range of $f$, we can derive numerically a range of optical depths for $^{13}$CO\ and thus obtain a corresponding range of $\tau_{^{12}CO}$ values. We find a range of $\tau_{^{12}CO}$ of 2.4 to 23.5 and use the median value of 8.2. These are comparable to the typical values seen in CO outflows \citep[e.g.][]{CabritBertout92} of 1 to 8.9, with a median of 3.5.
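The numerical step in this derivation is a one-dimensional root find; a minimal sketch follows, in which the line ratio value is hypothetical (the per-position measured ratios are not quoted here), chosen only so that $f=50$ lands near our adopted median $\tau_{^{12}CO}$:
\begin{verbatim}
# Solve (1 - exp(-tau13)) / (1 - exp(-f*tau13)) = R for tau13, then
# tau12 = f * tau13, following the Myers et al. (1983)-style procedure
# described above. R is a hypothetical 13CO/12CO line ratio.
import numpy as np
from scipy.optimize import brentq

def tau12_from_ratio(R, f):
    g = lambda t13: (1.0 - np.exp(-t13)) / (1.0 - np.exp(-f * t13)) - R
    return f * brentq(g, 1e-8, 50.0)

for f in (50.0, 100.0):                  # assumed isotopic abundance range
    print(f, tau12_from_ratio(0.15, f))  # f = 50 gives tau12 ~ 8
\end{verbatim}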
Momentum and energy then follow as: \begin{equation} P_{CO}=\sum \limits_{x,y,v} M_{x,y,v} \Delta V \end{equation} \noindent and \begin{equation} E_{CO}=\frac{1}{2}\sum \limits_{x,y,v} M_{x,y,v} \Delta V^2 ,\end{equation} \noindent where $\Delta V$ is the absolute velocity difference between the source $V_{lsr}$ and the given channel in the sum, and $M_{x,y,v}$ is the outflow mass value for a single voxel ($x,y,v$), which follows from Equation \ref{Mout:eqn}. Finally, we calculate the momentum flux, $F_{CO}$, within annular regions projected out from the central source, which is defined (following \citealt[][their Equation 1]{BontempsOutflow96} and \citealt{Ana13}) as: \begin{equation} F_{CO}=\sum \limits_{x,y,v} \left( \frac{\Omega \tau_{corr} T_{mb} \rm{d}V \Delta V^2}{{\rm{d}}r}\right) \label{FCO:eqn} ,\end{equation} \noindent where d$r$ is the width of the annular ring in the sum and $\Omega$ is the same constant as the value preceding the integral in Equation \ref{Mout:eqn}. To take into account the assumed inclination angles, the values of $P_{CO}$, $E_{CO}$, and $F_{CO}$ are corrected by the factors $1/\cos(i)$, $1/\cos ^2(i)$, and $\sin(i)/\cos ^2(i)$, respectively, with their specific values for each outflow and outflow property given in Table \ref{CorrectionFactors:tab}. The error in inclination angle is the most significant contribution to the errors on these values. The derived outflow property values are given in Table \ref{CO_outflowprop:tab}. Our derived values are comparable to those from both observed and theoretical studies of low- to high-mass protostellar objects \citep[e.g.][]{Zhang05,Yildiz12,Cunningham16,vanKempen16,Cyganowski17,Staff18}, with the SDC335 outflows A and B typically having amongst the highest values compared to published sources. The values for outflow C are somewhat lower and likely affected by the limited velocity range we were able to use for this feature. These results confirm that the SDC335 protostellar sources are indeed high-mass protostars. When compared to \citet{Maud15}, the SDC335 sources appear to have a similar range of momentum fluxes and energies, but significantly lower masses and momenta. This difference is likely due to comparing single dish \citep{Maud15} and interferometric (this work) data, with the potential for multiple spatially unresolved outflows in the single dish data boosting values within the \citet{Maud15} sample. \section{Infall, accretion, and the evolutionary status of SDC335} \subsection{Cloud infall and protostellar accretion rates} \label{infall:sec} Outflow momentum flux (Equation \ref{FCO:eqn}) is a proxy for the protostellar mass accretion rate, $\dot M_{acc}$, since the driving force of outflows is thought to result from the conservation of angular momentum within an accreting system \citep{Konigl00,Pudritz07,Tan14}. A theoretical relation between $F_{CO}$ and $\dot M_{acc}$ equates the two values scaled by a material entrainment efficiency, $f_{ent}$, and the protostellar mass ejection rate and wind speed, $\dot M_{w}$ and $v_w$ \citep[see their Equation 4]{Ana13}. Following this form of the $F_{CO}$ to $\dot M_{acc}$ relation and assuming the same material entrainment efficiency and wind properties as these authors ($f_{ent} = 0.5$, $\dot M_{w}/\dot M_{acc} = 0.15$ and $v_w = 40$kms$^{-1}$), we derive the mass accretion rates given in Table \ref{CO_outflowprop:tab}.
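For reference, the per-voxel sums in the momentum and energy equations above reduce to simple array operations once the mass cube of Equation \ref{Mout:eqn} is in hand; a minimal sketch with hypothetical array names is:
\begin{verbatim}
# Per-voxel momentum and energy sums for the equations above.
# `mass_cube` is a hypothetical (nx, ny, nv) array of per-voxel outflow
# masses in Msun; `dv` holds the per-channel |V_lsr - V_channel|
# offsets in km/s (broadcast over the velocity axis).
import numpy as np

MSUN_KG = 1.989e30

def momentum_energy(mass_cube, dv):
    p = np.nansum(mass_cube * dv)           # P_CO in Msun km/s
    e = 0.5 * np.nansum(mass_cube * dv**2)  # in Msun (km/s)^2
    return p, e * MSUN_KG * 1.0e6           # energy converted to joules

# The inclination corrections would then be applied as P/cos(i) and
# E/cos^2(i), using the factors in Table 4.
\end{verbatim}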
For outflows A and B, the $\dot M_{acc}$ values derived from our $F_{CO}$ values are in the range of 8.7 - 64.8 $\times 10^{-5}$\solmass\ yr$^{-1}$, at a fixed temperature of $T=20$K, and 11.5 - 85.4 $\times 10^{-5}$\solmass\ yr$^{-1}$, for our derived temperatures ($T=53 - 62$K). For outflow C, the values are lower: $\dot M_{acc}$ is 0.2 $\times 10^{-5}$\solmass\ yr$^{-1}$ for $T=20$K and 0.3 $\times 10^{-5}$\solmass\ yr$^{-1}$ at the derived temperature of $T=55$K. The values for outflows A and B are at the higher end of the accretion rates inferred for low-mass protostars, $\dot M_{acc} \sim 10^{-5}$\solmass\ yr$^{-1}$ \citep[e.g.][]{McKeeTan03,HosokawaOmukai09, HosokawaYorkeOmukai10}, and are more typical of those inferred for high-mass protostars, $10^{-4} - 10^{-3}$\solmass\ yr$^{-1}$ \citep[see e.g.][]{Zhang05,Fuller05,Beuther13,Ana13,Rosen16,Goddi18, Rosen19}, from both observational and theoretical works reported within the literature. The value of $\dot M_{acc}$ for outflow C is very low and likely a consequence of the limited velocity range over which we are able to measure this outflow's properties. Indeed, as noted in Section \ref{OutDescript:sec}, the driving sources for all three of the observed outflows are known to be high-mass protostars owing to their association with 6.7GHz methanol masers, so the derived $\dot M_{acc}$ for outflow C can act as a lower limit. We now consider the total derived mass accretion rate, $\dot M_{tot. acc}$, within SDC335. This value is calculated as the sum of all the derived $\dot M_{acc}$ values presented in Table \ref{CO_outflowprop:tab}, giving a $\dot M_{tot. acc}$ of 1.4 ($\pm$ 0.1) $\times 10^{-3}$\solmass\ yr$^{-1}$. With this value, an interesting comparison can be made with the derived mass infall rate, $\dot M_{inf}$, for the whole SDC335 cloud. \citet{Peretto13} calculated a value of $\dot M_{inf} \simeq 2.5 (\pm 1.0) \times 10^{-3}$\solmass\ yr$^{-1}$. These two values are comparable within the quoted errors, which would imply a near 100\% efficiency of infalling material on cloud scales ($\sim$few pc) being funnelled down to the accretion disk scale ($\ll0.1$pc) and driving the outflows. We note, however, that this does not imply a 100\% accretion of the inflowing material onto the protostars. A near continuous flow of material from large (cloud/filament) to smaller (clump/core) scales would be significant, as it suggests that each core is acting as a sink within the cloud, drawing material from all scales, and, as such, would have implications for high-mass star formation models. The protostellar accretion rate is a characteristic of the whole forming cluster under competitive accretion models \citep{Bonnell01, Bonnell04, Bonnell07}, but under core accretion models \citep{McKeeTan02, McKeeTan03, Tan14} it is set by the properties of bound cores. It would appear that this result favours the former set of models. Another interesting implication of these inferred accretion rates is that the three known protostellar sources in SDC335 would appear to be drawing the majority of the material onto themselves, suggesting that there is a limited amount of material available to start forming any additional protostars within the system. In ALMA observations of the IRDC G28.34 P1, \citet{Zhang15} found a lack of low-mass protostars compared to the number expected from a typical IMF.
These authors concluded, after discounting migration of low-mass stars into the centre of the cluster, that the most likely scenario for this under-abundance is that the low-mass stars are yet to form. If this is true, then SDC335 may hold an explanation, in that early onset high-mass star formation dominates the inflowing material budget, depriving lower mass proto- or pre-stellar cores of the mass needed to begin forming. As the inclination angles of the observed outflows are the dominant factor in the error budget of these calculations, an additional study of the protostars in SDC335 is required to better constrain the outflow properties. For example, testing for the presence of rotation axes, indicative of accretion disks, within the sources would help further this investigation. \subsection{Evolutionary Status of SDC335} \label{EvoStat:sec} The works of \citet{Ana13}, \citet{Maud15}, and \citet{vanKempen16}, for instance, have shown that there is a linear (in log-space) relation between the molecular outflow momentum flux (and therefore $\dot M_{acc}$) and source luminosity in protostellar sources from low to high masses. Recent numerical works by \citet{Rosen20} show this same trend. \begin{figure*} \centering \includegraphics[scale=0.6]{SubmissionFigsPDFs/Fout_Lbol_2020_angArgs.pdf} \caption{Outflow momentum flux, $F_{CO}$, as a function of powering source bolometric luminosity, $L_{bol}$. Data for outflows A, B, and C in SDC335 are indicated by the filled squares and associated errorbars. The origin of the errorbars is discussed in the main text. The filled and empty hexagons are the Class 0 and Class I low-mass protostars from \citet{BontempsOutflow96}. The diamonds are low-mass Class I sources from observations of Ophiuchus \citep{vanderMarel13}. Filled and empty triangles are literature values for Class 0 and I objects used by \citet[see their Table E.1]{vanderMarel13}. The stars denote the \citet{Ana13} high-mass Class 0 analogues and the triangles are outflows from high-mass protostellar objects observed in the RMS survey \citep{Maud15}. The green dashed tracks with arrow heads are the evolutionary tracks for decreasing or intermittent accretion from \citet{Ana13}. The arrow heads represent the points at which a protostar of a given mass (given at the end of the arrow) has accreted 50\% and 90\% of its envelope mass. The red dotted line is the best fit to the plotted Class 0 sources, extended to high luminosity, with the red shaded region giving the 1-$\sigma$ error margin for this fit. The blue dashed-dot line and shaded region are the same but for the best fit to the plotted Class I sources.} \label{Fout_Lbol:fig} \end{figure*} Figure \ref{Fout_Lbol:fig} plots the calculated outflow momentum flux as a function of source bolometric luminosity, $L_{bol}$, for the protostars in SDC335 and other sources, both low and high-mass, from the literature. The literature sources are low-mass Class 0 and I sources using values from \citet[and references therein]{BontempsOutflow96} (both classes) and observed and literature Class I sources from \citet[their Table 4 and E.1, respectively]{vanderMarel13}. High-mass sources in the plot are Class 0 analogues from \citet{Ana13} and high-mass protostellar sources from \citet{Maud15}.
The $L_{bol}$ values for SDC335 are those derived in \citetalias{Avison15}, and we divide the total luminosity of MM1 (1.82$\times 10^4$ $L_{\odot}$) between MM1\textit{a}\ and MM1\textit{b}\ in a 2.5:1 ratio (based on their respective optically thin radio emission flux ratios, see \citetalias{Avison15} Table 5), yielding values of 1.3$\times 10^4$ and 0.51$\times 10^4$~$L_{\odot}$ for MM1\textit{a}\ and MM1\textit{b}, respectively. To compensate for this source of uncertainty, each SDC335 point on Figure \ref{Fout_Lbol:fig} is presented with a $\pm$50\% error bar on the $L_{bol}$ value. For outflows A and B, the dominant source of error is the range of possible inclination angles; as such, for these sources the errorbar represents the maximum and minimum values of $F_{CO}$ using the respective maximum and minimum possible inclination angles. For outflow C, the $F_{CO}$ error includes the measured error on $F_{CO}$ prior to inclination angle correction, due to noise within the data, plus a $\pm$5 degree error on the inclination angle used (57.3$^{\circ}$, the average angle for randomly orientated outflows). We can see from Figure \ref{Fout_Lbol:fig} that all outflows in SDC335 fit the general trend of outflow momentum flux as a function of luminosity over the wide mass range represented. The plotted points are the sum of the red plus blue lobes for each source in SDC335, using the temperatures from our RADEX fitting and the derived optical depth values in the calculation of the $F_{CO}$ values. The outflows A and B in SDC335 reside in the same parameter space as the high-mass protostars from \citet{Maud15}, with outflow A amongst the higher $F_{CO}$ range for its calculated $L_{bol}$. We note that the sources from \citet{Maud15} are studied with a single dish instrument, thus lacking the resolution of our current work, leading to the question posed by those authors regarding whether their observed outflows are driven by a single object or by multiple objects. SDC335 outflow C has a significantly lower $F_{CO}$ value, away from the observed trend seen within the literature. This suggests either that it has a less powerful, potentially more evolved outflow, or that this effect could simply be an artefact caused by the limited velocity range we are able to integrate over for this outflow, coupled with a poorly defined inclination angle. Assuming each of the identified outflows in SDC335 represents a single protostellar object, we can consider whether these sources represent an analogue of the protostellar source evolution classifications used at lower mass \citep{Lada99}. To this end, we generate best linear fit lines for $F_{CO}$ as a function of $L_{bol}$ for the Class 0 and I sources from the literature plotted in Figure \ref{Fout_Lbol:fig}. The best fit lines are plotted as the red dotted line for Class 0 and the blue dashed-dot line for Class I, extended to higher luminosity values for comparison with SDC335, with the associated 1-$\sigma$ error margins from the fits plotted as shaded regions of matching colour. The derived best fits are, \begin{equation} \log_{10}(F_{CO}[\rm{Class 0}])=-4.3(\pm0.15) + 0.60(\pm0.08)\log_{10}\left(\frac{L_{bol}}{L_{\odot}}\right) \end{equation} and \begin{equation} \log_{10}(F_{CO}[\rm{Class I}])=-5.3(\pm0.12) + 0.63(\pm0.16)\log_{10}\left(\frac{L_{bol}}{L_{\odot}}\right) \end{equation} \noindent for Class 0 and I sources, respectively.
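The luminosity division and best-fit relations above can be evaluated directly; a short sketch follows, using central fit values only (uncertainties omitted) and assuming $F_{CO}$ is expressed in \solmass\,kms$^{-1}$\,yr$^{-1}$:
\begin{verbatim}
# Split the MM1 luminosity 2.5:1 and evaluate the Class 0 and Class I
# best-fit relations quoted above at each source's L_bol.
import numpy as np

L_MM1 = 1.82e4               # Lsun, total MM1 core luminosity
L_mm1a = L_MM1 * 2.5 / 3.5   # ~1.3e4 Lsun
L_mm1b = L_MM1 * 1.0 / 3.5   # ~0.52e4 Lsun

def F_class0(L):  # assumed units: Msun km/s per yr
    return 10 ** (-4.3 + 0.60 * np.log10(L))

def F_class1(L):
    return 10 ** (-5.3 + 0.63 * np.log10(L))

for name, L in (("MM1a", L_mm1a), ("MM1b", L_mm1b)):
    print(name, F_class0(L), F_class1(L))
\end{verbatim}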
We note that the best-fit line for Class 0 objects matches very closely the results reported by \citet{CabritBertout92} for a sample of Class 0 sources (IRAS 16293, IRAS 3282, L1448, L1455M, RNO 43 and VLA1623) and a single high-mass object (G35.2 N). \vspace{0.5cm} The fitted lines show a decrease in outflow momentum flux between the Class 0 and I stages, as would be expected during the evolution of a protostellar source along the plotted evolutionary tracks \citep{Ana13}, assuming a decreasing and intermittent accretion rate. From Figure \ref{Fout_Lbol:fig}, we see that both MM1\textit{a}\ and MM1\textit{b}\ lie very close to the Class 0 best fit line and within the 1-$\sigma$ error bound of that line of best fit (red shaded region). This suggests that these sources are very high-mass Class 0 analogues. We use the qualifier `very' to indicate `more extreme than' based on a comparison to the position of the other high-mass Class 0 protostars in Figure \ref{Fout_Lbol:fig} from \citet{Ana13} (empty star markers in the Figure). Supporting the `young, very high-mass' status of the SDC335 sources MM1\textit{a}\ and MM1\textit{b}, we can see from the evolutionary tracks in Figure \ref{Fout_Lbol:fig} that these sources sit well above the 50\solmass\ track, but at an early stage, before $\sim$50\% of the envelope mass has been accreted (denoted by the first arrow head). This is also corroborated by the short dynamical times of these outflows ($\rm{few}\times 10^3$yr) in Table \ref{outflowprop:tab}. \citetalias{Avison15} found, based on their current radio continuum properties, that the three high-mass protostellar objects in SDC335 are all currently displaying characteristics of zero age main sequence (ZAMS) stars of spectral type B1.5 (or B1.5-B1 for MM1\textit{a}), which equates to stellar masses of $\sim$ 9.0\solmass\ \citep{Mottram11}. These relatively low `current' stellar masses agree with the observed outflow properties for A and B in terms of their position along the evolutionary tracks. \begin{table*} \caption[]{CO(3-2) outflow properties derived from the ALMA data. Values for $P_{CO}$, $E_{CO}$ and $F_{CO}$ are corrected for the respective inclination angles given in Table \ref{outflowprop:tab}. We use the average value of the correction factor function over the range of angles for outflows A and B (see Tables \ref{outflowprop:tab} and \ref{CorrectionFactors:tab}) and the single angle value for C. Two sets of values are given for each outflow lobe: the first at T=20K using a literature $\tau_{corr}$ factor of 3.5, the second using our derived temperature and $\tau_{corr}$ factor (8.2) values. See text for more information. } \begin{center} \begin{tabular}{c c c c c c c c} \hline \hline Outflow & Lobe & T & $M_{out}$ & $P_{CO}$ & $E_{CO}$ & $F_{CO}$ & $\dot M_{acc}$\\ & & [K] & [\solmass] & [\solmass kms$^{-1}$] & [$10^{38}$J] & [10$^{-5}$\solmass kms$^{-1}$ yr$^{-1}$] & [10$^{-5}$\solmass yr$^{-1}$] \\ \hline \multirow{4}{*}{A} & Blue & 20 & 0.19 & 8.72 & 4.51 & 274.1 & 11.0\\ & Red & 20 & 0.89 & 48.0 & 29.2 & 1619.1 & 64.8\\
& Blue & 62.0 & 0.75 & 12.7 & 6.5 & 368.6 & 14.7\\ & Red & 53.3 & 1.27 & 68.2 & 41.4 & 2133.8 & 85.4 \\ \hline \multirow{4}{*}{B} & Blue & 20 & 0.08 & 4.6 & 2.98 & 218.6 & 8.7\\ & Red & 20 & 0.21 & 13.2 & 9.6 & 461.0 & 18.4 \\ & Blue & 54.6 & 0.22 & 12.1 & 7.5 & 288.0 & 11.5 \\ & Red & 60.9 & 0.32 & 20.0 & 14.4 & 617.9 & 24.7\\ \hline \multirow{2}{*}{C} & Blue & 20 & 0.002 & 0.07 & 0.02 & 6.1 & 0.2\\ & Blue & 54.5 & 0.014 & 0.31 & 0.07 & 8.0 & 0.3\\ \hline \end{tabular} \end{center} \label{CO_outflowprop:tab} \end{table*} \subsubsection{Low bolometric luminosities in the MM1 core} \label{tooLow:sec} An open issue from previous work on SDC335, presented in \citetalias{Avison15}, was the discrepancy between the observed bolometric luminosity, $L_{bol}$, as derived from the millimetre core spectral energy distributions, and the luminosity of a ZAMS star of the spectral type necessary to produce the Lyman continuum photon flux inferred from the radio continuum, $L_{\rm ZAMS}$. The value of $L_{bol}$ was found to be approximately a factor 20 less than $L_{\rm ZAMS}$. To briefly review the \citetalias{Avison15} finding (for more details, see their Section 4.1): the authors calculated a Lyman continuum photon flux based upon the HCH\textsc{ii}\ continuum flux density for each of the three HCH\textsc{ii}\ sources. From this, a ZAMS spectral type was associated with each core (B1.5 for MM2 and MM1\textit{b}\ and B1 $-$ B1.5 for MM1\textit{a}) using values from \citet{Mottram11} and \citet{Davies11}. The ZAMS is reached when hydrogen burning has commenced; however, given the outflow indicators (masers, EGO, etc.) present in SDC335, it was assumed in \citetalias{Avison15} that each HCH\textsc{ii}\ region represented a protostar which was still actively accreting. This assumption is borne out by the detection of the three outflows in the current work. An actively accreting protostar will have a total luminosity (assuming ZAMS properties), $L_{tot,ZAMS}$, which is the sum of its intrinsic luminosity, $L_*$, and that from accretion, $L_{acc}$. Here, $L_{acc}$ is of the form $L_{acc} = \frac{GM_*\dot M_*}{R_*}$, where the values of $M_*$, $R_*$ and $L_*$ were based on the ZAMS properties and $\dot M_*$ (the mass accretion rate) was assumed to be equal to the global infall rate of SDC335 derived by \citet{Peretto13} of $2.5\times10^{-3}$\solmass\ yr$^{-1}$. From this calculation, the authors noted that the ZAMS $L_*$ and $L_{bol}$ agree to within a factor of $\sim$2; however, this did not allow for ongoing accretion. Including $L_{acc}$ gives $L_{tot,ZAMS}$ a value that is a factor 20 higher than $L_{bol}$ (see the values in Tables 2 and 6 of \citetalias{Avison15}). \citetalias{Avison15} presented two scenarios which could account for the observed low bolometric luminosity from all three protostars in SDC335. In light of the data presented in this current work, we can review these scenarios. The first scenario was that the assumed accretion rate ($2.5\times10^{-3}$\solmass\ yr$^{-1}$, from \citealt{Peretto13}) was too high and that the protostars were undergoing accretion at a lower rate (either overall or through episodic accretion). In this current work, we derive a mass accretion rate onto each protostellar object from their outflow momentum flux (column 8 in Table \ref{CO_outflowprop:tab}) and find that the values are indeed lower than the \citet{Peretto13} infall rate.
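As an order-of-magnitude check on the $L_{acc}$ values discussed here and revised in Table \ref{RevisedZams:tab} below, the accretion luminosity formula above can be evaluated with representative ZAMS-like parameters; the adopted $M_*$ and, in particular, $R_*$ in this sketch are illustrative assumptions, not the exact values used in \citetalias{Avison15}:
\begin{verbatim}
# L_acc = G M* Mdot / R*, evaluated for the minimum MM1a accretion rate
# derived above (14.7e-5 Msun/yr), with an assumed ZAMS-like mass and
# radius for a B1.5 star (M* ~ 9 Msun; R* ~ 4 Rsun is an assumption).
G, MSUN, RSUN = 6.674e-11, 1.989e30, 6.957e8   # SI units
LSUN, YR = 3.828e26, 3.156e7

M_star = 9.0 * MSUN
R_star = 4.0 * RSUN
Mdot = 14.7e-5 * MSUN / YR    # Msun/yr converted to kg/s

L_acc = G * M_star * Mdot / R_star / LSUN
print(L_acc)   # ~1e4 Lsun, the order of the tabulated L_acc(min)
\end{verbatim}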
This allows us to revise the values of $L_{tot,ZAMS}$ from Table 6 of \citetalias{Avison15} and address the second scenario discussed by those authors. In doing so, using both the minimum and maximum $\dot M_{acc}$ for each outflow from Table \ref{CO_outflowprop:tab} to give a possible range of $\dot M_{acc}$ (and using the derived temperatures from \S\ref{Radex:sec}), we provide revised values for Table 6 of \citetalias{Avison15} in our current Table \ref{RevisedZams:tab}. \begin{table*} \caption{Calculated luminosity values for the three protostellar cores in SDC335, assuming the ZAMS properties associated with each source in \citetalias{Avison15}. This is a revised and expanded version of Table 6 in \citetalias{Avison15}. For each source, we find a minimum and maximum $L_{acc}$, and thus $L_{tot,ZAMS}$, based upon the range of $\dot M_{acc}$ between the red and blue outflow lobes from Table \ref{CO_outflowprop:tab}. $^{\dagger}$ $L_{bol}$ as derived in \citetalias{Avison15}; the division of $L_{bol}$ for the MM1 core is described within $\S$\ref{EvoStat:sec} of this paper.} \begin{center} \begin{tabular}{c c c c c c c c} \hline \hline & $L_{*}$ & $L_{bol}^{\dagger}$ & $L_{acc}$ (min) & $L_{acc}$ (max) & $L_{tot,ZAMS}$ (min) & $L_{tot,ZAMS}$ (max) & $L_{tot,ZAMS}/L_{bol}$ factor \\ Source & [L$_{\odot}$] & [L$_{\odot}$] & [L$_{\odot}$] & [L$_{\odot}$] & [L$_{\odot}$] & [L$_{\odot}$] & \\ \hline MM1\textit{a}\ & 5.5$\times 10^{3}$ & 1.3$\times 10^{4}$ & 1.3$\times 10^{4}$ & 7.6$\times 10^{4}$ & 1.9$\times 10^{4}$ & 8.1$\times 10^{4}$ & 1.4 - 6.2\\ MM1\textit{b}\ & 4.1$\times 10^{3}$ & 5.1$\times 10^{3}$ & 9.9$\times 10^{3}$ & 2.1$\times 10^{4}$ & 1.4$\times 10^{4}$ & 2.6$\times 10^{4}$ & 2.8 - 5.0\\ MM2 & 4.4$\times 10^{3}$ & 9.9$\times 10^{3}$ & 1.7$\times 10^{2}$ & $-$ & 4.5$\times 10^{3}$ & $-$ & 0.5\\ \hline \end{tabular} \end{center} \label{RevisedZams:tab} \end{table*} For MM2, the $L_{tot,ZAMS}$ is now a factor $\sim$2 lower than the $L_{bol}$, which is consistent within the uncertainties on these values. A spectral type B1.5 ZAMS star has a mass of $\sim$ 9\solmass\ \citep{Mottram11}, which for MM2 would also agree with the accreting protostar's position on the evolutionary tracks in Figure \ref{Fout_Lbol:fig} and its status as a potentially more evolved protostar than the cores in MM1. In the case of the two ionising sources in MM1, the derived mass accretion rates cover a large range of values: 14.7 to 85.4$\times10^{-5}$ \solmass\ yr$^{-1}$ and 11.5 to 24.7$\times10^{-5}$ \solmass\ yr$^{-1}$ for MM1\textit{a}\ and MM1\textit{b}, respectively. With these values, the bolometric luminosity for MM1\textit{a}\ is consistent with $L_{tot,ZAMS}$ at the lowest $\dot M_{acc}$ values, but the discrepancy persists over the majority of the possible range (any value above 30.0 $\times10^{-5}$ \solmass\ yr$^{-1}$ leads to a factor 2.5 discrepancy between the two luminosities). For MM1\textit{b}, the discrepancy between $L_{bol}$ and $L_{tot,ZAMS}$ is at a factor of about three over the entire range, meaning that the observed $L_{bol} < L_{tot,ZAMS}$ from \citetalias{Avison15} also persists for this source. The second scenario to account for the luminosity discrepancy discussed in \citetalias{Avison15} is based upon the models of massive protostellar evolution at high accretion rates ($\sim 10^{-3}$\solmass\ yr$^{-1}$) of \citet{HosokawaOmukai09} and \citet{HosokawaYorkeOmukai10}.
In this model, massive protostars go through a phase of swelling to radii of $\sim100$ R$_{\odot}$ when their mass is between 6 and 10\solmass, followed by a short contraction phase as their mass grows to between 10 and 30\solmass. During the period of swollen radii, the effective temperature of the protostar is lower than that of an equivalent mass ZAMS star; as such, there are insufficient UV photons to generate a H\textsc{ii}\ region. However, during the short contraction phase, as the radius decreases the effective temperature increases and a H\textsc{ii}\ region can form whilst the radius remains somewhat swollen ($\sim$10 R$_{\odot}$), giving a lower luminosity than a ZAMS star at the same mass. In \citetalias{Avison15}, the authors suggested that each SDC335 protostar was in the contraction phase, accounting for both the observed H\textsc{ii}\ regions and the low bolometric luminosities, though given the relatively brief duration of the contraction phase, having multiple sources at this stage simultaneously would be unlikely. The newly derived mass accretion rates for MM1\textit{a}\ and MM1\textit{b}\ have values that are in the range of those considered by the \citet{HosokawaOmukai09} models (although for accretion rates of $\sim 10^{-4}$\solmass\ yr$^{-1}$ the swelling is to $\sim$40R$_{\odot}$ rather than 100R$_{\odot}$), and thus these sources are in either the swollen radii phase or the contraction phase -- this would still seem a viable scenario that would lead to the lower observed luminosities. Given the relative lengths of each of these phases, it would be more likely that they are still in the swollen radii phase. If this is the case, the origin of the ionised emission in each core would then require an alternate explanation, which we address in Section \ref{radiojets:sec}. \subsubsection{An alternate interpretation of the ionised radio continuum emission in SDC335} \label{radiojets:sec} As discussed briefly in \citetalias{Avison15}, the spectral indices for MM1\textit{b}\ and MM2 fall within the range of values for both photo-ionised H\textsc{ii}\ regions and collimated ionised jets \citep{Reynolds86}. Given the detection in this work of outflows from both these sources, it is important to review the origin of the radio continuum emission from the SDC335 protostars. Radio continuum emission has been detected toward a number of low luminosity sources driving molecular outflows, for example in \citet{Anglada95}. In this work, the author shows that the low luminosity of their sample of objects precludes the observed radio continuum emission originating from photoionisation by Lyman continuum photons. Using the models of shock ionised gas from \citet{Curieletal87, Curiel89}, the author then shows that, for their sample, shock ionisation is capable of creating the observed radio continuum based on the outflow momentum flux of their targets. Based upon the best-fit model and data in \citet{Anglada95} (their Equation 1 and Figure 5), we find that, using the 8, 23, and 25~GHz flux densities from \citetalias{Avison15}, MM1\textit{a}\ and MM1\textit{b}\ reside between the line of minimum requirement (dashed line in \citet{Anglada95} Figure 5) and the line of best fit (solid line in \citet{Anglada95} Figure 5) for shock ionisation as the mechanism for the observed radio continuum emission. This suggests that for these sources this may indeed be the origin of the detected radio emission.
However, MM2 is a factor of between $\sim$17 and 37 below the lower bound of the emission expected from shocks \citep{Anglada95}. This may be an evolutionary factor or one arising from the orientation of the MM2 outflow to the line of sight, which limits the range of velocities we are able to integrate over, leading to an underestimate of the outflow momentum flux value for this source (\S \ref{infall:sec}). The direct implication of interpreting the origin of the ionised material as outflow shocks, as opposed to photoionisation, concerns the evolutionary status of the protostellar sources themselves. If the emission is not due to photoionisation, this suggests that the protostars are at an earlier, pre-ionising stage in their evolution. It is important to state that, although this alternate interpretation may account for the presence of ionised hydrogen and indicate a resolution of the discrepancies between the observed bolometric luminosity and a corresponding ZAMS luminosity, the protostars within SDC335 remain high-mass protostellar sources, which will go on to form stars of mass $>8$\solmass. We base this on both their observed luminosities (of the order 10$^4$ L$_{\odot}$, \citetalias{Avison15}) and the co-location of each of the radio emission peaks with 6.7GHz methanol maser emission. Higher spatial resolution ($\leq$ 2.0 $^{\prime\prime}$) observations of SDC335 at radio frequencies would be required to fully assess this interpretation. The impact on the current work -- should the radio free-free emission prove to come from outflow shocks -- is limited to the discussion in Section \ref{tooLow:sec}. All other results would remain unchanged. \section{Interaction between outflows and filaments} Previous studies of the motion of material at large scales in SDC335 have found both that the cloud is globally collapsing toward its centre and that material is being transported inward to this region along the filamentary arms of the cloud \citep{Peretto13}. In our current work, however, we report the detection of material outflowing from the protostellar cores at the cloud centre. In this section, we look at the evidence for interaction between the filamentary infalling material and the outflowing material. Finding regions of interaction within SDC335 would make the IRDC a valuable source for studying the potentially disruptive effects that feedback from young massive stars has on material transport. In the following, we highlight the observed features within SDC335 that mark potential filament-outflow interactions. As part of this discussion, we identify each outflow lobe by the letter corresponding to the outflow and a subscript colour to indicate the particular red or blue lobe. \subsection{Class I methanol masers} Collisionally excited Class I methanol masers are commonly associated with molecular outflows in regions of massive star formation \citep[e.g.][]{Kurtz04, Cyganowski09}. SDC335 was known to harbour four Class I maser sources \citepalias[and references therein]{Avison15}, all of which are clearly spatially offset from the compact H\textsc{ii}\ regions in the MM1 and MM2 cores. With our new, more sensitive data, an additional four individual maser sources were detected. The spectrum of each maser is presented in Figure \ref{MaserSpec:fig}. Figure \ref{MaserCO:fig} presents the maser locations within SDC335, with the masers colour-coded by velocity. All the maser spots peak at velocities within $\sim$6.0kms$^{-1}$\ of the systemic velocity of the SDC335 mm-cores.
The position and peak emission properties of each spot are given in Table \ref{Maser:tab}. The masers are numbered from south to north. Whilst we do not focus on the properties of each maser spot individually within this work, we include in the following discussion the maser spots that provide useful indications of shocked gas when interpreted as potentially being part of the outflow-filament interactions within SDC335. \begin{table*} \caption[]{Characteristics of the observed Class I CH$_3$OH\ masers. Maser spots within the table are listed and indexed from south to north. Masers 1 - 5 have velocity profiles which extend into the same velocity range as maser 7 (denoted by the left or right arrows), where it is not possible to distinguish velocity features of the source from sidelobe artefacts of maser 7. } \begin{center} \begin{tabular}{c c c c c c} \hline \hline Maser & RA & Dec & S$_{peak}$ & V$_{peak}$ & V$_{range}$ \\ No. & {[h : m : s]} & {[ $^{\circ}$ : $^{\prime}$ : $^{\prime\prime}$]} & [Jy] & [kms$^{-1}$] & [kms$^{-1}$]\\ \hline 1 & 16:30:58.31 & -48:44:05.2 & 0.79 & -47.3 & -60.4, -47.0$\rightarrow$\\ 2 & 16:30:58.56 & -48:43:55.4 & 1.01 & -47.7 & -52.3, -47.0$\rightarrow$\\ 3 & 16:31:00.36 & -48:43:54.1 & 1.05 & -47.9 & -49.6, -47.0$\rightarrow$\\ 4 & 16:31:00.56 & -48:43:51.6 & 0.462 & -43.2 & $\leftarrow$-43.8, -39.0\\ 5 & 16:30:59.76 & -48:43:50.9 & 0.294 & -43.4 & $\leftarrow$-43.8, -42.3\\ 6 & 16:30:57.90 & -48:43:46.2 & 0.119 & -40.4 & -41.5, -39.2\\ 7 & 16:30:56.45 & -48:43:33.8 & 48.8 & -45.3 & -52.4, -39.0\\ 8 & 16:30:58.56 & -48:43:32.4 & 0.0742 & -40.0 & -43.0, -36.8\\ \hline \end{tabular} \end{center} \label{Maser:tab} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.35]{SubmissionFigsPDFs/MaserSpecRevised.pdf} \caption{Spectra taken at the peak position of the eight Class I methanol masers observed toward SDC335. Each panel gives the spectrum of an individual maser spot (numbered in the upper right of the panel). Each spectrum is colour-coded by its peak velocity following the colour bar (\textit{right}). The $y$-axis for each spectrum differs and is scaled between -0.1$\times$ and 1.1$\times$ the peak flux density of the individual maser as listed in Table \ref{Maser:tab}. The velocity range $-47.0$ to $-43.8$kms$^{-1}$\ is blanked out in panels 1 - 6 and 8 as between these velocities the spectra of these masers are dominated by imaging artefacts caused by the strong emission from maser 7. This is to avoid confusion between true maser emission and imaging artefacts.} \label{MaserSpec:fig}% \end{figure*} \begin{figure} \centering \includegraphics[scale=0.35]{SubmissionFigsPDFs/MasersCO.pdf} \caption{SDC335 region showing the positions of the Class I methanol masers. Orange dashed lines as per Fig. \ref{PressyFig:fig}. Filled triangles: the Class I methanol masers detected in the ATCA data, colour-coded by velocity matching that in Fig. \ref{MaserSpec:fig}; the size of each triangle is proportional to the flux density of the maser source. Cyan `$\times$' denote the location of Class II methanol masers at 6.7GHz and yellow `$+$' denote the H$_2$O\ masers at 22GHz. Grey contours show the ALMA 3mm continuum emission and the greyscale image shows the ATCA 8GHz continuum emission. The red and blue contours show $\sim$9\% of the peak integrated intensity of each outflow, using the velocity ranges from Table \ref{OutflowCalc:tab} as a guide when comparing the maser positions.
} \label{MaserCO:fig} \end{figure} \subsection{Interaction regions} \subsubsection{A$_{Blue}$ and the F3 and F4 filaments} The A$_{Blue}$ outflow is well collimated near the driving source (MM1\textit{a}) when observed at high resolution (ALMA CO and $^{13}$CO), as seen in the narrow structure of the contour plots in Figure \ref{OUTFLOW_GRID:fig}-A$_{zoom}$ and the CO velocity channel maps shown in Figure \ref{BlueJetVcutCO:fig}. The shock-tracing SiO emission detected in this outflow lobe is, however, predominantly at the end of this lobe furthest from the driving source (Figure \ref{OUTFLOW_GRID:fig}-B). This region of SiO emission and the observed lobe end in the CO and $^{13}$CO\ ALMA observations coincide spatially and kinematically with the ends of the F3 and F4 filaments of the IRDC (marked as the orange dashed lines in Figures \ref{PressyFig:fig} and \ref{OUTFLOW_GRID:fig}, c.f. \citealt{Peretto13}). The F3 and F4 filaments are carrying material into the SDC335 central region at velocities of $\sim-$45.8 and $-46.5$kms$^{-1}$, respectively \citep[see][their Fig. 4c and 5]{Peretto13}. Given the $V_{lsr}$ of the MM1\textit{a}\ core, this infalling material is moving in the opposite direction to the outflowing material ($V_{Ablue} \sim$ $-$50.5 to $-$82.4kms$^{-1}$; see Figure \ref{BlueJetVcutCO:fig} and Table \ref{OutflowCalc:tab}). Supporting this interpretation is the position and velocity range covered by the Class-I CH$_3$OH\ maser source 1 (see Table \ref{Maser:tab}). We show the presence of the maser spot at a given velocity in Figure \ref{BlueJetVcutCO:fig} as a solid purple circle. Given the collisionally excited pumping mechanism responsible for Class-I masers and the hypothesis that such collisional excitement occurs at the interfaces of outflows and the surrounding molecular gas as seen in, for example, \citet[][]{Plambeck90, Kurtz04, Vornokov10}, the alignment of the maser activity both spatially and kinematically with the outflow lobe end suggests that we are indeed seeing the point of interaction between the A$_{Blue}$ outflow and the infalling material from the F3 and F4 filaments. The large velocity range the CH$_3$OH\ emission covers requires a strong shock front to pump such maser emission. \begin{figure*} \centering \includegraphics[scale=0.7]{SubmissionFigsPDFs/blueJetVcut_2020FINAL.pdf} \caption{Channel map of the CO emission from the A$_{Blue}$ outflow. The velocity of each image is listed in the upper right corner. The position of MM1a is indicated by the red star. The purple filled circle gives the position of the Class I CH$_3$OH\ maser source 1 (see Table \ref{Maser:tab}), plotted only at velocities where the maser is present. The orange dashed lines show the fiducial directions of the F3 and F4 filaments as per Figure \ref{PressyFig:fig}. Contours are plotted at the 5, 10, 20, 40, 50, 60, 70 and 80 $\sigma$ levels of the ALMA CO image (where $\sigma$ = 22.0 mJy/bm).} \label{BlueJetVcutCO:fig}% \end{figure*} \subsubsection{C and the F2 filament} The one detected lobe of outflow C (C$_{blue}$) exhibits both the brightest Class I methanol maser (maser 7, Table \ref{Maser:tab}) and the brightest SiO emission detected in the region (see Figure \ref{OUTC_SPEC:fig}). At a peak flux density of 48.78\,Jy, maser 7 is over 40 times the intensity of any of the other detected maser sources (see e.g. Figure \ref{MaserSpec:fig}).
Given that both SiO emission and Class I CH$_3$OH\ masers are shock-tracing species, we note that the end of this outflow lobe (away from the driving source MM2; Figure \ref{OUTFLOW_GRID:fig}) appears close to the fiducial end of the F2 filament (orange dashed line, c.f. \citealt{Peretto13}). This suggests that we are observing a shocked region caused by the meeting of the outflow and the inflowing matter from the F2 filament. The velocities of C$_{blue}$ and the infall along F2 are in opposite directions, with the F2 filament at $V_{F2}\sim-$46.0kms$^{-1}$\ \citep[see][their Fig. 4c and 5]{Peretto13}, so redder than the $V_{lsr}$ of MM2, and $V_{C_{blue}}$ = $-$59.1 to $-$43.6kms$^{-1}$, thereby extending both red- and blue-ward of this $V_{lsr}$. Although the maser position and the end of the orange dashed line representing F2 in Figures \ref{OUTFLOW_GRID:fig}-A,B,C do not match exactly, we emphasise that the dashed line is a fiducial marker and the true morphology and kinematics at the end of the inflow are not known at spatial resolutions better than $\sim$5$^{\prime\prime}$ \citep{Peretto13}. Further higher resolution observations of the region would be required to confirm this as a true shock-interaction region. \subsubsection{Curvature of A$_{red}$ and the F1 filament} \label{CurveRed:sec} The A$_{red}$ lobe is observed in our CO (ALMA), SiO, and HNC data to exhibit curvature at its northernmost end. Inspecting the position of the nearby filamentary arm, F1, in Figure \ref{OUTFLOW_GRID:fig}, it appears that the curvature coincides with where A$_{red}$ would meet the infalling material from the filament. Emission from A$_{red}$ here is at velocities of $\sim -$30 to $-$20 kms$^{-1}$\ (CO, SiO) whereas the inflowing material of the F1 filament is at $\sim-$47.25kms$^{-1}$\ \citep[see][their Fig. 4c and 5]{Peretto13}. The start of this curvature also coincides with a clump of SiO and CO emission (beyond the velocity range used to generate the contours in Figure \ref{OUTFLOW_GRID:fig}-A) offset from the outflow (see Figure \ref{OUTFLOW_GRID:fig}-B) at velocities of $-36.9$ to $-41.2$kms$^{-1}$. This is spatially and kinematically coincident with maser 8 (see Table \ref{Maser:tab}), suggesting that the interaction point between the inflow and outflow is likely associated with a shock front giving rise to the SiO emission. Maser 8 peaks at $-$40kms$^{-1}$\ with a velocity range of $\pm\sim3.0$kms$^{-1}$, and as such covers velocities which are between those measured for the inflowing and outflowing material. Figure \ref{maser8_COSiO:fig} gives the spectra of CO and SiO at the position of maser 8. \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/MAS8_SPEC.pdf} \caption{CO and SiO spectra measured at the position of the maser 8 peak emission (see Table \ref{Maser:tab}). The vertical dashed line gives the $V_{lsr}$ of the target and the vertical dotted lines mark the range in velocity of the clump of emission seen toward the curvature of the A$_{red}$ outflow lobe discussed in \S \ref{CurveRed:sec}. The vertical dash-dot line gives the velocity of peak emission from maser 8.} \label{maser8_COSiO:fig}% \end{figure} \subsubsection{B$_{red}$ shocked emission} The B$_{red}$ lobe is observed in SiO, HNC and CO (both APEX and ALMA). The leading edge of this outflow is coincident with the positions of masers 3 and 4 and with the peak SiO emission of the outflow (see Figures \ref{OUTFLOW_GRID:fig}-A and \ref{OUTFLOW_GRID:fig}-B), indicating a region of shocked gas.
We note that this lobe does not directly appear to be interacting with a filamentary inflow, but from Figure \ref{PressyFig:fig} the outflow appears to extend toward the edge of the 8\mewm\ dark region of the cloud, suggesting some interaction at the cloud boundary. In Figures \ref{maser3_COSiO:fig} and \ref{maser4_COSiO:fig} we give the spectra of CO (ALMA), HNC and SiO at the positions of masers 3 and 4. \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/MAS3_SPEC.pdf} \caption{CO (ALMA), HNC and SiO spectra measured at the position of the maser 3 peak emission (see Table \ref{Maser:tab}). The vertical dash-dot line gives the $V_{lsr}$ of the outflow driving source, MM1\textit{b}.} \label{maser3_COSiO:fig}% \end{figure} \begin{figure} \centering \includegraphics[scale=0.55]{SubmissionFigsPDFs/MAS4_SPEC.pdf} \caption{CO and SiO spectra measured at the position of the maser 4 peak emission (see Table \ref{Maser:tab}). The vertical dash-dot line gives the $V_{lsr}$ of the outflow driving source, MM1\textit{b}.} \label{maser4_COSiO:fig}% \end{figure} \subsection{Misalignment of the A and B outflows: a potential scenario} \label{RosenSims:sec} As evident in Figure \ref{OUTFLOW_GRID:fig} and noted in Section \ref{DescB:sec}, outflows A and B have outflow axes that are approximately perpendicular to one another. Naively, one would assume that the angular momentum axes for a pair of protostars of similar mass separated by a relatively small distance ($\sim$9000AU) would be aligned. This appears not to be the case for SDC335. Within the literature, there are other examples showing similar misalignments. For example, protostars A and B within IRAS16293 (separated by 600 AU) were found to have misaligned rotation axes \citep[and references therein]{Pineda16293}, with A observed edge-on and B either having little to no observable rotation or a rotation axis in the plane of the sky. This gives A and B an effective $\sim$ 90$^{\circ}$\ offset in their angular momentum axes. Similarly, there are several systems found in Perseus (with pairs separated by 1000 to 10,000 AU) which show outflow axes that are misaligned between low mass protostars, with the distribution of axes appearing to prefer random or anti-aligned orientations \citep{Lee16}. We consider a possible cause for this misalignment in SDC335. This scenario posits that the outflow progenitors MM1\textit{a}\ and MM1\textit{b}, after fragmenting from the MM1 mm-core, have grown in mass by accreting matter from different material flows arriving at each core from different directions, thus changing their respective angular momentum vectors as the system evolves. The high, non-uniform accretion rates inherent to massive star formation ($\dot M_{\rm acc} \sim 10^{-3} \; \rm M_{\rm \odot} \, yr^{-1}$) will push the star around and cause the angular momentum vector to precess. Such behaviour has been seen in three-dimensional (3D) radiation-hydrodynamic (RHD) simulations of the collapse of a turbulent massive pre-stellar core into a massive star \citep{Rosen16, Rosen20}. The movement of the star and its angular momentum vector will cause the outflows to be launched over a larger area as the star grows in mass. This effect will broaden the entrained outflows since the protostellar outflows are likely launched along the star's or surrounding accretion disk's angular momentum vector (e.g. \citealt{Shu00,Pudritz07}).
Figure~\ref{Rosen:fig} \citep[adapted from][]{Rosen20} shows two snapshots from a 3D RHD simulation that models the collapse of a turbulent massive (150 $M_{\rm \odot}$) pre-stellar core into a massive stellar system and includes radiative feedback and collimated outflows that are launched along the stars' angular momentum vectors \citep[see subgrid model description by][]{Cunningham11}. These snapshots show the column density of the molecular material that is entrained by the protostellar outflows from the stars that have formed. To obtain the distribution of entrained material, we compute the mass-weighted integrated gas density only along cells that have $\rho_{\rm OF}/\rho \geq 0.1$, where $\rho_{\rm OF}$ is the outflow gas density injected by stars and $\rho$ is the total gas density. In the first snapshot, at an elapsed time of 0.4 $t_{\rm ff}$, where $t_{\rm ff}=42.7$ kyr is the core's free-fall timescale, the star has a mass of 9.12 $M_{\rm \odot}$ and what looks like multiple molecular outflows. However, this outflow morphology is due to the non-uniform accretion flow that knocks the star around, causing the star to move away from its birth site and precess, thereby causing the outflow launching direction to change with time. At later times, $t=0.6 t_{\rm ff}$, when multiple low-mass protostars have formed and are clustered near the massive protostar (pink filled-in circles), the smaller and weaker multiple outflows from the low mass protostars overlap. The resulting overlapping entrained outflows from these clustered low-mass companions to the right of the primary star are inclined by $\sim 45^{\circ}$ to the entrained outflow from the primary star, as indicated by the arrow in Figure~\ref{Rosen:fig}. We note that this significant offset between outflow axes, and the separation between the massive and low-mass stars, are similar to what is seen in SDC335. The fact that this occurs within less than one free-fall time of the simulated cloud lends further credence to SDC335 being at an early period of star formation (for SDC335, $t_{\rm ff}\sim$ 3.5 $\times 10^5$ yr). Measuring the level of turbulence of material within the central region of SDC335 and the accretion flows onto individual cores would require higher resolution observations of the dense gas (combined with single-dish data to avoid short-spacing problems), which are not currently available. \begin{figure*} \centering \includegraphics[scale=0.15]{SubmissionFigsPDFs/Turb_massEntrainedAll_10_Arrowb.pdf} \caption{Projection plots generated from simulations of high-mass star forming regions \citep[adapted from][]{Rosen20}. These plots show only the material contained in the system's outflows, selected by the condition $\rho_{\rm OF}/\rho \geq 0.1$, at two time stamps during the free-fall time of the star forming core. \textit{Left:} A solitary massive protostar (grey circle) is seen at $t=0.4 t_{\rm ff}$. This massive protostar is at the centre of up to three `overlapping' outflows. These are thought to originate from the same outflow, with their differences in position and angle caused by precession of the star's spin axis and the movement of the protostar from its birth position to its current position within the simulation. \textit{Right:} At later times, $t=0.6 t_{\rm ff}$, we see that lower mass protostars have formed (pink circles) and are generating their own outflows.
One such outflow, denoted by the arrow, is seen to be significantly offset in angle from the primary outflow from the massive star in the region.} \label{Rosen:fig}% \end{figure*} \section{Discussion and conclusions} Using new ATCA SiO and CH$_3$OH\ observations coupled with archival CO, $^{13}$CO, and HNC data from ALMA and four transitions of CO from the APEX telescope, we identify and analyse three molecular outflows within the young high-mass star forming infrared dark cloud SDC335. These data have yielded the following outcomes: \begin{itemize} \item The three outflows, A, B, and C, are identified, each associated with one of the three known HCH\textsc{ii}\ regions in SDC335 \citepalias{Avison15} (MM1\textit{a}, MM1\textit{b}, and MM2, respectively). The red-blue outflow lobes of A extend in the north-south direction, B extends east-west and C, of which only the blue lobe is detected, extends to the north-west. They have full width velocity ranges of up to 10kms$^{-1}$\ and temperatures of $\sim$60\,K. The two most massive sources in the cloud, MM1\textit{a}\ and MM1\textit{b}, are separated by $\sim9000$AU but drive outflows whose projected axes are approximately perpendicular to one another. The blue lobe of outflow A displays a structure and velocity characteristic of a jet. \item The analysis of the measured outflow momentum flux, $F_{CO}$, as a function of source bolometric luminosity, $L_{bol}$, and in comparison to theoretical evolutionary tracks \citep{Ana13}, confirms that the progenitor protostars are massive young stellar objects, with two sources residing above the tracks for 50\solmass\ stars. Using samples of $F_{CO}$ and $L_{bol}$ measurements from the literature for low to intermediate mass stars at evolutionary classes 0 and I, we derived best-fit $F_{CO}$-$L_{bol}$ relations for these two classes. Extrapolating these relations upward in $L_{bol}$, we find the outflow momentum flux properties of the SDC335 outflows A and B agree best with their progenitors being high-mass Class 0 analogues, indicating that SDC335 is at a very early stage of the star-formation process. \item Inferring the mass accretion rates from the source outflow properties, we find that the total mass accretion rate is 1.4$(\pm0.1) \times 10^{-3}$\solmass\ yr$^{-1}$ on the protostellar scale. This value is consistent with the mass infall rate of 2.5$(\pm1.0) \times 10^{-3}$\solmass\ yr$^{-1}$ calculated on cloud and filamentary scales by \citet{Peretto13}. This result suggests that at this early stage of evolution nearly all the material accreted onto the clump is funnelled through the cores onto these three massive young sources, limiting the scope for the formation of additional, low-mass sources in the region. If significant numbers of lower mass stars are to form in SDC335, these would then have to form at a later stage in the evolution of the region. \item These new data combined with existing knowledge of the bulk inflow and global and filamentary collapse properties of the SDC335 cloud provide compelling evidence of the interaction between the molecular outflows and the material infalling along the filamentary arms. Given the very young (Class 0 analogue) status of the protostars driving the outflows, this makes SDC335 a valuable test bed for studying the disruptive feedback effects of massive protostars on their natal clouds.
\end{itemize} The observed properties described in this work make the infrared dark cloud SDC335 a key target for the more detailed study of how massive protostars form and affect their natal environments through accretion and protostellar outflows. Such features warrant further study at high spectral and spatial resolutions and sensitivities. \begin{acknowledgements} The Australia Telescope Compact Array is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The authors would like to thank all ATNF staff past and present who helped during the ATCA observations used in this paper, particularly those who provided A.A. with curry. The authors would also like to thank the anonymous referee for their input into the paper after initial submission which has helped to improve the work. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00474.S and \#2012.0.00781.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This publication is based on data acquired with the Atacama Pathfinder EXperiment (APEX). APEX is a collaboration between the Max-Planck-Institut fuer Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. A.A. is funded by the STFC at the UK ARC Node. G.A.F. acknowledges financial support from the State Agency for Research of the Spanish MCIU through the AYA2017-84390-C2-1-R grant (co-funded by FEDER) and through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'isica de Andalucia (SEV-2017-0709). N.P. wishes to acknowledge support under STFC consolidated grants ST/N000706/1 and ST/S00033X/1. A.D.C. acknowledges the support from the UK STFC consolidated grant ST/N000706/1. A.L.R. acknowledges support from NASA through Einstein Postdoctoral Fellowship grant number PF7-180166 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. This research made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com. This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{astropy:2013, astropy:2018}. \end{acknowledgements}
{ "timestamp": "2020-12-17T02:18:38", "yymm": "2012", "arxiv_id": "2012.08948", "language": "en", "url": "https://arxiv.org/abs/2012.08948" }
\section{Introduction} Black hole binaries (BHBs) are the best candidates to study the connection between accretion and ejection mechanisms due to their high variability and high X-ray flux. The timescale of variability is comparable to human timescales and hence the whole cycle of accretion and ejection can be studied for a single system. Even though BHBs may be seen as scaled-down versions of active galactic nuclei (AGN), their observational signatures may partially differ due to the different source of accreting material, i.e. the companion star or galactic gas, respectively, and also due to the radiation field, which is predominantly governed by X-rays. \\ \\ The variability of low mass X-ray binaries (LMXBs) can be classified using the hardness intensity diagram (HID) in X-rays, as many of them exhibit a hysteresis loop within it \citep{2000A&A...355..271B}. Different points in the loop can be attributed to different accretion-ejection geometries. The presence of a powerful jet is generally seen when the source transits from the hard to the soft state \citep{Fender_2004}, establishing the connection between BHB states and the launch of the jet. However, GRS 1915+105, despite being a LMXB, does not exhibit a hysteresis loop in the HID, and remained in a comparatively softer state until recently, when it became harder in spectra and fainter in brightness \citep{MillerAtel2019_2019ATel12743....1M}. Outflows in the form of accretion disc winds have also been seen in several LMXBs \citep{2002ApJ...567.1102L,2004ApJ...601..450M, 2006AN....327..997M, 2006ApJ...646..394M}, mainly but not necessarily in the soft state \citep{2009Natur.458..481N}. Despite their clear detection, there is no consensus yet on how and where the wind is launched from the accretion disc, and how it affects the state transitions and the launching of the jet. In GRS 1915+105, it was seen that during the states in which the wind was present, the jet was either weak or absent, and vice versa \citep{2009Natur.458..481N}. Thus, understanding the launching mechanism of the wind is required to study the interplay between the wind, the jet and the accretion states of BHBs. Hence, an insight into the launching mechanism and an estimate of the launching site of the wind, along with the wind mass flux, can help in determining the unknown parameters of the state transitions in X-ray binaries. \\ \\ The mechanisms responsible for the launching of disc winds are still debated. The main phenomena thought to be responsible are (i) thermal driving, (ii) radiation pressure driving, and (iii) magnetic driving. There is evidence for all three processes \citep{Begelman1983_1983ApJ...271...70B,Ueda2004_2004ApJ...609..325U,Neilsen2013_2013AdSpR..52..732N,Trigo2016_2016AN....337..368D,Neilsen2016_2016ApJ...822...20N,done2018_2018MNRAS.473..838D,2017NatAs...1E..62F,Miller2006_2006Natur.441..953M}, as well as for two-component winds where two different mechanisms co-exist, namely MHD and thermal \citep{Neilsen2012_2012ApJ...750...27N}. Thermally driven winds are launched when the X-rays near the compact object heat the disc surface to the Compton temperature ($T_C$) \citep{Begelman1983_1983ApJ...271...70B,Tombesi2010_2010A&A...521A..57T}. The wind is launched at a radius where the plasma thermal velocity is greater than the local escape velocity. However, thermal winds are also possible from 0.1 $R_{C}$ \citep{woods1996_1996ApJ...461..767W}, where $R_{C}$ is the Compton radius.
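As an illustration of this launching criterion, the short Python sketch below evaluates the Compton radius, $R_C = GM\mu m_p/(k_B T_C)$; the black hole mass and Compton temperature used are generic assumed values, not measurements of GRS 1915+105.
\begin{verbatim}
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
k_B = 1.381e-16   # Boltzmann constant, erg K^-1
m_p = 1.673e-24   # proton mass, g
M_sun = 1.989e33  # solar mass, g

def compton_radius(M_msun, T_C, mu=0.6):
    # R_C = G M mu m_p / (k_B T_C): the characteristic radius at which
    # the sound speed of Compton-heated gas becomes comparable to the
    # local escape velocity
    return G * M_msun * M_sun * mu * m_p / (k_B * T_C)

# Assumed illustrative values: M = 10 Msun, T_C = 1e7 K
R_C = compton_radius(10.0, 1.0e7)   # ~1e12 cm
print(R_C, 0.1 * R_C)               # winds possible from ~0.1 R_C
\end{verbatim}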
Radiation pressure driven winds are due to the line pressure exerted on partially ionised elements \citep{Castor1975_1975ApJ...195..157C}. Disc winds can also be accelerated by the vertical magnetic pressure gradients of the disc or by centrifugal forces in combination with magnetic fields \citep{Fukumura2010_2010ApJ...715..636F,2017NatAs...1E..62F,Blanford1982_1982MNRAS.199..883B,Contopoulos1994_1994ApJ...432..508C,Contopoulos1995_1995ApJ...450..616C,Ferreira1997_1997A&A...319..340F}. There is also evidence suggesting that disc winds in BHBs are preferentially seen with an equatorial geometry \citep{2012MNRAS.422L..11P}.\\ \\ During the last decade significant progress has been made in modelling accretion disc winds in BHBs using photo-ionisation codes. GRS 1915+105, GRO J1655-40, 4U 1630-47 and H 1743-322 are some of the BHBs which show the presence of disc winds, seen as intense absorption lines in their X-ray spectra \citep{Miller2015_2015ApJ...814...87M,Miller2016_2016ApJ...821L...9M,Miller2006_2006Natur.441..953M,Miller2008_2008ApJ...680.1359M,Kallman2009_2009ApJ...701..865K,Neilsen2012_2012ApJ...750...27N,Ueda2009_2009ApJ...695..888U}. Several photo-ionisation modelling studies of these sources point towards a multiple component outflow or a MHD origin for the wind \citep{Miller2008_2008ApJ...680.1359M,Kallman2009_2009ApJ...701..865K,Miller2015_2015ApJ...814...87M,Miller2016_2016ApJ...821L...9M,Miller2006_2006Natur.441..953M}. However, there is a paucity of detailed MHD models that could explain the wind origin by directly fitting the data, with \cite{2017NatAs...1E..62F} in the case of GRO J1655-40 being one of the few examples.\\ \\ In the case of GRO J1655-40, \cite{Miller2006_2006Natur.441..953M} and \cite{Kallman2009_2009ApJ...701..865K}, using photo-ionisation modelling, argue that the wind cannot be explained with one component of absorption in the soft state. They estimated the launch radius from the ionisation parameter ($\xi$) and compared it with the Compton radius ($R_{C}$) to disfavour the possibility of a single component thermal origin. \cite{Kallman2009_2009ApJ...701..865K} found that the lines with lower blueshift and larger launching radius from the black hole may be consistent with a thermal wind, while the Fe K lines with a shorter launching radius and higher blue-shift favour a magnetic origin. \cite{Neilsen2012_2012ApJ...750...27N} made a comparison of the wind in the soft and hard states of GRO J1655-40, and found that photo-ionisation alone cannot explain the disappearance of many of the absorption lines in the hard state. \cite{Neilsen2012_2012ApJ...750...27N} further argue for the presence of a hybrid wind (thermal and MHD) in the hard state, evolving into a two component wind in the soft state, as also suggested by \cite{Kallman2009_2009ApJ...701..865K} and \cite{Miller2006_2006Natur.441..953M}. However, \cite{2017NatAs...1E..62F}, using a physically self-consistent photo-ionised MHD wind model, found that the two components identified by previous works may be well described by a single MHD wind launched from a large range of radii from the accretion disc. \cite{Miller2015_2015ApJ...814...87M,Miller2016_2016ApJ...821L...9M} showed, from a purely photo-ionisation point of view, that at least two or three absorption components were required to model the lines in the case of all of the above mentioned sources.
Another argument disfavouring a thermal origin is the consistency of the launch radius estimated from re-emission of the wind and from the ionisation parameter \citep{Trueba2019_2019ApJ...886..104T,Miller2015_2015ApJ...814...87M}.\\ \\ From the recent NICER observations of GRS 1915+105, \cite{Neilsen2018_2018ApJ...860L..19N} found that the accretion disc wind is persistent, with the absorption line flux changing depending on the state and count rate. In this work we model the absorption lines in GRS 1915+105 in two representative soft and hard states, using the same MHD model as in \cite{2017NatAs...1E..62F}. We also explore a similar approach to \cite{Ueda2009_2009ApJ...695..888U}, dividing the soft state observation into two epochs where there was an increase in flux. In section 2 we describe the observations and data reduction. In section 3 we show the light curves, and in section 4 the spectra of both the soft and hard states. In section 5 we show the phenomenological scaling relations and MHD wind modelling, and finally in section 6 we discuss the results in comparison to previous works. \section{Observations and Data reduction} We considered the high resolution X-ray spectroscopy offered by the Chandra High Energy Transmission Grating Spectrometer (HETGS) to study the wind variability in different states of GRS 1915+105. The source was observed by Chandra using the HETGS instrument 22 times. However, we decided to analyse two observations, selected based on the presence or absence of a strong wind as reported in \cite{2009Natur.458..481N}. Since GRS 1915+105 does not have a regular hard state as in the case of other black hole X-ray binaries, we distinguished the soft and hard states by the spectral class ($\phi$ for soft and $\chi$ for hard) as reported in \cite{2009Natur.458..481N}. We limit our analysis to only two observations since the motive of the study is to understand the change in state with respect to the change in wind, for which these two observations form an adequate sample. The two observations are comparatively less affected by pile-up and provide a view of the source in two different spectral states. The observation performed on 2007-08-14 with an exposure of 49 ks (obsid:7485) corresponds to a soft state and the one performed on 2000-04-24 with an exposure of 30.3 ks (obsid:660) corresponds to a hard state. Other than being in two spectral states, the observations also differ in the strength and number of emission and absorption lines present in the spectra \citep{2009Natur.458..481N}. Both observations were in "pointing" mode and "timed" readout mode. We collected the archival data of these observations from the Chandra Data Archive (CDA) and used the CIAO 4.11 software package for the data reduction and analysis \citep{ciao_2006SPIE.6270E..1VF}. We reprocessed the data using the "chandra\_repro" script to ensure data reduction with the latest calibration database (CALDB 4.8.2). In bright sources, the zeroth order position cannot be correctly identified by 'tgdetect' due to the hole in the zeroth order image. Hence the zeroth order was instead identified with 'tg\_findzo' rather than 'tgdetect2'. For obsid:660, ACIS S-2 and ACIS S-3 were used to extract the signal, while for obsid:7485 ACIS S-1, ACIS S-2, ACIS S-3 and ACIS S-4 were used. We used only the HEG spectrum, as it has twice the energy resolution of the MEG at higher energies.
Moreover, we were not interested in the energy range below 1.5 keV, as GRS 1915+105 is heavily absorbed in that range and lower energies are more affected by pile-up. We examined the +1 and -1 grating spectra separately to check for correspondence before combining them using "add\_grating\_orders". Finally, we used "mkgrmf" and "mkgarf" to generate the combined rmf and arf response files. \section{Light curve} The light curves for both observations were extracted using the "dmextract" tool. We considered the energy range from 1.5 keV to 7.2 keV. The light curves were analysed and re-binned using "lcurve" of XRONOS 5.22. Figs \ref{7485_lc} and \ref{660_lc} show the light curves for the soft and hard states, respectively. Based on the light curve and colour-colour diagrams, \cite{2000A&A...355..271B} classified GRS 1915+105 into 12 classes. \cite{2000A&A...355..271B} found that all the classes can be reduced to three fundamental states (A, B and C). In state B the accretion disc extends all the way down to the innermost stable circular orbit (ISCO), and in state C the inner accretion disc is not visible, for reasons which are still debated. State A corresponds to a soft state with higher soft colour and higher count rate. In obsid:7485, an increase in flux was observed during the observation. During the initial phase of this observation the source was in class $\phi$ (only state A) and in the later phase the flux of the source increased to that of $\delta$ (transition between state A and state B). However, there is no change in the colour-colour diagram, indicating the source to be in a bright $\phi$ class \citep{Ueda2009_2009ApJ...695..888U}. During obsid:660 the source was in the $\chi$ class (only state C). \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[width=\hsize]{grs1915+105_hetg_lc_7485_100s_2.pdf}} \caption{Light curve of GRS1915+105 in the soft state (obsid:7485) with a bin size of 100s.} \label{7485_lc} \end{figure} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[width=\hsize]{grs1915+105_hetg_lc_660_100s.pdf}} \caption{Light curve of GRS1915+105 in the hard state (obsid:660) with a bin size of 100s.} \label{660_lc} \end{figure} \section{Spectral analysis} The combined grating spectra of the 1st order were analysed using XSPEC 12.10.1. We used only the HEG as the energy interval of interest is in the harder X-rays. We find an excess in soft X-rays below 2 keV in both the soft and hard state observations, peaking at $\sim$ 0.12 keV, whose origin might not be physical and might instead be an instrumental feature. A similar feature for this source was also reported in an XMM-Newton observation \citep{Martocchia_2006A&A...448..677M}. We fitted the continuum spectra with a disc black body for the disc emission \citep[diskbb:][]{diskbb1_1984PASJ...36..741M,diskbb2_1986ApJ...308..635M}, a black body for the soft excess (bbody) and a power-law continuum for the X-ray corona \citep[nthcomp:][]{nthcomp1_1996MNRAS.283..193Z,nthcomp2_1999MNRAS.309..561Z}, along with a neutral galactic absorption model \citep[tbabs:][]{TBABS_2000ApJ...542..914W}. However, since the spectra are piled up due to the high flux of GRS 1915+105, the parameters of the continuum model should be considered only as a phenomenological best fit. Absorption and emission lines were modelled with Gaussians ($zgauss$). Therefore, we concentrate our spectral analysis on the line properties, considering a phenomenological model for the continuum.
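For concreteness, the sketch below shows how such a continuum-plus-lines model could be set up in PyXspec. It is a minimal sketch, not the analysis script used here: the spectrum file name is a placeholder, and the starting values follow the soft-state best fit of Table \ref{fig_continumm_all}.
\begin{verbatim}
# Minimal PyXspec sketch of the phenomenological model described above.
# The file name is hypothetical; parameter values follow the soft-state
# best fit. Absorption lines are added as zgauss components with
# negative normalisations (inverted Gaussians).
from xspec import AllData, Fit, Model

AllData("heg_combined_7485.pha")        # placeholder file name
AllData.ignore("**-1.5 7.2-**")         # keep 1.5-7.2 keV

m = Model("tbabs*(diskbb + bbody + nthComp + zgauss)")
m.TBabs.nH = 6.60          # 1e22 cm^-2
m.diskbb.Tin = 2.28        # keV
m.bbody.kT = 0.1421        # keV
m.zgauss.LineE = 6.97      # keV, e.g. the Fe XXVI 2p line
m.zgauss.norm = -4.4e-3    # negative norm -> absorption line

Fit.statMethod = "chi"     # chi^2 statistics, as used in the text
Fit.query = "yes"
Fit.perform()
Fit.error("1.0 1")         # 1-sigma error on parameter 1 (nH)
\end{verbatim}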
The $\chi^2$ statistic was used for the fits and the errors are calculated at the $1\sigma$ level. The modelled continuum parameters of both observations are given in Table \ref{fig_continumm_all}. \begin{table*} \caption{Continuum parameters obtained from the spectral fits for the soft (obsID:7485) and hard (obsID:660) state Chandra observations of GRS 1915+105. Parameters marked with $^f$ were kept fixed during the fit.} \label{fig_continumm_all} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{c c c c c} \hline\hline Component & Parameter & Unit & soft & hard \\ \hline TBabs & N$_H$ & $\times10^{22}$~cm$^{-2}$ & $6.60^{+0.06}_{-0.06}$ & $5.225^{+0.055}_{-0.037}$ \\ diskbb & $T_{in}$ & keV & $2.28^{+0.03}_{-0.03}$ & $0.77^{+0.02}_{-0.01}$ \\ diskbb & norm & & $30.6^{+1.6}_{-6.4}$ & $1575^{+141}_{-324}$ \\ bbody & $T$ & eV & $142.1^{+3.3}_{-3.3}$ & $126.3^{+7.7}_{-9.2}$ \\ bbody & norm & & $4.82^{+1.01}_{-0.76}$ & $1.11^{+1.18}_{-0.41}$ \\ nthcomp & $\Gamma$ & & $>3.91$ & $1.00^f$ \\ nthcomp & $kT_{e}$ & keV & $<1.00$ & $2.17^f$\\ nthcomp & $kT_{bb}$ & keV & $0.515^{+0.009}_{-0.002}$ & $0.85^{f}$ \\ nthcomp & norm & ph~keV$^{-1}$~cm$^{-2}$~s$^{-1}$ & $7.5^{+0.4}_{-0.3}$ & $0.04^{f}$ \\ flux & & $\times 10^{-9}$ ergs cm$^{-2}$ s$^{-1}$ & $7.7^{+3.7}_{-0.9}$ & $6.7^{+1.2}_{-3.0}$ \\ $\chi^2$/dof & & & $2833/2635$ & $2766/2635$ \\ \hline \end{tabular} \end{table*} \subsection{Soft state} During obsid:7485, GRS 1915+105 was in class $\phi$, indicating a relatively soft spectrum dominated by the disc emission. Fig \ref{Fig_7485_660_total} shows the spectrum over the entire observation 7485 and Fig \ref{Fig_7485_660_total_zoomed} shows zoomed-in views of the same at different energy ranges. We clearly find 20 absorption lines and 1 emission line in the spectrum. The lines were selected based on a minimum threshold in $\Delta\chi^2$ of $15$. We fitted the absorption lines with inverted Gaussians and the emission line with a Gaussian. The line at 6.69 keV has previously been shown to split into two in the third order spectra \citep{Miller2016_2016ApJ...821L...9M}. Considering the above, a double Gaussian was used to model the Fe XXV and Fe XXIV features, as these lines are close enough to overlap into a single observed line within the energy resolution of the Chandra HETG 1st order. The energies of the Fe XXV and Fe XXIV lines were tied to each other with an energy offset of $19.2$ eV, leaving the norms and sigmas free to vary. The $\chi^2$ value of the best fit is $2928.24$ for 2581 degrees of freedom (dof). There is a wide range of ionic species detected in the spectrum, the dominant being Al, Si, S, Ar, Ca, Cr, Mn, and Fe. The parameters of the detected lines are given in Table \ref{table_line_7485_epochall}.
\begin{table*} \caption{ Line parameters for the soft state observation (obsid:7485).} \label{table_line_7485_epochall} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{c c c c c c c} \hline\hline Energy & Sigma & Norm & Eqw & Line ID & Line Energy & $\Delta$ $\chi^2$ \\ (eV) & (eV) & ($\times 10^{-3}$) & (eV) & & & \\ \hline &&&Absorption lines & & & \\ \hline $1730.5^{+0.3}_{-0.4}$ & $1.66^{+0.44}_{-0.2}$ & $-7.9^{+1.1}_{-0.8}$ & $-1.69^{+0.24}_{-0.23}$ & $Al\ XIII\ 2p$ & $1.72769$ ($^2P_{1/2}$), $1.72899$ ( $^2P_{3/2}$ ) & $68$ \\ $2006.4^{+0.1}_{-0.1}$ & $1.01^{+0.06}_{-0.12}$ & $-8.0^{+0.2}_{-0.4}$ & $-3.25^{+0.2}_{-0.14}$ & $Si\ XIV\ 2p$ & $2.00433$ ($^2P_{1/2}$), $2.00608$ ($^2P_{3/2}$) & $1152$ \\ $2236.7^{+1.7}_{-1.8}$ & $5.03^{+1.44}_{-1.28}$ & $-2.8^{+0.5}_{-0.9}$ & $-1.42^{+0.44}_{-0.44}$ & NA & NA & $21$ \\ $2378.5^{+0.2}_{-1.8}$ & $\equiv 0$ & $-2.5^{+0.2}_{-0.3}$ & $-1.5^{+0.16}_{-0.17}$ & $Si\ XIV\ 3p$ & $2.37611$ ($^2P_{1/2}$), $2.37663$ ($^2P_{3/2}$) & $99$ \\ $2470.7^{+0.7}_{-0.4}$ & $1.96^{+0.63}_{-0.58}$ & $-2.0^{+0.3}_{-0.3}$ & $-1.38^{+0.25}_{-0.21}$ & $S\ II\ 3p^d$& $2.4694^d$ & $46$ \\ $2506.9^{+0.4}_{-0.6}$ & $1.08^{+0.98}_{-0.88}$ & $-1.6^{+0.2}_{-0.3}$ & $-1.08^{+0.21}_{-0.19}$ & $Si\ XIV\ 4p$ & $2.50616$ ($^2P_{1/2}$), $2.50638$ ($^2P_{3/2}$) & $44$\\ $2622.8^{+0.2}_{-0.2}$ & $2.19^{+0.14}_{-0.17}$ & $-6.2^{+0.2}_{-0.2}$ & $-4.76^{+0.26}_{-0.19}$ & $S\ XVI\ 2p$ & $2.61970$ ($^2P_{1/2}$), $2.62270$ ($^2P_{3/2}$) & $977$ \\ $3108.8^{+0.3}_{-1.0}$ & $\equiv 0$ & $-1.5^{+0.1}_{-0.1}$ & $-1.8^{+0.1}_{-0.15}$ & $S\ XVI\ 3p$ & $3.10586$ ($^2P_{1/2}$), $3.10675$ ($^2P_{3/2}$) & $176$ \\ $3277.0^{+0.8}_{-1.3}$ & $0.33^{+1.55}_{-0.33}$ & $-0.5^{+0.1}_{-0.1}$ & $-0.65^{+0.17}_{-0.19}$ & $S\ XVI\ 4p$ & $3.27589$ ($^2P_{1/2}$), $3.27627$ ($^2P_{3/2}$) & $17$\\ $3322.7^{+0.2}_{-0.3}$ & $2.59^{+0.24}_{-0.39}$ & $-2.9^{+0.1}_{-0.1}$ & $-4.05^{+0.21}_{-0.2}$ & $Ar\ XVIII\ 2p$ & $3.31818$ ($^2P_{1/2}$), $3.32299$ ($^2P_{3/2}$) & $651$ \\ $3902.2^{+5.8}_{-0.6}$ & $\equiv 0$ & $-0.5^{+0.1}_{-0.1}$ & $-1.03^{+0.2}_{-0.27}$ & $Ca\ XIX\ 2p$ & $3.90226$ ($^1P_1$) & $31$ \\ $3936.3^{+2.4}_{-1.3}$ & $5.96^{+1.92}_{-3.88}$ & $-0.7^{+0.1}_{-0.1}$ & $-1.47^{+0.33}_{-0.25}$ & $Ar\ XVIII\ 3p$ & $3.93429$ ($^2P_{1/2}$), $3.93572$ ($^2P_{3/2}$) & $40$ \\ $4107.1^{+0.3}_{-0.4}$ & $4.69^{+0.44}_{-0.41}$ & $-2.5^{+0.1}_{-0.1}$ & $-5.91^{+0.23}_{-0.25}$ & $Ca\ XX\ 2p$ & $4.10015$ ($^2P_{1/2}$), $4.10750$ ($^2P_{3/2}$) & $784$ \\ $4865.3^{+1.5}_{-2.8}$ & $1.02^{+3.08}_{-1.02}$ & $-0.4^{+0.1}_{-0.1}$ & $-1.34^{+0.27}_{-0.35}$ & $Ca\ XX\ 3p$ & $4.86192$ ($^2P_{1/2}$), $4.86410$ ($^2P_{3/2}$) & $25$ \\ $5685.4^{+1.3}_{-4.8}$ & $\equiv 0$ & $-0.4^{+0.1}_{-0.1}$ & $-2.34^{+0.55}_{-0.39}$ & $Cr\ XXIII\ 2p$ & $5.68205$ ($^1P_{1}$) & $29$ \\ $5931.3^{+0.8}_{-4.6}$ & $1.14^{+3.85}_{-1.14}$ & $-0.7^{+0.1}_{-0.1}$ & $-4.23^{+0.48}_{-0.41}$ & $Cr\ XXIV\ 2p$ & $5.91650$ ($^2P_{1/2}$), $5.93185$ ($^2P_{3/2}$) & $73$ \\ $6446.2^{+4.5}_{-3.4}$ & $6.65^{+2.52}_{-7.21}$ & $-0.7^{+0.1}_{-0.1}$ & $-6.05^{+1.34}_{-1.22}$ & $Mn\ XXV\ 2p$ & $6.42356$ ($^2P_{1/2}$), $6.44166$ ($^2P_{3/2}$) & $54$ \\ $6682.6^{+1.0}_{-1.5}$ & $23.18^{+2.48}_{-1.6}$ & $-2.0^{+0.1}_{-0.1}$ & $-16.72^{+8.53}_{-8.7}$ & $Fe\ XXIV\ 2p$ & $6.67644$ ($^2P_{1/2}$), $6.67915$ ($^2P_{3/2}$) & $307$ \\ $6701.8^{+1.0}_{-1.5}$ & $23.12^{+1.42}_{-2.49}$ & $-2.4^{+0.1}_{-0.2}$ & $-19.66^{+7.12}_{-9.84}$ & $Fe\ XXV\ 2p$ & $6.6676$ ($^3P_{1}$), $6.7004$ ($^1P_{1}$) & $429$ \\ $6975.6^{+0.8}_{-0.8}$ & $16.31^{+0.82}_{-0.82}$ & $-4.4^{+0.1}_{-0.1}$ 
& $-41.34^{+1.44}_{-1.0}$ & $Fe\ XXVI\ 2p$ & $6.95196$ ($^2P_{1/2}$), $6.97317$ ($^2P_{3/2}$) & $2225$ \\ \hline &&& Emission lines && &\\ \hline $6538.9^{+19.6}_{-14.6}$ & $163.72^{+13.04}_{-14.8}$ & $4.1^{+0.3}_{-0.4}$ & $34.21^{+5.43}_{-5.66}$ & $Fe\ K$ & & $182$ \\ \hline \end{tabular} \end{table*} \begin{figure}[h] \centering \resizebox{\hsize}{!}{\includegraphics[width=1.\hsize]{grs1915_pheno2.png}} \caption{Top panel shows the spectrum of GRS 1915+105 in the soft state (obsid: 7485) and hard state (obsid: 660) after modelling the absorption and emission lines using the phenomenological Gaussian line profiles. The data in the soft and hard states are plotted in black and grey, with the respective models in red and blue. The bottom panel shows the ratio of data and model.} \label{Fig_7485_660_total} \end{figure} \begin{figure}[h] \centering \resizebox{\hsize}{!}{\includegraphics[width=1.\hsize]{grs1915_pheno_zoom2.png}} \caption{Zoomed spectrum (Top: 6.3-7.2 keV, middle: 3.0-4.5 keV, bottom: 1.8-2.7 keV) for the soft state (data in black and model in red) and hard state (data in grey and model in blue).} \label{Fig_7485_660_total_zoomed} \end{figure} \subsection{Hard state} During obsid:660, GRS 1915+105 was in class $\chi$, with a hard spectrum. The spectrum is rather featureless, with just a faint Fe XXVI absorption line. The $\chi^2$ value for the best fit is $2916.25$ for 2636 dof. Fig \ref{Fig_7485_660_total} shows the spectrum and Table \ref{table_line_660} gives the line parameters. Further, a zoomed version of the hard state spectrum in different energy intervals is shown in Fig \ref{Fig_7485_660_total_zoomed}. \begin{table*} \caption{Line parameters for the hard state observation (obsid:660).} \label{table_line_660} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{c c c c c c} \hline\hline Energy & Sigma & Norm & Eqw & Line ID & Line Energy (keV)\\ (eV) & (eV) & ($\times 10^{-3}$) & (eV) & & \\ \hline &&&Absorption lines \\ \hline $6982.1^{+9.1}_{-10.1}$ & $13.17^{+9.14}_{-5.88}$ & $-0.6^{+0.1}_{-0.2}$ & $6.24^{+1.40}_{-2.50} $ & $Fe\ XXVI\ 2p$ & $6.95196$ ($^2P_{1/2}$), $6.97317$ ($^2P_{3/2}$) \\ \hline \end{tabular} \end{table*} \section{Disc wind characterisation} \subsection{Phenomenological scaling relations} Fig \ref{fig_7485_vs_vw_eqw} shows that the profiles of the velocity shift, velocity width, and equivalent width of the absorption lines may be described by a power-law function ($A+B\times E^{\lambda}$) of the line energies, which are a proxy for the different ionic species and ionisation states. We take positive velocity shifts to be blue-shifts. Within the uncertainties of current high resolution X-ray instruments, it is not possible to establish a clear non-linear trend, if any; upcoming high resolution instruments like XRISM and ATHENA will be able to do so \citep{XRISM_2020arXiv200304962X, ATHENA_2013arXiv1306.2307N}. The velocity shift is calculated with respect to the rest-frame line energy, obtained as the average of the different transitions weighted by their oscillator strengths. The velocity width is calculated as V$_{w}$ = $2.355\sigma$c$/E_0$, where c is the speed of light, $E_0$ is the observed energy, V$_{w}$ is the full width at half maximum (FWHM) velocity width, and $\sigma$ is the Gaussian line width. The parameters of the fit are listed in Table \ref{table_pl_fit}.
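As a worked example of these conversions, the sketch below uses the Fe XXVI values from Table \ref{table_line_7485_epochall}; the 1:2 weighting of the doublet is an assumed approximation to the oscillator-strength weighting, and the fitting call is illustrative only.
\begin{verbatim}
from scipy.optimize import curve_fit

c = 2.998e5  # speed of light, km/s

def velocity_shift(E_rest, E_obs):
    # Positive values = blue-shift, following the convention above
    return c * (E_obs - E_rest) / E_rest

def velocity_width(sigma, E_obs):
    # FWHM velocity width, V_w = 2.355 * sigma * c / E_0
    return 2.355 * sigma * c / E_obs

# Fe XXVI 2p: rest energy ~6966 eV (assumed 1:2 doublet weighting),
# observed 6975.6 eV, sigma = 16.31 eV
print(velocity_shift(6966.0, 6975.6))   # ~ +413 km/s
print(velocity_width(16.31, 6975.6))    # ~ 1651 km/s

def powerlaw(E, A, B, lam):
    # Power-law profile A + B * E^lam fitted to the line trends
    return A + B * E**lam

# popt, _ = curve_fit(powerlaw, line_E, shifts,
#                     p0=[130.0, 3.6e-21, 27.0])
\end{verbatim}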
The observed trends can be attributed to different wind ionisation states and the different radii at which the ions are present. \\ \\ Non-linear profiles of the velocity shift, velocity width, and equivalent width can indicate the existence of multiple components within the outflow. Most lines are consistent with a constant fit, while some, particularly the Fe XXVI and Fe XXV lines, show a large deviation. The different components may either be independent or be part of the same outflow spread over different radii and ionisation stages. Similar profiles in velocity shift and velocity width have also been seen in GRO J1655-40 by \cite{Kallman2009_2009ApJ...701..865K}. In GRO J1655-40 a large part of the outflow has a similar velocity shift, with some ions deviating significantly from the constant value, as in the case shown here \citep{Kallman2009_2009ApJ...701..865K}. \cite{2017NatAs...1E..62F} showed that the non-linear absorption line trend found in GRO J1655-40 was indicative of different segments of the same magnetic outflow. In the next section we explore such a possibility for this source. \begin{table} \caption{Parameters of the power-law fit to the velocity shift, velocity width and equivalent width.} \label{table_pl_fit} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{c c c c} \hline\hline Function & A & B & $\lambda$ \\ \hline velocity shift & $134.72$ & $3.56\times10^{-21}$ & $27.11$\\ velocity width & $554.88$ & $1.70\times 10^{-8}$ & $16.58$ \\ eqw & $-2.26$ & $-3.22\times 10^{-19}$ & $23.81$\\ \hline \end{tabular} \end{table} \begin{figure}[h] \centering \resizebox{\hsize}{!}{\includegraphics[width=1.05\hsize]{vs_vw_eqw_EpochAll.png}} \caption{Velocity shift (top), velocity width (middle), and equivalent width (bottom) as a function of line energy for the soft state. In the top panel positive velocity shift corresponds to blue-shift.} \label{fig_7485_vs_vw_eqw} \end{figure} \subsection{Magnetic disc wind modelling} A magnetic origin of the wind has been suggested by several authors in both AGN and X-ray binaries \citep{Fukumura2010_2010ApJ...715..636F,2017NatAs...1E..62F,Blanford1982_1982MNRAS.199..883B,Contopoulos1994_1994ApJ...432..508C,Contopoulos1995_1995ApJ...450..616C,Ferreira1997_1997A&A...319..340F}. Plasma ejected and accelerated from the disc surface along the poloidal magnetic field lines can give rise to magnetically driven accretion disc winds \citep{2017NatAs...1E..62F}. The model describes a mass-invariant, 2.5D, steady-state and axisymmetric wind structure for a given set of wind parameters. It accounts for multiple absorption lines simultaneously in a self-consistent manner by modelling the internal structure of the wind, i.e. its velocity, velocity gradient, density and ionisation. This model can be used to determine the wind structure for a large range of parameters, such as distance, velocity and density, thus potentially constraining lines from a wide range of ionisation states, in both the soft and hard X-ray bands.\\ \\ The model is comprised of two main parts: the poloidal structure of the wind from a geometrically thin accretion disc, and the ionisation state of the plasma, computed by considering the local heating and cooling balance and the ionisation equilibrium \citep{Fukumura2010_2010ApJ...715..636F,2017NatAs...1E..62F}. The ionising continuum is assumed to be a point source near the black hole.
The model comprises a single continuous wind structure in which different ions, at different ionisation states, represent parts of the same outflow. The higher the ionisation, column density and velocity, the closer the wind component is to the black hole. For a detailed description of the model we refer to \cite{2017NatAs...1E..62F,Fukumura2010_2010ApJ...715..636F}. The properties of the wind depend on the inclination angle ($\theta$), the ionising spectral energy distribution (SED), and the density normalisation ($n_{0}$) at the innermost launching radius, which is assumed to be the innermost stable circular orbit (ISCO) for a Schwarzschild black hole. If the radii (r) and mass flux ($\Dot{m}$) are scaled by the Schwarzschild radius ($R_S$) and Eddington rate ($\Dot{M}_{Edd}$), then the wind density profile can be expressed as $n(r)$ $=$ $n_{0}r^{-p}$. Then, the column density ($N_H$), the ionisation parameter ($\xi$) and the velocity ($v$) can be related to each other: $N_H$ $\propto$ $\xi^{(p-1)/(2-p)}$, $N_H$ $\propto$ $v^{2(p-1)}$, $v$ $\propto$ $\xi^{1/[2(2-p)]}$ \citep{2017NatAs...1E..62F}. \\ \\ We calculated the detailed wind photo-ionisation structure using the ionising SED derived from extrapolating the observed Chandra unabsorbed continuum in the energy interval between E=13.6 eV and E=13.6 keV for both the soft and hard states. However, only the disc blackbody component and the power law component of the continuum were considered for the SED. We assumed that the wind starts at the ISCO in both the soft and the hard state. Since the inclination angle is well constrained for this source, we fixed the inclination at $70.0$ degrees. Major line transitions and edges observable in the X-ray band have been implemented using solar abundances, apart from Fe, S and Si, whose abundances were left as free parameters. We varied $p$ and $n_{0}$ in a self consistent way. For the fit, $n_{0}$ was set to vary between $10^{15}$ $cm^{-3}$ and $3.2\times10^{18}$ $cm^{-3}$, and the density slope $p$ was set to vary between 0.9 and 1.5. For the soft state we obtained the best fit for $p$ = $1.378^{+0.001}_{-0.001}$ ($n$ $=$ $n_0(r/r_0)^{-1.38}$), and $n_{0}$ = $(19.8^{+0.6}_{-0.7})\times10^{17}$ cm$^{-3}$ with a $\chi^2$ of 4520 for 2631 dof (Fig \ref{fig_7485_660_EpochALL_mhdmodel}). For the hard state the best fit was obtained for $p$ = $1.378^{+0.005}_{-0.006}$ ($n$ $=$ $n_0(r/r_0)^{-1.38}$), and $n_{0}$ = $(0.50^{+0.05}_{-0.14})\times10^{17}$ cm$^{-3}$ with a $\chi^2$ of 2741 for 2631 dof. In the soft state we obtained an A$_{S}$ of $1.19^{+0.08}_{-0.07}$, a lower limit of 2.92 for A$_{Fe}$, and an upper limit of 1.08 for A$_{Si}$ at 90 percent confidence. Similarly, in the hard state we obtained an A$_{Fe}$ of $1.43^{+0.15}_{-0.11}$ and lower limits of 2.42 and 2.0 for A$_{S}$ and A$_{Si}$ at 90 percent confidence. However, we note that the fit is not very sensitive to the varying abundances and consistent results are obtained when fixing the values to solar abundances. The broad-band modelling of the soft and hard states using the MHD wind model is shown in Fig \ref{fig_7485_660_EpochALL_mhdmodel}. The zoomed plot focused on the Fe XXV and Fe XXVI lines is shown in Fig \ref{fig_7485_660_mhdmodel_zoom}.
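The density profile and scaling relations above can be evaluated directly from the quoted best-fit values; the short sketch below does so, using only the numbers quoted in the text.
\begin{verbatim}
# Sketch of the self-similar wind scalings, using only the best-fit
# values quoted above (see Fukumura et al. 2010, 2017 for the model).
p = 1.378                # density slope (both states)
n0_soft = 19.8e17        # cm^-3, normalisation at the ISCO (soft)
n0_hard = 0.50e17        # cm^-3, normalisation at the ISCO (hard)

def density(r_over_r0, n0, p=p):
    # n(r) = n0 * (r/r0)^-p, with r0 the ISCO
    return n0 * r_over_r0**(-p)

print(n0_soft / n0_hard)          # ~40: the soft-state wind is denser

# Exponents of N_H ~ xi^[(p-1)/(2-p)] and v ~ xi^[1/(2(2-p))]
print((p - 1.0) / (2.0 - p))      # ~0.61
print(1.0 / (2.0 * (2.0 - p)))    # ~0.80
\end{verbatim}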
\begin{figure}[h] \centering \resizebox{\hsize}{!}{\includegraphics[width=1.\hsize]{grs1915_mhd2.png}} \caption{Top panel shows the spectrum of GRS 1915+105 in the soft state and hard state after modelling the absorption and emission lines with the MHD model. The data in the soft and hard states are plotted in black and grey, with the respective models in red and blue. The bottom panel shows the ratio of data and model.} \label{fig_7485_660_EpochALL_mhdmodel} \end{figure} \begin{figure}[h] \centering \resizebox{\hsize}{!}{\includegraphics[width=1.\hsize]{grs1915_mhd_zoom2.png}} \caption{Zoomed (Top: 6.3-7.2 keV, middle: 3.0-4.5 keV, bottom: 1.8-2.7 keV) spectrum of GRS 1915+105 in the soft state (data in black and model in red) and hard state (data in grey and model in blue) after modelling with the MHD model.} \label{fig_7485_660_mhdmodel_zoom} \end{figure} \section{Discussion and conclusion} In this paper, we investigate the origin of the observed abrupt differences in the wind absorption line properties between a soft and a hard state of GRS 1915+105 observed with Chandra HETG. We model the continuum spectra in both states using a black body and power law emission. We note that this is a phenomenological continuum model and we do not investigate the continuum physical parameters in detail, because the main objective of this study is to investigate the narrow absorption features. As a second step, we fit the rich spectrum in the soft state with twenty absorption lines and one emission line using phenomenological Gaussian profiles. A faint Fe XXVI line found in the hard state was also fit with an inverted Gaussian profile. \\ \\ We find that the phenomenological scaling relations of the velocity shift, velocity width and equivalent width of the absorption lines might follow a non-linear pattern with respect to energy, as a tracer of their ionisation, thus suggesting multiple components in the wind. To explore this scenario further we fitted both the soft and hard state spectra using the MHD model that was used for the Chandra HETG spectrum of GRO J1655-40 in \cite{2017NatAs...1E..62F}. We find that in the soft state the best fit is obtained with a high wind density normalisation ($n_{0}$ = $(19.8^{+0.6}_{-0.7})\times10^{17}$ cm$^{-3}$) and a radial wind density profile ($n$ $\propto$ $r^{-p}$) with $p \simeq 1.38$. Instead, in the hard state the best-fit values indicate a wind with a low density normalisation ($n_{0}$ = ($0.50^{+0.05}_{-0.14})\times10^{17}$ cm$^{-3}$), and a density profile with $p \simeq 1.38$. The main difference between the two absorption spectra is not dominated by the continuum change but is instead due to an increase of a factor of $\sim$40 in the wind density normalisation going from the hard to the soft state. Our model therefore predicts a persistent presence of disc winds even during the hard state, although their spectroscopic appearance is much weaker. This result shows that the intrinsic wind conditions change internally in the different states of this system, as previously speculated for other sources \citep{Neilsen2012_2012ApJ...750...27N, Ponti_2015MNRAS.446.1536P}. This also supports the suggested magnetic origin of the disc wind in GRS 1915+105 \citep{Miller2015_2015ApJ...814...87M, Miller2016_2016ApJ...821L...9M}. \\ \\ The wind density should in principle be a function of various disc microphysics and magnetic field properties, which are beyond the current state of our model.
\cite{Fukumura_2014ApJ...780..120F} investigated various morphologies of MHD winds depending on the plasma properties (e.g. the magnetic flux and the plasma angular momentum). In this work we assume that the wind morphology remains almost unchanged throughout, considering that the global magnetic field pressure is dominant over the radiation pressure in the framework of a generic MHD wind. In this case, the global field lines act like solid rigid wires \citep[e.g. ][]{Blanford1982_1982MNRAS.199..883B} and are only weakly sensitive to the mass accretion rate. It is conceivable, however, that the local weak magnetic fields inside the disc, responsible for the magneto-rotational instability (MRI), are more intimately coupled to the internal disc structure, and the mass accretion rate and accretion mode may well be a function of those local magnetic fields, while the global magnetic fields are persistent \citep[see][]{Mishra_Begelman_2020MNRAS.492.1855M}. Therefore, while the large-scale magnetic field structure is essentially unchanged, the cause of the wind density change could be a change in the local magnetic fields within the disc and/or in the accretion rate. This model is a steady-state calculation that reflects the state of the disc-wind system at a given time. The wind could readjust over its entire extent on time scales equal to the viscous time at its outer edge ($\sim$10$^6$ R$_g$, where R$_g$ is the Schwarzschild radius), and much faster locally ($r \ll 10^6$ R$_g$), for reasons over which we currently do not have full control in our calculations. These time scales are of the order of a few days for a 10 solar mass black hole, not inconsistent with the timescale of state transitions in BHBs. \\ \\ In this model we considered that the wind is launched from near the ISCO in both states, so the characteristics of the MHD wind do not seem to depend strongly on a putative truncation radius of the disc in the hard state. Our fit results suggest that the wind structure changes with the possible state change, and that photo-ionisation alone cannot explain the disappearance of the wind in the hard state, in accordance with what was reported by, for example, \cite{Neilsen2012_2012ApJ...750...27N,Ponti_2015MNRAS.446.1536P}. It is conceivable that the change in the wind density is linked to a change in the accretion disc density, accretion rate and/or geometry, which is itself responsible for the state change. \cite{Chakravorty_2016AN....337..429C} claim that a radial power-law profile of the accretion rate determines the profile of the outflow, and that for a higher power of the profile the MHD wind is closer to the black hole. Our results are somewhat consistent with their findings, as we assumed a wind starting from the ISCO and we obtain a relatively steep profile in the soft state. They further claim that a wind is not possible in the canonical hard states of BHBs; however, the hard state of GRS 1915+105 is peculiar because the source does not follow the hysteresis loop in the hardness-intensity diagram (HID) seen in other LMXBs. \\ \\ In principle, in the hard state, the wind could also be explained by a thermal origin. However, with the resolution and sensitivity of Chandra HETG, it is not possible to clearly differentiate between these two cases. An upcoming high-resolution micro-calorimeter mission like XRISM might shed light on these rather controversial issues in more detail. Also, in the soft state we can see that an additional weak component might be required to fit all the lines properly.
This can be attributed to a second component, possibly a thermal wind. The coexistence of magnetic and thermal components in the soft state would be consistent with the scenario that other authors have interpreted as a multiple-component wind \citep{Miller2015_2015ApJ...814...87M,Miller2016_2016ApJ...821L...9M}. In that scenario, the magnetic wind might be more variable, while the thermal wind could vary only in response to the observed continuum. This means that the strong magnetic wind seen in the soft state was either absent or became too weak for a detection in the hard state with Chandra HETG, while the thermal wind was present in both states. This scenario can be similar to the hybrid thermal and magnetic wind suggested by \cite{Neilsen2012_2012ApJ...750...27N} for GRO J1655-40 in the hard state. In the soft state of GRO J1655-40, the two-component wind found through photo-ionisation modelling by several authors \citep{Kallman2009_2009ApJ...701..865K,Miller2006_2006Natur.441..953M, Neilsen2012_2012ApJ...750...27N} can be part of a single-component MHD outflow, as suggested by \cite{2017NatAs...1E..62F}. However, even in \cite{2017NatAs...1E..62F}, we see that a weak additional component might be required for a better fit. Another aspect that we did not incorporate in the spectra is the re-emission from the wind, broadened by the Keplerian rotation at the photo-ionisation radius, as suggested by \cite{Miller2015_2015ApJ...814...87M,Miller2016_2016ApJ...821L...9M}. We will incorporate the re-emission in future work. We find that, considering solar abundances, we can derive a very good representation of the wind absorption. However, we note that some authors considered the possibility of super-solar abundances for some elements when applying more simplistic models \citep[e.g.][]{Ueda2009_2009ApJ...695..888U}. We further estimate the physical parameters of the wind corresponding to the peak distribution of Fe XXVI ions for our best-fit MHD model. Since each ion of a given charge state is produced through photo-ionisation over a radially extended region along the MHD wind, the resultant ionic column is distributed over a finite range of distances. In the following estimates we provide the peak value, which refers to the maximum column density, and we include the range of values over which a quantity exceeds 50 per cent of the peak value. In the soft state, our best fit returns an Fe XXVI peak outflow velocity of $v_{out} = 343^{+146}_{-29}$ km s$^{-1}$, an ionisation parameter of log($\xi$) $=$ $4.31^{+0.76}_{-0.55}$ erg~s$^{-1}$~cm, and an equivalent hydrogen column density of $N_H \simeq 1.3\times10^{22}$ cm$^{-2}$. We estimate a distance of the Fe XXVI absorber from the X-ray source of log(R/cm) = $13.6^{+1.0}_{-1.2}$. This does not mean that the launching radius of the wind is at this value; rather, it indicates that the peak of the ionisation parameter corresponds to this radius. Indeed, the MHD model considers a wind launched from a wide range of radii on the disc, starting from the ISCO. The absorber located at smaller distances is physically present but simply unobservable, because even iron is fully ionised there. In units of the Schwarzschild radius (R$_g$), the peak radius of Fe XXVI corresponds to $R \simeq 1.1\times10^7$ R$_g$. At such a large radius the possibility of a thermal wind cannot be ruled out.
However, we note that the broad ionisation range seen in the absorption lines, interpreted as a multiple-component outflow by many authors \citep{Miller2015_2015ApJ...814...87M,Miller2016_2016ApJ...821L...9M}, supports a magnetic origin. Indeed, our MHD wind model, in which every single absorber is physically coupled to the same continuous wind, is able to provide a very good representation of the absorption structure, requiring only a single stratified MHD disc wind.\\ \\ A recent work on accretion disc winds in H 1743-322 shows that a thermal wind is preferred to a magnetic wind \citep{Tomaru_2020MNRAS.494.3413T}. However, they consider only the Fe XXV-XXVI lines and might not be able to characterise the large number of soft X-ray lines found in GRS 1915+105. Our MHD model is able to provide a good characterisation of all the absorption lines seen in GRS 1915+105, because it intrinsically considers a wind stratified across the accretion disc. The thermal model in \cite{Tomaru_2020MNRAS.494.3413T}, and thermal wind models in general, imply a velocity trend in which the velocity decreases for decreasing launching radius, to the point where the inner wind is highly ionised and static. Our data on the GRS 1915+105 soft state show the opposite: we have a stratified wind in which the outflow velocity and the velocity width of the lines increase with increasing ionisation, which is equivalent to saying that the outflow velocity increases for decreasing launching radius. The stratification of the ionisation in the same observation is shown in \cite{Miller2015_2015ApJ...814...87M} as well. Instead, our hard state observation, in which we only observe a faint Fe XXVI line, cannot exclude the possibility of thermal driving. As we already suggested, it is also possible that we have a hybrid situation, with a combination of thermal and MHD winds, one of the two mechanisms appearing more prominently in different states of the source. \\ \\ Following \cite{2017NatAs...1E..62F,Tombesi_2015Natur.519..436T}, the local mass outflow rate from the wind can be estimated by combining the equation $\Dot{M}_{out}^{local}$ = $4\pi m_{p} n(r) r^2 v_{out}$ and the definition of the ionisation parameter $\xi$ = $L_{X}/[n(r)r^2]$, yielding $\Dot{M}_{out}$ = $4\pi m_{p} L_X v_{out}/\xi$. Considering our best-fit Fe XXVI peak parameters and an ionising luminosity in the soft state of 4.32$\times$10$^{38}$ erg s$^{-1}$, we estimate $\Dot{M}_{out}$ $\simeq$ $2.5\times$10$^{-9}$ M$_\odot$ yr$^{-1}$. The accretion rate, defined as $\Dot{M}_{acc}$ = $L_{X}/(\eta c^2)$, for the source in the soft state is $\simeq$ 7.7$\times$10$^{-8}$ M$_\odot$ yr$^{-1}$ for a typical value of $\eta = 0.1$. The power output through the wind can be derived as $\Dot{E}$ = $(1/2)\Dot{M}_{out}v_{out}^2$. Integrating the power output throughout the disc, the estimated wind energy output from the system is of the order of one percent of the luminosity. \\ \\ Upcoming high-energy-resolution instruments like XRISM \citep{XRISM_2020arXiv200304962X} and Athena \citep{ATHENA_2013arXiv1306.2307N} will be able to investigate this further. Weaker absorbers, especially in the hard X-ray band (even during the hard state, if indeed present), can be better probed with the micro-calorimeters on board XRISM and Athena. XRISM is planned to be launched in 2022. Here, we test its capabilities to investigate the accretion disc winds in GRS 1915+105 and similar sources by simulating spectra using our best-fit phenomenological models in both the soft and hard states. We used the XSPEC \texttt{fakeit} command to generate the spectra using an equivalent XRISM calorimeter response with 5 eV energy resolution and the associated background.
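Such a simulation can be scripted with PyXspec; the sketch below is a minimal, hypothetical version in which the continuum parameters are illustrative and the response, ARF and background file names are placeholders rather than the actual XRISM files used.
\begin{verbatim}
# Minimal PyXspec sketch of the fakeit simulation; parameter values
# and file names are illustrative placeholders only.
from xspec import AllData, FakeitSettings, Model

# Simplified soft-state continuum (the Gaussian wind lines are omitted)
m = Model("tbabs*(diskbb + powerlaw)")
m.TBabs.nH.values = 3.5            # assumed column density, 10^22 cm^-2
m.diskbb.Tin.values = 1.8          # illustrative inner-disc temperature, keV
m.powerlaw.PhoIndex.values = 2.5   # illustrative photon index

# Fake a 10 ks spectrum through an XRISM-like 5 eV resolution response
settings = FakeitSettings(response="xrism_calorimeter.rmf",
                          arf="xrism_calorimeter.arf",
                          background="xrism_background.pha",
                          exposure="10000.0")
AllData.fakeit(1, settings)
\end{verbatim}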
Fig.~\ref{XRISM_total} and Fig.~\ref{XRISM_Fe} show the broadband and Fe K zoomed XRISM spectra in both the soft and hard states for a short exposure of 10 ks. The accuracy in the line energy, width and equivalent width for the Fe XXIV-XXVI lines with XRISM will represent more than an order of magnitude improvement compared to Chandra HETG. The significance of the Fe XXVI, Fe XXV and Fe XXIV lines in the soft state, and of Fe XXVI in the hard state, will improve by more than a factor of three with respect to the Chandra HETGS spectra. It is to be noted that the Chandra HETG exposures in the soft and hard states were 49 ks and 30.3 ks, compared to just 10 ks for the simulated XRISM data. Hence, XRISM will very likely enable a time-resolved study of accretion winds in this and most stellar mass black hole binaries.\\ \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[width=1.0\hsize]{grs1915_XRISM_10ks.png}} \caption{Simulated XRISM spectrum in the same energy band as Chandra HETGS for GRS 1915+105. Here we used the same Chandra HETG best-fit models for the spectra in the soft (black) and hard (grey) states. The red and blue lines indicate the fitted phenomenological model.} \label{XRISM_total} \end{figure} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics[width=1.0\hsize]{grs1915_XRISM_10ks_Fe.png}} \caption{XRISM spectra of GRS 1915+105 zoomed in the 6.0 to 7.5 keV energy band. The soft and hard states are indicated in black and grey, respectively. The red and blue lines indicate the fitted phenomenological model.} \label{XRISM_Fe} \end{figure} It is important to note that X-ray binaries are among the best candidates for detecting polarised X-ray emission from the disc, the corona and the circumnuclear matter \citep{XpolDovciak_2008JPhCS.131a2004D,XpolJeremy_2009ApJ...701.1175S,XpolJeremy2_2010ApJ...712..908S,XpolKallman_2015ApJ...815...53K, XpolTaverna_2020MNRAS.493.4960T}. We tested whether the absorption lines in the GRS 1915+105 Chandra spectrum can affect the measurement of the continuum polarisation, in the context of the upcoming X-ray polarimetry mission IXPE \citep{IXPE_2016SPIE.9905E..17W}. We used the IXPEOBSSIM simulator, which takes as input the spectrum together with the polarisation angle and polarisation degree as a function of energy. The polarisation degree and angle as a function of energy depend on the geometry of the disc and corona \citep[for example see][]{XpolDovciak_2008JPhCS.131a2004D}; however, for simplicity we considered a constant polarisation angle of $30\degree$ and a constant polarisation degree of $5\%$. For the spectrum we used our Chandra soft state phenomenological model as input. In case 1 we used only the continuum, while in case 2 we included the absorption lines as well. From this simple test, we can conclude that the presence of unmodelled wind absorption in the soft state of GRS 1915+105 should not hamper the detection of polarisation from the continuum emission. However, the situation may be different if scattering from such a wind were included. As a rough qualitative estimate, considering an MHD wind density profile of $n$ $=$ $n_{0}(r/r_0)^{-1.5}$, the total integrated column density would be $N_{H} \simeq 2 n_{0} r_{0}$, where $r_{0}$ is of the order of the ISCO radius.
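This estimate follows from integrating the assumed density profile along the equatorial line of sight:
\[
N_{H} = \int_{r_0}^{\infty} n_{0}\left(\frac{r}{r_0}\right)^{-3/2} \mathrm{d}r
      = n_{0}\, r_{0} \int_{1}^{\infty} x^{-3/2}\, \mathrm{d}x
      = 2\, n_{0}\, r_{0} .
\]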
Substituting the ISCO radius for a non-spinning black hole with the mass of GRS 1915+105 ($\sim$12.4 M$_\odot$) and the density normalisations estimated from our best fits, we find that in the soft state the wind may have a column of up to $N_{H} \simeq 4.4\times$10$^{25}$ cm$^{-2}$ in the equatorial direction, while in the hard state it would have a much lower value of $N_{H} \simeq 1.1\times$10$^{24}$ cm$^{-2}$. Hence, the wind may be mostly fully ionised but Compton thick in the equatorial direction in the soft state. Moreover, we note that emission and reflection from disc winds in X-ray binaries have been suggested \citep[e.g.][]{Miller2015_2015ApJ...814...87M}. These considerations raise the possibility that some polarised signatures from the wind may be observable with IXPE if the source is caught in the soft state. We defer a quantification of the effect of disc winds on the polarised X-ray signal to future work. \begin{acknowledgements} AR and FT thank J. Neilsen, G. Matt and F. Muleri for the useful comments and discussions. We also thank the anonymous referee for their suggestions in improving this work. \end{acknowledgements} \bibliographystyle{aa}
\section{Experiments} \label{sec:experiments} This section discusses three use-cases of DONNA: scenario-aware neural architecture search (Section~\ref{subsec:experiments/nas-for-imagenet}), search-space extrapolation and design (Section~\ref{subsubsec:experiments/search-space-design}), and model compression (Section~\ref{subsubsec:experiments/compression}). We also show that DONNA can be directly applied to object detection on MS-COCO~\cite{coco} and that architectures found by DONNA transfer to optimal detection backbones (Section~\ref{subsec:experiments/detection}). DONNA is compared to random search in Appendix~\ref{sec:supplementary/random-search}. \subsection{ImageNet Classification} \label{subsec:experiments/imagenet} \input{figures_tex/figure_flops_params_gpu_iccv} \input{tables/table_comparison} We present experiments for different search spaces for ImageNet classification: DONNA, EfficientNet-Compressed and MobileNetV3 (1.0$\times$, 1.2$\times$). The latter two search spaces are \emph{blockwise} versions of the spaces considered by OFA \cite{ofa_repo}; that is, parameters such as expansion ratio and kernel size are modified on the block level rather than the layer level, rendering the overall search space coarser than that of OFA. Selected results for these spaces are discussed in this section; more extensive results can be found in Appendix~\ref{subsec:supplementary/various-search-spaces}. We first show that networks found by DONNA in the DONNA search space outperform the state of the art (Figure~\ref{fig:trendlines_overview}). For example, DONNA is up to $2.4\%$ more accurate on the ImageNet~\cite{imagenet} validation set than OFA~\cite{ofa} trained from scratch with the same number of parameters. At the same time, DONNA finds models outperforming DNA \cite{blockwise_nas} by up to 1.5$\%$ accuracy at the same latency on a V100 GPU, and models that are 10$\%$ faster than MobileNetV2 ($1.4\times$) at 0.5$\%$ higher accuracy on the Samsung S20 GPU. We also show that MobileNetV3-style networks found by DONNA achieve the same quality as models found by MnasNet~\cite{mnasnet} and OFA~\cite{ofa} when optimizing for the same metric (see Fig.~\ref{fig:donna-finds-ofa-models} and Tab.~\ref{tab:mobilenetv3_comparison}). All experiments are for ImageNet~\cite{imagenet} images with $224\times224$ input resolution. Training hyperparameters are discussed in Appendix~\ref{subsec:supplementary/training_hyperparameters}. \subsubsection{NAS for DONNA on ImageNet} \label{subsec:experiments/nas-for-imagenet} DONNA is used for \textit{scenario-aware neural architecture search} on ImageNet~\cite{imagenet}, quickly finding state-of-the-art models for a variety of deployment scenarios, see Figure~\ref{fig:trendlines_overview}.
As shown in Figure~\ref{fig:quarts_search_space}, each of the 5 blocks $B_n$ in the DONNA space can be replaced by a choice out of $M=384$ options: k $\in$ \{3,5,7\}; expand $\in$ \{2,3,4,6\}; depth $\in$ \{1,2,3,4\}; activation/attention $\in$ \{ReLU/None, Swish\cite{mbv3}/SE\cite{squeeze_excite}\}; layer-type $\in$ \{grouped, depthwise inverted residual bottleneck\}; and channel-scaling $\in$ \{$0.5\times$, $1.0\times$\}. The search space can be expanded, or arbitrarily constrained to known efficient architectures for a device. Each of these $5\times384=1920$ alternative blocks is trained using BKD to complete the Block Library. Once the Block Library is trained, we use the BKD-based ranking metric from DNA\cite{blockwise_nas} to sample a set of architectures uniformly spread over the ranking space. For the DONNA search space, we finally finetune the sampled networks for 50 epochs starting from the BKD initialization, building an Architecture Library with the accuracy targets used to fit the linear accuracy predictor. Typically, 20-30 target networks need to be finetuned to yield good results, see Appendix~\ref{subsec:supplementary/accuracy_predictors}. In total, including the training of a reference model ($450$ epochs), $450+1920+30\times50=3870$ epochs of training are required to build the accuracy predictor. This is less than $10\times$ the cost of training a single network from scratch to model the accuracy of more than 8 trillion architectures.
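As a back-of-the-envelope check, these counts can be reproduced as follows (illustrative bookkeeping only, not the authors' code):
\begin{verbatim}
# Sanity check of the DONNA search-space size and predictor-building cost.
options_per_block = 3 * 4 * 4 * 2 * 2 * 2  # k, expand, depth,
                                           # act/attention, layer-type, scaling
assert options_per_block == 384

num_blocks = 5
print(f"{options_per_block ** num_blocks:.1e}")  # ~8.4e12 ("8 trillion")

reference = 450               # train the reference model once
bkd = num_blocks * options_per_block * 1  # one BKD epoch per block
targets = 30 * 50             # ~30 predictor targets, 50 finetune epochs each
print(reference + bkd + targets)  # 3870 epochs, <10x one 450-epoch run
\end{verbatim}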
Subsequently, any architecture can be selected and trained to full accuracy in 50 epochs, starting from the BKD initialization. Similarly, as further discussed in Appendix~\ref{subsec:supplementary/accuracy_predictors}, an accuracy model for MobileNetV3 (1.2$\times$) and EfficientNet-Compressed costs $450+135+20\times50=1585$ epochs, roughly the same as training 4 models from scratch. Although this is a higher cost than OFA \cite{ofa}, it covers a much more diverse search space. OFA requires an equivalent, accounting for dynamic batch sizes~\cite{ofa_repo}, of $180+125+2\times150+4\times150=1205$ epochs of progressive shrinking with backpropagation on a large supernet. DNA \cite{blockwise_nas} requires only $450+16\times20=770$ epochs to build its ranking model, but every selected model then needs another $450$ epochs of training from scratch. Other methods like MnasNet~\cite{mnasnet} can handle a similar diversity as DONNA, but typically require an order of magnitude longer search time ($40000$ epochs) \emph{for every deployment scenario}. DONNA offers MnasNet-level diversity at a search cost that is two orders of magnitude lower. On top of that, BKD epochs are significantly faster than epochs on a full network, as BKD requires only partial computation of the reference model and backpropagation on a single block $B_{n,m}$. Moreover, and in contrast to OFA, all blocks $B_{n,m}$ can be trained in parallel since they are completely independent of each other. Table~\ref{tab:comparison} quantifies the differences in search time between these approaches. With the accuracy predictor in place, Pareto-optimal DONNA models are found for several targets. Figure~\ref{fig:trendlines_overview} shows DONNA finds networks that outperform the state of the art in terms of the number of parameters, on a simulator targeting tensor compute units in a mobile SoC, on an NVIDIA V100 GPU, and on the Samsung S20 GPU. Every predicted Pareto-optimal front is generated using an evolutionary search with NSGA-II \cite{nsga_ii, pymoo} on a population of 100 architectures until convergence. Where applicable, full-architecture hardware measurements are used in the evolutionary loop. Details on measurements and baseline accuracy are given in Appendix~\ref{subsec:supplementary/baselines}. Similarly, Tab.~\ref{tab:mobilenetv3_comparison} and Fig.~\ref{fig:donna-finds-ofa-models} show that DONNA finds models that are on par with architectures found by other state-of-the-art methods such as MnasNet~\cite{mnasnet} and OFA~\cite{ofa} in the same spaces. Tab.~\ref{tab:mobilenetv3_comparison} shows DONNA finds models in the MobileNetV3 (1.0$\times$) space that are on par with MobileNetV3~\cite{mbv3} in terms of the number of operations, although~\cite{mbv3} was found using the expensive MnasNet~\cite{mnasnet} method. Fig.~\ref{fig:donna-finds-ofa-models} shows the same for networks found through DONNA in the MobileNetV3 (1.2$\times$) search space, by comparing them to models found through OFA~\cite{ofa} optimized for the same complexity metric and trained with the same hyperparameters. More results for other search spaces are shown in Figure~\ref{fig:supplementary/search-space-overview} in Appendix~\ref{subsec:supplementary/various-search-spaces}. We also visualize Pareto-optimal DONNA models for different platforms in Appendix~\ref{subsec:supplementary/model-visualizations}. \input{figures_tex/figure_donna_finds_ofa_models} \input{tables/table_flops_vs_mobilenetv3} \subsubsection{Search-Space Extension and Exploration} \label{subsubsec:experiments/search-space-design} The DONNA approach can also be used for \textit{rapid search space extension and exploration}. Using DONNA, a designer can quickly determine whether the search space should be extended or constrained for optimal performance. Such \textit{extension} is possible because the DONNA accuracy predictor generalizes to previously unseen architectures, without having to extend the Architecture Library. This is illustrated in Fig.~\ref{fig:linear_predictor}(left), showing the DONNA predictor achieves good quality, in line with the original test set, on a ShiftNet-based test set of architectures. Figure~\ref{fig:search_space_analysis}(left) further illustrates that this extrapolation works by showing the confirmed results of a search in the ShiftNet space. Note how the trendline predicts the performance of fully trained Pareto-optimal ShiftNets even though the predictor is created without any ShiftNet data. Here, ShiftNets are our implementation, with learned shifts per group of 32 channels as a replacement for depthwise-separable convolutions. These generalization capabilities are obtained because the predictor only uses quality metrics as inputs, without requiring any structural information about the replacement block. This feature is a major advantage of DONNA compared to OFA~\cite{ofa} and other methods where the predictor cannot automatically generalize to completely different layer types, or to blocks of the same layer type with parameters (expansion rate, kernel size, depth, ...) outside of the original search space. Appendix~\ref{sec:supplementary/qnas} illustrates that such extension can also be used to model the accuracy of lower-precision quantized networks. This prototyping capability is also showcased for the DONNA search space on a V100 GPU in Figure~\ref{fig:search_space_analysis}(right). Here we interpolate, using the original accuracy predictor, for \textit{exploration}. In doing this, Fig.~\ref{fig:search_space_analysis} shows search-space diversity is crucial to achieve good performance.
The impact of optimally adding SE-attention~\cite{squeeze_excite} is especially large, with a predicted 25$\%$ speedup at 76$\%$ accuracy (line C vs. D) and a 1$\%$ accuracy boost at 26 ms (line E vs. D). Every plotted line in Figure~\ref{fig:search_space_analysis} (right) is a predicted Pareto-optimal front for a variation of the search space. A baseline (A) considers SE/Swish in every block and k $\in$ \{7\}, expand $\in$ \{3,4,6\} and depth $\in$ \{2,3,4\}. Other lines show results for search spaces built starting from (A), e.g. (B) considers k $\in$ \{5,7\}, (C) k $\in$ \{3,5,7\}, (D) removes SE/Swish, (E) allows choosing the optimal placement of SE/Swish, and (F) adds a channel-width multiplier; the full DONNA space further adds expand $\in$ \{2,3,4,6\}, depth $\in$ \{1,2,3,4\} and channel-scaling $\in$ \{0.5,1.0\}. The full DONNA space finds networks that are $30\%$ faster than a similar space with fewer choices in expansion ratios and with Squeeze-and-Excite in every block (line DONNA vs. C). \subsubsection{Model Compression} \label{subsubsec:experiments/compression} \input{figures_tex/figure_search_space_analysis} DONNA is also used for \textit{hardware-aware compression of existing neural architectures} into faster, more efficient versions. DONNA can do compression not just in terms of the number of operations, as is common in the literature, but also for different devices. This is useful for a designer who has prototyped a network for their application and wants to run it efficiently on many different devices with various hardware and software constraints. Figure~\ref{fig:efficientnet} shows how EfficientNet-B0 can be compressed into networks that are $10\%$ faster than MnasNet~\cite{mnasnet} on the Samsung S20 GPU. \input{figures_tex/figure_efficientnet} In the DONNA compression pipeline, the EfficientNet search space splits EfficientNet-B0 into 5 blocks and uses it as the reference model. Every replacement block $B_{n,m}$ considered in compression is smaller than the corresponding reference block. In total, $1135$ epochs of training are spent to build an accuracy predictor: 135 blocks are trained using BKD, and 20 architectures are trained for 50 epochs as prediction targets, a cost equivalent to training 3 networks from scratch. Figure~\ref{fig:efficientnet} shows DONNA finds a set of smaller, Pareto-optimal versions of EfficientNet-B0, both in the number of operations and on-device. These are on par with MobileNetV3~\cite{mbv3} in the number of operations and $10\%$ faster than MnasNet~\cite{mnasnet} on device. For the Samsung S20, the accuracy predictor is calibrated, as these models do not use SE and Swish in the head and stem, unlike the EfficientNet-B0 reference. Similarly, DONNA can be used to optimally compress Vision Transformers (ViT~\cite{dosovitskiy2020image}), see Appendix~\ref{sec:supplementary/vit}. \subsection{Object Detection on MS-COCO} \label{subsec:experiments/detection} \input{figures_tex/figure_detection} DONNA architectures transfer to other tasks such as object detection on MS-COCO \cite{coco}. To this end, we use the EfficientDet-D0 \cite{efficientdet} detection architecture, replacing its backbone with networks optimized through the DONNA pipeline. For training, we use the hyperparameters given in \cite{rw_coco}. The EfficientDet-D0 initialization comes from \cite{rw_imagenet}. Figure~\ref{fig:detection} shows the results of multiple such searches. First, we optimize backbones on ImageNet in the MobileNetV3 (1.2$\times$) and DONNA spaces (Ours-224), targeting both the number of operations (left) and latency on a simulator targeting tensor compute units. In this case, the input resolution is fixed to $224\times224$. The backbones are first finetuned on ImageNet and then transferred to MS-COCO.
Second, we apply the DONNA pipeline directly to the full DONNA-det0 architecture, building an accuracy predictor for MS-COCO. We optimize only the backbone and keep the BiFPN head fixed (Ours-COCO-512). In this case, the resulting networks are directly finetuned on MS-COCO, following the standard DONNA flow. For OFA \cite{ofa}, we consider two sets of models. The first set consists of models optimized for the number of operations (FLOPs) with varying input resolution, coming directly from the OFA repository \cite{ofa_repo}. The second set of models, which we identify as `OFA-224', is obtained by us with the same tools \cite{ofa_repo}, but with the input resolution fixed to $224\times224$. This makes the OFA-224 search space the same as our MobileNetV3 (1.2$\times$) space, up to the layerwise-vs-blockwise distinction. In the first experiment, we initialize the OFA backbone with weights from progressive shrinking released in \cite{ofa_repo}. In the second experiment, we initialize the OFA backbone with weights trained from scratch on ImageNet using hyperparameters from \cite{rw_imagenet}. After such initialization, the networks are transferred to object detection for comparison. The comparison of the two experiments shows that the benefit of OFA-style training is limited after transfer to a downstream task (see Fig.~\ref{fig:detection}). The gap between OFA-style training and training from scratch, which is up to $1.4\%$ top-1 on ImageNet, decreases to $0.2\%$ mAP on COCO, reducing its importance. We discuss this point further in Appendix~\ref{sec:supplementary/transfer}. In comparing with DONNA models, we make three key observations. First, models transferred after a search using DONNA are on par with or better than OFA-224 models in terms of both operations and latency. Second, models transferred from the DONNA space outperform OFA models by up to $2.4\%$ mAP on the validation set at the same latency. Third, the best results are achieved when applying DONNA directly to MS-COCO.
\section{Conclusion} \label{sec:conclusion} In this work, we present DONNA, a novel approach for rapid scenario-aware NAS in diverse search spaces. Through the use of a model accuracy predictor, built through knowledge distillation, DONNA finds state-of-the-art networks for a variety of deployment scenarios: in terms of the number of parameters and operations, and in terms of latency on the Samsung S20 GPU and the NVIDIA V100 GPU. In ImageNet classification, architectures found by DONNA are 20\% faster than EfficientNet-B0 and MobileNetV2 on the V100 at similar accuracy, and 10\% faster with 0.5\% higher accuracy than MobileNetV2 ($1.4\times$) on a Samsung S20 smartphone. In object detection, DONNA finds networks with up to $2.4\%$ higher mAP at the same latency compared to OFA. Furthermore, this pipeline can be used for quick search space extensions (e.g. adding ShiftNets) and exploration, as well as for on-device network compression. \section{Introduction} \label{sec:intro} \input{figures_tex/figure_overview} Although convolutional neural networks (CNNs) have achieved state-of-the-art performance for a wide range of vision tasks, they do not always execute efficiently on hardware platforms like desktop GPUs or mobile DSPs and NPUs. To alleviate this issue, CNNs are specifically optimized to minimize latency and energy consumption on device. However, the optimal CNN architecture can vary significantly between different platforms. Even on a single platform, efficiency can change with different operating conditions or driver versions. To solve this problem, low-cost methods for automated hardware-aware neural architecture search (NAS) are required. Current NAS algorithms, however, suffer from several limitations. First, many optimization algorithms~\cite{mnasnet,mbv3,single_path_nas,darts} target only a single \textit{deployment scenario}: a hardware-agnostic complexity metric, a hardware platform, or a particular latency, energy, or accuracy requirement. This means the search has to be repeated whenever any part of that scenario changes. Second, many methods cannot search in truly diverse search spaces, with different types of convolutional kernels, activation functions and attention mechanisms. Current methods either search through large and diverse spaces at a prohibitively expensive search cost~\cite{mnasnet,mbv3}, or limit their applicability by trading search time for a more constrained and less diverse search~\cite{ofa,single_path_nas,efficientnet,big_nas,nat,nsganetv2}. Most such speedups in NAS come from a reliance on weight-sharing mechanisms, which require all architectures in the search space to be structurally similar. Thus, these works typically only search among \textit{micro-architectural} choices such as kernel sizes, expansion rates, and block repeats, and not among \textit{macro-architectural} choices of layer types, attention mechanisms and activation functions. As such, they rely on prior expensive methods such as~\cite{mnasnet,mbv3} for an optimal choice of macro-architecture. We present DONNA (Distilling Optimal Neural Network Architectures), a method that addresses both issues: it scales to multiple deployment scenarios at low additional cost, and performs rapid NAS in diverse search spaces. The method starts from a trained reference model.
The first issue is resolved by splitting NAS into a scenario-agnostic training phase and a scenario-aware search phase that requires only limited training, as depicted in Figure~\ref{fig:overview}. After an accuracy predictor is built in the training phase, the search can be executed quickly for each new deployment scenario, typically within hours, requiring only minimal fine-tuning to finalize the optimal models. Second, DONNA considers diverse \textit{macro-architectural} choices in addition to \textit{micro-architectural} choices by creating this accuracy predictor through Blockwise Knowledge Distillation (BKD)~\cite{blockwise_nas}, see Figure~\ref{fig:bkd_overview}. This approach imposes few constraints on the macro- and micro-architectures under consideration, allowing a vast, diverse, and extensible search space. The DONNA pipeline yields state-of-the-art network architectures, as illustrated for a Samsung S20 GPU in Figure~\ref{fig:overview}(d). Finally, we use DONNA for rapid search space extension and exploration, and for on-device model compression. This is possible because the DONNA accuracy predictor generalizes to architectures outside the original search space. \section{Related Work} \label{sec:related_work} Over time, methods in the NAS literature have evolved from prohibitively expensive but holistic and diverse search methods \cite{nas_rl, nasnet, mnasnet} to lower-cost approaches that search in more restrictive, non-diverse search spaces \cite{ofa,single_path_nas}. This work, DONNA, aims at benefiting from the best of both worlds: rapid search in diverse spaces.
We refer the interested reader to the dedicated survey of Elsken et al. \cite{elsken2019neural} for a broader discussion of the NAS literature. Early approaches to NAS rely on reinforcement learning \cite{nas_rl, nasnet, mnasnet} or evolutionary optimization \cite{evolution_nas}. These methods allow for diverse search spaces, but at infeasibly high cost due to the requirement to train thousands of models for a number of epochs throughout the search. MnasNet \cite{mnasnet}, for example, uses up to 40,000 epochs in a single search. This process can be sped up by using weight sharing among different models, as in ENAS \cite{enas}. However, this comes at the cost of a less diverse search space, as the subsampled models have to be similar for the weights to be shareable. In another line of work, differentiable architecture search methods such as DARTS \cite{darts}, FBNet~\cite{fbnet}, FBNetV2~\cite{fbnetv2}, ProxylessNAS \cite{proxylessnas}, AtomNAS \cite{atomnas} and Single-Path NAS \cite{single_path_nas} simultaneously optimize the weights of a large supernet and its architectural parameters. This poses several impediments to scalable and scenario-aware NAS in diverse search spaces. First, in most of these works, all different cell choices have to be available to the algorithm at once, ultimately limiting the space's size and diversity. While several works address this problem, either by trading off the number of architecture parameters against the number of weights that are in GPU memory at a given time \cite{p_darts}, by updating only a subset of the weights during the search \cite{pc_darts}, or by exploiting more granular forms of weight sharing~\cite{single_path_nas}, the fundamental problem remains when new operations are introduced. Second, although differentiable search methods speed up a single search iteration, the search must be repeated for every scenario due to their coupling of accuracy and complexity. Differentiable methods also require differentiable cost models. Typically, these models use the sum of layer latencies as a proxy for the full-network latency, which can be inaccurate. This is especially the case in emerging depth-first processors \cite{depth_first}, where intermediate results are stored in local memory, making full-graph latency depend on layer sequences rather than on individual layers. To improve the scaling performance of NAS across different scenarios, it is critical to decouple the accuracy prediction of a model from the complexity objective. In Once-for-All (OFA) \cite{ofa} and~\cite{nsganetv2}, a large weight-sharing supernet is trained using progressive shrinking. This process allows the sampling of smaller subnets from the trained supernet that perform comparably to models trained from scratch. A large number of networks can then be sampled to build an accuracy predictor for this search space, which in turn can be used in a scenario-aware evolutionary search, as in Figure~\ref{fig:overview}(c). Although similar to DONNA in this approach, OFA \cite{ofa} has several disadvantages. First, the diversity of its search space is limited due to the reliance on progressive shrinking and weight sharing, which requires a fixed macro-architecture in terms of layer types, attention, activations, and channel widths. Furthermore, progressive shrinking can only be parallelized in the batch dimension, limiting the maximum number of GPUs that can be used in parallel. DONNA does not suffer from these constraints.
Similarly, Blockwisely-Supervised NAS (DNA) \cite{blockwise_nas} splits NAS into two phases: the creation of a ranking model for a search space and a targeted search to find the highest-ranked models at a given constraint. To build this ranking model, DNA uses blockwise knowledge distillation (BKD) to derive a relative ranking of all possible networks in a given search space. The best networks are then trained and verified. It is crucial to note that it is BKD that enables the diverse search for optimal attention mechanisms, activation functions, and channel scaling. However, DNA has three disadvantages: (1) the ranking model fails when ranking large and diverse search spaces (Section~\ref{subsec:search_method/accuracy_model}), (2) the ranking only holds within a search space and does not allow the comparison of different spaces, and (3) because of the reliance on training subsampled architectures from scratch, the method is not competitive in terms of search time. This work, DONNA, addresses all these issues. In summary, DONNA differs from prior work in these key aspects: \begin{enumerate} \item Unlike OFA \cite{ofa}, DONNA enables hardware-aware search in \emph{diverse search spaces}; differentiable and RL-/evolutionary-based methods can do this too, but using much more memory or training time, respectively. \item DONNA \emph{scales to multiple accuracy/latency targets}, requiring only marginal cost for every new target. This is in contrast with differentiable or RL-/evolutionary-based methods, where the search has to be repeated for every new target. \item DONNA uses \emph{a novel accuracy predictor} that correlates better with training-from-scratch accuracy than prior work like DNA \cite{blockwise_nas} (see Figure \ref{fig:linear_predictor}). \item Furthermore, the DONNA accuracy predictor \emph{generalizes to unseen search spaces} due to its reliance on block \textit{quality metrics} rather than on the network configuration (see Figure \ref{fig:search_space_analysis}). \item DONNA relies on a \emph{fast finetuning} method that achieves the same accuracy as training from scratch while being $9\times$ faster, reducing the training time for found architectures compared to DNA \cite{blockwise_nas}. \end{enumerate} \section{Distilling Optimal Neural Networks} \label{sec:search_method} Starting from a trained reference model, DONNA is a three-step pipeline for NAS. For a given search space (Section~\ref{subsec:search_method/search-space}), we first build a scenario-agnostic accuracy predictor using Blockwise Knowledge Distillation (BKD) (Section~\ref{subsec:search_method/accuracy_model}). This amounts to a one-time cost. Second, a rapid scenario-aware evolutionary search phase finds the Pareto-optimal network architectures for any specific scenario (Section~\ref{subsec:search_method/evolutionary_search}). Third, the predicted Pareto-optimal architectures can be quickly finetuned up to full accuracy for deployment (Section~\ref{subsubsec:search_method/finetuning}). \subsection{Search Space Structure} \label{subsec:search_method/search-space} Figure~\ref{fig:quarts_search_space} illustrates the block-level architecture of our search spaces and some of the parameters that can be varied within them. Such a search space comprises a stem, a head, and $N$ variable blocks, each with a fixed stride. The choice of stem, head and stride pattern depends on the choice of the reference model.
The blocks used here consist of repeated layers, linked together by feedforward and residual connections. The blocks in the search space are denoted $B_{n,m}$, where $B_{n,m}$ is the $m^{th}$ potential replacement out of $M$ choices for block $B_n$ in the reference model. These blocks can be of any style of neural architecture (see Appendix~\ref{sec:supplementary/vit} for Vision Transformers~\cite{dosovitskiy2020image}), with very few structural limitations; only the spatial dimensions of the input and output tensors of $B_{n,m}$ need to match those of the reference model, which allows for a diverse search. Throughout the text and in Appendix~\ref{sec:supplementary}, other reference models based on MobileNetV3~\cite{mbv3} and EfficientNet~\cite{efficientnet} are discussed. \subsection{Building a Model Accuracy Predictor} \label{subsec:search_method/accuracy_model} \subsubsection{Blockwise Knowledge Distillation} \label{subsubsec:search_method/bkd} We discuss Blockwise Knowledge Distillation (BKD) as the first step in building an accuracy predictor for our search space, see Figure~\ref{fig:bkd_overview}(a). BKD yields a \textit{Block Library} of pretrained weights and quality metrics for each of the replacement blocks $B_{n,m}$. This library is later used for fast finetuning (Section~\ref{subsubsec:search_method/finetuning}) and to fit the accuracy predictor (Section~\ref{subsec:search_method/linear_accuracy_predictor}). To build the library, each block $B_{n,m}$ is trained independently as a student, using the pretrained reference block $B_{n}$ as a teacher. The error between the teacher's output feature map $Y_n$ and the student's output feature map $\bar{Y}_{n,m}$ is used in this process. Formally, this is done by minimizing the per-channel noise-to-signal-power ratio (NSR): \begin{equation} \label{eq:loss} \mathcal{L}(W_{n,m};Y_{n-1},Y_n) = \frac{1}{C}\sum_{c=1}^{C} \frac{\|Y_{n,c}-\bar{Y}_{n,m,c}\|^2}{\sigma_{n,c}^2} \end{equation} Here, $C$ is the number of channels in a feature map, $W_{n,m}$ are the weights of block $B_{n,m}$, $Y_n$ is the target output feature map of $B_n$, $\bar{Y}_{n,m}$ is the output of block $B_{n,m}$, and $\sigma_{n,c}^2$ is the variance of $Y_{n,c}$. This metric is closely related to the mean squared error (MSE) on the feature maps, which~\cite{adaround} shows to be correlated with the task loss. Essentially, the blocks $B_{n,m}$ are trained to closely replicate the teacher's non-linear function $Y_n = B_n(Y_{n-1})$. Intuitively, larger, more accurate blocks with a larger ``modeling capacity'' or ``expressivity'' replicate this function more closely than smaller, less accurate blocks. On ImageNet~\cite{imagenet}, such knowledge distillation requires only a \emph{single} epoch of training for effective results. After training each block, the resulting NSR metric is added to the Block Library as a \textit{quality metric} of the block $B_{n,m}$. Note that the total number of trainable blocks $B_{n,m}$ grows linearly as $N\times M$, whereas the overall search space grows exponentially as $M^N$, making the method scale well even for large search spaces. \input{figures_tex/figure_ranking_metrics}
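As an illustration, a minimal PyTorch sketch of this per-channel NSR loss is given below; the $(N, C, H, W)$ tensor layout and the batch-wise variance estimate are our assumptions rather than the authors' exact implementation.
\begin{verbatim}
# Minimal PyTorch sketch of the per-channel NSR loss of Eq. (1);
# the (N, C, H, W) layout and variance estimate are assumptions.
import torch

def nsr_loss(y_teacher: torch.Tensor, y_student: torch.Tensor,
             eps: float = 1e-8) -> torch.Tensor:
    # Per-channel noise power, averaged over batch and spatial dims
    noise = (y_teacher - y_student).pow(2).mean(dim=(0, 2, 3))
    # Per-channel signal power: variance of the teacher feature map
    signal = y_teacher.var(dim=(0, 2, 3), unbiased=False) + eps
    return (noise / signal).mean()   # average over the C channels

# Usage: y_t = teacher_block(x); y_s = student_block(x)
#        nsr_loss(y_t, y_s).backward()
\end{verbatim}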
\subsubsection{Linear Accuracy Predictor} \label{subsec:search_method/linear_accuracy_predictor} The key insight behind DONNA is that block-level quality metrics derived through BKD (e.g., the per-block NSR) can be used to predict the accuracy of all architectures sampled from the search space. We later show this metric even works for architectures outside of the search space (Section~\ref{subsubsec:experiments/search-space-design}). To create an accuracy predictor, we build an \textit{Architecture Library} of trained models sampled from the search space, see Figure~\ref{fig:bkd_overview}(b). These models can be trained from scratch or finetuned quickly using weight initialization from BKD (Section~\ref{subsubsec:search_method/finetuning}). Subsequently, we fit a linear regression model, typically using second-order terms, to predict the accuracy over the full search space, using the quality metrics stored in the Block Library as features and the accuracies from the Architecture Library as targets. Figure~\ref{fig:linear_predictor}(left) shows that the linear predictor fits a test set of network architectures trained on ImageNet~\cite{imagenet} in the DONNA space well (MSE=0.2, KT~\cite{kendall1938new}=0.91). This predictor can be understood as a sensitivity model that indicates which blocks should be large, and which ones can be small, to build networks with high accuracy. Appendix~\ref{subsubsec:supplementary/comparing_quality_metrics} discusses the impact of different derived quality metrics on the quality of the accuracy prediction. This process can be contrasted with DNA~\cite{blockwise_nas}, where BKD is used to build a ranking model rather than an accuracy model. DNA~\cite{blockwise_nas} ranks subsampled architectures $i$ as: \begin{equation} \label{eq:dna_ranking} R_i = \sum_{n=1}^{N}\frac{\|Y_{n}-\bar{Y}_{n,m_i}\|_1}{\sigma_{n}} \end{equation} which is sub-optimal for two reasons. First, a ranking model only ranks models within the same search space and does not allow comparing the performance of different search spaces. Second, the simple sum of quality metrics does not take the potentially different noise sensitivity of the blocks into account, for which a weighted sensitivity model is required. The DONNA predictor takes on both roles. Figure~\ref{fig:linear_predictor}(right) illustrates the performance of the linear predictor for the DONNA search space and compares the quality of its ranking to DNA~\cite{blockwise_nas}. Note that the quality of the DONNA predictor increases over time: whenever Pareto-optimal networks are finetuned, they can be added to the Architecture Library, and the predictor can be fitted again. \subsection{Evolutionary Search} \label{subsec:search_method/evolutionary_search} Given the accuracy model and the Block Library, the NSGA-II \cite{nsga_ii, pymoo} evolutionary algorithm is executed to find Pareto-optimal architectures that maximize model accuracy and minimize a target cost function, see Figure~\ref{fig:overview}(c). The cost function can be scenario-agnostic, such as the number of operations or the number of parameters in the network, or scenario-aware, such as on-device latency, throughput, or energy. In this work, full-network latency is considered as the cost function, using direct hardware measurements in the optimization loop. At the end of this process, the Pareto-optimal models yielded by NSGA-II are finetuned to obtain the final models (Section~\ref{subsubsec:search_method/finetuning}).
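A minimal pymoo sketch of this NSGA-II search step is given below; the accuracy and latency functions are placeholders for the fitted predictor and on-device measurements, and the integer handling is deliberately simplified.
\begin{verbatim}
# Illustrative pymoo sketch of the NSGA-II search; the two objective
# functions below are stand-ins, not the actual predictor/measurements.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

N_BLOCKS, N_CHOICES = 5, 384          # DONNA search-space dimensions

def predicted_accuracy(x):            # placeholder linear predictor
    return 70.0 + 0.01 * np.sum(x)

def measured_latency(x):              # placeholder hardware measurement
    return 5.0 + 0.05 * np.sum(x)

class DonnaSearch(ElementwiseProblem):
    def __init__(self):
        # one variable per block, selecting one of the M replacements
        super().__init__(n_var=N_BLOCKS, n_obj=2, xl=0, xu=N_CHOICES - 1)

    def _evaluate(self, x, out, *args, **kwargs):
        x = np.round(x).astype(int)   # crude integer handling for the sketch
        # NSGA-II minimizes both objectives, so accuracy is negated
        out["F"] = [-predicted_accuracy(x), measured_latency(x)]

res = minimize(DonnaSearch(), NSGA2(pop_size=100), ("n_gen", 50), seed=1)
print(res.F)                          # (-accuracy, latency) Pareto front
\end{verbatim}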
\subsection{Finetuning Architectures} \label{subsubsec:search_method/finetuning} Full architectures sampled from the search space can be quickly finetuned to match the from-scratch training accuracy by initializing them with weights from the BKD process (Section~\ref{subsubsec:search_method/bkd}). Finetuning is further sped up by using end-to-end knowledge distillation (EKD) with the reference model as a teacher, see Figure~\ref{fig:bkd_overview}(b). In Appendix~\ref{subsec:supplementary/finetuning-speed}, we show such models can be finetuned up to state-of-the-art accuracy in less than 50 epochs. This is a $9\times$ speedup compared to the 450 epochs required in \cite{rw_imagenet} for training EfficientNet-style networks from scratch. This rapid training scheme is crucial to the overall efficiency of DONNA, since we use it both to generate training targets for the linear accuracy predictor (Section \ref{subsec:search_method/accuracy_model}) and to finetune and verify Pareto-optimal architectures. \section{Model Transfer Study} \label{sec:supplementary/transfer} \input{figures_tex/figure_transfer} In this section, we further investigate the transfer properties of DONNA backbones in an object detection task. Our data hint at two conclusions: (1) ImageNet top-1 validation accuracy is a good predictor of COCO mAP if models are sampled from a similar search space and if they are trained using the same hyperparameters, starting from the same initialization, and (2) the higher ImageNet accuracies achieved through progressive shrinking in OFA do not transfer to significantly higher COCO mAP. The models under study are the same set as in Section \ref{subsec:experiments/detection}. These conclusions are apparent from Figure~\ref{fig:transfer}, where we plot the COCO validation mAP of the detection architectures against the ImageNet validation top-1 accuracy of their respective backbones. First, we see that OFA models trained from scratch (OFA Scratch and OFA-224) and models found through DONNA in the similar MobileNetV3 ($1.2\times$) search space transfer very similarly to COCO. Models found in the DONNA search space reach a higher COCO mAP than expected based on their ImageNet top-1 accuracy. We suspect that this bias occurs because the DONNA search space uses grouped convolutions instead of relying strictly on depthwise convolutions, as is the case for the MobileNetV3 (1.2$\times$) space. Second, we find that while OFA models with OFA-style training obtain around 1.0-1.5 percent higher accuracy on ImageNet~\cite{imagenet} than the same models trained from scratch, this increased accuracy does not transfer to a meaningful gain in downstream tasks such as object detection. This phenomenon is illustrated in Figure~\ref{fig:transfer}, where the same OFA models are trained on MS-COCO, starting either from weights trained on ImageNet from scratch or from weights obtained through progressive shrinking on ImageNet. For one of these models, the $1.4\%$ gain in ImageNet validation accuracy only translates into a $0.1\%$ higher mAP on COCO. This observation motivates our choice to compare, throughout the text, to OFA models trained from scratch rather than through progressive shrinking. \section{DONNA for Vision Transformers} \label{sec:supplementary/vit} DONNA can be trivially applied to Vision Transformers~\cite{dosovitskiy2020image}, without any conceptual change to the base algorithm. In this experiment, we use vit-base-patch16-224 from~\cite{ofa_repo} as a teacher model, for which we define a related hierarchical search space. Vit-base-patch16-224 is split into 4 DONNA-blocks, each containing 3 ViT blocks (self-attention + MLP) as defined in the original paper~\cite{dosovitskiy2020image}.
For every block, we vary the following parameters: \begin{itemize} \item The ViT-block \textit{depth} varies $\in$ \{1,2,3\}. \item The \textit{embedding dimension} can be scaled down to $50\%$ of the original embedding dimension, $\in$ \{$50\%$,$75\%$,$100\%$\}, equivalent to $\in$ \{384,576,768\} internally in the DONNA-block. \item The \textit{number of heads} used in attention varies from 4 to 12, $\in$ \{4,8,12\}. \item The \textit{mlp-ratio} can be varied from 2 to 4, $\in$ \{2,3,4\}. Larger mlp-ratios indicate larger MLPs per block. \end{itemize} \input{figures_tex/supplementary_vit} The sequence length could potentially be searched over as well, but this is not done in this example. The \textit{Block Library} is built using the BKD process, requiring $4\times3\times3\times3=135$ epochs of total training to model a fairly small search space of 0.5M architectures. The \textit{Architecture Library} consists of 23 uniformly sampled architectures in this search space, finetuned for 50 epochs on ImageNet~\cite{imagenet}, using a large CNN model as a teacher until convergence. The latter process is calibrated such that the original teacher model (vit-base-patch16-224), initialized with weights from the \textit{Block Library}, recovers the accuracy of the teacher model after these 50 epochs. Note that our reliance on such finetuning and knowledge distillation allows extracting knowledge without access to full datasets, in this case ImageNet-21k. Finally, we use the Block and Architecture Libraries to train an accuracy predictor and execute an evolutionary search targeting minimization of the number of operations. Figure~\ref{fig:vit}(left) illustrates the results of this search, showing that our search in this space allows finding a Pareto set of models. In terms of the number of operations, this ViT-based search space does not outperform ResNet-50. Figure~\ref{fig:vit}(right) illustrates the quality of the accuracy predictor on a limited set of ViT architectures. \section{Search space extension to Quantized Networks} \label{sec:supplementary/qnas} The DONNA~accuracy predictor extends to search spaces different from the one it was trained for, see Section~\ref{subsubsec:experiments/search-space-design}. This is a major advantage of DONNA, as it enables us to quickly extend pre-existing NAS results without the need to create an extended Architecture Library and without retraining the accuracy predictor. Section \ref{subsubsec:experiments/search-space-design} and Fig. \ref{fig:linear_predictor} discuss this in detail using ShiftNets~\cite{shiftnet}. This section illustrates that the DONNA~accuracy predictor is not only portable across layer types, but also across different compute precisions, i.e., when using quantized INT8 operators. To demonstrate this, let us consider the MobileNetV3 (1.2$\times$) search space. First, we build and train a DONNA~accuracy predictor for full-precision (FP) networks and then test this predictor on networks with weights and activations quantized to 8 bits (INT8). The search space includes k $\in \{3,5,7\}$; expand $\in \{3,4,6\}$; depth $\in \{2,3,4\}$; activation $\in\{ReLU/Swish\}$; attention $\in \{None/SE\}$; and channel-scaling $\in \{0.5\times, 1.0\times\}$. We build a complete Block Library in FP, sample 43 FP networks as an Architecture Library, and finetune them to collect the training data for the FP accuracy predictor model. 
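To make this fitting step concrete, the following is a minimal sketch (not the authors' code) of how such a second-order linear predictor can be trained with scikit-learn and then reused unchanged on quantized-block metrics. The file names, array shapes, and the regularization strength are hypothetical. \begin{lstlisting}[language=Python]
# Sketch: fit the second-order accuracy predictor on full-precision (FP)
# block-quality features, then reuse it unchanged on INT8 features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical inputs: one row of per-block quality metrics per architecture
# in the Architecture Library, plus the finetuned top-1 accuracies.
X_fp = np.load("fp_block_metrics.npy")   # shape: (43, num_blocks)
y_fp = np.load("fp_top1.npy")            # shape: (43,)

predictor = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # second-order terms
    Ridge(alpha=1.0),                                  # hypothetical alpha
)
predictor.fit(X_fp, y_fp)

# Quality metrics recomputed after post-training quantization of the
# Block Library; the FP predictor transfers directly to INT8 networks.
X_int8 = np.load("int8_block_metrics.npy")
top1_int8_pred = predictor.predict(X_int8)
\end{lstlisting}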
Second, we quantize the Block Library using the Data-Free Quantization (DFQ)~\cite{dfq} post-training quantization method with 8-bit weights and activations (INT8). The quantized Block Library now provides the quality metrics for quantized blocks, which can be used as inputs to the FP accuracy predictor to predict INT8 accuracy. Finally, we test the FP accuracy predictor on a test set of INT8 networks. For this, we sample 20 networks whose INT8-block quality is within the range of the accuracy predictor's training set. These networks are first finetuned in FP using the procedure outlined in Section~\ref{sec:search_method} and then quantized to INT8 using DFQ~\cite{dfq}. \input{figures_tex/supplementary_qnas} Figure \ref{fig:supplementary/qnasvalid} illustrates that the FP predictor can be used to directly predict the performance of INT8 networks, indicating that DONNA search spaces can indeed be trivially extended to include INT8 precision. Fig.~\ref{fig:supplementary/qnasvalid}(left) shows FP train and test data for the accuracy predictor model. Fig.~\ref{fig:supplementary/qnasvalid}(right) shows FP train and INT8 test data using the same FP accuracy predictor. Formally, we compare the performance of this predictor on the FP and INT8 test sets in terms of the achieved prediction MSE and Kendall-Tau (KT)~\cite{kendall1938new}. We observe that there are no outliers when using the predictor to predict the accuracy of INT8 networks. The MSE is 0.13 for the FP test set and 0.34 for the INT8 test set. The MSE for INT8 is higher because of the noise introduced by the quantization process. Nonetheless, the KT ranking metric is 0.85 for the FP test set and 0.86 for the INT8 test set, demonstrating that the accuracy predictor can be used for INT8-quantized models. \section{Experimental Details} \label{sec:supplementary} \subsection{Hyperparameters for training and distillation} \label{subsec:supplementary/training_hyperparameters} All reference models for each search space are \textbf{trained from scratch} for 450 epochs on 8 GPUs up to state-of-the-art accuracy using the hyperparameters given in \cite{rw_imagenet} for EfficientNet-B0 \cite{efficientnet}. More specifically, we use a total batch size of 1536 with an initial learning rate of 0.096, RMSprop with a momentum of 0.9, RandAugment data augmentation \cite{randaugment}, exponential weight-averaging, dropout \cite{dropout} and stochastic depth \cite{stochastic_depth} of 0.2, together with a learning rate decay of 0.97 every 2.4 epochs. \textbf{Blockwise knowledge distillation (BKD)} is done by training every block for a single epoch. During this epoch, we apply a cosine learning rate schedule \cite{cosine} with 20 steps, an initial learning rate of 0.01, a batch size of 256, the Adam \cite{kingma2014adam} optimizer, and random cropping and flipping as data augmentation. \textbf{Finetuning} is done via end-to-end knowledge distillation (EKD), using hard ground-truth labels and the soft labels of the reference model, see Figure~\ref{fig:bkd_overview}(b). We use the same hyperparameters as for training from scratch with the following changes: a decay of 0.9 every 2 epochs, an initial learning rate divided by 5, and no dropout, stochastic depth, or RandAugment. Depending on the reference model and the complexity of the search space, finetuning achieves full from-scratch accuracy in 15-50 epochs, see Figure~\ref{fig:supplementary/finetuning-speed}. 
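As an illustration of this finetuning objective, the following PyTorch-style sketch combines hard-label cross-entropy with distillation against the reference model's soft labels. The mixing weight \texttt{alpha} and temperature \texttt{T} are illustrative assumptions, not values from the paper. \begin{lstlisting}[language=Python]
# Sketch of an EKD finetuning loss: hard-label cross-entropy plus
# KL-distillation from the reference (teacher) model's soft labels.
import torch.nn.functional as F

def ekd_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature scaling of the distillation term
    return alpha * hard + (1.0 - alpha) * soft
\end{lstlisting}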
\subsection{Hardware measurements} \label{subsec:supplementary/hardware_measurements} All complexity measurements used throughout the text, either hardware-aware or hardware-agnostic, are gathered as follows: \begin{itemize} \item \textbf{Nvidia V100 GPU} latency measurements are done in Pytorch 1.4 with CUDA 10.0. In a single loop, 20 batches are sent to the GPU and executed, while the GPU is synced before and after every iteration. The first 10 batches are treated as a warm-up and ignored; the last 10 are used for measurements. We report the fastest measurement as the latency. \item Measurements on the \textbf{Samsung S20 GPU} are always done with a batch size of 1, in a loop running 30 inferences, after which the system cools down for 1 minute. The average latency is reported. \item The \textbf{number of operations} and \textbf{number of parameters} are measured using the ptflops framework (\url{https://pypi.org/project/ptflops/}). \item Latency measurement on the \textbf{simulator} targeting tensor compute units is done with a batch size of 1. We report the fastest measurement as the latency. \end{itemize} All complexity metrics for the reference models shown throughout the text are measured using this same setup. \subsection{Accuracy of baseline models} \label{subsec:supplementary/baselines} For each baseline, the accuracy is taken to be the highest reported in~\cite{rw_imagenet}, the highest reported in its original paper, or the accuracy obtained by training from scratch using the EfficientNet-B0 hyperparameters from the \cite{rw_imagenet} repository, see Table~\ref{tab:supplementary/baseline_models}. The latter is the case for EfficientNet-B0 (our training), MobileNetV2, MnasNet, SPNASNet and FBNet. OFA/Scratch is the ``flops@389M\_top1@79.1\_finetune@75'' model from~\cite{ofa_repo} trained from scratch using the hyperparameters used for EfficientNet-B0 in \cite{rw_imagenet}. Note that these baselines are competitive. MobileNetV2, for example, typically has an accuracy of around 72$\%$, while the training in~\cite{rw_imagenet} pushes that to 73$\%$. ResNet50 is typically at 76$\%$, but reaches 79$\%$ using the training proposed in \cite{rw_imagenet}. The accuracies of ProxylessNAS~\cite{proxylessnas} and DNA~\cite{blockwise_nas} are taken from their respective papers. \input{tables/supplementary_reference_models} \subsection{Comments on Accuracy Predictors} \label{subsec:supplementary/accuracy_predictors} \subsubsection{Size of the Architecture Library} \label{subsubsec:supplementary/architecture_library} Tables~\ref{tab:supplementary/library-size-mobnetv3} and~\ref{tab:supplementary/library-size-donna} show the impact of the size of the Architecture Library used to fit the linear predictor. The tables show how performance varies on a test set of finetuned models for the MobileNetV3 (1.2$\times$) and DONNA~search spaces, respectively. Note how the ranking quality, as measured by Kendall-Tau (KT)~\cite{kendall1938new}, is always better in this work than in DNA~\cite{blockwise_nas}. On top of that, DNA~\cite{blockwise_nas} only ranks models within the search space and does not predict accuracy itself. Another metric to estimate the accuracy predictor's quality is the Mean-Squared-Error (MSE) in terms of predicted top-1 accuracy on the ImageNet validation set. Note that for the MobileNetV3 (1.2$\times$) search space, 20 target accuracies are sufficient for a good predictor, as shown in Table~\ref{tab:supplementary/library-size-mobnetv3}. We use the same number of targets for the EfficientNet-B0, MobileNetV3 (1.0$\times$) and ProxylessNAS (1.3$\times$) search spaces. 
For the DONNA~search space, we use 30 target accuracies, see Table~\ref{tab:supplementary/library-size-donna}. Note that the linear accuracy predictor can improve over time, whenever the Architecture Library is expanded. As predicted Pareto-optimal architectures are finetuned to full accuracy, those results can be added to the library and the predictor can be fitted again using this extra data. \input{tables/supplementary_accuracy_model} \subsubsection{Choice of Quality Metrics} \label{subsubsec:supplementary/comparing_quality_metrics} Apart from the Noise-to-Signal-Power-Ratio (NSR) (see Section~\ref{sec:search_method}), other quality metrics can be extracted and used in an accuracy predictor as well. All quality metrics are extracted on a held-out validation set, sampled from the ImageNet training set, which is different from the default ImageNet validation set in order to prevent overfitting. Three other types of quality metrics are considered on top of the metric described in equation~\ref{eq:loss}: one other block-level metric based on the L1-loss and two network-level metrics. The block-level metric measures the normalized L1-loss between the ideal feature map $Y_n$ and the block $B_{n,m}$'s output feature map $\bar{Y}_{n,m}$. It can be described as the Noise-to-Signal-Amplitude ratio: \begin{equation} \label{eq:l1_alternative} \mathcal{L}(W_{n,m};Y_{n-1},Y_n) = \frac{1}{C}\sum_{c=0}^{C} \frac{\|Y_{n,c}-\bar{Y}_{n,m,c}\|_1}{\sigma_{n,c}} \end{equation} The two network-level metrics are the loss and the top-1 accuracy extracted on the separate validation set. The network-level metrics are derived by replacing only block $B_n$ in the reference model with the block-under-test $B_{n,m}$ and then validating the performance of the resulting network. Table~\ref{tab:supplementary/accuracy_model} compares the performance of the 4 different accuracy predictors built on these different styles of features. Although they are conceptually different, they all lead to very similar performance on the test set, with NSR slightly outperforming the others. Because of this, the NSR metric from equation~\ref{eq:loss} is used throughout the text. \input{tables/supplementary_predictor_ablation} \subsubsection{Accuracy predictors for different search-spaces} \label{subsubsec:supplementary/accuracy_predictors_different_searchspaces} Similar to the procedures discussed in Section~\ref{sec:search_method}, accuracy models are built for different reference architectures in different search spaces: EfficientNet-B0, MobileNetV3 (1.0$\times$), MobileNetV3 (1.2$\times$) and ProxylessNAS (1.3$\times$). The performance of these models is illustrated in Table~\ref{tab:supplementary/other_predictors}. Note that we can generate reliable accuracy predictors for all of these search spaces, with very high Kendall-Tau ranking metrics and low MSE on the prediction. The Kendall-Tau value on the MobileNetV3 ($1.2\times$) search space is lower than for the other spaces, as its test set is larger. The model is still reliable, as the very low MSE metric shows. \input{tables/supplementary_other_predictors} \subsubsection{Ablation on accuracy predictor} \label{subsubsec:supplementary/accuracy_predictor_ablation} Throughout this work, we use Ridge regression from scikit-learn~\cite{scikit-learn} as the accuracy predictor. Other choices can also be valid, although the Ridge regression model has proven stable across our experiments. 
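A sketch of such a predictor ablation, reusing the hypothetical feature matrix and accuracy targets from before, could look as follows; the candidate models and the split are illustrative. \begin{lstlisting}[language=Python]
# Sketch: compare scikit-learn regressors by MSE and Kendall-Tau on a
# held-out split of the Architecture Library (hypothetical files).
import numpy as np
from scipy.stats import kendalltau
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X = np.load("fp_block_metrics.npy")
y = np.load("fp_top1.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (Ridge(alpha=1.0), Lasso(alpha=0.01), RandomForestRegressor()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    kt, _ = kendalltau(y_te, pred)
    print(type(model).__name__, mean_squared_error(y_te, pred), kt)
\end{lstlisting}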
Table~\ref{tab:supplementary/predictor_ablation_pt2} compares a non-exhaustive list of accuracy predictors from scikit-learn and their performance on the DONNA architectural test set. \input{tables/supplementary_predictor_ablation_pt2} \subsection{Finetuning speed} \label{subsec:supplementary/finetuning-speed} Depending on the complexity of the search space, the reference model used in BKD, and the teacher used in end-to-end knowledge distillation (EKD), finetuning requires more or fewer epochs. We always calibrate the finetuning process to be on par with training from scratch for a fair comparison, but networks can be trained longer for even better results. With the hyperparameters for EKD given in Appendix~\ref{subsec:supplementary/training_hyperparameters}, Figure~\ref{fig:supplementary/finetuning-speed} shows that finetuning rapidly converges to from-scratch training accuracy for a set of sub-sampled models in different search spaces. Typically, 50 epochs are sufficient for most of the examples. Finetuning speed also depends on the final accuracy of the sub-sampled model. Larger models, with an accuracy very close to that of the reference model, typically converge more slowly under EKD than smaller models with lower accuracy. For the smaller models, the teacher's guidance dominates more, which leads to faster finetuning. \input{figures_tex/supplementary_finetuning_speed_iccv} \input{figures_tex/supplementary_flops_params_gpu_iccv} \subsection{Models for various search-spaces} \label{subsec:supplementary/various-search-spaces} Figure~\ref{fig:supplementary/search-space-overview} illustrates the predicted and measured performance of DONNA~models in terms of the number of operations, the number of parameters, latency on an Nvidia V100 GPU, and latency on a simulator targeting tensor operations in a mobile SoC. On top of this, predicted Pareto curves for a variety of other search spaces are shown: MobileNetV3 (1.0$\times$) and MobileNetV3 (1.2$\times$). For these other search spaces, we perform predictor-based searches in each of the scenarios, illustrating their respective predicted Pareto-optimal trendlines. The quality of these predictors is given in Table~\ref{tab:supplementary/other_predictors}. For the extra search spaces, some optimal models have been finetuned to verify the predicted curve's validity. For every search space, the same accuracy predictor is used across all scenarios. MobileNetV3 (1.0$\times$) and MobileNetV3 (1.2$\times$) are confirmed in terms of the number of operations in Figure~\ref{fig:supplementary/search-space-overview} (mid-left). ProxylessNAS (1.3$\times$) is confirmed on an Nvidia V100 GPU in Figure~\ref{fig:supplementary/search-space-overview} (mid-right). In the MobileNetV3 ($1.0\times$) space, we find networks that are on par with the performance of MobileNetV3~\cite{mbv3} in terms of accuracy for the same number of operations, which validates that DONNA~can find the same optimized networks as other methods in the same or similar search spaces. Note that the DONNA~search space outperforms all other search spaces on hardware platforms and in terms of the number of parameters, which motivates our choice to introduce this new design space. The DONNA~space is only outperformed in terms of Pareto-optimality when optimizing for the number of operations, a proxy metric. 
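For reference, a scenario-aware search of this kind can be sketched with pymoo roughly as follows. The \texttt{decode}, \texttt{predict\_top1}, and \texttt{measure\_latency} helpers are placeholders for the architecture encoding, the fitted accuracy predictor, and the on-device measurement loop; the population size and generation count are illustrative, not the settings used in our experiments. \begin{lstlisting}[language=Python]
# Sketch: NSGA-II over integer-encoded architectures, maximizing predicted
# accuracy and minimizing measured latency. All helpers are placeholders.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

def decode(x):               # placeholder: genome -> architecture choices
    return tuple(int(round(v)) for v in x)

def predict_top1(arch):      # placeholder for the fitted accuracy predictor
    return 75.0 + 0.1 * sum(arch)

def measure_latency(arch):   # placeholder for direct hardware measurement
    return 1.0 + 0.05 * sum(arch)

class ArchSearch(ElementwiseProblem):
    def __init__(self, num_choices):
        super().__init__(n_var=len(num_choices), n_obj=2,
                         xl=np.zeros(len(num_choices)),
                         xu=np.array(num_choices, dtype=float) - 1)

    def _evaluate(self, x, out, *args, **kwargs):
        arch = decode(x)
        # pymoo minimizes all objectives, so negate the predicted accuracy
        out["F"] = [-predict_top1(arch), measure_latency(arch)]

res = minimize(ArchSearch([3, 3, 3, 3]), NSGA2(pop_size=100),
               ("n_gen", 50), seed=1)
pareto_architectures = [decode(x) for x in np.atleast_2d(res.X)]
\end{lstlisting}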
\section{Model Visualizations} \label{subsec:supplementary/model-visualizations} Figures~\ref{fig:magellan_visualizations},~\ref{fig:gpu_visualizations},~\ref{fig:flops_visualization},~\ref{fig:params_visualization} and~\ref{fig:kona_visualization} visualize some of the diverse network architectures found through~DONNA~in the DONNA~search space. Results are shown for a simulator, the Nvidia V100 GPU, the number of operations, the number of parameters, and the Samsung S20 GPU. Note that all of these networks have different patterns of Squeeze-and-Excite (SE~\cite{squeeze_excite}) and activation functions (whenever SE is used, Swish is also used), channel scaling, expansion rates, and kernel factors, as well as varying network depths. In Figure~\ref{fig:magellan_visualizations}, grouped convolutions are also used as part of optimal networks, as a replacement for depthwise separable kernels. Figures~\ref{fig:effnet_flops_visualization} and~\ref{fig:effnet_kona_visualization} illustrate optimal EfficientNet-style networks for the number of operations and the Samsung S20, respectively, as taken from Figure~\ref{fig:efficientnet}. Note how these networks are typically narrower, with higher expansion rates, than the~DONNA~models, which makes them faster or more efficient in some cases. However, EfficientNet-style models cannot achieve higher accuracy than $77.7\%$ top-1 on ImageNet validation using $224\times224$ images, while the~DONNA~search space can achieve an accuracy higher than $80\%$ in that case. \input{figures_tex/supplementary_magellan_visualization} \input{figures_tex/supplementary_gpu_visualization} \input{figures_tex/supplementary_flops_visualization} \input{figures_tex/supplementary_params_visualization} \input{figures_tex/supplementary_kona_visualization} \input{figures_tex/supplementary_effnet_flops} \input{figures_tex/supplementary_effnet_kona} \section{Comments on random search} \label{sec:supplementary/random-search} DONNA clearly outperforms random search. In random search, networks are sampled randomly under some latency or complexity constraint and trained from scratch. This can be very costly if the accuracy of these architectures varies widely, as is the case in a large and diverse search space. On top of that, any expensive random search would have to be repeated for every target accuracy or latency on any new hardware platform. This is in stark contrast with DONNA, where the accuracy predictor is reused for any target accuracy, latency and hardware platform. Fig.~\ref{fig:supplementary/random_search} shows box plots of the predicted accuracy on ImageNet-224 for networks randomly sampled in the MobileNetV3 (1.2$\times$) search space, at $400\pm5$ (190 samples), $500\pm5$ (77 samples) and $600\pm5$ (19 samples) million operations (MFLOPS). The box shows the quartiles of the dataset, while the whiskers extend to show the rest of the distribution. According to the accuracy predictor, the accuracies of randomly sampled architectures at 400M operations are normally distributed with a mean of 76.2$\%$ and a standard deviation of 0.7$\%$. Based on this, only around 2$\%$ of the randomly sampled architectures will have an accuracy exceeding 77.6$\%$. So, when performing true random search for the 400M-operation target, training 100 architectures for 450 epochs each (45000 epochs in total) will likely yield only 2 networks exceeding 77.6$\%$. 
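The $2\%$ figure is simply the Gaussian tail beyond two standard deviations, which can be checked quickly: \begin{lstlisting}[language=Python]
# Probability that a sample from N(76.2, 0.7^2) exceeds 77.6 (a 2-sigma event)
from scipy.stats import norm
p = norm.sf(77.6, loc=76.2, scale=0.7)  # survival function
print(f"{p:.3f}")                       # ~0.023, i.e., about 2%
\end{lstlisting}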
In contrast, after building the accuracy predictor for MobileNetV3 (1.2$\times$) in 1500 epochs, DONNA finds an architecture achieving 77.5$\%$ at 400M operations in just 50 epochs, see Figure~\ref{fig:supplementary/search-space-overview}(mid-left). This is close to a 900$\times$ advantage if the start-up cost is ignored, a reasonable assumption for a large number of targets. In summary, the total cost of random search scales as $N\times450\times\#$latency-targets$\times\#$platforms, where $N$ is the number of trained samples for every latency target on every platform. DONNA scales as $50\times\#$latency-targets$\times\#$platforms when many latency targets and hardware platforms are being considered, meaning the initial cost of building the reusable accuracy predictor can be ignored. Predictor-based random search could also be used as a replacement for the NSGA-II evolutionary search algorithm \cite{nsga_ii} in DONNA. However, NSGA-II is known to be more sample-efficient than random search in a multi-objective setting~\cite{hughes2006multi}. This is also illustrated in Figure~\ref{fig:supplementary/random_search}, where NSGA-II finds networks with a higher predicted accuracy than random search, given the 190 (400M), 77 (500M) and 19 (600M) samples for every target. In this NSGA-II run, a total of 2500 samples were generated and measured during the search, covering the full search space ranging from 150-800M operations. \input{figures_tex/supplementary_random_search} \section{Ablation Study} \label{sec:ablation} \subsection{Choice of Mother Network} \input{figures_tex/figure_cifar10_experiments} One of the most important choices to make before running the whole DONNA procedure is choosing a mother network. Since it is used at every step of the procedure, choosing a wrong one might result in multiple issues: (1) blocks trained with knowledge distillation might not reach satisfying accuracy; (2) the resulting accuracy predictor might give wrong estimates for the selected search space; (3) due to the knowledge distillation procedure used in finetuning, the resulting models can have lower accuracy than they would with a better teacher. The main assumption used throughout the paper is that the best teacher is the largest one. This is in line with previous works \cite{yang2018knowledge,lan2018knowledge,yim2017agift}. However, another line of research \cite{cho2019efficacy} has shown that the biggest teachers might hinder the performance of students if the students cannot match the teacher's capacity. We have not observed this in any of our experiments, and we hypothesize that it is due to the blockwise training procedure, which leads to easier optimization: approximating a per-block function ends up being easier than approximating the full model. To confirm our hypothesis further, we conduct a set of experiments in which we use two different mother networks as teachers: a larger one (MBConvBlockGroup32-based) and a smaller one (MBConvBlock-based). We then use those to build two accuracy models, one per teacher, and assess the accuracies of models consisting fully of MBConv blocks and of MBConvGroup32 blocks. 
As Figure~\ref{fig:cifar10} shows, using the larger model allows for a good estimate for both MBConv and MBConvGroup32 models. However, when using the smaller model, MBConvGroup32 models end up being highly underestimated, since the smaller teacher prevents the models from learning further during blockwise KD training. This confirms that when training on a new search space, the largest model in this search space should be used as a teacher. \begin{table}[] \small \caption{\label{tab:supplementary/accuracy_model} Comparing different quality metrics: NSR (Equation~\ref{eq:loss}), L1, network-level loss and top-1 accuracy for DONNA.} \centering \begin{tabular}{l|c|c|c|c|c} Ranking Metric & DNA \cite{blockwise_nas} & NSR & L1 & Loss & Top-1 \\ \hline Kendall-Tau \cite{kendall1938new} & 0.77 & 0.9 & 0.89 & 0.89 & 0.88 \\ MSE [top-1\%] & NA & 0.19 & 0.23 & 0.41 & 0.44 \\ \end{tabular} \end{table} \begin{table}[] \small \caption{\label{tab:supplementary/library-size-donna} Ranking quality for DONNA, as a function of the size of the Architecture Library. `X'T indicates that `X' targets were used to fit the predictor.} \centering \begin{tabular}{l|c|c|c|c|c} Metric & DNA \cite{blockwise_nas} & 10T & 20T & 30T & 40T \\ \hline Kendall-Tau \cite{kendall1938new} & 0.77 & 0.87 & 0.87 & 0.9 & 0.9 \\ MSE [top-1\%] & NA & 0.28 & 0.18 & 0.2 & 0.19 \\ \end{tabular} \end{table}
{ "timestamp": "2021-08-30T02:17:32", "yymm": "2012", "arxiv_id": "2012.08859", "language": "en", "url": "https://arxiv.org/abs/2012.08859" }
\section{Specifications}\label{app:specs} The specifications used for the experimental evaluation can be found in this appendix. \Cref{fig:networkspec} contains the network specification~\cite{fpgalola}. The specification monitors the network traffic of a server based on the source and destination IP of requests, TCP flags, and the length of the payload. It counts the number of incoming connections and computes the workload, i.e.\@\xspace, the number of bytes received over push requests. If any of these numbers exceeds a threshold, it raises an alarm. Moreover, it keeps track of the number of open connections. A trigger indicates when the server attempts to close a connection even though none is open. The second specification~\cite{uav2} (\Cref{fig:flightphasespec}) detects different flight phases of a drone and raises an alarm if the actual velocity and the reference velocity deviate. First, the minimal and maximal velocities are computed. If their deviation exceeds a given threshold, these computations are reset. The specification counts for how many steps no reset has taken place. Intuitively, this is used to detect whether the drone is accelerating or hovering / keeping its velocity steady. Second, deviations between the actual velocity and the reference velocity, given by the flight controller, are detected. The specification monitors the worst deviation and raises an alarm if it exceeds a given threshold. Note that the computation of the velocity requires a square root. Since this operation is not supported, we left it out for the verification. Alternatively, the compiler could introduce a \lstinline{sqrt} function and mark it as \emph{trusted}; this would indicate to \viper that the function is correct as-is and thus does not need to be verified. \begin{lstlisting}[ style=LolaDefault, float, floatplacement=t, caption=\lola specification for monitoring network traffic, label=fig:networkspec ] input src: Int32, dst: Int32 input fin: Bool, push: Bool, syn: Bool input length: Int32 constant server: Int32 := 213451 output count : Int32 := ite(count[-1,0] > 201, 0, count[-1,0] + 1) output receiver : Int32 := ite(dst==server, receiver[-2,0] + 2, ite(count > 200, 0, receiver[-1,0])) trigger receiver > 50 "Many incoming connections." output received : Int32 := ite(dst==server && push, 0, length) output workload : Int32 := ite(count > 200, workload[-1,0] + 1, 0) trigger workload > 25 "Workload too high." output opened : Int32 := opened[-1,0] + ite(dst==server && syn, 1, 0) output closed : Int32 := closed[-1,0] + ite(dst==server && fin, 1, 0) trigger opened - closed < 0 "Closed more connections than have been opened." 
\end{lstlisting} \begin{lstlisting}[ style=LolaDefault, caption=\lola specification for flight phase detection, label=fig:flightphasespec, float, floatplacement=t, ] input vel_x: Int32, vel_y: Int32, vel_r_x: Int32, vel_r_y: Int32 output velocity : Int32 := vel_x*vel_x + vel_y*vel_y output velocity_max : Int32 := ite(reset_max[-1,false], velocity, ite(velocity_max[-1,0] > velocity, velocity_max[-1,0], velocity)) output velocity_min : Int32 := ite(reset_max[-1,false], velocity, ite(velocity_min[-1,0] < velocity, velocity_min[-1,0], velocity)) output dif_max : Int32 := velocity_max - velocity_min output reset_max: Bool := dif_max > 1 output unchanged: Int32 := ite(reset_max[-1,false], 0, unchanged[-1,0] + 1) output vel_dev : Int32 := vel_r_x - vel_x + vel_r_y - vel_y output worst_dev: Int32 := ite(unchanged > 15, vel_dev, ite(worst_dev[-1,-10] < vel_dev, vel_dev, worst_dev[-1,-10])) trigger vel_dev > 10 "Deviation between actual and reference velocity too high." trigger worst_dev > 20 "Worst deviation between actual and reference velocity too high." \end{lstlisting} \section{Experimental Evaluation}\label{sec:casestudy} The implementation of the compiler is based on the \rtlola\footnote{\url{http://www.rtlola.org/}} framework written in \rust. The code verification uses the \rust frontend of the \viper framework, called \prusti~\cite{prusti}. \prusti translates a \rust program into the \viper intermediate verification language, followed by a translation into an \textsc{smt} model, which is checked by the Z3~\cite{z3} \textsc{smt} solver. Thus, our toolchain enables completely automatic proof checking. The experiments were conducted on a machine with a $3.1\giga\hertz$ Dual-Core Intel i5 processor with $16\giga\byte$ of \textsc{ram}. The artifacts for the evaluation are available on GitHub.\footnote{\url{https://github.com/reactive-systems/Lola2RustArtifact}} In all experiments, the compilation itself has a negligible running time of under ten milliseconds and a memory consumption of less than 4\mega\byte, mainly due to the \rtlola frontend. As expected, the verification of the annotated \rust code using \prusti and the \viper toolkit takes significant time and memory. While the translation into the \textsc{smt} model is deterministic and can be parallelized, the verification with Z3 exhibits generally high and unpredictable running times. \begin{figure}[t] \centering \input{figures/network-plot.tex} \vspace{0.2cm} \caption{Results of $20$ runs in terms of running time (blue, in seconds) and memory consumption (orange, in MB) for the verification of the annotated \rust code of the specification where the altitude of a drone is monitored (cf. \Cref{fig:runningexample:spec}), and of the network traffic monitor specification.} \end{figure} We discuss the results of compiling and verifying three \lola specifications of varying size. The process works flawlessly on two of them, while the third one occasionally runs into timeouts and inconclusive verification results. First, we consider the specification from \Cref{fig:runningexample:spec}, where the altitude of a drone is monitored. The results in terms of both running time and memory consumption for 20 runs are depicted in \Cref{fig:eval:motivating}. Note that the y-axis displays both the running time in seconds (left plot) and the memory consumption in megabytes (right plot). The plot shows that the running time never exceeds $600\second$, with a median of $225\second$. 
The memory consumption is significantly more stable, ranging between $648$ and $711\mega\byte$, with one outlier ($914\mega\byte$). While the first specification was short and illustrative, the second one is more practically relevant. The specification monitors the network traffic of a server based on the source and destination IP of requests, \textsc{tcp} flags, and the length of the payload~\cite{fpgalola}. The specification counts the number of incoming connections and computes the workload, i.e.\@\xspace, the number of bytes received over push requests. If any of these numbers exceeds a threshold, the specification raises an alarm. Moreover, it keeps track of the number of open connections. A trigger indicates when the server attempts to close a connection even though none is open. The full specification can be found in \Cref{fig:networkspec}. \Cref{fig:eval:network} depicts the results both in terms of running time and memory consumption for $20$ runs. Again, the y-axis represents both the running time in seconds and the memory consumption in megabytes. The increase in resource consumption clearly reflects the increase in complexity and size of the input specification. While the longest run took nearly $90\minute$, most of the runs took less than $25\minute$, with a median of roughly $15\minute$. As before, the memory consumption is relatively stable, ranging around $3\giga\byte$. \begin{lstlisting}[ style=LolaDefault, float, floatplacement=t, caption=\lola specification for monitoring network traffic, label=fig:networkspec, basicstyle=\ttfamily\scriptsize, ] input src, dst, length: Int32 input fin: Bool, push: Bool, syn: Bool constant server: Int32 := ... output count : Int32 := if count[-1,0] > 201 then 0 else count[-1,0] + 1 output receiver : Int32 := if dst=server then receiver[-2,0] + 2 else if count > 200 then 0 else receiver[-1,0] trigger receiver > 50 "Many incoming connections." output received : Int32 := if dst=server $\land$ push then 0 else length output workload : Int32 := if count > 200 then workload[-1,0] + 1 else 0 trigger workload > 25 "Workload too high." output opened : Int32 := opened[-1,0] + int(dst=server $\land$ syn) output closed : Int32 := closed[-1,0] + int(dst=server $\land$ fin) trigger opened - closed < 0 "Closed more connections than have been opened." \end{lstlisting} Lastly, we considered a \lola specification that shows the limitations of our approach. It detects different flight phases of a drone and raises an alarm if the actual velocity and a reference velocity provided by the flight controller deviate strongly. The specification is based on a \lola specification for flight phase detection shown in \Cref{fig:flightphasespec}. 
\begin{lstlisting}[ style=LolaDefault, caption=\lola specification for flight phase detection, label=fig:flightphasespec, float, floatplacement=t, basicstyle=\ttfamily\scriptsize, ] input time_s, time_micros, velo_x, velo_y, velo_r_x, velo_r_y: Int32 output time := time_s + time_micros / 1000000 output count := count[-1,0] + 1 output frequency := 1 / (time - time[-1,0]) output freq_sum := frequency + freq_sum[-1,0] output freq_avg := freq_sum / count output velo : Int32 := velo_x*velo_x + velo_y*velo_y output velo_max : Int32 := if res_max[-1,false] then velo else max(velo_max[-1,0], velo) output velo_min : Int32 := if res_max[-1,false] then velo else min(velo_min[-1,0], velo) output res_max: Bool := (velo_max - velo_min) > 1 output unchanged: Int32 := if res_max[-1,false] then 0 else unchanged[-1,0] + 1 output velo_dev : Int32 := velo_r_x - velo_x + velo_r_y - velo_y output worst_dev: Int32 := if unchanged > 15 then velo_dev else max(velo_dev, worst_dev[-1,-10]) trigger freq_avg < 10 "Low input frequency." trigger velo_dev > 10 "Deviation between velocities too high." trigger worst_dev > 20 "Worst velocity deviation too high." \end{lstlisting} After a successful compilation, the verification was able to reveal potential arithmetic errors in the original specification~\cite{uav2}. The errors arose from divisions in which the denominator was an input stream access. The resulting value is not necessarily non-zero, so \viper reported that the respective annotation cannot be verified. Hence, our approach is able to detect flaws in specifications stemming from implicit assumptions on the system. These assumptions may not hold during runtime, causing the monitor to fail. Thus, we modified the flight phase detection specification to work without division. Yet, only four of our runs terminated successfully. The running time varies between $6$ and $16\minute$ and the memory consumption between $1.38\giga\byte$ and $1.66\giga\byte$. The successful runs show that our approach is able to verify monitor realizations of large and arithmetically challenging \lola specifications. However, two runs did not terminate within three hours. The reason lies within the underlying \textsc{smt} solver: an unfavorable path choice in the solving procedure can result in extended running times. Additionally, for four runs, the verification either reported that some assertions might not hold or crashed internally. While restarting the verification procedure can lead to finding a successful run, this shows that our approach relies on external tools. Hence, its applicability increases with advances in research on automated proof checking of annotated code. This constitutes another reason for the continued development of valuable tools like \prusti and the \viper framework. \subsection{Performance of Generated Monitors} As expected, the compiled monitors exhibit superior running time when compared against the \rtlola~\cite{rtlolacavtoolpaper} interpreter. The comparison is based on randomly generated input data for the Altimeter\footnote{The specification was adapted to be compliant with \rtlola: rather than accessing the input with a future offset, the specification used a negative offset of $-2$.} and Network Traffic Monitor. For the first specification, the interpreter required $438\nano\second$ per event on average over 10 runs, whereas the compiled version took $6.2\nano\second$. 
The second, more involved specification shows similar results: $1.535\micro\second$ for the interpreter and $63.4\nano\second$ for the compiled version. \section{From Lola to Rust} The compilation proceeds in two steps. First, the \lola specification is analyzed to determine inter-stream dependencies, the overall memory requirement, and the different phases of the monitoring process. Second, the compiler produces \rust code that implements the specification. \subsection{Specification Analysis} \paragraph*{Execution Pre- and Postfix.} Refer back to the \lola specification in \Cref{fig:runningexample:spec}. Another beneficial property of the synchronous input model is that, starting from $t=2$, both stream accesses with offset $-1$ to \texttt{altitude} will always succeed, since the offset refers to the last evaluation of \texttt{altitude}, which has already happened at $t \geq 1$. For a more general analysis, suppose an output stream $s$ accesses another stream $s'$ with an offset of $n$. If $n$ is non-positive, then accesses may fail until $t = \shift{s} - n - \shift{s'}$, i.e.\@\xspace, they will not fail from $\shift{s} - n - \shift{s'} + 1$ on. If $n$ is strictly positive, however, the evaluation of $s$ needs to be delayed by $\shift{s} - n$, i.e.\@\xspace, until $s'$ has received the respective value. By generally delaying the execution of $s$, all accesses to $s'$ continue to succeed until $s'$ ceases to produce new values. As soon as this is the case, the monitor needs to evaluate $s$ for $\shift{s} - n$ more times to compensate for the delay. For instance, the evaluations of \texttt{tooLow} and \texttt{tooHigh} both have to be delayed by one step. \begin{figure}[t] \input{figures/stream-acceess.tex} \caption{% Illustration of stream accesses in different phases of the execution. An output stream~$o$ accesses an input stream~$i$ with offsets $-2$ and $+2$. In the prefix (postfix) of the execution, the past (future) accesses need to be substituted by their default values.% } \label{fig:streamaccess} \end{figure} This behavior induces the structure of the monitor execution: it starts with a prefix where past accesses always fail, loops in the regular execution where all accesses always succeed, and ends in a postfix where future accesses always fail. \Cref{fig:streamaccess} illustrates stream accesses in the different phases. It shows an output stream $o$ that accesses an input stream $i$ with offsets of $-2$ and $2$. In the first two iterations of the monitor execution, i.e.\@\xspace, in the prefix, the accesses to past values will fail, requiring the monitor to use the default values instead. Afterwards, all accesses succeed until the input stream ends. In the last two evaluations, i.e.\@\xspace, in the postfix, the future accesses fail and need to be replaced by the default values. While the shift only concerns time, it can also be used to compute the memory requirement of a stream, i.e.\@\xspace, the number of values of a single stream that can be relevant at the same time. If a stream $s$ of type $T$ has a memory requirement of $\memreq{s} = i$, the monitor needs to reserve $i \cdot \sizeof(T)$ bytes of memory for $s$. \begin{definition}[Memory Requirement] The \emph{memory requirement} of a dependency $(s', w, s) \in E$ is determined by the shifts of the streams as well as the weight $w$ of the dependency, i.e.\@\xspace, the offset of the stream access: $\shift{s} - \shift{s'} - w$. 
The memory requirement of a stream is thus the maximum requirement of any outgoing dependency: $\memreq{s} = \max \Set{\shift{s} - \shift{s'} - w \given (s', w, s) \in E}$. \end{definition} Hence, the compilation determines three key values for each specification. \begin{definition}[Memory Consumption, Prefix- and Postfix Length] Let $\memcon$, $\preflen$, and $\postlen$ be the \emph{memory consumption}, \emph{prefix length} and \emph{postfix length} of $\spec$, respectively, defined as follows: \begin{flalign*} \quad \quad \memcon &= \sum_{s \in \spec}\Set{\memreq{s} \cdot \sizeof(T_s) } &&\\ \preflen &= \max_{s \in \spec}\Set{\shift{s} + \memreq{s}} \\ \postlen &= \max_{s \in \spec} \Set{\shift{s}} \end{flalign*} \end{definition} Furthermore, the evaluation order $\evalorder$ of the output streams of a \lola specification induces the so-called \emph{evaluation layers}.% \begin{definition}[Evaluation Layer] Let $\spec$ be a \lola specification and let $\evalorder$ be the evaluation order induced by its dependency graph. If $\layer{s} = k$ for an output stream $s$, then there is a strictly decreasing sequence of $k$ streams with respect to~$\evalorder$ starting in $s$. \end{definition} Intuitively, an evaluation layer consists of all streams that are incomparable according to the evaluation order. For the \lola specification from \Cref{fig:runningexample:spec}, for instance, the output streams \texttt{tooLow} and \texttt{tooHigh} are incomparable according to the evaluation order. Thus, they are contained in the same evaluation layer. Evaluation layers are also used to identify independent streams and thus to enable their concurrent evaluation, as described in \Cref{sec:concurrency}. \subsection{Code Generation} The monitor code starts with a \emph{prelude}, which declares data structures and helper functions. It also contains the \lstinline{main} function, which starts with the static allocation of the working memory. The remainder of the \lstinline{main} function is the operative monitoring code consisting of three components: the \emph{execution prefix}, the \emph{monitor loop}, and the \emph{execution postfix}. The general structure is illustrated in \Cref{fig:monitorstructure}; details follow in the remainder of this section. \input{figures/codestructurealt} \paragraph*{Prelude.} The prelude declares several functions required throughout the monitor execution and declares as well as allocates the working memory. The functions consist of two I/O functions and the evaluation functions. \newcommand*\texpadsuxone{T_{s_1}} \newcommand*\texpadsuxtwo{T_{s_\ell}} The \lstinline{get_input() -> Option<($\texpadsuxone,\dots,\texpadsuxtwo$)>} function, where $T_{s_1},\dots,T_{s_\ell}$ are the types of all input streams, models the receipt of input data. It produces either \lstinline{None} if the execution of the system under scrutiny terminated, or \lstinline{Some(v)}, where $v$ is an $\ell$-tuple containing the latest input values. \newcommand*\texpadsuxthree{T_{s_{\ell + 1}}} \newcommand*\texpadsuxfour{T_{s_k}} Conversely, the function \lstinline{emit(&($\texpadsuxthree,\dots,\texpadsuxfour$))} conveys a $(k-\ell)$-tuple of output values to the system. For each stream, there are evaluation functions in several variants, depending on whether they will be called in the prefix, the loop, or the postfix. The implementations differ only in the logic accessing other streams. 
The \lola semantics dictates that the evaluation needs to check whether the accessed value exists and to substitute the respective default value if needed. However, an analysis of the dependency graph reveals statically which accesses will fail. Thus, providing several implementations renders such runtime checks redundant. The working memory is a struct aptly named \lstinline{Memory}. It consists of a static array for each stream in the specification and reads as follows: \begin{lstlisting}[style=ColoredRust] struct Memory { $s_1$: [$T_{s_1}$; $\memreq{s_1}$], $\dots$ , $s_k$: [$T_{s_k}$; $\memreq{s_k}$] } \end{lstlisting} Here, $s_1, \dots, s_k$ are all input and output streams with types $T_{s_1},\dots,T_{s_k}$. The monitor allocates \lstinline{Memory} once in its main function, keeps it on the stack, and grants read access to functions evaluating stream expressions. \paragraph*{Execution Prefix.} The prefix consists of $\preflen$ conditional blocks, each processing an input event of the system under scrutiny. If the system terminates before the prefix concludes, the function returns true, indicating an early termination, which prompts the \lstinline{main} function to initiate the postfix. Otherwise, the input is added to the working memory and, evaluation layer by evaluation layer, each output stream is evaluated in a dedicated function, as can be seen in the following code snippet. For this, assume that the specification has $\lambda^\ast$ evaluation layers, i.e.\@\xspace, $ \lambda^\ast = \max \Set{x \given \exists s_1, \dots, s_x\colon s_1 \evalorder \dots \evalorder s_x}. $ Moreover, $\lambda_i = \card{\Set{s \given \layer{s} = i}}$ denotes the number of streams within evaluation layer $i \leq \lambda^\ast$. Lastly, let $s_{i,j} \extevalorder s_{i, j+1}$ with $\layer{s_{i,j}} = \layer{s_{i,j+1}} = i$. \begin{lstlisting}[style=ColoredRust] let val_$s_{1,1}$ = eval_pre_1_$s_{1,1}$(&memory); ... let val_$s_{1,\lambda_1}$ = eval_pre_1_$s_{1,\lambda_1}$(&memory); memory.write_layer_1(val_$s_{1,1}$, ..., val_$s_{1, \lambda_1}$); ... let val_$s_{\lambda^\ast,1}$ = eval_pre_1_$s_{\lambda^\ast,1}$(&memory); ... let val_$s_{\lambda^\ast,\lambda_{\lambda^\ast}}$ = eval_pre_1_$s_{\lambda^\ast,\lambda_{\lambda^\ast}}$(&memory); memory.write_layer_$\lambda^\ast$(val_$s_{\lambda^\ast,1}$, ..., val_$s_{\lambda^\ast, \lambda_{\lambda^\ast}}$);$\tikzmark{longest}$ if val_$s_{t_1}$ { emit($m_{t_1}$); } \end{lstlisting} Note that, as indicated in the prelude, each conditional block calls a different set of evaluation functions. This allows for a fine-grained treatment of stream accesses, improving the overall performance at the cost of greater code size. Also, the call passes a single argument to the evaluation function: an immutable reference to \lstinline{Memory}. As a result, the \rust type system guarantees that the evaluation does not mutate its state. The function returns a value that is committed to \lstinline{Memory} after fully evaluating the current layer. The bodies of these functions are straightforward translations of stream expressions: each arithmetic and logical expression has a counterpart in \rust. Stream lookups access the only argument passed to the function, i.e.\@\xspace, a read-only reference to the working memory. The \lstinline{write_layer_$i$} functions commit computed stream values to \lstinline{Memory}. After $\memreq{s}$ iterations, the memory evicts the oldest data point for stream $s$, thus constituting a ring buffer. 
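To make this structure concrete, the following minimal sketch renders the three phases in Python for brevity; the compiler emits the analogous, statically allocated \rust code. The specification is hypothetical: a single input \lstinline{x} and a single output \lstinline{s := x[-1,0] + x}, so the prefix has length one and the postfix is empty since no positive offsets occur. \begin{lstlisting}[language=Python]
# Sketch (Python for brevity) of the generated three-phase monitor for the
# hypothetical specification  s := x[-1,0] + x.
def run_monitor(get_input, emit):
    mem_x = [0]   # working memory for x: ring buffer of size memreq(x) = 1
    t = 0
    # execution prefix: the access x[-1,0] statically fails, default 0 is used
    x = get_input()
    if x is None:
        return
    emit(0 + x)                  # prefix variant of the evaluation of s
    mem_x[t % len(mem_x)] = x
    t += 1
    # monitor loop: all accesses are guaranteed to succeed, no runtime checks
    while (x := get_input()) is not None:
        emit(mem_x[(t - 1) % len(mem_x)] + x)   # loop variant of s
        mem_x[t % len(mem_x)] = x               # evict the oldest value
        t += 1
    # execution postfix: empty here; it is only needed for future offsets
\end{lstlisting}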
\paragraph*{Monitor Loop.} The main difference between the monitor loop and the prefix is, as the name indicates, that the former consists of a loop. The loop terminates as soon as the system ceases to produce new inputs. At this point, the monitor transitions to the execution postfix. Within the loop, the monitor proceeds just as in the prefix, except that the evaluation functions are agnostic to the current iteration number. In the evaluation, all stream accesses are guaranteed to succeed, rendering the evaluation free of conditionals, except when the stream expression itself contains one. \paragraph*{Execution Postfix.} The structure of the execution postfix closely resembles the prefix, except for two differences: the postfix does not check for the presence of new input values, and it calls a different set of evaluation functions, specifically tailored to the postfix iterations. \paragraph*{Code Characteristics.} The generated code exhibits two advantageous characteristics. First, trading an increase in code size for quasi-duplicated evaluation functions leads to excellent performance in terms of running time. The functions require few arguments, avoid conditional statements as much as possible, and utilize memory locality. This is further emphasized by the lack of dynamic memory allocation and the use of native datatypes. Second, the clear code structure, especially with respect to memory accesses, drastically simplifies reasoning about the correctness of the code. \section{Introduction to Lola}\label{sec:compilation} The source language of our verifying compiler is the stream-based monitoring language \lola~\cite{lola}. A \lola monitor is a reactive component that translates, in an online fashion, input streams into output streams. In each time step, the monitor receives new values for the input streams and produces new values for the output streams in accordance with the specification. In principle, the monitoring can continue forever; if the monitor is terminated, it wraps up the remaining computations, produces a final output, and shuts down. \lola specifications are declarative in the sense that the semantics leaves a lot of implementation freedom: the semantics defines how specific values are combined arithmetically and logically, but the precise evaluation order and the memory management are determined by the implementation. A \lola specification defines a set of streams. Each stream is an ordered sequence of typed values that is extended throughout the monitor execution. There are three kinds of streams: \begin{description} \item[Input Streams] constitute the interface between the monitor and an external data source, i.e.\@\xspace, the system under scrutiny. \item[Output Streams] compute new values based on input streams, other output streams, and constant values. The computed values contain relevant information regarding the performance and health status of the system. \item[Triggers] constitute the interface between the monitor and the user. Trigger values are binary and indicate the violation of a property; in this case, the monitor alerts the user. \end{description} Syntactically, a \lola specification is given as a sequence of stream declarations. Input stream declarations are of the form $i_j: T_j$, where $i_j$ is an input stream and $T_j$ is its type. 
Output stream and trigger declarations are of the form $s_j : T_j = e_j(i_1,\dots,i_m,s_1,\dots,s_n)$, where $i_1, \dots, i_m$ are input streams, $s_1, \dots, s_n$ are output streams, and the $e_j$ are stream expressions. A stream expression consists of constant values, streams, arithmetic and logic operators $f(e_1, \dots, e_k)$, if-then-else expressions \texttt{ite}$(b,e_1,e_2)$, and stream accesses $e[k,c]$, where $e$ is a stream, $k$ is the \emph{offset}, and $c$ is the constant \emph{default value}. Stream accesses are either \emph{synchronous}, i.e.\@\xspace, a stream accesses the latest value of a stream, or \emph{asynchronous}, i.e.\@\xspace, a stream accesses a past or future value of another stream. \begin{figure}[t] \begin{lstlisting}[style=LolaDefault,caption={A \lola specification monitoring the altitude of a drone. The output stream \texttt{tooLow} (\texttt{tooHigh}) checks whether the drone flies below (above) a given minimum (maximum) altitude in the last, current, and next step. If this is the case, an alarm is raised.}, label=fig:runningexample:spec] input altitude: Int32 output tooLow: Bool := altitude[-1,0] < 200 & altitude < 200 & altitude[1,0] < 200 output tooHigh: Bool := altitude[-1,0] > 600 & altitude > 600 & altitude[1,0] > 600 trigger tooLow "Flying below minimum altitude." trigger tooHigh "Flying above maximum altitude." \end{lstlisting} \end{figure} The example specification shown in \Cref{fig:runningexample:spec} monitors the altitude of a drone, detects whether the drone flies below a given minimum altitude or above a given maximum altitude for too long, and raises an alarm if needed. The input stream \texttt{altitude} contains sensor information of the drone. The output stream \texttt{tooLow} checks whether the altitude is lower than the given minimum altitude of \texttt{200} in the last, current, and next step, denoted by \texttt{altitude[-1,0]}, \texttt{altitude}, and \texttt{altitude[1,0]}, respectively. If this is the case, a trigger is raised. Analogously, \texttt{tooHigh} checks whether the altitude is above the given maximum altitude in the last, current, and next step, and a trigger is raised in this case. The evaluations of \texttt{tooHigh} and \texttt{tooLow} try to access the second to last value of \texttt{altitude} as well as the last and the next one. If \texttt{altitude} does not have at least two values, the accesses with offset $-1$ fail and the default value, in this case $0$, is used. If \texttt{altitude} ceases to produce values, the accesses with offset $1$ fail. Hence, in contrast to negative offsets, the default value for accesses with positive offset is used at the end of the execution. The semantics of \lola is defined in terms of \emph{evaluation models}. Intuitively, an evaluation model consists of evaluations of each output stream of the specification. The evaluation is a natural translation of the stream expressions. The full formal definition is given in~\cite{lola}. \begin{definition}[Evaluation Model~\cite{lola}] Let $\varphi$ be a \lola specification over input streams $i_1, \dots, i_\ell$ and output streams $s_1, \dots, s_n$. The tuple $\langle\sigma_1, \dots, \sigma_n\rangle$ of streams of length $N+1$ is called an \emph{evaluation model} if for each equation $s_j = e_j(i_1, \dots,i_\ell,s_1,\dots,s_n)$ in $\varphi$, $\langle\sigma_1, \dots, \sigma_n\rangle$ satisfies $\sigma_j(k) = \val{e_j}{k}$ for $0 \leq k \leq N$, where $\val{e_j}{k}$ evaluates the stream expression $e_j$ at position $k$. 
\end{definition} \begin{figure}[t] \begin{subfigure}[b]{0.48\textwidth} \begin{center} \scalebox{0.91}{ \begin{tikzpicture}[->,shorten >=0pt,auto,stream/.style={draw,minimum height=15pt, minimum width=18pt},time/.style={draw=none, minimum width=18pt}] \node[] (t) at (-0.7,0.8) {\scriptsize$t$}; \node[] (b) at (-0.7,0) {$b$}; \node[] (a) at (-0.7,-1.2) {$a$}; \node[time] (t-1) at (0,0.8) {\scriptsize$-1$}; \node[time] (t0) [right=-0.5 pt of t-1] {\scriptsize$0$}; \node[time] (t11) [right=-0.5 pt of t0] {\scriptsize$1.1$}; \node[time] (t12) [right=-0.5 pt of t11] {\scriptsize$1.2$}; \node[time] (t21) [right=-0.5 pt of t12] {\scriptsize$2.1$}; \node[time] (t22) [right=-0.5 pt of t21] {\scriptsize$2.2$}; \node[time] (t31) [right=-0.5 pt of t22] {\scriptsize$3.1$}; \node[time] (t32) [right=-0.5 pt of t31] {\scriptsize$3.2$}; \node[] (td) [right=-0.5 pt of t32] {\scriptsize$\dots$}; \node[stream] (b-1) at (0,0) {$-$}; \node[stream] (b0) [right=-0.5 pt of b-1] {$-$}; \node[stream] (b11) [right=-0.5 pt of b0] {$1$}; \node[stream] (b12) [right=-0.5 pt of b11] {}; \node[stream] (b21) [right=-0.5 pt of b12] {$2$}; \node[stream] (b22) [right=-0.5 pt of b21] {}; \node[stream] (b31) [right=-0.5 pt of b22] {$3$}; \node[stream] (b32) [right=-0.5 pt of b31] {}; \node[] (bd) [right=-0.5 pt of b32] {$\dots$}; \node[stream] (a-1) at (0,-1.2) {$-$}; \node[stream] (a0) [right=-0.5 pt of a-1] {$-$}; \node[stream] (a11) [right=-0.5 pt of a0] {}; \node[stream] (a12) [right=-0.5 pt of a11] {$1$}; \node[stream] (a21) [right=-0.5 pt of a12] {}; \node[stream] (a22) [right=-0.5 pt of a21] {$2$}; \node[stream] (a31) [right=-0.5 pt of a22] {}; \node[stream] (a32) [right=-0.5 pt of a31] {$3$}; \node[] (ad) [right=-0.5 pt of a32] {$\dots$}; \path (b11) edge[thick,first,in=70,out=110] node {} (b0) (b21) edge[thick,second,in=60,out=120] node {} (b11) (b31) edge[thick,third,in=60,out=120] node {} (b21) (a12) edge[thick,first,in=270,out=90] node {} (b11) (a22) edge[thick,second,in=270,out=90] node {} (b21) (a32) edge[thick,third,in=270,out=90] node {} (b31); \end{tikzpicture}} \end{center} \vspace{-5pt} \caption{The result of evaluating the output streams respecting the evaluation order.}\label{fig:dependencychangesmodel:correct} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \begin{center} \scalebox{0.91}{ \begin{tikzpicture}[->,shorten >=0pt,auto,stream/.style={draw,minimum height=15pt, minimum width=18pt},time/.style={draw=none, minimum width=18pt}] \node[] (t) at (-0.7,0.8) {\scriptsize$t$}; \node[] (a) at (-0.7,0) {$a$}; \node[] (b) at (-0.7,-1.2) {$b$}; \node[time] (t-1) at (0,0.8) {\scriptsize$-1$}; \node[time] (t0) [right=-0.5 pt of t-1] {\scriptsize$0$}; \node[time] (t11) [right=-0.5 pt of t0] {\scriptsize$1.1$}; \node[time] (t12) [right=-0.5 pt of t11] {\scriptsize$1.2$}; \node[time] (t21) [right=-0.5 pt of t12] {\scriptsize$2.1$}; \node[time] (t22) [right=-0.5 pt of t21] {\scriptsize$2.2$}; \node[time] (t31) [right=-0.5 pt of t22] {\scriptsize$3.1$}; \node[time] (t32) [right=-0.5 pt of t31] {\scriptsize$3.2$}; \node[] (td) [right=-0.5 pt of t32] {\scriptsize$\dots$}; \node[stream] (a-1) at (0,0) {$-$}; \node[stream] (a0) [right=-0.5 pt of a-1] {$-$}; \node[stream] (a11) [right=-0.5 pt of a0] {$1$}; \node[stream] (a12) [right=-0.5 pt of a11] {}; \node[stream] (a21) [right=-0.5 pt of a12] {$1$}; \node[stream] (a22) [right=-0.5 pt of a21] {}; \node[stream] (a31) [right=-0.5 pt of a22] {$2$}; \node[stream] (a32) [right=-0.5 pt of a31] {}; \node[] (ad) [right=-0.5 pt of a32] {$\dots$}; \node[stream] (b-1) at 
(0,-1.2) {$-$}; \node[stream] (b0) [right=-0.5 pt of b-1] {$-$}; \node[stream] (b11) [right=-0.5 pt of b0] {}; \node[stream] (b12) [right=-0.5 pt of b11] {$1$}; \node[stream] (b21) [right=-0.5 pt of b12] {}; \node[stream] (b22) [right=-0.5 pt of b21] {$2$}; \node[stream] (b31) [right=-0.5 pt of b22] {}; \node[stream] (b32) [right=-0.5 pt of b31] {$3$}; \node[] (bd) [right=-0.5 pt of b32] {$\dots$}; \path (b12) edge[thick,first,in=60,out=120] node {} (b0) (b22) edge[thick,second,in=60,out=120] node {} (b12) (b32) edge[thick,third,in=60,out=120] node {} (b22) (a11) edge[thick,first,in=90,out=270] node {} (b0) (a21) edge[thick,second,in=90,out=270] node {} (b12) (a31) edge[thick,third,in=90,out=270] node {} (b22); \end{tikzpicture}} \end{center} \vspace{-5pt} \caption{The result of evaluating the output streams in order of their declaration.}\label{fig:dependencychangesmodel:wrong} \end{subfigure} \vspace{5pt} \caption{Two different evaluations of the output streams $a$ and $b$, where $a$ accesses~$b$ synchronously and $b$ accesses its previous value. Both accesses default to $0$ and both $a$ and $b$ increase the obtained value by $1$.}\label{fig:dependencychangesmodel} \end{figure}

Synchronous accesses harbor a pitfall for the monitor realization, as illustrated in \Cref{fig:dependencychangesmodel}. Consider the corresponding \lola specification:
\begin{lstlisting}[style=LolaDefault]
output a: Int32 := b[ 0, 0] + 1
output b: Int32 := b[-1, 0] + 1
\end{lstlisting}
Here, $a$ accesses $b$ synchronously, while $b$ accesses its previous value. The evaluation of $a$ tries to access the current value of $b$ and increases the result by one, which yields the next stream value of $a$. In contrast, the evaluation of $b$ tries to access the last value of $b$ and increases the result by one to determine the next stream value of $b$. \Cref{fig:dependencychangesmodel:correct} depicts the resulting output. If the monitor evaluates the streams in order of their declaration, however, the resulting output, shown in \Cref{fig:dependencychangesmodel:wrong}, differs from the expected one. The reason is that the \emph{current} value of $b$ changes depending on whether or not $b$ has already been extended when accessing the value. This problem is solved by respecting the evaluation order, a partial order on the output streams. It is induced by the dependency graph of a \lola specification.

\begin{definition}[Dependency Graph~\cite{lola}] The \emph{dependency graph} $D_\spec = (V, E)$ of a \lola specification $\spec$ is a weighted directed multigraph. Each vertex represents a stream and each edge an access operation. Thus, $s \in V$ iff $s$ is a stream or trigger in $\spec$, and $(s_1, n, s_2) \in E$ for $s_1, s_2 \in V$, $n \in \mathbb{Z}$ iff the stream expression of $s_1$ contains an access to $s_2$ with offset $n$. \end{definition}

Based on the dependency graph, d'Angelo~\etal define the \emph{shift} of a stream~\cite{lola}. Intuitively, the shift of $s$ indicates how many steps the evaluation of its expression needs to be delayed. For instance, suppose the delay is $n > 0$. Then the value of $s$ for time $t$ can be computed at time $t+n$.

\begin{definition}[Shift~\cite{lola}] For a \lola specification $\spec$, the \emph{shift} $\shift{s}$ of a stream $s$ is the greatest weight of a path through the dependency graph of $\spec$ originating in $s$: $ \shift{s} = \max(0, \max \Set{w + \shift{s'} | (s, w, s') \in E}). $ \end{definition}

The shift allows us to define an order in which streams need to be evaluated.
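As a short worked example, consider the specification above: its dependency graph contains the edges $(a, 0, b)$ and $(b, -1, b)$, so $\shift{b} = \max(0, -1 + \shift{b}) = 0$ and $\shift{a} = \max(0, 0 + \shift{b}) = 0$, i.e., neither evaluation needs to be delayed. For the specification from \Cref{fig:runningexample:spec}, in contrast, the future offset $1$ on \texttt{altitude} yields $\shift{\texttt{tooLow}} = \shift{\texttt{tooHigh}} = 1$.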
To define this order, we introduce the set of synchronized edges $E^\ast$, where the weight of a synchronized edge $(s, n, s') \in E^\ast$ indicates when $s$ can access $s'$ successfully with an offset of $n$. Let $E^\ast = \Set{(s, \shift{s} - w - \shift{s'}, s') \given (s, w, s') \in E}$.

\begin{definition}[Evaluation Order] The \emph{evaluation order} $\evalorder$ is a partial order on the output streams of a \lola specification $\spec$. Let $D_\spec=(V,E)$ be the dependency graph of $\spec$. The evaluation order is the transitive closure of a relation $\prec$ with $s \prec s'$ iff $(s', 0, s) \in E^\ast$. \end{definition}

Clearly, we obtain $b \evalorder a$ for the above \lola specification, since $(a, 0, b) \in E^\ast$, yielding the expected result depicted in \Cref{fig:dependencychangesmodel:correct}. For the \lola specification from \Cref{fig:runningexample:spec}, however, the output streams \texttt{tooLow} and \texttt{tooHigh} are incomparable according to the evaluation order. A total evaluation order on the output streams, denoted~$\extevalorder$, is obtained by relating incomparable streams arbitrarily.

\begin{remark}[On Asynchronous Accesses and Off-by-one Errors] It is fairly easy to make off-by-one errors in asynchronous stream accesses. When two streams within one layer access each other asynchronously, one of the offsets needs to be decreased by 1, depending on which stream is evaluated first. This cannot be avoided for any $\extevalorder$. To simplify matters, we ignore this issue in the remainder of this paper; the correct adjustment of the indices is, however, implemented in the compiler. \end{remark}

Specifications where the dependency graph has no positive cycles are called \emph{efficiently monitorable}: such specifications can be monitored with constant memory, and an output value can always be produced after a constant delay~\cite{lola}. All example specifications considered in this paper are efficiently monitorable.

\section{Conclusion}\label{sec:conclusion} We have presented a compilation of \lola specifications into \rust code. Using \rust as the compilation target has the advantage that the executables are highly performant and can be used directly on many embedded platforms. The generated code contains annotations that enable the verification of the code using the \viper framework. With the guiding assertions in the code, as well as function contracts and loop invariants, \viper can verify monitors even for large specifications. Our results are promising and encourage further research in this direction, for example compiling more expressive dialects of \lola such as \rtlola~\cite{rtlolaarxiv,maxmaster}. \rtlola extends \lola with real-time aspects and can handle asynchronous inputs. The added functionality is highly relevant in the design of monitors for cyber-physical systems~\cite{rtlolacavindustrial,rtlolacavtoolpaper}. While generating verifiable \rtlola monitors in \rust will require additional effort, such an extension would further improve the practical applicability of our approach.

\section{Concurrent Evaluation}\label{sec:concurrency} Evaluating independent streams concurrently can significantly improve the performance of the monitor. In the following, we devise an analysis of \lola specifications that enables safe parallelization. We observe two characteristics of \lola: the computation of a stream expression can only \emph{read} the memory of other streams, and inter-stream dependencies are determined statically.
The evaluation layers are a manifestation of the second observation. They group streams which are incomparable according to the evaluation order. Combined with the first observation, we can conclude that all streams within one layer may be computed in parallel. Thus, the compilation spawns a new thread for each stream within the layer with read access to the global memory. We add annotations to the code that enable \viper to verify that the parallel execution remains correct. The compilation capitalizes on \rust's concurrency capabilities by evaluating different output streams in parallel. A major advantage of \rust is that its ownership model enforces a strict separation of mutable and immutable data. Any data point has exactly one owner who can transfer ownership for good or let other functions borrow the data. Borrowing data is again either mutable or immutable. If a function mutably borrows data, no other function, including the owner, can read or write this data. Similarly, if a function immutably borrows data, other functions and the owner can only read it. A consequence of this fine-grained access management with static enforcement is that enabling concurrency becomes rather easy when compared to languages like C. Enabling the concurrent evaluation requires slight changes in the code generation. First, evaluation functions are annotated with \lstinline{#[pure]}. This indicates that a function mutates nothing but its local stack portion. For the evaluation logic, the compiler still proceeds layer by layer, opening a \emph{scope} for each of them. In the scope, it generates code following the total evaluation order $\extevalorder$. However, rather than calling the respective evaluation functions directly, the parallelized version spawns a thread for each stream and starts the evaluation inside it. Assume $s_1, \dots, s_n$ constitute a single layer of a specification. The evaluation then looks as follows: \begin{lstlisting}[style=ColoredRust] let (v_1, ..., v_n) = crossbeam::scope(|scope| { let handle_s1 = scope.spawn(move |_| { eval_s1(&memory) }); ... let handle_sn = scope.spawn(move |_| { eval_sn(&memory) }); (handle_s1.join().unwrap(), ..., handle_sn.join().unwrap()) }).unwrap() \end{lstlisting} Note that the code snippet uses the \rust crate crossbeam, a standard concurrency library. A similar result can be achieved without external code by moving the global memory to the heap and using the standard \rust thread logic.\footnote{% On a technical note: \rust's type system requires the programmer to guarantee that the global memory will not be dropped until all threads terminate. Thus, the memory needs to be wrapped into an \emph{Atomically Reference Counted (Arc)} pointer. This has two disadvantages: all accesses to memory require generally slower heap access and the evaluation suffers from the overhead accompanying atomic reference counting.} The correctness of this approach is an immediate consequence of the correctness of the evaluation order and memory locality of streams. In particular, the independence of streams within the same evaluation layer and the pureness of the functions are crucial. The latter ensures that the function does not mutate anything outside of its local stack. The former ensures that using pure evaluation functions within the same layer is indeed possible. Thus, the order of execution cannot change the outcome of the function, enabling the concurrent evaluation. Note that spawning a thread for each stream evaluation is a double-edged sword. 
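Before weighing the costs and benefits of this design, the following minimal, self-contained sketch illustrates the scheme for a hypothetical layer of two streams. The struct \lstinline{Memory}, its field, and the evaluation functions are illustrative assumptions modeled on the drone example; this is a sketch of the pattern, not the generated code itself.
\begin{lstlisting}[style=ColoredRust]
// Illustrative snapshot of the global memory (assumed layout).
struct Memory { altitude: i32 }

// Pure evaluation functions: they only read the shared memory.
fn eval_too_low(mem: &Memory) -> bool { mem.altitude < 200 }
fn eval_too_high(mem: &Memory) -> bool { mem.altitude > 600 }

fn main() {
    let memory = Memory { altitude: 150 };
    // Scoped threads may borrow `memory` immutably, because the
    // scope joins all spawned threads before it returns.
    let (too_low, too_high) = crossbeam::scope(|scope| {
        let h1 = scope.spawn(|_| eval_too_low(&memory));
        let h2 = scope.spawn(|_| eval_too_high(&memory));
        (h1.join().unwrap(), h2.join().unwrap())
    }).unwrap();
    assert!(too_low && !too_high);
}
\end{lstlisting}
The generated code follows the same pattern, with one handle per stream of the layer, produced in the order given by $\extevalorder$.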
While spawning a thread per stream can drastically reduce the monitor's latency, each spawn induces a constant overhead. Thus, reducing the number of spawns while increasing the parallel computation time maximizes the gain. Consequently, the monitor benefits more strongly from the parallel evaluation when its dependency graph is wide, enabling several cores to compute in parallel. Similarly, specifications with large stream expressions benefit from the multi-threading because the share of parallel computations increases. This lowers the relative impact of the constant thread-spawning overhead.

\section{Introduction}\label{sec:intro} Cyber-physical systems are inherently safety-critical, because failures immediately impact the physical environment. A crucial aspect of the development of such systems is therefore the integration of reliable monitoring mechanisms. A \emph{monitor} is a special system component that typically has broad access to the sensor readings and the resulting control decisions. The monitor assesses the system's health by checking its behavior against a specification. If a violation is detected, the monitor raises an alarm and initiates mitigation protocols such as an emergency landing or a graceful shutdown. An obvious concern with this approach is that the safety of the system rests on the correctness of the monitor. \emph{Quis custodiet ipsos custodes?} For simple specifications, this is not a serious problem. An \ltl~\cite{ltl} specification, for example, can be translated into a finite-state automaton that is proven to correspond to the semantics of the specification. Implementing such an automaton correctly as a computer program is not difficult. For more expressive specification languages, establishing the correctness of the monitor is much more challenging. Especially problematic is the use of interpreters, which read the specification as input and then rely on complicated and error-prone software to interpret the specification dynamically at runtime~\cite{rtlolaarxiv,rtlolacavindustrial,rtlolacavtoolpaper,striver,tessla}. Recently, however, much effort has gone into the development of compilers. Compared to a full-scale interpreter, the code produced by a compiler for a specific specification is fairly simple and well-structured. Some compilers even include special mechanisms that increase the confidence in the monitor. For example, the \rtlola compiler~\cite{fpgalola} generates \vhdl code that is annotated with tracing information that relates each line of code back to the specific part of the specification it implements. The \copilot compiler \cite{copilot} produces a test suite for the generated C code. The framework even includes a bounded model checker, which can check the correctness of the output for input sequences up to a fixed length. However, none of these approaches actually proves the functional correctness of the monitor. In this paper, we present a \emph{verifying compiler} that translates specifications given in the stream-based monitoring language \lola~\cite{lola} to implementations in \rust\footnote{\url{https://www.rust-lang.org/}}. The generated code is fully annotated with formal function contracts, loop invariants, and inline assertions, so that functional correctness and guaranteed termination can be automatically verified by the \viper~\cite{viper} toolkit, without any restriction on the length of the input trace.
Since the memory requirements of a \lola specification can be computed statically, this yields a formal guarantee that on any platform that satisfies these requirements, the monitor will never crash and will always compute the correct output. A major practical concern for any compiler is the performance of the generated code. Our \lola-to-\rust compiler produces highly efficient monitor implementations because it parallelizes the code for the evaluation of the specifications. Since \lola is a stream-based specification language, it exhibits a highly modular and memory-local structure, i.e.\@\xspace, the computation of a stream writes only in its own local memory, although it may read from the local memory of several other streams. The compiler statically analyzes the dependencies between the streams, resulting in a partial evaluation order. To prove correctness, it is shown that streams that are incomparable with respect to the evaluation order can indeed be evaluated in parallel. We have used our compiler to build monitors from specifications of varying sizes found in the literature. In our experience, the compiler itself scales very well. The verification in \viper, however, is expensive. It appears that the running times of the underlying \textsc{smt} solver Z3~\cite{z3} vary greatly, even for different runs on the same monitor and specification. Nevertheless, we have been successful in all our benchmarks in the sense that the compiler either generated a verified monitor or uncovered an error in the specification. This is a major step forward towards the \emph{verified monitoring} of real-life safety-critical systems.

\section{Motivation}\label{sec:motivation} Drones are a common target for runtime verification. For instance, it is easy to detect whether a drone flies below a given minimum altitude for too long and to raise an alarm if needed. This can be specified in \lola as follows:
\begin{lstlisting}[style=LolaDefault]
input altitude: Int
output tooLow: Bool := altitude[1,0] < 200 && altitude < 200 && altitude[-1,0] < 200
output tooHigh: Bool := altitude[1,0] > 600 && altitude > 600 && altitude[-1,0] > 600
trigger tooLow "Flying below minimum altitude"
trigger tooHigh "Flying above maximum altitude"
\end{lstlisting}
The input stream \texttt{altitude} contains sensor information of the drone. The output stream \texttt{tooLow} checks whether the altitude is lower than the given minimum altitude of \texttt{200} in the current, the last (\texttt{altitude[-1,0]}), and the next (\texttt{altitude[1,0]}) step. If this is the case, a trigger is raised. Analogously, the output stream \texttt{tooHigh} checks whether the altitude is higher than the given maximum altitude of \texttt{600} in the last, current, and next step. A trigger is raised in this case. To verify the correctness of the monitor, we translate the specification into \rust code containing verification conditions. Since the above \lola specification contains future offsets, in particular \texttt{altitude[1,0]}, the translation has to delay the computations of \texttt{tooLow} and \texttt{tooHigh} by one step. In order to derive all computation delays induced by a \lola specification, we analyze its dependency graph. The one for the above specification is shown in Figure~\ref{fig:dep_graph_motivation}.
\begin{figure}[t] \begin{center} \begin{tikzpicture}[->,shorten >=2pt,auto,offset/.style={fill=white, anchor=center, pos=0.5, inner sep=3pt}] \node[state,ellipse] (alt) at (4,-0.75) {altitude}; \node[state,ellipse] (tl) at (0,0) {tooLow}; \node[state,ellipse] (th) at (0,-1.5) {tooHigh}; \node[state,ellipse] (tr1) at (-3,0) {trigger}; \node[state,ellipse] (tr2) at (-3,-1.5) {trigger}; \path (tl) edge node[offset] {$-1, 0, 1$} (alt) (th) edge node[offset] {$-1, 0, 1$} (alt) (tr1) edge node [offset] {$0$} (tl) (tr2) edge node[offset] {$0$} (th); \end{tikzpicture} \end{center} \caption{The dependency graph of the \lola specification for monitoring the altitude of a drone.}\label{fig:dep_graph_motivation} \end{figure}

The nodes denote the streams, the edges denote stream accesses, and the edge weights correspond to the offsets of the accesses. Hence, edges with positive weights imply computation delays. In particular, the above specification introduces two computation delays, namely those of \texttt{tooLow} and \texttt{tooHigh}. While the output streams \texttt{tooLow} and \texttt{tooHigh} both depend on the input stream \texttt{altitude}, they do not depend on each other. Thus, since stream accesses are read-only, their evaluation is interleaving invariant, i.e.\ an arbitrary scheduler cannot change the result of the evaluations of \texttt{tooLow} and \texttt{tooHigh} or introduce data races. In particular, \texttt{tooLow} and \texttt{tooHigh} can be evaluated concurrently.

\section{Related Work}\label{sec:rw} The development of a verifying compiler was identified by Tony Hoare as a grand challenge for computing research~\cite{10.1007/978-3-540-45213-3_4}. Milestone results have been the concept of proof-carrying code (\textsc{pcc})~\cite{proofcarryingcode} and the technique of checking the result of each compilation instead of verifying the compiler's source code~\cite{certifyingcompiler}. \textsc{pcc} architectures~\cite{pccjava} and certifying compilers~\cite{ccjava} exist for general purpose languages like \java. A variation of \textsc{pcc}, abstraction-carrying code~\cite{acc,cacc}, was developed for constraint logic programs, where a fixpoint of an abstract interpretation serves as a certificate for invariants. This enables automatic proof generation. In this paper, we present a verifying compiler for the stream-based monitoring language \lola. Compared to general programming languages, the compilation of monitoring languages is still a young research topic. Some work has focused on compiling specifications immediately into executable code. Rmor~\cite{rmor}, for instance, generates constant-memory C code. Similarly, a \copilot~\cite{copilot} specification can be compiled into a constant-memory and constant-time C realization. The \copilot toolchain~\cite{copilotembedded} enables the verification of the monitor using the \textsc{cbmc} model checker~\cite{cbmc}. As opposed to our approach, their verification is limited to the absence of various arithmetic errors, lacking functional correctness. While \textsc{cbmc} can verify arbitrary inline assertions, \copilot does not generate them. Note that, in contrast to \lola, \copilot can express real-time properties. \rtlola~\cite{rtlolaarxiv,maxmaster}, on the other hand, is a real-time, asynchronous extension of \lola, for which a compilation into the hardware description language~\vhdl exists~\cite{fpgalola}. The \vhdl code contains traceability annotations~\cite{janmaster} and can then be realized on an \fpga.
Similarly, Pellizzoni~\etal~\cite{pellizzioni} and Schumann~\etal~\cite{rtutjournal,rtutrv} realize their runtime monitors on \fpga{}s, yet without verification or traceability. Beyond dedicated specification languages, there are several logics for which verified compilers exist. Differential dynamic logic~\cite{ddl}, for example, was specifically designed to capture the complex hybrid dynamics of cyber-physical systems. The ModelPlex~\cite{modelplex} framework translates such a specification into several verified components monitoring both the environment with respect to the assumed model and the controller decisions. Lastly, there is work on verifying monitors for metric first-order temporal logic~\cite{metriccompiler} and metric first-order dynamic logic~\cite{metriccompiler2}.

\section{Verification}\label{sec:verificationannotations} Our goal is to prove that the verdicts produced by the monitor correspond to the formal semantics. The main challenge is that the evaluation model of the \lola semantics refers to unbounded data sequences, disregarding any memory concerns. The implementation, however, manages the monitoring process with only a finite amount of memory. As a result, the \lola semantics may refer to data values long after they have been discarded in the implementation. Hence, the relation between the memory content and the evaluation model, and thus the correctness of the computation, is no longer apparent.

\begin{figure}[t] \centering \input{figures/informationflow} \vspace{-0.6cm} \caption{Information flow between the monitor and the ghost memory.} \label{fig:verification:flow} \end{figure}

We solve this problem with the classic proof technique of introducing so-called \emph{ghost memory}. The compilation introduces another data structure named \lstinline{Ghost Memory} (GM), which is a wrapper for \rust vectors, i.e.\@\xspace, dynamically growing sequences of data. Whenever the monitor receives or computes any data, it commits it to the GM. The GM's size thus obviously exceeds any bound, voiding the memory guarantees. However, the ghost memory's sole purpose is to aid the verification and not the monitor; information flows from the program into the GM and the proof, but remains strictly separated from the monitor execution. This allows for removing the GM after successfully verifying the correctness of the monitor without altering its behavior. \Cref{fig:verification:flow} illustrates the flow of information between the monitor and GM. Clearly, the monitor remains unaffected when removing any proof artifacts. The correctness proof has two major obligations: proving compliance between values in the GM and the working memory, and proving the correctness of the trigger evaluations with respect to the ghost memory. These obligations are encoded as verification annotations, such that the \viper framework verifies them automatically. The compilation generates additional annotations to guide the verification process. \viper annotations fall into the following categories: \begin{description} \item[Function Contracts] Annotations in front of a function \lstinline{f} consist of preconditions and guarantees. \viper imposes constraints on the function caller and the function body itself. Each call to \lstinline{f} is replaced by an assertion of the preconditions of \lstinline{f}, prompting \viper to prove their validity, and an assumption of the guarantees. In a separate step, \viper assumes the preconditions and verifies that the guarantees hold after executing the function body.
Note that the \rust type system already ensures that references passed to the function are accessible and cannot be modified or freed unless they are explicitly declared mutable. \item[Loop Invariants] \viper analyzes while-loops similarly to functions in three steps. First, the code leading to the loop needs to satisfy the invariants. Second, \viper assumes both the loop invariant and the loop condition to hold and verifies that the invariant again holds after the execution of the body. Lastly, \viper assumes the invariant and the negation of the loop condition to hold for the code after the loop. \item[Inline Assertions] Both loop invariants and function contracts impose implicit assertions on the code. \viper allows for supplementing them with explicit inline assertions using the \rust \lstinline{assert!} macro. Usually, the macro checks an expression at runtime. \viper, however, eliminates the need for this dynamic check as it verifies the correctness statically and transforms it into an assumption for the remainder of the verification. Thus, the assertions serve a similar function to the ghost memory: they are a proof construct and do not influence the monitor per se (\cf \Cref{fig:verification:flow}). \end{description}

\paragraph*{Annotation Generation.} The compilation inserts annotations at several key locations. First, as an example for function annotations, consider a function that retrieves a value of the stream~$s$ from the working memory. The function takes the relative index of the retrieved value as its single argument, i.e.\@\xspace, an index of~$1$ accesses the second-to-newest value. The annotation requires that the index must not exceed the memory reserved for~$s$. Syntactically, this results in the following annotation in front of the function head: \lstinline{#[requires="index < $\lstmathanno\memreq{s}$"]}. Moreover, the function needs to guarantee that the return value corresponds to the respective value stored in \lstinline{Memory}. This is expressed by the annotation \lstinline{#[ensures="index == $\lstmathanno i$ ==> result == self.$\lstmathanno s$[$\lstmathanno i$]"]} for each $i \leq \memreq{s}$. The remaining function annotations follow a similar pattern, i.e.\@\xspace, they require valid arguments and ensure correct outputs as well as the absence of undesired changes. Note that the ghost memory is essentially a wrapper for \rust vectors as they represent a growing list of values. Thus, functions concerning the ghost memory carry the standard annotation ensuring correctness of the vector as presented in the \viper examples.\footnote{See e.g.\@\xspace the verified solution for the Knapsack Problem: \url{https://github.com/viperproject/prusti-dev/blob/master/prusti/tests/verify/pass/rosetta/Knapsack_Problem.rs}.} Second, the loop has several entry checks that are expressed as inline assertions. These ensure that the iteration count is $\preflen$ and that the length of the ghost memory for a stream $s$ is $\preflen - \shift{s}$. This is necessary because the loop invariant asserts equivalence between an excerpt of the ghost memory and the working memory. While the existence of all accessed values in the working memory is guaranteed due to the static allocation, the GM grows dynamically. Hence, the compilation adds the entry checks. In terms of memory equivalence, it remains to be shown that all values in the working memory correspond to the respective entry in the ghost memory.
Formally, let $m$ be the working memory and let $g$ be the ghost memory, where index 0 marks the latest value. Furthermore, let $\eta$ be the current iteration count. Then, the invariant checks:
\begin{equation}\label{eq:maininv}
\forall s\colon\forall i\colon (0 \leq i < \memreq{s} \implies m_s[i] = g_s[i]).
\end{equation}
At loop entry, $\memreq{s} = \preflen - \shift{s} = \eta - \shift{s}$ is the number of iterations in which a value for $s$ was computed. In each further iteration of the loop, the invariant checks that the former $\memreq{s}-1$ entries remained the same and that the new values in the ghost memory $g$ and the working memory $m$ are equal. The first of these checks is not strictly necessary for the proof because it immediately follows from the function contracts of the helper functions. However, after completing one loop iteration, \viper deletes prior knowledge about all variables that were mutated in the loop. Further reasoning about these variables is thus solely based on the loop invariants. To express \Cref{eq:maininv} in \viper, the compilation needs to statically resolve the universal quantification over the streams. Thus, for each stream~$s$, the compilation generates the annotation \lstinline{#[invariant="forall i: usize :: (0 <= i && i < $\lstmathanno\memreq{s}$) ==> mem.get_$\lstmathanno s$(i) == gm.get_$\lstmathanno s$(iter - 1)"]}, where \lstinline{iter} is a variable denoting the current iteration, \lstinline{mem} is the working memory, and \lstinline{gm} is the ghost memory. \viper is able to handle the remaining universal quantification over $i$. However, the compilation reduces the verification effort further by unrolling it. This is possible since the memory requirement $\memreq{s}$ of a stream $s$ is determined statically. Lastly, the compilation introduces inline assertions after the evaluation of stream expressions, i.e.\@\xspace, in the prefix, postfix, and loop body. These annotations show that computed values are correct when assuming that the values retrieved from the working memory are correct as well. This argument is well-founded because the compilation substitutes failing stream accesses by their respective default values. Thus, any value retrieved from \lstinline{Memory} was computed in an earlier iteration or layer and therefore proven correct by \viper. It only remains to be shown that the stream expression is properly evaluated. Expressions consist of arithmetic or logical functions, constants, and stream accesses. The former two can be trivially represented in \viper. Since the memory is assumed to be correct and failing accesses are substituted by constants when possible, accesses also translate naturally into \viper.

\paragraph*{Conclusion.} The validity of the assertions after the evaluation logic shows that newly computed values are correct if the values in the working memory $m$ and the ghost memory $g$ coincide. This fact is guaranteed by the loop invariant. Furthermore, the inductive argument of the loop invariants allows us to conclude that, if $m$ were to never discard values, $m_s[i] = g_s[i]$ for all streams $s$ and $i \leq \eta$. Thus, $m$ is a subsequence of $g$, which perfectly reflects the evaluation model. As a result, any trigger violation detected by the monitor realization corresponds to a violation in the evaluation model for the same sequence of input values; the realization is verifiably correct.
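As a closing illustration, the following sketch shows the shape of a generated working-memory accessor under the assumption of a single stream $s$ with $\memreq{s} = 2$; the struct layout and names are illustrative, while the annotations follow the pattern described above.
\begin{lstlisting}[style=ColoredRust]
// Hedged sketch: working memory for one stream `s` with
// memory requirement 2; index 0 is the newest value.
pub struct Memory { s: [i32; 2] }

impl Memory {
    // Precondition: the index must not exceed the reserved memory.
    #[requires="index < 2"]
    // Postconditions: the result matches the stored value.
    #[ensures="index == 0 ==> result == self.s[0]"]
    #[ensures="index == 1 ==> result == self.s[1]"]
    pub fn get_s(&self, index: usize) -> i32 {
        self.s[index]
    }
}
\end{lstlisting}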
{ "timestamp": "2020-12-17T02:19:07", "yymm": "2012", "arxiv_id": "2012.08961", "language": "en", "url": "https://arxiv.org/abs/2012.08961" }
\subsection{Online Representation}\label{sec:measuerments-oov} \textbf{Word Embedding for Logs.} Logs are designed to facilitate user readability. Consequently, the constant parts of logs are defined in a human-readable manner by developers. Many methods (\textit{e.g.,} word2vec~\cite{mikolov2013exploiting}) thus use natural language processing (NLP) methods to represent words. However, these methods cannot represent domain-specific words accurately. For example, ``down'' and ``up'' in Fig.~\ref{fig:logs} are antonyms but they have similar contexts. Besides, system upgrades usually generate new types of logs with unseen words~\cite{log2vec} (\textit{e.g.,} ``Vlan-interface'' in Figure~\ref{fig:logs}), which pose a challenge for generating distributed representations of words in logs. For this reason, we adopt our previous work, Log2Vec~\cite{log2vec}, to represent the words of logs. Log2Vec combines a log-specific word embedding method, which accurately extracts the semantic information of logs, with an out-of-vocabulary (OOV) word processor that embeds unseen words into distributed representations at runtime.

\textbf{Information Extraction.} Information extraction retrieves relational triples from unstructured text. This is usually done in the form of triples for binary relations, relating two arguments by a predicate or relation for each relation in a given sentence, \textit{i.e.,} $(argument_{1}, relation_{1,2}, argument_{2})$~\cite{Banko2008OpenIE}. Traditional information extraction approaches rely on predefining a limited set of target relations and hand-crafted patterns. For this, they adapt Named Entity Recognizers and dependency parsers to target a specific domain. These approaches then require manual effort to be repurposed and applied to a different domain. In order to address the challenge of scalability for performing web information extraction, Banko \textit{et al.} \cite{Banko2008OpenIE} introduced Open Information Extraction (OpenIE). They presented a new information extraction paradigm that allows extracting relations without defining the number or type of relations in advance.

\section{Introduction} \input{intro.tex} \label{sec:introduction} \section{Background}\label{sec:background} \input{background.tex} \section{Challenges and Overview}\label{sec:challenge} \input{challenge.tex} \section{Algorithm}\label{sec:design} \input{design.tex} \section{Experiments}\label{sec:evaluation} \input{evaluation} \section{Case Study}\label{sec:case-study} \input{case-study} \section{Related Work}\label{sec:related} \input{related_work} \section{Conclusion}\label{sec:conclusion} \input{conclusion} \bibliographystyle{unsrt}

\subsection{Experimental Setting}\label{sec:setting} \subsubsection{Datasets}\label{sec:datasets} We conduct experiments over four public log datasets from several services: BGL logs~\cite{Xu2009Detecting}, HDFS logs~\cite{deeplog}, HPC logs~\cite{lin2016log}, and Proxifier logs~\cite{logzip}. Since the lack of a publicly available summary gold standard hinders the automatic evaluation, we manually labelled the above public log datasets and make them available\footnote{https://github.com/LogSummary/ICSE2021/tree/main/data}. In this paper, we provide two kinds of gold standard datasets. For information extraction, we labelled all templates for all logs (\cite{zhu2019tools} only labels templates for 2,000 logs per dataset) and labelled OpenIE triples leveraging semantic information and domain knowledge.
To evaluate log summarization, we chose 100 groups of 20 contiguous logs per dataset and generated their summaries manually. Detailed information on the above datasets is listed in Table~\ref{tab:datasets}.

\begin{table}[!ht] \centering \caption{Details of the service log datasets.}\label{tab:datasets} \renewcommand\tabcolsep{2.75 pt}
\begin{tabular}{ccc}
\toprule
Datasets & Description & \# of logs\\
\midrule
BGL & Blue Gene/L supercomputer log & 4,747,963\\
HDFS & Hadoop distributed file system log & 11,175,629\\
HPC & High performance cluster log & 433,489\\
Proxifier & Proxifier software log & 21,329\\
\bottomrule
\end{tabular}
\end{table}

\subsubsection{Experimental Setup}\label{sec:setup} We conduct all experiments on a Linux server with an Intel Xeon 2.40 GHz CPU. We implement LogSummary with Python 3.6 and make it open-source\footnote{https://github.com/LogSummary/ICSE2021}.

\subsection{Evaluation on LogIE}\label{sec:experiment-LogIE} We intrinsically evaluate the LogIE framework in order to choose its best implementation and compare it to the baselines. We evaluate the main OpenIE methods from the literature as baselines and incorporate them as the OpenIE component of LogIE. For this task, we build and open-source a dataset of log information extraction triples based on public logs. We also use this gold standard dataset to evaluate an improved version of LogIE, which uses manually improved templates instead of an online template extraction approach. This serves as an ablation test showing the influence of the template quality on LogIE's performance.

\subsubsection{Triples Gold Standard Dataset}\label{sec:gt-logie}
\begin{table}[t] \centering \renewcommand\tabcolsep{8 pt} \caption{Resulting number of manually improved templates and manually extracted OpenIE triples from the source datasets for building the OpenIE Gold Standard dataset.} \label{table:triples-gt-stats}
\begin{tabular}{ccc}
\toprule
Source & Templates & Triples \\
\midrule
BGL & 263 & 831 \\
HDFS & 50 & 87 \\
HPC & 85 & 199 \\
Proxifier & 14 & 36 \\
Switch & 95 & 323 \\
\midrule
Total & 507 & 1476 \\
\bottomrule
\end{tabular}
\end{table}
We employ this dataset as the gold standard for building and evaluating the different approaches within LogIE. The process of building it can be divided into three parts: obtaining the log data from different services; extracting and improving templates used to assist the triple extraction; and manual annotation. We sourced the logs from different types of systems and services. Four of them, described in Section~\ref{sec:datasets}, are open-source, and one comes from real-world switch logs. As part of the process of building the gold propositions dataset, we extracted templates from all source logs using LogParse \cite{logparse}. Then, we manually extracted OpenIE triples from the logs. For each log, we manually extracted relational triples of the form ($arg_1$, $r$, $arg_2$), meaning that $arg_1$ is related to $arg_2$ by predicate $r$, aiming to keep the form (subject, predicate, object). For this purpose, we considered both domain knowledge and the semantic structure, following several guidelines:
\begin{itemize}
\item We extract (subject, predicate, object) triples in this order.
\item Where applicable, we make prepositions part of the predicate.
\item At least the predicate and either the subject or the object must be present to extract a triple.
\item We make linking verbs the predicate of a triple where applicable.
\item Conjunctions, such as ``and'' or ``or'', are split into several triples or combined into a single one where the conjunction is part of the predicate.
\item Adjuncts, regardless of their kind, are disregarded and simply treated as separators of two different clauses.
\item For appositions, we define an ``is'' relation, which serves as the ``is-a'' relation usually used in ontology building.
\item Lastly, we leverage domain knowledge to extract the values of arguments or attributes as well as the instances of different entities. These usually show up as an ``='' or a ``:'' in the logs; in such cases, we also use an ``is'' relation to represent the relation between the two. Additionally, arguments may be represented in the format in which command-line arguments are written; here, we likewise use an ``is'' relation and create a second argument ``set'' for flags.
\end{itemize}

\subsubsection{Task Formulation}\label{sec:task-logie} As explained in Section \ref{sec:background}, OpenIE intends to obtain all relations present in a given sentence or corpus together with the arguments or entities related by such relations in a structured manner. Likewise, the goal of LogIE is to extract the relations present within each log, which serve as the minimum units of information of each log. Specifically, given a stream of raw logs as the input, relational triples of the form (arg$_1$, relation, arg$_2$) are to be extracted for each relation present within each log. This task is tested against the gold standard we propose in Section \ref{sec:gt-logie} and evaluated as detailed in Section \ref{sec:metric-logie}.

\subsubsection{Metrics and Baselines}\label{sec:metric-logie} The main challenge in intrinsically evaluating LogIE, as is common in NLP, is that different OpenIE triple extractions need to be considered acceptable for the same gold proposition. For this reason, we follow a similar approach to that of Stanovsky et al. \cite{Stanovsky2016oieBenchmark} in their OpenIE benchmark. Inspired by He et al. \cite{HeLuheng2015QuestionAnswerDS}, where the syntactic heads of the predicate and the arguments of a given extraction should match those of the corresponding gold proposition, Stanovsky et al. define a more lenient approach that considers their token-level overlap instead. Therefore, we use an approach similar to theirs\footnote{https://github.com/gabrielStanovsky/oie-benchmark} to calculate the precision and recall of the evaluation. Among the main differences, we do not propagate a match to all matching predicates, but thoroughly test all triples against all gold propositions instead. We do not produce a confidence score for each triple, so we do not calculate AUC scores. The main metric we consider for comparison between the approaches is the F1 score. The metrics are calculated as follows: $precision = \frac{\#\ correct\ extractions}{\#\ extractions}$, $recall = \frac{\#\ recalled\ gold\ propositions}{\#\ gold\ propositions}$, $F1 = \frac{2\ \times\ precision\ \times\ recall}{precision\ +\ recall}$. We compare LogIE with six Open Information Extraction methods, namely, ClausIE \cite{Corro2013ClausIECO}, Ollie \cite{Mausam2012OpenLL}, OpenIE5\footnote{https://github.com/dair-iitd/OpenIE-standalone}, PredPatt \cite{Stanovsky2016GettingMO}, PropS \cite{Stanovsky2016GettingMO}, and Stanford OpenIE \cite{Angeli2015LeveragingLS}.
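As a brief, hypothetical illustration of these metrics: if a method produces $10$ extractions of which $8$ match gold propositions under the token-level overlap criterion, and $6$ of the $9$ gold propositions are recalled, then $precision = 8/10 = 0.8$, $recall = 6/9 \approx 0.667$, and $F1 = \frac{2 \times 0.8 \times 0.667}{0.8 + 0.667} \approx 0.727$.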
\subsubsection{Experimental Results}\label{sec:eval-public-logie} We evaluate LogIE and compare it against its manually augmented version and the six OpenIE baselines in Table \ref{table:logie} on the four public log datasets described in Table \ref{tab:datasets}. LogIE learns online templates generated by LogParse \cite{logparse}. However, in order to perform an ablation test, we also evaluate LogIE Improved, an augmented version using the manually improved templates produced as part of the gold standard dataset introduced in Section \ref{sec:gt-logie}. Each of the baselines, in contrast, is a plain OpenIE approach used directly on the raw logs. LogIE consistently produces better results across all public logs when compared to the baselines. This is because the pipeline approach of LogIE is optimized to make the most of both the structure and the free text present in logs. On the other hand, plain OpenIE methods are actually meant to be used directly on free natural language text. As shown in Table \ref{table:logie}, even though both versions of LogIE are consistently superior to the baselines, there are cases, such as BGL or HPC, where the results are comparable. The more free text is present in the logs, the easier it is for plain OpenIE methods to generate correct OpenIE triples. However, as shown in Table \ref{table:logie-speed}, applying plain OpenIE methods to the logs is inefficient compared to LogIE, which leverages the templates used as input and the high speed of trie-based template matching to produce the OpenIE triples from the raw logs. The throughput of LogIE is over 200X that of applying plain OpenIE. Nonetheless, the performance of LogIE is sensitive to the accuracy of the templates used as input, as shown in the ablation test comparison. As demonstrated by the performance of LogIE Improved, the more accurate the templates are, the better LogIE performs. Further, LogIE leverages either the structure of the log or the semantic information of the unstructured text within logs to extract information. If there is no rich information in the structure, or if details within the log are omitted to make it shorter, its performance is also affected. This is the case for the HDFS logs, where the structured parts do not provide rich information and the natural language only implicitly refers to the arguments, which is not picked up by LogIE. This results in a low recall, as seen in Table \ref{table:logie}. In turn, this affects the output of LogSummary, given the pipeline nature of the framework.
\begin{table}[t] \centering \caption{Test accuracy on the log OpenIE triples gold reference dataset of public logs.} \label{table:logie}
\begin{tabular}{ccccc}
\toprule
Logs & Method & Precision & Recall & F1 \\
\midrule
\multirow{7}{*}{BGL} & \textbf{LogIE} & \textbf{0.918} & \textbf{0.864} & \textbf{0.890} \\
 & OpenIE5 & 0.788 & 0.733 & 0.760 \\
 & Stanford & 0.685 & 0.753 & 0.717 \\
 & Ollie & 0.552 & 0.633 & 0.590 \\
 & PredPatt & 0.463 & 0.638 & 0.536 \\
 & ClausIE & 0.447 & 0.602 & 0.513 \\
 & PropS & 0.000 & 0.000 & 0.000 \\
\midrule
\multirow{7}{*}{HDFS} & \textbf{LogIE} & \textbf{0.980} & \textbf{0.459} & \textbf{0.626} \\
 & ClausIE & 0.159 & 0.530 & 0.244 \\
 & OpenIE5 & 0.271 & 0.220 & 0.243 \\
 & Stanford & 0.184 & 0.210 & 0.196 \\
 & PredPatt & 0.171 & 0.177 & 0.174 \\
 & Ollie & 0.003 & 0.079 & 0.006 \\
 & PropS & 0.000 & 0.000 & 0.000 \\
\midrule
\multirow{7}{*}{HPC} & \textbf{LogIE} & \textbf{0.859} & \textbf{0.667} & \textbf{0.751} \\
 & ClausIE & 0.588 & 0.648 & 0.616 \\
 & PredPatt & 0.591 & 0.556 & 0.573 \\
 & Stanford & 0.691 & 0.349 & 0.464 \\
 & Ollie & 0.290 & 0.285 & 0.287 \\
 & OpenIE5 & 0.567 & 0.123 & 0.202 \\
 & PropS & 0.000 & 0.000 & 0.000 \\
\midrule
\multirow{7}{*}{Proxifier} & \textbf{LogIE} & \textbf{0.869} & \textbf{0.812} & \textbf{0.839} \\
 & Stanford & 0.831 & 0.254 & 0.389 \\
 & ClausIE & 0.247 & 0.719 & 0.368 \\
 & OpenIE5 & 0.759 & 0.204 & 0.322 \\
 & Ollie & 0.556 & 0.194 & 0.288 \\
 & PredPatt & 0.061 & 0.106 & 0.078 \\
 & PropS & 0.000 & 0.000 & 0.000 \\
\bottomrule
\end{tabular}
\end{table}

\begin{table}[t] \centering \renewcommand\tabcolsep{18 pt} \caption{Comparison of speed, measured in logs per second, between LogIE and the plain OpenIE methods when processing the input logs, measured over thirty runs for each OpenIE method and each log dataset.} \label{table:logie-speed}
\begin{tabular}{ccc}
\toprule
\textbf{Approach} & \multicolumn{2}{l}{\textbf{Throughput (logs / s)}} \\
\cmidrule{2-3}
{} & mean & std \\
\midrule
LogIE & 8550.66 & 1909.62 \\
\midrule
OpenIE Methods & 39.05 & 36.19 \\
\bottomrule
\end{tabular}
\end{table}

\subsection{Evaluation on Ranking Summaries}\label{sec:experiment-summary} \subsubsection{Metrics and Baselines}\label{sec:metric-summary} To automatically evaluate the log summarization performance of different approaches, we use ROUGE~\cite{rouge}. The ROUGE metric measures the summary quality by counting the overlapping units between the generated summary and reference summaries. In our scenario, different operators may manually generate summaries in a different order of words/phrases. Therefore, we apply ROUGE-1 to evaluate performance. Following the common practice~\cite{rouge2}, we report the precision, recall, and F1 score for ROUGE-1, where $precision = \frac{\#\ overlapping\ words}{\#\ words\ in\ automatic\ summary}$, $recall = \frac{\#\ overlapping\ words}{\#\ words\ in\ gold\ reference}$, and $F1\ score = \frac{2 \times precision \times recall}{precision + recall}$. We obtain the metrics using an open-source package\footnote{https://github.com/pltrdy/rouge}. We apply the compression ratio, \textit{i.e.,} $\frac{size~of~summaries}{size~of~original~logs}$, to evaluate the log compression performance. We compare LogSummary with three extractive summarization methods, namely, TF-IDF~\cite{tf-idf}, LDA~\cite{ton19}, and TextRank (sentence summary)~\cite{textrank}.

\subsubsection{Experimental Results}\label{sec:public-summary} We compare LogSummary with the three baselines on the four public datasets. For LogSummary, we choose the top-5 semantic triples from the online logs.
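To make the ROUGE-1 computation concrete before turning to the results, consider a small hypothetical example: for the gold reference summary ``link bandwidth resumed'' and the automatic summary ``link bandwidth is resumed'', three words overlap, so $precision = 3/4 = 0.75$, $recall = 3/3 = 1.0$, and the F1 score is $\frac{2 \times 0.75 \times 1.0}{0.75 + 1.0} \approx 0.857$.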
Table~\ref{table:summary} shows the comparison results of LogSummary{} and the three baselines. Overall, LogSummary{} achieves the best summarization accuracy among the four methods. Both TF-IDF and LDA, however, have low F1 scores ($< 0.5$) on all four datasets, because TF-IDF and LDA generate summaries by extracting keywords, which discards valuable information in the raw logs. Although TextRank achieves relatively high precision (\textit{e.g.,} 0.904 on the HPC dataset), the high precision comes at the cost of low recall. For instance, on the Proxifier dataset, the recall of TextRank is only 0.050. Because there are many similar logs with different variables, when employed on its own, TextRank may choose many logs of the same type and ignore other types of logs. In contrast, LogSummary uses LogIE to extract triples from logs as an intermediate representation, which is more fine-grained than a complete log, before applying TextRank, achieving a $\approx$4.6 times higher recall. In Table~\ref{table:summary}, we also evaluate the compression ratio for log summarization on the four datasets. We find that LogSummary achieves an average compression ratio of 3.1\%, which vastly reduces the reading and understanding load of operators. The results mean that the outputs of LogSummary are not only readable but also highly compressed.

\begin{table} \centering \caption{Log summarization performance and compression ratio (CR) of LogSummary compared to its baselines.} \label{table:summary}
\begin{tabular}{cccccc}
\toprule
Logs & Method & Precision & Recall & F1 & CR \\
\midrule
\multirow{4}{*}{BGL} & \textbf{LogSummary} & \textbf{0.815} & \textbf{0.703} & \textbf{0.725} & \textbf{0.026} \\
 & LDA & 0.382 & 0.076 & 0.119 & 0.130 \\
 & TextRank & 0.893 & 0.238 & 0.347 & 0.144 \\
 & TF-IDF & 0.383 & 0.354 & 0.332 & 0.024 \\
\midrule
\multirow{4}{*}{HDFS} & \textbf{LogSummary} & \textbf{0.759} & \textbf{0.432} & \textbf{0.538} & \textbf{0.015} \\
 & LDA & 0.220 & 0.045 & 0.074 & 0.225 \\
 & TextRank & 0.602 & 0.079 & 0.135 & 0.230 \\
 & TF-IDF & 0.193 & 0.176 & 0.179 & 0.033 \\
\midrule
\multirow{4}{*}{HPC} & \textbf{LogSummary} & \textbf{0.819} & \textbf{0.911} & \textbf{0.840} & \textbf{0.037} \\
 & LDA & 0.530 & 0.110 & 0.175 & 0.251 \\
 & TextRank & 0.904 & 0.265 & 0.365 & 0.208 \\
 & TF-IDF & 0.487 & 0.506 & 0.472 & 0.039 \\
\midrule
\multirow{4}{*}{Proxifier} & \textbf{LogSummary} & \textbf{0.879} & \textbf{0.857} & \textbf{0.864} & \textbf{0.045} \\
 & LDA & 0.332 & 0.088 & 0.135 & 0.099 \\
 & TextRank & 0.663 & 0.050 & 0.093 & 0.275 \\
 & TF-IDF & 0.281 & 0.324 & 0.290 & 0.023 \\
\bottomrule
\end{tabular}
\end{table}

\begin{figure} \begin{minipage}[h]{1.0\linewidth} \centering \includegraphics[width = 8.5 cm]{figures/case-study-logs.pdf}\\ \end{minipage} \vspace{-1 mm} \caption{A case study of LogSummary on switch logs.}\label{fig:case-study-logs} \end{figure}

\subsection{Threats to Validity} The LogSummary framework leverages each of its components to automatically produce accurate summaries. However, its pipeline nature makes each component dependent on the quality of the output of the previous ones. In some cases, we had to improve the templates manually, since template extraction is not perfectly precise. These imprecisions meant that redundant templates were extracted from the logs. Additionally, in some cases, the variables may not have been detected properly.
Therefore, the quality of the templates may affect the triples extracted by LogIE, which in turn affects the representations built by Log2Vec \cite{log2vec}, which are used by TextRank \cite{textrank} to produce the ranked summaries. Nonetheless, each of the components provides significant benefits over its baselines. LogIE produces triples at over 200 times the throughput of plain OpenIE methods, and these triples serve as the intermediate result that LogSummary leverages to achieve $\approx$4.6 times the recall of TextRank \cite{textrank}, the best-performing baseline. LogSummary can serve further downstream purposes, which we consider for future work. The triples of LogSummary could aid the creation of knowledge graphs used for automatic root cause analysis. Additionally, they could serve as an intermediate representation for other log analysis tasks. LogSummary could also produce summaries of anomalous logs for troubleshooting. Further, we consider five log datasets from online system services as part of the evaluation. Our approaches outperform their baselines on all of them, which shows the generalizability of LogSummary. However, it may encounter challenges when dealing with complex application-layer logs, \textit{e.g.,} when operators must create complex rules for the Rule Extraction component.

\subsection{LogIE}\label{sec:logie} In order to accurately and efficiently extract valuable information from logs, we propose LogIE (Log Information Extraction). LogIE performs open information extraction on logs, extracting triples that relate entities and arguments through a relation or predicate. To achieve this, it combines both Rule Extraction (RE) and OpenIE to extract the triples. In order to make the process fast and efficient, LogIE adopts templates to improve and speed up the information extraction of logs. Note that templates are extracted automatically by existing approaches \cite{zhu2019tools,logparse}. LogIE learns triples from the log templates, so template matching can be used to produce the LogIE triples output. LogIE is a framework composed of four main components, which we describe below, explaining how they work using the simple example in Fig.~\ref{fig:template}.

\begin{figure} $\underbrace{\text{Link bandwidth lost totally is resumed.}}_{\text{Free Text}}\ \underbrace{\text{( Reason = }\overbrace{\text{*}}^{\text{VAR1}} \ )}_{\text{Structured Text}}$ \caption{Log template example.} \label{fig:template} \end{figure}

\subsubsection{Matching $\And$ Processing} Matching and Processing are the overarching components of LogIE, supported by the RE and the OpenIE components that perform the triple extraction. As shown in Fig.~\ref{fig:logie}, LogIE takes both raw logs and templates as its input. LogIE performs template matching on the input raw logs. Using Fig.~\ref{fig:template} as an example, if a log is matched with this template, LogIE retrieves the previously extracted triples for this given log type and substitutes the variables present in the triples with their actual values obtained from the raw log. These variables are usually identifiers, values, or service addresses \cite{pi2019semantic}. Therefore, once a log is received and a template is matched by the Matching component, the log is directly processed by the Processing component, which outputs its LogIE triples. This way, LogIE is able to effectively and efficiently yield OpenIE triples in an online manner.
Since the goal of LogIE is to extract structured information from logs, we treat all these cases equally by substituting them with a dummy token that is considered an entity or part of an argument, \textit{e.g.,} ``VARX'', where X is the ordinal of the variable within the template. In the case that the log is not matched to any template, a new template needs to be extracted~\cite{drain,logparse}. Since LogIE is meant to be run online, it requires a template extraction and matching method that can be incrementally updated online. For this reason, we incorporate LogParse into the Matching component. The new template is then split into subparts, as shown in Fig.~\ref{fig:template}, based on rules predefined according to the source service's logs. These subparts are then handled by the RE and OpenIE components to extract a new set of triples. The RE component handles the structured parts, while the OpenIE component handles the free-text parts. The output triples are then stored and passed to the Processing component to produce the final LogIE triples output.

\begin{algorithm}[t] \caption{Log Summarization} \label{alg:summarization} \begin{algorithmic}[1]
\REQUIRE A semantic information triple set $ST$, a domain knowledge triple set $DT$, the number of triple candidates $k$, and a word embedding set $WE$
\ENSURE Ranked summaries $S$
\STATE Create a triple vector set $TV$
\FOR{each triple $st$ of $ST$}
\STATE Create a temporary empty triple vector $tv$
\STATE Let an integer variable $len$ record the number of words in $st$
\FOR{each word $w$ of $st$}
\STATE Find the corresponding word vector $wv$ for $w$ in $WE$
\STATE Add the current word vector $wv$ to the temporary triple vector $tv$
\ENDFOR
\STATE Get the average vector $av$ by dividing $tv$ by $len$ and regard $av$ as the vector of triple $st$
\STATE Append the triple vector $av$ to $TV$
\ENDFOR
\STATE Initialize a transition probability matrix $M$ by calculating the cosine similarity between all vector pairs in $TV$
\STATE Convert $M$ to a weighted graph $G=(V,E)$
\STATE Get the triple scores $TS$ by applying Formula~\ref{equation} to $G$
\STATE Sort the triples in decreasing order of their scores in $TS$, and take the top $k$ triples as the final summary $S$
\RETURN $S$
\end{algorithmic} \end{algorithm}

\subsubsection{RE for Rule Triples}\label{sec:rule-triples} The purpose of the RE component is to make the most out of the structure present in logs, namely the structured text part from the example in Fig.~\ref{fig:template}. According to our observations, systems print logs following certain rules. Therefore, it becomes easy to define rules that precisely extract part of the information present. For example, our implementation uses three different rules, all covering different ways of representing entity-value pairs. In these cases, entity-value pairs are usually separated by an equals (``='') or a colon (``:'') symbol. Another common case is formatting such information in the same way command-line arguments are specified in a command-line interface program. The outputs of the RE component of LogIE can be used to provide further details of the log stream in a readable, structured manner, or to store structured information (entity-value pairs) for further data mining. Besides, the RE component processes unstructured logs by first extracting triples from the non-free-text parts of the logs before the OpenIE component processes their remaining free-text parts; a brief worked example follows below.
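As an illustration of these rules on the running example: the structured part of the template in Fig.~\ref{fig:template}, ``( Reason = * )'', contains an entity-value pair separated by ``='', so the RE component extracts the rule triple (Reason, is, VAR1); once a concrete log matches the template, VAR1 is substituted with the actual reason recorded in that log.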
\subsubsection{OpenIE for Semantic Triples}\label{sec:semantic-triples} Operators pay attention to ``entities'', ``events'' and the ``relation'' between them when they read logs which making these the most important pieces of information to be considered for log summarization. OpenIE \cite{Banko2008OpenIE} is usually used to extract relational triples, which is exactly what the operators need, since they are both structured in a human readable way~\cite{ApplicationsMausam2016OpenIE}, and a reduced version of the original logs After rule triples were extracted using templates, the remainder free text is passed on to OpenIE component. There has been substantial progress on OpenIE approaches since it was proposed by Banko et al. \cite{Banko2008OpenIE}. These methods take free text as input and yield OpenIE triples as the output, formed by two arguments related by a predicate e.g. (``Link bandwidth", ``is", ``resumed"). OpenIE methods achieve their objective by leveraging the underlying semantic structure of the sentences for a given language enabling them to find the arguments present and the predicates that relate them. We therefore leverage existing OpenIE approaches in our implementation to fulfill the OpenIE component requirement of the LogIE framework. Since LogIE is a framework, none of its components, including OpenIE, are tied to any implementation in particular. Besides, many short logs do not have whole three element of triples, \textit{e.g.},~, do not contain entity, OpenIE can also generate ``triples'' with less than three elements. The output of the OpenIE component are the semantic triples, LogSummary will rank them and generate the summary later. As you will see in our evaluation in Section \ref{sec:evaluation}, we incorporate the main OpenIE methods from the literature into our work as both baselines for LogIE and as part of the LogIE framework for evaluation. \subsection{Ranking Summaries}\label{sec:summarization} As aforementioned, logs, which record the status of services in real-time, usually suffer from redundancy and repetition. Traditionally, operators need to read raw logs and extract valuable information manually. However, it‘s labor-intensive and time-consuming. The goal of the LogSummary is to help operators to read/understand logs faster. After the LogIE stage, we obtain triples, the minimum units of semantic information within logs. In this section, we introduce a mechanism to rank the triples based on their informativeness. Operators generally hope to find out the importance of each triple by measuring the connection between each triple and other triples. For a set of logs, most algorithms ignore the semantics and other elements of its words, and simply treat a triple as a collection of words. And each word appears independently and does not depend on each other. However, in log analysis domain, different word combinations have different meanings. Operators usually use knowledge drawn from entire logs to make local ranking decisions. Therefore, we integrate the information of the global corpus into the sorting algorithm in the form of word vectors, and through the combination of the sorting algorithm and word vectors, iteratively score sentences and sort them according to the score. \subsubsection{Triple Representation}\label{sec:triple-representation} Firstly, we propose a method to represent triples with domain-specific semantic. 
Log2Vec~\cite{log2vec} enables generalization to domain-specific words, which is achieved by integrating the embedding of lexical and relation features into a low-dimensional Euclidean space. By training a model over the existing vocabulary, Log2Vec~\cite{log2vec} can later use that model to predict the embedding of any words, even previously unseen words at runtime. Therefore, we apply the technique in Log2Vec to represent triples generated from logs. Leveraging its previous components, we converts any word in the triples into a word embedding vector and generates the triple's vector, which is the weighted average of its word vectors. \subsubsection{Ranking Triples}\label{sec:triple-rank} In this section, we propose a method to rank log summaries. Its workflow is shown in Algorithm~\ref{alg:summarization}. Firstly, we build a graph associated with the logs, where the graph vertices are representative for the units to be ranked. For the application of triple extraction, the goal is to rank entire semantic triples, and therefore a vertex is added to the graph for each triple in logs. Same as sentence extraction, we define a relation that determines a connection between two triples if there is a ``similarity'' relation between them, where the ``similarity'' is measured as the cosine similarity~\cite{gravano2003text} of two triples. Note that, other similarity measures (\textit{e.g.,} Euclidean distance~\cite{danielsson1980euclidean}) are also possible. Such a relation between two triples can be seen as a process of ``recommendation'': given a triple that addresses certain concepts in logs, it is ``recommended'' to refer to other triples in the logs that address similar concepts. Therefore a similarity link is drawn between any two triples. \begin{figure} \begin{minipage}[h]{1.0\linewidth} \centering \includegraphics[width = 8 cm]{figures/triple_networks.pdf}\\ \end{minipage} \caption{Weighted graph of case study on real-world switch logs.}\label{fig:case-graph} \end{figure} Unlike the unweighted graphs in PageRank~\cite{page1999pagerank}, we need to build weighted graphs. The resulting graph is highly connected, with a weight associated with each edge, indicating the strength of connections established between various triple pairs in logs. The logs are therefore represented as a weighted graph (Fig.~\ref{fig:case-graph} shows a weighted graph for the case study in Section \ref{sec:experiment-summary}). Formally, let $G=(V, E)$ be a directed graph with the set of vertices $V$ and set of edges $E$, where $E$ is a subset of $V * V$. For a given vertex $V_i$, let $In(V_i)$ be the set of vertices that point to it and let $Out(V_i)$ be the set of vertices that vertex $V_i$ points to. Then, we adopt the formula in TextRank~\cite{textrank}, which is for graph-based ranking that takes into account edge weights when computing the score associated with a vertex in the graph. Textrank's formula is defined to integrate vertex weights. \begin{equation} \label{equation} WS(V_i) = (1-d) + d*\sum_{V_j\in In_{V_i}}\frac{w_{ji}}{\sum_{V_j\in In_{V_i}}w_{jk}} WS(V_j) \end{equation} where $d$ is a damping factor that can be set between 0 and 1. After the triple-based TextRank~\cite{textrank} is run on the graph, semantic triples are sorted in reverse order of their score Note that, although the summaries of LogSummary are highly compressed, it have different goal with other log compression applications (\textit{e.g.,} LogZip\cite{logzip}). 
Other log compression applications aim to store logs while our summary are more readable for operators. \subsection{Design Challenges}\label{sec:detail-challenge} Log data is an important data source recording system states and significant events at runtime. It is thus intuitive for operators to observe system status and inspect potential anomalous events using logs. A log is usually printed by logging statements (\textit{e.g.},~ printf(), logger.info()) in the source code, which are predefined by developers. Typically, the predefined part of a log is human readable. Therefore, solving log summarization problems using NLP tools seems promising. However, directly applying existing NLP approaches for log summarization faces several challenges as follows. \noindent\textbf{Domain-specific symbols and grammar.} Logs contain many domain-specific symbols, and their grammer may significantly differ from normal sentences. Existing NLP tools, which are typically designed for normal sentences, cannot get accurate summaries for them. For example, entity-value pairs are valuable and structured information that should be extracted from unstructured logs. However, existing NLP tools cannot extract them directly, because they may be separated by an equal ``='' or a colon ``:'' symbol. Besides, some entity-value pairs are hidden in word combination. For instance, when NLP tools process the first log in Fig. \ref{fig:logs}, it may treat ``Interface ae3'' as a whole, while ``ae3'' is a value for the entity of ``Interface''. \noindent\textbf{High summarization efficiency requirement.} After a failure is detected or predicted, operator hope to quickly obtain the summary of a collection of logs in some period (\textit{e.g.},~ one hour before a detected failure) to figure out what happens on the online service. However, the online service can generate a large number of logs in this period. For example, one program execution in the HDFS system generates 288,775 logs per hour~\cite{deeplog}. On the other hand, existing NLP methods typically get the summarized triples one sentence (log) by one sentence (log), and their efficiency cannot satisfy the requirement of operators (see Table~\ref{table:logie-speed} for more details). \noindent\textbf{Obtaining the summarized triples of important logs.} Typically, logs are generated in the order of program executing, and they contain redundancy and repetition. When operators inspect a collection of logs triggered by a failure detection or prediction, they want to obtain the triples of those \emph{important} logs first. However, existing NLP approaches usually generate summaries according to the order of sentences (logs) in the original text (log sequence). In Fig. \ref{fig:case-study-logs}, for example, an state-of-the-art NLP method generates summaries by compressing original logs, instead of generating the triples of the expected important logs. Consequently, these approaches cannot satisfy the expectation of operators. \noindent\subsection{Overview of LogSummary}\label{sec:overview} In this section, we design LogSummary (as shown in Fig.~\ref{fig:logsummary}) to summarize logs in online services and help operators to read/understand logs faster. LogSummary have two parts, offline training and online summarization. During offline training, LogSummary applies unsupervised template extraction methods \cite{zhu2019tools} to get templates from historical logs. 
\subsection{Online Representation}\label{sec:measuerments-oov} \textbf{Word Embedding for Logs.} Logs are designed to facilitate user readability. Consequently, the constant parts of logs are defined in a human-readable manner by developers. Many methods (\textit{e.g.,} word2vec~\cite{mikolov2013exploiting}) thus use natural language processing (NLP) techniques to represent words. However, these methods cannot represent domain-specific words accurately. For example, ``down'' and ``up'' in Fig.~\ref{fig:logs} are antonyms, but they have similar contexts. Besides, system upgrades usually generate new types of logs with unseen words~\cite{log2vec} (\textit{e.g.,} ``Vlan-interface'' in Fig.~\ref{fig:logs}), which pose a challenge for generating distributed representations of words in logs. For this reason, we adopt our previous work, Log2Vec~\cite{log2vec}, to represent the words of logs. Log2Vec combines a log-specific word embedding method, which accurately extracts the semantic information of logs, with an out-of-vocabulary (OOV) word processor that embeds unseen words into distributed representations at runtime. \textbf{Information Extraction.} Information extraction retrieves relational triples from unstructured text. This is usually done in the form of triples for binary relations, relating two arguments by a predicate or relation for each relation in a given sentence, \textit{i.e.,} $(argument_{1}, relation_{1,2}, argument_{2})$ \cite{Banko2008OpenIE}. Traditional information extraction approaches rely on predefining a limited set of target relations and hand-crafted patterns; they adapt named entity recognizers and dependency parsers to target a specific domain, and hence require manual effort to be repurposed for a different domain. In order to address the challenge of scalability in web information extraction, Banko \textit{et al}. \cite{Banko2008OpenIE} introduced Open Information Extraction (OpenIE), a new information extraction paradigm that allows relations to be extracted without defining their number or type in advance.
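To make this triple format concrete, the following minimal Python sketch shows the binary-relation structure assumed throughout this paper; the class name and example values are purely illustrative, not part of our implementation:
\begin{verbatim}
from typing import NamedTuple, Optional

class Triple(NamedTuple):
    """A binary-relation triple: (argument_1, relation, argument_2).
    Short logs may yield "triples" with a missing element."""
    arg1: Optional[str]
    rel: str
    arg2: Optional[str]

# For the free text "Link bandwidth lost totally is resumed." an OpenIE
# extraction could be:
semantic = Triple("Link bandwidth", "is", "resumed")
# and the structured part "( Reason = * )" yields an entity-value pair:
rule_based = Triple("Reason", "is", "VAR1")
print(semantic, rule_based)
\end{verbatim}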
\subsection{Design Challenges}\label{sec:detail-challenge} Log data is an important data source recording system states and significant events at runtime. It is thus intuitive for operators to observe system status and inspect potential anomalous events using logs. A log is usually printed by logging statements (\textit{e.g.},~printf(), logger.info()) in the source code, which are predefined by developers. Typically, the predefined part of a log is human readable. Therefore, solving log summarization problems using NLP tools seems promising. However, directly applying existing NLP approaches to log summarization faces the following challenges. \noindent\textbf{Domain-specific symbols and grammar.} Logs contain many domain-specific symbols, and their grammar may differ significantly from that of normal sentences. Existing NLP tools, which are typically designed for normal sentences, cannot produce accurate summaries for them. For example, entity-value pairs are valuable, structured information that should be extracted from unstructured logs. However, existing NLP tools cannot extract them directly, because the entity and the value may be separated by an equals ``='' or a colon ``:'' symbol. Besides, some entity-value pairs are hidden in word combinations. For instance, when NLP tools process the first log in Fig.~\ref{fig:logs}, they may treat ``Interface ae3'' as a whole, while ``ae3'' is a value for the entity ``Interface''. \noindent\textbf{High summarization efficiency requirement.} After a failure is detected or predicted, operators hope to quickly obtain the summary of a collection of logs over some period (\textit{e.g.},~one hour before a detected failure) to figure out what happened in the online service. However, the online service can generate a large number of logs in this period. For example, one program execution in the HDFS system generates 288,775 logs per hour~\cite{deeplog}. On the other hand, existing NLP methods typically produce summarized triples one sentence (log) at a time, and their efficiency cannot satisfy the requirement of operators (see Table~\ref{table:logie-speed} for more details). \noindent\textbf{Obtaining the summarized triples of important logs.} Typically, logs are generated in the order of program execution, and they contain redundancy and repetition. When operators inspect a collection of logs triggered by a failure detection or prediction, they want to obtain the triples of the \emph{important} logs first. However, existing NLP approaches usually generate summaries according to the order of sentences (logs) in the original text (log sequence). In Fig.~\ref{fig:case-study-logs}, for example, a state-of-the-art NLP method generates summaries by compressing the original logs, instead of generating the triples of the expected important logs. Consequently, these approaches cannot satisfy the expectations of operators. \subsection{Overview of LogSummary}\label{sec:overview} In this section, we present LogSummary (as shown in Fig.~\ref{fig:logsummary}), which summarizes logs in online services and helps operators read and understand logs faster. LogSummary has two parts, offline training and online summarization. During offline training, LogSummary applies unsupervised template extraction methods \cite{zhu2019tools} to obtain templates from historical logs. These templates are used by LogIE in the online stage for matching and processing logs. Since it is impractical to rank arbitrary summaries directly, LogSummary applies Log2Vec \cite{log2vec} to learn the semantics of logs and train word embeddings.
Log2Vec not only learns domain-specific semantics from offline logs, but also generates embeddings for unseen (OOV) words at runtime, which are common in newly generated logs in online systems. Then, LogSummary applies the trained embeddings to rank summaries online. In the online stage, we design LogIE (described in Section \ref{sec:logie}), a mechanism to generate triples for given logs. The inputs of LogIE are real-time logs and templates. LogSummary updates new templates automatically by using LogParse \cite{logparse}. LogIE outputs the triples of logs, and it solves the challenge that logs contain domain-specific text. Besides, LogIE caches the mapping between triples and templates, which speeds up log processing and addresses the processing speed challenge imposed by the huge volume of logs. After LogIE, LogSummary ranks triples by adopting TextRank, which meets operators' requirement of reading important summaries first rather than following the program execution order. \subsection{LogIE}\label{sec:logie} In order to accurately and efficiently extract valuable information from logs, we propose LogIE (Log Information Extraction). LogIE performs open information extraction on logs, extracting triples that relate entities and arguments through a relation or predicate. To achieve this, it combines both Rule Extraction (RE) and OpenIE to extract the triples. To make the process fast and efficient, LogIE adopts templates to improve and speed up the information extraction of logs. Note that templates are extracted automatically by existing approaches \cite{zhu2019tools,logparse}. LogIE learns triples from the log templates, so template matching can be used to produce the LogIE triples output. LogIE is a framework composed of four main components, whose operation we explain using the simple example in Fig. \ref{fig:template}. \begin{figure} $\underbrace{\text{Link bandwidth lost totally is resumed.}}_{\text{Free Text}}\ \underbrace{\text{( Reason = }\overbrace{\text{*}}^{\text{VAR1}} \ )}_{\text{Structured Text}}$ \caption{Log template example.} \label{fig:template} \end{figure} \subsubsection{Matching $\And$ Processing} Matching and Processing are the overarching components of LogIE, supported by the RE and OpenIE components that perform the triple extraction. As shown in Fig. \ref{fig:logie}, LogIE takes both raw logs and templates as its input and performs template matching on the input raw logs. Using Fig. \ref{fig:template} as an example, if a log is matched with this template, LogIE retrieves the previously extracted triples for this log type and substitutes the variables present in the triples with their actual values obtained from the raw log. These variables are usually identifiers, values, or service addresses \cite{pi2019semantic}. Therefore, once a log is received, if a template is matched by the Matching component, the log is directly processed by the Processing component, which outputs its LogIE triples. This way, LogIE is able to effectively and efficiently yield OpenIE triples in an online manner. Since the goal of LogIE is to obtain structured information from logs, we treat all these variables equally by substituting them with a dummy token that is considered an entity or part of an argument, \textit{e.g.,} ``VARX'', where X is the ordinal of the variable within the template. In the case that the log is not matched to any template, a new template needs to be extracted \cite{drain,logparse}. Since LogIE is meant to run online, it requires a template extraction and matching method that can be incrementally updated online. For this reason, we incorporate LogParse into the Matching component. The new template is then split into subparts as shown in Fig. \ref{fig:template}, based on rules predefined according to the source service's logs. These subparts are then handled by the RE and OpenIE components to extract a new set of triples: the RE component handles the structured parts, while the OpenIE component handles the free-text parts. The output triples are then stored and passed to the Processing component to produce the final LogIE triples output.
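As an illustration of Matching $\And$ Processing, the minimal Python sketch below compiles a template into a matching pattern and substitutes the VARX dummy tokens of the cached triples with the values captured from a raw log; the function names and the caching layout are illustrative, not our exact implementation:
\begin{verbatim}
import re

# Compile a template into a pattern: each wildcard "*" becomes a capture group.
def compile_template(template):
    literals = map(re.escape, template.split("*"))
    return re.compile("^" + "(.+?)".join(literals) + "$")

# Triples previously learned from the template; "VAR1" is the dummy token.
template = "Link bandwidth lost totally is resumed. ( Reason = * )"
cached_triples = [("Link bandwidth", "is", "resumed"), ("Reason", "is", "VAR1")]

def process(log, pattern, triples):
    """Matching: try the template; Processing: fill VARX with captured values."""
    match = pattern.match(log)
    if match is None:
        return None  # unmatched: a new template must be extracted (e.g., LogParse)
    values = {"VAR%d" % (i + 1): v for i, v in enumerate(match.groups())}
    return [tuple(values.get(e, e) for e in t) for t in triples]

pattern = compile_template(template)
print(process("Link bandwidth lost totally is resumed. ( Reason = timeout )",
              pattern, cached_triples))
# -> [('Link bandwidth', 'is', 'resumed'), ('Reason', 'is', 'timeout')]
\end{verbatim}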
\begin{algorithm}[t] \caption{Log Summarization} \label{alg:summarization} \begin{algorithmic}[1] \REQUIRE A semantic information triple set $ST$, a domain knowledge triple set $DT$, the number of triple candidates $k$, and a word embedding set $WE$ \ENSURE Ranked summaries $S$ \STATE Create an empty triple vector set $TV$ \FOR{each triple $st$ of $ST$} \STATE Create a temporary zero triple vector $tv$ \STATE Let an integer variable $len$ record the number of words in $st$ \FOR{each word $w$ of $st$} \STATE Find the corresponding word vector $wv$ for $w$ in $WE$ \STATE Add the word vector $wv$ to the temporary triple vector $tv$ \ENDFOR \STATE Obtain the average vector $av$ by dividing $tv$ by $len$, and regard $av$ as the vector of triple $st$ \STATE Append the triple vector $av$ to $TV$ \ENDFOR \STATE Initialize a transition probability matrix $M$ by calculating the cosine similarity between all vector pairs in $TV$ \STATE Convert $M$ to a weighted graph $G=(V,E)$ \STATE Obtain the triple scores $TS$ by applying Formula~\ref{equation} to $G$ \STATE Sort the triples in descending order of their scores in $TS$, and take the top $k$ triples as the final summaries $S$ \RETURN $S$ \end{algorithmic} \end{algorithm} \subsubsection{RE for Rule Triples}\label{sec:rule-triples} The purpose of the RE component is to make the most of the structure present in logs, namely the structured text part of the example in Fig. \ref{fig:template}. According to our observation, systems print logs following certain rules, so it is easy to define rules that precisely extract part of the information present. For example, in our implementation we use three different rules, all of which cover different ways of representing entity-value pairs. In these cases, entity-value pairs are usually separated by an equals ``='' or a colon ``:'' symbol. Another common case is formatting such information the way command-line arguments are specified in a command-line interface program; a sketch of such rules is given after this paragraph. The outputs of the RE component of LogIE could be used to present further details of the log stream in a readable, structured manner, or to store structured information (entity-value pairs) for further data mining. Besides, the RE component processes unstructured logs by first extracting triples from the non-free-text parts of the logs before the OpenIE component processes their remaining free-text parts.
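The minimal sketch below illustrates such rules in Python; the two regular expressions stand in for the three rules of our implementation and are illustrative rather than exhaustive:
\begin{verbatim}
import re

# Rules 1/2: entity-value pairs separated by "=" or ":".
KV_RULE = re.compile(r"([A-Za-z_][\w .-]*?)\s*[=:]\s*([^,;()]+)")
# Rule 3: command-line style flags, e.g. "--count 3" or a bare "--verbose".
CLI_RULE = re.compile(r"(?<!\w)--(\w[\w-]*)(?:[= ](\S+))?")

def rule_triples(structured_text):
    triples = [(e.strip(), "is", v.strip())
               for e, v in KV_RULE.findall(structured_text)]
    triples += [(flag, "is", value or "set")  # "set" marks a bare flag
                for flag, value in CLI_RULE.findall(structured_text)]
    return triples

print(rule_triples("( Reason = VAR1 )"))  # [('Reason', 'is', 'VAR1')]
print(rule_triples("retry --count 3"))    # [('count', 'is', '3')]
\end{verbatim}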
\subsubsection{OpenIE for Semantic Triples}\label{sec:semantic-triples} Operators pay attention to ``entities'', ``events'', and the ``relations'' between them when they read logs, which makes these the most important pieces of information to be considered for log summarization. OpenIE \cite{Banko2008OpenIE} is commonly used to extract relational triples, which is exactly what operators need, since such triples are both structured in a human-readable way~\cite{ApplicationsMausam2016OpenIE} and a reduced version of the original logs. After rule triples are extracted using templates, the remaining free text is passed on to the OpenIE component. There has been substantial progress on OpenIE approaches since the paradigm was proposed by Banko et al. \cite{Banko2008OpenIE}. These methods take free text as input and yield OpenIE triples as output, formed by two arguments related by a predicate, \textit{e.g.,} (``Link bandwidth'', ``is'', ``resumed''). OpenIE methods achieve this by leveraging the underlying semantic structure of the sentences of a given language, which enables them to find the arguments present and the predicates that relate them. We therefore leverage existing OpenIE approaches in our implementation to fulfill the OpenIE component requirement of the LogIE framework. Since LogIE is a framework, none of its components, including OpenIE, is tied to any particular implementation. Besides, many short logs do not contain all three elements of a triple (\textit{e.g.,} they may lack an entity); in such cases, OpenIE can also generate ``triples'' with fewer than three elements. The outputs of the OpenIE component are the semantic triples, which LogSummary later ranks to generate the summary. As shown in our evaluation in Section \ref{sec:evaluation}, we incorporate the main OpenIE methods from the literature into our work, both as baselines for LogIE and as implementations of the OpenIE component of the LogIE framework. \subsection{Ranking Summaries}\label{sec:summarization} As aforementioned, logs, which record the status of services in real time, usually suffer from redundancy and repetition. Traditionally, operators need to read raw logs and extract valuable information manually, which is labor-intensive and time-consuming. The goal of LogSummary is to help operators read and understand logs faster. After the LogIE stage, we obtain triples, the minimum units of semantic information within logs. In this section, we introduce a mechanism to rank the triples based on their informativeness. Operators generally hope to determine the importance of each triple by measuring its connection to the other triples. For a set of logs, most algorithms ignore the semantics of the words and simply treat a triple as a collection of words in which each word appears independently of the others. However, in the log analysis domain, different word combinations have different meanings, and operators usually use knowledge drawn from the entire log corpus to make local ranking decisions. Therefore, we integrate the information of the global corpus into the ranking algorithm in the form of word vectors and, through the combination of the ranking algorithm and word vectors, iteratively score the triples and sort them by score. \subsubsection{Triple Representation}\label{sec:triple-representation} Firstly, we propose a method to represent triples with domain-specific semantics. Log2Vec~\cite{log2vec} enables generalization to domain-specific words by integrating the embedding of lexical and relational features into a low-dimensional Euclidean space. By training a model over the existing vocabulary, Log2Vec~\cite{log2vec} can later use that model to predict the embedding of any word, even words unseen at runtime. Therefore, we apply the technique in Log2Vec to represent the triples generated from logs. Leveraging its previous components, we convert every word in a triple into a word embedding vector and generate the triple's vector as the average of its word vectors.
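The following minimal sketch illustrates this averaging step; the toy \texttt{embed} function is a stand-in for the trained Log2Vec model (which, unlike this toy, learns log-specific semantics) and is not our actual embedding:
\begin{verbatim}
import numpy as np

# Toy stand-in for the trained Log2Vec model: deterministic random vectors,
# so that even previously unseen words receive an embedding at "runtime".
rng = np.random.default_rng(0)
vocab = {}
def embed(word, dim=32):
    if word not in vocab:
        vocab[word] = rng.normal(size=dim)
    return vocab[word]

def triple_vector(triple, dim=32):
    """A triple's vector is the average of the vectors of its words."""
    words = [w for element in triple for w in element.split()]
    if not words:
        return np.zeros(dim)
    return np.mean([embed(w, dim) for w in words], axis=0)

print(triple_vector(("Link bandwidth", "is", "resumed")).shape)  # (32,)
\end{verbatim}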
\subsubsection{Ranking Triples}\label{sec:triple-rank} In this section, we propose a method to rank log summaries, whose workflow is shown in Algorithm~\ref{alg:summarization}. Firstly, we build a graph associated with the logs, where the graph vertices represent the units to be ranked. Since the goal is to rank entire semantic triples, a vertex is added to the graph for each triple extracted from the logs. As in sentence extraction, we define a relation that connects two triples if there is a ``similarity'' relation between them, where the ``similarity'' is measured as the cosine similarity~\cite{gravano2003text} of the two triple vectors. Note that other similarity measures (\textit{e.g.,} Euclidean distance~\cite{danielsson1980euclidean}) are also possible. Such a relation between two triples can be seen as a process of ``recommendation'': given a triple that addresses certain concepts in the logs, it ``recommends'' other triples in the logs that address similar concepts. Therefore, a similarity link is drawn between any two triples. \begin{figure} \begin{minipage}[h]{1.0\linewidth} \centering \includegraphics[width = 8 cm]{figures/triple_networks.pdf}\\ \end{minipage} \caption{Weighted graph of the case study on real-world switch logs.}\label{fig:case-graph} \end{figure} Unlike the unweighted graphs in PageRank~\cite{page1999pagerank}, we need to build weighted graphs. The resulting graph is highly connected, with a weight associated with each edge indicating the strength of the connection between the corresponding pair of triples. The logs are therefore represented as a weighted graph (Fig.~\ref{fig:case-graph} shows the weighted graph for the case study in Section \ref{sec:experiment-summary}). Formally, let $G=(V, E)$ be a directed graph with the set of vertices $V$ and the set of edges $E$, where $E$ is a subset of $V \times V$. For a given vertex $V_i$, let $In(V_i)$ be the set of vertices that point to it, and let $Out(V_i)$ be the set of vertices that $V_i$ points to. We then adopt the graph-based ranking formula of TextRank~\cite{textrank}, which integrates edge weights when computing the score associated with a vertex: \begin{equation} \label{equation} WS(V_i) = (1-d) + d\sum_{V_j\in In(V_i)}\frac{w_{ji}}{\sum_{V_k\in Out(V_j)}w_{jk}} WS(V_j), \end{equation} where $d$ is a damping factor that can be set between 0 and 1. After the triple-based TextRank~\cite{textrank} is run on the graph, the semantic triples are sorted in descending order of their scores. Note that, although the summaries of LogSummary are highly compressed, they have a different goal from log compression applications (\textit{e.g.,} LogZip \cite{logzip}): log compression applications aim at storing logs efficiently, whereas our summaries aim at being more readable for operators.
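The following minimal Python sketch renders the ranking step of Algorithm~\ref{alg:summarization} with dense matrices; it is a simplified illustration of Formula~\ref{equation}, not our production code:
\begin{verbatim}
import numpy as np

def rank_triples(vectors, d=0.85, iterations=50):
    """Weighted TextRank over triple vectors, following Formula (1)."""
    V = np.asarray(vectors, dtype=float)
    U = V / np.linalg.norm(V, axis=1, keepdims=True)
    W = np.clip(U @ U.T, 0.0, None)      # cosine similarities as edge weights
    np.fill_diagonal(W, 0.0)             # no self-loops
    out_strength = W.sum(axis=1, keepdims=True) + 1e-12  # sum_k w_jk, Out(V_j)
    scores = np.ones(len(V))
    for _ in range(iterations):
        # WS(V_i) = (1 - d) + d * sum_j (w_ji / out_strength_j) * WS(V_j)
        scores = (1 - d) + d * (W / out_strength).T @ scores
    return np.argsort(-scores)           # triple indices, highest score first

vectors = np.random.default_rng(1).normal(size=(6, 32))
print(rank_triples(vectors)[:5])         # the top-5 triples form the summary
\end{verbatim}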
\subsection{Experimental Setting}\label{sec:setting} \subsubsection{Datasets}\label{sec:datasets} We conduct experiments over four public log datasets from several services: BGL logs~\cite{Xu2009Detecting}, HDFS logs~\cite{deeplog}, HPC logs~\cite{lin2016log}, and Proxifier logs~\cite{logzip}. Since the lack of a publicly available summary gold standard hinders automatic evaluation, we manually labelled the above public log datasets and make them available\footnote{https://github.com/LogSummary/ICSE2021/tree/main/data}. In this paper, we provide two kinds of gold standard datasets. For information extraction, we labelled the templates of all logs (\cite{zhu2019tools} only labels templates for 2,000 logs per dataset) and labelled OpenIE triples leveraging semantic information and domain knowledge. To evaluate log summarization, we chose 100 groups of 20 contiguous logs per dataset and generated their summaries manually. The details of the above datasets are listed in Table~\ref{tab:datasets}. \begin{table}[!ht] \centering \caption{Details of the service log datasets.}\label{tab:datasets} \renewcommand\tabcolsep{2.75 pt} \begin{tabular}{ccc} \toprule Datasets&Description&\# of logs\\ \midrule BGL&Blue Gene/L supercomputer log& 4,747,963\\ HDFS&Hadoop distributed file system log& 11,175,629 \\ HPC&High performance cluster log& 433,489 \\ Proxifier&Proxifier software log& 21,329 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Experimental Setup}\label{sec:setup} We conduct all experiments on a Linux server with an Intel Xeon 2.40 GHz CPU. We implement LogSummary in Python 3.6 and make it open-source\footnote{https://github.com/LogSummary/ICSE2021}. \subsection{Evaluation on LogIE}\label{sec:experiment-LogIE} We intrinsically evaluate the LogIE framework in order to choose its best implementation and to compare it to the baselines. We evaluate the main OpenIE methods from the literature as baselines and also incorporate them as the OpenIE component of LogIE. For this task, we build and open-source a dataset of log information extraction triples based on public logs. We also use this gold standard dataset to evaluate an improved version of LogIE that uses manually improved templates instead of an online template extraction approach. This serves as an ablation test showing the influence of template quality on LogIE's performance. \subsubsection{Triples Gold Standard Dataset}\label{sec:gt-logie} \begin{table}[t] \centering \renewcommand\tabcolsep{8 pt} \caption{Resulting number of manually improved templates and manually extracted OpenIE triples from the source datasets for building the OpenIE gold standard dataset.} \label{table:triples-gt-stats} \begin{tabular}{ccc} \toprule Source & Templates & Triples \\ \midrule BGL & 263 & 831 \\ HDFS & 50 & 87 \\ HPC & 85 & 199 \\ Proxifier & 14 & 36 \\ Switch & 95 & 323 \\ \midrule Total & 507 & 1476 \\ \bottomrule \end{tabular} \end{table} We employ this dataset as the gold standard for building and evaluating the different approaches within LogIE. The process of building it can be divided into three parts: obtaining the log data from different services; extracting and improving the templates used to assist the triple extraction; and manual annotation. We sourced the logs from different types of systems and services: four of them, described in Section \ref{sec:datasets}, are open-source, and one comes from real-world switch logs. As part of the process of building the gold propositions dataset, we extracted templates from all source logs using LogParse \cite{logparse}. Then, we manually extracted OpenIE triples from the logs: for each log, we extracted relational triples of the form ($arg_1$, $r$, $arg_2$), meaning that $arg_1$ is related to $arg_2$ by predicate $r$.
We aimed to keep the form (subject, predicate, object), in this order, and for this purpose we considered both domain knowledge and semantic structure, following several conventions. We extract (subject, predicate, object) triples in this order. Where applicable, we make prepositions part of the predicate. At least the predicate and either the subject or the object must be present for a triple to be extracted. We make linking verbs the predicate of a triple where applicable. Conjunctions, such as ``and'' or ``or'', are split into several triples or combined into a single one in which the conjunction is part of the predicate. For adjuncts, regardless of their kind, we decided to disregard them and simply treat them as separators of two different clauses. For appositions, we defined an ``is'' relation, serving as the ``is-a'' relation commonly used in ontology building. Lastly, we leveraged domain knowledge to extract the values of arguments or attributes as well as the instances of different entities. Usually these show up with an ``='' or a ``:'' in the logs; in these cases, we also use an ``is'' relation to represent the relation between the two. Additionally, arguments are sometimes represented in the format in which command-line arguments are written; in these cases, we also use an ``is'' relation and create a second argument ``set'' for flags. \subsubsection{Task Formulation}\label{sec:task-logie} As explained in Section \ref{sec:background}, OpenIE intends to obtain all relations present in a given sentence or corpus, together with the arguments or entities related by such relations, in a structured manner. Likewise, the goal of LogIE is to extract the relations present within each log, which are used as the minimum units of information of each log. Specifically, given a stream of raw logs as input, relational triples of the form (arg$_1$, relation, arg$_2$) are to be extracted for each relation present within each log. This task is tested against the gold standard we propose in Section \ref{sec:gt-logie} and evaluated as detailed in Section \ref{sec:metric-logie}. \subsubsection{Metrics and Baselines}\label{sec:metric-logie} The main challenge in intrinsically evaluating LogIE, as is common in NLP, is that we need to allow different OpenIE triple extractions to be considered acceptable for the same gold proposition. For this reason, we follow an approach similar to that of Stanovsky et al. \cite{Stanovsky2016oieBenchmark} in their OpenIE benchmark. Inspired by He et al. \cite{HeLuheng2015QuestionAnswerDS}, where the syntactic heads of the predicate and the arguments of a given extraction should match those of the corresponding gold proposition, they define a more lenient approach that considers their token-level overlap instead. Therefore, we use an approach similar to theirs\footnote{https://github.com/gabrielStanovsky/oie-benchmark} to calculate the precision and recall of the evaluation. Among the main differences, we do not propagate a match to all matching predicates, but thoroughly test all triples against all gold propositions instead. We do not produce a confidence score for each triple, so we do not calculate AUC scores. The main metric we consider for comparison between the approaches is the F1 score. The metrics are calculated as follows: $precision = \frac{\#\ correct\ extractions}{\#\ extractions}$, $recall = \frac{\#\ recalled\ gold\ propositions}{\#\ gold\ propositions}$, $F1 = \frac{2\ \times\ precision\ \times\ recall}{precision\ +\ recall}$.
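For illustration, the following minimal sketch shows the token-level overlap matching and the resulting metrics; the 0.5 overlap threshold is illustrative, not the exact criterion of the benchmark:
\begin{verbatim}
def tokens(triple):
    return set(" ".join(part for part in triple if part).lower().split())

def lenient_match(extraction, gold, threshold=0.5):
    """Token-level overlap test in the spirit of the OpenIE benchmark."""
    overlap = tokens(extraction) & tokens(gold)
    return len(overlap) >= threshold * len(tokens(gold))

def evaluate(extractions, gold_propositions):
    correct = sum(any(lenient_match(e, g) for g in gold_propositions)
                  for e in extractions)
    recalled = sum(any(lenient_match(e, g) for e in extractions)
                   for g in gold_propositions)
    precision = correct / len(extractions)
    recall = recalled / len(gold_propositions)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("Link bandwidth", "is", "resumed")]
pred = [("Link bandwidth", "is", "resumed"), ("Reason", "is", "timeout")]
print(evaluate(pred, gold))  # (0.5, 1.0, 0.666...)
\end{verbatim}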
We compare LogIE with six Open Information Extraction methods, namely ClausIE \cite{Corro2013ClausIECO}, Ollie \cite{Mausam2012OpenLL}, OpenIE5\footnote{https://github.com/dair-iitd/OpenIE-standalone}, PredPatt \cite{Stanovsky2016GettingMO}, PropS \cite{Stanovsky2016GettingMO}, and Stanford OpenIE \cite{Angeli2015LeveragingLS}. \subsubsection{Experimental Results}\label{sec:eval-public-logie} We evaluate LogIE and compare it against its manually augmented version and the six OpenIE baselines in Table \ref{table:logie} on the four public logs described in Table \ref{tab:datasets}. LogIE uses templates generated online by LogParse \cite{logparse}. However, in order to perform an ablation test, we also compare LogIE Improved, an augmented version using the manually improved templates produced as part of the gold standard dataset introduced in Section \ref{sec:gt-logie}. Each of the baselines is a plain OpenIE approach applied directly to the raw logs. LogIE consistently produces better results across all public logs when compared to the baselines. This is because the pipeline approach of LogIE is optimized to make the most of both the structure and the free text present in logs, whereas plain OpenIE methods are meant to be used directly on free natural language text. As shown in Table \ref{table:logie}, even though both versions of LogIE are consistently superior to the baselines, there are cases where the results are comparable, such as on BGL or HPC: the more free text is present in the logs, the easier it is for plain OpenIE methods to generate correct OpenIE triples. However, as we show in Table \ref{table:logie-speed}, applying plain OpenIE methods to the logs is inefficient compared to LogIE, which leverages the templates given as input and the high speed of trie-based template matching to produce the OpenIE triples output from the raw logs. The throughput of LogIE is over 200 times that of applying plain OpenIE. Nonetheless, the performance of LogIE is sensitive to the accuracy of the templates used as input, as shown in the ablation test: as demonstrated by the performance of LogIE Improved, the more accurate the templates, the better the performance of LogIE. Further, LogIE leverages either the structure of the log or the semantic information of the unstructured text within logs to extract information. If there is no rich information in the structure, or if details within the log are omitted to make it shorter, its performance is also affected. This is the case for the HDFS logs, where the structured parts do not provide rich information and the natural language implicitly refers to the arguments, which is not picked up by LogIE; this lowers its recall, as seen in Table \ref{table:logie}. In turn, this affects the output of LogSummary given the pipeline nature of the framework.
\begin{table}[t] \centering \caption{Test accuracy on the log OpenIE triples gold reference dataset of public logs.} \label{table:logie} \begin{tabular}{ccccc} \toprule Logs & Method & Precision & Recall & F1 \\ \midrule \multirow{7}*{BGL} & \textbf{LogIE} & \textbf{0.918} & \textbf{0.864} & \textbf{0.890} \\ & OpenIE5 & 0.788 & 0.733 & 0.760 \\ & Stanford & 0.685 & 0.753 & 0.717 \\ & Ollie & 0.552 & 0.633 & 0.590 \\ & PredPatt & 0.463 & 0.638 & 0.536 \\ & ClausIE & 0.447 & 0.602 & 0.513 \\ & PropS & 0.000 & 0.000 & 0.000 \\ \midrule \multirow{7}*{HDFS} & \textbf{LogIE} & \textbf{0.980} & \textbf{0.459} & \textbf{0.626} \\ & ClausIE & 0.159 & 0.530 & 0.244 \\ & OpenIE5 & 0.271 & 0.220 & 0.243 \\ & Stanford & 0.184 & 0.210 & 0.196 \\ & PredPatt & 0.171 & 0.177 & 0.174 \\ & Ollie & 0.003 & 0.079 & 0.006 \\ & PropS & 0.000 & 0.000 & 0.000 \\ \midrule \multirow{7}*{HPC} & \textbf{LogIE} & \textbf{0.859} & \textbf{0.667} & \textbf{0.751} \\ & ClausIE & 0.588 & 0.648 & 0.616 \\ & PredPatt & 0.591 & 0.556 & 0.573 \\ & Stanford & 0.691 & 0.349 & 0.464 \\ & Ollie & 0.290 & 0.285 & 0.287 \\ & OpenIE5 & 0.567 & 0.123 & 0.202 \\ & PropS & 0.000 & 0.000 & 0.000 \\ \midrule \multirow{7}*{Proxifier} & \textbf{LogIE} & \textbf{0.869} & \textbf{0.812} & \textbf{0.839} \\ & Stanford & 0.831 & 0.254 & 0.389 \\ & ClausIE & 0.247 & 0.719 & 0.368 \\ & OpenIE5 & 0.759 & 0.204 & 0.322 \\ & Ollie & 0.556 & 0.194 & 0.288 \\ & PredPatt & 0.061 & 0.106 & 0.078 \\ & PropS & 0.000 & 0.000 & 0.000 \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \centering \renewcommand\tabcolsep{18 pt} \caption{Comparison of throughput, measured in logs per second, between LogIE and the plain OpenIE methods when processing the input logs, measured over thirty runs for each OpenIE method and each log dataset.} \label{table:logie-speed} \begin{tabular}{ccc} \toprule \textbf{Approach} & \multicolumn{2}{l}{\textbf{Throughput (logs / s)}} \\ \cmidrule{2-3} {} & mean & std \\ \midrule LogIE & 8550.66 & 1909.62 \\ \midrule OpenIE Methods & 39.05 & 36.19 \\ \bottomrule \end{tabular} \end{table} \subsection{Evaluation on Ranking Summaries}\label{sec:experiment-summary} \subsubsection{Metrics and Baselines}\label{sec:metric-summary} To automatically evaluate the log summarization performance of the different approaches, we use ROUGE~\cite{rouge}. The ROUGE metric measures summary quality by counting the overlapping units between the generated summary and the reference summaries. In our scenario, different operators may manually generate summaries with different orderings of words/phrases; therefore, we apply ROUGE-1 to evaluate performance. Following common practice~\cite{rouge2}, we report the precision, recall, and F1 score of ROUGE-1, where $precision = \frac{\#\ overlapping\ words}{\#\ words\ in\ automatic\ summary}$, $recall = \frac{\#\ overlapping\ words}{\#\ words\ in\ gold\ reference}$, and $F1\ score = \frac{2 \times precision \times recall}{precision + recall}$. We obtain the metrics using an open-source package\footnote{https://github.com/pltrdy/rouge}. We apply the compression ratio, \textit{i.e.,} $\frac{size~of~summaries}{size~of~original~logs}$, to evaluate the log compression performance. We compare LogSummary with three extractive summarization methods, namely TF-IDF~\cite{tf-idf}, LDA~\cite{ton19}, and TextRank (sentence summary)~\cite{textrank}.
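For reference, a simplified rendering of ROUGE-1 as defined above is sketched below; the released implementation relies on the cited open-source package instead:
\begin{verbatim}
from collections import Counter

def rouge_1(automatic_summary, gold_reference):
    """Unigram-overlap ROUGE-1 precision/recall/F1 (simplified)."""
    auto = Counter(automatic_summary.lower().split())
    gold = Counter(gold_reference.lower().split())
    overlap = sum((auto & gold).values())
    precision = overlap / sum(auto.values())
    recall = overlap / sum(gold.values())
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(rouge_1("link bandwidth is resumed reason timeout",
              "link bandwidth is resumed"))
# -> (0.666..., 1.0, 0.8)
\end{verbatim}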
\subsubsection{Experimental Results}\label{sec:public-summary} We compare LogSummary with the three baselines on the four public datasets. For LogSummary, we choose the top-5 semantic triples of the online logs. Table~\ref{table:summary} shows the comparison results of LogSummary{} and the three baselines. Overall, LogSummary{} achieves the best summarization accuracy among the four methods. Both TF-IDF and LDA have low F1 scores ($< 0.5$) on all four datasets, because they generate summaries by extracting keywords, which dismisses valuable information in the raw logs. Although TextRank achieves relatively high precision (\textit{e.g.,} 0.904 on the HPC dataset), this high precision comes at the cost of low recall. For instance, on the Proxifier dataset, the recall of TextRank is only 0.050: because there are many similar logs with different variables, TextRank, when employed on its own, may choose many logs of the same type and ignore other types of logs. On the contrary, LogSummary uses LogIE to extract triples from logs as an intermediate representation, which is more fine-grained than a complete log, before applying TextRank, thereby achieving $\approx$4.6 times higher recall. In Table~\ref{table:summary}, we also evaluate the compression ratio of log summarization on the four datasets. We find that LogSummary achieves an average compression ratio of 3.1\%, which vastly reduces the reading and understanding load of operators. These results show that the outputs of LogSummary are not only readable but also highly compressed. \begin{table} \centering \caption{Log summarization performance and Compression Ratio (CR) of LogSummary compared to its baselines.} \label{table:summary} \begin{tabular}{cccccc} \toprule Logs & Method& Precision & Recall & F1 & CR \\ \midrule \multirow{4}*{BGL}& \textbf{LogSummary} &\textbf{0.815} &\textbf{0.703} &\textbf{0.725} &\textbf{0.026} \\ & LDA & 0.382 & 0.076 & 0.119 & 0.130\\ & TextRank & 0.893 & 0.238 & 0.347 & 0.144\\ & TF-IDF & 0.383 & 0.354 & 0.332 & 0.024 \\ \midrule \multirow{4}*{HDFS}& \textbf{LogSummary} & \textbf{0.759} &\textbf{0.432}& \textbf{0.538} & \textbf{0.015}\\ & LDA & 0.220 & 0.045 & 0.074 & 0.225\\ & TextRank & 0.602 & 0.079 & 0.135 & 0.230\\ & TF-IDF & 0.193 & 0.176 & 0.179 & 0.033 \\ \midrule \multirow{4}*{HPC}& \textbf{LogSummary}&\textbf{0.819}&\textbf{0.911}&\textbf{0.840} &\textbf{0.037}\\ & LDA & 0.530 & 0.110 & 0.175 & 0.251 \\ & TextRank & 0.904 & 0.265 & 0.365 & 0.208\\ & TF-IDF & 0.487 & 0.506 & 0.472 & 0.039 \\ \midrule \multirow{4}*{Proxifier}& \textbf{LogSummary} &\textbf{0.879}&\textbf{0.857} &\textbf{0.864} &\textbf{0.045}\\ & LDA & 0.332 & 0.088 & 0.135 & 0.099\\ & TextRank & 0.663 & 0.050 & 0.093 & 0.275\\ & TF-IDF& 0.281 & 0.324 & 0.290 & 0.023\\ \bottomrule \end{tabular} \end{table} \begin{figure} \begin{minipage}[h]{1.0\linewidth} \centering \includegraphics[width = 8.5 cm]{figures/case-study-logs.pdf}\\ \end{minipage} \vspace{-1 mm} \caption{A case study of LogSummary on switch logs.}\label{fig:case-study-logs} \end{figure} \subsection{Threats to Validity} The LogSummary framework leverages each of its components to automatically produce accurate summaries. However, its pipeline nature makes each component dependent on the quality of the output of the previous ones. In some cases, we had to improve the templates manually, since template extraction is not perfectly precise; these imprecisions meant that redundant templates were extracted from the logs. Additionally, in some cases, the variables may not have been detected properly.
Therefore, the quality of the templates may affect the triples extracted by LogIE, which in turn affect the representations built by Log2Vec \cite{log2vec} and, ultimately, the ranked summaries produced by TextRank \cite{textrank}. Nonetheless, each of the components provides significant benefits over its baselines: LogIE produces triples at over 200 times the throughput of plain OpenIE methods, and these triples serve as the intermediate result that LogSummary leverages to achieve $\approx$4.6 times the recall of TextRank \cite{textrank}, the best-performing baseline. LogSummary can serve further downstream purposes, which we consider for future work. The triples of LogSummary could aid the creation of knowledge graphs applied to automatic root cause analysis. Additionally, they could serve as an intermediate representation for other log analysis tasks, and LogSummary could produce summaries of anomalous logs for troubleshooting. Further, our evaluation considers five log datasets from online system services. Our approaches outperform their baselines on all of them, which shows the generalizability of LogSummary. However, it may encounter challenges when dealing with complex application-layer logs, \textit{e.g.,} when operators have to create complex rules for the Rule Extraction component. \section{Introduction} \input{intro.tex} \label{sec:introduction} \section{Background}\label{sec:background} \input{background.tex} \section{Challenges and Overview}\label{sec:challenge} \input{challenge.tex} \section{Algorithm}\label{sec:design} \input{design.tex} \section{Experiments}\label{sec:evaluation} \input{evaluation} \section{Case Study}\label{sec:case-study} \input{case-study} \section{Related Work}\label{sec:related} \input{related_work} \section{Conclusion}\label{sec:conclusion} \input{conclusion} \bibliographystyle{unsrt}
{ "timestamp": "2020-12-17T02:18:25", "yymm": "2012", "arxiv_id": "2012.08938", "language": "en", "url": "https://arxiv.org/abs/2012.08938" }
\section{Introduction} \label{sec:intro} In 2004, Li \cite{Li/2004} proposed the holographic dark energy principle to probe the dark energy scenario, and the authors of Refs. \cite{Nojiri/2006,Li/2009,Gong/2005} have shown that this concept of holographic dark energy can be utilized in quantum gravity. In Wang and Wang \cite{Wang/2017}, a holographic dark energy model inspired by the Bekenstein-Hawking entropy has been investigated. We also note that, by using the concept of the horizon entropy of a black hole, also known as Tsallis entropy \cite{Tsallis/2013}, Tavayef et al. \cite{Tavayef/2018} have investigated the Tsallis holographic dark energy (THDE) model in general relativity. However, the Hubble horizon as an IR cutoff in modified theories of gravity is not a suitable candidate to explain the late-time acceleration of the Universe \cite{Xu/2009,Yadav/2020}. Some important applications of Tsallis entropy are given in Ref. \cite{Nunes/2016}.\\ Recently, Moradpour et al. \cite{Moradpour/2018} have proposed a new holographic dark energy model inspired by the concept of R\'enyi entropy \cite{Renyi/1961} in the framework of general relativity. This new model of dark energy is known as the R\'enyi holographic dark energy (RHDE) model. In the recent past, the Tsallis and R\'enyi entropies \cite{Masi/2005,Touchette/2002,Tsallis/2011,Renyi/1970,Tsallis/1988} have been used to study some gravitational phenomena in the framework of general relativity \cite{Komatsu/2017,Moradpour/2017,Moradpour/2018plb,Moradpour/2016,Abreu/2013,Abreu/2013a}. Also, we note that a general approach to THDE, and even its generalization, is given in Refs. \cite{Nojiri/2019,Nojiri/2020}.\\ In recent times, modified theories of gravity have been constantly used to address the dark energy problem and the issues associated with the standard $\Lambda$CDM model \cite{Padmanabhan/2008,Yoo/2012,Wang/2006}. Some alternative theories of gravity show good agreement with astrophysical observations \cite{Demianski/2006,Lubini/2011,Deng/2015}. The $f(R,T)$ theory of gravity \cite{Harko/2011} presents in its field equations extra contributions from both geometry, through a general dependence on $R$, and matter, through a general dependence on $T$, the trace of the energy-momentum tensor. Moreover, the $T$-dependence of the geometrical action in $f(R,T)$ gravity may be due to the existence of some imperfect fluids and may intrinsically include some quantum effects, such as particle production \cite{Harko/2014}. Some important applications of $f(R,T)$ theory of gravity in various physical contexts are given in Refs. \cite{Singh/2014,Houndjo/2014,Sharif/2013,Moraes/2016,Moraes/2015,Correa/2016,Yadav/2019,Yadav/2018,Yadav/2014,Prasad/2020}. \\ The article is organized as follows: the theoretical model and its mathematical formalism are given in Section \ref{sec:2}. In Section \ref{sec:3}, the violation/validation of the conservation law for RHDE models in $f(R,T)$ theory of gravity is discussed. Finally, in Section \ref{sec:4}, we summarize our findings. \section{Theoretical model and Basic equations} \label{sec:2} The modified Einstein-Hilbert action for the $f(R,T)$ theory of gravity reads \cite{Harko/2011} \begin{equation} \label{A-1} S = \int\sqrt{-g}\left[\frac{1}{16\pi G}f(R,T)+\mathcal{L}_{m}\right]dx^{4}, \end{equation} where $\mathcal{L}_{m}$ is the Lagrangian density of matter and $f(R,T)$ is an arbitrary function of $R$ and $T$.
The other symbols have their usual meanings.\\ The energy-momentum tensor $T_{ij}$ of matter is written as \begin{equation}\label{intro2} T_{i j}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\mathcal{L}_{m}\right)}{\delta g^{i j}}, \end{equation} and its trace is $T = g^{ij} T_{ij}$.\\ In Dixit et al. \cite{Dixit/2020}, the authors have considered the stress-energy tensor \begin{equation} \label{A-2} T_{ij} = -pg_{ij}+(\rho + p)u_{i}u_{j}. \end{equation} With the choice $\mathcal{L}_{m} = - p$, with $p$ being the pressure, and assuming units such that $G = 1$, the gravitational field equations for $f(R,T) = R + 2f(T)$ read \begin{equation} \label{A-3} 2R_{ij}-Rg_{ij} = 16\pi T_{ij}+4f^{\prime}(T)T_{ij}+2\left[2pf^{\prime}(T)+f(T)\right]g_{ij}, \end{equation} where $f^{\prime}(T) = \frac{df(T)}{dT}$.\\ For a dust-filled Universe, $p = 0$, Eq. (\ref{A-3}) reduces to \begin{equation} \label{A-4} 2R_{ij}-Rg_{ij} = 16\pi T_{ij}+4f^{\prime}(T)T_{ij}+2f(T)g_{ij}. \end{equation} Note that Eq. (\ref{A-4}) of this manuscript is exactly the same as Eq. (9) of Dixit et al. \cite{Dixit/2020}. Further, in Ref. \cite{Dixit/2020}, the authors have selected $f(T) = \xi T$ with $\xi$ a constant.\\ The spatially flat FRW Universe with a time-dependent scale factor $a(t)$ is represented by the following space-time: \begin{equation} \label{A-5} ds^{2} = dt^{2} - a^{2}(t)\left(dx^{2}+dy^{2}+dz^{2}\right). \end{equation} Therefore, the general field equations for $f(R,T) = R + 2\xi T$ and the metric (\ref{A-5}) are obtained as \begin{equation} \label{A-6} 3\frac{\dot{a}^{2}}{a^{2}} = (8\pi +3\xi)\rho_{T}, \end{equation} \begin{equation} \label{A-7} 2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}} = \xi \rho_{T}. \end{equation} We note that Eqs. (\ref{A-6}) and (\ref{A-7}) are exactly the same as Eqs. (11) and (12) of Dixit et al. \cite{Dixit/2020}. With the choice $\Lambda_{eff} \propto H^{2}$, where $H = \frac{\dot{a}}{a}$ is the Hubble function, the general solution of the field equations is obtained in Ref. \cite{Dixit/2020} as \begin{equation} \label{A-8} H(t) = \frac{2(8\pi +3\xi)}{3t(8\pi + 2\xi)} = \frac{2\beta}{3t}, \end{equation} where $\beta = \frac{8\pi+3\xi}{8\pi+2\xi}$. Eq. (\ref{A-8}) of this article and Eq. (14) of Dixit et al. \cite{Dixit/2020} are the same. It is worthwhile to note that the solution (\ref{A-8}) is not new; it has already been given in Harko et al. \cite{Harko/2011}.
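For completeness, one can verify Eq. (\ref{A-8}) directly (a short consistency check we add here). Substituting $\rho_{T} = \frac{3H^{2}}{8\pi+3\xi}$ from Eq. (\ref{A-6}) into Eq. (\ref{A-7}) and using $\frac{\ddot{a}}{a} = \dot{H}+H^{2}$ gives
\begin{equation*}
2\dot{H}+3H^{2} = \frac{3\xi H^{2}}{8\pi+3\xi} \quad \Longrightarrow \quad \dot{H} = -\frac{3}{2}\left(\frac{8\pi+2\xi}{8\pi+3\xi}\right)H^{2} = -\frac{3}{2\beta}H^{2},
\end{equation*}
whose integration (with the integration constant absorbed into the origin of time) indeed yields $H(t) = \frac{2\beta}{3t}$.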
\cite{Dixit/2020} have assumed $p = 0$, yet in the conservation equation they conveniently admit a pressure without any concrete physical reason behind it. Moreover, it is well known that the usual conservation equation does not hold in $f(R,T)$ theory of gravity. To make this clear, we first discuss the conservation equation in the framework of general relativity.\\ The Einstein field equations with a cosmological constant $(\Lambda)$ read \begin{equation} \label{E-2} R_{ij} - \frac{1}{2}g_{ij}R + \Lambda g_{ij} = 8\pi T_{ij}. \end{equation} The field equations for metric (\ref{A-5}) are written as \begin{equation} \label{E-3} 3\frac{\dot{a}^{2}}{a^{2}} = 8\pi\rho + \Lambda = \rho_{eff}, \end{equation} \begin{equation} \label{E-4} 2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}} = -8\pi p + \Lambda = -p_{eff}, \end{equation} where $\rho_{eff} = 8\pi \rho + \Lambda$ and $p_{eff} = 8\pi p - \Lambda$. Therefore, the effective equation of state parameter reads \begin{equation} \label{E-5} \omega_{eff} = \frac{p_{eff}}{\rho_{eff}} = \frac{8\pi p - \Lambda}{8\pi \rho + \Lambda}. \end{equation} It is worthwhile to note that in the absence of matter, $\rho = 0$ and $p = 0$, the effective equation of state parameter equals $\omega_{eff} = -1$ \cite{Yadav/2020prd}. This represents the standard $\Lambda$CDM model of the Universe.\\ Differentiating Eq. (\ref{E-3}) with respect to $t$ and combining the resulting equation with Eq. (\ref{E-4}), we obtain the energy conservation equation \begin{equation} \label{E-6} \dot{\rho}_{eff}+3\left(\rho_{eff}+p_{eff}\right)H = 0, \end{equation} which implies $d(\rho_{eff}V) = -p_{eff}dV$. Note that $V = a^{3}$ is the volume of the Universe, and the quantity $\rho_{eff}V$ represents its total energy. As the Universe expands, the amount of dark energy increases in proportion to the volume. In $f(R,T)$ theory of gravity, however, one gets a different picture, because in $f(R,T) = R + 2\xi T$ gravity $\nabla^{i}T_{ij} \neq 0$ \cite{Harko/2011}: \begin{equation} \label{E-7} \nabla^{i}T_{ij} = \frac{f_{T}(R,T)}{8\pi - f_{T}(R,T)}\left[(T_{ij}+\Theta_{ij})\nabla^{i}\ln f_{T}(R,T)+\nabla^{i}\Theta_{ij}\right], \end{equation} where $f_{T}(R,T) = \frac{\partial f(R,T)}{\partial T}$ and $\Theta_{ij} = g^{\alpha\beta}\frac{\delta T_{\alpha\beta}}{\delta g^{ij}}$. \\ In the case of $f(R,T) = R +2\xi T$, Eq. (\ref{E-7}) reduces to \begin{equation} \label{E-8} \nabla^{i}T_{ij} = \frac{2\xi}{8\pi - 2\xi}\left[(T_{ij}+\Theta_{ij})\nabla^{i}\ln f_{T}(R,T)+\nabla^{i}\Theta_{ij}\right], \end{equation} where, since $f_{T} = 2\xi$ is constant, the logarithmic term actually vanishes. It is worthwhile to note that for $\xi = 0$ we obtain $\nabla^{i}T_{ij} = 0$; however, in $f(R,T) = R + 2\xi T$ gravity one cannot choose $\xi = 0$, because this reduces the theory to the general relativity case. For $\xi \neq 0$, the conservation equation does not hold. Some careful studies of the violation of the conservation law in $f(R,T)$ theory are given in Refs. \cite{Barrientos/2014,Sahoo/2018}. In Ref. \cite{Dixit/2020}, the conservation equation (15) can neither be obtained from Eqs. (11) and (12) nor does it hold in $f(R,T)$ theory of gravity. The violation of the conservation law in Dixit et al. \cite{Dixit/2020} raises a serious question about the methods and approaches used for constructing RHDE models in $f(R,T)$ gravity. The authors of Ref. \cite{Dixit/2020} analyzed the cosmological features of RHDE models through a conservation equation which is not compatible with the basic field equations of the model. Also, we notice that, in Ref.
\cite{Varshney/2020}, a similar approach has been used for analyzing the k-essence and dilaton field models inspired by THDE in $f(R,T)$ gravity, which also needs some modification in order to establish THDE models in modified theories of gravity.\\ \begin{figure}[h!] \includegraphics[width=7cm,height=6cm,angle=0]{Fig01.png} \caption{The plot of $q$ as a function of $\beta$.} \label{fig2} \end{figure} \section{Discussion and final remarks}\label{sec:4} In this paper, we have shown that the gravitational field equations are not compatible with the energy conservation equation in the framework of $f(R,T) = R + 2\xi T$ gravity. Therefore, the method and approach given in Ref. \cite{Dixit/2020} represent a flawed way of analyzing RHDE models in $f(R,T)$ gravity. However, one can use this method to analyze some features of RHDE models in the general theory of relativity. Furthermore, we find that the dynamics of the deceleration parameter does not support the results displayed in Dixit et al. \cite{Dixit/2020} for the particular range of $\beta$.\\ The deceleration parameter is obtained as \begin{equation} \label{D-1} q = -\frac{\ddot{a}a}{\dot{a}^{2}} = -1 - \frac{\dot{H}}{H^{2}}. \end{equation} Eqs. (\ref{A-8}) and (\ref{D-1}) lead to \begin{equation} \label{D-2} q = -1+\frac{3}{2\beta}. \end{equation} In fact, acceleration ($q<0$) would require $\beta > \frac{3}{2}$, whereas the definition $\beta = \frac{8\pi+3\xi}{8\pi+2\xi} = 1 + \frac{\xi}{8\pi+2\xi}$ gives $\beta < \frac{3}{2}$ for every finite $\xi \geq 0$, so that $q > 0$ throughout. The plot of $q$ for numerical values of $\beta$ is shown in Fig. 1. We observe that for $0 < \beta \leq 1.5$ the Universe in the derived model expands in a decelerating mode, while in Dixit et al. \cite{Dixit/2020} it has been classified as a quintessence model of an accelerating Universe for the given range of $\beta$ (see the particular case II of Tables 1 and 2 of Ref. \cite{Dixit/2020}).\\ It is worthwhile to note that we neither rule out the possible existence of RHDE cosmological models in $f(R,T)$ theory of gravity nor question the viability of $f(R,T)$ gravity with a special type of dark energy density. However, the method and technique given in Dixit et al. \cite{Dixit/2020} are not suitable for describing the dynamics of RHDE models in $f(R,T)$ gravity. As a final comment, we note that, in spite of the good prospects of RHDE models in $f(R,T)$ theory of gravity for providing a theoretical foundation for numerical relativity and cosmology, the observational side is yet to be considered, and the theory still needs a fair trial. \\
{ "timestamp": "2021-06-03T02:13:11", "yymm": "2012", "arxiv_id": "2012.08971", "language": "en", "url": "https://arxiv.org/abs/2012.08971" }
\section{Introduction} \label{sec:intro} ColorShapeLinks is an artificial intelligence (AI) competition framework for the Simplexity board game \citep{simplexitybgg} with arbitrary dimensions. It is a similar game to Connect-4, with pieces defined not only by color, but also by shape. The ColorShapeLinks development framework offers Unity \citep{unity3d} and .NET console frontends. Agents are implemented in C\# and run unmodified in either frontend. Unity and C\# are widely used in the games industry \citep{toftedahl2019taxonomy} and for game development education \citep{dickson2015using,dickson2017an,comber2019engaging,fachada2020topdown,hmeljak2020developing}, making the competition especially accessible to this audience. The framework is open source, fully documented and developed following best practices in software engineering, allowing it to be studied and extended by educators, researchers and students alike \citep{lakhan2008open,jimenez2017four}. Furthermore, it contains all the tooling for setting up competitions. It can be used for internal competitions in AI courses, for example, or for running international AI competitions, as demonstrated in the 2020 edition of the IEEE Conference on Games (IEEE CoG). In this regard, ColorShapeLinks is not simply a software tool that uses AI technologies for educational purposes \citep{chen2020multi}, but a software toolkit to both teach and learn board game AI. This paper is organized as follows. In Section~\ref{sec:background}, the state of the art in board game AI---and how ColorShapeLinks fits in---is briefly discussed. The ColorShapeLinks board game and its original version, Simplexity, are characterized in Section~\ref{sec:csl}. In Section~\ref{sec:motivation}, the motivation for developing ColorShapeLinks, as well as the educational context in which the framework took form, are examined. The development framework, namely its architecture, existing frontends, agent implementation, included agents, and availability, are reviewed in Section~\ref{sec:devframework}. In Section~\ref{sec:deployments}, we describe how ColorShapeLinks was used to host two internal competitions in an AI course unit, as well as a fully-fledged international AI competition in the IEEE CoG 2020 conference. The implications and limitations of the framework and of the reported outcomes are discussed in Section~\ref{sec:limitations}. The paper closes with Section~\ref{sec:conclusions}, in which we present some conclusions. \section{Background} \label{sec:background} Computer programs for playing classical board games such as Chess, Draughts or Go, were the first known application of AI for games \citep{millington2019ai}. AI research in games has subsequently grown far beyond the domain of board games \citep{yannakakis2018artificial}. Nonetheless, these continue to be a focus of active research, not only in AI techniques for playing particular games---e.g., Go \citep{silver2016mastering}---but also for playing board games in general \citep{konen2019general,kowalski2019regular,piette2019ludii}, or even creating new ones \citep{stephenson2019ludii,kowalski2016evolving}. Furthermore, new board game AI challenges and competitions continue to be proposed \citep{justesen2019blood}. While board games are probably one of the easiest ways to introduce AI for games to students \citep{drake2011teaching,chesani2017game}, state-of-the-art board game AI research is gaining some distance from both industry and education in videogame development. 
Requirements such as general game playing capabilities for the AI or knowledge of general game specification languages \citep{stephenson2019ludii, kowalski2019regular} can potentially raise the entry level for newcomers and/or discourage prospective participants which could otherwise bring new ideas to academia---or at least get involved in its processes. ColorShapeLinks aims to reduce this gap by offering an approachable, open and flexible AI competition for the videogame development education audience. This is accomplished by making the development of an AI agent very simple (e.g., write a single method and test it in Unity), while allowing for considerable customization via a modular framework architecture. This same modularity enables games and full tournaments to run within and without Unity. While accessible for undergraduates and non-specialist game developers, ColorShapeLinks provides a challenging and intricate board game competition, addressable with vastly different techniques, from knowledge-based methods \citep{allis1988knowledge} to machine learning techniques \citep{silver2016mastering}, or anything in between. It should be noted that ColorShapeLinks could be implemented under a general game playing system such as RBG \citep{kowalski2019regular} or Ludii \citep{piette2019ludii}, and this would probably be an excellent exercise. However, using these frameworks would mean losing some of the educational advantages offered by ColorShapeLinks, namely accessibility, openness (e.g., Ludii was not open source at the time of writing) and keeping the focus on general game design and development education while providing students with a broad AI for games background. \section{ColorShapeLinks} \label{sec:csl} ColorShapeLinks is a version of the Simplexity board game with arbitrary and parameterizable dimensions. We describe Simplexity and ColorShapeLinks in Subsections~\ref{sec:csl:simplexity} and \ref{sec:csl:unbounded}, respectively, highlighting their computational characteristics in Subsection~\ref{sec:csl:characteristics}. \subsection{The Simplexity Board Game} \label{sec:csl:simplexity} The Simplexity board game is similar to Connect-4 in two regards: 1) the first player to connect four pieces of a given type wins; and, 2) the base board size is $6 \times 7$. The crucial difference is that, in Simplexity, pieces are defined not only by color, but also by shape---round or square. Player 1, \textit{white}, wins if it can connect either four round pieces or four white pieces. Likewise for player 2, \textit{red}, but with square or red pieces. Players begin with 21 pieces of their color, 11 of which square, and the remaining 10 round. The catch here is that shape has priority over color as a winning condition. Therefore, a player can lose in its turn when placing a piece of the opponent's winning shape. Table~\ref{tab:simplexity} summarizes these rules and \figurename~\ref{fig:victory} shows the possible winning conditions. 
\begin{figure}[tb] \centering \subfloat[White wins with white pieces.\label{fig:victory:whitewhite}]{ \includegraphics[width=0.5\textwidth]{whitewhite}} \subfloat[White wins with round pieces.\label{fig:victory:whiteround}]{ \includegraphics[width=0.5\textwidth]{whiteround}}\\ \subfloat[Red wins with red pieces.\label{fig:victory:redred}]{ \includegraphics[width=0.5\textwidth]{redred}} \subfloat[Red wins with square pieces.\label{fig:victory:redsquare}]{ \includegraphics[width=0.5\textwidth]{redsquare}} \caption{Possible victory conditions in ColorShapeLinks using standard Simplexity rules.} \label{fig:victory} \end{figure} \begin{table}[tb] \centering \caption{Simplexity rules.} \label{tab:simplexity} \begin{tabular}{lll} \toprule & \bfseries Player 1 & \bfseries Player 2\\ \midrule Plays first? & Yes & No\\ Plays with & White pieces & Red pieces\\ Begins with & $11\times$ white square pieces & $11\times$ red square pieces\\ & $10\times$ white round pieces & $10\times$ red round pieces\\ Wins with line of & $4\times$ round pieces & $4\times$ square pieces\\ (shape has priority) & $4\times$ white pieces & $4\times$ red pieces\\ \bottomrule \end{tabular} \end{table} \subsection{The ColorShapeLinks Board Game} \label{sec:csl:unbounded} ColorShapeLinks is a Simplexity game parameterizable with respect to board dimensions, number of pieces required for a winning sequence, and initial number of round and square pieces. Table~\ref{tab:cslparams} shows the available parameters as well as the symbols used for them in the remainder of this paper. \begin{table}[tb] \centering \caption{ColorShapeLinks parameters.} \label{tab:cslparams} \begin{tabular}{clr} \toprule \bfseries Symbol & \bfseries Name & \bfseries Default\textsuperscript{a}\\ \midrule $r$ & Rows & 6\\ $c$ & Columns & 7\\ $w$ & Win sequence & 4\\ $s$ & Square pieces per player & 11\\ $o$ & Round pieces per player & 10\\ \bottomrule \multicolumn{3}{l}{\mbox{ }\textbf{\textsuperscript{a}} i.e., a regular game of Simplexity.} \end{tabular} \end{table} \subsection{Characteristics} \label{sec:csl:characteristics} ColorShapeLinks can be characterized as follows \citep{yannakakis2018artificial,millington2019ai}: \begin{itemize} \item It is a \textbf{turn-based} game. \item It is a \textbf{two-player}, \textbf{zero-sum adversarial} game. \item It is \textbf{deterministic}, i.e., there is no random element influencing the game state. \item It has \textbf{perfect information}, i.e., the board state is fully observable. \item The \textbf{maximum number of turns} is given by \begin{equation*} t_\text{max} = \min{\{r \cdot c, 2(s + o)\}} \end{equation*} i.e., it is equal to the minimum between the number of board positions, $r \cdot c$, and the total pieces available to be played, $2(s + o)$. \item Since each board position can be in one of five states (empty, white circle, white square, red circle or red square), an upper bound for the \textbf{state space} is given by $5^{t_\text{max}}$. In practice the state space will be considerably smaller, since this value includes invalid states, for example when pieces are on top of empty cells. \item The initial \textbf{branching factor} is $2 \times c$ (2 shapes, $c$ columns), although it may decrease during the game as columns are filled and pieces of a certain shape are played-out. Consequently, an upper bound for the game tree size is given by $(2 \cdot c)^{t_\text{max}}$. \end{itemize} These characteristics place ColorShapeLinks in an interesting position for AI research. 
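To make these bounds concrete, the following short C\# snippet (illustrative only; it is not part of the framework, and all identifiers in it are ours) evaluates $t_\text{max}$ and the two upper bounds for the default Simplexity parameters:
\begin{verbatim}
using System;
using System.Numerics;

// Illustrative only: evaluates the bounds discussed above for the
// default Simplexity parameters; not part of the framework.
static class CslBounds
{
    static void Main()
    {
        int r = 6, c = 7, s = 11, o = 10;

        // Maximum number of turns: board positions vs. available pieces.
        int tMax = Math.Min(r * c, 2 * (s + o));             // 42

        // Upper bounds: 5 states per position, 2c initial branching.
        BigInteger states = BigInteger.Pow(5, tMax);
        BigInteger tree = BigInteger.Pow(2 * c, tMax);

        Console.WriteLine($"t_max = {tMax}");
        Console.WriteLine($"states <= {(double)states:E2}"); // ~2.27E+29
        Console.WriteLine($"tree   <= {(double)tree:E2}");   // ~1.37E+48
    }
}
\end{verbatim}
Even these loose upper bounds make it clear that exhaustive enumeration of the game tree is out of reach.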
The game, even with standard Simplexity parameters, cannot currently be solved using a brute force approach in a short amount of time. Like most games nowadays, machine learning techniques (e.g., deep learning), together with a tree search approach such as Monte Carlo Tree Search (MCTS) \citep{coulom2006efficient}, will certainly be able to produce hard-to-beat agents. However, the limited and well-defined ruleset leaves the door open for knowledge-based or even analytical solutions. \section{Educational Context and Motivation} \label{sec:motivation} ColorShapeLinks was originally developed as a Unity-only assignment for an AI for Games course unit\footnote{\url{https://github.com/VideojogosLusofona/ia_2019_board_game_ai}} at Lusófona University's Videogames BA. This is an evenly interdisciplinary degree \citep{mateas2007design}, meaning that while it possesses solid Computer Science (CS) fundamentals, it is more limited in this regard than technology-focused curricula, giving equal ground to Game Design and Art courses. The Unity game engine is used in most of the course units, since it is easy to learn, free, cross-platform, and widely used in education and actual game development \citep{dickson2015using,dickson2017an,comber2019engaging,toftedahl2019taxonomy,fachada2020topdown,hmeljak2020developing}. Consequently, and to increase students' proficiency with Unity, the engine's scripting language, C\#, is lectured independently in two programming course units \citep{fachada2020topdown}. The AI for Games course unit treads this fine line by implementing a hands-on, Unity and industry-oriented program based on selected parts of Millington's \textit{AI for Games} textbook \citep{millington2019ai}. The course unit addresses topics such as heuristics, board games, movement, pathfinding, decision making, learning and procedural content generation. However, more complex and/or academic materials are avoided. With respect to the topic of board games, the course focuses on the Minimax algorithm, as well as several variations and optimizations, such as alpha-beta pruning, move ordering and iterative deepening. Techniques such as transposition tables and MCTS are not currently lectured, although in the specific case of MCTS this might change given the proven usefulness of the method in a variety of different games and scenarios \citep{yannakakis2018artificial}. Due to its wide-ranging design, Lusófona's Videogames degree attracts students from various areas of study and with substantially different interests. An issue with this type of curriculum design is that it can be difficult to motivate students enrolled in courses outside their main preferences. This is often the case for art and design-inclined students in CS courses in general \citep{fachada2018db,fachada2019desafios,deandrade2020fun}, and AI in particular \citep{lin2021modeling}. Thus, student motivation strategies become particularly relevant in this context \citep{pintrich2003motivational}. Board game competitions in AI courses have been shown to motivate students in studying course materials and in autonomously searching for solutions beyond the scope of the course's program and lectures \citep{chesani2017game}. Hence, and with the goal of increasing motivation among students---particularly those whose primary interests do not lie in CS---a board game AI assignment featuring an internal classroom competition was devised.
Given the variety of student backgrounds, the project had to be made accessible for everyone, while challenging for more advanced and/or CS-oriented students. A deterministic, fully observable two-player, non-trivial board game was an obvious choice. Another requirement was that it was not a very well-known game, so that not much AI code was available online, forcing students to be original. Simplexity \citep{simplexitybgg} ticks those boxes, as it was only used once in an AI course \citep{wilkins2012simplexity}, to our knowledge. Furthermore, Simplexity was implemented as a console C\# project (no AI) two years prior\footnote{\url{https://github.com/VideojogosLusofona/lp1_2017_p1}}, thus being a perfect choice, since students already knew the base game. This led to the development of ColorShapeLinks, an AI assignment and competition, first introduced in the first semester of the 2019/20 academic year. Moreover, some authors have reported positive educational outcomes when creating assignments based on international AI competitions \citep{kim2013game,yoon2015challenges}. They argue that, by offering students the possibility of submitting their solutions to international events, and of comparing their solutions with the state of the art, their engagement and motivation are improved. Following this line of reasoning, ColorShapeLinks was proposed as a competition for the IEEE CoG 2020 conference. Since the first version of the framework was only suitable for basic classroom competitions, it was extended to support more advanced use cases. Additions included a console mode, for advanced debugging and analysis of agents, and a set of scripts for setting up automatically running tournaments. Thus, ColorShapeLinks recast its role as an internal competition during the second semester of the 2019/20 academic year, while at the same time hosting the eponymous competition at the IEEE CoG 2020 conference. These experiences are described in Section~\ref{sec:deployments}. Beyond the educational benefits of ColorShapeLinks discussed thus far, and the fact that Unity is widely used in education and industry, it should also be noted that: (1) C\# is used as a scripting language in several major game engines \citep{unity3d,xenko,cryengine,godotengine,monogame,flaxengine,unigine,waveengine}; and, (2) neither Unity nor C\# are very common in game AI competitions. Thus, ColorShapeLinks has the potential to more generally reduce the gap between academic AI and game development industry/education. This is especially relevant for interdisciplinary and/or industry-oriented programs such as Lusófona's Videogames BA. \section{Development Framework} \label{sec:devframework} The ColorShapeLinks development framework offers two application frontends, one for the Unity game engine, the other a text-based affair aimed at terminal use. This allows students familiar with Unity to start implementing an agent from their comfort zone, advancing to the text-based console frontend if Unity development limitations begin to show. Advanced students or researchers can skip the Unity frontend altogether, and start with the console frontend from the offset. The framework architecture supporting this flexibility is described in Subsection~\ref{sec:devframework:arch}. Subsection~\ref{sec:devframework:frontends} details the two available frontends, highlighting their advantages and disadvantages. 
The basics of implementing an agent are discussed in Subsection~\ref{sec:devframework:implementing}, while the agents included with the framework are presented in Subsection~\ref{sec:devframework:agents}. Finally, the framework's availability, documentation and licensing are outlined in Subsection~\ref{sec:devframework:availability}. \subsection{Architecture} \label{sec:devframework:arch} The development framework is organized around three main components, namely the Unity frontend (UnityApp, a single .NET/Mono project), the console frontend (ConsoleApp, composed of two .NET projects), and the Common .NET project. This organization is shown in \figurename~\ref{fig:arch}, and discussed with additional detail in the following paragraphs. \begin{figure}[tb] \centering \includegraphics[width=1\textwidth]{arch_template} \caption{Internal organization of the ColorShapeLinks development framework. Arrows represent dependencies between separate .NET projects.} \label{fig:arch} \end{figure} \subsubsection{The Common Project} \label{sec:devframework:arch:common} The Common project is a .NET class library which constitutes the core of the framework. It defines the fundamental models---from a Model-View-Controller (MVC) perspective \citep{sarcar2020design}---of the ColorShapeLinks game, such as the board, its pieces or performed moves, and is a dependency of the remaining projects. It is further subdivided in the \texttt{AI} and \texttt{Session} namespaces. The former defines AI-related abstractions, such as the \texttt{AbstractThinker} class, which AI agents must extend, as well as a manager for finding and instantiating concrete AI agents. The latter specifies a number of match and session-related interfaces, as well as concrete match and session (i.e., tournament) models. \subsubsection{The ConsoleApp Frontend Projects} \label{sec:devframework:arch:consoleapp} The ConsoleApp is composed of two .NET projects, App and Lib, both of which depend on the Common class library, as shown in \figurename~\ref{fig:arch}. The App project is a .NET console application with an internal dependency on the Lib project, itself a .NET class library. The App project provides the actual console frontend, namely the text user interface (TUI) with which the user interacts in order to run ColorShapeLinks matches and sessions. The Lib class library acts as an UI-independent ``game engine'', offering match and session controllers, as well as interfaces for the associated event system, allowing to plug in renderers (views, in MVC parlance) or other event handling code at runtime. It serves as a middleware between the Common library and frontend applications, such as the one implemented in the App project. It is not used by the Unity implementation, since Unity already provides its own game engine logic, forcing match and session controllers to be tightly integrated with its frame update cycle. Nonetheless, the Lib class library makes the creation of new ColorShapeLinks TUIs or GUIs very simple, as long as they are not based on highly prescriptive frameworks such as Unity. \subsubsection{The UnityApp Frontend Project} \label{sec:devframework:arch:unityapp} The UnityApp is a ColorShapeLinks frontend implemented in the Unity game engine. Like the ConsoleApp, it is designed around the MVC design pattern, making use of the models provided by the Common library. In this case, however, the views and controllers are tightly integrated with the Unity engine. 
\subsection{Frontends} \label{sec:devframework:frontends} The two available frontends, for Unity and console, have similar capabilities. Both are capable of performing matches and tournaments involving multiple AI agents and human players. An agent can be used without modification when moving from one frontend to the other. There are, however, advantages in using the console frontend. The following paragraphs offer additional detail on each frontend. \subsubsection{The Unity Frontend} \label{sec:devframework:frontends:unity} In the Unity frontend, matches and tournaments are played within the Unity Editor, though it is possible to create a standalone build with prefixed match or tournament configurations. The rationale behind this choice is that ColorShapeLinks is at its core a development framework. Therefore, it makes sense this is done within the Unity Editor, which is a development environment. A game of ColorShapeLinks running within the Unity Editor is shown in \figurename~\ref{fig:unity}. \begin{figure}[tb] \centering \includegraphics[width=1\textwidth]{unity4} \caption{Running ColorShapeLinks using the Unity Editor UI.} \label{fig:unity} \end{figure} Developing an agent within the Unity editor has the disadvantage that games run considerably slower than in the console. Furthermore, constantly creating standalone builds with prefixed configurations is not practical. Therefore, while this frontend makes ColorShapeLinks approachable, it should be considered more of an introduction to the competition than a definitive way of implementing advanced state-of-the-art ColorShapeLinks agents. \subsubsection{The Console Frontend} \label{sec:devframework:frontends:console} The console UI allows for a more refined control of ColorShapeLinks matches and sessions. Being based on MVC, it allows for easily swapping the UI, as well as running matches at full speed, contrary to the Unity editor. \figurename~\ref{fig:consoleapp} shows running a ColorShapeLinks match using the console UI. While the console frontend offers many extensibility points, its default configuration will likely suffice in most situations. For example, users can run a learning algorithm on top of the console app, making use of its return values, which indicate the match result or if an error occurred. \begin{figure}[tb] \includegraphics[width=1\textwidth]{console} \caption{Running ColorShapeLinks using the console UI. Only the start of the match is shown.} \label{fig:consoleapp} \end{figure} \subsection{Implementing an Agent} \label{sec:devframework:implementing} The first step to implement an AI agent is to extend the \texttt{AbstractThinker} base class. This class has three overridable methods, but it is only mandatory to override one of them, as shown in Table~\ref{tab:thinkmethods}. \begin{table}[tb] \centering \caption{Overridable Methods in the \texttt{AbstractThinker} class} \label{tab:thinkmethods} \begin{tabular}{lll} \toprule Method & Mandatory & Purpose\\ & override? & \\ \midrule \texttt{Setup()} & No & Setup the AI agent.\\ \texttt{Think()} & Yes & Select the next move to perform.\\ \texttt{ToString()} & No & Return the AI agent's name.\\ \bottomrule \end{tabular} \end{table} There is also the non-overridable \texttt{OnThinkingInfo()} method, which can be invoked for producing ``thinking'' information, mainly for debugging purposes. 
In the Unity frontend this information is printed on Unity's console, while in the console frontend the information is forwarded to the registered thinker listeners (or views, from a MVC perspective). Classes extending \texttt{AbstractThinker} also inherit a number of useful read-only properties, namely board and match configuration properties (number of rows, number of columns, number of pieces in sequence to win a game, number of initial round pieces per player and number of initial square pieces per player) and the time limit for the AI to play. Concerning the board/match configuration properties, these are also available in the board object given as a parameter to the \texttt{Think()} method. However, the \texttt{Setup()} method can only access them via the inherited properties. The following subsections address the overriding of each of these three methods. \subsubsection{Overriding the \texttt{Setup()} method} \label{sec:devframework:implementing:setup} If an AI agent needs to be configured before starting to play, the \texttt{Setup()} method is the place to do it. This method receives a single argument, a string, which can contain agent-specific parameters, such as maximum search depth, heuristic to use, and so on. It is the agent's responsibility to parse this string. In the Unity frontend, the string is specified in the ``Thinker params'' field of the \texttt{AIPlayer} component. When using the console frontend, the string is passed via the \texttt{-{}-white/red-params} option for simple matches, or after the agent's fully qualified name in the configuration file of a complete session. Besides the parameters string, the \texttt{Setup()} method also has access to board/match properties inherited from the base class. The same AI agent can represent both players in a single match, as well as more than one player in sessions/tournaments. Additionally, separate instances of the same AI agent can be configured with different parameters. In such a case it might be useful to also override the \texttt{ToString()} method for discriminating between the instances configured differently. This is an essential feature if ColorShapeLinks is running under a machine learning and/or optimization infrastructure. Note that concrete AI agents require a parameterless constructor in order to be found by the various frontends. Such constructor exists by default in C\# classes if no other constructors are defined. However, it is not advisable to use a parameterless constructor to setup an AI agent, since the various board/match properties will not be initialized at that time. This is yet another good reason to perform all agent configuration tasks in the \texttt{Setup()} method. In any case, concrete AI agents do not need to provide an implementation of this method if they are not parameterizable or if they do not require an initial configuration step. \subsubsection{Overriding the \texttt{Think()} method} \label{sec:devframework:implementing:think} The \texttt{Think()} method is where the AI actually does its job and is the only mandatory override when extending the \texttt{AbstractThinker} class. This method accepts the game board and a cancellation token, returning a \texttt{Future\hyp{}Move} object. In other words, the \texttt{Think()} method accepts the game board, the AI decides the best move to perform, and returns it. The selected move will eventually be executed by the match engine. The \texttt{Think()} method is called in a separate thread. Therefore, it should only access local instance data. 
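Before detailing these aspects, the sketch below shows what a minimal (and deliberately naive) thinker may look like. Only \texttt{AbstractThinker}, \texttt{Setup()}, \texttt{Think()}, \texttt{FutureMove}, \texttt{DoMove()}, \texttt{UndoMove()} and \texttt{CheckWinner()} are taken from the framework as described in this section; exact signatures and members such as \texttt{board.cols}, \texttt{board.IsColumnFull()}, \texttt{Winner.None} and \texttt{FutureMove.NoMove} are assumptions made for illustration and may differ in the actual API:
\begin{verbatim}
using System.Threading;
using ColorShapeLinks.Common;     // Common project namespaces, as
using ColorShapeLinks.Common.AI;  // described in the Architecture section

// Minimal, deliberately naive thinker; a sketch, not competition code.
public class NaiveThinker : AbstractThinker
{
    // Example parameter, kept only to illustrate Setup(); a real agent
    // would use it (e.g. as a Minimax search depth).
    private int maxDepth = 2;

    public override void Setup(string str)
    {
        // Agents must parse their own parameter string.
        if (int.TryParse(str, out int d)) maxDepth = d;
    }

    public override FutureMove Think(Board board, CancellationToken ct)
    {
        for (int col = 0; col < board.cols; col++)
        {
            // Honor cancellation requests from the match engine.
            if (ct.IsCancellationRequested) return FutureMove.NoMove;
            if (board.IsColumnFull(col)) continue;

            foreach (PShape shape in new[] { PShape.Round, PShape.Square })
            {
                // Tentatively make a move on the (copied) board, inspect
                // the result, then roll it back.
                board.DoMove(shape, col);
                Winner w = board.CheckWinner();
                board.UndoMove();

                // Naive: plays any game-ending move without checking piece
                // availability or *whose* win it is (a real agent must,
                // given Simplexity's shape-priority rule).
                if (w != Winner.None) return new FutureMove(col, shape);
            }
        }
        // Fallback: first non-full column, round piece.
        for (int col = 0; col < board.cols; col++)
            if (!board.IsColumnFull(col)) return new FutureMove(col, PShape.Round);
        return FutureMove.NoMove;
    }
}
\end{verbatim}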
The main thread may ask the AI to stop thinking, for example if the thinking time limit has expired. Thus, while thinking, the AI should frequently test if a cancellation request was made to the cancellation token. If so, it should return immediately with no move performed. The game board can be freely modified within the \texttt{Think()} method, since this is a copy and not the original game board being used in the main thread. More specifically, the agent can try moves with the \texttt{DoMove()} method, and cancel them with the \texttt{UndoMove()} method. The board keeps track of the move history, so the agent can perform any sequence of moves, and roll them back afterwards. For parallel implementations, the agent can create additional copies of the board, one per thread, so that threads can search independently of each other. The \texttt{CheckWinner()} method of the game board is useful to determine if there is a winner. If there is one, the solution is placed in the method's optional parameter. For building heuristics, the game board's public read-only variable \texttt{winCorridors} will probably be useful. This variable is a collection containing all corridors (sequences of positions) where promising or winning piece sequences may exist. The AI agent will lose the match in the following situations: \begin{itemize} \item Causes or throws an exception. \item Takes too long to play. \item Returns an invalid move. \end{itemize} \subsubsection{Overriding the \texttt{ToString()} method} \label{sec:devframework:implementing:tostring} By default, the \texttt{ToString()} method removes the namespace from the agent's fully qualified name, as well as the ``thinker'', ``aithinker'' or ``thinkerai'' suffixes. However, this method can be overridden in order to behave differently. One such case is when agents are parameterizable, and differentiating between specific parameterizations during matches and sessions becomes important. \subsubsection{Summing-up} \label{sec:devframework:implementing:bottomline} Building an AI agent for ColorShapeLinks is very simple, asking only the implementer to extend a class and implement one method. A basic Minimax algorithm with a simple heuristic can be implemented in less than 30 minutes. Educators can use it to demonstrate how to create a simple agent from scratch, during a class for example, and leave up to the students to find better, more efficient solutions. This can be done as an assignment, a competition, or both. \subsection{Included Agents} \label{sec:devframework:agents} Three test agents are included with the framework, serving both as an example on how to implement an agent, as well as a baseline reference for testing other agents. The \textit{sequential} agent always plays in sequence, from the first to the last column and going back to the beginning, although skipping full columns. It will start by using pieces with its winning shape, and when these are over, it continues by playing pieces with the losing shape. Therefore, it is not a ``real'' AI agent. The \textit{random} agent plays random valid moves, avoiding full columns and unavailable shapes. It can be parameterized with a seed to perform the same sequence of moves in subsequent matches (as long as the same valid moves are available from match to match). The \textit{minimax} agent uses a basic, unoptimized Minimax algorithm with a naive heuristic which privileges center board positions. 
It can be parameterized with a search depth, and, although simple, is able to surprise unseasoned human players---even at low search depths. \subsection{Availability} \label{sec:devframework:availability} The development framework is available at \url{https://github.com/VideojogosLusofona/color-shape-links-ai-competition} and is fully open source, licensed under the Mozilla Public License 2.0\footnote{\url{https://www.mozilla.org/en-US/MPL/2.0/}} (MPL2), which requires changes to the source code to be shared, while allowing for integration with proprietary code if the MPL2-licensed code is kept in separate files. The framework is completely documented and the documentation is available at \url{https://videojogoslusofona.github.io/color-shape-links-ai-competition/docs/html/}. \section{Deployments} \label{sec:deployments} The ColorShapeLinks framework has been used to host two internal competitions (in separate semesters) in an AI for Games course unit at Lusófona University's Videogames BA, as well as an international AI competition at the IEEE CoG 2020 conference. These deployments are discussed in the next two subsections. \subsection{Internal Competitions in AI for Games Course Units} \label{sec:deployments:internal} ColorShapeLinks was used as an assignment and internal competition in an AI for Games course unit during the two semesters of the 2019/20 academic year. In both semesters, the submitted AI agents and associated reports were graded preliminarily and separately from the final competition. Since students work in groups of 2 or 3, an individual discussion was also performed. Results from the internal competition were used to potentially improve the preliminary grades, not lower them, since the main goal was to motivate students and have an engaging class where students watched and commented on the performance of each other's agents in real time. Nonetheless, if an agent was not competent enough to enter the competition, i.e., it froze the Unity project or did not respond to cancellation requests, up to 1 point could be subtracted from the final grade. No students from the first semester repeated the course during the second semester. However, there were several differences between the two semesters. In the first semester, ColorShapeLinks was a Unity-only assignment, as stated in Section~\ref{sec:motivation}. The minimum requirement for passing---i.e., to have a grade of 10 or higher (grades are given on a 0--20 scale)---was that students implemented an agent capable of defeating the \textit{sequential} and \textit{random} agents. The basic \textit{minimax} agent, described in Subsection~\ref{sec:devframework:agents}, was not included in the framework at this time. In the second semester, the console frontend and the basic \textit{minimax} agent were added to the framework, with the assignment coinciding with the first few weeks of the international competition. The minimum passing requirement in the second semester was for the submitted agent to beat the basic \textit{minimax} implementation. Grade distribution and summary statistics for the ColorShapeLinks assignment in both semesters are shown in Fig.~\ref{fig:grades} and Table~\ref{tab:grades}, respectively. \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{grades} \caption{Grade distribution for the ColorShapeLinks AI assignment in the two semesters of the 2019/20 academic year.
Grades are given on a 0--20 scale.} \label{fig:grades} \end{figure} \begin{table} \caption{\label{tab:grades}Summary statistics for the ColorShapeLinks AI assignment grades in the two semesters of the 2019/20 academic year, namely, number of students enrolled, $n_\text{all}$, number of students evaluated in the assignment, $n_\text{eval}$, mean grade, $\overline{x}$, median grade, $\widetilde{x}$, and percentage of students with a passing grade, $\%_{\ge 10}$. Grades are given on a 0--20 scale.} \centering \begin{tabular}{lccccc} \toprule & $n_\text{all}$ & $n_\text{eval}$ & $\overline{x}$ & $\widetilde{x}$ & $\%_{\ge 10}$ \\ \midrule Semester 1 & 27 & 25 & \num{12.02} & \num{12.00} & \SI{80.00}{\percent} \\ Semester 2 & 25 & 22 & \num{13.64} & \num{13.00} & \SI{86.36}{\percent} \\ \bottomrule \end{tabular} \end{table} Looking at the overall results in both semesters, at least 80\% of the students had a passing grade ($\ge 10$) and participated actively in the assignment. Students less interested in the AI aspect of game design and development mostly submitted Minimax-based agents, e.g. with alpha-beta pruning, just to pass the project. They were, nonetheless, generally more committed than in other AI and programming assignments. At the other end, students more comfortable with programming and AI generally dedicated more hours to the assignment and came up with interesting and competitive agents. Most, if not all, students were highly engaged with the final live competition during class (e.g., ``my AI is better than yours''!). There were some notable grading variations between the two semesters. Grades in the second semester were higher on average, and more students met the passing grade (Table~\ref{tab:grades}), even though the minimum requirement was higher. The grade distribution (Fig.~\ref{fig:grades}) shows that, in the first semester, a considerable number of students aimed at the bare minimum just to get a passing grade, while in the second semester students generally went for more elaborate solutions. Some of these went beyond the scope of the lectured materials, making use of transposition tables, search parallelization and heuristic parameter learning. We argue there were three main reasons which, separately or in combination, led to this outcome: \begin{enumerate} \item The console app allowed some students to better test and debug their agent implementations, without being constrained by Unity. \item Some students might have been motivated by working on an assignment which was at the same time an international AI competition. \item The \textit{minimax} agent allowed students to study and better understand how a ``real'' agent could be implemented, allowing them to build upon it. \end{enumerate} These results suggest that ColorShapeLinks was successful in enabling students with various interests and with different capabilities within game making---as is the case of students in Lusófona's Videogames BA---to produce palpable, working AI code, and to feel included when addressing this challenging topic. \subsection{IEEE CoG 2020 International Competition} ColorShapeLinks was accepted as an official AI competition at the IEEE CoG 2020 conference, and was funded with prize money of 500 USD for the winner of each track. The competition ran on two distinct tracks: \begin{enumerate} \item The \textit{Base Track}, which used standard Simplexity rules with a time limit of 0.2 seconds per move. Only one processor core was available for the AI agents.
\item The \textit{Unknown Track}, which was played on a multi-core processor under a parameterization known only after the competition deadline, since it depended on the result of a public lottery draw. \end{enumerate} The goal of the \textit{Base Track} was to test agent capabilities in the standard Simplexity game. The \textit{Unknown Track} evaluated the generalization power of the submitted solutions when applied to a most likely untested parameterization. For each track, the submitted agents played against each other two times, so each had the opportunity of playing first. Agents were awarded 3 points per win, 1 point per draw and 0 points per loss, with the final standings for each track depending on the total number of points obtained per agent. Classifications for the \textit{Base Track} were updated daily during a five-month period, up until the submission deadline, together with two larger test parameterizations. This allowed participants to have an idea of how their submission was faring, and to update it accordingly. The competition had a total of six submissions, four of which were from undergraduate students, both solo and in teams. Two of the submissions were from students of the author who fared well in the internal competition discussed in the previous subsection. Although the number of submissions was low, the fact that four of them were from undergrads partially demonstrated that the competition was accessible. Eita Aoki, from Japan, won the \textit{Base Track} with his \textit{Thunder} agent, which used MCTS together with a custom bit board implementation. The winner and runner-up of the \textit{Unknown Track} were teams from Portugal and students of the author who attended the AI for Games course in the second semester. João Moreira won the track with \textit{SureAI}, a highly optimized Minimax-based agent. The \textit{SimpAI} agent, developed by a team of three students, was the runner-up, implementing a set of hand-crafted heuristics balanced with an evolutionary algorithm \citep{fernandes2021simpai}. While interesting, none of these agents was truly state-of-the-art, leaving the door open for better agents going forward. \section{Implications and Limitations} \label{sec:limitations} The deployments discussed in the previous sections demonstrate the potential and usefulness of the ColorShapeLinks framework. The internal competition appeared to motivate the students, who were highly engaged during the class tournaments held in both semesters. The introduction of the international competition during the second semester had a more pronounced effect, though. Overall grades and dedication improved, with students generally showing more autonomy and going beyond what was lectured in the classes, effectively validating the works of \cite{kim2013game} and \cite{yoon2015challenges}. The authors of \textit{SimpAI}, a student group of the second semester, were able to publish a paper describing their solution and the second place it obtained in the \textit{Unknown Track} of the international competition \citep{fernandes2021simpai}. These results show that AI competitions in general, and ColorShapeLinks in particular, have clear educational benefits with respect to student motivation, engagement and autonomy. This is further highlighted by the fact that Lusófona's Videogames degree is interdisciplinary (non-CS focused) and industry-driven (not academy-oriented).
The fact that students from such a degree were able to compete in an international competition with good results indicates that ColorShapeLinks shows good promise in bridging the gap between game development education and academic AI. However, the research presented in this paper has two main limitations which should be highlighted. First, no survey was done in order to assess students' reflections on the competitions. The stated student motivation and engagement during the internal competitions are taken from the author's observations, and are thus entirely subjective and potentially biased, no matter how clear or obvious they might have been. Second, the underlying game has a very narrow focus. Even as an unbounded version of Simplexity, ColorShapeLinks---as a game---is relatively simple. Thus, many good and excellent solutions are bound to appear in the short to medium term, possibly even analytical solutions, as was the case for Connect-4 \citep{allis1988knowledge}. Consequently, ColorShapeLinks---as a framework---might have a short service life before the problem it addresses becomes effectively solved. Still, this is not the case at the time of writing, as ColorShapeLinks will again be one of the competitions held at the IEEE CoG 2021 conference. \section{Conclusions} \label{sec:conclusions} In this paper we presented ColorShapeLinks, an AI board game competition framework specifically designed for game development students and educators, with openness and accessibility at its core. The arbitrarily-sized Simplexity board game---implemented by the framework---offers a good balance between simplicity and complexity, being approachable by undergraduates, while posing a non-trivial challenge to researchers and more advanced students. The framework has been successfully used for running internal competitions in an AI for Games course unit, as well as for hosting an international AI competition, validating its usefulness. Although the problem addressed by the proposed framework is bound to be solved in the next few years---thus rendering the framework obsolete---the general ideas presented in this paper, such as running internal and international AI competitions using industry-standard tools and software best practices for educational purposes, were confirmed, and remain valid for similar future endeavors. \section*{Statements on open data and ethics} The data reported in this paper are fully anonymized and publicly available at the Zenodo open-access scientific repository \citep{fachada2021csldata}. \section*{Declaration of competing interest} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \section*{Acknowledgments} This work is supported by Fundação para a Ciência e a Tecnologia under Grants UIDB/04111/2020 (COPELABS) and UIDB/05380/2020 (HEI-Lab). The author would like to thank André Fachada for proof-reading the text. The author would also like to thank the anonymous referees for their valuable comments and helpful suggestions. \bibliographystyle{model5-names}
{ "timestamp": "2021-03-01T02:22:58", "yymm": "2012", "arxiv_id": "2012.09015", "language": "en", "url": "https://arxiv.org/abs/2012.09015" }
\section{Introduction} \label{sec:intro} Giant radio sources (GRSs) are objects with extremely large projected linear sizes ($>$ 0.7 Mpc; assuming H$_0$=71 km $\rm s^{-1} Mpc^{-1}$, $\Omega_M$=0.27, $\Omega_{\Lambda}$=0.73; \citealt{spergel2003}). It is believed that such large radio structures are relatively rare: only $\sim$6\% of radio sources from the 3CR complete sample exceed this size (e.g. \citealt{ishwara99}). The reasons why some radio sources have grown so large are not fully understood; however, detailed multi-wavelength studies have significantly increased our knowledge about the nature of GRSs (e.g. \citealt{jamrozy2008}, \citealt{machalski2009}, \citealt{konar2008}, \citealt{kuligowska2009}, \citealt{subrahmanyan2008}). A crucial point in research on the origin of GRSs is the study of large and homogeneous samples of such objects. Owing to the efforts of many scientists, many new GRSs have been found over the last several years. \cite{kuzmicz2018} catalogued all the GRSs found in the literature up to 2018. The sample includes 349 GRSs, of which 280 are hosted by galaxies (giant radio galaxies; GRGs) and 69 by quasars (giant radio quasars; GRQs). The second-largest sample of GRSs was compiled by \cite{dabhade2020}. Based on the low-frequency LOFAR Two-metre Sky Survey first data release \citep[LoTSS;][]{shimwell2017, shimwell2019}, the authors collected 239 GRSs (199 GRGs and 40 GRQs). This release covers a region of only 424 deg$^2$, but the LOFAR survey is very sensitive to low surface brightness features and has a high angular resolution, which makes it a valuable tool for identifying extended radio galaxies. A smaller sample of GRSs was also compiled by \cite{koziel2020} as part of the ROGUE project, within which the authors catalogued 33 GRGs. Recently, a new large catalogue has been prepared by \cite{dabhade2020b} under the SAGAN project, where the authors collected 162 GRSs (139 GRGs and 23 GRQs) using NVSS (\citealt{condon1998}), FIRST (\citealt{becker1995}), and the TIFR GMRT Sky Survey (TGSS; \citealt{intema2017}). In total, at least about 770 GRSs are known, of which only 109 are considered GRQs. In the 3CRR complete sample of radio sources with a flux density limit of 10 Jy \citep{laing1983}, 75\% of radio sources are radio galaxies and 25\% are radio quasars, according to the NASA Extragalactic Database (NED, \url{ned.ipac.caltech.edu}). The smaller fraction of GRQs in the entire GRS population indicates that selection effects play a significant role in their identification. The connection between the production of powerful jets and the conditions within the innermost regions of active galactic nuclei (AGN) has not been fully explored. Various studies have attempted to understand the physical processes underlying the optical and radio emission in quasars (e.g. \citealt{jackson1991}, \citealt{willott1999}, \citealt{miller1999}, \citealt{sulentic2002}, \citealt{kimball2011}, \citealt{jackson2013}, \citealt{olmo2020}, \citealt{gaur2019}). A lot of research has concentrated on testing unified schemes, in which radio-loud quasars are galaxies with a central supermassive black hole (BH) surrounded by an accretion disk, a dusty torus and clouds of gas. They can generate powerful radio jets directed along the rotation axis of the BH. The anisotropic emission due to torus obscuration, as well as the relativistic boosting of radio jet emission, leads to the orientation effects visible in optical and radio bands.
Certain spectral parameters were found to correlate with the radio source's orientation. For example, \cite{kharb2004} found a correlation between nuclear optical luminosity and radio core prominence, which can be used as an orientation indicator. The equivalent widths of broad emission lines are also found to be orientation-dependent (e.g. \citealt{baker1997}; \citealt{kimball2011}), although these authors obtained contradictory results, claiming an anticorrelation and a correlation, respectively, between those two parameters. However, the basic AGN model predicts stronger continuum emission in sources viewed closer to the radio-jet axis, which leads to smaller values of equivalent widths. \\ The connection between the extended radio luminosity and the luminosities of narrow emission lines found by different authors (e.g. \citealt{baum1989}, \citealt{rawlings1989}, \citealt{tadhunter1998}, \citealt{gaur2019}) indicates that the source responsible for the narrow-line emission is actually also the source of the radio emission \citep{rawlings1991}, in contrast to other models in which radio and narrow-line luminosities are mainly driven by the environment \citep{dunlop1993}. The narrow-line and radio emission are most likely related to the accretion rate and/or the BH mass, which are also drivers of AGN evolution.\\ GRSs, which are supposed to be sources at an advanced evolutionary stage, were studied in terms of their optical properties by \cite{kuzmicz2012}. The authors analysed BH masses, accretion rates, and radio properties for a sample of 45 GRQs. As a result, they discovered that GRQs are very similar to smaller radio quasars and that the determined parameters are typical of powerful quasars. It has to be noted that the analysed sample was relatively small compared to the number of GRQs known to date. Therefore, we decided to re-examine the optical properties of GRQs, focusing on various aspects which had not been explored yet. The aim of this work is to complement the existing samples of GRSs with new objects which are hosted by quasars. While the number of known GRGs ($\sim$650) is relatively high, GRQs constitute only $\sim$14\% of all known GRSs. This is much less than in the 3CRR sample, where QSOs constitute 25\% of radio sources. It has to be emphasized that our sample enlarges the number of known GRQs nearly threefold, which shows our method to be an efficient way of finding GRSs. The second part of this study concentrates on some fundamental properties observed in AGNs, i.e. the quasar main sequence, the infrared colour diagram, and the radio core prominence. We compare the properties of GRQs with those of smaller-sized extended radio quasars (SRQs), as well as with the SDSS quasars that have matches with a FIRST radio source according to the SDSS data release 14 Quasar catalogue (DR14Q; \citealt{paris2018}), in order to look for differences between the GRQs and other quasars. \section{Sample selection} For our analysis, we collected the GRQs known to date, along with 174 new objects, which makes our sample the largest one containing such a rare class of radio sources. Of the previously known GRQs, 69 were taken from the literature compilation of GRSs by \cite{kuzmicz2018}. A further 15 GRQs were taken from \cite{dabhade2020}, where the authors identified them on low-frequency radio images of the LoTSS survey, and another 14 GRQs were taken from \cite{dabhade2020b}. The new GRQs studied in this paper were found using the currently available data from the NVSS, FIRST and DR14Q catalogues.
The NVSS, FIRST and DR14Q catalogues have already been used in systematic searches for GRSs (e.g. \cite{proctor2016}, \cite{dabhade2017}, \cite{dabhade2020b}). However, e.g. \cite{proctor2016} selected only radio sources with angular sizes larger than 4$^\prime$. In our study we do not apply any restriction on angular size, and we found 59 new GRQs with angular sizes larger than 4$^\prime$. Therefore, in order to find extended radio sources, future search methods have to be improved.\\ The combined sample of new and known GRQs comprises 272 objects, which significantly enlarges the number of known GRQs. The 174 newly discovered GRQs and their basic parameters are listed in Table 1.
\subsection{GRQ search method} \label{search} In searching for extended radio quasars, we used the DR14Q quasar catalogue, which includes 526 356 spectroscopically identified quasars and quasar candidates. All of the catalogued quasars were cross-matched with the NVSS radio sources in the following way: \begin{itemize} \item In the first step, using the DS9 software\footnote{\url{http://ds9.si.edu}} we plotted the positions of all quasars from the DR14Q on the full NVSS atlas images of 4$^\circ$$\times$4$^\circ$ in size. Only $\sim$1000 NVSS images contained DR14Q objects, and typically each NVSS image had about 500 such quasars. Around each quasar position we drew a circle corresponding to the angular size of 0.7 Mpc at the redshift of that quasar. \item In the second step, we visually inspected all the full NVSS atlas images containing DR14Q objects, looking for radio sources exceeding the size of the plotted circles. We considered only the sources where the quasar position was near the centre of the radio emission, i.e. at the centre of a single apparently elongated emission region, between two maxima of radio emission, or at a maximum of radio emission lying between two nearly symmetrically located maxima. With such a selection strategy we could have missed very asymmetric radio sources and sources with a one-sided radio lobe visible in the NVSS maps. As a result, we selected 1341 quasars with visible elongated NVSS radio emission exceeding 0.7 Mpc in size. \item In the next step, we manually verified all positive matches using FIRST and AllWISE infrared images \citep{cutri2013} and SDSS optical images to reject false identifications. We visually inspected whether radio hot-spots coincide with optical or infrared sources. We also checked whether the positions of the optical quasar hosts coincide with the FIRST radio core emission. Almost all the quasars from our sample have radio cores in the FIRST survey catalogue separated by less than 1$^{\prime\prime}$ from the optical quasar. We confirmed that 603 out of the 1341 selected quasars are hosts of extended radio sources. Based on the FIRST radio maps, we measured the projected linear sizes (the distance between the opposite hot-spots) of the GRQ candidates. As a result, many radio quasars proved to be smaller than 0.7 Mpc, despite their projected linear size on the NVSS maps (as measured to the 3$\sigma$ contour level -- step two of the search method) exceeding 0.7 Mpc (Section \ref{smaller}). In some cases the difference between these two methods of radio source size measurement (from one hot-spot to the opposite hot-spot on the FIRST maps, versus from the 3$\sigma$ contour level of one lobe to the 3$\sigma$ contour level of the opposite lobe on the NVSS maps) is very large. This can be particularly well seen in the case of high-redshift quasars that have small angular sizes (e.g.
for redshift z=1 the NVSS beam size of 45$^{\prime\prime}$ corresponds to $\sim$360 kpc), for which the NVSS 3$\sigma$ sizes are overestimated more than twofold. Therefore, in our study we use radio source size measurements from the hot-spot to hot-spot method on the FIRST maps, while the NVSS 3$\sigma$ sizes were used only in the process of GRQ candidate selection. The difference between the two methods of radio source size measurement is illustrated in Figure \ref{1129}. For the radio sources with no radio core emission detected in the FIRST survey, we checked the host position using the Very Large Array Sky Survey (VLASS; \citealt{lacy2020}), an all-sky radio survey at 3 GHz with high angular resolution ($\sim$2.$^{\prime\prime}$5). We also used VLASS maps to determine the angular sizes of radio sources which are outside the FIRST survey footprint or have a FIRST radio structure visible only on one side of the host quasar. They are marked with the letter ``v'' in Tables 1 and 2. In the cases where it was not possible to measure the radio source's size from hot-spot to hot-spot (because there were no FIRST or VLASS detections), we give an approximate value measured between the NVSS maxima of the lobe emission (marked as ``nn'' in Tables 1 and 2) or between the FIRST hot-spot and the NVSS maximum on the opposite side in the case of one-sided FIRST radio lobes (marked as ``fn'' in Tables 1 and 2). In our study we use only the radio source size measurements listed in Tables 1 and 2. \end{itemize} It is worth highlighting that, using the method described above, we ``re-discovered'' almost all the GRQs from \cite{kuzmicz2018} within the field of the NVSS/SDSS surveys. Moreover, we found 15 out of the 40 GRQs which had been found earlier by \cite{dabhade2020} based on the low-frequency LoTSS radio maps. A further 19 GRQs from \cite{dabhade2020} are too small to meet the GRS size criterion, based on measurements of their radio structures on FIRST maps; therefore, we do not include them in the sample of known GRQs. Of the 23 new GRQs found by \cite{dabhade2020b}, 6 were also identified by our search method, and 11 turned out to have hot-spot to hot-spot FIRST sizes smaller than 0.7 Mpc. These 11 QSOs were likewise not included in the sample of known GRQs. It has to be noted here that in \cite{dabhade2020} the authors measured the sizes of radio sources up to the 3$\sigma$ level on low-resolution (20$^{\prime\prime}$) LOFAR maps, and in \cite{dabhade2020b} up to the 3$\sigma$ level on NVSS maps; therefore, their sizes are overestimated. In our study, for the quasars taken from \cite{dabhade2020} and \cite{dabhade2020b} we use sizes remeasured from hot-spot to hot-spot on the FIRST maps.
\begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./1129.png}\\ \caption{The GRQ J1129-0121 located at z=0.726. The NVSS black contours and FIRST red contours are overlaid on an r-band Pan-STARRS optical image. The figure illustrates the difference between the radio source size measurement methods. In \cite{dabhade2020b}, J1129-0121 is measured to the 3$\sigma$ NVSS contour level, giving a largest linear size of D=1.61 Mpc. The measurement from FIRST hot-spot to hot-spot results in D=0.64 Mpc, disqualifying this quasar as a GRQ. In our study, J1129-0121 is classified as an SRQ.} \label{1129} \end{figure}
\subsection{Subsample of smaller radio quasars} \label{smaller} The method described above allowed for the identification of extended radio quasars with NVSS 3$\sigma$ sizes larger than 0.7 Mpc.
As was mentioned in Section \ref{search}, some of the radio quasars selected in the second step of the search method proved to be smaller than 0.7 Mpc after remeasuring their sizes from hot-spot to hot-spot on the FIRST radio maps. In our study such quasars are classified as SRQs. The SRQ sample comprises 367 objects found with our search method; they are listed in Table 2. Their projected linear sizes lie between 0.2 and 0.7 Mpc, so they are not GRQs, but they represent a population of extended radio quasars which can be used in other studies. Together with the GRQs they provide a sample covering a continuous size range of smaller and larger radio quasars. The number of quasars in the SRQ sample (with 3$\sigma$ sizes larger than 0.7 Mpc and hot-spot to hot-spot sizes smaller than 0.7 Mpc) shows that in samples where the 3$\sigma$ method is used for size measurement, radio source sizes can be overestimated even threefold.
\subsection{Characteristics of the sample} The characteristic parameters of the GRQ sample (174 new findings and 98 previously known GRQs listed in \citealt{kuzmicz2018}, \citealt{dabhade2020} and \citealt{dabhade2020b}) and the SRQ sample are presented in Figures \ref{Pz} and \ref{PD}, where we plot the 1.4 GHz total radio luminosity (P$_{\rm tot}$) as a function of redshift and the total radio luminosity -- linear size diagram, respectively. The P$_{\rm tot}$ for the newly identified GRQs and SRQs was determined by applying the formula given by \cite{brown2001}, where we used the 1.4 GHz flux densities measured on the NVSS maps, adopting a spectral index value of $\alpha=-0.6$ (after \citealt{wardle1997}). We use the convention S$_{\nu}\sim \nu^{\alpha}$. In order to estimate the core radio luminosity (P$_{\rm core}$) analysed in the next sections, we measured the core flux densities on the FIRST maps and used $\alpha=-0.3$ from \cite{zhang2003}.
\begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./Ptot_z_2.png}\\ \caption{The total radio luminosity at 1.4 GHz as a function of redshift for the GRQ (black dots) and SRQ (red dots) samples.} \label{Pz} \end{figure}
\begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./Ptot_D_2.png}\\ \caption{Luminosity -- linear size diagram for GRQs and SRQs. The symbols are the same as in Figure \ref{Pz}. In the figure we use the linear size measured as the distance between hot-spots.} \label{PD} \end{figure}
The quasars from our samples cover the redshift range 0.1$<$z$<$3. The median projected linear size (D) is 0.9 Mpc for GRQs and 0.44 Mpc for SRQs. The median 1.4 GHz total radio luminosity is $\log$P$_{\rm tot}[\rm W Hz^{-1}]$=26.1 for the GRQ sample and $\log$P$_{\rm tot}[\rm W Hz^{-1}]$=26.6 for the SRQ sample. The smaller median value of P$_{\rm tot}$ for GRQs is in agreement with existing radio source evolutionary models (e.g. \citealt{kaiser1997}), in which larger radio sources are older and thus have lower total radio luminosities. It can be clearly seen in Figure \ref{PD}, where we plotted P$_{\rm tot}$ against the projected linear size measured from hot-spot to hot-spot (column 6 in Table 1), that P$_{\rm tot}$ decreases as the projected linear size increases. The distributions of P$_{\rm tot}$ and D for both samples are presented in Figure \ref{distr}. The largest GRQ, J0931+3204, measures 4.29 Mpc (\citealt{coziol2017}).\\ In Figure \ref{distr_z}, we plot the redshift distribution for our samples in comparison with that of GRSs hosted by galaxies.
It can be seen that the highest number of GRQs and SRQs occurs at redshift z$\sim$0.8, while for GRGs the distribution peaks at z=0.2. The differences between the redshift distributions of GRQs and GRGs are caused by selection effects. Galaxies at higher redshifts are hard to observe, while the highest number of SDSS DR14 quasars is observed at z$\sim$1.5 and z$\sim$2.2, with only $\sim$50 objects below z=0.1.
\begin{figure} \centering \includegraphics[width=0.99\columnwidth]{./P_distr_2.png}\\ \includegraphics[width=0.99\columnwidth]{./D_distr_2.png}\\ \caption{Distribution of P$_{\rm tot}$ (top panel) and D (bottom panel) for the GRQ and SRQ samples.} \label{distr} \end{figure}
\begin{figure} \centering \includegraphics[width=0.99\columnwidth]{z_distr3.png}\\ \caption{Distribution of redshift for GRQs (black line), SRQs (red line) and GRGs (blue line).} \label{distr_z} \end{figure}
\begin{itemize} \item High-redshift (z$>$1) GRSs are very rare, because the IGM density is higher at earlier cosmological epochs and also because the surface brightness of the radio structure strongly depends on redshift, which makes such sources hard to detect and identify. Until now, 31 GRQs at z$>$1, of which only 6 have z$>$2, had been reported in the literature \citep{kuzmicz2018, kuligowska2018, dabhade2020}. In the presented sample of GRQs, there are 70 new QSOs at z$>$1, of which 9 are located at z$\geq$2. This significantly increases the number of known high-redshift GRSs. The radio maps of the most distant newly discovered GRQs are presented in Figure \ref{z}, where we plot NVSS and FIRST or VLASS contours overlaid onto r-band Pan-STARRS \citep{flewelling2020} optical images. The most distant GRQ is J1411+0156, located at z=2.95. As can be seen on the radio map of J1411+0156, there is no radio bridge connecting the radio core and the western radio lobe. Moreover, there is a very faint infrared object in the unWISE Catalog \citep{schlafly2019} which coincides with the western radio lobe. Therefore, more sensitive radio data are needed to fully confirm this radio quasar as a GRQ.
\begin{figure*} \centering \includegraphics[width=0.68\columnwidth]{./Q141139+015616.png} \includegraphics[width=0.68\columnwidth]{./Q105840+445815.png} \includegraphics[width=0.68\columnwidth]{./Q112759+453417.png} \includegraphics[width=0.68\columnwidth]{./Q135600+190421.png} \includegraphics[width=0.68\columnwidth]{./Q003445+333018cc.png} \includegraphics[width=0.68\columnwidth]{./Q081145+165256.png} \includegraphics[width=0.68\columnwidth]{./Q145409+615402.png} \includegraphics[width=0.68\columnwidth]{./Q083813+135810.png} \includegraphics[width=0.68\columnwidth]{./Q092628+472236.png} \caption{Radio maps of new GRQs located at z$\geq$2. The NVSS black contours are overlaid onto r-band Pan-STARRS optical images. The FIRST radio contours are plotted in red and the VLASS contours in blue. The cross marks the position of the parent QSO, and the red arrows in the J0034+3301 image mark the positions of the VLASS hot-spots.} \label{z} \end{figure*}
\item X-ray detections. It is expected that for more distant sources the X-ray-to-radio flux ratio increases due to the higher energy density of the cosmic microwave background (CMB). At higher redshifts the relativistic electrons in radio jets preferentially lose energy via scattering off CMB photons (e.g. \citealt{simionescu2016}, \citealt{ghisellini2014}).
Among the quasars included in our samples, 72 (of the 174) GRQs and 96 (of the 367) SRQs have detections in the ROSAT all-sky survey \citep{voges2000, voges1999}, while 10 GRQs and 18 SRQs are listed in the Third XMM-Newton Serendipitous Source Catalog \citep{rosen2016}. The most distant GRQs from our sample have not been detected in X-rays to date, so they constitute good targets for future X-ray observations.\\ \end{itemize}
\subsection{One-sided radio jets} \label{jet} Some quasars from our samples show evidence of a one-sided radio jet visible very close to the host quasar. In the samples of GRQs and SRQs we found 7 and 26 such quasars, respectively. The presence of a one-sided radio jet indicates Doppler boosting of the radio emission, with the jet axis oriented close to the line of sight. Therefore, the projected sizes of such radio sources may be highly underestimated. The quasars with a visible one-sided radio jet are marked with the letter ``j'' in Tables 1 and 2.
\section{Radio core prominence parameter} For the quasars from our samples, we determined the radio core prominence parameter (f$_{\rm c}$), defined as the ratio between the core flux density ($S_{\rm core}$) and the extended radio emission flux density ($S_{\rm ext}=S_{\rm tot}-S_{\rm core}$), i.e. $\rm f_c=S_{\rm core}/S_{\rm ext}$ (denoted the R parameter in \citealt{orr1982}). It was postulated by many authors that f$_{\rm c}$ is a good indicator of radio source orientation: a high value of the f$_{\rm c}$ parameter indicates that the radio jets are oriented closer to the line of sight. The dependence of optical properties on the f$_{\rm c}$ parameter can result from relativistically beamed radiation, from radiation emitted anisotropically due to obscuration in some directions, or from a line-emitting region which is not spherically symmetric \citep{jackson1991}; therefore, f$_{\rm c}$ can be used to probe the structure of the central regions of AGNs. In our study, the f$_{\rm c}$ parameter was estimated only for those radio quasars for which we were able to separate the radio core from the extended radio emission. We did not use quasars with no radio core detection in the FIRST catalogue. The radio core flux density was measured on the FIRST radio maps, which have a better resolution than NVSS, while the total flux densities were measured on the NVSS maps to avoid losing the weak and diffuse radio emission of the lobes. In the sample of GRQs we determined f$_{\rm c}$ for 225 GRQs, of which 18 have f$_{\rm c}$$>$1. An f$_{\rm c}$ value larger than one means that the radio core dominates the overall luminosity of the radio source. In the sample of SRQs we measured f$_{\rm c}$ for 284 quasars, of which only 6 have f$_{\rm c}$$>$1. We checked the correlations between the radio core dominance and the equivalent widths (EWs) of different emission lines. The emission-line measurements were taken from \cite{suvendu2019}, where the authors estimated spectral parameters for all quasars in DR14Q. Different models predict that the emission line properties depend on orientation. Such a dependence can be caused by obscuration by a clumpy torus, Doppler boosting of the continuum emission, or the inclination of the accretion disk (\citealt{nenkova2008}, \citealt{browne1987}, \citealt{netzer1987}).
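The analysis that follows involves only the flux ratio defining f$_{\rm c}$ and linear fits in log--log space. As an illustration (not the original measurement pipeline, which operates on FIRST and NVSS maps as described above, and with purely illustrative array values), this can be sketched in Python:
\begin{verbatim}
import numpy as np
from scipy import stats

def core_prominence(s_core, s_tot):
    """f_c = S_core / (S_tot - S_core), cf. Orr & Browne (1982)."""
    return s_core / (s_tot - s_core)

# Illustrative core/total flux densities [Jy] and line EWs [Angstrom]
s_core = np.array([0.02, 0.15, 0.01])
s_tot = np.array([0.30, 0.45, 0.25])
ew_hbeta = np.array([60.0, 95.0, 40.0])

fc = core_prominence(s_core, s_tot)
fit = stats.linregress(np.log10(fc), np.log10(ew_hbeta))
print(f"C = {fit.rvalue:.2f}, slope = {fit.slope:.2f}")
\end{verbatim}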
We obtained the strongest correlation between $\log \rm EW(\rm H\rm\alpha)$ and $\log \rm f_c$ (the correlation coefficient of the linear fit (C) is C=0.57 for GRQs and C=0.49 for SRQs), but it should be noted that the H$\alpha$ line is present in the spectra of only a few quasars, so this result is not representative of the entire sample. The EW of the H$\beta$ broad emission line is very weakly correlated with f$_{\rm c}$ (C=0.06 for GRQs and C=0.19 for SRQs); however, an overall trend of EW(H$\rm\beta$) increasing with f$_{\rm c}$ can be seen (Figure \ref{ew}). Other broad emission lines, such as H$\rm\gamma$, MgII and CIV, are also only very weakly correlated with f$_{\rm c}$. AGN models predict that the broad emission lines, which are produced close to the accretion disk, should be orientation-dependent and emitted anisotropically (e.g. \citealt{jackson1991}). The lack of correlation between the broad emission lines and the f$_{\rm c}$ parameter indicates that either the EW does not depend on orientation, or both the emission line and the optical continuum show the same dependence on orientation. On the other hand, the f$_{\rm c}$ parameter may not be a good indicator of orientation for very extended radio sources, because aged radio lobes have significantly decreased surface brightness due to radiation losses and adiabatic expansion. Jet interactions with the intergalactic medium, as well as cosmological surface-brightness dimming, may also affect the observed flux of the extended radio structures in GRQs. It is also possible that, in the case of quasars with a very high f$_{\rm c}$, the high core flux density is caused by recurrent radio jet emission which cannot be resolved in low-resolution radio maps (e.g. 4C +02.27, \citealt{kuzmicz2017}).
\begin{figure} \centering \includegraphics[width=1\columnwidth]{R_EWHbbr_sn5_b_2.png} \caption{The equivalent width of the H$\beta$ line versus the radio core prominence parameter f$_{\rm c}$. The fitted lines were obtained via standard linear regression. The same method was applied in Figures \ref{ewOIII}, \ref{p1} and \ref{p2}.} \label{ew} \end{figure}
For our samples of quasars, we also do not find the strong anticorrelation between the EW of [OIII] and f$_{\rm c}$ (Figure \ref{ewOIII}) reported by other authors (\citealt{baker1997}, \citealt{jackson1989}, \citealt{jackson2013}). The anticorrelation is predicted by the model in which the broad line region (BLR) is photo-ionized by radiation from the accretion disk, and the strength of the emission lines is correlated with the intensity of this radiation. When our line of sight is nearly perpendicular to the plane of the obscuring torus of the AGN (high f$_{\rm c}$), a low EW of [OIII] is expected. For larger inclinations (smaller f$_{\rm c}$), a higher EW should be observed due to obscuration of the ionizing component. The very weak anticorrelation for our GRQs and SRQs (C$\sim$-0.12) may result from the small number of core-dominated quasars in our samples, or because the f$_{\rm c}$ parameter may not be a good indicator of radio source orientation. Another possibility is that for very extended radio quasars, the structure of the innermost parts of the AGN (the dusty torus) does not lead to anisotropic obscuration of the ionizing continuum source.
\begin{figure} \centering \includegraphics[width=1\columnwidth]{R_EWOIII_sn5_b_2.png} \caption{The equivalent width of the [OIII] line against the f$_{\rm c}$ parameter.
} \label{ewOIII} \end{figure}
\section{[OIII] luminosity - radio luminosity relation} For our samples of QSOs, we checked the relation between the [OIII] luminosity (the L[OIII] values are taken from \citealt{suvendu2019}) and the total radio luminosity P$_{\rm tot}$, as well as the core radio luminosity P$_{\rm core}$, at 1.4 GHz (Figures \ref{p1} and \ref{p2}), which indicates a possible connection of the radio jet emission with the narrow line region (NLR). For the sample of GRQs we obtained a relatively high level of correlation: for the L[OIII] vs. P$_{\rm tot}$ relation the correlation coefficient is C=0.71, and for L[OIII] vs. P$_{\rm core}$ it is C=0.57. A lower, but still high, level of correlation was found in the sample of SRQs: the correlation coefficients of the L[OIII] vs. P$_{\rm tot}$ and L[OIII] vs. P$_{\rm core}$ relations are C=0.61 and C=0.53, respectively. Also, when we consider the relation between L[OIII] and the radio luminosity of the extended radio emission (P$_{\rm ext}$=P$_{\rm tot}$-P$_{\rm core}$), there are considerable correlations in both samples, much stronger than those obtained by \cite{gaur2019} for a sample of radio-loud quasars (C=0.25) or by \cite{tadhunter1998} for the 2 Jy complete sample of radio sources (C=0.38). This may indicate that the connection between the radio emission and the NLR in GRQs is quite significant. In the standard quasar illumination model (e.g. \citealt{rawlings1991}), the correlation between radio and [OIII] luminosity is explained by a direct relation between the power of the photoionizing continuum and the radio jet power.
\begin{figure} \centering \includegraphics[width=1\columnwidth]{LOIIIPtot_fit_sn5_2.png} \caption{Relation between the total radio luminosity at 1.4 GHz and the [OIII] luminosity.} \label{p1} \end{figure}
\begin{figure} \centering \includegraphics[width=1\columnwidth]{LOIIIPcore_fit_sn5_2.png} \caption{Relation between the core radio luminosity at 1.4 GHz and the [OIII] luminosity.} \label{p2} \end{figure}
\section{Eigenvector 1} A powerful tool for probing AGN properties and the fundamental correlations between different spectral components is Eigenvector 1 (EV1; \citealt{boroson92}). The EV1 optical plane is defined by the FWHM of the broad H$\beta$ line and the ratio R$_{\rm FeII}$ between the EW of FeII (4435--4685\AA) and the EW of the H$\beta$ broad line. The location of an object in the EV1 plane traces the so-called quasar main sequence. In the EV1 plane, two different spectral types of QSOs can be distinguished: population A with FWHM $\rm H\beta \leq$4000 km/s and population B with FWHM $\rm H\beta>$4000 km/s. Sources which belong to the same spectral type have similar spectroscopic properties, e.g. line flux ratios and line profiles \citep{sulentic2002}. The R$_{\rm FeII}$ parameter traces the variation of the Eddington ratio L/L$_{\rm Edd}$, while the FWHM of H$\beta$ traces a change in orientation (\citealt{marziani2001}, \citealt{sulentic2017}). It was postulated by \cite{zamfir2008} and \cite{marziani2018} that the separation line between populations A and B at FWHM H$\beta$=4000 km/s may correspond to a critical change in accretion disk structure: from a wind-dominated, geometrically thick accretion disk in population A to a disk-dominated, geometrically thin configuration in population B.\\ In Figure \ref{EW}, we plot the EV1 plane for the GRQ and SRQ samples. All the spectral parameters used in this study were taken from \cite{suvendu2019}.
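The population assignment used below reduces to a threshold on the H$\beta$ width together with the R$_{\rm FeII}$ ratio; a minimal sketch (with illustrative names and values, and the spectral quantities as provided by \citealt{suvendu2019}):
\begin{verbatim}
import numpy as np

def ev1_population(fwhm_hbeta_kms, ew_feii, ew_hbeta_broad):
    """Return R_FeII and the A/B population label for each source.
    Population A: FWHM(Hbeta) <= 4000 km/s; population B: > 4000 km/s."""
    r_feii = ew_feii / ew_hbeta_broad  # EW FeII(4435-4685 A) / EW Hbeta
    pop = np.where(fwhm_hbeta_kms <= 4000.0, "A", "B")
    return r_feii, pop

# Two illustrative sources: one narrow (pop A), one broad (pop B)
r, p = ev1_population(np.array([3200.0, 6800.0]),
                      np.array([25.0, 10.0]),
                      np.array([70.0, 110.0]))
print(r, p)
\end{verbatim}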
In Figure \ref{EW}, we also plot the location of all 18 273 quasars which are flagged in DR14Q as quasars with FIRST counterparts within 2$^{\prime\prime}$ \citep{paris2018}. We refer to this sample as the DR14Q FIRST quasars. In drawing the diagram, we used only the quasars that have spectra with S/N$>$5, and we did not remove GRQs and SRQs from the DR14Q FIRST sample, as they constitute only 1.6\% and 2\% of the latter sample, respectively. In Figure \ref{EW}, we also mark the division line between population A and population B at FWHM $\rm H\beta$=4000 km/s. It can be seen that most GRQs (91\%) and SRQs (83\%) belong to population B. The objects in this region are massive, old galaxies with low accretion rates. The location of the GRQs is typical of lobe-dominated quasars \citep{zamfir2008}; however, a few sources are located far from the main-sequence area occupied by lobe-dominated radio quasars. Two GRQs, J2315+2518 and J1408+3054, have extreme values of R$_{\rm FeII}$, as do two smaller-sized radio quasars, J1628+4552 and J1108+6451. The location of J1408+3054 can be explained by a large amount of out-flowing gas \citep{marziani2018}, as this GRQ is a broad absorption line quasar. The spectral properties, as well as the properties in other wavebands, have to be studied in more detail to understand the specific positions of the remaining quasars in the EV1 plane. \\
\begin{figure} \centering \includegraphics[width=1\columnwidth]{./EV1bc.png} \caption{The Eigenvector 1 plane for GRQs (black dots), SRQs (red dots) and DR14Q quasars with FIRST counterparts (DR14Q FIRST; green dots). The horizontal line at FWHM $\rm H\beta$=4000 km/s separates the two populations of quasars: population A below and population B above this line. The locations of the four extreme cases J1108+6451, J1628+4552, J1408+3054 and J2315+2518 are marked. The shaded area indicatively traces the distribution of the quasar sample from \cite{zamfir2010}, defining the quasar main sequence.} \label{EW} \end{figure}
The location of GRQs and SRQs in the EV1 plane is in agreement with the relation between BH mass (M$_{\rm BH}$) and Eddington ratio (the ratio of bolometric to Eddington luminosity, R$_{\rm Edd}$=L$_{\rm bol}$/L$_{\rm Edd}$) plotted in Figure \ref{bh}. For all quasars studied in this paper, the BH mass estimates, as well as the bolometric and Eddington luminosities, were taken from \cite{suvendu2019}. It can be clearly seen that GRQs and SRQs concentrate in the region of larger M$_{\rm BH}$ and lower Eddington ratio as compared to the DR14Q FIRST quasars, which confirms that the extended radio quasars are evolved sources, representing later stages of evolution, in which accretion processes are not currently significant.
\begin{figure} \centering \includegraphics[width=1\columnwidth]{./Mbhb.png} \caption{Eddington ratio vs. BH mass for GRQs, SRQs, and DR14Q FIRST quasars. The symbols are as in Figure \ref{EW}.} \label{bh} \end{figure}
\section{WISE colour diagram} In this section, we study the infrared colours based on the WISE magnitudes in the W1, W2, and W3 bands at 3.4, 4.6 and 12 $\mu$m, respectively \citep{wright2010}. The W1-W2 and W2-W3 colours were used by \cite{wright2010} for the classification of astrophysical objects: different classes of objects occupy different regions of the colour-colour diagram.
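For illustration, the WISE colours discussed below are simple magnitude differences, and membership in the elliptical region quoted for the GRQ and SRQ concentration can be tested per source. In the following sketch, the names are illustrative, and the alignment of the 0.8 semi-major axis with the W2-W3 colour is an assumption based on the description below:
\begin{verbatim}
import numpy as np

def wise_colours(w1, w2, w3):
    """WISE colours from W1, W2, W3 magnitudes."""
    return w1 - w2, w2 - w3

def in_grq_region(w1, w2, w3):
    """Test against the elliptical region quoted in the text:
    centre (W1-W2, W2-W3) = (1.0, 2.7), semi-axes 0.3 and 0.8
    (the 0.8 axis is assumed here to lie along W2-W3)."""
    c12, c23 = wise_colours(w1, w2, w3)
    return ((c12 - 1.0) / 0.3) ** 2 + ((c23 - 2.7) / 0.8) ** 2 <= 1.0

# Illustrative magnitudes: colours (1.1, 2.6) fall inside the region
print(in_grq_region(np.array([14.2]), np.array([13.1]), np.array([10.5])))
\end{verbatim}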
In our analysis we used the infrared magnitudes given in the DR14Q catalogue, which are the result of cross-matching the AllWISE Source Catalog \citep{cutri2013} with the DR14Q quasars. We used only the magnitudes with A or B quality flags, as listed for the 198 126 AllWISE matches found for the 526 356 DR14Q quasars. Almost all the quasars from the GRQ and SRQ samples were detected by WISE. We compared their colours with those of the quasars from DR14Q to check their position on the colour-colour diagram relative to the entire quasar population. In Figure \ref{wise}, we can see that most quasars occupy the region centred at W1-W2$\approx$1.1 and W2-W3$\approx$3.1. Such a position within the colour-colour diagram is characteristic of quasars (according to \citealt{wright2010}, \citealt{klindt2019}, \citealt{wu2012}). It can be seen that GRQs and SRQs occupy a smaller area than the quasars from DR14Q and the DR14Q FIRST quasars, being concentrated within an elliptical area in the upper left part of the diagram, centred at W1-W2=1 and W2-W3=2.7, with semi-major and semi-minor axes of 0.8 and 0.3, respectively. The smaller area occupied by GRQs and SRQs coincides with the region of the highest QSO density in DR14Q, and thus the region with the highest probability of finding quasars. However, when comparing the distributions of the WISE colours for GRQs and DR14Q FIRST quasars (Figure \ref{wise2}), it is evident that GRQs have bluer W2-W3 colours, while the W1-W2 colours of both samples have similar distributions. The same behaviour is observed for the SRQ sample, which means that, as compared to the population of DR14Q FIRST quasars, GRQs and SRQs show a deficit of mid-infrared radiation towards longer wavelengths. This can indicate differences in the structure of the dusty torus (e.g. \citealt{wildy2018}), which is responsible for the mid-infrared re-emission of absorbed radiation from the central source. It can also be a result of a lower star formation rate in GRQs and SRQs, as compared to DR14Q FIRST quasars (e.g. \citealt{klindt2019}).
\begin{figure} \centering \includegraphics[width=1\columnwidth]{./wiseGRQ.png} \caption{WISE colour--colour diagram for GRQs (black dots), SRQs (red dots), DR14Q (dark green dots), and DR14Q FIRST (green dots) quasars.} \label{wise} \end{figure}
\begin{figure} \centering \includegraphics[width=1\columnwidth]{./w2_w3.png} \includegraphics[width=1\columnwidth]{./w1_w2.png} \caption{Normalized distributions of the WISE colours for the GRQ and SDSS DR14Q FIRST samples.} \label{wise2} \end{figure}
\section{Concluding remarks} In this study we present the basic properties of a sample of 272 GRQs, the largest yet constructed, comprising 98 previously known GRQs plus 174 newly discovered here using NVSS, FIRST and SDSS data. Among the new GRQs, there are 70 quasars at redshift z$>$1, of which 9 are located at z$\geq$2, with the most distant one, J1411+0156, at z=2.946. The large number of GRQs found by our thorough and time-consuming visual inspection of radio images shows that some of the modern automated searches should be significantly modified. We have enlarged the number of known GRQs nearly threefold, showing that previous catalogues of GRSs are incomplete and that an extensive future search for GRGs is also needed.\\ The search method used for the identification of GRQs also allowed us to compile a sample of 367 extended radio quasars with smaller radio structures (0.2$<$D$<$0.7 Mpc), which can be used in other investigations.
It contains only the quasars found in this work.\\ Our analysis of the optical, radio and infrared properties of GRQs, SRQs and quasars from the DR14Q FIRST sample gave the following results: \begin{itemize} \item The radio core prominence for GRQs and SRQs is only very weakly correlated with the EWs of broad emission lines. Neither did we find any statistically significant anticorrelation between the EW of [OIII] and the radio core prominence. \item We obtained a strong correlation between the [OIII] luminosity and the total and core radio luminosities for GRQs, as well as for SRQs. \item Based on the positions of the quasars within the Eigenvector 1 plane, most GRQs and SRQs belong to population B, with FWHM $\rm H\beta$$>$4000 km/s, representing evolved objects with high BH masses and low accretion rates. \item A few quasars in our samples have extreme values of the EW FeII to EW H$\beta$ ratio compared to the other GRQs and SRQs. They should be studied separately to understand their specific locations in the Eigenvector 1 plane. \item The positions of GRQs and SRQs on the WISE colour-colour diagram show a deficit of mid-infrared radiation towards longer wavelengths, which may indicate differences in the structure of the dusty torus, as compared to quasars from the DR14Q FIRST sample. \end{itemize} We found no significant differences between GRQs and SRQs, which shows that, in general, extended radio quasars are a group of objects with similar spectral and infrared properties. The size of the radio structures seems to be independent of the host galaxies' spectral properties, while there is a strong connection between the radio luminosity and the [OIII] emission line emitted in the NLR. Also, the infrared W2-W3 colours of both GRQs and SRQs have smaller values than those of DR14Q quasars in general, indicating differences in their dusty torus structure.
\section{Acknowledgments} We thank the referee for valuable comments and corrections that helped to improve the paper. We also thank Conor Wildy for his helpful comments.\\ This paper was supported by the National Science Centre, Poland through the grant 2018/29/B/ST9/01793.
{ "timestamp": "2020-12-17T02:15:16", "yymm": "2012", "arxiv_id": "2012.08857", "language": "en", "url": "https://arxiv.org/abs/2012.08857" }
\section{Introduction} \label{sec:intro} \ac{AEC} is an essential part of any full-duplex hands-free acoustic communication application, e.g., teleconferencing or human-machine dialogue systems \cite{enzner_acoustic_2014}. Research on \ac{AEC} algorithms has evolved from simple time-domain \ac{AF} approaches \cite{widrow_b_adaptive_1960} and computationally efficient frequency-domain implementations \cite{ferrara_fast_1980} to recent deep learning-based approaches \cite{zhang_deep_2019, westhausen2020acoustic}. In general, current \ac{AEC} algorithms can be classified into model-based \ac{AF}-\ac{PF} approaches and direct \ac{DPF} approaches. While model-based algorithms show unmatched generalization to unknown acoustic environments, they require sophisticated adaptation control mechanisms to cope with double-talk situations \cite{enzner_acoustic_2014}. In particular, the probabilistically motivated inference of the \ac{AF} coefficients by a \ac{KF} \cite{enzner_frequency-domain_2006, kuech_state-space_2014} enabled the much sought-after continuous adaptation control without the need for a double-talk detector. However, despite the increased double-talk robustness, \ac{KF} approaches suffer from slow reconvergence after abrupt \acp{EPC}, which are commonly encountered with portable devices \cite{yang_frequency-domain_2017, jiang_improved_2019}. This slow recovery is caused by overestimating the noise \ac{PSD} matrix and is often remedied by auxiliary mechanisms like shadow filters \cite{yang_frequency-domain_2017} or trained noise models \cite{kfNMF}, which require additional computational power. Unlike \acp{AF}, \ac{DPF} algorithms for \ac{AEC} \cite{zhang_deep_2019, westhausen2020acoustic} do not require online adaptation when trained on adequate datasets. However, as communication devices are usually exposed to a large variety of acoustic environments, \ac{DNN} models with many parameters need to be trained, which limits their applicability to devices with sufficient computational power. These computational requirements can be mitigated by using smaller networks with fewer parameters when combining model-based and deep learning approaches \cite{Carbajal2018, pfeifenberger_nonlinear_2020, combAdFiltAndComValDPF}. However, most methods treat the \ac{AF} estimation independently of the \ac{DPF}. Recently, a \ac{DNN}-supported \ac{EM} optimization of a local Gaussian model has shown improved performance for joint reverberation, echo and noise reduction \cite{carbajal_joint_2020}. However, a narrowband assumption is made and the filter estimates are assumed to be time-invariant, which limits the performance for time-varying scenarios \cite{carbajal_joint_2020}. In this paper, we introduce a synergistic approach which combines a broadband adaptive \ac{KF} and a \ac{DPF} and thereby successfully copes with time-varying acoustic environments. We show how the slow reconvergence of the \ac{KF} after abrupt \acp{EPC} can be remedied by exploiting the different signal statistics of the various interfering components. By efficiently fusing the \ac{DPF} near-end estimate and the \ac{KF} estimation error, a robust estimate of the \ac{KF} step size is obtained without any auxiliary mechanisms.
\begin{figure} [b]
\vspace*{-.17cm}
\centering
\begin{tikzpicture}[node distance=1.5cm]
\tikzset{loudspeaker style/.style={draw,very thick,shape=loudspeaker,minimum size=5pt}}
\tikzset{microphone style/.style={draw,very thick,shape=microphone,minimum size=.5pt, inner sep=2.0pt}}
\node (sigInX) at (0,0) {};
\node [left of=sigInX, node distance=2cm, inner sep=.0] (sigInX1) {};
\node [right of=sigInX, inner sep=0] (spltX) {};
\node [below of=spltX, rectangle, draw, thick, node distance=1.0cm] (pbkf) {KF};
\node [below of=pbkf, circle, draw, inner sep=2, thick, node distance=1.9cm] (subtraction) {};
\node [rectangle, draw, thick] (pf) at ($(subtraction)+(-4.0,.1)$) {PF};
\node [left of=pf, thick] (epf) {};
\node [left of=pbkf, rectangle, draw, node distance=2.2cm] (psd) {PSD Est.};
\node[loudspeaker style,rotate=-90] (speaker) at (3.75,-.5) {};
\node[microphone style,rotate=90] (mic) at (3.75,-2.25) {};
\draw [thick] ($(sigInX1)$) -| (speaker.west) node [below, pos=-.119] {$\underline{\boldsymbol{x}}_{\tau}$};
\draw [->, thick] (pbkf) -- (subtraction.north) node [pos=.4, right] {$\widehat{\underline{\boldsymbol{d}}}_{\text{early},\tau}$} node [pos=.8, right] {$-$};
\draw [->, thick] (mic.west) |- (subtraction.east) node [pos=.85, above] {$\underline{\boldsymbol{y}}_{\tau}$};
\draw [->, thick] ($(pbkf.north)+(0,.74)$) -- (pbkf.north);
\draw [->, thick] (subtraction.west) -- ($(pf.east)+(0,-.1)$) node [pos=.3, above] {$\underline{\boldsymbol{e}}_{\tau}^+$};
\draw [->, thick] (pf.west) -- (epf) node [pos=.5, above] {$\hat{\underline{\boldsymbol{s}}}_{\tau}$};
\draw [->, thick] ($(sigInX1)+(0.2,.0)$) |- ($(pf.east) + (0,.1)$);
\draw [->, thick] ($(pf.north)-(-0,0)$) |- (psd.west) node [pos=.3, left] {$\widehat{\boldsymbol{M}}_\tau$};
\draw [thick, <-] (psd.south) -- ($(psd.south)+(0,-1.65)$);
\draw [->, thick] (psd.east) -- (pbkf.west) node [pos=.5, above] {$\hat{\boldsymbol{\Psi}}_{\tau}^{(\cdot)}$};
\draw [thick] (2.625,-2.7) rectangle (4.5,-.25);
\draw [thick] ($(sigInX1)+(-1.8,0)$) -- ($(sigInX1)$);
\draw [thick, dashed, ->] ($(speaker)-(.0,.2)$) -- ($(mic)+(.0,.2)$);
\draw [thick, dashed, ->] ($(speaker)-(.0,.2)$) -- ($(speaker)+(.75,-.8)$) -- ($(mic)+(.1,.2)$);
\node [] (d) at ($(mic)+(.55,.3)$) {$\underline{\boldsymbol{d}}_\tau$};
\draw [fill] (3.22,-1.12) circle (.05);
\draw [thick] (3.1,-1.2) arc(225:360:.15);
\node [] (s) at ($(3.22,-1.70)$) {$\underline{\boldsymbol{s}}_\tau$};
\draw [dashed, ->, thick] ($(3.3,-1.3)+(0,0)$) -- ($(mic)+(-.1,.2)$);
\draw [fill] (2.8,-2.2) circle (.05);
\draw [thick] (2.9,-2.3) arc(-45:45:.15);
\draw [dashed, ->, thick] ($(3.0,-2.2)+(0,0)$) -- ($(mic)+(-.2,.10)$);
\node [] (s) at ($(3.32,-2.45)$) {$\underline{\boldsymbol{n}}_\tau$};
\end{tikzpicture}
\caption{Proposed synergistic \ac{KF}+\ac{DPF} approach to \ac{AEC}.}
\label{fig:alg_overview}
\end{figure}
In the following, we use bold uppercase letters for matrices and bold lowercase letters for vectors, with underlined symbols indicating time-domain quantities. A matrix element in the $m$th row and the $n$th column is indicated by $[\cdot]_{mn}$. We denote the all-zero matrix of dimensions $D_1 \times D_2$ by $\boldsymbol{0}_{D_1 \times D_2}$, the $D$-dimensional identity matrix by $\boldsymbol{I}_D$ and the $D$-dimensional \ac{DFT} matrix by $\boldsymbol{F}_D$. Furthermore, the transposition and Hermitian transposition are represented by $(\cdot)^{\text{T}}$ and $(\cdot)^{\text{H}}$, respectively.
The proper complex Gaussian \ac{PDF}, with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Psi}$, is denoted by $\mathcal{N}_c(\cdot| \boldsymbol{\mu}, \boldsymbol{\Psi})$ and the expectation operator by $\mathbb{E}[\cdot]$. Finally, the $\text{diag}(\cdot)$ operator constructs a diagonal matrix from its vector-valued argument.
\section{Probabilistic Signal Model} \label{sec:prob_sig_mod} The observed time-domain microphone signal block ${\underline{\boldsymbol{y}}}_\tau$ is modelled as a linear superposition of an early echo component $\underline{\boldsymbol{d}}_{\text{early},\tau}$, a late echo component $\underline{\boldsymbol{d}}_{\text{late},\tau}$, background noise $\underline{\boldsymbol{n}}_{\tau}$ and a near-end speaker signal $\underline{\bs}_\tau$ (cf.~Fig.~\ref{fig:alg_overview}), as follows: \begin{equation} {\underline{\boldsymbol{y}}}_\tau = \underline{\boldsymbol{d}}_{\text{early},\tau} + \underline{\boldsymbol{d}}_{\text{late},\tau} + \underline{\boldsymbol{n}}_{\tau} + \underline{\bs}_\tau \in \mathbb{R}^{R} \label{eq:td_sig_mod} \end{equation} where the signal block ${\underline{\boldsymbol{y}}}_\tau$ at time index $\tau$ consists of $R$ consecutive samples, i.e., \begin{equation} \underline{\boldsymbol{y}}_\tau = \begin{bmatrix} \underline{y}_{\tau R - R + 1}, \underline{y}_{\tau R - R + 2}, \dots, \underline{y}_{\tau R} \end{bmatrix}^{{\text{T}}} \in \mathbb{R}^{R}, \end{equation} and $\underline{\boldsymbol{d}}_{\text{early},\tau}$, $\underline{\boldsymbol{d}}_{\text{late},\tau}$, $\underline{\bs}_\tau$ and $\underline{\boldsymbol{n}}_{\tau}$ are defined analogously. The early echo component ${\underline{\boldsymbol{d}}}_{\text{early}, \tau}$ is modelled by a linear convolution of an \ac{FIR} filter $\underline{\boldsymbol{w}}_{\tau} \in \mathbb{R}^{L}$ with the corresponding far-end signal block $\underline{\boldsymbol{x}}_{\tau}\in \mathbb{R}^{L+R-1}$. The linear convolution can be implemented efficiently by a \ac{PBC}. For this, the \ac{FIR} filter $\underline{\boldsymbol{w}}_{\tau}$ is separated into $B=\frac{L}{R}$ partitions $\underline{\w}_{b,\tau} \in \mathbb{R}^R$. Subsequently, each partition is convolved with a correspondingly delayed far-end block \begin{equation} \underline{\boldsymbol{x}}_{b,\tau} = \begin{bmatrix} \underline{x}_{(\tau-b) R - M + 1}, \underline{x}_{(\tau-b) R - M + 2}, \dots, \underline{x}_{(\tau-b) R} \end{bmatrix}^{{\text{T}}} \in \mathbb{R}^{M} \label{eq:x_def} \end{equation} of length $M=2R$, and the convolution products are added. By implementing each linear convolution in the \ac{DFT} domain, one obtains \cite{kuech_state-space_2014}: \begin{equation} {\underline{\boldsymbol{d}}}_{\text{early}, \tau} = \sum_{b=0}^{B-1} \boldsymbol{Q}_1^{{\text{T}}} \boldsymbol{F}_M^{-1} \boldsymbol{X}_{b,\tau} {\w}_{b,\tau} \in \mathbb{R}^{R} \label{eq:timeDomObsEq} \end{equation} with the constraint matrix \makebox{$\boldsymbol{Q}_1^{\text{T}} = \begin{bmatrix}\boldsymbol{0}_{R \times R} & \boldsymbol{I}_R\end{bmatrix}$}, the \ac{DFT}-domain \ac{FIR} filter partition ${\w}_{b,\tau} = \boldsymbol{F}_M \boldsymbol{Q}_2 \underline{\w}_{b,\tau} \in \mathbb{C}^M $, the \ac{DFT}-domain far-end signal block \makebox{$\X_{b,\tau} = \text{diag} \left( \boldsymbol{F}_M \underline{\boldsymbol{x}}_{b,\tau} \right)$} and the zero-padding matrix \makebox{$\boldsymbol{Q}_2^{\text{T}}= \begin{bmatrix}\boldsymbol{I}_{R} & \boldsymbol{0}_{R \times R }\end{bmatrix}$}.
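For illustration, the \ac{PBC} of Eq.~\eqref{eq:timeDomObsEq} can be sketched in a few lines of NumPy (a minimal sketch only; the function name and signal layout are illustrative and not part of the algorithm definition):
\begin{verbatim}
import numpy as np

def pbc_early_echo(x_blocks, w_parts, R):
    """Overlap-save partitioned-block convolution.
    x_blocks: B delayed far-end blocks, each of length M = 2R
    w_parts:  B DFT-domain filter partitions (zero-padded length-R
              time-domain partitions transformed by an M-point DFT)
    Returns the length-R early-echo block."""
    d = np.zeros(R)
    for x_b, w_b in zip(x_blocks, w_parts):
        X_b = np.fft.fft(x_b)               # diagonal of X_{b,tau}
        y_b = np.fft.ifft(X_b * w_b).real   # circular convolution
        d += y_b[R:]                        # Q_1^T keeps the last R samples
    return d

# Consistency check against a direct linear convolution
R, B = 4, 2
w = np.random.randn(B * R)                  # FIR filter of length L = BR
x = np.random.randn(64)
w_parts = [np.fft.fft(np.r_[w[b*R:(b+1)*R], np.zeros(R)]) for b in range(B)]
tau = 5
x_blocks = [x[(tau - b)*R - 2*R:(tau - b)*R] for b in range(B)]
d = pbc_early_echo(x_blocks, w_parts, R)
ref = np.convolve(x, w)[tau*R - R:tau*R]
print(np.allclose(d, ref))                  # True
\end{verbatim}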
A relation between the noisy observation $\underline{\boldsymbol{y}}_\tau$ and the filter partitions $\boldsymbol{w}_{b, \tau}$ is obtained by inserting the \ac{PBC} model \eqref{eq:timeDomObsEq} into the signal model \eqref{eq:td_sig_mod}. By using ${\boldsymbol{y}}_\tau = \boldsymbol{F}_M \boldsymbol{Q}_1 \underline{\boldsymbol{y}}_\tau$, this time-domain observation equation is transformed to the \ac{DFT} domain: \begin{equation} {{\boldsymbol{y}}}_\tau = \sum_{b=0}^{B-1} \boldsymbol{C}_{b,\tau} {\w}_{b,\tau} + {\boldsymbol{d}}_{\text{late},\tau} + {\boldsymbol{n}}_{\tau} + {\bs}_\tau \in \mathbb{C}^{M} \label{eq:fd_sig_mod} \end{equation} with the \ac{DFT}-domain signal components $ {\boldsymbol{d}}_{\text{late},\tau}$, ${\boldsymbol{n}}_{\tau}$, and $ {\bs}_\tau$ and the overlap-save-constrained far-end signal blocks \makebox{$\boldsymbol{C}_{b,\tau} = \boldsymbol{F}_M \boldsymbol{Q}_1 \boldsymbol{Q}_1^{\text{T}} \boldsymbol{F}_M^{-1} \X_{b,\tau}$}. Note that the corresponding time-domain signals can be computed by the inverse transform \makebox{${\underline{\boldsymbol{y}}}_{\tau} = \boldsymbol{Q}_1^{\text{T}}\boldsymbol{F}_M^{-1}{\boldsymbol{y}}_{\tau}$}. The late echo component ${\boldsymbol{d}}_{\text{late},\tau}$, the background noise ${\boldsymbol{n}}_{\tau}$, and the near-end speaker signal ${\bs}_\tau$ are considered as additive disturbances when estimating the \ac{AF} partitions ${\w}_{b,\tau}$ in Eq.~\eqref{eq:fd_sig_mod}. In the following, we model each of these interfering components as a zero-mean, non-stationary and spectrally uncorrelated proper complex Gaussian random process with the respective \acp{PDF} \begin{align} p({\boldsymbol{d}}_{\text{late}, \tau}) &= \mathcal{N}_c({\boldsymbol{d}}_{\text{late},\tau}|\boldsymbol{0}_{M \times 1}, {{\boldsymbol{\Psi}}}_{\tau}^{\text{D}_{\text{late}}}) \label{eq:d_late_psd} \\ p({\boldsymbol{n}}_\tau) &= \mathcal{N}_c({\boldsymbol{n}}_{\tau}|\boldsymbol{0}_{M \times 1}, {\boldsymbol{\Psi}}_{\tau}^{\text{N}}), \label{eq:bg_psd}\\ p({\bs}_\tau) &= \mathcal{N}_c({\bs}_{\tau}|\boldsymbol{0}_{M \times 1}, {\boldsymbol{\Psi}}_{\tau}^{\text{S}}), \label{eq:nearend_psd} \end{align} with the diagonal \ac{PSD} matrices ${\boldsymbol{\Psi}}_{\tau}^{\text{D}_{\text{late}}}$, ${\boldsymbol{\Psi}}_{\tau}^{\text{N}}$ and \makebox{${\boldsymbol{\Psi}}_{\tau}^{\text{S}} \in \mathbb{C}^{M \times M}$}. We assume ${\boldsymbol{d}}_{\text{late}, \tau}$, ${\boldsymbol{n}}_\tau$ and ${\boldsymbol{\bs}}_\tau$ to be mutually uncorrelated: \begin{equation} \mathbb{E}\left[{\boldsymbol{d}}_{\text{late},\tau} {\boldsymbol{n}}_{\tau}^{\text{H}} \right]= \mathbb{E}\left[{\boldsymbol{d}}_{\text{late},\tau} {\boldsymbol{s}}_{\tau}^{\text{H}} \right]= \mathbb{E}\left[{\boldsymbol{s}}_{\tau} {\boldsymbol{n}}_{\tau}^{\text{H}} \right] = \boldsymbol{0}_{M \times M}. \end{equation} Finally, to account for the time variance of acoustic environments, the temporal evolution of each \ac{AF} partition ${\w}_{b,\tau}$ is modelled by a \ac{DFT}-domain random-walk Markov model \cite{kuech_state-space_2014} \begin{align} {\w}_{b,\tau} &= A ~{\w}_{b,\tau - 1} + \Delta {{\w}}_{b,\tau} \label{eq:stateTransMod} \end{align} with the process noise vector of the $b$th partition $\Delta {{\w}}_{b,\tau}$ and the state transition coefficient $0 < A < 1$.
The process noise $\Delta {{\w}}_{b,\tau}$ is assumed to be distributed according to \begin{equation} p(\Delta {{\w}}_{b,\tau}) = \mathcal{N}_c(\Delta {{\w}}_{b,\tau}|{\boldsymbol{0}_{M \times 1}}, {\boldsymbol{\Psi}}_{b,\tau}^{\Delta\text{W}}) \label{eq:process_noise_model} \end{equation} with the diagonal process noise \ac{PSD} matrix ${\boldsymbol{\Psi}}_{b,\tau}^{\Delta\text{W}}$. Note that Eqs.~\eqref{eq:fd_sig_mod}--\eqref{eq:process_noise_model} represent a linear Gaussian state-space model with the \ac{DFT}-domain \ac{AF} partitions ${\w}_{b,\tau}$ as the states and the microphone signal blocks $\boldsymbol{y}_\tau$ as the observations.
\section{Acoustic Echo Cancellation Model} \label{sec:aec_alg} The considered \ac{AEC} architecture is depicted in Fig.~\ref{fig:alg_overview}. A linear \ac{AF} estimates the early echo component $\underline{\boldsymbol{d}}_{\text{early}, \tau}$, which is then subtracted from the noisy observation $\underline{\boldsymbol{y}}_\tau$. The estimation error signal $\underline{\boldsymbol{e}}_\tau^+$ is used as input to a \ac{DPF}. Finally, for a double-talk robust adaptation control of the \ac{AF} partitions $\boldsymbol{w}_{b,\tau}$, we propose a novel approach to estimating the observation noise \ac{PSD} matrix which exploits the near-end speaker estimate of the \ac{DPF} and the estimation error $\underline{\boldsymbol{e}}_\tau^+$. \subsection{Adaptive Kalman Filter} \label{sec:pbkf} We model the state posterior of each \ac{AF} partition ${\boldsymbol{w}}_{b,\tau}$, given all preceding observations ${\boldsymbol{Y}}_{1:\tau} = \begin{bmatrix}{\boldsymbol{y}}_{1}, & \dots, & {\boldsymbol{y}}_{\tau}\end{bmatrix}$, by \begin{equation} p({\boldsymbol{w}}_{b,\tau} | {\boldsymbol{Y}}_{1:\tau} ) = \mathcal{N}_c \left({\boldsymbol{w}}_{b,\tau}|\hat{\boldsymbol{w}}_{b,\tau}, {\bP}_{b,\tau} \right) \end{equation} with mean $\hat{\boldsymbol{w}}_{b,\tau}$ and diagonal state uncertainty matrix ${\bP}_{b,\tau}$. Due to the linear Gaussian model (cf.~Eqs.~\eqref{eq:fd_sig_mod}--\eqref{eq:process_noise_model}), a closed-form inference of the state posterior is given by the \ac{KF} equations.
By setting the transition factor to one in the prediction of the echo and introducing a gradient constraint, the diagonalized \ac{PBKF} is obtained \cite{kuech_state-space_2014}: \begin{align} \label{eq:eStep} & \widehat{\boldsymbol{d}}_{\text{early},\tau}=\sum_{b=0}^{B-1} {\C}_{b,\tau} \hat{\boldsymbol{w}}_{b,\tau - 1} \approx A~\sum_{b=0}^{B-1} {\C}_{b,\tau} \hat{\boldsymbol{w}}_{b,\tau - 1} \notag \\ & {\e}_{\tau}^{+} = {\y}_{\tau} - \widehat{\boldsymbol{d}}_{\text{early},\tau} \notag\\ &{\bP}^{+}_{b,\tau - 1} = A^2 ~ {\bP}_{b,\tau - 1} + {\boldsymbol{\Psi}}^{\Delta\text{W}}_{b,\tau} \\ &\boldsymbol{\Lambda}_{b,\tau} = {\bP}^{+}_{b,\tau - 1} \left(\sum_{\tilde{b}=0}^{B-1} {\X}_{\tilde{b},\tau} {\bP}^{+}_{\tilde{b},\tau-1} {\X}_{\tilde{b},\tau}^{\herm} + \frac{M}{R} {\boldsymbol{\Psi}}^{\text{I}}_{\tau}\right)^{-1} \notag\\ &\hat{\boldsymbol{w}}_{b,\tau} = \hat{\boldsymbol{w}}_{b,\tau - 1} + \boldsymbol{G} \boldsymbol{\Lambda}_{b,\tau} {\X}_{b,\tau}^{\herm} {\e}_{\tau}^{+} \notag\\ &{\bP}_{b,\tau} = \left({\I}_M - \frac{R}{M} \boldsymbol{\Lambda}_{b,\tau} {\X}_{b,\tau}^{\herm} {\X}_{b,\tau}\right) {\bP}^{+}_{b,\tau-1} \notag \end{align} with the estimated early echo signal $\widehat{\boldsymbol{d}}_{\text{early},\tau}$, the prior error ${\e}_{\tau}^{+}$, the gradient constraint matrix $\boldsymbol{G} = \boldsymbol{F}_M \boldsymbol{Q}_2 \boldsymbol{Q}_2^{\text{T}} \boldsymbol{F}_M^{-1}$ and the adaptive diagonal step-size matrix $\boldsymbol{\Lambda}_{b,\tau}$. The double-talk robustness and convergence properties of the \ac{KF} crucially depend on a precise estimate of the observation noise \ac{PSD} matrix \makebox{${\boldsymbol{\Psi}}^{\text{I}}_{\tau} = {\boldsymbol{\Psi}}^{\text{D}_{\text{late}}}_{\tau} + {\boldsymbol{\Psi}}^{\text{N}}_{\tau} + {\boldsymbol{\Psi}}^{\text{S}}_{\tau}$} (cf.~Sec.~\ref{sec:psd_est}). \subsection{Deep Neural Network-based Postfilter} \label{sec:deep_pf} The aim of any \ac{PF} is the estimation of the near-end signal $\underline{\boldsymbol{s}}_\tau$ given the estimated error $\boldsymbol{e}_{\tau}^+$. We consider a recurrent \ac{DNN}-based \ac{PF} which is inspired by \cite{pfeifenberger_nonlinear_2020, Xia2020} and comprises four layers. The first layer is a dense feedforward layer with tanh activations which combines the input features into a vector of dimension $P$. Subsequently, two stacked GRU layers are added, which extract temporal information from the combined features. Finally, a dense feedforward output layer with sigmoid activations maps the GRU states to a corresponding frame-wise diagonal masking matrix $\widehat{\boldsymbol{M}}_\tau$. As input features $\tilde{\boldsymbol{u}}_{\text{feat},\tau}$ to the \ac{DNN}, we use the normalized logarithmic power spectra of the prior error and the far-end signal.
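A minimal PyTorch sketch of this four-layer mask estimator is given below; the hidden dimension, the feature sizes in the usage example, the use of PyTorch, and all names are illustrative assumptions, and the exact feature computation is detailed next:
\begin{verbatim}
import torch
import torch.nn as nn

class MaskPostfilter(nn.Module):
    """Dense(tanh) -> 2 stacked GRUs -> Dense(sigmoid) mask estimator.
    n_feat: length of the stacked input feature vector per frame
    n_bins: number of frequency bins of the output mask
    p:      combined-feature dimension (illustrative choice)."""
    def __init__(self, n_feat, n_bins, p=512):
        super().__init__()
        self.combine = nn.Sequential(nn.Linear(n_feat, p), nn.Tanh())
        self.gru = nn.GRU(p, p, num_layers=2, batch_first=True)
        self.out = nn.Sequential(nn.Linear(p, n_bins), nn.Sigmoid())

    def forward(self, feats, state=None):
        # feats: (batch, frames, n_feat) normalized log-power spectra
        # of the prior error and the far-end signal, stacked per frame
        h = self.combine(feats)
        h, state = self.gru(h, state)
        return self.out(h), state  # frame-wise mask entries in (0, 1)

# e.g. one utterance of 100 frames, two stacked 257-bin spectra as input
pf = MaskPostfilter(n_feat=2 * 257, n_bins=257)
mask, _ = pf(torch.randn(1, 100, 2 * 257))
\end{verbatim}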
To compute these features, we first compute the \ac{STFT} of the time-domain signals (cf.~Sec.~\ref{sec:prob_sig_mod}), \begin{align} \tilde{\boldsymbol{u}}_{\tau} = \begin{bmatrix} \tilde{\boldsymbol{e}}_{\tau}^+ \\ \tilde{\boldsymbol{x}}_{0,\tau} \end{bmatrix} = \begin{bmatrix} \boldsymbol{F}_M \boldsymbol{V} \begin{bmatrix} (\underline{\boldsymbol{e}}_{\tau-1}^+)^{\text{T}} & (\underline{\boldsymbol{e}}_{\tau}^+)^{\text{T}} \end{bmatrix}^{\text{T}} \\ \boldsymbol{F}_M \boldsymbol{V} \underline{\boldsymbol{x}}_{0,\tau} \end{bmatrix} \in \mathbb{C}^{2M} \label{eq:feature_e_x} \end{align} with the diagonal window matrix $\boldsymbol{V} \in \mathbb{R}^{M \times M}$ and $\tilde{(\cdot)}$ denoting windowed \ac{STFT}-domain quantities. Subsequently, the normalized logarithmic power spectrum is computed by \begin{equation} [\tilde{\boldsymbol{u}}_{\text{feat},\tau}]_m = \frac{\log (\text{max}(| [\tilde{\boldsymbol{u}}_{\tau}]_m|^{2}, \epsilon_1)) - [\boldsymbol{\mu}]_{m}}{ [\boldsymbol{\sigma}]_m}, \end{equation} with $m=0,\dots,M-1$ and $\epsilon_1>0$ being a small constant to avoid numerical instabilities. Here, the estimated mean and standard deviation of the feature vector are denoted by $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$, respectively. Note that, due to the symmetry of the \ac{STFT}, only the non-redundant $M+1$ frequency bins of $\tilde{\boldsymbol{u}}_{\text{feat},\tau}$ are used as features. The parameters $\boldsymbol{\theta}$ of the \ac{DNN} are trained by minimizing the cost function \cite{carbajal_joint_2020, nugraha_multichannel_2016} \begin{equation} \mathcal{J}_{\text{PF}}(\boldsymbol{\theta}) = \sum_{\tau,m} d_{\text{KL}}(|\tilde{s}_{m \tau}| , |\hat{\tilde{s}}_{m \tau}| ), \label{eq:cost_func} \end{equation} defined by the \acl{KL} divergence \begin{equation} d_{\text{KL}}(|\tilde{s}_{m \tau}| , |\hat{\tilde{s}}_{m \tau}| ) = - |\tilde{s}_{m \tau}| \log(|\hat{\tilde{s}}_{m \tau}| + \epsilon_2) + |\hat{\tilde{s}}_{m \tau}| \label{eq:kl_div} \end{equation} between the magnitudes of the true \ac{STFT} near-end component \makebox{$ \tilde{s}_{m\tau} = \left[ \boldsymbol{F}_M \boldsymbol{V} \begin{bmatrix} (\underline{\boldsymbol{s}}_{\tau-1})^{\text{T}} & (\underline{\boldsymbol{s}}_{\tau})^{\text{T}} \end{bmatrix}^{\text{T}}\right]_m $} and the estimated one \makebox{$\hat{\tilde{{s}}}_{m \tau} = [\widehat{\boldsymbol{M}}_{\tau}]_{mm} [\tilde{\boldsymbol{e}}_{\tau}^+]_m$}. All constant terms have been omitted in Eq.~\eqref{eq:kl_div}, and a regularization term $\epsilon_2$ has been included \cite{nugraha_multichannel_2016}. Note that any mask-based \ac{PF} of compatible dimensions could be used for the synergistic combination with the \ac{KF} by supporting the observation noise \ac{PSD} estimation (cf.~Sec.~\ref{sec:psd_est}).
\section{Proposed Power Spectral Density Estimation} \label{sec:psd_est} For a fast-converging and double-talk robust adaptation of the \ac{AF} partitions $\hat{\boldsymbol{w}}_{b,\tau}$, a precise estimation of the observation noise \ac{PSD} matrix ${\boldsymbol{\Psi}}^{\text{I}}_{\tau}$ and of the process noise \ac{PSD} matrices ${\boldsymbol{\Psi}}_{b,\tau}^{\Delta\text{W}}$ is decisive. In particular, the observation noise \ac{PSD} matrix ${\boldsymbol{\Psi}}^{\text{I}}_{\tau}$ is, due to its rapidly changing behaviour, difficult to estimate.
In contrast to state-of-the-art approaches, we propose to exploit the different statistical properties of the signal components generating the observation $\boldsymbol{y}_\tau$ (cf.~Eqs.~\eqref{eq:td_sig_mod}~and~\eqref{eq:fd_sig_mod}). We start by representing the prior error signal \begin{equation} \boldsymbol{e}_\tau^+ = {\boldsymbol{e}}_{\text{early},\tau}^+ + \boldsymbol{p}_{\tau} + \boldsymbol{s}_{\tau} \end{equation} in terms of the early echo error signal \makebox{$ {\boldsymbol{e}}_{\text{early},\tau}^+ = {\boldsymbol{d}}_{\text{early},\tau} - \widehat{\boldsymbol{d}}_{\text{early},\tau}$}, the late reverberation and background noise signal \makebox{$\boldsymbol{p}_{\tau}=\boldsymbol{d}_{\text{late},\tau}+\boldsymbol{n}_{\tau}$} and the desired near-end speaker signal $\boldsymbol{s}_{\tau}$. By assuming the early echo error ${\boldsymbol{e}}_{\text{early},\tau}^+ $, the noise signal $\boldsymbol{p}_{\tau}$ and the near-end speaker signal $\boldsymbol{s}_{\tau} $ to be mutually uncorrelated, the \ac{PSD} matrix of the prior error $\boldsymbol{e}^+_\tau$ is given by \begin{equation} {\boldsymbol{\Psi}}_{\tau}^{\text{E}} = {\boldsymbol{\Psi}}_{\tau}^{{\text{E}}_{\text{early}}} + \boldsymbol{\Psi}_\tau^{\text{P}} + {\boldsymbol{\Psi}}_{\tau}^{\text{S}} \label{eq:psd_est_prior_error} \end{equation} with ${\boldsymbol{\Psi}}_{\tau}^{\text{P}} = {\boldsymbol{\Psi}}_{\tau}^{\text{D}_{\text{late}}} + {\boldsymbol{\Psi}}_{\tau}^{\text{N}}$ and \makebox{${\boldsymbol{\Psi}}_{\tau}^{{\text{E}}_{\text{early}}} = \mathbb{E}\left[ {\boldsymbol{e}}_{\text{early},\tau}^+ \left( {\boldsymbol{e}}_{\text{early},\tau}^+\right)^{\text{H}} \right]$}. We now analyze the dynamic behaviour of the different \acp{PSD}. The early echo error \ac{PSD} matrix ${\boldsymbol{\Psi}}_{\tau}^{{\text{E}}_{\text{early}}}$ is assumed to be time-variant, and its norm decreases during convergence of the \ac{AF} partitions $\hat{\boldsymbol{w}}_{b, \tau}$. In contrast, we can assume the \ac{PSD} matrix of the late echo and background noise ${\boldsymbol{\Psi}}_{\tau}^{\text{P}}={\boldsymbol{\Psi}}_\tau^{\text{D}_\text{late}} + {\boldsymbol{\Psi}}_\tau^{\text{N}}$ to be only slowly time-varying. This is motivated by the temporal smoothing effect resulting from the tails of \acp{RIR} \cite{kuttruff2016room} and by the characteristics of many background noise signals, e.g., microphone noise or babble noise. On the other hand, the near-end speaker \ac{PSD} matrix ${\boldsymbol{\Psi}}_{\tau}^{\text{S}}$ is modelled as potentially rapidly time-varying, following the dynamics of speech signals. While state-of-the-art approaches aim at a direct estimation of ${\boldsymbol{\Psi}}_{\tau}^{\text{I}}$ from the prior error signal $\boldsymbol{e}_\tau^+$, we propose the additive observation noise \ac{PSD} estimator \begin{equation} \hat{\boldsymbol{\Psi}}_{\tau}^{\text{I}} = \hat{\boldsymbol{\Psi}}_{\tau}^{\text{P}} + \hat{\boldsymbol{\Psi}}_{\tau}^{\text{S}} \label{eq:psd_est_tilde_n} \end{equation} which allows us to exploit the different time variance of the statistics of $\boldsymbol{p}_\tau$ and $\boldsymbol{s}_\tau$.
Considering \acp{PF} that are designed to extract the desired near-end speaker $\tilde{\boldsymbol{s}}_{\tau}$ with minimum distortion from the prior error $\tilde{\boldsymbol{e}}_\tau^+$ (cf.~Eq.~\eqref{eq:cost_func}), a straightforward estimator for the near-end \ac{PSD} ${\boldsymbol{\Psi}}_{\tau}^{\text{S}}$ is given by the periodogram of the masked prior error \begin{align} \left[\hat{\boldsymbol{\Psi}}_{\tau}^{\text{S}}\right]_{mm} =\lambda_{S} \left[\hat{\boldsymbol{\Psi}}_{\tau-1}^{\text{S}}\right]_{mm} + (1-\lambda_{S}) \left| \left[ \widehat{\boldsymbol{M}}_\tau \boldsymbol{e}_\tau^+ \right]_m \right|^2 \label{eq:obs_noise_est_S} \end{align} with the recursive averaging factor $\lambda_{S}$. Note that due to the same \ac{DFT} length $M$, the estimated \ac{STFT} mask $\widehat{\boldsymbol{M}}_\tau$ can be applied in the overlap-save domain as well. By subtracting the near-end \ac{PSD} matrix estimate $\hat{\boldsymbol{\Psi}}_{\tau}^{\text{S}}$ from Eq.~\eqref{eq:psd_est_prior_error} and assuming \makebox{${\boldsymbol{\Psi}}_{\tau}^{\text{S}} \approx \hat{\boldsymbol{\Psi}}_{\tau}^{\text{S}}$}, the prior error \ac{PSD} matrix is given by \makebox{${\boldsymbol{\Psi}}_{\tau}^{{\text{E}}} \approx {\boldsymbol{\Psi}}_{\tau}^{\text{E}_{\text{early}}} + \boldsymbol{\Psi}_\tau^{\text{P}}$}. As the late echo and background noise \ac{PSD} matrix ${\boldsymbol{\Psi}}_{\tau}^{\text{P}}$ varies only slowly compared to the early echo error \ac{PSD} matrix ${\boldsymbol{\Psi}}_{\tau}^{\text{E}_{\text{early}}}$, any stationary noise \ac{PSD} estimator can be used for its inference. In this paper we use the minimum statistics estimator \cite{minimum_statistics}, which is given by the minimum of the latest $\kappa$ estimates \begin{align} \left[{\hat{\boldsymbol{\Psi}}}_{\tau}^{\text{P}}\right]_{mm} = \min \begin{bmatrix} \left[{\boldsymbol{\Upsilon}}_{\tau-\kappa+1}^{\text{P}}\right]_{mm}, & \dots, & \left[{\boldsymbol{\Upsilon}}_{\tau}^{\text{P}}\right]_{mm} \end{bmatrix} \label{eq:obs_noise_est_P} \end{align} of a smoothed periodogram \begin{equation} \left[{\boldsymbol{\Upsilon}}_{\tau}^{\text{P}}\right]_{mm} = \lambda_{P} \left[{\boldsymbol{\Upsilon}}_{\tau-1}^{\text{P}}\right]_{mm} + (1-\lambda_{P}) \left| \left[\hat{\boldsymbol{p}}_\tau \right]_m \right|^2 \label{eq:obs_noise_est_P_tilde} \end{equation} with the late echo and background noise estimate \makebox{$\hat{\boldsymbol{p}}_\tau= \left(\boldsymbol{I}_M -\widehat{\boldsymbol{M}}_{\tau} \right) \boldsymbol{e}_\tau^+$} and the recursive averaging factor $\lambda_P$. Note that ${\hat{\boldsymbol{\Psi}}}_{\tau}^{\text{P}}$ can be interpreted as a temporally smoothly changing minimum regularization in the \ac{KF} step size \eqref{eq:eStep}. Finally, as proposed in \cite{kuech_state-space_2014}, the process noise \ac{PSD} matrices are estimated by \begin{align} \left[ \hat{\boldsymbol{\Psi}}_{b,\tau}^{\Delta\text{W}} \right]_{mm} = (1-A^2)~\left[ \hat{\boldsymbol{\Psi}}_{b,\tau}^{\text{W}} \right]_{mm} \label{eq:mStepStateNoise} \end{align} with $\hat{\boldsymbol{\Psi}}_{b,\tau}^{\text{W}} = \lambda_W {\hat{\boldsymbol{\Psi}}_{b,\tau-1}^{\text{W}}} + (1-\lambda_W) \hat{\boldsymbol{w}}_{b,\tau-1} \hat{\boldsymbol{w}}_{b,\tau-1}^{\text{H}}$. \section{Algorithmic Description} \label{sec:alg_descr} The proposed echo cancellation scheme for one block of microphone samples $\boldsymbol{y}_{\tau}$ is illustrated in Fig.~\ref{fig:alg_overview} and summarized in Alg.~\ref{alg:prop_alg_descr}.
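Before stepping through the complete per-block procedure, the estimators above can be summarized in a short sketch; this is a minimal NumPy rendering of the diagonal updates under our own naming and buffer handling, not a reference implementation:
\begin{verbatim}
import numpy as np

def observation_noise_psd(e_prior, mask, S_prev, P_hist,
                          lam_S=0.0, lam_P=0.9, kappa=90):
    # Near-end PSD: recursive averaging of the masked prior error.
    S = lam_S * S_prev + (1 - lam_S) * np.abs(mask * e_prior) ** 2
    # Smoothed periodogram of the residual (late echo + noise) ...
    p_hat = (1 - mask) * e_prior
    P_hist.append(lam_P * P_hist[-1] + (1 - lam_P) * np.abs(p_hat) ** 2)
    # ... and minimum statistics over the last kappa frames.
    P = np.min(np.stack(P_hist[-kappa:]), axis=0)
    return P + S, S          # additive estimate Psi_I and updated Psi_S

def process_noise_psd(W_prev, w_hat, A=0.999, lam_W=0.9):
    # Process noise PSD driven by the state transition parameter A.
    W = lam_W * W_prev + (1 - lam_W) * np.abs(w_hat) ** 2
    return (1 - A ** 2) * W, W
\end{verbatim}
Here, \texttt{P\_hist} is assumed to be a non-empty list holding the previous smoothed periodograms.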
After computing the prior error $\boldsymbol{e}_\tau^+$ using the previous \ac{AF} estimate (cf.~Eq.~\eqref{eq:eStep}), the mask $\widehat{\boldsymbol{M}}_\tau$ is inferred by the \ac{DPF} (cf.~Sec.~\ref{sec:deep_pf}). Subsequently, the process noise \ac{PSD} matrices $\boldsymbol{\Psi}_{b,\tau}^{\Delta\text{W}}$ and the observation noise \ac{PSD} matrix $\boldsymbol{\Psi}_\tau^{\text{I}}$ are estimated by Eqs.~\eqref{eq:mStepStateNoise} and \eqref{eq:psd_est_tilde_n}, respectively. Note that if the initial matrices are chosen to be diagonal, the estimators ensure that all subsequent estimates are diagonal as well. Afterwards, the means $\hat{\boldsymbol{w}}_{b,\tau}$ and state uncertainty matrices $\boldsymbol{P}_{b,\tau}$ of the \ac{AF} partitions $\boldsymbol{w}_{b,\tau}$ are updated by the \ac{KF} (cf.~Eq.~\eqref{eq:eStep}) using the estimated \ac{PSD} matrices $\hat{\boldsymbol{\Psi}}_{\tau}^{\text{I}}$ and $\hat{\boldsymbol{\Psi}}_{b,\tau}^{\Delta\text{W}}$. Finally, the time-domain near-end signal $\hat{\underline{\boldsymbol{s}}}_\tau$ is computed by applying the inverse \ac{STFT} to the \ac{DPF} estimate $\hat{\tilde{\boldsymbol{s}}}_\tau$. \begin{algorithm}[tb] \caption{Proposed \ac{KF}+\ac{DPF} algorithm for one block of microphone samples.} \label{alg:prop_alg_descr} \begin{algorithmic} \State Compute prior error $\boldsymbol{e}_\tau^+$ (cf.~Eq.~\eqref{eq:eStep}) \State Infer \ac{DPF} mask $\widehat{\boldsymbol{M}}_\tau$ (cf.~Sec.~\ref{sec:deep_pf}) \State Update \ac{PSD} estimates $\hat{\boldsymbol{\Psi}}^{\text{I}}_{\tau}$, $\hat{\boldsymbol{\Psi}}^{\Delta\text{W}}_{b,\tau}$ (cf.~Eqs.~\eqref{eq:psd_est_tilde_n}~and~\eqref{eq:mStepStateNoise}) \State Update \ac{AF} estimates $\hat{\boldsymbol{w}}_{b,\tau}, \boldsymbol{P}_{b,\tau}$ (cf.~Eq.~\eqref{eq:eStep}) \State Compute time-domain near-end signal $\underline{\hat{\boldsymbol{s}}}_\tau$ by inverse \ac{STFT} \end{algorithmic} \end{algorithm} \section{Experiments} \label{sec:experiments} In this section, the proposed algorithm is evaluated for a large variety of \ac{AEC} scenarios. Each scenario is created by randomly drawing a far-end and a near-end speech signal from the \textit{LibriSpeech} database \cite{7178964}, comprising $283$ different speakers. Subsequently, the clean echo signal $\underline{\boldsymbol{d}}_\tau$ is simulated by convolving the far-end signal with a randomly drawn \ac{RIR} from the databases \cite{jeub_binaural_2009,Wen06evaluationof, mird}, comprising $201$ different \acp{RIR} in total. Finally, the near-end speaker signal and white Gaussian sensor noise are added. Both signals are scaled according to a random \acl{NER} between \makebox{$-10$ dB} and \makebox{$10$ dB} and a random \acl{ENR} between \makebox{$30$ dB} and \makebox{$35$ dB}. An \ac{EPC} is simulated by randomly drawing a signal duration between \makebox{$7.2$ s} and \makebox{$8.8$ s} and afterwards appending a new scenario. The block shift was set to $R=256$ samples with a sampling frequency of \makebox{$f_s=16$ kHz}. The \ac{PBKF} modelled $B=8$ partitions, which corresponds to an overall filter length of $L=2048$ samples. The noise \ac{PSD} estimators used the parameters $\lambda_{P}=\lambda_W=0.9$, $\lambda_{S}=0$ and $\kappa =90$ frames.
The input features to the \ac{DPF} were computed by using a Hamming window and \makebox{$\epsilon_1 = 10^{-12}$}. The \ac{DPF} used approximately $3.4$ million parameters, with the input dimension of the stacked GRU layers being $P=512$. It was trained using the ADAM optimizer \cite{kingma2014adam} with a step size of $10^{-3}$, a regularization factor of \makebox{$\epsilon_2=10^{-12}$} and $4.4$ hours of training data. The training data was preprocessed by a \ac{PBKF} for which the required \ac{PSD} matrices ${\boldsymbol{\Psi}}_{\tau}^{\text{S}}$ and ${\boldsymbol{\Psi}}_{b,\tau}^{\Delta\text{W}}$ were estimated by Eqs.~\eqref{eq:psd_est_tilde_n}--\eqref{eq:mStepStateNoise}. Here, the estimated mask $\widehat{\boldsymbol{M}}_\tau$ was replaced by an oracle mask. The testing data (27 mins) was disjoint from the training data, i.e., different speakers and \acp{RIR}. \begin{figure}[t!] \centering \newlength\fwidth \setlength\fwidth{.98\columnwidth} \hspace*{-.15cm}\input{temp_erle_3_cp.tikz} \caption{Time-dependent ERLE of the \ac{PBKF} for various parametrizations of the baseline noise \ac{PSD} estimators and the proposed estimator.} \label{fig:temp_erle} \end{figure} In the first experiment, the reconvergence behaviour of the \ac{PBKF} after abrupt \acp{EPC} is compared for the proposed \ac{PSD} estimator (cf.~Sec.~\ref{sec:psd_est}) relative to the state-of-the-art approach \cite{franzen_improved_2019}, i.e., recursively averaging the prior error power $|\left[\boldsymbol{e}_{\tau}^+\right]_m |^2$ with an averaging factor of $0.5$. We compared several choices for the state transition parameter $A$ because \cite{yang_frequency-domain_2017} reports a trade-off between steady-state performance and reconvergence behaviour. As performance measure we used the time-dependent logarithmic \ac{ERLE} \begin{align} {\mathcal{E}}_{\text{KF},\tau} = 10 \log_{10} \frac{\mathbb{E}\left[||\underline{\boldsymbol{d}}_{\tau}||^2\right] }{\mathbb{E}\left[||\underline{\boldsymbol{d}}_{\tau}-\hat{\boldsymbol{\underline{d}}}_{\tau}||^2\right]}, \end{align} with $||\cdot||^2$ denoting the squared Euclidean norm and the expectation being approximated by temporal recursive averaging. To allow for more general conclusions, the time-dependent logarithmic \ac{ERLE} ${\mathcal{E}}_{\text{KF},\tau}$ has been averaged over $100$ different scenarios. The resulting average time-dependent \ac{ERLE} $\overline{{\mathcal{E}}}_{\text{KF},\tau}$ for various choices of the state transition parameter $A$ for the baseline and the proposed noise \ac{PSD} estimator is shown in Fig.~\ref{fig:temp_erle}. As can be concluded from Fig.~\ref{fig:temp_erle} for the baseline, larger state transition parameters $A$ result in better steady-state performance at the cost of slower reconvergence after \acp{EPC} due to an overestimation of the noise \ac{PSD}. The proposed noise \ac{PSD} estimator, however, avoids this trade-off and allows for both high steady-state performance and rapid reconvergence. Finally, we evaluate the echo suppression and near-end distortion performance of the individual algorithmic components, i.e., \ac{PBKF}-only and \ac{DPF}-only, and their synergistic combination \ac{PBKF}+\ac{DPF}.
As performance measures we use: \begin{align} & {\mathcal{E}}_{\text{PF}} = 10 \log_{10} \frac{||\underline{\boldsymbol{d}}||^2}{|| \text{pf}(\underline{\boldsymbol{d}} - \widehat{\underline{\boldsymbol{d}}})||^2} , ~ {\mathcal{S}}_{\text{PF}} = 10 \log_{10} \frac{||\beta \underline{\boldsymbol{s}}||^2}{||\beta \underline{\boldsymbol{s}} - \text{pf}(\underline{\boldsymbol{s}})||^2}, \notag \\ &{\mathcal{E}}_{\text{KF}} = 10 \log_{10} \frac{||\underline{\boldsymbol{d}}||^2}{||\underline{\boldsymbol{d}} - \widehat{\underline{\boldsymbol{d}}}||^2}, ~~ { \Delta\text{PESQ}} = \text{pq}(\underline{\boldsymbol{s}}, \underline{\hat{\boldsymbol{s}}}) - \text{pq}({\underline{\boldsymbol{s}}}, \underline{\boldsymbol{y}}) \vphantom{\frac{||}{||}}. \notag \end{align} \begin{table}[t] \caption{Mean and standard deviation (in parentheses) of the performance measures for the various components of the proposed \ac{AEC} algorithm. The best performance values are set in bold font.} \setlength{\tabcolsep}{10.0pt} \begin{center} \begin{tabular}{c c c c} \toprule & {PBKF-only} & {PF-only} & {PBKF+PF} \\ \midrule $t_\text{pr}$ [ms] / RTF & $\phantom{0}\textbf{0.4}/\textbf{0.03}$ & $\phantom{0}1.0/0.06$ & $\phantom{0}1.4/0.09$ \\ $\overline{\mathcal{E}}_{(\cdot)}$ [dB] & $10.5~(2.7)$ & $13.2~(2.8)$ & $\textbf{17.0}~(3.6)$ \\ $\overline{\mathcal{S}}_{\text{PF}}$ [dB] & $\phantom{0}\infty$ & $14.6~(3.2)$ & $\textbf{26.4}~(5.7)$ \\ $\overline{\Delta\text{PESQ}}$ & $\phantom{0}0.55~(0.4)$ & $\phantom{0}0.67~(0.4)$ & $\textbf{1.12}~(0.5)$ \\ \bottomrule \end{tabular} \end{center} \label{tab:tabPerfMeas} \vspace{-.3cm} \end{table} Here, the \ac{ERLE} averaged over the entire signal duration, denoted by omitting the time index $\tau$, after the \ac{PBKF} and the \ac{DPF} is represented by $\mathcal{E}_{\text{KF}}$ and $\mathcal{E}_{\text{PF}}$, respectively. The near-end distortion is measured by $\mathcal{S}_{\text{PF}}$, with the scaling factor \makebox{$\beta = \frac{\boldsymbol{s}^{\text{T}} \text{pf}(\boldsymbol{s})}{||\boldsymbol{s}||^2}$} \cite{roux_sdr_2019}, and the PESQ (Perceptual Evaluation of Speech Quality \cite{pesq}) improvement $\Delta \text{PESQ}$. Note that the processing of a signal by the \ac{DPF} is described by $\text{pf}(\cdot)$, while $\text{pq}(\cdot, \cdot)$ denotes the computation of the PESQ \cite{pesq}. Tab.~\ref{tab:tabPerfMeas} shows the arithmetic averages over $100$ random experiments of the performance measures ${\mathcal{E}}_{(\cdot)}$, ${\mathcal{S}}_{\text{PF}}$ and ${\Delta\text{PESQ}}$, denoted by an overbar. Furthermore, the runtime $t_{\text{pr}}$ to process one signal block on an \textit{Intel Xeon CPU E3-1275 v6 @ 3.80GHz} and the corresponding \ac{RTF} are given. Note that only the proposed noise \ac{PSD} estimator (cf.~Sec.~\ref{sec:psd_est}) is evaluated due to the limited reconvergence capabilities of the baseline estimator. Furthermore, to show the effect of the \ac{PBKF}, we evaluated a \ac{DPF}-only algorithm which was trained with the microphone signal $\boldsymbol{y}_{\tau}$ instead of the error signal $\boldsymbol{e}_{\tau}$ in the feature computation in Eq.~\eqref{eq:feature_e_x}. We conclude from Tab.~\ref{tab:tabPerfMeas} that while using only a \ac{PBKF} results in limited echo cancellation, the \ac{DPF}-only approach introduces significant distortions. In contrast, the combination allows for high echo attenuation while introducing only little distortion.
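For reference, the signal-based measures above can be sketched directly; a minimal NumPy sketch with our own function names (computing $\Delta$PESQ would additionally require an external PESQ implementation, which we do not reproduce here):
\begin{verbatim}
import numpy as np

def erle_db(d, d_hat):
    # Logarithmic ERLE averaged over the entire signal duration.
    return 10 * np.log10(np.sum(d ** 2) / np.sum((d - d_hat) ** 2))

def near_end_distortion_db(s, s_pf):
    # S_PF with the optimal scaling factor beta (cf. [roux_sdr_2019]).
    beta = np.dot(s, s_pf) / np.dot(s, s)
    return 10 * np.log10(np.sum((beta * s) ** 2)
                         / np.sum((beta * s - s_pf) ** 2))
\end{verbatim}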
Finally, we see from Tab.~\ref{tab:tabPerfMeas} that, due to the modest computational requirements, the proposed method is well suited for real-time applications. \section{Conclusion} \label{sec:summaryOutlook} In this paper, we proposed a novel synergistic \ac{KF}+\ac{DPF} algorithm which improves on state-of-the-art \ac{AEC} algorithms for time-varying acoustic scenarios in terms of reconvergence speed without compromising performance for static scenarios. This is achieved by efficiently exploiting the \ac{DPF} near-end estimate and the \ac{KF} estimation error for inferring an \ac{AF} step-size. Without any auxiliary mechanisms, we thereby overcome the limitations of \ac{KF}-based step-size adaptation algorithms. \bibliographystyle{IEEEbib}
{ "timestamp": "2021-03-02T02:44:59", "yymm": "2012", "arxiv_id": "2012.08867", "language": "en", "url": "https://arxiv.org/abs/2012.08867" }
\section{Notation} \subsection{About Strings} Let $\Sigma$ be a finite alphabet; then $\Sigma^\star$ denotes the free monoid over $\Sigma$. For a linear string $w = a_1\ldots a_n$ over the alphabet $\Sigma$, $|w| = n$ is the \emph{length} of $w$, $w[i] = a_i$ is the $i^{th}$ \emph{character} of $w$, and $w[i:j] = a_i \ldots a_j$ is the \emph{substring} from position $i$ to position $j$. A \emph{prefix} (respectively a \emph{suffix}) is a substring which begins at position $1$ (resp. which ends at position $n$). A \emph{proper substring} is a substring which differs from the string itself. An \emph{overlap} from a linear string $x$ to a linear string $y$ is a proper suffix of $x$ that is also a proper prefix of $y$. We denote by $ov(x,y)$ the length of the longest overlap and by $x \odot y$ the \emph{merge} from $x$ to $y$, i.e. $x \odot y = x\; y[ov(x,y)+1:|y|]$. For a circular string $w = \langle a_1 \ldots a_n \rangle$ over the alphabet $\Sigma$, $|w| = n$ is the \emph{length} of $w$ and a \emph{substring} of $w$ is a finite substring of the linear infinite string $(a_1 \ldots a_n)^{\infty}$ (which denotes the infinite concatenation of the linear string $a_1 \ldots a_n$). \subsection{Greedy reduction} We will exhibit a Strict-reduction~\cite{Crescenzi97}, which is one kind of approximation-preserving reduction between two optimization problems. A \emph{Strict-reduction} from an optimization problem $\mathcal{A}$ to another optimization problem $\mathcal{B}$ is a pair of polynomial-time computable functions $(f,g)$ where: \begin{itemize} \item for each instance $x$ of $\mathcal{A}$, $f(x)$ is an instance of $\mathcal{B}$, \item for each solution $y$ of $\mathcal{B}$, $g(y)$ is a solution of $\mathcal{A}$, \item $R_{\mathcal{A}}(x,g(y)) \leq R_{\mathcal{B}}(f(x),y)$ \\ where $R_{\mathcal{D}}(x,y) = \max \big(\frac{c_{\mathcal{D}}(x,OPT(x))}{c_{\mathcal{D}}(x,y)}, \frac{c_{\mathcal{D}}(x,y)}{c_{\mathcal{D}}(x,OPT(x))} \big)$ for any optimization problem $\mathcal{D}$ with cost function $c_{\mathcal{D}}$. \end{itemize} We propose here a new type of reduction that links the greedy nature of the solutions of two optimization problems. Indeed, for a Strict-reduction from an optimization problem $\mathcal{A}$ to another optimization problem $\mathcal{B}$, if we have an approximation ratio of $\alpha$ for the greedy algorithm for the problem $\mathcal{B}$, we only know that there exists an algorithm for $\mathcal{A}$, which can be different from the greedy algorithm for $\mathcal{A}$, with an approximation ratio smaller than or equal to $\alpha$. We want the reduction to preserve the greedy nature of the solutions. A \emph{Greedy-reduction} from an optimization problem $\mathcal{A}$ to another optimization problem $\mathcal{B}$ is a Strict-reduction $(f,g)$ where: \begin{itemize} \item for each greedy solution $y$ of $\mathcal{B}$, $g(y)$ is a greedy solution of $\mathcal{A}$. \end{itemize} As the notion of greedy algorithm for an optimization problem may be ambiguous, we define the Greedy-reduction more formally in the Appendix (Section~\ref{se:appendix}) for the case of subset system maximization problems. \section{Superstring problems: definition and main contributions}\label{sec:super} Let $P$ be a set of linear strings. A \emph{linear superstring} of $P$ is a linear string which has all strings of $P$ as substrings. A circular string $\langle w \rangle$ having all strings of $P$ as substrings is a \emph{circular superstring} of $P$.
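For concreteness, the overlap and merge operations introduced above can be sketched in Python as follows (a minimal sketch; the function names are ours):
\begin{verbatim}
def ov(x: str, y: str) -> int:
    # Length of the longest proper suffix of x that is also a proper
    # prefix of y.
    for k in range(min(len(x), len(y)) - 1, 0, -1):
        if x.endswith(y[:k]):
            return k
    return 0

def merge(x: str, y: str) -> str:
    # The merge from x to y: append y minus its longest overlap with x.
    return x + y[ov(x, y):]

# Example: ov("abca", "cab") == 2 and merge("abca", "cab") == "abcab".
\end{verbatim}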
Given a set $P$ of linear strings, the \emph{Shortest Linear Superstring problem} (or \emph{SLS}) corresponds to finding a linear superstring of $P$ of minimal length. The \emph{Shortest Circular Superstring problem} (or \emph{SCS}) is defined as finding a shortest circular superstring of $P$. For both minimization problems, there exists a corresponding associated maximization problem where, instead of minimizing the \emph{superstring length} measure, one maximizes the \emph{compression measure}, i.e. one seeks a superstring maximizing the difference between $\| P \| = \sum_{w \in P} |w|$ and the length of the sought superstring. For both problems, a specific greedy algorithm can be defined. For the Shortest Linear Superstring problem, the well-known greedy algorithm is defined as follows: for a set $P$ of strings, the greedy algorithm takes two strings of $P$ with the maximal overlap, removes these two elements from $P$, inserts their merge into $P$, and continues until only one string remains in $P$. This string is a greedy solution of the Shortest Linear Superstring problem for $P$. We call this solution a \emph{greedy linear superstring}. For the Shortest Circular Superstring problem, the greedy algorithm is identical to that for SLS except that at the end, it returns the merge of the greedy linear superstring with itself, which creates a circular string. This circular string is called a \emph{greedy circular superstring}. \begin{theorem}\label{th:L:reduction} There exists a Greedy-reduction from the Shortest Linear Superstring problem (SLS) to the Shortest Circular Superstring problem (SCS) for both the length and compression measures. \end{theorem} As SLS is NP-complete~\cite{GallantMS80}, Theorem~\ref{th:L:reduction} implies that SCS is NP-complete as well. \begin{corollary} The Shortest Circular Superstring problem is NP-complete. \end{corollary} Another consequence of Theorem~\ref{th:L:reduction}: a proof of a 2-approximation of the greedy algorithm for the Shortest Circular Superstring problem would imply a proof of the well-known greedy conjecture~\cite{BlumLTY94}, which states that the approximation ratio of the greedy algorithm for the Shortest Linear Superstring problem is $2$. Hence, we propose the following conjecture: \begin{conjecture} The approximation ratio of the greedy algorithm for the Shortest Circular Superstring problem is $2$. \end{conjecture} \section{Proof of Theorem~\ref{th:L:reduction}} Given an ordered alphabet $\Sigma$, we take $\overline{\Sigma} = \{\overline{a} \; : \; a \in \Sigma\}$ where $\Sigma \cap \overline{\Sigma} = \emptyset$ and $\Sigma \cup \overline{\Sigma}$ is totally ordered. Let $f$ be the function from $\mathcal{P}(\Sigma^{\star})$ to $\mathcal{P}\big((\Sigma \cup \overline{\Sigma})^{\star}\big)$ where for a set of strings $P$ over $\Sigma$, $f(P) = P \cup \overline{P}$ with $\overline{P} = \{\overline{w} : w \in P\}$ and $\overline{w} = \overline{a_1} \ldots \overline{a_k}$ for $w = a_1 \ldots a_k$. For a circular superstring $c = \langle a_1, \ldots, a_k \rangle$ of $P \cup \overline{P}$, we denote by $l(c)$ the linear string corresponding to the smallest circular shift of $c$ in lexicographic order (for any $a\in \Sigma$ and $\overline{b} \in \overline{\Sigma}$, $a < \overline{b}$) and such that $l(c)[1] \in \Sigma$ and $l(c)[k] \in \overline{\Sigma}$. By construction, $l(c)$ exists and is unique.
For a linear string $w = a_1 \ldots a_q$ over $\Sigma'$ and $\Sigma'' \subseteq \Sigma'$, $\left.w\right|_{\Sigma''}$ is the restriction of $w$ to $\Sigma''$. We denote by $g$ the following function such that for a circular superstring $c$ of $P \cup \overline{P}$: \[ g(c) = a_1 \ldots a_k \text{ if } g'(c) \in \Sigma^{\ast} \text{ and } g'(c) = a_1 \ldots a_k, \text{ or if } g'(c) \in \overline{\Sigma}^{\ast} \text{ and } g'(c) = \overline{a_1} \ldots \overline{a_k}, \] with \[ g'(c) = \mathtt{Argmin}\Big\{|w| \; : \; w \in \{\left.l(c)\right|_{\Sigma},\left.l(c)\right|_{\overline{\Sigma}} \} \Big\}. \] \begin{lemma}\label{le:g} For any circular superstring $c$ of $P \cup \overline{P}$, $g(c)$ is a linear superstring of $P$ of length smaller than or equal to $\frac{|c|}{2}$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:g}] Let $c = \langle a_1, \ldots, a_k \rangle$ be a circular superstring of $P \cup \overline{P}$. By definition, $g'(c) \in \{ \left.l(c)\right|_{\Sigma},\left.l(c)\right|_{\overline{\Sigma}}\}$. Assume, without loss of generality, that $g(c) = g'(c) = \left.l(c)\right|_{\Sigma}$, i.e. $|\left.l(c)\right|_{\Sigma}| \leq |\left.l(c)\right|_{\overline{\Sigma}}|$. As $l(c)[1] \in \Sigma$ and $l(c)[k] \in \overline{\Sigma}$, $l(c)$ is a linear superstring of $P \cup \overline{P}$ and thus $\left.l(c)\right|_{\Sigma}$ is a linear superstring of $P$. Indeed, as $l(c)$ is a linear superstring of $P \cup \overline{P}$, for each string $s$ of $P$, there exist $i$ and $j$ such that $l(c)[i:j] = s$. As $l(c)[i:j] \in \Sigma^{\ast}$, there exist $i'$ and $j'$ such that $l(c)[i:j] = \left.l(c)\right|_{\Sigma}[i':j']$ and thus $s$ is a substring of $\left.l(c)\right|_{\Sigma}$. As $|l(c)| = |\left.l(c)\right|_{\Sigma}| + |\left.l(c)\right|_{\overline{\Sigma}}|$, we have that $2|g(c)| \leq |l(c)| = |c|$. \end{proof} \begin{lemma}\label{le:equal} Let $w_o$ be a shortest linear superstring of $P$ and $c_o$ be a shortest circular superstring of $P \cup \overline{P}$. One has $2|w_o| = |c_o|$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:equal}] Let $w$ be a linear superstring of $P$ of length $k$. We want to prove that there exists a circular superstring of $P \cup \overline{P}$ of length smaller than or equal to $2k$. We take $c = \langle w \overline{w} \rangle$. As $w$ is a linear superstring of $P$ and $\overline{w}$ is a linear superstring of $\overline{P}$, $c$ is a circular superstring of $P \cup \overline{P}$. By definition, $(w \overline{w})[1:k] \in \Sigma^{\ast}$ and $(w \overline{w})[k+1:2k] \in \overline{\Sigma}^{\ast}$, thus one gets $ov(w \overline{w},w \overline{w}) = 0$. Indeed, assume $ov(w \overline{w},w \overline{w}) = l > 0$. By construction, we know that $\Sigma \cap \overline{\Sigma} = \emptyset$. If $l \leq k$ then $\overline{w}[k] = w[l]$, which is impossible because $\overline{w}[k] \in \overline{\Sigma}$ and $w[l] \in \Sigma$. If $l>k$ then $\overline{w}[1] = w[l-k+1]$, which is also impossible since $\overline{w}[1] \in \overline{\Sigma}$ and $w[l-k+1] \in \Sigma$. As $ov(w \overline{w},w \overline{w}) = 0$, $|\langle w \overline{w} \rangle| = |w \overline{w}| =|w| + |\overline{w}| = 2k$. Let $c$ be a circular superstring of $P \cup \overline{P}$ of length $k'$. We want to prove that there exists a linear superstring of $P$ of length smaller than or equal to $\frac{k'}{2}$. We take $w = g(c)$. By Lemma~\ref{le:g}, $w$ is a linear superstring of $P$ of length smaller than or equal to $\frac{|c|}{2}$, i.e. $|w| \leq \frac{k'}{2}$.
Now, we can prove that there exists a shortest linear superstring of $P$ of length $k$ if and only if there exists a shortest circular superstring of $P \cup \overline{P}$ of length $2k$. Let $w_o$ be a shortest linear superstring of $P$ of length $k$; we know that there exists a circular superstring of $P \cup \overline{P}$ of length smaller than or equal to $2k$, and thus there exists a shortest circular superstring of $P \cup \overline{P}$ of length smaller than or equal to $2k$. Assume the length of a shortest circular superstring of $P \cup \overline{P}$ is strictly smaller than $2k$. Then by Lemma~\ref{le:g}, there exists a linear superstring of $P$ of length strictly smaller than $\frac{2k}{2} = k = |w_o|$, which contradicts the fact that $w_o$ is a shortest linear superstring of $P$, and concludes the proof. \end{proof} \begin{lemma}\label{le:ineq} Given $w_o$ a shortest linear superstring of $P$, $c_o$ a shortest circular superstring of $P \cup \overline{P}$, and $c$ a circular superstring of $P \cup \overline{P}$, one has \[ \frac{|g(c)|}{|w_o|} \leq \frac{|c|}{|c_o|} \text{ and } \frac{\|P\|-|w_o|}{\|P\|-|g(c)|} \leq \frac{\|P \cup \overline{P}\|-|c_o|}{\|P \cup \overline{P}\|-|c|}. \] \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:ineq}] Let $w_o$ be a shortest linear superstring of $P$, $c_o$ a shortest circular superstring of $P \cup \overline{P}$ and $c$ a circular superstring of $P \cup \overline{P}$. By Lemma~\ref{le:g}, we have $2|g(c)| \leq |c|$ and by Lemma~\ref{le:equal}, $2|w_o| = |c_o|$, and thus $\frac{|g(c)|}{|w_o|} = \frac{2|g(c)|}{2|w_o|} = \frac{2|g(c)|}{|c_o|} \leq \frac{|c|}{|c_o|}$. Moreover, because $\|P \cup \overline{P}\| = 2 \|P\|$, one gets \[ \frac{\|P\|-|w_o|}{\|P\|-|g(c)|} = \frac{2\|P\|-2|w_o|}{2\|P\|-2|g(c)|} \leq \frac{2\|P\|-2|w_o|}{2\|P\|-|c|} = \frac{2\|P\|-|c_o|}{2\|P\|-|c|} = \frac{\|P \cup \overline{P}\|-|c_o|}{\|P \cup \overline{P}\|-|c|}.\] \end{proof} Combining Lemmas~\ref{le:g} and~\ref{le:ineq} gives us the Strict-reduction from SLS to SCS for both the length measure and the compression measure. \begin{lemma}\label{le:greedy} Let $c$ be a greedy circular superstring of $P \cup \overline{P}$. The linear string $g(c)$ is a greedy linear superstring of $P$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{le:greedy}] Let $c$ be a greedy circular superstring of $P \cup \overline{P}$. As $c$ is a greedy circular superstring, there exists a greedy linear superstring $w_c$ of $P \cup \overline{P}$ such that $c$ is the merge of $w_c$ with itself. As $ov(w,\overline{w}) = ov(\overline{w},w) = 0$ for all strings $w \in P$ and $\overline{w} \in \overline{P}$, a greedy linear superstring $w_c$ of $P \cup \overline{P}$ has $ov(w_c,w_c) = 0$ and thus $\langle w_c \rangle = c$. By the definition of the greedy linear superstring, there exist $Q_1 = P \cup \overline{P}$, $Q_2$, \ldots, $Q_k = \{w_c\}$ such that $Q_i$ corresponds to the $i^{th}$ iteration of the greedy algorithm for the Shortest Linear Superstring problem, where $Q_{i+1} = Q_i \setminus \{u_i,v_i\} \cup \{u_i \odot v_i\}$ with $u_i$ and $v_i$ the greedy choice at step $i$. As the greedy choice takes the maximal overlap, we have $ov(u_1,v_1) \geq ov(u_2,v_2) \geq \ldots \geq ov(u_{k-1},v_{k-1})$. As $ov(w,\overline{w}) = ov(\overline{w},w) = 0$ for all $w \in P$ and $\overline{w} \in \overline{P}$, the set $\{i \; : \; ov(u_i,v_i) = 0\}$ is not empty and we take $j = \min(\{i \; : \; ov(u_i,v_i) = 0\})$.
By construction, any string of $P \cup \overline{P}$ is a substring of a string of $Q_j$, and each string of $Q_j$ is either in $\Sigma^{\ast}$ or in $\overline{\Sigma}^{\ast}$. As $ov(w,\overline{w}) = ov(\overline{w},w) = 0$ for all strings $w \in P$ and $\overline{w} \in \overline{P}$, any concatenation of the strings of $Q_j \cap \Sigma^{\ast}$ is a greedy linear superstring of $P$, and similarly, any concatenation of the strings of $Q_j \cap \overline{\Sigma}^{\ast}$ is a greedy linear superstring of $\overline{P}$. Hence, $g(c)$ is a greedy linear superstring of $P$. \end{proof} Lemma~\ref{le:greedy} shows that for any greedy circular superstring $c$ of $P \cup \overline{P}$, $g(c)$ is a greedy linear superstring of $P$. This concludes the proof of Theorem~\ref{th:L:reduction}. \section{Appendix}\label{se:appendix} \subsection{Greedy-reduction for subset system maximization problems} We introduced the notion of Greedy-reduction above. To make it a useful concept, it is crucial to clarify what a greedy algorithm is. Especially for SLS, the greedy algorithm can be written as described above (see Section~\ref{sec:super} and \cite{GallantMS80}) or as the greedy algorithm derived from a specific subset system \cite{CazauxR16}. Here, we propose a definition of Greedy-reduction for subset system maximization problems, for which the greedy algorithm is unambiguously defined. For a finite set $E$, a subset system $\mathcal{L}$ is a set of subsets of $E$ satisfying two conditions: first, $\emptyset \in \mathcal{L}$, and second, if $B \in \mathcal{L}$ and $A \subseteq B$ then $A \in \mathcal{L}$. We denote by $\max(\mathcal{L})$ the set of elements of $\mathcal{L}$ that are maximal for inclusion. A maximization problem is called a \emph{subset system maximization problem} if every instance of this problem defines a subset system~\cite{Mestre06}. An instance of a subset system maximization problem is thus a triplet $(E,\mathcal{L},w)$ where $E$ is a finite set, $\mathcal{L}$ is a subset system, and $w$ is a function that assigns a weight to each element of $E$. An optimal solution of this problem for this instance is an element of $\mathcal{L}$ with the maximum weight, i.e. $\mathtt{Argmax}_{F\in \mathcal{L}}\big(w(F)\big)$ where $w(F) = \sum_{x \in F} w(x)$. For a given instance $(E,\mathcal{L},w)$, one can also uniquely define a greedy algorithm (see Algorithm~\ref{alg:glouton:sh}).
\begin{algorithm}[htbp] \KwIn{$(E,\mathcal{L},w)$} The elements $e_i$ of $E$ sorted by decreasing weight: $w(e_1) \geq w(e_2) \geq \ldots \geq w(e_n)$ \; $F \leftarrow \emptyset$ \; \For{$i=1$ \KwTo $n$} { \lIf{$F \cup \{e_i\} \in \mathcal{L}$} {$F \leftarrow F \cup \{e_i\}$} } \Return $F$ \; \KwOut{A set $F$ of $\max(\mathcal{L})$.} \caption{The greedy algorithm associated with an instance $(E,\mathcal{L},w)$ of a subset system maximization problem.\label{alg:glouton:sh}} \end{algorithm} A Greedy-reduction from a subset system maximization problem $\mathcal{A}$ to another subset system maximization problem $\mathcal{B}$ is a pair of polynomial-time computable functions $(f,g)$ where: \begin{itemize} \item for each instance $(E,\mathcal{L},w)$ of $\mathcal{A}$, there exist $\mathcal{L}'$ and $w'$ such that $(f(E),\mathcal{L}',w')$ is an instance of $\mathcal{B}$, \item for each element $y$ of $\max(\mathcal{L}')$, $g(y)$ is an element of $\max(\mathcal{L})$, \item for each greedy solution $y$ of $(f(E),\mathcal{L}',w')$, $g(y)$ is a greedy solution of $(E,\mathcal{L},w)$, \item $ \frac{w(g(y))}{\max_{F\in \mathcal{L}}\big( w(F)\big)} \leq \frac{w'(y)}{\max_{F\in \mathcal{L}'}\big( w'(F)\big)}$. \end{itemize} Now, we can reuse the subset systems of~\cite{CazauxR16} to define the greedy algorithm for the Shortest Linear Superstring problem and for the Shortest Circular Superstring problem. For a set of strings $P$, we denote by $E_P$ the set of all pairs of $P$, i.e. $E_P = P \times P$. We define the following subset systems: \begin{itemize} \item $(E_P, \mathcal{L} = \{F : F \text{ satisfies (L1), (L2) and (L3)}\})$ for the Shortest Linear Superstring problem, \item $(E_P, \mathcal{C} = \{F : F \text{ satisfies (L1), (L2) and (L3b)}\})$ for the Shortest Circular Superstring problem, \end{itemize} where \begin{itemize} \item[(L1)] $\forall s_i, \ s_j$ and $s_k \in P$, $(s_i,s_k)$ and $(s_j,s_k) \in F \Rightarrow i=j$, \item[(L2)] $\forall s_i, \ s_j$ and $s_k \in P$, $(s_k,s_i)$ and $(s_k,s_j) \in F \Rightarrow i=j$, \item[(L3)] for any $r \in \{1,\ldots,|P|\}$, there exists no cycle $\big((s_{i_1},s_{i_2}),\; \ldots,\; (s_{i_{r-1}},s_{i_r}),\; (s_{i_r},s_{i_1})\big)$ in $F$, \item[(L3b)] for any $r \in \{1,\ldots,|P|-1\}$, there exists no cycle $\big((s_{i_1},s_{i_2}),\; \ldots,\; (s_{i_{r-1}},s_{i_r}),\; (s_{i_r},s_{i_1})\big)$ in $F$. \end{itemize} Unlike (L3), the condition (L3b) allows for a single cycle that contains all the elements (but disallows any other cycle). \bibliographystyle{plainurl}
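To make both formulations concrete, the following Python sketch (our own naming, reusing \texttt{ov} and \texttt{merge} from the sketch in the Notation section) implements the merge-based greedy algorithm of Section~\ref{sec:super} and the generic greedy algorithm of Algorithm~\ref{alg:glouton:sh}:
\begin{verbatim}
def greedy_linear_superstring(P):
    # Merge-based greedy SLS: repeatedly merge a pair with maximal overlap.
    P = list(P)
    while len(P) > 1:
        i, j = max(((i, j) for i in range(len(P)) for j in range(len(P))
                    if i != j), key=lambda t: ov(P[t[0]], P[t[1]]))
        x, y = P[i], P[j]
        P = [P[k] for k in range(len(P)) if k not in (i, j)]
        P.append(merge(x, y))
    return P[0]

def greedy_circular_superstring(P):
    # Merge the greedy linear superstring with itself; the returned
    # linear string is to be read as a circular string.
    w = greedy_linear_superstring(P)
    return w[:len(w) - ov(w, w)]

def greedy_subset_system(E, w, feasible):
    # Algorithm 1: feasible(F) decides membership of F in the subset
    # system L, e.g. by checking (L1), (L2) and (L3) or (L3b).
    F = set()
    for e in sorted(E, key=w, reverse=True):
        if feasible(F | {e}):
            F.add(e)
    return F
\end{verbatim}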
{ "timestamp": "2021-11-18T02:17:37", "yymm": "2012", "arxiv_id": "2012.08878", "language": "en", "url": "https://arxiv.org/abs/2012.08878" }
\chapter{Conclusion} \pagenumbering{arabic} In this thesis, we have examined the Higgsplosion mechanism. The Higgsplosion mechanism appears naturally when we study high multiplicity processes. This mechanism has the potential to generate exciting features inside a Quantum Field Theory. To understand this, we need to study the results that lead to it. Firstly, we studied the systematic construction of high multiplicity amplitudes in a perturbative setup. In particular, we investigated the $\phi^{4}$ theory up to one loop at threshold. The analysis of these amplitudes showed the factorial growth behavior that is a danger to unitarity. Even quantum corrections could not tame this behavior, indicating that ordinary perturbation theory is not suitable for these processes. We then presented some beyond-threshold results that showed the same behavior but were still perturbative. In this setup, we cannot say if the expressions are blowing up because the approximation is bad or because this is an essential feature of the theory. Next, we introduced the semiclassical result, Eq.~(\ref{eqimpor}), that shows this factorial behavior outside ordinary perturbation theory. This kind of approximation was tested in the zero-dimensional toy model. We showed that usual perturbation theory and strong perturbation theory have a limited domain of validity. The semiclassical-like expansion probed a different region than both of these approximations, showing an overall consistency with the result. However, a better understanding of the limitations of Eq.~(\ref{eqimpor}) is still needed. Then, we introduced the Higgsplosion mechanism. We reviewed the original formulation of Higgsplosion and showed that, at least using the semiclassical calculation, it could be possible to have it in broken $\phi^{4}$ theory in (1+3)D. The extensions of the Higgsplosion mechanism depend on how one reads the results. Using the original interpretation, UV finiteness renders the theory finite and the coupling stops running. The generalization to other models still depends on a semiclassical approximation with different matter contents. The interpretation proposed in this thesis for the Higgsplosion mechanism is that the theory reorganizes its degrees of freedom at the Higgsploding scale. After this reorganization, another theory takes place in the UV. This is similar to the D-brane decay from tachyon condensation in string theory. The apparent non-locality can be seen in this setup because the $n$ scalars are behaving collectively in a finite-size structure. After the change in the description, it is expected that the theory develops different kinds of interactions, and the flow to the UV is not necessarily finite. This new interpretation needs to be tested, and one way is to prove that Higgsplosion occurs at least in the broken $\phi^{4}$ theory in (1+3)D. The current state of the lattice results does not bring anything new to the table, as we discussed, but the potential is there. Lattice investigation of this system could be fruitful because of its non-perturbative nature. The application to the Standard Model is yet far from being realized. It is not clear, however, if this mechanism occurs in this system. After this is resolved, the generalization to different matter contents seems to be immediate.
The Higgs would be the portal for this new phenomenon in the Standard Model. \chapter{Higgsplosion and Higgspersion} \label{c3} \pagenumbering{arabic} \section{The Rise of the Higgsplosion} \subsection{The Higgsplosion Mechanism} It is known that in a Scalar Field Theory a high multiplicity amplitude could violate perturbative unitarity, making the theory inconsistent in this limit~\cite{Arkhipov-nov-82,Goldberg-may-90}. We did this computation and confirmed these results~\cite{Brown-nov-92,Voloshin-feb-92,loop1,Smith-apr-93}. In a concrete example, the off-shell amplitude for an $n$-particle decay, as shown in Figure~\ref{offshell}, grows factorially with $n$: \begin{align} \mathcal{A}(1 \rightarrow n ) \propto \lambda^{n} n! \, , \end{align} such that the decay rate behaves like: \begin{align} \Gamma(1 \rightarrow n) \propto \lambda^{n} n! V_{n}(E) \, . \end{align} \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgsplosion} \caption{Off-shell $1 \to n$ particles process.} \label{offshell} \end{figure} Even after we remove one factor of $n!$ from the expression because the particles in the final state are identical, the decay rate grows factorially. At the time this result was found, it was interpreted as a sign that the perturbation theory becomes effectively strongly coupled when $n > 1/\lambda$. The picture changed in 2017~\cite{Khoze-higgsplosion} when Valentin V. Khoze and Michael Spannowsky proposed that such behavior renders the cross-section of physical processes unitary, and even generates additional effects inside the theory. The central point of the proposal is to look at the full propagator of a virtual scalar in some intermediary process, as depicted in Figure~\ref{sper}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgspersion} \caption{Diagrammatic representation of the Higgspersion. The propagator between processes is the full propagator of the field $\phi$.} \label{sper} \end{figure} In the Standard Model, this role would be played by the Higgs. The full propagator can be written in the form: \begin{align} \Delta_{H}(p) = \frac{i}{p^{2} - M^{2}(p^{2}) + i \frac{m}{Z_{\phi}}\Gamma(p^{2})} \, , \end{align} where $M^{2}(p^{2})$ has the contribution from the bare mass and the real part of the 1PI function Eq.~(\ref{1pid}). A factorially growing decay rate would mean that the propagator becomes strongly suppressed at large $p^{2}$. This blowing up of the decay rate is called Higgsplosion and the propagator suppression Higgspersion. This mechanism would save unitarity because, in an intermediary process, the suppression wins at large $p^{2}$ in a physical process: \begin{align} \sigma_{n} \propto \sqrt{s} \frac{\Gamma_{n}(s)}{s^{2} +(\frac{m}{Z_{\phi}})^{2}\Gamma^{2}(s)} \, , \end{align} where $\Gamma(s)$ is the total off-shell decay width of the scalar field. This remains finite even when $\Gamma_{n} \rightarrow \infty$. The decay rate going to infinity is not problematic because this is not a real process, even though we can use it to construct real ones. The major consequence of this effect is the suppression of loops at high energy. In any process involving loops, we can trade the free propagator for the full propagator if we go to high enough orders, as exemplified in Figure~\ref{rego}. \begin{figure}[h!]
\centering \includegraphics[width=10cm]{fullpropsubst} \caption{Diagrammatic representation of changing the free propagator for the full propagator inside processes. The hashed circle represents the full propagator of the field $\phi$.} \label{rego} \end{figure} When considering the full propagator, the integral over the momentum will shut off at the Higgsploding scale because of the exponential suppression of the propagator. This scale $E^{*}$ is a new dynamical scale of the theory that appears at the energy where the decay rate starts to grow exponentially. The strong claim made on the basis of this proposal is that the killing of the loops makes the couplings stop running. This means that the theory becomes UV finite and we are at a nontrivial fixed point at this scale~\cite{wilson}. There is strong evidence that this can happen at least in $\phi^{4}$ theory in the broken phase, where we have semiclassical computations in a region where we can trust the decay rate, and it has the characteristic behavior~\cite{Khoze-jun-18}: \begin{align} \label{eqimpor} \Gamma(\epsilon) \propto e^{n \left( \ln(\frac{g}{4}) +0.85\sqrt{g} -1 + \frac{3}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} \epsilon \right)} \, . \end{align} This expression is calculated in the limit $\lambda \rightarrow 0 $, $n \rightarrow \infty$, $\lambda n =g \gg 1$ and $\epsilon \rightarrow 0$. In the first part of this thesis, we redid the perturbative computations of these amplitudes at and beyond threshold and commented on some results coming from different methods. Now the focus is to understand what the consequences are if Higgsplosion is true. If Higgsplosion does not occur, we need to understand why. To study this, we need to review the significant steps required for Higgsplosion to work and then analyze each step to see if there are any flaws or inconsistencies. The central player of the Higgsplosion mechanism is the exponential growth of the $n$-particle decay rate at some scale. As we argued, it is difficult to obtain these expressions using ordinary perturbation theory because we are stuck near the threshold of some point in energy space. The picture changed when semiclassical computations confirmed the result coming from perturbation theory, namely that the decay rate could indeed grow. This is by no means the end of the story, as better decay rate expressions need to be found, and we do not fully understand exactly what parameter region of Eq.~(\ref{eqimpor}) can be trusted. If the Higgsploding mechanism exists in a Scalar Field Theory, we need to be able to compute the decay rate at high $p^{2}$ to see it. The next essential ingredient is the Optical Theorem and the Dyson resummed propagator. The Optical Theorem holds when we have a unitary theory~\cite{qft} and, as we argued in chapter~\ref{c1}, the propagator is valid even at the non-perturbative level. These are the principal players of this mechanism. If we want to know whether it happens, we need to dig deeper into these points to see what we can extract from them. It is very intriguing how simple steps can potentially generate a powerful new phenomenon in a Quantum Field Theory that can in principle be accessible perturbatively. The concept of Higgsplosion was presented, and by itself it is straightforward to understand; now we turn our attention to the underlying assumptions to see if they are consistent in such a way that we can have this mechanism. To do this, we review and introduce some potential consequences of Higgsplosion.
After that, we review some of the modern criticism made in two papers~\cite{Belyaev-aug-18,Monin-aug-18} and discuss some results from the lattice~\cite{lattice1,lattice2}. Having done that, we will try to estimate $E^{*}$ for the case where we have the decay rate Eq.~(\ref{eqimpor}). Finally, we try to understand the role of the perturbation expansion and its validity, and work out a toy model where the propagator decays exponentially. \subsection{The Potential Power of the Higgsplosion} The Higgsplosion mechanism could generate an interesting phenomenon even when we have only scalar fields. We saw in the computation of the one-loop amplitude in the broken, Eq.~(\ref{ampbro}), and unbroken, Eq.~(\ref{ampun}), phases that we had to deal with divergences; for the unbroken phase: \begin{align} \delta m^{2}_{1} =- \frac{\lambda_{R}\mathcal{I}_{1}}{4} \, , \end{align} \begin{align} \frac{\delta\lambda_{1}}{3!} =\frac{\lambda_{R}^{2}}{16} \left( \mathcal{I}_{2}+\frac{1}{2\pi^{2}} \right) \, , \end{align} and for the broken phase: \begin{align} \delta m_{1}^{2} = \frac{\lambda_{R}}{8}\left[ \mathcal{I}_{1} - M_{R}^{2}\left( \frac{3 \mathcal{I}_{2}}{4} + \frac{3}{8\pi^{2}} -\frac{\sqrt{3}}{16\pi}\right) \right] \, , \end{align} \begin{align} \frac{\delta \lambda_{1}}{3!} = \frac{\lambda_{R}^{2}}{48} \left[ \frac{3 \mathcal{I}_{2}}{2} +\frac{3}{4\pi^{2}} - \frac{\sqrt{3}}{16\pi} \right] \, . \end{align} These divergences appear because in the loop integral we integrate the propagator up to arbitrarily high momentum. This changes when we consider the Higgsplosion mechanism. The full propagator can substitute the propagators inside the loop if we reorganize the expansion order by order, as represented in Figure~\ref{rego}. Making this substitution in the theory, it becomes clear that we should not integrate over all momenta. The Higgsplosion shuts down the propagator above the scale $E^{*}$, so we integrate up to a sphere of radius $E^{*}$. This means that the theory is finite, and we have a natural scale at which to measure our observables. Doing the running of the coupling, we see that it shuts down when we hit the Higgsploding scale, entering the new phase of the theory, as represented in Figure~\ref{fazer}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{RGscalar} \caption{Representation of the coupling flow in a $\phi^{4}$ theory with Higgsplosion. The $\lambda^{*}_{max}$ is the largest coupling for which we can trust the expressions indicating the Higgsplosion. It could be that even outside this region the theory Higgsplodes at some point; then there would be a barrier where all couplings become constant, independently of where they started.} \label{fazer} \end{figure} In $\phi^{4}$ the flow makes the coupling grow; this means that if we do not start at an ultra-perturbative value, we will quickly move away from it. The expressions for the decay rate can only be trusted in this limit. If we start at a given $\lambda$ that is too big, we do not know if the theory will Higgsplode. This new phase depends on the value of the coupling being reached. Passing through this value, we need to find an $n$ and $\epsilon$ that can generate the growing decay rate of Eq.~(\ref{eqimpor}). This means that for each coupling value there exists a Higgsploding scale. The story could potentially be different if the Higgs of the Standard Model Higgsplodes: there the coupling is decreasing, signaling a potential vacuum decay.
The theory, in principle, would always flow to the perturbative regime where we can trust the semiclassical calculation and would always Higgsplode, provided it does not vacuum decay first. This is just a schematic picture, and further calculations are needed to understand the full flow of the coupling. The same behavior occurs for the mass, meaning that in a Higgsplosion mechanism the mass is of order $E^{*}$, not of the heaviest particle that exists in the theory. This shields the scalar from receiving quantum corrections above this scale and in principle could be used in the Standard Model to render the Higgs mass natural. The nature of this shielding needs to be understood if this mechanism happens in ordinary Quantum Field Theory, probably indicating an enhancement of symmetry in the system. This feature can be interpreted as a UV fixed point that appears dynamically. We do not know if the theory will stay on it forever. The reorganization of degrees of freedom could generate new dynamics, and the running would need to be properly analyzed. It could be that after the theory Higgsplodes, we cannot trust the calculations done with the Scalar Field because the theory ceases to create one-particle states, as represented diagrammatically in Figure~\ref{aa}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{conjecture} \caption{Representation of the possible phases inside the scalar theory if Higgsplosion occurs, but is shut down above a certain scale to preserve the properties that we expect from a normal Quantum Field Theory. It may be an indication that the UV completion of this theory is at a smaller scale than the Landau pole.} \label{aa} \end{figure} For the rest of the observables, the story stays pretty similar. The Higgspersion forces any process to be unitary and the coupling freezes. This feature is what makes Higgsplosion too good to be true. We can try to imagine what would happen if we implemented something like this in the Standard Model. When we want to apply this to the Standard Model, we have to be really careful. There is no semiclassical result for the case of additional matter fields. We cannot trust the perturbative calculations to say that the Standard Model will Higgsplode, but we can try to extract some qualitative results. The first thing that we want to know is the scale $E^{*}$. We cannot make a reliable prediction for the Standard Model case, but if we trust naturalness arguments for the Higgs mass parameter, the theory should Higgsplode at $10$ to $10^{3}$ TeV. This is an assumption coming from outside the theory, and it can be that this is not the case. If the Higgs in fact Higgsplodes, then any loop involving it would be rendered finite. This, however, does not apply to loops with other fields in the Standard Model. The running of the couplings would be modified above this scale, but there is no reason for them to freeze. If, however, all particles Higgsplode, as proposed in~\cite{Khoze-jun-17}, then the Standard Model would be UV finite. Perturbatively, we can see that there is not much difference between the scalar and the fermion Higgsploding, or even a spin-one particle, as represented in Figure~\ref{hpuniverse}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgsplodeuniv} \caption{Diagrammatic representation of different high multiplicity decays.} \label{hpuniverse} \end{figure} However, the perturbative analysis cannot be trusted for high multiplicity computations, as we will discuss at the end of this chapter.
The analysis using the perturbative assumption was made in~\cite{Khoze-jun-17,precision}. While in this thesis we are interested in the Standard Model application of this mechanism, we find it necessary to understand this proposal deeply before trying to use it. The focus now shifts to understanding the problems with this mechanism and some useful discussions that ensued. We need to understand if Higgsplosion can occur in the first place, and this is related to how far in parameter space we can trust the semiclassical expression in Eq.~(\ref{eqimpor}). In the end, we use some results coming from String Theory to study the exponential decay of a propagator in a toy model. \section{Some Questions about Higgsplosion} \subsection{Criticism From ``Problems with Higgsplosion''} We saw the evolution over time in the power to compute the decay rate in a Scalar Field Theory. With modern techniques~\cite{Libanov-jul-94}, it was possible to write the decay rate in the exponential form: \begin{align} \Gamma(E,n) \propto e^{\frac{F(\lambda n, \epsilon)}{\lambda}} \, . \end{align} For this expression to work, the conjecture is that at high multiplicity only the non-relativistic energy contributes. The whole deal with the Higgsplosion framework is to compute this function $F$ in an appropriate regime and analyze its behavior. For small $g$ and $\epsilon$, an expression for the holy grail function was found~\cite{Libanov-jul-94}, Eq.~(\ref{smallg}): \begin{align} F(g,\epsilon) = g \ln(\frac{g}{16}) - g + \frac{3g}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} g \epsilon + \frac{ \sqrt{3}}{4\pi} g^{2} \, , \end{align} where we ignore terms like $g^{3}$, $g^{2}\epsilon$ and $g\epsilon^{2}$. We can see that as $g \rightarrow 0$, $F \rightarrow -\infty$, and the decay rate is suppressed. The opposite is also true: increasing the coupling makes the decay rate grow exponentially. However, as is pointed out in~\cite{Belyaev-aug-18}, this expression is only valid in the limit: \begin{align} g \ll 1 \, ,\quad \epsilon \ll 1 \, . \end{align} So we cannot trust this expression for couplings of order 1: \begin{align} g \approx 1 \, , \quad n \approx \frac{1}{\lambda} \, . \end{align} In this range of validity (small $g$ and $\epsilon$), the expression is always negative and Higgsplosion would not occur. This changed when new methods for computing $F$ were developed~\cite{Khoze-jun-18} and it was extended to the new region $g \gg 1$: \begin{align} F(g,\epsilon) = g \left( \ln(\frac{g}{4}) +0.85g^{1/2}-1 + \frac{3}{2}\left( \ln(\frac{\epsilon}{3\pi})+1 \right) - \frac{25}{12} \epsilon +\dots \right) \, . \end{align} We can see that for sufficiently large coupling and small $\epsilon$ this function increases. The detailed analysis of this result is made in section~\ref{s10}. This result is one of the most important for the Higgsplosion proposal. It shows that in an appropriate limit, the decay rate can indeed grow. However, as is pointed out in~\cite{Belyaev-aug-18}, this semiclassical solution exists only for $\phi^{4}$ in the broken phase in (1+3)D. It is by no means a general result for an arbitrary scalar field, and the generalization to the Standard Model is not clear. It is argued in~\cite{Belyaev-aug-18} that the contributions of order $g^{2}\epsilon$ need to be taken into account so that we can determine the validity range of $\epsilon$ in this function. Without these terms, we cannot determine what values of $\epsilon$ can be trusted in this approximation.
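To get a rough feeling for these expressions before discussing the missing $g^{2}\epsilon$ terms further, the following sketch simply evaluates both regimes numerically; this is our own illustration, not part of the semiclassical derivation:
\begin{verbatim}
import numpy as np

def F_small_g(g, eps):
    # Small-g expression, valid only for g << 1 and eps << 1.
    return (g * np.log(g / 16) - g
            + 1.5 * g * (np.log(eps / (3 * np.pi)) + 1)
            - 25 / 12 * g * eps + np.sqrt(3) / (4 * np.pi) * g ** 2)

def F_large_g(g, eps):
    # Large-g expression; unknown g^2*eps terms limit its reliability.
    return g * (np.log(g / 4) + 0.85 * np.sqrt(g) - 1
                + 1.5 * (np.log(eps / (3 * np.pi)) + 1) - 25 / 12 * eps)

# For a fixed small eps, scan for the coupling where the exponent
# F/lambda would turn positive, i.e. where the rate starts to explode.
eps = 0.05
for g in (1.0, 5.0, 10.0, 20.0, 50.0):
    print(g, F_large_g(g, eps))
\end{verbatim}
For $\epsilon = 0.05$, the sign of the large-$g$ exponent changes between $g=20$ and $g=50$ in this sketch, illustrating the qualitative statement above; the unknown higher-order terms could move or remove this crossing.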
Such mixed terms could prevent the exponential growth of the decay rate and by consequence prevent the Higgsplosion from happening. The tricky part of this expression is that it has multiple parameters, and each of them plays a crucial role. On the other hand, at least in the ultra-small limit for $\epsilon$, this expression can be trusted, and we can find a coupling that makes the decay rate grow. Going away from this coupling could kill this growth, so we need to be careful with the RG flow in this theory. If, for instance, we have a small value of $\epsilon$ where the decay rate explodes, then we never probe larger values of $\epsilon$ because the theory would always decay before reaching this point. Nevertheless, the need for a better expression for $F(g,\epsilon)$, as argued in~\cite{Belyaev-aug-18}, is legitimate and something important to focus on for a better understanding of the Higgsplosion claim and even of Quantum Field Theory in general. In the last part of the paper~\cite{Belyaev-aug-18}, they turn their attention to the full propagator. There it is argued that we cannot resum the propagator because the expression of the 1PI diagram is not convergent. That was already addressed, so we do not comment any further here. Lastly, it is pointed out in~\cite{Belyaev-aug-18} that unitarity needs to be restored somehow if Higgsplosion does not enter into play. The behavior of Eq.~(\ref{eqimpor}) shows some potential danger of unitarity violation. From their side, they argue that a better expression for $F$ would play the role of restoring unitarity. From the Higgsplosion side, the Higgspersion plays that role in the theory. \subsection{Criticism From ``Inconsistencies with Higgsplosion''} Another source of discussion about Higgsplosion arises with the paper by Monin~\cite{Monin-aug-18}. In this paper, he argued for the impossibility of the Higgsplosion mechanism inside local Quantum Field Theory. To understand it better, let us run through his arguments and then discuss these results a little. We already saw the persistence of the exponential growth of the amplitude. Usually we are talking about the decay rates, but it is worth pointing out that in the same way that the decay rate explodes, the spectral density also explodes: \begin{align} \rho (E,N) \propto \epsilon^{\frac{3N}{2}} e^{\tilde{c} N^{3/2} \sqrt{\lambda}} \left( 1 + O(N\ln(N))\right) \, , \end{align} with $\tilde{c}$ being some constant. This, in the context of Higgsplosion, is related to the exponential decay of the propagator in the UV. While the author argues that this behavior is inconsistent with local Quantum Field Theory, he points out that if these expressions cease to be valid at some point, the natural cutoff for the theory is much smaller than the Landau pole; if this is not the case, the exponential behavior persists. Now, let us discuss a little why this behavior is unusual in an ordinary Quantum Field Theory. Normally, we have the spectral representation for the propagator (this could be any two insertions of a local operator): \begin{align} \Delta_{F}(p^{2}) = \int \mathrm{d}s\, \frac{1}{p^{2}-s+i\epsilon} \rho(s) \, . \end{align} It is expected that this expression is divergent, and we need to do several subtractions to have a finite result. This procedure is related to the usual renormalization that adds $m$ independent parameters that need to be fixed by experiments.
In this context, a non-renormalizable or non-local theory would require infinitely many subtractions and, as a consequence, an infinite number of new parameters that would need to be fixed: \begin{align} \Delta_{F} (p^{2})= P_{m-1}(p^{2}) + p^{2m}\int \dd{s} \frac{1}{p^{2}-s+i\epsilon} \frac{\rho(s)}{s^{m}} \, . \end{align} The general form of the propagator allows us to extract the spectral density from its imaginary part: \begin{align} -\frac{1}{\pi} \Im \left( \Delta_{F}(p^{2}) \right) = \rho (p^{2}) \, . \end{align} Up to this point, this is standard and very general Quantum Field Theory. What is done next is to make an assumption about what kind of distributions these operators are. We usually use tempered distributions, but they are very limiting, as pointed out in~\cite{Khoze-sep-18}. This happens because we cannot treat a non-renormalizable theory with such distributions~\cite{Jaffe-jan-67}. As a consequence, restricting to tempered distributions would create a possible inconsistency with the Higgsplosion mechanism. These distributions admit only a finite number of subtractions and as a consequence cannot fall faster than an arbitrary polynomial; an exponential decay would be inconsistent with this choice. This is addressed in~\cite{Khoze-sep-18}, where they argue that Higgsplosion could exist within local Quantum Field Theory if we do not restrict ourselves to such distributions. They show that, in principle, the Higgsplosion mechanism does not specify how fast the propagator falls beyond being exponential, and because of that it can be consistent with local Quantum Field Theory. Something that we find strange in this process is the necessity of an infinite number of subtractions in the Higgsplosion scenario. The usual Scalar Field Theory in (1+3)D with potential $\phi^{4}$ is known to be perturbatively renormalizable and local, and somehow this is potentially being lost through this phenomenon. If the theory became non-local above this scale, that would be strange behavior as well. It can happen that most of these parameters are not relevant, so that we need to fix only a finite number of constants and the theory remains local. This raises a flag about the possibility of such a feature. Maybe such a mechanism shuts down after some scale, saving the behavior at high energies while not making the theory UV finite. At intermediate energies, the theory would enter this new phase. We could see this new phase as a large number of field excitations behaving collectively. This reorganization of the degrees of freedom would happen at the Higgsplosion scale, and beyond it the new degrees of freedom would become the relevant ones. These would by themselves generate a new class of interactions that could govern the theory in the UV. In this picture, Higgsplosion is a phase transition inside the theory. In this interpretation, the Higgsploding scale becomes the cutoff where a UV completion would enter. Such a UV completion describes the dynamics of these reorganized degrees of freedom in the high-energy limit. This is discussed further in the last section of this chapter. Finally, we think that the discussion in the second appendix of~\cite{Monin-aug-18} is useful to better understand what is happening in the perturbative calculation. We present that discussion here and build on it later to dig deeper into this.
There, they propose a toy model, the (0+0)D Quantum Field Theory with a quartic interaction: \begin{align} \label{integral0} A(\lambda,N) = \int_{0}^{\infty} \dd{x} x^{2N} e^{-\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4}} \, . \end{align} The nice thing about this integral is that we can solve it exactly to test different approximation schemes. The solution is: \begin{align} A(\lambda,N) = \lambda^{-(\frac{N}{2} + \frac{1}{4})} \Gamma(N+\frac{1}{2}) F_{1,1}\left( \frac{N}{2}+\frac{1}{4},\frac{1}{2}, \frac{1}{4\lambda} \right) \, , \end{align} where $F_{1,1}$ is the confluent hypergeometric function. Doing ordinary perturbation theory on the integral generates an asymptotic series that is divergent in nature: \begin{align} \label{epepep} A(\lambda, N) \approx 2^{N-\frac{1}{2}} \Gamma(N + \frac{1}{2}) \left( 1 - \frac{\lambda}{4}(2N+3)(2N+1) + \dots \right) \, . \end{align} The interesting thing about this series is that it is not only $\lambda$ that needs to be small: in fact, $\lambda N$ needs to be small for this expression to be a good approximation. Because we are interested in the large $\lambda N$ limit, we need a different kind of approximation. In this case, we can follow the proposal of~\cite{Monin-aug-18} and use a nontrivial saddle point to do the perturbation theory, treating the $x^{2N}$ prefactor as part of the exponent: \begin{align} x^{2N} = e^{2N \ln(x)} \, . \end{align} Taking this into account, we can arrive at the expansion for $\lambda \rightarrow 0$ and $N \rightarrow \infty$: \begin{align} \label{epep} A(\lambda,N) \approx \frac{\sqrt{\pi}}{(8\lambda N)^{1/4}} e^{\frac{N}{2}(\ln(\frac{2N}{\lambda}) -1 - \sqrt{\frac{2}{\lambda N}})}(1 + O(\frac{1}{\sqrt{\lambda N}})) \, . \end{align} This expression is valid for $\lambda N \gg 1$, and it is exactly the kind of expression we need to discuss Higgsplosion. In this toy model we can check that both Eq.~(\ref{epepep}) and Eq.~(\ref{epep}) are correct in their respective regimes. Many parameters play important roles, and the balance between them determines the validity region of each expansion. Both papers discussing problems with Higgsplosion~\cite{Belyaev-aug-18,Monin-aug-18} show the necessity of finding a better expression for the beyond-threshold amplitude and decay rate. In the next section we discuss the arguments coming from lattice simulations that could contest the Higgsplosion hypothesis. After that, we try to estimate the value of $E^{*}$ from Eq.~(\ref{eqimpor}). \subsection{Lattice Results About Higgsplosion and Some Comments} The need for a better analytic expression for the decay rate can be bypassed if we can extract information about this process from the lattice. The main problem with this, ignoring the inherent difficulty of obtaining the result in 4D (the triviality problem~\cite{trivi1,trivi2,trivi3}), is that the objects of interest are highly virtual: it is not possible to calculate the off-shell decay rate or the imaginary part of the inverse propagator on the lattice. With this in mind, Yeo-Yie Charng searched for reasonable bounds on Higgsplosion in $\phi^{4}$ in $3D$ and $2D$~\cite{lattice1,lattice2}. Instead of looking for the decay rate, he looked for the cross section of two fermions going to $n$ scalars, as represented in Figure~\ref{fer}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgspersiona} \caption{Scattering of two fermions going to $n$ scalars.} \label{fer} \end{figure} Then, with the cross sections $\sigma_{n}$, he defined the inclusive cross section $\sigma$: \begin{align} \sigma = \sum_{n} \sigma_{n} \, .
\end{align} In this object we do not expect the factorial growth, because the Higgspersion mechanism would dominate. Since the computation of $\sigma$ is hard, he finds an inequality involving an object that is easy to compute on the lattice: \begin{align} \int \dd{s} \sigma(s) \leq \left( \frac{1}{Z'} -1 \right) \, , \end{align} where $Z'$ is defined as: \begin{align} \frac{\partial (G^{(2)})^{-1}}{\partial p^{2}} \vert_{p^{2}=0} = \frac{1}{Z'} \, . \end{align} This bound is non-perturbative in nature and limits how large the integral of the total cross section can be. This is what we expect from unitarity: no growing cross sections at arbitrarily high energy. He proceeded to simulate $\phi^{4}$ to obtain the value of $Z'$ and see how strong a bound we have. For this object, in $3D$, the result he gets at the 85\% confidence level is: \begin{align} \int \dd{s} \sigma(s) \leq 0.026 \, . \end{align} So we expect the inclusive cross section to be small. According to the predictions of Higgsplosion, this cross section should be exponentially suppressed at large $s$, making this possible. For the details of the simulation, it is recommended to read the original paper~\cite{lattice2}. This result on its own does not say much about the possibility of Higgsplosion, only that it is consistent, provided Higgspersion works as intended to restore unitarity. Maybe a more in-depth lattice investigation could settle any doubt about the possibility of such a mechanism; calculating the imaginary part of the inverse off-shell propagator could do the trick. Until then, the Higgsplosion mechanism checks out, at least on the lattice. \section{Analysis of the Decay Rate Expression: What We Can Say About $E^{*}$} \label{s10} The focus of this section is trying to find a value for the Higgsplosion scale $E^{*}$. This turns out to be a hard problem, because we do not know in what region of parameter space Eq.~(\ref{eqimpor}) is trustworthy. Specifically, we need to know how large $\epsilon$ can be. Before studying Eq.~(\ref{eqimpor}), let us look at the older result using a semiclassical approximation in the small $g$ limit, Eq.~(\ref{smallg}). Here we want to show that in this limit we do not have any exponential growth. To do that, we can fix a coupling and plot different values of $\epsilon$, trying to find a value of $n$ that makes $F(g,\epsilon)$ positive. We will choose three values of the coupling $\lambda$: 0.1, 0.01, and 0.001. These values are inside the perturbative regime for $\lambda \rightarrow 0$. The problem with the small $g$ limit is that the maximum allowed value of $n$ is such that $g$ is at most of order one. For larger values, we cannot say anything, and a more in-depth investigation is necessary. These are the safer values to ensure that the expression is inside the domain of validity. With that in mind, we have the plots for the different couplings in Figure~\ref{smalll}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{01small} \caption{Plot of the different values of $\epsilon$ for the small coupling expression of the decay rate. The maximum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda =0.1$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{001small} \caption{Plot of the different values of $\epsilon$ for the small coupling expression of the decay rate.
The maximum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.01$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{0001small} \caption{Plot of the different values of $\epsilon$ for the small coupling expression of the decay rate. The maximum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.001$.} \end{subfigure} \caption{The colored region is the trusted region between $\epsilon=0.1$ and $\epsilon= 0.001$; in reality, this region extends all the way to minus infinity as $\epsilon$ becomes smaller. We chose to plot only the regions closer to zero.} \label{smalll} \end{figure} In these three plots, it is clear that the function $F(g,\epsilon)$ never becomes positive. This indicates that there is no exponential growth and, as a consequence, no Higgsplosion. The story is different when we go to the strong $g$ limit. In this limit, we need to use Eq.~(\ref{eqimpor}) with values of $n$ such that $\lambda n$ is at least one. We can now plot the different values of $\epsilon$ for each choice of $\lambda$ just as before, as represented in Figure~\ref{stronggg}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{01strong} \caption{Plot of the different values of $\epsilon$ for the strong coupling expression of the decay rate. The minimum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.1$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{001strong} \caption{Plot of the different values of $\epsilon$ for the strong coupling expression of the decay rate. The minimum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.01$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{0001strong} \caption{Plot of the different values of $\epsilon$ for the strong coupling expression of the decay rate. The minimum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.001$.} \end{subfigure} \caption{The colored region is the trusted region between $\epsilon=0.1$ and $\epsilon= 0.001$; in reality, this region extends all the way to minus infinity as $\epsilon$ becomes smaller. We chose to plot only the regions closer to zero.} \label{stronggg} \end{figure} In the large $g$ region we can get exponential growth. There is always a point where the behavior shifts from exponential suppression to exponential growth; this point is what characterizes the Higgsplosion scale $E^{*}$. For each $\lambda$ we have a different Higgsplosion scale. The precise determination of this scale is only possible if we know the maximum allowed value of $\epsilon$. We chose the region below $\epsilon =0.1$ to extract values, but in principle this bound could be even smaller; the smaller the $\epsilon$, the larger the value of $E^{*}$. For instance, fixing $\lambda =0.1$ and $\epsilon=0.1$, the value of the Higgsploding scale is approximately: \begin{align} E^{*}\approx 300m \, , \end{align} while for $\lambda =0.01$ and $\epsilon=0.1$ the value of the Higgsploding scale is approximately: \begin{align} E^{*} \approx 3000m \, . \end{align} Finally, for $\lambda =0.001$ and $\epsilon=0.1$ we get: \begin{align} E^{*} \approx 30000m \, . \end{align} This behavior is similar to what is predicted in~\cite{precision} for the general form of the Higgsploding scale: \begin{align} E^{*}= C \frac{m}{\lambda} \, . \end{align} In our case, we obtain $C \approx 30$ because we fixed $\epsilon=0.1$.
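These numbers are easy to reproduce: the sketch below finds the zero crossing of the large-$g$ expression for $F(g,\epsilon)$ and converts it into a scale via $n = g/\lambda$ and $E \approx n\,m\,(1+\epsilon)$. This is our own cross-check, using only the truncation of $F$ quoted above; the function names are ours.
\begin{verbatim}
from math import log, pi, sqrt
from scipy.optimize import brentq

def F_over_g(g, eps):
    # Large-g expression for F(g, eps)/g, truncated as in the text
    return (log(g / 4) + 0.85 * sqrt(g) - 1
            + 1.5 * (log(eps / (3 * pi)) + 1) - 25 / 12 * eps)

eps = 0.1
g_star = brentq(lambda g: F_over_g(g, eps), 1.0, 1000.0)  # sign change of F
for lam in (0.1, 0.01, 0.001):
    print(lam, "E* ~", g_star / lam * (1 + eps), "m")
# g_star ~ 29, giving E* ~ 320m, 3200m, 32000m: C ~ 30 as quoted
\end{verbatim}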
The value of this constant is not too far from order one, respecting the naturalness argument. These values follow from the choice of $\epsilon$, and possibly larger choices would give a more natural value for this constant. To obtain more precise values for the Higgsplosion scale, we need to understand the limitations of Eq.~(\ref{eqimpor}); if one obtains the next-order correction in $\epsilon$, this becomes an achievable task. Now that we have more or less covered what we can say about the Higgsplosion scale, let us enter the discussion of the applicability of perturbation theory and a new interpretation of the Higgsplosion mechanism, using two toy models. \section{0-Dimensional Case Study} Let us try to better understand the approximations that we are making. The case study is a modification of the integral in Eq.~(\ref{integral0}) (the modification is almost trivial, as we just extend the domain of integration): \begin{align} Z[m,\lambda] = \int_{-\infty}^{\infty} \dd{x} e^{-m\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4!}} \, . \end{align} Ultimately we are interested in calculating the expectation value of $2N$ ``fields'': \begin{align} A[m,\lambda, N] = \int_{-\infty}^{\infty} \dd{x} x^{2N} e^{-m\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4!}} \, . \end{align} These are the ``$2N$-point Green functions'' (moments) of a zero-dimensional field theory. We could instead work with the connected amplitudes (cumulants); however, since we are interested in testing approximation methods, either choice works equally well. We can relate $A[m,\lambda,N]$ and $Z[m,\lambda]$ by taking derivatives with respect to $m$. This case is useful because we can in fact solve the integral exactly to compare with the approximations. The first approximation that we will consider is small $\lambda$, for both functions. We want to see whether this small-$\lambda$ approximation can be trusted in the high multiplicity limit. For $Z[m,\lambda]$, the perturbative expansion can be found easily by expanding the exponential and exchanging the integration and summation: \begin{align} Z_{p}[m,\lambda] \thicksim \sum_{n=0}^{\infty} \left( \left( \frac{2}{m} \right)^{1/2} \left( \frac{1}{6m^{2}}\right)^{n} \frac{\Gamma(2n+\frac{1}{2})}{n!} \right) \lambda^{n} \quad \text{as} \quad \lambda \rightarrow 0 \, . \end{align} The expansion for the amplitude is analogous, and we obtain: \begin{align} A_{p}[m,\lambda,N] \thicksim \sum_{n=0}^{\infty} \left( \left( \frac{2}{m}\right)^{\frac{1+2N}{2}} \left( \frac{1}{6m^{2}} \right)^{n} \frac{\Gamma(2n+N+\frac{1}{2})}{n!} \right) \lambda^{n} \quad \text{as} \quad \lambda \rightarrow 0 \, . \end{align} Both expressions are valid when $\lambda$ is small compared to $m$. Let us see whether the partial sums of these series are a good approximation to the full result, which in this case we can compute: \begin{align} Z[m,\lambda] = \sqrt{\frac{3m}{\lambda}} e^{\frac{3m^{2}}{4\lambda}} K\left(\frac{1}{4},\frac{3m^{2}}{4\lambda}\right) \, , \end{align} \begin{align} A[m,\lambda,N] = 2^{-\frac{3}{4} + \frac{3N}{2}} 3^{\frac{1}{4}+\frac{N}{2}} \lambda^{-\frac{3}{4}-\frac{N}{2}} \Big( \sqrt{2\lambda}\, \Gamma(\frac{1}{4} + \frac{N}{2}) F_{1,1}\left( \frac{1}{4}+\frac{N}{2},\frac{1}{2},\frac{3m^{2}}{2\lambda}\right) \nonumber \\ - 2\sqrt{3m^{2}}\, \Gamma(\frac{3}{4} + \frac{N}{2}) F_{1,1}\left( \frac{3}{4}+\frac{N}{2},\frac{3}{2},\frac{3m^{2}}{2\lambda}\right) \Big) \, , \end{align} where $F_{1,1}[a,b,z]$ is the confluent hypergeometric function and $K[a,z]$ is the modified Bessel function of the second kind.
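Since everything here is a one-dimensional integral, these statements are straightforward to check numerically. The sketch below (our own illustration, not part of the original analysis) compares direct quadrature of $A[m,\lambda,N]$ with partial sums of the small-$\lambda$ series; note that in our implementation the alternating sign $(-\lambda)^{n}$ comes from expanding $e^{-\lambda x^{4}/4!}$.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30  # extra precision: the numbers grow very large with N

def A_exact(m, lam, N):
    # Direct quadrature of the 2N-th moment with the quartic weight
    f = lambda x: x**(2 * N) * mp.e**(-m * x**2 / 2 - lam * x**4 / 24)
    # split at the peak (x ~ sqrt(2N)) to help the quadrature
    return 2 * mp.quad(f, [0, mp.sqrt(2 * N + 1), mp.inf])

def A_series(m, lam, N, terms):
    # Partial sum of the small-lambda asymptotic series
    s = mp.mpf(0)
    for n in range(terms + 1):
        s += ((2 / mp.mpf(m))**((1 + 2 * N) / mp.mpf(2))
              * (1 / (6 * mp.mpf(m)**2))**n
              * mp.gamma(2 * n + N + mp.mpf('0.5')) / mp.factorial(n)
              * (-lam)**n)
    return s

for N in (1, 10, 100):  # lambda*N = g grows down the list
    print(N, mp.nstr(A_exact(1, 0.1, N), 6), mp.nstr(A_series(1, 0.1, N, 2), 6))
\end{verbatim}
Running this shows the pattern of the tables that follow: the partial sums are excellent while $g = \lambda N$ is small and become useless once $g$ is of order one or larger, regardless of how small $\lambda$ itself is.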
To test the approximations, we set $m=1$ and scan different scales of the coupling. The first test is the approximation for the partition function, which is expected to be good for small $\lambda$; some values are listed in Table~\ref{tabela1}. For all the analysis done here we use the notation $X^{(n)}$, meaning the partial sum up to the $n$th term of the series for $X$. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $\lambda$ & 0.01 & 0.1 & 1 \\ \hline $Z_{ap}^{(1)}$ & 2.50976 & 2.5379 & 2.8199 \\ \hline $Z_{ap}^{(2)}$ & 2.50978 & 2.5420 & 3.0484 \\ \hline $Z_{ap}^{(3)}$ & 2.50978 & 2.5405 & 3.3625 \\ \hline $Z_{ex}$ & 2.50352 & 2.4773 & 2.3033 \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the partition function.} \label{tabela1} \end{table} Considering the approximation for $Z[\lambda]$, we can see that for larger values of $\lambda$ the partial sums start to become worse. In this case they are not so far off from the exact value; let us see whether the same holds for the amplitude $A[\lambda,N]$. It is important to remember that this analysis is only for the partial sums, and different resummation methods could improve these results. When we introduce different values of $N$, the approximation starts to break down, signaling that the appropriate coupling is indeed $\lambda N=g$, as shown in Tables~\ref{tabela2},~\ref{tabela3} and~\ref{tabela4}. For all the numerical values we set $m=1$. \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & $0.1$ & $1$ & $10$ & $30$ & $60$ & $100$ \\ \hline $N$ & $1$ & $10$ & $100$ & $300$ & $600$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.1]$ & $2.50663$ & $1.641 \times 10^{9}$ & $1.671 \times 10^{187}$ & $5.0883 \times 10^{703}$ & $3.0313 \times 10^{1587}$ & $1.9279 \times 10^{2867}$ \\ \hline $A_{ap}^{(1)}[0.1]$ & $2.55329$ & $4.944 \times 10^{9}$ & $2.8576 \times 10^{189}$ & $7.6885 \times 10^{706}$ & $1.8251 \times 10^{1591}$ & $3.2199 \times 10^{2871}$ \\ \hline $A_{ap}^{(2)}[0.1]$ & $2.68385$ & $9.5886 \times 10^{9}$ & $2.5401 \times 10^{191}$ & $5.8860 \times 10^{709}$ & $5.3124 \times 10^{1594}$ & $2.699 \times 10^{2875}$ \\ \hline $A_{ex}[0.1]$ & $2.36727$ & $3.6169 \times 10^{8}$ & $4.7914 \times 10^{161}$ & $2.1524 \times 10^{580}$ & $7.1316 \times 10^{1271}$ & $2.5246 \times 10^{2250}$ \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.1$.} \label{tabela2} \end{table} \begin{table}[h!]
\centering {\small \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & $0.01$ & $0.1$ & $1$ & $3$ & $6$ & $10$ \\ \hline $N$ & $1$ & $10$ & $100$ & $300$ & $600$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.01]$ & $2.5066$ & $1.6911 \times 10^{9}$ & $1.671 \times 10^{187}$ & $5.0883 \times 10^{703}$ & $3.0313 \times 10^{1587}$ & $1.9279 \times 10^{2867}$ \\ \hline $A_{ap}^{(1)}[0.01]$ & $2.5222$ & $1.9714 \times 10^{9}$ & $3.008 \times 10^{188}$ & $7.7343 \times 10^{705}$ & $1.8278 \times 10^{1590}$ & $3.2216 \times 10^{2870}$ \\ \hline $A_{ap}^{(2)}[0.01]$ & $2.5225$ & $2.0178 \times 10^{9}$ & $2.8123 \times 10^{189}$ & $5.9557 \times 10^{707}$ & $5.5977 \times 10^{1592}$ & $2.7024 \times 10^{2873}$ \\ \hline $A_{ex}[0.01]$ & $2.4911$ & $1.3521 \times 10^{9}$ & $3.1363 \times 10^{181}$ & $5.2283 \times 10^{665}$ & $5.8780 \times 10^{1471}$ & $2.8990 \times 10^{2614}$ \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.01$.} \label{tabela3} \end{table} \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & $0.001$ & $0.01$ & $0.1$ & $0.3$ & $0.6$ & $1$ \\ \hline $N$ & $1$ & $10$ & $100$ & $300$ & $600$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.001]$ & $2.50663$ & $1.6911 \times 10^{9}$ & $1.671 \times 10^{187}$ & $5.0883 \times 10^{703}$ & $3.0313 \times 10^{1587}$ & $1.9279 \times 10^{2867}$ \\ \hline $A_{ap}^{(1)}[0.001]$ & $2.50819$ & $1.6741 \times 10^{9}$ & $4.5119 \times 10^{187}$ & $8.1922 \times 10^{704}$ & $1.8551 \times 10^{1589}$ & $3.2389 \times 10^{2869}$ \\ \hline $A_{ap}^{(2)}[0.001]$ & $2.50820$ & $1.6746 \times 10^{9}$ & $7.0239 \times 10^{187}$ & $6.6976 \times 10^{705}$ & $5.7149 \times 10^{1590}$ & $2.7316 \times 10^{2871}$ \\ \hline $A_{ex}[0.001]$ & $2.50506$ & $1.6085 \times 10^{9}$ & $3.2239 \times 10^{186}$ & $5.2109 \times 10^{697}$ & $2.1500 \times 10^{1565}$ & $7.0608 \times 10^{2810}$ \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.001$.} \label{tabela4} \end{table} It is easy to see that for $\lambda =0.1$ only the $N=1$ case is close to the exact result, and going to larger $N$ only makes things worse, while for $\lambda =0.001$ we can go up to $N=100$ and still get a close enough result. This shows that besides $\lambda$ being small, $g$ needs to be controlled as well. We can even consider an extreme case: \begin{align} \lambda N = 0.0001 \, , \quad \lambda = 0.000001 \, ,\quad N=100 \, . \end{align} In this case the exact answer and the partial sum up to the first term of the series are close: \begin{align} A_{ex} = 1.66816 \times 10^{187} \, , \end{align} \begin{align} A_{ap}^{(1)} = 1.6994 \times 10^{187} \, . \end{align} This shows that the true perturbation parameter is $\lambda N= g$. We can make different approximations for this integral to explore different limits. The first thing we can do is a strong coupling expansion. For this we need to reorganize the integral: we set $m=1$ again and make the substitution: \begin{align} z= \lambda^{1/4} x \, , \end{align} followed by a renaming of the coupling: \begin{align} c = \frac{1}{\sqrt{\lambda}} \, , \end{align} such that the integral now is: \begin{align} A[c,N] = \sqrt{c} \int_{-\infty}^{\infty} \dd{z} c^{N} z^{2N} e^{- \frac{z^{4}}{4!} - c \frac{z^{2}}{2}} \, .
\end{align} The procedure will be the same: we can solve it exactly and do an expansion for small $c$, which is equivalent to large $\lambda$: \begin{align} A[c,N] = \frac{2}{2N+1} 6^{\frac{N}{2}+\frac{1}{4}} c^{N + \frac{1}{2}} \Gamma(\frac{3}{2}+N) U \left( \frac{1}{4}+\frac{N}{2},\frac{1}{2},\frac{3c^{2}}{2} \right) \, , \end{align} \begin{align} A[c,N] \thicksim \sum_{n=0}^{\infty} \left( 2^{\frac{2n+6N-1}{4}} 3^{\frac{1+2N+2n}{4}} \Gamma(\frac{1}{4} + \frac{n+N}{2})(-1)^{n} \right)\frac{c^{n+N+\frac{1}{2}}}{n!} \quad \text{as} \quad c \rightarrow 0 \, , \end{align} where $U[a,b,z]$ is the confluent hypergeometric function of the second kind. We can test this approximation for different values of $c$ and $N$, as shown in Tables~\ref{tabela5},~\ref{tabela6} and~\ref{tabela7}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|l|} \hline $\lambda$ & 10000 & 100 & 1 & 0.01 \\ \hline $c$ & 0.01 & 0.1 & 1 & 10 \\ \hline $A_{ap}^{(1)}[1]$ & 0.00652336 & 0.172028 & -5.39346 & -3596 \\ \hline $A_{ap}^{(2)}[1]$ & 0.00652635 & 0.181483 & 24.5033 & 90945.6 \\ \hline $A_{ap}^{(3)}[1]$ & 0.00652626 & 0.1786318 & -65.7756 & $-2.76392 \times 10^{6}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[1]$} & \multicolumn{1}{l|}{0.00652484} & \multicolumn{1}{l|}{0.177318} & \multicolumn{1}{l|}{1.72872} & 2.49116 \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $N=1$ in the strong coupling limit.} \label{tabela5} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|l|} \hline $\lambda$ & 10000 & 100 & 1 & 0.01 \\ \hline $c$ & 0.01 & 0.1 & 1 & 10 \\ \hline $A_{ap}^{(1)}[10]$ & $2.9328 \times 10^{-13}$ & $0.0044344$ & $-1.3902 \times 10^{9}$ & $-5.2795 \times 10^{20}$ \\ \hline $A_{ap}^{(2)}[10]$ & $2.9426 \times 10^{-13}$ & $0.0075253$ & $8.3837 \times 10^{9}$ & $ 3.0380 \times 10^{22}$ \\ \hline $A_{ap}^{(3)}[10]$ & $2.9420 \times 10^{-13}$ & $0.0056700$ & $-5.0286 \times 10^{10}$ & $-1.8249 \times 10^{24}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[10]$} & \multicolumn{1}{l|}{$2.9376 \times 10^{-13}$} & \multicolumn{1}{l|}{$0.0057133$} & \multicolumn{1}{l|}{$2.5129 \times 10^{6}$} & $1.1521 \times 10^{9}$ \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $N=10$ in the strong coupling limit.} \label{tabela6} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|l|} \hline $\lambda$ & 10000 & 100 & 1 & 0.01 \\ \hline $c$ & 0.01 & 0.1 & 1 & 10 \\ \hline $A_{ap}^{(1)}[100]$ & $1.51361 \times 10^{-69}$ & $-4.23803 \times 10^{31}$ & $-2.98781 \times 10^{133}$ & $-9.96931 \times 10^{234}$ \\ \hline $A_{ap}^{(2)}[100]$ & $1.56881 \times 10^{-69}$ & $1.32163 \times 10^{32}$ & $5.22077 \times 10^{134}$ & $1.73547 \times 10^{237}$ \\ \hline $A_{ap}^{(3)}[100]$ & $1.55915 \times 10^{-69}$ & $-1.73165 \times 10^{32}$ & $-9.13326 \times 10^{135}$ & $-3.03593 \times 10^{239}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[100]$} & \multicolumn{1}{l|}{$1.53967 \times 10^{-69}$} & \multicolumn{1}{l|}{$ 1.03189 \times 10^{31}$} & \multicolumn{1}{l|}{$1.13733 \times 10^{125}$} & $3.13631 \times 10^{181}$ \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $N=100$ in the strong coupling limit.} \label{tabela7} \end{table} In this case, the approximation is good for small $c$ and moderate multiplicity, as expected. In terms of $\lambda$, this is the approximation for $\lambda \rightarrow \infty$.
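The same kind of numerical check works here. The sketch below (again our own illustration) compares direct quadrature of the rescaled integral with the small-$c$ partial sums; it reproduces the qualitative behavior of Tables~\ref{tabela5}--\ref{tabela7}.
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30

def A_quad(c, N):
    # A[c,N] = sqrt(c) c^N Int dz z^{2N} exp(-z^4/24 - c z^2/2)
    f = lambda z: z**(2 * N) * mp.e**(-z**4 / 24 - c * z**2 / 2)
    return mp.sqrt(c) * c**N * 2 * mp.quad(f, [0, mp.inf])

def A_small_c(c, N, terms):
    # Partial sum of the small-c (large lambda) asymptotic series
    s = mp.mpf(0)
    for n in range(terms + 1):
        s += (mp.mpf(2)**((2 * n + 6 * N - 1) / mp.mpf(4))
              * mp.mpf(3)**((1 + 2 * N + 2 * n) / mp.mpf(4))
              * mp.gamma(mp.mpf('0.25') + (n + N) / mp.mpf(2))
              * (-1)**n * c**(n + N + mp.mpf('0.5')) / mp.factorial(n))
    return s

for c in (0.01, 0.1, 1, 10):  # cN grows along the list for fixed N
    print(c, mp.nstr(A_quad(c, 1), 6), mp.nstr(A_small_c(c, 1, 3), 6))
\end{verbatim}
For $N=1$ the partial sums track the quadrature closely at $c=0.01$ and $c=0.1$ and break down around $c \approx 1$, as in Table~\ref{tabela5}.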
The expansion parameter in this case is $cN$, so we can see that at high multiplicity we can still get a good approximation if we pick values of $cN$ that are small. Here we can explore a different regime, where $\lambda$ and $N$ are large, provided that $cN$ is small. The semiclassical limit is something like $\lambda \rightarrow 0$ and $N \rightarrow \infty$; if we want to mimic this behavior, we need to make yet another approximation. Given these two cases, we can see that the perturbation parameter is not the naive one: $N$ mixes with it and determines how good an approximation the partial sum of the series is. With this toy model, we can see that at high multiplicity the small coupling expansion is useless as a partial sum. If we use Pad\'e approximants on the coefficients, we get a better approximation, but this is not in the spirit of the interpretation of exponential growth of amplitudes at tree level. Using this example, we can see that it is not possible to draw that conclusion using perturbation theory alone. This leaves only the semiclassical calculation indicating the exponential growth of the decay rate. If we want to understand this approximation, we can try to make a saddle point approximation of this integral, as suggested in~\cite{Monin-aug-18}, using the effective action: \begin{align} \label{int10} A[m,\lambda, N] = \int_{-\infty}^{\infty} \dd{x} e^{-m\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4!} + 2N \ln(x)} \, . \end{align} Because we want to make a saddle point approximation with $N$ as the large parameter, we change variables to arrange the integral in the canonical form: \begin{align} x = \sqrt{N}z \, . \end{align} Then Eq.~(\ref{int10}) becomes: \begin{align} A[m,\lambda , N ] = \sqrt{N} e^{N\ln(N)} \int_{-\infty}^{\infty} \dd{z} e^{N W(z)} \, , \end{align} where $W(z)$ is the function whose saddle points we will expand around: \begin{align} W(z) = -m \frac{z^{2}}{2} - g \frac{z^{4}}{4!} +2 \ln(z) \, . \end{align} In this action, we used the definition of the 't Hooft-like coupling $g = \lambda N$. There is a final step before starting the approximation. The integral is symmetric under parity, which means that the saddle point approximation will only pick up one side; because of that, we can write the integral as two copies of a single side. This guarantees that we are making the right approximation for the integral: \begin{align} \label{int20} A[m,\lambda,N] = 2 \sqrt{N}e^{N\ln(N)} \int_{0}^{\infty} \dd{z} e^{N W(z)} \, . \end{align} Because we will do a numerical check on this approximation, we set $m=1$ so we can compare with the results obtained so far. Now we need to look for the saddle points of $W(z)$: \begin{align} W'(z) = -z - g \frac{z^{3}}{3!} + \frac{2}{z} =0 \, . \end{align} This equation has four solutions; we pick only the positive $z_{0}$ solutions and then choose the one where the function has a maximum: \begin{align} W''(z_{0}) = -1 - g \frac{z^{2}_{0}}{2} - \frac{2}{z_{0}^{2}} < 0 \, . \end{align} Using these conditions, we get the saddle point around which we will expand the integral: \begin{align} z_{0} = \sqrt{\frac{-3 + \sqrt{9 + 12g}}{g}} \, . \end{align} Proceeding with the saddle point approximation, we do a change of variables in Eq.~(\ref{int20}): \begin{align} z= z_{0} + \frac{y}{\sqrt{N}} \, .
\end{align} Expanding the exponent around the saddle point gives: \begin{align} N W(z) = NW(z_{0}) + \frac{y^{2}}{2} W''(z_{0}) + \frac{y^{3}}{6\sqrt{N}} W'''(z_{0}) + \dots \, . \end{align} Since we are making a large $N$ expansion, we can expand the exponential in powers of $\frac{1}{\sqrt{N}}$: \begin{align} e^{N W(z)} = e^{N W(z_{0})} e^{\frac{y^{2}}{2}W''(z_{0})} \left(1 + \frac{y^{3}}{6\sqrt{N}}W'''(z_{0}) + \frac{3 y^{4}W''''(z_{0}) + y^{6}(W'''(z_{0}))^{2}}{72N} + \dots \right) \, . \end{align} In the large $N$ limit we can extend the domain of integration back to the whole real line; this can be done because the integral is localized around the saddle. The expansion of Eq.~(\ref{int10}) becomes: \begin{align} A[\lambda,N] = 2 e^{N(W(z_{0})+\ln(N))} \int_{-\infty}^{\infty} \dd{y} e^{\frac{y^{2}}{2}W''(z_{0})} \left(1 + \frac{y^{3}}{6\sqrt{N}}W'''(z_{0}) + \frac{3 y^{4}W''''(z_{0}) + y^{6}(W'''(z_{0}))^{2}}{72N} + \dots \right) \, . \end{align} Doing the Gaussian integrals, we get the saddle point approximation for the integral. Here we retain terms only up to $N^{-1}$, but the remaining terms can be easily generated following the steps shown: \begin{align} A[\lambda,N] = 2 e^{N(W(z_{0})+\ln(N))} \sqrt{\frac{2\pi}{-W''(z_{0})}} \left( 1 + \frac{W''''(z_{0})}{8(-W''(z_{0}))^{2}N} + \frac{5 (W'''(z_{0}))^{2}}{24(-W''(z_{0}))^{3}N} + \dots \right) \, . \end{align} The expressions for the different derivatives of $W$ at the point $z_{0}$ are: \begin{align} -W(z_{0}) = \frac{-3 + 2g + \sqrt{9 +12g} - 4g \ln(\frac{-3 +\sqrt{9+12g}}{g})}{4g} \, , \end{align} \begin{align} -W''(z_{0}) = \frac{6 + 8g - 2 \sqrt{9+12g}}{\sqrt{9 + 12g}-3} \, , \end{align} \begin{align} W'''(z_{0}) = \frac{-8g + 6\left( \sqrt{9+12g}-3 \right)}{g\left(\frac{-3+ \sqrt{9+12g}}{g}\right)^{3/2}} \, , \end{align} \begin{align} W''''(z_{0} )= -g \left( 1 + \frac{12g}{(-3+ \sqrt{9+12g})^{2}} \right) \, . \end{align} Let us investigate how good this approximation is in the strong coupling semiclassical limit: \begin{align} N \rightarrow \infty \, , \quad \lambda \rightarrow 0 \, , \quad g \rightarrow \infty \, . \end{align} For this, let us construct the comparison for the following values of the coupling: $\lambda = 0.1, 0.01, 0.001$. We scan this approximation in the large $g$ limit, which in principle is the regime of interest for the Higgsplosion mechanism. The values for these couplings are in Tables~\ref{tabela8},~\ref{tabela9}, and~\ref{tabela10}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $N$ & $100$ & $1000$ & $10000$ \\ \hline $g$ & $10$ & $100$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.1]$ & $4.7939 \times 10^{161}$ & $1.1376 \times 10^{125}$ & $4.6363 \times 10^{79}$ \\ \hline $A_{ap}^{(1)}[0.1]$ & $4.7888 \times 10^{161}$ & $1.115 \times 10^{125}$ & $3.9322 \times 10^{79}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[0.1]$} & \multicolumn{1}{l|}{$4.7914 \times 10^{161}$} & \multicolumn{1}{l|}{$2.5246 \times 10^{2250}$} & \multicolumn{1}{l|}{$8.3371 \times 10^{25318}$} \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.1$ in the strong coupling limit of $g$.} \label{tabela8} \end{table} \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|c|} \hline $N$ & $1000$ & $10000$ & $100000$ \\ \hline $g$ & $10$ & $100$ & $1000 $ \\ \hline $A_{ap}^{(0)}[0.01]$ & $2.89919 \times 10^{2614}$ & $2.5247 \times 10^{2250}$ & $ 5.5863 \times 10^{1798}$ \\ \hline $A_{ap}^{(1)}[0.01]$ & $2.89888 \times 10^{2614}$ & $2.5197 \times 10^{2250}$ & $5.5015 \times 10^{1798}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[0.01]$} & \multicolumn{1}{l|}{$2.67897 \times 10^{2614}$} & \multicolumn{1}{l|}{$1.7379 \times 10^{29044}$} & \multicolumn{1}{l|}{$4.3090 \times 10^{293359}$} \\ \hline \end{tabular} \caption{Comparison between the partial sum and the exact result for the amplitude with $\lambda=0.01$ in the strong coupling limit of $g$.} \label{tabela9} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $N$ & $10000$ & $100000$ & $1000000$ \\ \hline $g$ & $10$ & $100$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.001]$ & $1.89717 \times 10^{36142}$ & $7.33370 \times 10^{32503}$ & $3.6033 \times 10^{27989}$ \\ \hline $A_{ap}^{(1)}[0.001]$ & $1.89715 \times 10^{36142}$ & $7.31224 \times 10^{32503}$ & $3.5978 \times 10^{27989}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[0.001]$} & \multicolumn{1}{l|}{$1.993025 \times 10^{30940}$} & \multicolumn{1}{l|}{$2.83770 \times 10^{312318}$} & \multicolumn{1}{l|}{$9.7164 \times 10^{3126099}$} \\ \hline \end{tabular} \caption{Comparison between the partial sum and the exact result for the amplitude with $\lambda=0.001$ in the strong coupling limit of $g$.} \label{tabela10} \end{table} In these results, the huge numbers are appearing because we are not working only with the connected amplitudes. We can see that the saddle point is a good approximation for $g=10$ and $N=100$ and $1000$. Any other value the approximation is not useful for the partial sum. This indicates that this saddle point approximation is trustworthy in the $g \approx 10 $ limit and deviates for larger values. With this result, we obtain three different regions of investigation. Ordinary perturbation theory for high multiplicity works when $\lambda N = g$ is small. Going for strong perturbation theory in the high multiplicity case, it works for $\frac{N}{\sqrt{\lambda}}$ small, and this means that we can explore the high $\lambda$ limit. Finally, the saddle point approximation works best when $N$ is large, $\lambda$ small and $g$ of order ten. If the semiclassical calculation for the Decay rate follows this behavior, this could be bad news because we want to explore larger values of $g$. It is worth to remember that this is just a toy model, and the real limitation of the Eq.~(\ref{eqimpor}) cannot be extracted only from this. We can take from this that perturbation theory is not useful for partial sums in the high multiplicity limit, only in the ultra-perturbative limit. This is not the end of the story, and a useful approximation for large $g$ is still needed. Despite that for moderate values of $g$ and $N$, the semiclassical approximation can be trusted, and this could be enough to realize the Higgsplosion Mechanism. The next section, we introduce the last toy model and a new interpretation of the Higgsplosion mechanism. \section{Exponential Decay of the Propagator, String Inspired Toy Model} The appearance of an exponentially suppressed propagator is an unusual feature of a Quantum Field Theory. Most of the known cases have some form of non-locality in it. This does not mean that we cannot use these models to describe some physical system. 
Generally, in these theories there is a scale $\Lambda$ where the non-locality becomes dominant. Below such a scale, the system is causal and behaves like a local theory. If we want to understand the Higgsplosion mechanism near the Higgsplosion scale $E^{*}$, we can try to use a toy model that mimics this exponential behavior. It turns out that there is a physical system where an exponential decay of the propagator occurs. This system appears in String Field Theory~\cite{string3}, and we do not need to understand the String Theory side fully in order to use it; understanding String Field Theory would be a thesis on its own. We show the origin of such an action just for the sake of completeness, but once the theory is given, we can treat it like any other Quantum Field Theory. This feature of exponential decay of propagators is a common occurrence in String Theory and String Field Theory~\cite{uv}. The physical system is tachyon condensation in open strings~\cite{string1,string2,string3}. If an open string is attached to an unstable $D$-brane, then there is a tachyon in the spectrum. The tachyonic vacuum then describes a vacuum where the unstable $D$-brane decays and disappears. This feature is similar to what is proposed in this thesis about Higgsplosion entering a new phase, because all the single-particle states are gone. Using String Field Theory, it was possible to show that there is indeed no particle excitation in this vacuum configuration. If we try to describe this process using local fields, in the level 0\footnote{Level expansion is the expansion in the different excitations of the String Field. This is justified because the couplings to the higher excitations decrease exponentially. Level 0 is just the tachyon, and level 1 is the tachyon plus a massless vector boson, and so on.} sector of the open bosonic String Field Theory we have the following action: \begin{align} S_{tachyon} = \int \dd[26]{x} \left[ \frac{1}{2} \phi e^{-\frac{\Box}{M^{2}}}(\Box + \mu^{2})\phi - \frac{g}{3} \phi^{3} \right] \, , \end{align} where $\phi$ is the tachyonic field with mass $m^{2}=-\mu^{2}$ that describes the instability of the $D$-brane. The exponential factor with the d'Alembertian operator gives the non-locality of this action, and the factor $M^{2}$ sets the non-locality scale. This action is studied in detail in~\cite{string1}. The propagator has the exponential decay feature: \begin{align} \Delta_{F}^{-1} = e^{\frac{p^{2}}{M^{2}}}(p^{2}-\mu^{2}) \, . \end{align} Here we study a toy model for the broken phase of the scalar field that has this exponential suppression. The action that we propose to work with is: \begin{align} S = \int \dd[4]{x} \left[ \frac{1}{2} \phi e^{-\frac{\Box}{(E^{*})^{2}}}(\Box - m^{2})\phi - \frac{\lambda}{4!} \phi^{4} \right] \, , \end{align} in the broken phase, where $m^{2}=-\mu^{2}$. The minimum configuration is the same as in the local version, because the field solution is constant: \begin{align} \phi_{\min}^{2} = \frac{3! \mu^{2}}{\lambda} \, . \end{align} Doing a shift in the field to remove the expectation value, \begin{align} \sigma = \phi - \phi_{\min} \, , \end{align} we can write the action as: \begin{align} S = \int \dd[4]{x} \left[ \frac{1}{2} \sigma \left( e^{-\frac{\Box}{(E^{*})^{2}}}(\Box + \mu^{2}) -3\mu^{2} \right)\sigma + \mathcal{L}_{int} \right] \, . \end{align} We will ignore the interaction part, because we want to analyze the spectrum of the free theory.
The propagator in this phase is similar to the one obtained before: \begin{align} \Delta_{F}^{-1} = e^{\frac{p^{2}}{(E^{*})^{2}}}(-p^{2}+\mu^{2})-3\mu^{2} \, . \end{align} We want to look at the pole structure of this propagator. If we have more than one pole, this means the appearance of a ghost in the spectrum, and the theory will not be valid above the mass of the ghost. Making a change of variables to find these poles, \begin{align} -k^{2} = x \mu^{2} \, , \end{align} the equation that we need to solve is: \begin{align} \label{eqeq1} P(\nu,x)=e^{-\nu x}(x+1)-3 =0 \, . \end{align} In this equation we have $\nu = \frac{\mu^{2}}{(E^{*})^{2}}$, and the mass of the excitation is: \begin{align} m^{2} = x \mu^{2} \, . \end{align} The behavior of this function as $\nu$ varies can be seen in Figure~\ref{fig}. It has a distinctive structure, similar to the tachyonic case. There is a region where there is no pole ($\nu \gtrsim 0.14$): in this region there is no particle in the spectrum, and we can interpret this as the theory having Higgsploded. If we want to describe the theory above this scale, we would need to change the degrees of freedom. There is one special point where there is exactly one particle in the spectrum ($\nu_{crit} \approx 0.14$). Finally, there exists a region where we have a particle and a ghost, the ghost starting near infinity and getting closer as the coupling approaches the transition ($0<\nu \lesssim 0.14$). \begin{figure}[h!] \centering \includegraphics[width=15cm]{string} \caption{Plot of Eq.~(\ref{eqeq1}) for different scales $\nu =0.1, 0.14123$ and $0.3$.} \label{fig} \end{figure} This is very similar to the spectrum found in~\cite{string1}, which indicates the decay of the $D$-brane. In the Higgsplosion scenario, this could indicate that the system ``decays'' and we need another description above that scale. This is not as strong a statement as UV finiteness, but it is a remarkable feature nonetheless. It could also explain the similarities with a non-local theory pointed out in~\cite{Monin-aug-18}. We can estimate the values of $E^{*}$, taking $\mu$ of the order of the weak scale and demanding that the theory has no particle in the spectrum: \begin{align} E^{*} \approx 1.3 \, \text{TeV} \quad \text{for} \quad \mu = 500 \, \text{GeV} \, , \end{align} \begin{align} E^{*} \approx 0.66 \, \text{TeV} \quad \text{for} \quad \mu = 250 \, \text{GeV} \, , \end{align} \begin{align} E^{*} \approx 2.6 \, \text{TeV} \quad \text{for} \quad \mu = 1 \, \text{TeV} \, . \end{align} These scales are too low, but this is just a toy model for the behavior proposed in the Higgsplosion mechanism. If we obtain a better expression for the off-shell propagator, we can try to model it inside a non-local theory of this kind to investigate more deeply the dynamics near this scale. This example shows how strange the exponential decay of a propagator is, and it suggests the possible interpretation of the theory ``decaying''. At scales much lower than $E^{*}$ we can treat the system using the local Lagrangian without a problem. As we get closer to $E^{*}$, we need to use the non-local Lagrangian to discuss the physics. Finally, past this scale, we would need another description in terms of the remaining degrees of freedom of the theory. The non-local features emerging near the Higgsplosion scale can be interpreted as the $n$ particles behaving collectively to form a structure. In this interpretation, the scalar mass is shielded, despite the absence of UV finiteness, because of the $n$-particle structure that is formed, in a similar way as a composite scalar is protected from the UV.
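The critical value of $\nu$ quoted above is easy to reproduce: at the transition, $P(\nu,x)$ has a double root, so $P = \partial_{x}P = 0$, which reduces to $e^{\nu-1} = 3\nu$. The sketch below (our own check, using scipy) solves this condition and counts the poles on either side of it:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# P(nu, x) = exp(-nu x)(x + 1) - 3, from Eq. (eqeq1)
P = lambda nu, x: np.exp(-nu * x) * (x + 1) - 3

# Double root: dP/dx = 0 gives x = 1/nu - 1; substituting back
# into P = 0 yields exp(nu - 1) = 3 nu.
nu_crit = brentq(lambda nu: np.exp(nu - 1) - 3 * nu, 0.01, 0.3)
print(nu_crit)  # ~ 0.14123, the value used in the figure

for nu in (0.1, 0.3):
    xs = np.linspace(0.0, 60.0, 200001)
    n_poles = np.count_nonzero(np.diff(np.sign(P(nu, xs))))
    print(nu, "poles:", n_poles)  # 2 (particle + ghost) vs 0 (Higgsploded)

# nu = mu^2 / E*^2, so E* = mu / sqrt(nu_crit) ~ 2.66 mu
print(500 / np.sqrt(nu_crit), "GeV for mu = 500 GeV")  # ~ 1.33 TeV
\end{verbatim}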
\chapter{Introduction} \label{c1} \pagenumbering{arabic} The discovery of the Higgs Boson~\cite{higgs1} opened a new era for particle physics~\cite{higgs2}: the last fundamental piece necessary for the Standard Model (SM) to work as intended was finally in place. Since its discovery, we have entered a new phase of precision measurement and confirmation of the Standard Model. Despite its great achievements, it is known that the Standard Model cannot be the full story. This is motivated by observational results, such as Dark Matter~\cite{dm1,dm2} and Dark Energy~\cite{DE}, which are not accounted for by the Standard Model. Even the lack of a theoretical understanding of what Quantum Gravity~\cite{gravity1,gravity2,gravity3,gravity4} looks like can be taken as evidence for the need for Beyond Standard Model (BSM) physics. These results indicate that the Standard Model may not be the end. Even if one ignores these hints, some unsolved puzzles can be identified already within the Standard Model. One of them is the fine-tuning problem associated with the weak scale~\cite{nat1,nat2}. Naively, it is expected that the squared mass parameter of a scalar particle should be of the order of the square of the cutoff of the theory. In the Standard Model, the cutoff can be taken to be the Planck scale $\Lambda_{p}$, the scale where Quantum Gravity becomes relevant: \begin{align} \Lambda_{p} \approx 10^{19} \text{GeV} \, . \end{align} This expectation arises because there is no symmetry to protect the theory from receiving large contributions to the squared mass term. The story is different for a fermion, where the chiral symmetry present in the $m_{f} \rightarrow 0$ limit shields fermions from being quadratically sensitive to the Ultra-Violet (UV). This does not occur for a scalar particle without some additional symmetry. Thus, the measured Higgs mass $m_{h}\approx 125$~GeV~\cite{higgs1}, together with the absence of other states or signs of new physics, indicates the surprising presence of a scalar much lighter than the cutoff. This means a fine-tuning of the contributions from BSM physics in such a way that the Higgs mass comes out small: the theory has large numbers that conspire to give a small physical value, \begin{align} m_{h}^{2} \approx m_{0}^{2} + \delta m_{BSM}^{2} \, . \end{align} There are a few potential solutions to the fine-tuning problem; the most famous are Supersymmetry~\cite{susy1,susy2,susy3,susy4} and Composite Higgs~\cite{comp1,comp2,comp3,comp4,comp5}. In this thesis, we review a new possible mechanism that can render the Higgs mass parameter naturally small and potentially make the Standard Model UV finite. This new mechanism is called Higgsplosion and was proposed in 2017 by Valentin V. Khoze and Michael Spannowsky~\cite{Khoze-higgsplosion,Khoze-jun-17}. This thesis aims to understand the proposal in detail and to learn more about Quantum Field Theory at high multiplicity, as well as the applicability of ordinary perturbation theory in such a regime. The study of Higgsplosion is intimately related to the question of what happens to a Quantum Field Theory when we have high multiplicity processes. Inside the Quantum Field Theory framework, people developed powerful tricks~\cite{Brown-nov-92,Voloshin-apr-93} that made it possible to compute high multiplicity processes. In all of these computations, one finds the unusual feature that the leading order already grows exponentially with the number of final states.
At the time, this was interpreted as implying that perturbation theory is not valid in this regime~\cite{Goldberg-may-90}: the leading term would not be a good approximation, and no partial sum would reproduce the correct answer. In this thesis, this claim is reviewed, so we can understand precisely how perturbation theory works in such a regime of a Quantum Field Theory. The picture changed later, when Son~\cite{Son-may-95} developed a semiclassical computation to obtain expressions for high multiplicity processes that, in principle, can be trusted in a fixed limit of the theory. The decay rate for a high multiplicity process obtained in~\cite{Son-may-95} had an exponential form, but in the region of applicability it does not exhibit exponential growth. The next breakthrough came when these semiclassical computations were generalized to the strong 't Hooft-like coupling ($\lambda n$) regime for $\phi^{4}$ in the broken phase in (1+3)D. In such a limit, the same behavior of exponential growth of some objects at high multiplicity appears. This fact is what motivated Valentin V. Khoze and Michael Spannowsky to propose the Higgsplosion mechanism. It was not well understood what the limitations of these results were, and we discuss this in detail. These results gave strong evidence that Higgsplosion may happen at least in this model, which we also discuss. In the Higgsplosion mechanism, these results are used together with some basic Quantum Field Theory to show that unitarity is preserved even with this exponential growth, but at the price of the propagator vanishing exponentially fast at high energies. This exponential suppression renders loops finite, and the theory sits at an interacting UV fixed point. That is a strong claim, and the goal of this thesis is to investigate it and to better understand whether Higgsplosion can happen in a Quantum Field Theory. If it is true, then there are consequences for the Higgs sector of the Standard Model that could explain the fine-tuning problem and, in some sense, UV complete the whole theory. Even if it turns out that it does not apply to the Standard Model, it could be realized in some limiting case of other models, and we can learn more about Quantum Field Theory in a different regime. The structure of this thesis is the following. In Chapter~\ref{c1}, we review the notation and tools that we use. In Chapter~\ref{c2}, we calculate some high multiplicity amplitudes at threshold (the limit where all outgoing particles are at rest) and explore beyond-threshold amplitudes. At the end of Chapter~\ref{c2}, we show some recent results that we use later to discuss the possibility of Higgsplosion. We choose to focus more on the perturbative approach to see if Higgsplosion happens in this regime, but the results coming from the semiclassical calculation are useful to understand the current state of the Higgsplosion proposal. In Chapter~\ref{c3}, we present the Higgsplosion mechanism itself and what it can bring to the table. After that, we discuss some problems with the claims of Higgsplosion, as well as the known criticisms of it, and present some potential solutions. At the end of Chapter~\ref{c3}, we present two toy models that are useful to understand the applicability of perturbation theory and a newly proposed interpretation of the Higgsplosion mechanism. We try to point out which directions are worth exploring to settle the open questions that have been raised about this mechanism.
\section{Toolbox} \subsection{Green Functions} In this thesis, we use different types of $n$-point correlators, so it is worth defining the notation here. These correlators are used to construct physical amplitudes through the standard LSZ reduction formula~\cite{qft}. Knowing these $n$-point functions, we can construct any S-matrix element of the theory. At this point, there is no mention of perturbation theory aside from the assumption of asymptotic states that enters LSZ\footnote{This excludes theories in which we cannot separate the particles from the interaction, for instance, a confined system.}. First we define the $n$-point function as: \begin{align} \label{green} G^{(n)}(x_{1},\dots,x_{n}) \equiv \ev{T \left(\phi(x_{1})\dots \phi(x_{n}) \right)}{\Omega} \, , \end{align} where $T$ is the time ordering operator. We can use a diagrammatic representation for this object inside perturbation theory, of the form represented in Figure~\ref{fazer1}. \begin{figure}[h!] \centering \includegraphics[width=8cm]{pert} \caption{Diagrammatic representation of $n$-point Green functions inside perturbation theory.} \label{fazer1} \end{figure} Given this Green function, we can define its connected part, in which diagrammatically all external points are connected (interacting): \begin{align}\label{cone} G_{c}^{(n)}(x_{1},\dots,x_{n}) \equiv G^{(n)}(x_{1},\dots,x_{n}) - \text{disconnected parts} \, . \end{align} We will see later that we can define a generating functional for these connected Green functions, in such a way that we can work using only $G_{c}^{(n)}$. This definition is a generalization of the concept of cumulants in probability theory~\cite{cumu}. In particular, for $n=2$ (the propagator) all diagrams are connected: \begin{align} G_{c}^{(2)}(x_{1},x_{2}) = G^{(2)}(x_{1},x_{2}) \, . \end{align} Finally, we can define Green functions that cannot be separated into subprocesses by cutting a single line. These are called one-particle irreducible (1PI) Green functions: $G_{1PI}^{(n)}$. We will see how to obtain these objects using functional methods in Section~\ref{sfun}. They are the fundamental building blocks that we can use to construct arbitrary processes. For instance, the process represented in Figure~\ref{not1} is not 1PI. \begin{figure}[h!] \centering \includegraphics[width=8cm]{not1pi} \caption{Example of a process that is not 1PI.} \label{not1} \end{figure} With these objects we can define their Fourier transform: \begin{align}\label{furi} (2\pi)^{4}\delta^{4}(p_{1}+\dots+ p_{n}) G_{c}^{(n)}(p_{1}, \dots, p_{n}) = \int \dd[4]{x_{1}}\dots \dd[4]{x_{n}} e^{-i(p_{1}\cdot x_{1}+\dots +p_{n}\cdot x_{n})}G_{c}^{(n)}(x_{1},\dots,x_{n}) \, . \end{align} Usually we work in Fourier space, with the convention that $+p$ denotes incoming momenta. As can be seen in Eq.~(\ref{furi}), we have an overall momentum conservation delta function. We can use these objects to compute off-shell amplitudes by picking any momentum in a physical amplitude and letting it be virtual; in other words, we work only with the Green function in momentum space and ignore the overall conservation delta function. The full propagator in momentum space is then: \begin{align} \label{ca} G^{(2)}(p,-p) \equiv G^{(2)}(p) \, , \end{align} and with this we can define the amputated Green function, which plays an important role in constructing amplitudes: \begin{align} G_{amp}^{(n)} (p_{1},\dots,p_{n}) \equiv \prod_{1}^{n} \left( G^{(2)}(p_{i}) \right)^{-1} G_{c}^{(n)}(p_{1},\dots,p_{n}) \, .
\end{align} Diagrammatically, we are removing all the external legs of an amplitude using the full propagator. This can be used to generalize LSZ to off-shell amplitudes. We will not re-derive LSZ, as this is standard textbook material~\cite{qft}. Nonetheless, to understand this last statement, let us consider the LSZ reduction formula for a real scalar field: \begin{align} \label{lsz} i T_{n \rightarrow m} (p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') = \lim_{p^{2}_{i} \rightarrow m^{2}}(2\pi)^{4} \delta^{4} \left( \sum_{1}^{n} p_{i} - \sum_{1}^{m} p_{i}' \right) \left( i\sqrt{Z_{\phi}} \right)^{-(n+m)} \times \\ \times (p_{1}^{2}-m^{2}) \dots (p_{m}'^{2}-m^{2}) G_{c}^{(n+m)}(p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') \, , \nonumber \end{align} where $Z_{\phi}$ is the wave function normalization and $T_{n \rightarrow m}$ are the transfer matrix elements, related to the interacting part of the S-matrix: \begin{align} \mathbb{S} =\mathbb{I} + i\mathbb{T} \, . \end{align} The object to the right of the delta function in Eq.~(\ref{lsz}) is the invariant amplitude $\mathcal{M}$ for this process: \begin{align} T_{n \rightarrow m} (p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') = (2\pi)^{4} \delta^{4} \left( \sum_{1}^{n} p_{i} - \sum_{1}^{m} p_{i}'\right)(-i\mathcal{M}(n \rightarrow m)) \, . \end{align} The generalization goes as follows: instead of removing the propagator near the mass shell, we remove the full propagator and generate an off-shell amplitude. This amplitude becomes the physical amplitude when we put all particles on-shell. We can see that in Eq.~(\ref{lsz}) we are almost removing the inverse of the full propagator near the on-shell limit, up to the overall $Z_{\phi}$ factor: \begin{align} G^{(2)}(p) \overset{p^{2}\rightarrow m^{2}}{=} \frac{i Z_{\phi}}{p^{2}-m^{2}+i\epsilon} \, . \end{align} Here $m$ is the physical mass, different from the bare mass $m_{0}$ that appears in the free propagator. The inverse is defined in such a way that: \begin{align} G^{(2)} \cdot \left( G^{(2)} \right)^{-1} = -i \, . \end{align} The off-shell generalization is direct: we replace these near-on-shell propagators by the full propagators and use the definition of the amputated Green function: \begin{align}\label{genlsz} -i\mathcal{M}(n \rightarrow m) = (i\sqrt{Z}_{\phi})^{(n+m)} G_{amp}^{(n+m)}(p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') \, . \end{align} It is possible to see that the amputated Green functions are the off-shell amplitudes, aside from an overall normalization factor\footnote{If you work with a theory where $Z_{\phi}$ is one up to some loop order, then the amputated Green function is directly the amplitude up to the same order.}. We can use this definition to work out a case that is used throughout this thesis, the $1 \to 1$ ``scattering'': \begin{align} \mathcal{M}(1 \rightarrow 1) = - Z_{\phi} \left( G^{(2)} \right)^{-1} \, . \end{align} This case is interesting because we can use the Optical Theorem~\cite{qft} to relate the imaginary part of this amplitude to the total decay rate: \begin{align} \Im \left( \mathcal{M}(1 \rightarrow 1) \right) = m \Gamma_{total}(p) = -Z_{\phi} \Im \left( (G^{(2)}(p))^{-1}\right) \, , \end{align} where the first equality comes from the Optical Theorem, while the second comes from the $1 \to 1$ ``scattering'' amplitude obtained through the generalized LSZ formula, Eq.~(\ref{genlsz}).
Using the inverse of the full propagator (we will comment further on this form in Section~\ref{secdy}): \begin{align} \label{fullprop} \left( G^{(2)} \right)^{-1} = -\left( p^{2}-m^{2} - \Sigma(p^{2}) \right) \, , \end{align} we arrive at one of the most important relations that we will use in this thesis: \begin{align} \Gamma_{total}(p^{2}) = -\frac{Z_{\phi}}{m}\Im \left( \Sigma(p^{2}) \right) \, . \end{align} Thus, if the off-shell total decay rate grows exponentially, the imaginary part of $\Sigma(p^{2})$ grows as well. The physical total decay rate can be recovered by going to the mass shell. This feature of working with off-shell quantities lets us gain more information about the theory in general, and it is a powerful tool in Quantum Field Theory. The exponential growth of the imaginary part of $\Sigma(p^{2})$ can in principle suppress the propagator, Eq.~(\ref{fullprop}), at some scale even when the real part of this function is well behaved (we cannot say much about its real part). Now, let us investigate Eq.~(\ref{fullprop}) further, because this is a central point of this thesis. \subsection{Dyson Resummation and the Full Propagator} \label{secdy} Here we discuss important properties of the full propagator presented in Eq.~(\ref{fullprop}). Using the interacting part of the 1PI two-point function, which we define as: \begin{align} \label{1pid} \Sigma (p^{2}) \, , \end{align} we can recover all the information about the full propagator $G^{(2)}$. With $\Sigma(p^{2})$ we can reconstruct the full propagator as a geometric sum of these graphs\footnote{It is interesting to note that Eq.~(\ref{umpa}) is not so straightforward. It almost comes as a definition in Quantum Field Theory. This happens because we cannot reorganize terms in a divergent series, since infinite summation is not associative and commutative. The 1PI organization of the perturbative series that appears in Quantum Field Theory is not immediate from perturbation theory alone. However, it is possible to justify this ordering using the definition of a quantum action, as we show in Section~\ref{sfun}. Therefore, it is a consequence of the meaning of what a Quantum Field Theory is, and not something additional to it.}: \begin{align}\label{umpa} G^{(2)}= G_{0}^{(2)}+ G^{(2)}_{0}(-i\Sigma) G^{(2)}_{0} + G^{(2)}_{0}(-i\Sigma) G^{(2)}_{0}(-i\Sigma) G^{(2)}_{0} + \dots \, . \end{align} \begin{figure}[h!] \centering \includegraphics[width=15cm]{1piprop} \caption{Diagrammatic representation of the 1PI resummation.} \label{diga} \end{figure} Diagrammatically, this is represented in Figure~\ref{diga}. If we perform this resummation we get the representation of the full propagator used before in Eq.~(\ref{fullprop}). Indeed, one has: \begin{align} G^{(2)}(p^{2})= \frac{G^{(2)}_{0}(p^{2})}{1+i\Sigma(p^{2}) G_{0}^{(2)}(p^{2})} \, , \end{align} and using the usual free propagator: \begin{align} G^{(2)}_{0} = \frac{i}{p^{2}-m^{2}_{0}+i\epsilon} \, , \end{align} we get the representation for the full propagator as: \begin{align}\label{propa} G^{(2)} = \frac{i}{p^{2}-m^{2}_{0}-\Sigma(p^{2})+i\epsilon} \, . \end{align} It is important to note that standard perturbation theory is contained inside $\Sigma(p^{2})$. We are free to do this resummation, and any non-perturbative effect is not lost but encoded inside the 1PI function $\Sigma(p^{2})$. This resummation can be done even when $\Sigma(p^{2})$ is large, because we can interpret this geometric series as a divergent series representation of the full propagator.
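As a quick consistency check of this geometric structure, the following \texttt{sympy} sketch (an illustration, not part of the argument) verifies that resumming the series in Eq.~(\ref{umpa}) gives Eq.~(\ref{propa}), and that re-expanding the resummed propagator in powers of $\Sigma$ recovers the series term by term:
\begin{verbatim}
import sympy as sp

p2, m0, Sigma = sp.symbols('p2 m0 Sigma')
eps = sp.Symbol('epsilon', positive=True)

G0 = sp.I / (p2 - m0**2 + sp.I * eps)      # free propagator

# Geometric sum G0 * sum_n (-i Sigma G0)^n = G0 / (1 + i Sigma G0)
G_full = G0 / (1 + sp.I * Sigma * G0)
target = sp.I / (p2 - m0**2 - Sigma + sp.I * eps)
print(sp.simplify(G_full - target))        # -> 0

# Re-expanding in Sigma recovers the 1PI series term by term
partial = sum(G0 * (-sp.I * Sigma * G0)**n for n in range(6))
expansion = sp.series(target, Sigma, 0, 6).removeO()
print(sp.simplify(expansion - partial))    # -> 0
\end{verbatim}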
Being a divergent series representation of geometric nature, we can in this case find a region where the series converges. For instance, in a given renormalization scheme we can fix $\Sigma(m^{2})=0$; summing the series near the mass-shell condition then means that we are inside the radius of convergence. After we resum the series, the expression can be extended to the whole complex plane, just as the geometric series can be continued to $x=2$ or any other complex value: \begin{align} 1+2+4+8+ \dots = \frac{1}{1-2} = -1 \, . \end{align} Typically, in a divergent series this is not the whole story, because of non-perturbative effects. For the expansion in $\Sigma(p^{2})$ given in Eq.~(\ref{umpa}), there is no effect of this kind. That does not mean that we have solved the theory, because we do not know how to calculate $\Sigma(p^{2})$ exactly. This result, Eq.~(\ref{propa}), is one of the few non-perturbative results in Quantum Field Theory that we currently have. With that information, we can guarantee the relation between the imaginary part of the 1PI function and the total decay rate as defined above, provided that the theory is unitary. This relation is re-derived without reference to this resummation (called Dyson resummation in the literature) at the end of this section, when we introduce functional methods. With this settled, we can start to investigate what else we can say about the full propagator, Eq.~(\ref{propa}). \subsection{K\"all\'en-Lehmann Spectral Representation} The object of interest here is the two-point function: \begin{align} \label{eqalgo} \ev{T \left( \phi(x)\phi(y) \right)}{\Omega} \, . \end{align} To explore this, we pick one time configuration and then, at the end, recover Eq.~(\ref{eqalgo}) by reconstructing the time ordering. Choosing the ordering $x^{0} > y^{0}$, we have: \begin{align} \ev{\phi(x)\phi(y)}{\Omega} \, . \end{align} Although we cannot compute this exactly in an interacting theory, we can extract much information from it. If we insert a complete set of states between the operators and use translation invariance, we can write: \begin{align} \ev{\phi(x)\phi(y)}{\Omega} = \sum_{n} \mel{\Omega}{\phi(0)}{n }\mel{n}{\phi(0)}{\Omega } e^{-i p_{n} \cdot (x-y)} \, , \end{align} where $n$ runs over all states in the theory, discrete and continuous (the sum becomes an integral over the continuous states). As expected, this object depends only on the difference between the points $x$ and $y$. Now, we can introduce a delta function in a suggestive way to rewrite this expression: \begin{align} \ev{\phi(x)\phi(y)}{\Omega} = \int \frac{\dd[4]{p}}{(2\pi)^{4}} e^{-i p \cdot (x-y)} \left( \sum_{n} (2\pi)^{4} \delta^{4}(p-p_{n}) \left| \mel{\Omega}{\phi(0)}{n }\right|^{2} \right) \, . \end{align} We can now define the spectral density: \begin{align} \tilde{\rho}(p) = \sum_{n} (2\pi)^{4} \delta^{4}(p-p_{n}) \left| \mel{\Omega}{\phi(0)}{n } \right|^{2} \, , \end{align} which measures the contribution to the two-point function of the states with momentum $p$. It receives contributions from bound states as well as multi-particle ones. This density is a Lorentz invariant object and vanishes when $p$ is not in the future lightcone~\cite{qft}. Using this, we can write it as: \begin{align} \tilde{\rho}(p) = 2\pi \rho(p^{2}) \theta(p^{0}) \, . \end{align} Assuming there are no negative-norm states, it follows that the spectral density is positive semi-definite for all $p$ inside the lightcone: \begin{align} \rho(p^{2}) \geq 0 \, .
\end{align} We can write the unordered two-point function with this spectral decomposition: \begin{align} \ev{\phi(x)\phi(y)}{\Omega} =\int \frac{\dd[4]{p}}{(2\pi)^{3}} e^{-i p \cdot (x-y)} \rho(p^{2})\theta(p^{0}) \, , \end{align} and using the propagator in position space: \begin{align} \Delta(x,y,m^{2}) = \int \frac{\dd[4]{p}}{(2\pi)^{3}} e^{-i p \cdot (x-y)} \theta(p^{0})\delta(p^{2}-m^{2}) \, , \end{align} it is possible to write this two-point function as: \begin{align} \ev{\phi(x)\phi(y)}{\Omega} = \int_{0}^{\infty} \dd{s} \rho(s) \Delta(x,y,s) \, . \end{align} To recover the time ordering in this two-point function we can use: \begin{align} \ev{T\left(\phi(x)\phi(y)\right)}{\Omega} = \theta(x^{0}-y^{0})\ev{\phi(x)\phi(y)}{\Omega} + \theta(y^{0}-x^{0})\ev{\phi(y)\phi(x)}{\Omega} \, , \end{align} together with the following relation: \begin{align} e^{-iE_{p}(x^{0}-y^{0})} \theta(x^{0}-y^{0}) + e^{+iE_{p}(x^{0}-y^{0})}\theta(y^{0}-x^{0}) = \lim_{\epsilon \rightarrow 0 } \frac{- E_{p}}{\pi i} \int_{-\infty}^{\infty} \frac{\dd{E}}{E^{2}-E_{p}^{2}+i\epsilon}e^{i E (x^{0}-y^{0})} \, . \end{align} In this way we construct the ordered two-point function, Eq.~(\ref{eqalgo}), i.e.\ the Feynman propagator: \begin{align} \ev{T(\phi(x)\phi(y))}{\Omega} = \int \frac{\dd[4]{p}}{(2\pi)^{3}} e^{i p \cdot (x-y)} i\Delta_{F}(p^{2}) \, , \end{align} where we have the full Feynman propagator in momentum space as\footnote{The $i$ was removed from the definition of this propagator to facilitate the analysis in the complex plane. Everything would be similar if we studied $-i\Delta_{F}$ and kept the initial definition.} \begin{align} \label{eq1} \Delta_{F}(p^{2}) = \int_{0}^{\infty} \dd{s} \rho(s) \frac{1}{p^{2}-s+i\epsilon} \, . \end{align} Eq.~(\ref{eq1}) is known as the K\"all\'en-Lehmann spectral representation of the full propagator. We did not use any information about the interaction or any expansion; this decomposition is a non-perturbative result. With this representation, we can derive a significant amount of information about the interacting theory. In an interacting theory, the spectral density has support at the locations of the physical states, as illustrated in Figure~\ref{fig:spec}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{spectral} \caption{Example of a spectral density. A typical theory has a single pole at its first excitation, then some bound states, before a continuum spectrum for the two-particle states and above.} \label{fig:spec} \end{figure} Because the spectral density is positive, we can calculate the imaginary part of the propagator using Cauchy's theorem. Given the analytic structure of the propagator (simple poles for the one-particle and bound states, branch cuts for multi-particle states), we can write a contour integral representation where the contour is a circle around the real line on which the cut lives: \begin{align} \Delta_{F}(p^{2}) = \frac{1}{2\pi i} \int_{\gamma} \dd{s} \frac{\Delta_{F}(s)}{p^{2}-s} \, . \end{align} This expression holds as long as $p^{2}$ is inside the contour and the contour path does not cross any singularity.
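A small numerical illustration of this structure: assuming the free-field spectral density $\rho(s)=\delta(s-m^{2})$, Eq.~(\ref{eq1}) gives $\Delta_{F}(s) = 1/(s-m^{2}+i\epsilon)$, and $-\frac{1}{\pi}\Im\Delta_{F}$ is a Lorentzian of width $\epsilon$ that recovers the delta function with unit weight, anticipating the relation between the imaginary part and the spectral density derived next:
\begin{verbatim}
import numpy as np

# Assumed free-field case: rho(s) = delta(s - m^2), so that
# Delta_F(s) = 1/(s - m^2 + i*eps).  The combination -(1/pi) Im Delta_F
# is then a nascent delta function centered at s = m^2.
m2 = 1.0
s = np.linspace(0.0, 2.0, 200001)
ds = s[1] - s[0]
for eps in (1e-1, 1e-2, 1e-3):
    rho_eps = -np.imag(1.0 / (s - m2 + 1j * eps)) / np.pi
    print(eps, rho_eps.sum() * ds)   # total weight -> 1 as eps -> 0
\end{verbatim}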
Taking the radius of the contour $R \rightarrow \infty$ and assuming that $\rho(s) \rightarrow 0$ as $s \rightarrow \infty$, we can write: \begin{align} \Delta_{F}(p^{2}) = \frac{1}{2\pi i} \int_{s_{0}}^{\infty} \frac{\dd{s} \text{Disc}\left( \Delta_{F}(s)\right) }{p^{2}-s+i\epsilon} \, , \end{align} where $s_{0}$ is the location of the first singularity, which we assume to be the single-particle state, and the discontinuity of the function along the cut equals twice its imaginary part: \begin{align} \text{Disc}\left( \Delta_{F}(s) \right) = 2i \Im \left( \Delta_{F}(s) \right) \, . \end{align} With this information, we can use the definition of the spectral decomposition, Eq.~(\ref{eq1}), to write: \begin{align} \label{eq4} \rho(s) = -\frac{1}{\pi} \Im \left( \Delta_{F}(s) \right) \, . \end{align} This feature is a consequence of the propagator being real everywhere except on the support of the spectral density. Another important feature is a constraint on the power with which the propagator can vanish at large momentum. Assuming that we can Wick rotate Eq.~(\ref{eq1}) (a non-trivial statement in an interacting theory), we get: \begin{align} p^{0} \rightarrow ip^{0}_{E} \quad \, , \quad p^{2} \rightarrow - p^{2}_{E} \, . \end{align} This means that the modulus of the full propagator is: \begin{align} \left| \Delta_{F}(-p^{2}_{E}) \right| = \left| \int_{0}^{\infty} \dd{s} \rho(s) \frac{1}{p^{2}_{E}+s} \right| \, , \end{align} which implies: \begin{align} \left| \Delta_{F}(-p^{2}_{E}) \right| \geqslant \left| \int_{0}^{s_{0}} \dd{s} \rho(s) \frac{1}{p^{2}_{E}+s} \right| \, , \end{align} for any $s_{0}$, because the density is positive semi-definite. Taking the Euclidean momentum to infinity, we eventually have $p^{2}_{E}> s_{0}$ for any fixed $s_{0}$. Then: \begin{align} \lim_{p^{2}_{E} \rightarrow \infty} p^{2}_{E} \left| \Delta_{F}(-p^{2}_{E}) \right| \geqslant \lim_{p^{2}_{E} \rightarrow \infty} p^{2}_{E} \left| \int_{0}^{s_{0}} \dd{s} \rho(s) \frac{1}{p^{2}_{E}+s} \right| \geqslant \lim_{p^{2}_{E} \rightarrow \infty} p^{2}_{E} \left| \int_{0}^{s_{0}} \dd{s} \rho(s) \frac{1}{p^{2}_{E}} \right| = C \, , \end{align} for some finite positive number $C$: \begin{align} C = \int_{0}^{s_{0}} \dd{s} \rho(s) \, . \end{align} Thus, with these assumptions, the propagator cannot fall faster than $1/p^{2}$ as $p^{2} \rightarrow \infty$. This does not mean that the propagator cannot ``look'' as if it were falling faster than $p^{-2}$ in intermediate regions (for instance, if we fix an $s_{0}$ such that $C=0$). This feature will be important in this thesis because it is a strong non-perturbative constraint on the two-point function. Possible loopholes are discussed in Chapter~\ref{c3}. An important aspect of this derivation is that we can use any local operator to get similar non-perturbative information: \begin{align} \Delta^{O}(x) = \ev{O(x)O(0)}{\Omega} \, , \end{align} as long as $O(x)$ is invariant under translations: \begin{align} \label{eq2} \Delta^{O}(x) = \int \frac{\dd{s}}{2\pi} \int \frac{\dd[3]{p}}{(2\pi)^{3}} \frac{\rho_{O}(s)}{2E_{s,p}} e^{-ip \cdot x} \, , \end{align} where we define the spectral density of this operator as: \begin{align} \rho_{O}(s) = \sum_{n} \delta(s-p_{n}^{0}) |\mel{\Omega}{O(0)}{n }|^{2} \, . \end{align} Now it is important to make some remarks. The first is that Eq.~(\ref{eq1}) and Eq.~(\ref{eq2}) may contain UV divergences; this is always the case in a four-dimensional theory.
This ultimately means that Eq.~(\ref{eq1}) and Eq.~(\ref{eq2}) are ill-defined. The usual step to take is to regularize and renormalize these expressions. The nature of the distribution $\rho(s)$ dictates how we handle these divergences. If we assume $\rho(s)$ is a tempered distribution\footnote{Tempered distributions are continuous linear functionals on the Schwartz space, the space of all infinitely differentiable functions that decrease rapidly at infinity along with all their partial derivatives.}, then $\rho(s)$ must grow as $s \rightarrow \infty$ no faster than a polynomial. In this case we can rearrange the expression by adding $m$ unknown coefficients to improve the behavior of the integral. These coefficients will be fixed by renormalization conditions: \begin{align} \label{eq3} \Delta_{F}(p^{2}) = P_{m-1}(p^{2}) + p^{2m} \int \dd{s} \rho(s) \frac{\Delta(p^{2},s)}{s^{m}} \, , \end{align} where $P_{m-1}$ is a polynomial of degree $m-1$. We have to do as many subtractions as needed to render the integral finite. The coefficients of these polynomials are fixed by experimental input. One way to justify Eq.~(\ref{eq3}) is to consider the case where we have a logarithmic divergence. Doing one subtraction for the propagator, Eq.~(\ref{eq1}), we get: \begin{align} \Delta_{F}(p^{2})-\Delta_{F}(p_{0}^{2}) = \int_{0}^{\infty} \dd{s} \rho(s) \left( \frac{1}{p^{2}-s+i\epsilon} - \frac{1}{p^{2}_{0}-s+i\epsilon} \right) \, , \end{align} with the object on the right side now finite, and we can then define the once-subtracted propagator as: \begin{align} \Delta_{F}(p^{2}) = \Delta_{F}(p_{0}^{2}) + (p^{2}_{0}-p^{2})\int_{0}^{\infty} \dd{s} \rho(s) \frac{1}{(p^{2}-s+i\epsilon)(p^{2}_{0}-s+i\epsilon)} \, . \end{align} Then, using one renormalization condition, we can fix the part depending on the arbitrary parameter $p_{0}$, and everything proceeds as in the usual approach to renormalization; a short symbolic check of this subtraction is sketched at the end of this subsection. In this language, regularizing the integral with a finite number of subtractions means that the theory is renormalizable. The story changes if we let $\rho(s)$ be a different kind of distribution~\cite{Khoze-sep-18,Jaffe-jan-67}, for instance one growing exponentially at large $s$. What follows is that we then need to do an infinite number of subtractions. This can mean, in the worst-case scenario, the necessity of fixing an infinite number of constants using boundary or renormalization conditions. Theories with these kinds of distributions are normally non-local, quasi-local, or non-renormalizable. It can happen that, after an infinite number of subtractions, only a finite number of constants needs to be fixed in these three cases, but this is not a general feature. The behavior of $\rho(s)$ at infinity is fundamental for writing the relation in Eq.~(\ref{eq4}); doing this step with care, we use the subtracted propagator, for which the new distribution vanishes at large $s$. Lastly, we can relate $Z_{\phi}$ to the spectral distribution if we remove the one-particle state from it. Using the fact that the field operators obey canonical commutation relations, we get the constraint that the one-particle plus multi-particle contributions should add up to one: \begin{align} 1 = Z_{\phi} + \int_{m^{2}}^{\infty} \dd{s} \eta(s) \, , \end{align} with $\eta(s)$ being the spectral density after the removal of the one-particle state. This fixes $Z_{\phi}$ to be a number smaller than 1: \begin{align} 0 < Z_{\phi} < 1 \, . \end{align} The closer $Z_{\phi}$ is to zero, the more the multi-particle states dominate.
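As promised above, the mechanics of one subtraction can be verified with a short \texttt{sympy} sketch: the subtracted kernel falls one power of $s$ faster at large $s$, which is what turns a logarithmically divergent spectral integral into a convergent one:
\begin{verbatim}
import sympy as sp

p2, p02, s = sp.symbols('p2 p02 s')

lhs = 1/(p2 - s) - 1/(p02 - s)            # subtracted kernel
rhs = (p02 - p2) / ((p2 - s) * (p02 - s))
print(sp.simplify(lhs - rhs))             # -> 0: the identity used above

# Large-s behavior: 1/s for each kernel alone, 1/s^2 after subtraction
print(sp.limit(s * lhs, s, sp.oo))        # -> 0
print(sp.limit(s**2 * lhs, s, sp.oo))     # -> p02 - p2
\end{verbatim}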
In the limit $Z_{\phi}=0$ we need to change the description of the theory, because we do not have one-particle states anymore. That would appear as a reorganization of the degrees of freedom of the theory. With this understanding of the propagator, we need to introduce one more tool before starting to do calculations. \subsection{Functional Methods} \label{sfun} Let us introduce important objects that we use throughout the thesis. The first one is the generating functional of the $n$-point Green functions: \begin{align}\label{ze} Z[\rho] = \ev{T \left( e^{i\int \dd[4]{x} \rho(x)\phi(x)} \right) }{\Omega} = \braket{\Omega}_{\rho} \, . \end{align} With this object, we can generate any Green function by differentiating with respect to the external source $\rho(x)$: \begin{align} (-i)^{n} \frac{\delta^{n}}{\delta \rho(x_{1}) \dots \delta \rho(x_{n})} Z[\rho] = \ev{T \left( \phi(x_{1})\dots \phi(x_{n})e^{i\int \dd[4]{x} \rho(x)\phi(x)} \right) }{\Omega} \, , \end{align} in such a way that in the limit where the source goes to zero, we recover the $n$-point function, Eq.~(\ref{green}). Given Eq.~(\ref{ze}), we can construct the generating functional of the connected $n$-point functions defined in Eq.~(\ref{cone}) as: \begin{align} Z[\rho] = e^{iW[\rho]} \, . \end{align} This means that: \begin{align} G_{c}^{(n)} = i (-i)^{n} \frac{\delta^{n}}{\delta \rho(x_{1}) \dots \delta \rho(x_{n})} W[\rho] \Bigg|_{\rho =0} \, . \end{align} The last important definition is the quantum action, the generating functional of the 1PI Green functions. To obtain this object we perform a Legendre transform of $W[\rho]$: \begin{align} \label{fdv} \phi(x_{i}) \equiv \fdv{W}{\rho(x_{i})} = \ev{\phi(x_{i})}{\Omega}_{\rho} \, , \end{align} \begin{align} \rho(x_{i}) \equiv - \fdv{\Gamma[\phi]}{\phi(x_{i})} \, , \end{align} \begin{align} \Gamma[\phi] \equiv W[\rho] - \int \dd[4]{x} \rho(x)\phi(x) \, , \end{align} where we trade the $\rho$ dependence for a $\phi$ dependence. The $n$-point 1PI Green function is obtained by taking functional derivatives with respect to the field: \begin{align} G_{1PI}^{(n)} = \frac{\delta^{n}}{\delta \phi(x_{1}) \dots \delta \phi(x_{n})} \Gamma[\phi] \Bigg|_{\phi =0} \, . \end{align} With these objects defined, we can start to analyze some relevant results. The first is the importance of the expectation value of the field in the presence of a source. If we find a way to compute this for an arbitrary source, then we have all the information needed to recover the $n$-point functions: \begin{align} \ev{\phi(x_{i})}{\Omega}_{\rho} \longleftrightarrow \frac{\delta W[\rho]}{\delta \rho(x_{i})} \, . \end{align} The only object that we cannot recover is the vacuum-vacuum amplitude $\bra{\Omega}\ket{\Omega}$, which is irrelevant when dealing with particle physics. It turns out that it is possible to find an equation for this object: it is precisely the expectation value of the classical equation of motion. That is why the field in Eq.~(\ref{fdv}) is called the classical field. It is not, in fact, fully classical, because the non-linearities appear as $n$-point functions in this equation for the $1$-point function. The next result concerns the full propagator and its relation to the 1PI two-point function.
Given the connected two-point function: \begin{align} G_{c}^{(2)}(x_{1},x_{2}) = -i \frac{\delta^{2}}{\delta \rho(x_{1})\delta\rho(x_{2})} W \, , \end{align} we can relate it to the 1PI two-point function using the identity: \begin{align} \fdv{\rho(x_{i})}{\rho(x_{j})} = \delta^{4}(x_{i}-x_{j}) = \fdv{\rho(x_{i})}{\phi(x_{k})} \fdv{\phi(x_{k})}{\rho(x_{j})} = \\ = - \left(\frac{\delta^{2}\Gamma}{\delta \phi(x_{i})\delta\phi(x_{k})} \right) \left( \frac{\delta^{2}W}{\delta \rho(x_{k})\delta\rho(x_{j})} \right) \, . \end{align} This is just the inversion equation for the connected propagator written in terms of the 1PI propagator: the 1PI propagator is the inverse of the connected one. In momentum space: \begin{align} G_{c}^{(2)}(p^{2}) = -\left( G^{(2)}_{1PI}(p^{2}) \right)^{-1} \, , \end{align} which shows the overall consistency of Eq.~(\ref{fullprop}). The last thing worth pointing out about these objects is that the quantum action at tree level is the classical action, so we can use functional derivatives to derive the Feynman rules of any theory: \begin{align} \Gamma[\phi] = S[\phi] + \mathcal{O}(\hbar) \, . \end{align} Usually, at this point, we would introduce a path integral representation for these generating functionals and start calculating processes. However, here we go a different route, because we are interested in high multiplicity amplitudes, for which Feynman diagrams usually do not help: even at tree level there are too many diagrams to count, and this method would not be useful. The method that we use is introduced in the next chapter. Now we are ready to start calculating some high multiplicity amplitudes and try to understand what is happening in this regime. After the exploration of these processes, we introduce the newly proposed mechanism of Higgsplosion and discuss the possibility of its occurrence in a scalar Quantum Field Theory. \chapter{Perturbative Investigation of High Multiplicity Amplitudes} \label{c2} \pagenumbering{arabic} The focus of this chapter is the study of high multiplicity processes. The primary motivation comes from trying to understand the Higgsplosion proposal~\cite{Khoze-higgsplosion}. Nevertheless, this is not the only reason to look at these processes. We typically do not explore this regime in a Quantum Field Theory, and it is not clear what to expect. Maybe the particle interpretation of the field excitation ceases to be valid or useful in this regime. Because this is a complicated problem, we first perform a perturbative investigation of this limit. The goal is to obtain enough information to understand the applicability of perturbation theory in this regime; this is ultimately answered at the end of Chapter~\ref{c3}. With those perturbative results in hand, we can start to explore different approaches to high multiplicity calculations. We do not pursue these additional results deeply, because we chose to focus on the perturbative calculations. After we recover most of the essential results for high multiplicity scalar Quantum Field Theory, we start to work out the Higgsplosion framework. \section{Tree Level Amplitude at Threshold} We are interested in calculating the decay rate into high multiplicity final states in a scalar theory. This could, in principle, be calculated with Feynman diagrams. However, the high number of final states makes this a tedious and challenging task; the quick count sketched below makes this growth explicit.
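A small recursive count (a sketch, assuming labelled final-state particles; the $1/3!$ compensates the three interchangeable subtrees attached to each $\phi^{4}$ vertex) reproduces the diagram counts quoted in the next paragraph:
\begin{verbatim}
from math import factorial
from functools import lru_cache
from itertools import product

# Counts tree-level diagrams for 1 -> n in phi^4 theory: the root leg
# ends on a 4-point vertex from which three subtrees hang, each again
# a 1 -> k tree; the n final-state legs are treated as labelled.
@lru_cache(maxsize=None)
def ntree(n):
    if n == 1:
        return 1                       # a bare external leg
    total = 0
    for i, j, k in product(range(1, n, 2), repeat=3):
        if i + j + k == n:
            total += (factorial(n)
                      // (factorial(i) * factorial(j) * factorial(k))
                      * ntree(i) * ntree(j) * ntree(k))
    return total // 6                  # 3! orderings of the subtrees

for n in (3, 5, 7, 9):
    print(n, ntree(n))                 # -> 1, 10, 280, 15400
\end{verbatim}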
For example, if we are interested in $1 \to 5$ processes at tree level in an unbroken $\phi^{4}$ theory, we have only ten diagrams, shown in Figure~\ref{fig:1to5}. If we go to $1 \to 7$ processes, we get 280 diagrams, as represented in Figure~\ref{fig:1to7}. Going to nine particles and beyond in the final state, we rapidly pass one thousand diagrams, and the bookkeeping becomes increasingly hard. This counting is only at tree level; adding quantum corrections creates even more diagrams, and the Feynman diagrammatic approach becomes almost useless. \begin{figure}[h!] \centering \includegraphics[width=10cm]{1to5} \caption{Diagrams contributing to the $1 \to 5$ process in a $\phi^{4}$ theory in the unbroken phase. Generated with FeynArts~\cite{feyart}.} \label{fig:1to5} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=13cm]{1to7} \caption{Diagrams contributing to the $1 \to 7$ process in a $\phi^{4}$ theory in the unbroken phase. Generated with FeynArts~\cite{feyart}.} \label{fig:1to7} \end{figure} Here we use a different approach, proposed by Brown~\cite{Brown-nov-92}. In this approach, we calculate the amplitude that enters the decay rate by taking advantage of the LSZ reduction formula, Eq.~(\ref{lsz}). The decay rate that we are interested in calculating is of the form: \begin{align} \label{decay} \Gamma_{1 \rightarrow n}(p^{2}) = \frac{1}{2m} \int \dd{\Pi_{n}} \abs{\mathcal{A}(1 \rightarrow n)}^{2} \, , \end{align} where $\dd{\Pi_{n}}$ is the Lorentz invariant phase space factor, including the $1/n!$ factor due to the identical final-state particles. This process is a highly virtual one, because there is only one scalar field and it is stable. However, in the middle of a process this could contribute, as represented in Figure~\ref{higgspersion}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{higgspersion} \caption{Process where the off-shell amplitude can contribute.} \label{higgspersion} \end{figure} The amplitude as written in Eq.~(\ref{decay}) is the one without amputating the incoming leg. The reduction formula for this case is: \begin{align} \mathcal{A}(1\rightarrow n)[x,p_{1},\dots,p_{n}] = (iZ_{\phi})^{-(n+1)/2} \lim_{p_{i}^{2}\rightarrow m^{2}} \int \dd[4]{x_{1}}\ldots \dd[4]{x_{n}} e^{-ip_{1} \cdot x_{1}}\ldots \\ e^{-ip_{n}\cdot x_{n}}(p_{1}^{2}-m^{2})\ldots (p_{n}^{2}-m^{2}) \ev{T \left( \phi(x_{1})\ldots\phi(x_{n})\phi(x) \right) }{\Omega} \nonumber \, . \end{align} Here we have to be careful because of the use of a nonstandard notation: the initial off-shell particle is in the position representation, while the rest are in the momentum representation, so this amplitude has mixed momentum and position dependence. All of this dependence drops out at threshold: \begin{align} \vec{p}_{i} = 0 \, , \quad \text{for all $i$} \, , \end{align} where $p_{i}$ are the outgoing momenta in the amplitude. We first calculate these amplitudes in the threshold limit and, at the end, try to recover the momentum dependence. Another point to notice is that, if we want to relate this amplitude to the Feynman diagram computation, we need to amputate the virtual particle. The Feynman amplitude would be $\mathcal{M}$, and its relation to $\mathcal{A}$ is: \begin{align} \mathcal{M}(p,p_{i})= \left( p^{2}-m^{2}\right) \mathcal{A}(p,p_{i}) \, . \end{align} From now on, we can ignore the $Z_{\phi}$ factor, because all the computations are done up to one loop.
The effects of $Z_{\phi}$ enter only at higher loops for the class of theories that we work with in this thesis. We can also drop the overall phase factor for this process, because we square this amplitude in the end. The interesting observation used by Brown is that we can rewrite the $(n+1)$-point correlator in terms of functional derivatives using Eq.~(\ref{fdv}): \begin{align}\label{tomano} \ev{T \left( \phi(x_{1})\ldots\phi(x_{n})\phi(x) \right) }{\Omega} =\fdv{\rho(x_{1})}\ldots\fdv{\rho(x_{n})}\ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} \, , \end{align} where the expectation value is taken in the presence of an arbitrary source. Using Eq.~(\ref{tomano}), we write the amplitude as: \begin{align} \label{amp} \mathcal{A}(1\rightarrow n) = \prod_{i=1}^{n} \left[ \lim_{p_{i}^{2} \rightarrow m^{2}} \int \dd[4]{x_{i}} e^{-ip_{i}x_{i}}(p_{i}^{2}-m^{2}) \fdv{\rho(x_{i})} \right] \ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} \, . \end{align} It is possible to find a differential equation for $\ev{\phi(x)}{\Omega}_{\rho}$ that is just like the classical equation of motion, and then find an analytic solution for $\mathcal{A}(1\rightarrow n)$. This equation simplifies when we want to calculate only the tree-level contribution. Let us now specialize to this case for the $\phi^{4}$ theory (any other interaction or kind of matter would follow a similar path). \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Unbroken Phase} It is known that the tree-level approximation for the expectation value is just the classical solution in the presence of an arbitrary source~\cite{Brown-nov-92}: \begin{align} \ev{\phi(x)}{\Omega}_{\rho} \rightarrow \phi_{cl}(x)[\rho] \, . \end{align} This statement will be made a little more rigorous when we compute loop corrections in Section~\ref{loop}. Choosing the $\phi^{4}$ theory and using the usual particle physics normalization, we get the equation of motion: \begin{align} \label{eqmov} (\Box + m^{2}) \phi_{cl}(x) + \frac{\lambda}{3!} \phi_{cl}^{3}(x)=\rho(x) \, . \end{align} We want to find a solution of this equation to get the $\rho$ dependence of the field, and then take $n$ functional derivatives to obtain the amplitude, Eq.~(\ref{amp}). Because this is a differential equation, we need boundary conditions. They are set by the Feynman prescription of the propagator, which tells us how to project onto the right vacuum. We have transformed the problem of finding the tree-level amplitude into solving a non-linear second-order differential equation with an arbitrary source, which is still a difficult problem. The approach developed by Brown is to focus on the threshold limit, where the source can be taken to have a simple enough form that the equation can be solved, as we will show. Note that we want the dependence of the field on the source, not the actual form of the field solution by itself. For this, we need to be careful, because the source needs to be able to excite all modes of the field. Since we want the amplitude at threshold, there is no spatial momentum in the final states. In that limit, the source and the field are homogeneous in space and depend only on time. Hence, the threshold limit reduces the equation to only one dimension. The next step to find a solution of Eq.~(\ref{eqmov}) is to choose a simple exponential source: \begin{align} \label{salsa} \rho(t) = \rho_{0}e^{i\omega t} \, .
\end{align} The equation of motion then becomes: \begin{align} \label{socorro} (\partial_{t}^{2}+m^{2})\phi_{tree}+\frac{\lambda}{3!}\phi_{tree}^{3}= \rho(t) \, . \end{align} This is still a non-linear problem, but now it is solvable. We look for a solution in perturbation theory, using $\lambda$ as a deformation parameter that turns the non-linearity on and off. We first consider the free equation: \begin{align} (\partial_{t}^{2}+m^{2})\phi_{tree}^{0}=\rho_{0} e^{i\omega t} \, , \end{align} whose solution is: \begin{align} \label{zdete} \phi_{tree}^{0} = - \frac{\rho_{0}}{\omega^{2}-m^{2}+i\epsilon} e^{i\omega t} = z(t) \, . \end{align} Turning the coupling on generates a series of the form: \begin{align} \label{pert} \phi_{tree}= z(t) + \lambda \phi_{tree}^{(1)}+\lambda^{2} \phi_{tree}^{(2)} + \ldots \, . \end{align} Plugging this into the equation of motion: \begin{align} (\partial_{t}^{2}+m^{2})(z(t) + \lambda \phi_{tree}^{(1)}+\lambda^{2} \phi_{tree}^{(2)} + \ldots)+\frac{\lambda}{3!}(z(t) + \lambda \phi_{tree}^{(1)}+\lambda^{2} \phi_{tree}^{(2)} + \ldots)^{3}= \rho_{0}e^{i\omega t} \, , \end{align} shows that $\phi_{tree}(t)$ can be written in terms of $z(t)$ alone, with the source dependence entering only through it; for instance, the first term in the expansion is: \begin{align}\label{fi} \phi_{tree}^{(1)} = \lambda \frac{z_{0}(\rho_{0},\omega)}{3!(9\omega^{2}-m^{2})} e^{3i\omega t} = - \lambda \frac{z(t)^{3}}{3!(9\omega^{2}-m^{2})z_{0}(\rho_{0},m)^{2}} \, . \end{align} The notation that we used is: \begin{align} z_{0}(\rho_{0},\omega) = \frac{\rho_{0}}{\omega^{2}-m^{2}+i\epsilon} \, , \end{align} in such a way that in the correlated double limit $\rho_{0} \rightarrow 0$, $\omega \rightarrow m$ this function becomes a finite constant $z_{0}$. We can trade the functional derivative with respect to the source for ordinary $z(t)$ derivatives in the amplitude, Eq.~(\ref{amp}): \begin{align} \label{co} \mathcal{A}(1\rightarrow n) = \prod_{i=1}^{n} \left[ \lim_{p_{i}^{2} \rightarrow m^{2}} \int \dd[4]{x_{i}} e^{-ip_{i}x_{i}}(p_{i}^{2}-m^{2}) \fdv{\rho(x_{i})} \right] \ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} = \end{align} \begin{align} \nonumber = \prod_{i=1}^{n} \left[ \lim_{\omega \rightarrow m} \int \dd[4]{x_{i}} e^{-i\omega t_{i}}(\omega^{2}-m^{2}) \fdv{\rho(t_{i})} \right] \ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} \, . \end{align} Now we can use the dependence on the source from Eq.~(\ref{zdete}) and Eq.~(\ref{pert}): \begin{align} \label{sour} \frac{\delta \phi[z]}{\delta \rho(t_{i})} = \frac{\delta z[\rho]}{\delta \rho(t_{i})} \frac{\partial \phi(z)}{\partial z} = -\frac{1}{\omega^{2}-m^{2}} \delta(t-t_{i}) \delta^{3}(x_{i})\frac{\partial \phi(z)}{\partial z} \, . \end{align} Using Eq.~(\ref{sour}) in Eq.~(\ref{co}), we can see the simplification of the problem after doing the delta integrations and ignoring the overall phase: \begin{align} \label{amp1} \mathcal{A}(1 \rightarrow n) = \left( \pdv{z} \right)^{n} \phi_{tree}[z(t)] \Bigg|_{z=0} \, , \end{align} where the on-shell condition corresponds to the limit $\omega \rightarrow m$ and $\rho \rightarrow 0$, taken in such a way that $z_{0}$ remains finite. The solution for $\phi_{tree}$ can be obtained order by order in $\lambda$, like the first term obtained in Eq.~(\ref{fi}). One finds that the perturbative series, Eq.~(\ref{pert}), can be resummed: \begin{align} \label{unbroken} \phi_{tree}(t)= \frac{z(t)}{1-z^{2}\frac{\lambda}{48m^{2}}} \, . \end{align} It is easy to check that this is indeed a solution in a well-defined limit.
Defining $\alpha = \frac{\lambda}{48m^{2}}$ we compute: \begin{align} \dot{\phi}_{tree} = \dot{z} \frac{(1+\alpha z^{2})}{(1- \alpha z^{2})^{2}} \, , \end{align} \begin{align} \ddot{\phi}_{tree} =\frac{ \ddot{z}(1-\alpha^{2} z^{4}) + \dot{z}^{2}(6\alpha z +2 \alpha^{2} z^{3})}{(1-\alpha z^{2})^{3}} \, , \end{align} and using the definition of $z(t)$, Eq.~(\ref{zdete}), we have $z \propto e^{i \omega t}$, so: \begin{align} \dot{\phi}_{tree} = i \omega z \frac{(1+\alpha z^{2})}{(1- \alpha z^{2})^{2}} \, , \end{align} \begin{align} \ddot{\phi}_{tree} =-\omega^{2} \frac{(\alpha^{2}z^{5}+6\alpha z^{3}+ z)}{(1-\alpha z^{2})^{3}} \, . \end{align} Putting this in the equation of motion, Eq.~(\ref{socorro}), and clearing the denominator we get: \begin{align} \label{direito} \alpha^{2} z^{5}(m^{2}-\omega^{2}) + z^{3}(\frac{\lambda}{3!}-6\omega^{2}\alpha -2m^{2}\alpha) + z(m^{2}-\omega^{2})=-(\omega^{2}-m^{2})z(1-\alpha z^{2})^{3} \, . \end{align} We can see that this is a solution when $\omega \rightarrow m$, since the middle term vanishes by the definition of $\alpha$. Here we can take $\rho \rightarrow 0 $ in such a way that $z(t)$ remains finite. Even though we arrived at a solution using perturbation theory, we have obtained a representation, Eq.~(\ref{unbroken}), that can be trivially continued to the full complex $z(t)$ plane. An interesting consequence of the form of the amplitude, Eq.~(\ref{amp1}), is that we can work without the source if we change the boundary conditions. In the solution this can be seen trivially on the right side of Eq.~(\ref{direito}), since the source contribution vanishes on-shell. This happens because we only want the solution on-shell, and before taking this limit, $\rho_{0} \rightarrow 0$ acts like $z \rightarrow 0$. Then, we can solve without any source to find these amplitudes. Doing so changes the boundary conditions of the solution until we set $\rho=0$, because we are solving with a source present at all times. This dictates that the solution, Eq.~(\ref{unbroken}), should vanish in positive Euclidean times: \begin{align} \Im(t) \rightarrow\infty \, . \end{align} This boundary condition remains for the broken case and for the loop corrections. Another interesting point is that we started with a real field but got a complex solution. This complexification happens because of the source, Eq.~(\ref{salsa}). If we had chosen a real source, the solution would be real as well. However, we only use the source as a trick to get the scattering amplitude; it is arbitrary and can be chosen to be of this particular form. With that in mind, we can now find the decay amplitude, Eq.~(\ref{amp1}), by taking $n$ derivatives with respect to $z(t)$ in Eq.~(\ref{unbroken}). To facilitate this, we can use a series representation of this expression and pick out the $n$th term in it: \begin{align} \frac{z}{1-\alpha z^{2}} = \sum_{0}^{\infty} z^{2n+1} \alpha^{n} \, . \end{align} The amplitude is then: \begin{align} \mathcal{A}(1 \rightarrow n) = \frac{n!}{2\sqrt{\alpha}} \left[ \left(\sqrt{\alpha}\right)^{n} - \left(-\sqrt{\alpha}\right)^{n} \right] \, . \end{align} It is possible to draw some conclusions about this amplitude at tree level and threshold. First, it is necessary to remember that this is a partially off-shell amplitude, and because of that it is not a physical object by itself. Nevertheless, we can use this amplitude to construct physical observables. This amplitude has the following interesting features: \begin{itemize} \item It vanishes for even final states: \begin{align} \mathcal{A}(1 \rightarrow 2k) =0 \, .
\end{align} \item For odd final states it has the factorial growth: \begin{align} \mathcal{A}(1 \rightarrow 2k+1) = (2k+1)! \left( \frac{\lambda}{48m^{2}} \right)^{k} \, . \end{align} \end{itemize} This factorial growth persists in the decay rate even after dividing by the $n!$ coming from the identical nature of the final states. That could be an indication that in the high multiplicity limit the decay rate grows, or that perturbation theory cannot be trusted for a high multiplicity computation. The total final-state energy of the system at rest is: \begin{align} E=(2k+1)m \, , \end{align} and the phase space in this case is zero (it is just a point). If we assume an infinitesimal sphere around $E$, and that the momentum dependence is constant in this region, we get only an overall small factor trying to combat the factorial growth, which we will call $V_{n}(E)$: \begin{align} \Gamma_{2k+1}(E) = (2k+1)! \left( \frac{\lambda}{48m^{2}} \right)^{2k} V_{2k+1}(E) \, . \end{align} This is just a naive approximation: if we want to know the phase-space contribution, we need to be able to go beyond threshold, in such a way that we can get results around a specific configuration. This is already potentially problematic for the perturbative unitarity of the theory. The squared amplitude divided by the symmetry factor grows factorially and can, in principle, surpass any unitarity bound for these processes: \begin{align} \frac{\left| \mathcal{M}\right|^{2}}{n!} \propto n! \, . \end{align} \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Broken Phase} The $\phi^{4}$ theory has another regime where we can explore these processes. In this phase the mass term is negative, so it is convenient to use the definition: \begin{align} m^{2}=-\mu^{2} \, . \end{align} The reflection symmetry is broken, and the configuration that minimizes the potential from Eq.~(\ref{eqmov}) is no longer zero: \begin{align} \label{min} \phi_{min}^{2} = \frac{3!\mu^{2}}{\lambda} \, . \end{align} If we want to find the tree-level amplitude at threshold, it is easier to work with the shifted field, which has no expectation value~\cite{Brown-nov-92,Smith-apr-93}: \begin{align} \label{shift} \sigma_{tree}=\phi_{tree}-\phi_{\min} \, . \end{align} Using this definition, the equation of motion becomes: \begin{align} \label{eqbro} (\Box + 2\mu^{2}) \sigma_{tree}(x) + \frac{\lambda \phi_{min}}{2!}\sigma_{tree}^{2}(x) + \frac{\lambda}{3!} \sigma_{tree}^{3}(x)=\rho(x) \, . \end{align} The steps from Eq.~(\ref{eqbro}) to Eq.~(\ref{broken}) are essentially the same as detailed in the unbroken phase above. We need to solve this equation to find $\sigma$ as a functional of the source $\rho$. Choosing again the same exponential source, Eq.~(\ref{salsa}), transforms this problem into finding the solution of the sourceless case with the boundary condition that: \begin{align} \sigma \rightarrow 0 \quad \mbox{as} \quad \Im(t) \rightarrow \infty \, . \end{align} Now, we look for a perturbative solution of the spatially homogeneous case of Eq.~(\ref{eqbro}). Doing that, we find the perturbative series in terms of the unperturbed solution: \begin{align} z(t)= z_{0}e^{i \sqrt{2}\mu t} \, , \end{align} where the physical mass is now $\sqrt{2}\mu$ in the on-shell limit. It is easy to check that a solution of Eq.~(\ref{eqbro}), after taking the $\rho \rightarrow 0$ limit while keeping $z_{0}$ finite, is: \begin{align} \label{broken} \sigma(t)= \frac{z}{1-\frac{z}{2\phi_{min}}} \, . \end{align} Keep in mind that $\phi_{min}$ carries all the coupling dependence.
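The unbroken-phase pattern can be cross-checked with a minimal \texttt{sympy} sketch: differentiating the resummed solution, Eq.~(\ref{unbroken}), at $z=0$ reproduces the vanishing even amplitudes and the $(2k+1)!\,\alpha^{k}$ growth of the odd ones. The broken solution, Eq.~(\ref{broken}), can be treated in exactly the same way:
\begin{verbatim}
import sympy as sp

z = sp.Symbol('z')
lam, m = sp.symbols('lambda m', positive=True)
alpha = lam / (48 * m**2)

phi_tree = z / (1 - alpha * z**2)      # Eq. (unbroken)

for n in range(1, 8):
    A_n = sp.simplify(sp.diff(phi_tree, z, n).subs(z, 0))
    expected = 0 if n % 2 == 0 else sp.factorial(n) * alpha**((n - 1) // 2)
    print(n, A_n, sp.simplify(A_n - expected) == 0)
\end{verbatim}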
We find the solution in Eq.~(\ref{broken}) by performing a perturbative expansion, resumming it, and analytically continuing to the full complex $z(t)$ plane. Checking that this is a solution is direct: \begin{align} \ddot{\sigma} = -2\mu^{2} \frac{z+\frac{z^{2}}{2\phi_{\min}}}{(1-\frac{z}{2\phi_{\min}})^{3}} \, . \end{align} Plugging this into the equation of motion and clearing the denominator we get: \begin{align} z^{3}(-\frac{\lambda}{12}+\frac{\mu^{2}}{2\phi_{min}^{2}})+z^{2}(-3\frac{\mu^{2}}{\phi_{\min}}+\frac{\lambda \phi_{\min}}{2}) + z(-2\mu^{2}+2\mu^{2})=0 \, , \end{align} where we already used $\omega^{2} \rightarrow 2\mu^{2}$ and set $\rho_{0} \rightarrow 0$. The l.h.s.\ vanishes upon using Eq.~(\ref{min}). Now that we have this solution, we can compute the tree-level threshold amplitude for the broken phase, following the same logic as explained in the unbroken case: \begin{align} \label{ampbroken1} \mathcal{A}_{B}(1 \rightarrow n) = \left( \pdv{z} \right)^{n} \sigma[z] \Bigg|_{z=0} \, . \end{align} As before, we find a series expansion and pick out the $n$th term to get: \begin{align} \mathcal{A}_{B}(1 \rightarrow n) = n! \left( \frac{\lambda}{24\mu^{2}} \right)^{(n-1)/2} \, . \end{align} We can see that the factorial growth is still present in this phase. The difference is that now we can have even final states. We also get a factorially growing decay rate: \begin{align} \Gamma_{B}(1 \rightarrow n) = n! \left( \frac{\lambda}{24\mu^{2}} \right)^{(n-1)} V_{n}(E) \, . \end{align} As highlighted before, this is a potential danger for the unitarity of the theory. If these processes start to dominate with factorial power, then the cross section for a process of a few particles going to $n$ starts to grow as well. This growth is inconsistent with the perturbative unitarity of the theory. There are a few possible explanations for this feature and ways to tame this growth. We will continue working within the perturbative approach, at threshold, and see whether the factorial growth persists after quantum corrections, at the one-loop level. Later on, we explore going beyond threshold, so that the assumption that $V_{n}(E)$ is constant can be checked and a better decay rate expression obtained. \section{One-Loop Amplitude at Threshold} \label{loop} Until now, we saw that the tree-level amplitude at threshold for the $\phi^{4}$ theory displays a factorial growth already in the first term of the series. We expect the series that appear in Quantum Field Theory to be divergent. However, it is unusual for the first term of the series to be large already. This, in principle, could mean that we cannot even trust the first term of the series as a good approximation. If we want to understand this better, we need to compute quantum corrections for this theory to see whether these factorial growths somehow get tamed. It is possible to implement loop corrections in this formalism by expanding the operators in an $\hbar$ expansion, where the first term corresponds to the tree level. In this expansion we have to be careful, because $\hbar$ has dimensions, and so far we have been using units such that: \begin{align} \hbar = 1 \, . \end{align} To get around this, we can introduce a deformation parameter $\epsilon$ where $\hbar$ would appear, such that the limit $\epsilon \rightarrow 0$ corresponds to the classical limit $\hbar \rightarrow 0$. The expansion of the field operator is: \begin{align} \hat{\phi}(x)= \phi_{0} \hat{I} + \sqrt{\epsilon} \hat{\phi}_{1/2}+\epsilon \hat{\phi}_{1} + \mathcal{O}(\epsilon^{3/2}) \, .
\end{align} The expansion is in $\sqrt{\epsilon}$ because we are working at the level of the equation of motion. We will see from the calculation that, with this convention, the one-loop correction appears in the $\epsilon$ term. With this definition, we can find the expectation value of the field order by order in $\epsilon$, and then the amplitude can be computed up to that same order by an appropriate functional derivative. Now, let us specialize to the cases that we have considered above to see how quantum corrections modify them. \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Unbroken Phase} The first case of interest is the $\phi^{4}$ theory in the unbroken phase~\cite{loop1}. The equation of motion in terms of the field operator, before expanding in $\epsilon$, is: \begin{align} \ev{T\left[ (\Box + m^{2})\hat{\phi}+ \frac{\lambda}{3!} \hat{\phi}^{3}\right]}{\Omega}_{\rho}=0 \, . \end{align} We will use the notation: \begin{align} \ev{\hat{\phi}_{i}}{\Omega}_{\rho} = \phi_{i}[\rho] \, . \end{align} Expanding the field operator up to $\mathcal{O}(\epsilon^{3/2})$, the equation of motion is\footnote{The source that appears in the equation is the classical external one. It appears after we bring the d'Alembert operator out of the expectation value, passing through the time ordering.} \begin{align} \label{loopexp} \left[ (\Box+m^{2})\phi_{0}(x) +\frac{\lambda}{3!}\phi_{0}^{3}(x)-\rho(x) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon}\left[ \left( \Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(x) \right) \phi_{1/2}(x) \right] + \end{align} \begin{align} \nonumber +\epsilon \left[ \left( \Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(x) \right) \phi_{1}(x) + \frac{\lambda \phi_{0}}{2} \ev{T\left(\hat{\phi}_{1/2}(x)\hat{\phi}_{1/2}(x)\right)}{\Omega}_{\rho} \right]+ \mathcal{O}(\epsilon^{3/2}) =0 \, . \end{align} Now we take the threshold limit. This means that $\phi_{0}$ is as calculated before, Eq.~(\ref{unbroken}): \begin{align} \label{unbroken1} \phi_{0} = \frac{z(t)}{1-\frac{\lambda}{48m^{2}}z(t)^{2}} \, . \end{align} The equation at order $\sqrt{\epsilon}$ defines the two-point function that appears at order $\epsilon$ in Eq.~(\ref{loopexp}). It is the zero-mode equation for the differential operator: \begin{align} \label{diamante} \diamondsuit=\Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(t) \, , \end{align} where the tree-level and one-loop fields are at threshold, while $\phi_{1/2}$ is allowed to have the spatial dependence that appears in $\diamondsuit$ and its Green function. We will see that the boundary conditions kill all zero modes of $\diamondsuit$, so this equation tells us that $\phi_{1/2}$ is zero, as expected. This is a common feature, since we are treating things at the level of the equation of motion. The order-$\epsilon$ term is the one we are interested in: it gives the one-loop contribution to the amplitude. In Eq.~(\ref{loopexp}), there is a contribution from the two-point Green function of $\hat{\phi}_{1/2}$. The operator that we need to invert to find this Green function is Eq.~(\ref{diamante}); in the end, we set $x'= x$. This Green function will be taken at coincident points, so we can expect divergences to appear. These divergences are familiar from more standard Quantum Field Theory arguments. Using $\phi_{0}$ we can write the operator, Eq.~(\ref{diamante}), as: \begin{align} \label{carvao} \diamondsuit= \Box+m^{2} + \frac{\lambda}{2}\left( \frac{z(t)}{1-\frac{\lambda}{48m^{2}}z^{2}} \right)^{2} \, .
\end{align} From the start we have a problem: this operator is not Hermitian, because $z(t)$ is complex. This makes our job a little harder. To deal with this fact, we proceed as proposed in~\cite{loop1}. Going to Euclidean time, we can make this problem simpler. However, in Euclidean time we have a pole on the contour of integration. To address this, we can shift the Euclidean time variable to avoid the pole and then analytically continue the solution to the whole complex Euclidean plane. It is convenient to work in terms of~\cite{loop1}: \begin{align} u(\tau) = e^{m\tau} \, , \end{align} where $u(\tau)$ is defined through: \begin{align} \label{rot1} -\frac{\lambda}{48m^{2}} z(t)^{2} = u(\tau)^{2} \, , \end{align} and the new Euclidean time coordinate is: \begin{align} \tau = i t + \frac{i \pi}{2m} + \frac{\ln(\frac{\lambda z_{0}^{2}}{48m^{2}})}{2m} \, . \end{align} The tree-level solution takes the form: \begin{align} \phi_{0}(t) = i \sqrt{\frac{48m^{2}}{\lambda}} \frac{u(\tau)}{1+u(\tau)^{2}} = i \sqrt{\frac{48m^{2}}{\lambda}} \frac{\sech(m\tau)}{2} \, . \end{align} Then the operator to be inverted, Eq.~(\ref{carvao}), reads: \begin{align} \diamondsuit = \left( -\partial_{\tau}^{2} - \nabla^{2} + m^{2} - 6m^{2}\sech^{2}(m\tau) \right) \, . \end{align} Doing a partial Fourier transform in the spatial directions only, we can write the Green function of this operator as: \begin{align} \label{fourier} G(\vec{x},\vec{x}' ; \tau, \tau') = \int \frac{\dd[3]{k}}{(2\pi)^{3}} G_{k}(\tau,\tau') e^{i \vec{k}.(\vec{x}-\vec{x}')} \, , \end{align} with the Green function satisfying: \begin{align} \diamondsuit(\vec{x},\tau) G(\vec{x},\vec{x}' ; \tau, \tau') = \delta^{3}(\vec{x}-\vec{x}')\delta(\tau-\tau') \, . \end{align} Using Eq.~(\ref{fourier}), we focus on the following Green function: \begin{align} \left( -\partial_{\tau}^{2} + \theta^{2} - 6m^{2}\sech^{2}(m\tau) \right)G_{\theta}(\tau,\tau') =\delta(\tau -\tau') \, , \end{align} where we defined $\theta^{2}=\vec{k}^{2}+m^{2}$. From $G_{\theta}$ we can then find the full Green function by doing the momentum integrals using Eq.~(\ref{fourier}). We are doing this computation in $(3+1)$D, but the generalization to other dimensions is straightforward, only changing the number of $k$ integrals. In fact, the Green function that appears in Eq.~(\ref{loopexp}) is evaluated at coincident space-time points; therefore, we have: \begin{align} \label{samepoint} \ev{T\left( \hat{\phi}_{1/2}(x)\hat{\phi}_{1/2}(x)\right)}{\Omega}_{\rho} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} G_{\theta}(\tau,\tau) \, . \end{align} Surprisingly, this Green function is very similar to that of a known quantum mechanical potential (the P\"oschl-Teller potential)~\cite{pol}. We will map this problem onto that quantum mechanical problem and then show how to solve this potential exactly. After that, we continue with the solution to find the Green function and, in the end, the loop correction. To look for this Green function, we search for two regular solutions of the homogeneous equation, one regular at $\tau \rightarrow \infty$, the other at $\tau \rightarrow -\infty$.
With both solutions $f_{-}(\tau)$, $f_{+}(\tau)$ and with the Wronskian $\mathcal{W}$: \begin{align} \label{wronk} \mathcal{W} = f_{+}(\tau)f_{-}(\tau)' - f_{+}(\tau)'f_{-}(\tau) \, , \end{align} we then construct the Green function: \begin{align} \label{gf} G_{\theta}(\tau,\tau') = \frac{ f_{+}(\tau)f_{-}(\tau')}{\mathcal{W}} \qquad \textnormal{for} \qquad \tau > \tau' \, , \end{align} \begin{align} G_{\theta}(\tau,\tau') = \frac{ f_{+}(\tau')f_{-}(\tau)}{\mathcal{W}} \qquad \textnormal{for} \qquad \tau' > \tau \, . \end{align} Both solutions can be written in a Schr\"odinger-like form: \begin{align} \left( -\partial_{\tau}^{2} - 6m^{2} \sech^{2}(m\tau) \right) f(\tau) = -\theta^{2} f(\tau) \, , \end{align} where we identify $\tilde{E}_{\theta}=-\theta^{2}$, with the Hamiltonian being the operator on the left. We can solve this using algebra alone. To facilitate this, we can work with dimensionless coordinates through the change of variables $x=m \tau$: \begin{align} \hat{H} = \left( -\partial_{x}^{2} - 6 \sech^{2}(x) \right) \, . \end{align} The dimensionless energy is defined as: \begin{align} E_{\theta}=-\frac{\theta^{2}}{m^{2}} \, , \end{align} such that we need to solve the eigenvalue problem: \begin{align} \label{prob1} \hat{H}f(x)=E_{\theta}f(x) \, . \end{align} It turns out that, to find the solution for this Hamiltonian, we need to generalize it to: \begin{align} \label{prob} \hat{H}_{l} = p^{2} - l(l+1) \sech^{2}(x) \, . \end{align} Our case is $l=2$. Now we take a small detour to solve this eigenvalue problem, because even though this is an exactly solvable quantum mechanical system, it is not so trivial to find the solution. \subsection{Eigenvalues and Eigenfunctions of the P\"oschl-Teller Potential} This section can be skipped without affecting the core of the thesis. Here we want to solve the quantum mechanical problem, Eq.~(\ref{prob}): \begin{align} \hat{H}_{l} \ket{p,l} = E_{l} \ket{p,l} \, . \end{align} We want to find the eigenfunctions for the special case of $l=2$. For each $l$ we have a quantum system with eigenvalues $E_{l}$. The spectrum that we are interested in is the continuum band of the P\"oschl-Teller potential: \begin{align} \label{banana} V(x)= -l(l+1)\sech^{2}(x) \, . \end{align} The easiest case is $l=0$: the system is the free particle, and trivially the energy does not depend on $l$: \begin{align} \hat{H}_{0} \ket{p,0}= p^{2}\ket{p,0} \, . \end{align} The special property of this system is that it belongs to a class of factorizable potentials. This feature is reminiscent of the supersymmetric version of this potential~\cite{pt1}: \begin{align} V_{s}(x)= -l(l+1)\sech^{2}(x) + l^{2} \, . \end{align} Because we are interested in the continuum spectrum of Eq.~(\ref{banana}), we will not introduce Supersymmetric Quantum Mechanics~\cite{susy1,susy2}. Nevertheless, supersymmetry is the reason why this procedure works and why this potential is solvable. The factorization of the Hamiltonian, Eq.~(\ref{prob}), can be achieved using the following operators: \begin{align} \label{caca} a_{l} = p - i l \tanh(x) \, , \end{align} \begin{align} \label{coco} a_{l}^{\dagger} = p + i l \tanh(x) \, . \end{align} They are chosen in this form to satisfy: \begin{align} \comm{a_{l}}{a_{l}^{\dagger}} = 2l \sech^{2}(x) \, . \end{align} With the operators of Eq.~(\ref{caca}) and Eq.~(\ref{coco}) we can construct the initial Hamiltonian, Eq.~(\ref{prob}): \begin{align} \label{caraca} H_{l}^{+} \equiv a_{l}^{\dagger}a_{l}= H_{l}+l^{2} \, .
\end{align} The object on the l.h.s.\ of Eq.~(\ref{caraca}) is the Hamiltonian for the supersymmetric description of this potential. We can define its partner Hamiltonian by exchanging the order of the operators: \begin{align} \label{caracafermionico} H_{l}^{-} \equiv a_{l}a_{l}^{\dagger}=H_{l-1}+l^{2} \, . \end{align} In this case both supersymmetric partners are related only by a change of constants inside the Hamiltonian. This class of systems is called shape invariant potentials~\cite{shape1}, and this plays a pivotal role in making this potential solvable. With the definitions of Eq.~(\ref{caraca}) and Eq.~(\ref{caracafermionico}) we can see that, if we have an eigenstate of $H_{l}^{+}$ in the continuum spectrum: \begin{align} H_{l}^{+} \ket{E^{+},l}=E^{+}_{l} \ket{E^{+},l} \, , \end{align} there exists an eigenstate of $H^{-}_{l}$ with the same eigenvalue, except for the ground state: \begin{align} \label{mesalva} a_{l}H_{l}^{+} \ket{E^{+},l}= H_{l}^{-} \left( a_{l} \ket{E^{+},l} \right) =E^{+}_{l} \left( a_{l} \ket{E^{+},l} \right) \, . \end{align} This works the other way around as well. It is a consequence of the supersymmetry of the system: both spectra are related by the operators $a_{l}$ and $a_{l}^{\dagger}$, as represented in Figure~\ref{supa}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{susy} \caption{Relation between the two partner Hamiltonians.} \label{supa} \end{figure} Because this potential is shape invariant, using Eq.~(\ref{caraca}) and Eq.~(\ref{caracafermionico}) we can see that the spectrum does not depend on $l$. The shape invariance tells us that a change of $l$ relates $H^{+}$ and $H^{-}$. This implies that the eigenvalues will be the same as in the free case, $l=0$: \begin{align} E_{l} = p^{2} \, . \end{align} Now, to construct the different eigenstates, we can use Eq.~(\ref{mesalva}) to build up all the states of $H_{l}$. For $l=0$ we have: \begin{align} \bra{x}\ket{p,0}=e^{i p x} \, , \end{align} and for $l=1$: \begin{align} \ket{p,1}=a^{\dagger}_{1}\ket{p,0} \, . \end{align} To see that this is indeed an eigenstate of $H_{1}$, we can check directly: \begin{align} H_{1} \ket{p,1} = (H_{1}^{+}-1)a^{\dagger}_{1} \ket{p,0} = a_{1}^{\dagger} H_{1}^{-} \ket{p,0} - a_{1}^{\dagger} \ket{p,0} = \\ = (p^{2}+1)a_{1}^{\dagger}\ket{p,0} - a_{1}^{\dagger}\ket{p,0} = p^{2} \left( a_{1}^{\dagger}\ket{p,0} \right)=p^{2} \ket{p,1} \, . \end{align} The wave function for the $l=1$ case is: \begin{align} \bra{x}\ket{p,1} = \mel{x}{a_{1}^{\dagger}}{p,0}= \mel{x}{\hat{p}+i \tanh(\hat{x})}{p,0} = \left( p+i \tanh(x) \right) e^{ipx} \, . \end{align} Finally, the case of interest is $l=2$: \begin{align} \bra{x}\ket{p,2} = \mel{x}{a^{\dagger}_{2}}{p,1}=\left( 1+p^{2}+3ip \tanh(x)-3\tanh^{2}(x)\right) e^{ipx} \, . \end{align} Using this wave function, we can extend the solution to the whole complex $p$ plane to get the two functions needed to construct the Green function, Eq.~(\ref{gf}): \begin{align} p =\pm i\frac{\theta}{m} \, . \end{align} Because we need regular solutions at $\pm \infty$, both values of $p$ will be used in this construction: \begin{align} \label{sol1} f_{-}(x)= \left( 1-\frac{\theta^{2}}{m^{2}} +3\frac{\theta}{m}\tanh(x) -3\tanh^{2}(x) \right)e^{\frac{\theta x}{m}} \, , \end{align} \begin{align} \label{sol2} f_{+}(x)= \left(1-\frac{\theta^{2}}{m^{2}} -3\frac{\theta}{m}\tanh(x) -3\tanh^{2}(x)\right) e^{-\frac{\theta x}{m}} \, .
\end{align} \subsection{Back to $\frac{\lambda\phi^{4}}{4!}$ in the Unbroken Phase} Now that we have both solutions Eq.~(\ref{sol1}) and Eq.~(\ref{sol2}) to construct the Green function of Eq.~(\ref{gf}), we just need to write them in terms of the variables that we are using: \begin{align} u(x) = e^{x} \, , \end{align} \begin{align} f_{-}(x) = \frac{u^{4}(-\frac{\theta^{2}}{m^{2}}+3\frac{\theta}{m}-2) + u^{2}(-2\frac{\theta^{2}}{m^{2}}+8)-\frac{\theta^{2}}{m^{2}}-3\frac{\theta}{m}-2}{(1+u^{2})^{2}} u^{\frac{\theta}{m}} \, , \end{align} \begin{align} f_{+}(x) = \frac{u^{4}(-\frac{\theta^{2}}{m^{2}}-3\frac{\theta}{m}-2) + u^{2}(-2\frac{\theta^{2}}{m^{2}}+8)-\frac{\theta^{2}}{m^{2}}+3\frac{\theta}{m}-2}{(1+u^{2})^{2}} u^{-\frac{\theta}{m}} \, . \end{align} Having these two solutions we can calculate the Wronskian, Eq.~(\ref{wronk}): \begin{align} \mathcal{W} = 2\frac{\theta}{m^{4}}(\theta^{2}-m^{2})(\theta^{2}-4m^{2}) \, . \end{align} With this, we can construct the equal-time Green function: \begin{align} G_{\theta}(\tau,\tau) = \frac{f_{+}(m\tau)f_{-}(m\tau)}{\mathcal{W}}= \end{align} \begin{align} \nonumber = \frac{1}{2\theta (1+u^{2})^{4}} \left( (1+u^{8}) + (u^{6}+u^{2})\frac{4(\theta^{2}+2m^{2})}{(\theta^{2}-m^{2})} + u^{4} \frac{6(\theta^{4}-m^{2}\theta^{2}+12m^{4})}{(\theta^{2}-m^{2})(\theta^{2}-4m^{2})} \right) \, . \end{align} The Green function in this form is not so useful. We can use a partial fraction expansion in the $u$ variable to separate the different kinds of contributions to the Green function. Doing so, it is straightforward to see that we have a finite part and a divergent part when doing the Fourier integral in Eq.~(\ref{fourier}). We can separate both parts in the following way, which will facilitate the renormalization later: \begin{align} G_{\theta}(\tau,\tau) = G_{\theta}(\tau,\tau)^{div} + G_{\theta}(\tau,\tau)^{fin} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{div} = \frac{1}{2\theta} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}\frac{1}{\theta^{3}} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{fin} = \frac{6m^{4}}{\theta^{3}(\theta^{2}-m^{2})} \left( \frac{u^{2}+u^{6}}{(1+u^{2})^{4}} \right)+ \frac{6m^{4}u^{4}}{(1+u^{2})^{4}}\left( \frac{14\theta^{2}-8m^{2}}{\theta^{3}(\theta^{2}-m^{2})(\theta^{2}-4m^{2})} \right) \, . \end{align} Now we are ready to integrate this to get the full Green function that enters the one-loop equation: \begin{align} G(\vec{x},\vec{x};\tau,\tau) = \int \frac{\dd[3]{k}}{(2\pi)^{3}} G_{\theta}(\tau,\tau) \, , \end{align} with $\theta^{2}= \vec{k}^{2}+m^{2}$. The divergent part of the two-point function is: \begin{align} G^{div}(\vec{x},\vec{x};\tau,\tau) = \frac{1}{2}\mathcal{I}_{1} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}\mathcal{I}_{2} \, , \end{align} where $\mathcal{I}_{1}$ is quadratically divergent and $\mathcal{I}_{2}$ has a logarithmic divergence: \begin{align} \mathcal{I}_{1} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{1}{(\vec{k}^{2}+m^{2})^{1/2}} \, , \end{align} \begin{align} \mathcal{I}_{2} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{1}{(\vec{k}^{2}+m^{2})^{3/2}} \, . \end{align} To get the finite part we need to do two integrals: \begin{align} \label{int1} \mathcal{I}_{f1} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{6m^{4}}{\theta^{3}(\theta^{2}-m^{2})} \, , \end{align} \begin{align} \label{int2} \mathcal{I}_{f2}= \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{6m^{4}(14\theta^{2}-8m^{2})}{\theta^{3}(\theta^{2}-m^{2})(\theta^{2}-4m^{2})} \, .
\end{align} The angular part of these integrals is trivial; the only difficult part is the radial integration. One important thing to remember is that the fields that generate such propagators have fixed boundary conditions to give the right vacuum projection. The boundary conditions fix the prescription to pass through the poles, and this means that the denominators in Eq.~(\ref{int1}) and Eq.~(\ref{int2}) carry a $-m^{2}+i\epsilon$ term. This information matters only for the poles inside the domain of integration. The next step to solve these integrals is to change to a dimensionless variable: \begin{align} \theta = \omega m \, . \end{align} Doing this transformation, it is straightforward to find the solution for these integrals (the radial integration runs over $\omega \geq 1$, since $\theta^{2}=\vec{k}^{2}+m^{2}$): \begin{align} \mathcal{I}_{f1} = \frac{3m^{2}}{\pi^{2}} \int_{1}^{\infty} \dd{\omega} \frac{1}{\omega^{2}(\omega^{2}-1)^{1/2}}= \frac{3m^{2}}{\pi^{2}} \, , \end{align} \begin{align} \mathcal{I}_{f2}= \lim_{\epsilon \rightarrow 0} \frac{3m^{2}}{\pi^{2}} \int_{1}^{\infty} \dd{\omega} \frac{(14\omega^{2}-8)}{\omega^{2}(\omega^{2}-1)^{1/2}(\omega^{2}-4+i\epsilon)}= \end{align} \begin{align} \nonumber =\lim_{\epsilon \rightarrow 0} \frac{6m^{2}}{\pi^{2}} \left[ \frac{4i \sqrt{-3+i\epsilon}\sqrt{4i+\epsilon}+\sqrt[4]{-1}(24-7i\epsilon)\sinh^{-1}(\sqrt[4]{-1}\sqrt{4i+\epsilon})}{\sqrt{i\epsilon-3} (\epsilon +4i)^{3/2}} \right] \, . \end{align} Taking the $\epsilon \rightarrow 0$ limit we get: \begin{align} \mathcal{I}_{f2}= \frac{6m^{2}}{\pi^{2}} +\frac{3im^{2} \sqrt{3}}{\pi} - \frac{3m^{2}\sqrt{3}}{\pi^{2}} \ln(\frac{2+\sqrt{3}}{2-\sqrt{3}}) \, . \end{align} Thus, the full two-point function, Eq.~(\ref{samepoint}), is: \begin{align} G(\vec{x},\vec{x};\tau,\tau) = \frac{1}{2}\mathcal{I}_{1} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}\left( \mathcal{I}_{2}+\frac{1}{2\pi^{2}} \right) - \frac{6m^{2}u^{4}}{(1+u^{2})^{4}} F \, , \end{align} where we use the notation from~\cite{loop1}: \begin{align} F = \frac{\sqrt{3}}{2\pi^{2}} \left( \ln(\frac{2+\sqrt{3}}{2-\sqrt{3}}) - i \pi \right) \, . \end{align} An important point is that the $\epsilon$ limit is evaluated from the right, such that these expressions make sense. This result is in accordance with~\cite{loop1}. Now we need to renormalize the theory. We could use dimensional regularization to extract only the divergent part of the integrals via minimal subtraction; however, it is simpler to absorb the whole divergent part, using the same scheme as in~\cite{loop1}. To visualize the renormalization better, we can re-write the equation of motion using: \begin{align} \lambda \rightarrow \lambda_{R} + \delta \lambda \, , \end{align} \begin{align} m^{2} \rightarrow m^{2}_{R} + \delta m^{2} \, , \end{align} where the corrections start at order $\epsilon$, being a power series in this variable.
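As a quick numerical cross-check of the finite radial integrals above (not part of the original derivation), the following Python sketch, assuming NumPy and SciPy are available, verifies that $\int_{1}^{\infty}\dd{\omega}\,\omega^{-2}(\omega^{2}-1)^{-1/2}=1$, so that $\mathcal{I}_{f1}=3m^{2}/\pi^{2}$:
\begin{verbatim}
# Numerical sanity check of the radial integral entering I_f1.
# Sketch only: assumes numpy/scipy; variable names are illustrative.
import numpy as np
from scipy.integrate import quad

integrand = lambda w: 1.0 / (w**2 * np.sqrt(w**2 - 1.0))
value, error = quad(integrand, 1.0, np.inf)
print(value)  # ~1.0, hence I_f1 = 3 m^2 / pi^2
\end{verbatim}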
To see that these two constants absorb the divergences, we insert the two-point function back into the equation of motion Eq.~(\ref{loopexp}) to get: \begin{align} \label{qqtofazendo} \left[ (-\partial_{\tau}^{2}+m^{2})\phi_{0}(\tau) +\frac{\lambda}{3!}\phi_{0}^{3}(\tau)-\rho(\tau) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon} \left[ \left( \Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(\tau) \right) \phi_{1/2}(x) \right]+ \end{align} \begin{align} +\epsilon \left[ \left( -\partial_{\tau}^{2}+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(\tau) \right) \phi_{1}(\tau) + \frac{\lambda \phi_{0}}{2} \left( \frac{1}{2}\mathcal{I}_{1} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) - \frac{6m^{2}u^{4}}{(1+u^{2})^{4}} F \right) \right]+ \mathcal{O}(\epsilon^{3/2}) =0 \, . \end{align} Using the definition of the tree-level solution, Eq.~(\ref{unbroken}), the divergent part takes the distinctive forms: \begin{align} \phi_{0} \left( \frac{\lambda \mathcal{I}_{1}}{4} \right) \, , \end{align} \begin{align} \phi_{0}^{3} \left( -\frac{3\lambda^{2}}{48}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) \right) \, . \end{align} So we can see that a redefinition of the mass and coupling can absorb these divergences. We write these constants as: \begin{align} m^{2} = m_{R}^{2} +\epsilon \delta m^{2}_{1} + \mathcal{O}(\epsilon^{2}) \, , \end{align} \begin{align} \lambda = \lambda_{R} +\epsilon \delta \lambda_{1}+ \mathcal{O}(\epsilon^{2}) \, . \end{align} Using these definitions in the equation of motion, Eq.~(\ref{qqtofazendo}), we get: \begin{align} \left[ (-\partial_{\tau}^{2}+m^{2}_{R})\phi_{0}(\tau) +\frac{\lambda_{R}}{3!}\phi_{0}^{3}(\tau)-\rho(\tau) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon} \left[ \left( \Box+m^{2}_{R} + \frac{\lambda_{R}}{2}\phi_{0}^{2}(\tau)\right) \phi_{1/2}(x) \right]+ \end{align} \begin{align} \nonumber +\epsilon \left[ \left( -\partial_{\tau}^{2}+m^{2}_{R} + \frac{\lambda_{R}}{2}\phi_{0}^{2}(\tau) \right) \phi_{1}(\tau) + \phi_{0} \left( \delta m^{2}_{1} + \frac{\lambda_{R}\mathcal{I}_{1}}{4} \right) + \phi_{0}^{3} \left( \frac{\delta\lambda_{1}}{3!}-\frac{\lambda_{R}^{2}}{16}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) \right) - \frac{\lambda_{R}\phi_{0}}{2}\frac{6m^{2}_{R}u^{4}}{(1+u^{2})^{4}} F \right]+ \end{align} \begin{align} \nonumber + \mathcal{O}(\epsilon^{3/2}) =0 \, . \end{align} The obvious choice for the counterterms is: \begin{align} \delta m^{2}_{1} =- \frac{\lambda_{R}\mathcal{I}_{1}}{4} \, , \end{align} \begin{align} \frac{\delta\lambda_{1}}{3!} =\frac{\lambda_{R}^{2}}{16}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) \, . \end{align} With this renormalization we can proceed to solve the one-loop generator of the amplitudes. The tree-level solution stays the same as before, except for the replacement of bare by renormalized quantities. The one-loop equation simplifies to: \begin{align} \left( -\partial_{\tau}^{2}+m^{2}_{R} + \frac{\lambda_{R}}{2}\phi_{0}^{2}(\tau) \right) \phi_{1}(\tau) = \frac{6m_{R}^{2}u^{4}}{(1+u^{2})^{4}}F \frac{\lambda_{R}\phi_{0}}{2} \, . \end{align} Writing the equation in terms of $u(\tau)=e^{m_{R}\tau}$ we get: \begin{align}\label{qqto} \left( -\partial_{\tau}^{2}+m^{2}_{R} -\frac{24m_{R}^{2}u^{2}}{(1+u^{2})^{2}} \right) \phi_{1}(\tau) = 12 i m_{R}^{3} F \sqrt{3\lambda_{R}} \frac{u^{5}}{(1+u^{2})^{5}} \, .
\end{align} From the form of the right-hand side of Eq.~(\ref{qqto}) we can look for solutions of the form: \begin{align} \phi_{1}(\tau) = \alpha \frac{u^{5}}{(1+u^{2})^{3}} \, , \end{align} and confirm by direct computation that this is a solution if: \begin{align} \alpha = -\frac{iF \lambda_{R}}{8} \sqrt{\frac{48m_{R}^{2}}{\lambda_{R}}} \, . \end{align} Having this solution, we can analytically continue back to real time using the relation Eq.~(\ref{rot1}), with the renormalized mass inside $z(t)$. This gives us: \begin{align} \phi_{1}(t) =- \frac{F \lambda_{R}}{8} \left( \frac{\lambda_{R}}{48m_{R}^{2}} \right)^{2} \frac{z^{5}}{(1-\frac{\lambda_{R}}{48m_{R}^{2}}z^{2})^{3}} \, . \end{align} Then the full contribution to the generator of the amplitudes at one loop is: \begin{align} \phi_{0}(t)+\phi_{1}(t)=\phi_{0+1}(t) = \frac{z}{(1-\frac{\lambda_{R}}{48m_{R}^{2}}z^{2})} \left[ 1 - \epsilon \frac{F\lambda_{R}}{8} \frac{(\frac{\lambda_{R}}{48m_{R}^{2}})^{2}z^{4}}{(1-\frac{\lambda_{R}}{48m_{R}^{2}}z^{2})^{2}} \right] \, . \end{align} To calculate the amplitude from this solution we just need to differentiate it $n$ times with respect to $z(t)$, following Eq.~(\ref{amp1}): \begin{align} \label{ampun} \mathcal{A}_{1loop} (1 \rightarrow 2k+1) = (2k+1)! \left( \frac{\lambda_{R}}{48m_{R}^{2}} \right)^{k} \left[ 1-\frac{F \lambda_{R} k(k-1)}{16} \right] \, , \end{align} where we set $\epsilon$ to one. This expression was first obtained in~\cite{loop1}. Here we can see that the factorial growth persists. The next correction only makes things worse, which can be read as an indication that we are using the wrong approximation for this regime: in a large-$k$ approximation of the amplitude, we would expect the corrections to be of order $1/k$. The true object that needs to be small for this approximation to be useful is $\lambda_{R}k^{2}$; in the regime where it is small, the expression is well defined. It is not known how far we can trust this expression outside this regime, even though we arrived at it using only that $\lambda_{R}$ is small. This is because, for large $n$, the loop correction becomes larger than the tree-level one, signaling that the approximation is not good. We discuss this further in the context of a simpler toy model at the end of chapter~\ref{c3}. For now, let us see if this behavior is the same in the broken phase of this theory. \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Broken Phase} In the case of broken reflection symmetry we need to be more careful with the renormalization, because the shift done at tree level in Eq.~(\ref{shift}) is not the appropriate one. We use the shift in the variables done before, Eq.~(\ref{shift}), and when we get to the renormalization we will make the appropriate adjustments, as in~\cite{Smith-apr-93}. Aside from this, everything else is very similar to the unbroken case. We expand the $\hat{\sigma}$ operator: \begin{align} \hat{\sigma}(x) = \sigma_{0} \hat{I} + \sqrt{\epsilon} \hat{\sigma}_{1/2}(x) +\epsilon \hat{\sigma}_{1}(x) + \mathcal{O}(\epsilon^{3/2}) \, .
\end{align} The equation of motion, Eq.~(\ref{eqbro}), using this expansion is: \begin{align} \left[ (\Box + M^{2})\sigma_{0}(x) + \frac{\lambda \phi_{min}}{2} \sigma_{0}^{2}(x) + \frac{\lambda}{3!}\sigma_{0}^{3}(x) - \rho(x) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon} \left[ \left( \Box + M^{2} + \lambda\phi_{min} \sigma_{0}(x)+ \frac{\lambda}{2}\sigma_{0}^{2}(x) \right)\sigma_{1/2}(x) \right]+ \end{align} \begin{align} + \epsilon \left[ \left( \Box + M^{2} + \lambda\phi_{min} \sigma_{0}(x)+ \frac{\lambda}{2}\sigma_{0}^{2}(x) \right) \sigma_{1}(x) + \frac{\lambda}{2}(\phi_{\min}+\sigma_{0})\ev{T\left( \hat{\sigma}_{1/2}(x)\hat{\sigma}_{1/2}(x) \right)}{\Omega}_{\rho} \right] + \mathcal{O}(\epsilon^{3/2}) =0 \, , \end{align} where $\phi_{min}^{2}= \frac{3M^{2}}{\lambda}$ and $M= \sqrt{2}\mu$ is the mass of the excitation. We are interested in solving the one-loop contribution at threshold. Just like before, the tree level is already solved once we ask for a homogeneous solution, and the next order only gives the information about the two-point function that enters the one-loop equation. The tree-level solution is the one that we calculated before: \begin{align} \sigma_{0}(t)= \frac{z(t)}{1- \frac{z(t)}{2\phi_{\min}}} \, . \end{align} The operator that we need to invert to find the two-point function is now: \begin{align} \diamondsuit_{b} = \Box + M^{2} +\lambda\phi_{min} \sigma_{0} + \frac{\lambda\sigma_{0}^{2}}{2} \, . \end{align} Using the solution $\sigma_{0}(t)$ we run into the same problem of non-Hermiticity of the operator. Doing the same steps as before, we perform a Wick rotation taking care of the pole on the Euclidean line: \begin{align} -\frac{z(t)}{2\phi_{min}} = u(\tau) = e^{M\tau} \, , \end{align} \begin{align} \tau = it + i \frac{\pi}{M} +\frac{1}{M} \ln(\frac{z_{0}}{2\phi_{min}}) \, . \end{align} Doing this change of variables, the operator that we want to invert becomes: \begin{align} \diamondsuit_{b} = -\partial_{\tau}^{2} -\nabla^{2} + M^{2} -2\phi_{min}^{2} \lambda \frac{u}{(1+u)^{2}} \, . \end{align} We can re-write the last term in a familiar form: \begin{align} \diamondsuit_{b} = -\partial_{\tau}^{2} -\nabla^{2} + M^{2} - \frac{3M^{2}}{2} \sech^{2}(\frac{M\tau}{2}) \, . \end{align} Now the steps are very similar: we do a partial Fourier transform in the spatial part. The remaining operator that we need to invert is almost what we had before: \begin{align} \left( -\partial_{\tau}^{2} +\theta^{2} - \frac{3M^{2}}{2} \sech^{2}(\frac{M\tau}{2})\right) G_{\theta}(\tau,\tau') = \delta(\tau-\tau') \, , \end{align} where $\theta^{2}=\vec{k}^{2}+M^{2}$. Doing a change of variables to a dimensionless one, we can cast the functions in a known form: \begin{align} \frac{M \tau}{2} =\xi \, , \end{align} \begin{align} \left( -\frac{M^{2}}{4}\partial_{\xi}^{2} +\theta^{2}-\frac{3M^{2}}{2} \sech^{2}(\xi) \right) f(\xi)=0 \, . \end{align} This is almost the equation that we had before, Eq.~(\ref{prob1}), just re-scaling $\theta$: \begin{align} \theta \rightarrow 2\theta \, . \end{align} In terms of $u(\tau)$ the solution changes the power because of the definition of $\xi$: \begin{align} u^{2} \rightarrow u \, .
\end{align} We already solved these equations in the last section, so the same-point Green function before integrating is: \begin{align} G_{\theta}(\tau,\tau) = G_{\theta}(\tau,\tau)^{div} + G_{\theta}(\tau,\tau)^{fin} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{div} = \frac{1}{4\theta} + \frac{3M^{2}}{\theta^{3}} \frac{u}{(1+u)^{2}} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{fin} = \frac{3M^{4}}{4\theta^{3}(4\theta^{2}-M^{2})}\frac{u}{(1+u)^{4}} + \frac{3M^{4}(56\theta^{2}-8M^{2})}{4\theta^{3}(4\theta^{2}-M^{2})(4\theta^{2}-4M^{2})}\frac{u^{2}}{(1+u)^{4}} + \end{align} \begin{align} \nonumber +\frac{3 M^{4}}{4 \theta^{3}(4\theta^{2}-M^{2})} \frac{u^{3}}{(1+u)^{4}} \, . \end{align} The same definitions of the divergent integrals $\mathcal{I}_{1}$ and $\mathcal{I}_{2}$ are used to write the divergent part of the two-point function as: \begin{align} G(\vec{x},\vec{x};\tau,\tau)^{div} = \frac{1}{4}\mathcal{I}_{1} + \frac{u}{(1+u)^{2}} \frac{3M^{2} \mathcal{I}_{2}}{4} \, . \end{align} For the finite part, we need to solve the following two integrals: \begin{align} \mathcal{I}_{f3}= \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{1}{\theta^{3}(4\theta^{2}-M^{2})} \, , \end{align} \begin{align} \mathcal{I}_{f4}= \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{56\theta^{2}-8M^{2}}{\theta^{3}(4\theta^{2}-M^{2})(4\theta^{2}-4M^{2})} \, . \end{align} This time there is no pole inside the domain of integration, so we do not need to worry about the Feynman prescription. The finite part in terms of these integrals is: \begin{align} G(\vec{x},\vec{x};\tau,\tau)^{fin} = \frac{3M^{4}}{4} \left( \mathcal{I}_{f3} \frac{u +u^{3}}{(1+u)^{4}} + \mathcal{I}_{f4} \frac{u^{2}}{(1+u)^{4}} \right) \, . \end{align} The integrals can be solved directly; in both cases we use the dimensionless variable $\theta=\omega M$: \begin{align} \mathcal{I}_{f3} = \frac{1}{M^{2}} \left( \frac{6 - \sqrt{3}\pi}{12\pi^{2}} \right) \, , \end{align} \begin{align} \mathcal{I}_{f4} = \frac{1}{M^{2}} \left( \frac{6 + \sqrt{3} \pi}{6\pi^{2}} \right) \, . \end{align} Using this information we can construct the two-point function, written in a form that will be convenient to interpret during the renormalization: \begin{align} G(\vec{x},\vec{x};\tau,\tau) = \frac{1}{4} \mathcal{I}_{1} + \frac{u}{(1+u)^{2}} \left( \frac{3M^{2}}{4} \mathcal{I}_{2} + \frac{3 M^{2}}{8\pi^{2}} - \frac{M^{2} \sqrt{3}}{16\pi} \right) + \frac{M^{2} \sqrt{3}}{4\pi} \frac{u^{2}}{(1+u)^{4}} \, . \end{align} We now need to deal with the divergent part. We can absorb it in the renormalization of the constants: \begin{align} M^{2} = M_{R}^{2} +\epsilon \delta M^{2}_{1} + \mathcal{O}(\epsilon^{2}) \, , \end{align} \begin{align} \lambda = \lambda_{R} + \epsilon \delta \lambda_{1} + \mathcal{O}(\epsilon^{2}) \, . \end{align} To do the renormalization properly, let us focus only on the contribution to the one-loop equation, which is of the form: \begin{align} \label{nacaba} \frac{\lambda}{2} (\phi_{min}+\sigma_{0}) G(\vec{x},\vec{x};\tau,\tau) \, . \end{align} As we said before, it is better to work with the unshifted field, because this shift receives quantum corrections.
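As in the unbroken case, the closed forms of $\mathcal{I}_{f3}$ and $\mathcal{I}_{f4}$ can be cross-checked numerically. The Python sketch below (assuming NumPy and SciPy; not part of the thesis computations) reduces both integrals to their radial form with $\theta=\omega M$, $M=1$, and substitutes $\omega=\cosh s$ to remove the integrable endpoint singularity:
\begin{verbatim}
# Numerical check of I_f3 and I_f4 against their closed forms (M = 1).
# Sketch only: assumes numpy/scipy; omega = cosh(s) regularizes the endpoint.
import numpy as np
from scipy.integrate import quad

f3 = lambda s: np.sinh(s)**2 / (np.cosh(s)**2 * (4*np.cosh(s)**2 - 1))
f4 = lambda s: (56*np.cosh(s)**2 - 8) / (4*np.cosh(s)**2 * (4*np.cosh(s)**2 - 1))

I_f3 = quad(f3, 0, 30)[0] / (2*np.pi**2)
I_f4 = quad(f4, 0, 30)[0] / (2*np.pi**2)
print(I_f3, (6 - np.sqrt(3)*np.pi) / (12*np.pi**2))  # should agree
print(I_f4, (6 + np.sqrt(3)*np.pi) / (6*np.pi**2))   # should agree
\end{verbatim}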
Using the definition $\frac{\phi_{0}}{\phi_{\min}}= \gamma$ we can write Eq.~(\ref{nacaba}) in general as: \begin{align} \frac{\lambda \phi_{min}}{2} \gamma \left[ A - \frac{B}{4}(\gamma^{2}-1) +\frac{C}{16}(\gamma^{2}-1)^{2} \right] \, , \end{align} where the constants are: \begin{align} A= \frac{\mathcal{I}_{1}}{4} \, , \end{align} \begin{align} B = \frac{3M^{2}}{4} \mathcal{I}_{2} + \frac{3 M^{2}}{8\pi^{2}} - \frac{M^{2} \sqrt{3}}{16\pi} \, , \end{align} \begin{align} C = \frac{M^{2} \sqrt{3}}{4\pi} \, . \end{align} Only $A$ and $B$ contain divergences; the contribution to the one-loop equation is written as: \begin{align} \gamma \frac{\lambda \phi_{min}}{2} \left( A-\frac{B}{4} \right) - \frac{\lambda \phi_{min}}{2}\gamma^{3}B + \frac{\lambda \phi_{min}}{2} C \left( \frac{\gamma^{5}}{16}-\frac{\gamma^{3}}{8}+ \frac{\gamma}{16} \right) \, . \end{align} The renormalization scheme of~\cite{Smith-apr-93} absorbs the first two terms. Because we are using the unshifted field, it is easy to read off the renormalized mass and coupling. We just need to be careful because in the unshifted field the mass has the ``wrong'' sign: \begin{align} m^{2}= m_{R}^{2}+\epsilon \delta m_{1}^{2} + \mathcal{O}(\epsilon^{2}) \, , \end{align} \begin{align} \lambda= \lambda_{R} + \epsilon \delta \lambda_{1} + \mathcal{O}(\epsilon^{2}) \, . \end{align} Using this in the equation of motion, we see that: \begin{align} \delta m_{1}^{2} = \frac{\lambda_{R}}{8} \left(\mathcal{I}_{1} - M_{R}^{2}(\frac{3 \mathcal{I}_{2}}{4} + \frac{3}{8\pi^{2}} -\frac{\sqrt{3}}{16\pi}) \right) \, , \end{align} \begin{align} \frac{\delta \lambda_{1}}{3!} = \frac{\lambda_{R}^{2}}{48} \left( \frac{3\mathcal{I}_{2}}{2} +\frac{3}{4\pi^{2}} - \frac{\sqrt{3}}{16\pi} \right) \, . \end{align} In this renormalization scheme we now have to solve the equation for $\sigma_{1}$, where we shift by the renormalized couplings. The tree-level equation stays the same, and the one-loop equation becomes: \begin{align} \left( -\partial_{\tau}^{2} + M^{2}_{R} + \lambda_{R}\phi_{min}^{R} \sigma_{0}(\tau)+ \frac{\lambda_{R}}{2}\sigma_{0}^{2}(\tau) \right) \sigma_{1}(\tau) = -\frac{\lambda_{R} \phi_{min}^{R}M_{R}^{2}\sqrt{3}}{8\pi} \left( \frac{\gamma^{5}}{16}-\frac{\gamma^{3}}{8}+ \frac{\gamma}{16} \right) \, . \end{align} We can write everything in terms of $u(\xi)$, where $\xi=M_{R}\tau$ is the dimensionless time coordinate. The equation in these variables is: \begin{align} \label{eqeq} \left( \partial_{\xi}^{2} -1 + \frac{6u}{(1+u)} - \frac{6u^{2}}{(1+u)^{2}} \right)\sigma_{1}(\xi) = \frac{3M_{R}\sqrt{\lambda_{R}} u^{2}(1-u)}{8\pi(1+u)^{5}} \, . \end{align} To solve this equation we look for an ansatz of the form: \begin{align} \sigma_{1} = \alpha \frac{u^{2}}{(1+u)^{3}} \, . \end{align} Plugging this ansatz into Eq.~(\ref{eqeq}), it is immediate that $\alpha =\frac{M_{R}\sqrt{\lambda_{R}}}{8\pi}$. This means that the one-loop solution is: \begin{align} \sigma_{1} = \frac{M_{R}\sqrt{\lambda_{R}}}{8\pi} \frac{u^{2}}{(1+u)^{3}} \, . \end{align} The complete solution for the generator of the threshold amplitudes, tree level plus one loop, is then: \begin{align} \sigma_{0}+\sigma_{1}=\sigma_{0+1}(\tau)= \frac{-2\phi_{\min}^{R} u}{(1+u)} \left( 1+ \epsilon\frac{\lambda_{R}^{3/2}}{96\pi M_{R}}\left(-2\phi_{min}^{R} \frac{u}{(1+u)^{2}}\right) \right) \, .
\end{align} With this solution we can analytically continue to the whole complex $\tau$ plane to get the answer in real time, in terms of $z(t)$: \begin{align} \sigma_{0+1}(t) = \frac{z}{1-\frac{z}{2\phi_{\min}^{R}}} \left(1 + \epsilon\frac{\lambda_{R}^{3/2}}{96\pi M_{R}} \frac{z}{(1-\frac{z}{2\phi_{min}^{R}})^{2}}\right) \, . \end{align} Having this solution, we can find the one-loop amplitude at threshold by taking $n$ derivatives with respect to $z(t)$, using Eq.~(\ref{ampbroken1}) (and setting $\epsilon$ to one): \begin{align} \label{ampbro} \mathcal{A}_{1loop}^{B}(1 \rightarrow n) = n! \left( \frac{1}{2\phi_{min}^{R}}\right)^{n-1}\left(1+\frac{\lambda_{R}\sqrt{3}}{96\pi} n(n-1)\right) \, . \end{align} We can see that this contribution is real, unlike the unbroken case, Eq.~(\ref{ampun}). It has a different sign, and it still shows factorial growth already at first order. It is clear that the important coupling is, in this case, $\lambda n^{2}$: this is the object that needs to be small for the approximation to make sense. This is similar to the case before, so in a high multiplicity limit it is not straightforward to recover information from these initial terms. This discussion will be continued at the end of chapter~\ref{c3}, where we discuss the range of validity of perturbation theory for high multiplicity calculations. If we want to understand this better, we need to try to recover the momentum dependence of these amplitudes; without it, we cannot say anything meaningful about the decay rate. It is worth pointing out that all the results so far are exact in 0 spatial dimensions, i.e., in ordinary quantum mechanics. There is no phase space in this case, and we still have the factorial growth of these amplitudes. This could be problematic for the unitarity of the theory if these expressions can be trusted in the region of interest. Next, we show a possible way to understand why this factorial growth happens in the perturbative regime. After that, we start to investigate how we can recover the momentum dependence, working out some simple cases and trying to generalize to high multiplicity. In the end, we review some general results in the literature about this regime, so we can start the exploration of the Higgsplosion proposal. \subsection{Discussion About the Factorial Growth} Usually, the series that we deal with in Quantum Mechanics and Quantum Field Theory are divergent. This divergence is not a problem in itself, because we know how to deal with this kind of series using resummation methods such as Borel or Pad\'e~\cite{bender1,bender2}. To understand this other kind of divergence, in the amplitude, we need to understand why the series is divergent in the first place. The analysis is done for $\lambda x^{4}$ in Quantum Mechanics, but it generalizes to the Field Theory case. There is a connection between the graphs with $N$ vertices and the coefficients of a given series: \begin{align} \sum_{\text{graphs}} \text{all graphs with $N$ vertices} = a_{N} \, . \end{align} Now, if we have $N$ vertices in $x^{4}$ theory, the structure is as represented in Figure~\ref{xx}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{combi} \caption{Combinatorics for the $\phi^{4}$ interaction.} \label{xx} \end{figure} We have to connect all these vertices. In a usual process we also have to connect some external lines, but they are only a finite number, so we can ignore them in the large-$N$ limit.
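Anticipating the counting completed in the next paragraph, the following minimal Python sketch (names illustrative, not part of the original argument) evaluates the resulting coefficient estimate $a_{N}\sim(4N-1)!!/(N!\,24^{N})$ and shows that the successive ratios $a_{N+1}/a_{N}$ grow linearly in $N$, which is the signature of factorial growth:
\begin{verbatim}
# Growth of the graph-counting estimate a_N = (4N-1)!! / (N! 24^N).
# Ratios a_{N+1}/a_N growing linearly in N signal N!-type divergence.
from math import prod, factorial

def a(N):
    dfact = prod(range(1, 4*N, 2))          # (4N-1)!! over odd integers
    return dfact / (factorial(N) * 24**N)

for N in (5, 10, 20, 40):
    print(N, a(N + 1) / a(N))               # grows roughly like 2N/3
\end{verbatim}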
If we pick one of the lines, we have $(4N-1)$ possible connections; for the next line we have $(4N-3)$. Going all the way we have: \begin{align} a_{N} \sim (4N-1)!! \quad \text{as} \quad N \rightarrow \infty \, . \end{align} In this counting we are overcounting: the $N!$ different orderings of the vertices give the same graph. We are still overcounting because the four lines on a vertex are identical, giving a factor of $4!$ per vertex. In the end, we can write the coefficient of the series as: \begin{align} a_{N} \sim \frac{(4N-1)!!}{N! (24)^{N}} \quad \text{as} \quad N \rightarrow \infty \, . \end{align} It could be that some of these graphs are disconnected and do not contribute to physical processes, but in this counting the chance of that happening is small, so we do not consider it; in the large-$N$ limit it makes no difference. Now, in this limit the coefficient of the series behaves like: \begin{align} a_{N} \sim N! C^{N} \quad \text{as} \quad N \rightarrow \infty \, . \end{align} The factorial growth of the coefficients indicates the divergent nature of the series. This is just a heuristic argument, but it captures the spirit of what is happening; the loop integrals could in principle give small contributions that change this picture. The series diverges because at small $N$ the small coefficients dominate, but at large $N$ the number of diagrams becomes so large that, no matter how small the coefficient, the terms blow up. The behavior of a typical divergent series can be seen in Figure~\ref{divergent}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{divergent} \caption{Typical behavior of a divergent series. The optimal truncation of a series like this is at its lowest point, by the definition of an asymptotic series.} \label{divergent} \end{figure} Now we can see why these amplitudes grow factorially: we have too many diagrams for large multiplicity amplitudes already at tree level, as shown in Figure~\ref{fig:1to5} and Figure~\ref{fig:1to7}. There is no small contribution that wins in the large-$n$ limit. The picture is significantly different in this case because there is never a region where the small coefficients dominate. This means that we can still extract information from the series, but any partial sum will give a bad approximation to the function. If we apply resummation methods to the loop corrections, we recover the true behavior of these amplitudes. In this framework we cannot trust the perturbative expressions even at tree level, but they still carry information about the actual function. In high multiplicity processes not only $\lambda$ matters but $n$ as well. This argument does not hold for different kinds of approximations, like semiclassical calculations, but in those cases it should be possible to do a similar analysis to understand the limitations of the results. \section{Investigation of Beyond Threshold Amplitude} So far we have only calculated threshold amplitudes. With this information alone we cannot reconstruct the decay rate, because at threshold the phase space is just a point. Nevertheless, this result is exact in 0 spatial dimensions, the usual $x^{4}$ theory. If we want to see whether the decay rate grows exponentially at high multiplicity, we need to analyze whether the phase-space contribution has enough strength to combat the factorial growth of Eq.~(\ref{ampun}) and Eq.~(\ref{ampbro}). Going beyond threshold at high multiplicity is an incredibly difficult task.
There are too many momenta in the final state. To sidestep this problem, we can try to construct these amplitudes in the near-threshold limit, where all final particles are non-relativistic. In this Section, we try to generalize Brown's method to beyond-threshold amplitudes and discuss the difficulties of doing so. After that, we use Feynman rules at small multiplicity to understand what we should expect in the near-threshold limit. Finally, we comment on some results in the literature on beyond-threshold amplitudes and what we can take from them. \subsection{Naive Generalization of Brown's Method and its Problems} We saw that Brown's method~\cite{Brown-nov-92} of using the expectation value of the field with respect to a source works well for the threshold amplitude. If we want to generalize this, a first naive attempt is a source of the type: \begin{align} \label{salsa2} \rho(x_{\mu}) = \rho_{0} e^{ik_{\mu}x^{\mu}} \, . \end{align} In the limit $\vec{k} \rightarrow 0$ we recover the previous solution. If we use this in the tree-level solution, we need to solve the equation: \begin{align} (\Box + m^{2}) \phi_{0} + \frac{\lambda}{3!} \phi_{0}^{3} = \rho_{0} e^{ik_{\mu}x^{\mu}} \, . \end{align} It turns out that we can solve this equation in the same way as in the threshold case. The only difference is that the mass-shell condition is now for the four-momentum. The solution is similar, because: \begin{align} \Box e^{n i k_{\mu}x^{\mu}} = -n^{2} k^{2} e^{n i k_{\mu} x^{\mu}} \, . \end{align} Then, in the on-shell limit, we get the same result as before: \begin{align} \phi_{0}(x)= \frac{z(x_{\mu})}{\left( 1-\frac{\lambda}{48m^{2}}z(x_{\mu})^{2}\right)} \, , \end{align} where $z(x_{\mu})$ is the analogue of Eq.~(\ref{zdete}): \begin{align} z(x_{\mu})= z_{0}e^{i k_{\mu} x^{\mu}} \, , \end{align} \begin{align} z_{0} = -\frac{\rho_{0}}{k^{2}-m^{2}} \, . \end{align} This is strange, because in the LSZ reduction formula we get something like: \begin{align} \fdv{\phi_{0}[\rho]}{\rho(x_{i})}= \frac{1}{p^{2}_{i}-m^{2}} \delta^{4}(x-x_{i}) \pdv{\phi_{0}}{z} \, . \end{align} The problem is that, in the end, we set $\rho$ to zero and all the momentum dependence vanishes, obtaining a threshold result again. Even though this is a solution of the equation of motion with a space-time dependent source, it is not enough to find beyond-threshold amplitudes. It seems that this source, Eq.~(\ref{salsa2}), can only excite a single frequency mode and does not carry enough information about the field to construct beyond-threshold amplitudes: \begin{align} \int \dd[4]{x} \phi_{0}(x)\rho(x) \propto \phi_{0}(k) \, . \end{align} The alternative is to find a more comprehensive source that we can still solve, that has the right threshold limit, and that admits at least some non-relativistic generalization. This turns out to be a hard task because of the non-linearity, and for now we cannot advance further. Before proceeding to the next part, it is worth pointing out one thing. The limit $z=0$ in the LSZ reduction formula may appear strange and, in this case, responsible for the vanishing of the momentum dependence. If we do this computation with care, we see that in the double limit $k^{2} \rightarrow m^{2}$ and $\rho \rightarrow 0$ we are left with a constant $z_{0}$ term, so it is possible to try instead $z=z_{0}e^{ik_{\mu}x^{\mu}}$.
However, if we do all the work before going to the mass shell, the expression for $z(x)$ is finite, and we can take the limit of the source going to zero there, taking $z_{0}$ to zero first. The order of these limits is essential, and in the LSZ reduction formula the $\rho \rightarrow 0$ limit comes first, so this should not alter the results. \subsection{Tree Level Investigation of Beyond Threshold ($1 \to 3$)} If we want to recover the momentum dependence of a high multiplicity amplitude, it is worth calculating simpler cases first. The region of interest has all external particles in the non-relativistic limit. Here we work out the first non-trivial case, 1 particle going to 3 with non-relativistic momenta. This case is useful to check, now using Feynman diagrams, the threshold computation done in the previous Section, and to see the shape of the first momentum correction. The process that we are interested in is represented in Figure~\ref{1t3}. \begin{figure}[h!] \centering \includegraphics[width=8cm]{1to3} \caption{Amputated amplitude for the $1 \to 3$ process. Generated with FeynArts~\cite{feyart}.} \label{1t3} \end{figure} Here we have to remember that we do not amputate the incoming leg, as we usually would. The amplitude is then: \begin{align} \mathcal{M}(1 \rightarrow 3) = (p^{2}-m^{2})\mathcal{A}(1 \rightarrow 3) \, , \end{align} where $\mathcal{M}(1 \rightarrow 3)$ is constructed from Feynman diagrams and $p_{\mu}$ is the momentum of the incoming particle. Using the usual Feynman rule for $\phi^{4}$ theory, represented in Figure~\ref{fe}, \begin{figure}[h!] \centering \includegraphics[width=8cm]{feynrule} \caption{Feynman rule for the $\phi^{4}$ case in the normalization that we are using. Generated with FeynArts~\cite{feyart}.} \label{fe} \end{figure} we get the amplitude: \begin{align} \label{ampamp} \mathcal{A}(1 \rightarrow 3) = \frac{\lambda}{p^{2}-m^{2}} \, . \end{align} To investigate the non-relativistic limit of this expression we need to expand $p^{2}$ appropriately. Calling the outgoing momenta $q_{i}$, in the non-relativistic limit we have $\abs{\vec{q}_{i}} \ll m$. The incoming momentum is: \begin{align} p = q_{1}+q_{2}+q_{3} \, , \end{align} where we write each external momentum as: \begin{align} q_{i} = (\gamma_{i} m, \vec{q}_{i}) \, . \end{align} In the non-relativistic limit we expand the first component as: \begin{align} q_{i} = (m + \frac{\vec{q}_{i}^{\hspace{0.1cm}2}}{2m} - \frac{\vec{q}_{i}^{\hspace{0.1cm}4}}{8m^{3}} + \dots, \vec{q}_{i}) \, . \end{align} This means that the denominator of Eq.~(\ref{ampamp}) can be written as: \begin{align} (q_{1}+q_{2}+q_{3})^{2}-m^{2}= 2 \left( m^{2} + q_{1}\cdot q_{2} + q_{1}\cdot q_{3}+ q_{2}\cdot q_{3} \right)= \end{align} \begin{align} \nonumber 2 \Bigg( 4m^{2} + \vec{q}^{\hspace{0.1cm}2}_{1} +\vec{q}^{\hspace{0.1cm}2}_{2} +\vec{q}^{\hspace{0.1cm}2}_{3} - \vec{q}_{1}\vec{q}_{2} - \vec{q}_{1}\vec{q}_{3} - \vec{q}_{2}\vec{q}_{3} + \end{align} \begin{align} \nonumber +\frac{1}{4m^{2}} ( \vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{2} +\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{3}+\vec{q}^{\hspace{0.1cm}2}_{2}\vec{q}^{\hspace{0.1cm}2}_{3}-\vec{q}^{\hspace{0.1cm}4}_{1}-\vec{q}^{\hspace{0.1cm}4}_{2}-\vec{q}^{\hspace{0.1cm}4}_{3}) + \dots \Bigg) \, .
\end{align} The interesting feature of this expression, which persists in the high multiplicity cases, is that at leading order we can write it in terms of the non-relativistic energy of the outgoing particles in a general frame: \begin{align} \label{nonenergy} E = \frac{1}{2m} \sum \vec{q}_{i}^{\hspace{0.1cm}2} -\frac{1}{2mn} \left( \sum \vec{q}_{i} \right)^{2} = \frac{n-1}{2mn} \sum \vec{q}_{i}^{\hspace{0.1cm}2} - \frac{1}{nm} \sum_{i < j} \vec{q}_{i}\vec{q}_{j} \, . \end{align} Using Eq.~(\ref{nonenergy}) the denominator of Eq.~(\ref{ampamp}) can be written as: \begin{align} \label{caa} (q_{1}+q_{2}+q_{3})^{2}-m^{2}= 8m^{2}+6mE +\frac{1}{2m^{2}} ( \vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{2} +\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{3}+\vec{q}^{\hspace{0.1cm}2}_{2}\vec{q}^{\hspace{0.1cm}2}_{3}-\vec{q}^{\hspace{0.1cm}4}_{1}-\vec{q}^{\hspace{0.1cm}4}_{2}-\vec{q}^{\hspace{0.1cm}4}_{3}) + \dots \, . \end{align} The first important thing to notice is that at the next order we cannot write the denominator in terms of $E$ alone. This is easy to see because $E^{2}$ would contain odd powers of the individual momenta, which do not appear in Eq.~(\ref{caa}). This feature persists when we increase the number of external particles, although in the large-$n$ limit the problem disappears, leaving only even powers of momentum in Eq.~(\ref{nonenergy}). If we write the amplitude in an expansion of small spatial momenta (small $E$) we get: \begin{align} \label{ampi} \mathcal{A}(1 \rightarrow 3) = \frac{\lambda}{8m^{2}} \Bigg( 1-\frac{3}{4} \frac{E}{m} + \frac{9}{8} \frac{E^{2}}{m^{2}} - \end{align} \begin{align} \nonumber -\frac{1}{8m^{4}}( -\vec{q}^{\hspace{0.1cm}3}_{1}\vec{q}_{2}-\vec{q}^{\hspace{0.1cm}3}_{1}\vec{q}_{3}-\vec{q}^{\hspace{0.1cm}3}_{3}\vec{q}_{1}-\vec{q}^{\hspace{0.1cm}3}_{2}\vec{q}_{1}-\vec{q}^{\hspace{0.1cm}3}_{2}\vec{q}_{3}-\vec{q}^{\hspace{0.1cm}3}_{3}\vec{q}_{2} + 2\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{2}+2\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{3}+2\vec{q}^{\hspace{0.1cm}2}_{2}\vec{q}^{\hspace{0.1cm}2}_{3} )+ \dots \Bigg) \, . \end{align} From this result we can see that, for small momenta, the first correction depends only on the non-relativistic energy. If we go to higher orders in the momentum expansion, it is expected that we cannot describe the system with $E$ alone, and other invariants will play an important role. From Eq.~(\ref{nonenergy}) we can see that these other objects are subdominant; only the non-relativistic energy dominates. However, even with this expression, we cannot say much about the decay rate. If we are working in a limit where the kinetic energy is small, we only cover a shell of the phase space, as represented in Figure~\ref{deca}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{decayratehavewant} \caption{Difference between the decay rate that we can get using a non-relativistic approximation and what it would be useful to have.} \label{deca} \end{figure} This means that the decay rate can only be calculated on this shell, which is smaller than $2m$. For a good approximation of the decay rate, we would expect to be able to calculate in a broader region. The limited validity of this expression does not help us much in understanding the behavior of the decay rate, but it is a step in this direction. Next, we do the same calculation for the $1 \to 5$ case, where we start to see a trend. One thing that we can take from this is that the threshold computation works, being the first term of Eq.~(\ref{amp1}).
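The expansion in Eq.~(\ref{ampi}) can be verified directly. The Python sketch below (assuming NumPy; names and the random momenta are illustrative) compares the exact propagator $\lambda/(p^{2}-m^{2})$ against the first-order non-relativistic approximation, with $E$ computed from Eq.~(\ref{nonenergy}) for $n=3$ and $\lambda=m=1$:
\begin{verbatim}
# Check of the non-relativistic expansion of A(1 -> 3), Eq. (ampi).
# Sketch: lambda = m = 1; three random small outgoing three-momenta.
import numpy as np

rng = np.random.default_rng(0)
q = 0.05 * rng.standard_normal((3, 3))        # rows: particles
E4 = np.sqrt(1.0 + (q**2).sum(axis=1))        # on-shell energies
P = E4.sum()**2 - (q.sum(axis=0)**2).sum()    # p^2 of the incoming leg

exact = 1.0 / (P - 1.0)
E = 0.5*(q**2).sum() - (q.sum(axis=0)**2).sum()/6.0  # Eq. (nonenergy), n = 3
approx = (1.0/8.0) * (1.0 - 0.75*E)
print(exact, approx)                          # agree up to O(E^2)
\end{verbatim}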
\subsection{Tree Level Investigation of Beyond Threshold ($1\rightarrow 5$) and its Generalities} Let us go one step further and analyze the process of $1\rightarrow 5$ particles in the same non-relativistic limit. Now a problem starts to appear, because we have ten different combinations of external leg positions for the Feynman diagram, as shown in Figure~\ref{1t5}. These are all the possible combinations that the internal propagator can have, considering that the particles are identical. \begin{figure}[h!] \centering \includegraphics[width=15cm]{1to5a} \caption{Amputated amplitude for the $1\rightarrow 5$ process. Generated with FeynArts~\cite{feyart}.} \label{1t5} \end{figure} The amplitude can be written as: \begin{align} \label{15} \mathcal{A}(1 \rightarrow 5) = \frac{\lambda^{2}}{p^{2}-m^{2}} \sum_{ijk} \frac{1}{(q_{i}+q_{j}+q_{k})^{2}-m^{2}} \, , \end{align} where the sum runs over the combinations of $i$, $j$ and $k$ listed in Table~\ref{tab:1}. \begin{table}[h!] \centering \begin{tabular}{|l|l|l|} \hline i & j & k \\ \hline 1 & 2 & 3 \\ \hline 1 & 2 & 4 \\ \hline 1 & 2 & 5 \\ \hline 1 & 3 & 4 \\ \hline 1 & 3 & 5 \\ \hline 1 & 4 & 5 \\ \hline 2 & 3 & 4 \\ \hline 2 & 3 & 5 \\ \hline 2 & 4 & 5 \\ \hline 3 & 4 & 5 \\ \hline \end{tabular} \caption{Possible combinations for the sum over $i$, $j$ and $k$.} \label{tab:1} \end{table} The problem now is the non-relativistic limit, because we have two propagators: one carrying all the momenta and the other carrying only three at a time. Doing the non-relativistic limit just like before, we can write the amplitude as: \begin{align} \label{abosa} \mathcal{A}(1 \rightarrow 5) = \left( \frac{\lambda}{2}\right)^{2} \frac{1}{\Delta_{1}+\Delta_{2} t^{2} + \Delta_{3}t^{4}} \sum_{ijk} \frac{1}{\Delta_{4}+\Delta_{5}^{ijk}t^{2} -\frac{1}{4m^{2}}\Delta_{6}^{ijk} t^{4}} \, . \end{align} In this expression $t$ is just a fictitious parameter that helps to expand in small spatial momenta; it keeps track of the power of $\left|\vec{q}_{i}\right|$. In the end we expand Eq.~(\ref{abosa}) in $t$ and set $t=1$ to get the non-relativistic approximation.
These $\Delta$'s appear from expanding the denominators of Eq.~(\ref{15}) up to quartic order: \begin{align} \Delta_{1} = 12 m^{2} \, , \end{align} \begin{align} \Delta_{2} = 2 ( \vec{q}_{1}^{\hspace{0.1cm}2}+\vec{q}_{2}^{\hspace{0.1cm}2}+\vec{q}_{3}^{\hspace{0.1cm}2}+\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{5}^{\hspace{0.1cm}2}) - \vec{q}_{1}\vec{q}_{2} - \vec{q}_{1}\vec{q}_{3} - \vec{q}_{1}\vec{q}_{4} - \end{align} \begin{align} \nonumber -\vec{q}_{1}\vec{q}_{5} - \vec{q}_{2}\vec{q}_{3} - \vec{q}_{2}\vec{q}_{4} - \vec{q}_{2}\vec{q}_{5} - \vec{q}_{3}\vec{q}_{4} - \vec{q}_{3}\vec{q}_{5} - \vec{q}_{4}\vec{q}_{5} \, , \end{align} \begin{align} \Delta_{3} = -\frac{1}{2m^{2}} ( \vec{q}_{1}^{\hspace{0.1cm}4}+\vec{q}_{2}^{\hspace{0.1cm}4}+\vec{q}_{3}^{\hspace{0.1cm}4}+\vec{q}_{4}^{\hspace{0.1cm}4}+\vec{q}_{5}^{\hspace{0.1cm}4})+\frac{1}{4m^{2}} ( \vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{2}^{\hspace{0.1cm}2} +\vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{3}^{\hspace{0.1cm}2}+\vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}+ \end{align} \begin{align} \nonumber +\vec{q}_{2}^{\hspace{0.1cm}2}\vec{q}_{3}^{\hspace{0.1cm}2}+\vec{q}_{2}^{\hspace{0.1cm}2}\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{2}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}+\vec{q}_{3}^{\hspace{0.1cm}2}\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{3}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}+\vec{q}_{4}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}) \, , \end{align} \begin{align} \Delta_{4} =4m^{2} \, , \end{align} \begin{align} \Delta_{5}^{ijk} = \vec{q}_{i}^{\hspace{0.1cm}2}+\vec{q}_{j}^{\hspace{0.1cm}2}+\vec{q}_{k}^{\hspace{0.1cm}2} -\vec{q}_{i}\vec{q}_{j}-\vec{q}_{i}\vec{q}_{k}-\vec{q}_{j}\vec{q}_{k} \, , \end{align} \begin{align} \Delta_{6}^{ijk} = \vec{q}_{i}^{\hspace{0.1cm}4} + \vec{q}_{j}^{\hspace{0.1cm}4} + \vec{q}_{k}^{\hspace{0.1cm}4} - \vec{q}_{i}^{\hspace{0.1cm}2}\vec{q}_{j}^{\hspace{0.1cm}2} - \vec{q}_{i}^{\hspace{0.1cm}2}\vec{q}_{k}^{\hspace{0.1cm}2} -\vec{q}_{j}^{\hspace{0.1cm}2}\vec{q}_{k}^{\hspace{0.1cm}2} \, . \end{align} With this information we can do the non-relativistic expansion of the amplitude: \begin{align} \mathcal{A}(1 \rightarrow 5) = \lambda^{2} \sum_{ijk} \left( \frac{1}{192m^{4}} - \frac{1}{2304m^{6}}(\Delta_{2}+3\Delta^{ijk}_{5}) + \dots \right) \, . \end{align} The first term gives the right threshold amplitude; the second term we rewrite in terms of the non-relativistic energy. Using the definition of the non-relativistic energy, Eq.~(\ref{nonenergy}), it is a direct but tedious computation to show: \begin{align} \sum_{ijk} \Delta_{2} + 3\Delta_{5}^{ijk} = 95mE \, . \end{align} The amplitude up to first order is: \begin{align} \mathcal{A}(1 \rightarrow 5) = 5! \left( \frac{\lambda}{48m^{2}}\right)^{2}\left(1-\frac{19}{24} \frac{E}{m} + \dots \right) \, . \end{align} This result is in agreement with~\cite{Papadopoulos-nov-92,Libanov-jul-94}. It shows that indeed the first correction beyond threshold comes only from the non-relativistic energy. Again, if we try to go to the next order, this ceases to be the case, because of odd powers of momentum in the expression for the energy squared that do not appear in the calculation. The next step in this investigation is to show and discuss some results coming from recursion relations and semiclassical computations. Mostly we will comment on them and try to interpret them inside this framework. From these cases, we learn that information beyond threshold is tough to get.
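Both the threshold value and the $-\frac{19}{24}\frac{E}{m}$ coefficient can be checked numerically by summing the ten propagator combinations directly. The Python sketch below (assuming NumPy; $\lambda=m=1$; names illustrative) evaluates Eq.~(\ref{15}) exactly and compares it with the expansion above:
\begin{verbatim}
# Check of A(1 -> 5): threshold value and first non-relativistic
# correction, Eq. (15). Sketch: lambda = m = 1.
import numpy as np
from itertools import combinations
from math import factorial

def amplitude(q):                             # q: five spatial momenta
    E4 = np.sqrt(1.0 + (q**2).sum(axis=1))
    p2 = E4.sum()**2 - (q.sum(axis=0)**2).sum()
    total = 0.0
    for i, j, k in combinations(range(5), 3):
        s = (E4[i]+E4[j]+E4[k])**2 - ((q[i]+q[j]+q[k])**2).sum()
        total += 1.0 / (s - 1.0)
    return total / (p2 - 1.0)

rng = np.random.default_rng(1)
q = 0.03 * rng.standard_normal((5, 3))
E = 0.5*(q**2).sum() - (q.sum(axis=0)**2).sum()/10.0
print(amplitude(np.zeros((5, 3))), factorial(5)*(1/48.0)**2)    # threshold
print(amplitude(q), factorial(5)*(1/48.0)**2 * (1 - 19*E/24))   # near threshold
\end{verbatim}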
\subsection{Recurrence Relations and General Results of Beyond Threshold Amplitudes at High Multiplicity} The focus so far has been to study these high multiplicity processes in the perturbative regime, and we have already seen why these amplitudes grow factorially. The last piece of information on the perturbative regime comes from recursion relations. Here we highlight their history and main results. After that, we comment on some recent results coming from semiclassical calculations. These results were essential to motivate the studies of high multiplicity processes after the initial wave of results that we covered so far. The first improvement came in 1992, when E.N. Argyres and Costas G. Papadopoulos used recurrence relations to find amplitudes with one nonzero momentum in the final state~\cite{Papadopoulos-nov-92}. The general form of the recursion relation is represented in Figure~\ref{rec}. They found the solution for a general monomial interaction of the form: \begin{align} V(\phi) = \lambda_{m} \frac{\phi^{m}}{m!} \, . \end{align} \begin{figure}[h!] \centering \includegraphics[width=15cm]{recursion} \caption{Recursion relation for $\phi^{4}$ theory in the unbroken phase, $n=n_{1}+n_{2}+n_{3}$.} \label{rec} \end{figure} The specialization to our $\phi^{4}$ case gives: \begin{align} \mathcal{A}(1 \rightarrow 2k+1) = (2k)!(2k)(2k+1+\omega) \left( \frac{\lambda_{4}}{48}\right)^{k} \left( k + 1 + \frac{2(3-\omega)}{3+\omega} k + \frac{(3-\omega)(1-\omega)}{(3+\omega)(5+\omega)}(k-1) \right) \, , \end{align} where the mass is set to one and $\omega=2E-1$, with $E$ the energy of the final particle with nonzero spatial momentum. The threshold result is recovered when $\omega=1$. With these results we can also address $2 \rightarrow n$ processes, using a negative $\omega$, as was shown in recovering the threshold results: \begin{align} \mathcal{A}(2 \rightarrow n) = 0 \quad n>4 \, . \end{align} In the same paper the authors calculated the broken-phase case with one particle of arbitrary momentum in the final state: \begin{align} \mathcal{A}^{B} (1 \rightarrow n ) = (n-1)! (n-1)(n+\omega) \left( \frac{\lambda_{4}}{12}\right)^{(n-1)/2} \left( n + 2(n-1)\frac{(1-\omega)}{(2+\omega)} - \frac{\omega (1-\omega)}{(2+\omega)(3+\omega)}\right) \, , \end{align} and again the limit $\omega =1$ checks out. These results seem to show that, in the perturbative region, even momentum dependence cannot save the behavior of these amplitudes. After these results, people realized that restricting to the quantum mechanical limit gives the complete result. This motivated the study of transition amplitudes in the $x^{4}$ oscillator by C.A. Diamantis, B.C. Georgalas, A.B. Lahanas and E. Papantonopoulos~\cite{anarmo}. They looked for bounds on the transition amplitudes generated by an external source. This is the (1+0)D simplification of our problem. They introduced the concept of the holy grail function $F(\lambda n)$ for the study of these amplitudes: \begin{align} \mathcal{A}(1 \rightarrow n) = \kappa e^{\frac{F(\lambda n)}{\lambda^{2}}} \, . \end{align} In~\cite{anarmo} they showed that, in the trusted region, these transitions never grow exponentially. However, this is not a definitive answer, because we cannot cover all of the parameter space. This result was significant for understanding the unitarity of the quantum mechanical case, but better expressions for the amplitudes were needed. The situation changed dramatically when M.V. Libanov, V.A. Rubakov, D.T. Son and S.V.
Troitsky published two papers about the exponentiation of these amplitudes~\cite{Libanov-mar-95,Libanov-jul-94}. It was conjectured that the amplitudes at high multiplicity should have the special form: \begin{align} \mathcal{A}(1 \rightarrow n) \propto \sqrt{n!} e^{\frac{F(\lambda n,\epsilon)}{\lambda}} \, , \end{align} inspired by the instantonic cross section. The behavior of the amplitude is completely determined by the holy grail function $F$. To give evidence for this conjecture, they redid all the results so far in this formalism and wrote the corresponding holy grail function. For the $\frac{\lambda \phi^{4}}{4}$ case~\cite{Libanov-mar-95} at tree level and threshold: \begin{align} F_{unbroken} = \frac{\lambda n}{2} \ln(\frac{\lambda n}{8}) - \frac{\lambda n}{2} \, , \end{align} \begin{align} F_{broken} = \frac{\lambda n}{2} \ln(\frac{\lambda n}{2}) - \frac{\lambda n}{2} \, . \end{align} For the threshold amplitude at tree level of the O(2) model with interaction $\frac{\lambda (\phi_{1}^{2}+\phi_{2}^{2})^{2}}{4}$~\cite{Libanov-mar-95}: \begin{align} F_{unbroken} = \lambda n \left( \ln (\frac{\lambda n(\sqrt{m_{1}}+\sqrt{m_{2}})^{2}}{8(m_{1}+m_{2})}) -1 \right) \, , \end{align} \begin{align} F_{broken} = \lambda n \left( \ln (\frac{\lambda n(\sqrt{m_{1}}+\sqrt{2m_{2}})^{2}}{2(m_{1}+2m_{2})}) -1 \right) \, . \end{align} Then they introduced new results for beyond threshold and loop corrections. They solved the recursion relations of Figure~\ref{rec} in the large-$n$ limit, using the fact that at leading order only the non-relativistic energy contributes. In this region they showed that for the unbroken $\frac{\lambda \phi^{4}}{4}$ case at tree level: \begin{align} \mathcal{A}(1 \rightarrow n) = n! \left( \frac{\lambda}{8} \right)^{(n-1)/2} e^{-\frac{5}{6}E} \, , \end{align} where $m=1$ and $E$ is the non-relativistic kinetic energy of the final particles. Another important result was to show that at leading order the loop corrections exponentiate to the form: \begin{align} \mathcal{A}(1 \rightarrow n) = \mathcal{A}_{tree}(1 \rightarrow n) e^{B\lambda n^{2}} \, . \end{align} In the second paper, they started to lay the groundwork for the generalization of the WKB method to semiclassical calculations in Quantum Field Theory. For the first time, we had a closed expression for the large multiplicity amplitudes with all the momentum dependence. All of these results were still perturbative, and they kept showing the same behavior. The next year, D.T. Son~\cite{Son-may-95} showed how to generalize the WKB method to calculate semiclassical amplitudes in the regime: \begin{align} \lambda \rightarrow 0 \, , \quad n \rightarrow \infty \, , \quad \lambda n = g = \text{const} \, , \quad \epsilon = \text{const} \, , \end{align} where $\epsilon$ is the non-relativistic kinetic energy per particle, in units of the mass, in the final state: \begin{align} \epsilon = \frac{E-nm}{nm} \, . \end{align} This was one of the missing pieces needed to understand these processes. The question is now written in terms of the right coupling, $g= \lambda n$. This coupling has a 't Hooft-like form, similar to a large-$N$ Yang-Mills expansion. Finding an expansion for small and large $g$, we can then see whether these processes can grow exponentially. The amazing thing about this semiclassical calculation is that we get the decay rate automatically, without the need to integrate over the $n$-particle phase space.
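The exponentiation conjecture can be illustrated numerically at tree level in the unbroken phase. The Python sketch below (names illustrative; $\lambda=0.01$ is an arbitrary choice) compares $\ln \mathcal{A}_{n} \approx \ln n! + \frac{n}{2}\ln(\lambda/8)$ with $\frac{1}{2}\ln n! + F_{unbroken}/\lambda$, showing that their difference is subleading in $n$, consistent with $\mathcal{A} \propto \sqrt{n!}\, e^{F/\lambda}$:
\begin{verbatim}
# Check of the tree-level exponentiation in the unbroken phase:
# ln A_n = ln n! + (n/2) ln(lambda/8) vs (1/2) ln n! + F(lambda n)/lambda.
# Sketch: lambda = 0.01; lgamma(n+1) = ln n!.
from math import lgamma, log

lam = 0.01
for n in (11, 101, 1001):
    lnA  = lgamma(n + 1) + 0.5*n*log(lam/8)
    F    = 0.5*lam*n*log(lam*n/8) - 0.5*lam*n
    conj = 0.5*lgamma(n + 1) + F/lam
    print(n, (lnA - conj)/n)   # -> 0: the difference is subleading in n
\end{verbatim}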
The semiclassical calculation only computes few $\rightarrow$ many processes, but it is expected that the exponential part is independent of the number of initial particles, provided it is small. At the time, only the limit $g \ll 1$ could be explored, which is not so useful at first glance: \begin{align}\label{smallg} F(g,\epsilon) = g \ln(\frac{g}{16}) - g + \frac{3g}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} g \epsilon + \frac{ \sqrt{3}}{4\pi} g^{2} \, . \end{align} This was still not enough, because we need the expression in the large-$g$ limit to say anything sensible. It was, nevertheless, a big step in the right direction. A decade passed before the new generation of results started to appear: first, the generalization of the recurrence relations to different particle content, bringing these results closer to the Standard Model, by Valentin V. Khoze~\cite{Khoze-apr-14}. These results were still perturbative, but showed the same behavior as the simpler scalar case. After that, the most important results for understanding high multiplicity processes came again from Valentin V. Khoze~\cite{Khoze-jun-18}, who used D.T. Son's method plus some new tricks to obtain a semiclassical amplitude in the right limit, $g \gg 1$. This result only exists for $\phi^{4}$ in (1+3)D in the broken phase, but it is a revolutionary solution nonetheless: \begin{align} \label{eq10} \Gamma(\epsilon) \propto e^{n \left( \ln(\frac{g}{4}) +0.85\sqrt{g} -1 + \frac{3}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} \epsilon \right)} \, . \end{align} Eq.~(\ref{eq10}) is valid in the regime: \begin{align} \lambda \rightarrow 0 \, , \quad n \rightarrow \infty \, ,\quad \lambda n = g \gg 1 , \quad \epsilon \ll 1 \, . \end{align} This was the first result outside ordinary perturbation theory where the exponential growth appeared. The analysis of this result is done at the end of the next chapter. Concluding, it is worth showing another impressive result, coming from Joerg Jaeckel and Sebastian Schenk~\cite{Jaeckel-jun-18}. They did the full perturbative analysis of the quantum mechanical case and showed that the amplitudes indeed exponentiate and that, after resumming the series, there is no unitarity violation. This shows that in this case we cannot trust the partial sums of the initial terms of the perturbative series, as we expected. \chapter{Conclusion} \pagenumbering{arabic} In this thesis, we have examined the Higgsplosion mechanism. The Higgsplosion mechanism appears naturally when we study high multiplicity processes, and it has the potential to generate exciting features inside a Quantum Field Theory. To understand it, we need to study the results that lead to it. Firstly, we studied the systematic construction of high multiplicity amplitudes in a perturbative setup. In particular, we investigated the $\phi^{4}$ theory up to one loop at threshold. The analysis of these amplitudes showed the factorial growth that endangers unitarity. Even quantum corrections could not tame this behavior, indicating that ordinary perturbation theory is not suitable for these processes. We then presented some beyond-threshold results, still perturbative, that showed the same behavior. In this setup, we cannot say whether the expressions are blowing up because the approximation is bad or because this is an essential feature of the theory.
Next, we introduced the semiclassical result, Eq.~(\ref{eqimpor}), which shows this factorial behavior outside ordinary perturbation theory. This kind of approximation was tested in the zero-dimensional toy model. We showed that usual perturbation theory and strong-coupling perturbation theory have a limited domain of validity. The semiclassical-like expansion probed a different region than both approximations, showing overall consistency with the result. However, a better understanding of the limitations of Eq.~(\ref{eqimpor}) is still needed. Then, we introduced the Higgsplosion mechanism. We reviewed the original formulation of Higgsplosion and showed that, at least using the semiclassical calculation, it could be possible to have it in broken $\phi^{4}$ in (1+3)D. The extensions of the Higgsplosion mechanism depend on how one reads the results. Under the original interpretation, UV finiteness renders the theory finite and the coupling stops running. The generalization to other models still depends on a semiclassical approximation with different matter content. The interpretation proposed in this thesis for the Higgsplosion mechanism is that the theory reorganizes its degrees of freedom at the Higgsploding scale. After this reorganization, another theory takes over in the UV. This is similar to D-brane decay from tachyon condensation in string theory. The apparent non-locality can be understood in this setup because the $n$ scalars behave collectively in a finite-size structure. After the change in description, it is expected that the theory develops different kinds of interactions, and the flow to the UV is not necessarily finite. This new interpretation needs to be tested, and one way is to prove that Higgsplosion occurs at least in the broken $\phi^{4}$ theory in (1+3)D. The current state of the lattice does not bring anything new to the table, as we discussed, but the potential is there: lattice investigation of this system could be fruitful because of its non-perturbative nature. Application to the Standard Model is still far from being realized, and it is not clear whether this mechanism occurs in that system. Once this is resolved, the generalization to different matter content seems immediate. The Higgs would be the portal for this new phenomenon in the Standard Model. \chapter{Higgsplosion and Higgspersion} \label{c3} \pagenumbering{arabic} \section{The Rise of the Higgsplosion} \subsection{The Higgsplosion Mechanism} It is known that in a Scalar Field Theory a high multiplicity amplitude can violate perturbative unitarity, making the theory inconsistent in this limit~\cite{Arkhipov-nov-82,Goldberg-may-90}. We did this computation and confirmed these results~\cite{Brown-nov-92,Voloshin-feb-92,loop1,Smith-apr-93}. In a concrete example, the off-shell amplitude for an $n$-particle decay, as shown in Figure~\ref{offshell}, grows factorially with $n$: \begin{align} \mathcal{A}(1 \rightarrow n ) \propto \lambda^{n} n! \, , \end{align} such that the decay rate behaves like: \begin{align} \Gamma(1 \rightarrow n) \propto \lambda^{n} n! V_{n}(E) \, . \end{align} \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgsplosion} \caption{Off-shell $1 \to n$ particle process.} \label{offshell} \end{figure} Even after we remove one factor of $n!$ from the expression, because the final particles are identical, the decay rate grows factorially.
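To make the onset of this growth concrete, the short Python sketch below (schematic; it ignores the phase-space factor $V_{n}(E)$ and picks $\lambda=0.01$ purely for illustration) shows $\ln(\lambda^{n} n!)$ turning positive once $n$ exceeds roughly $1/\lambda$:
\begin{verbatim}
# Schematic onset of factorial growth: lambda^n n! blows up for n ~ 1/lambda.
# The phase-space factor V_n(E) is deliberately ignored here.
from math import lgamma, log

lam = 0.01
for n in (50, 100, 200, 400):
    print(n, n*log(lam) + lgamma(n + 1))  # ln(lambda^n n!); sign flip near e/lam
\end{verbatim}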
At the time this result was found, it was interpreted as a sign that perturbation theory becomes effectively strongly coupled when $n > 1/\lambda $. The picture changed in 2017~\cite{Khoze-higgsplosion}, when Valentin V. Khoze and Michael Spannowsky proposed that such behavior renders the cross-section of physical processes unitary and even generates additional effects inside the theory. The central point of the proposal is to look at the full propagator of a virtual scalar in some intermediary process, as depicted in Figure~\ref{sper}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgspersion} \caption{Diagrammatic representation of the Higgspersion. The propagator between processes is the full propagator of the field $\phi$.} \label{sper} \end{figure} In the Standard Model, this role would be played by the Higgs. The full propagator can be written in the form: \begin{align} \Delta_{H}(p) = \frac{i}{p^{2} - M^{2}(p^{2}) + i \frac{m}{Z_{\phi}}\Gamma(p^{2})} \, , \end{align} where $M^{2}(p^{2})$ has the contribution from the bare mass and the real part of the 1PI function Eq.~(\ref{1pid}). A factorially growing decay rate would mean that the propagator becomes strongly suppressed at large $p^{2}$. This blowing up of the decay rate is called Higgsplosion, and the propagator suppression Higgspersion. This mechanism would save unitarity because, in an intermediary process, the suppression wins at large $p^{2}$ in a physical process: \begin{align} \sigma_{n} \propto \sqrt{s} \frac{\Gamma_{n}(s)}{s^{2} +(\frac{m}{Z_{\phi}})^{2}\Gamma^{2}(s)} \, , \end{align} where $\Gamma(s)$ is the total off-shell decay width for the scalar field. This remains finite even when $\Gamma_{n} \rightarrow \infty$. The decay rate going to infinity is not problematic, because it is not a real process, even though we can use it to construct real ones. The major consequence of this effect is the suppression of loops at high energy. In any process involving loops, we can trade the free propagator for the full propagator if we go to high enough orders, as exemplified in Figure~\ref{rego}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{fullpropsubst} \caption{Diagrammatic representation of changing the free propagator for the full propagator inside processes. The hashed circle represents the full propagator of the field $\phi$.} \label{rego} \end{figure} When considering the full propagator, the momentum integral will shut off at the Higgsploding scale because of the exponential suppression of the propagator. This scale $E^{*}$ is a new dynamical scale of the theory, appearing at the energy where the decay rate grows exponentially. The strong claim made from this proposal is that the killing of the loops makes the couplings stop running. This means that the theory becomes UV finite and sits at a nontrivial fixed point at this scale~\cite{wilson}. There is strong evidence that this can happen at least in $\phi^{4}$ theory in the broken phase, where we have semiclassical computations in a region where we can trust the decay rate, and it has the characteristic behavior~\cite{Khoze-jun-18}: \begin{align} \label{eqimpor} \Gamma(\epsilon) \propto e^{n \left( \ln(\frac{g}{4}) +0.85\sqrt{g} -1 + \frac{3}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} \epsilon \right)} \, ; \end{align} this expression is calculated in the limit $\lambda \rightarrow 0 $, $n \rightarrow \infty$, $\lambda n = g \gg 1$ and $\epsilon \rightarrow 0$.
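As a rough illustration of how Higgspersion tames the growth, the minimal sketch below feeds a toy, exponentially growing width into the cross-section formula above. The width profile, the growth rate $c$ and the mass value are invented for illustration only, and the same toy width is used in numerator and denominator:
\begin{verbatim}
# Toy illustration of Higgspersion: an exponentially growing width
# Gamma(s) fed into the cross-section formula above.  The profile
# exp(c*sqrt(s)) - 1 and all parameter values are invented toy choices.
import math

m, Z, c = 1.0, 1.0, 2.0

def width(s):
    return math.exp(c * math.sqrt(s)) - 1.0

def sigma(s):  # sigma ~ sqrt(s) Gamma(s) / (s^2 + (m/Z)^2 Gamma(s)^2)
    g = width(s)
    return math.sqrt(s) * g / (s ** 2 + (m / Z) ** 2 * g ** 2)

for s in (1.0, 4.0, 9.0, 25.0, 100.0):
    print(f"s = {s:6.1f}   Gamma = {width(s):.3e}   sigma = {sigma(s):.3e}")
\end{verbatim}
However fast the toy width grows, $\sigma$ stays finite and is in fact driven to zero at large $s$, which is exactly the Higgspersion effect described above.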
In the first part of this thesis, we redid the perturbative computations of these amplitudes at and beyond threshold and commented on results coming from different methods. Now the focus is to understand the consequences if Higgsplosion is true; and if Higgsplosion does not occur, we need to understand why. To study this, we need to review the significant steps required for Higgsplosion to work, and then analyze each step to see if there are any flaws or inconsistencies. The central player of the Higgsplosion mechanism is the exponential growth of the $n$-particle decay rate at some scale. As we argued, it is difficult to obtain these expressions using ordinary perturbation theory, because we are stuck near the threshold region in energy. The picture changed when the semiclassical computation confirmed the result coming from perturbation theory, namely that the decay rate could indeed grow. This is by no means the end of the story, as better decay rate expressions need to be found, and we do not fully understand the exact parameter region where Eq.~(\ref{eqimpor}) can be trusted. If the Higgsploding mechanism exists in a Scalar Field Theory, we need to be able to compute the decay rate at high $p^{2}$ to see it. The next essential ingredient is the Optical Theorem and the Dyson resummed propagator. The Optical Theorem holds when we have a unitary theory~\cite{qft} and, as we argued in chapter~\ref{c1}, the propagator is valid even at the non-perturbative level. These are the principal players of this mechanism. If we want to know whether this happens, we need to dig deeper into these points to see what we can extract from them. It is very intriguing how simple steps can potentially generate a powerful new phenomenon in a Quantum Field Theory, one accessible in principle perturbatively. The concept of Higgsplosion has been presented, and by itself it is straightforward to understand. Now we turn our attention to the underlying assumptions, to see if they are consistent in such a way that we can have this mechanism. To do this, we review and introduce some potential consequences of Higgsplosion. After that, we review some of the modern criticism made in two papers~\cite{Belyaev-aug-18,Monin-aug-18} and discuss some results from the lattice~\cite{lattice1,lattice2}. Having done that, we will try to estimate $E^{*}$ for the case where we have the decay rate of Eq.~(\ref{eqimpor}). Finally, we try to understand the role of the perturbative expansion and its validity, and work out a toy model where the propagator decays exponentially. \subsection{The Potential Power of the Higgsplosion} The Higgsplosion mechanism could generate interesting phenomena even when we have only scalar fields. We saw in the computation of the one-loop amplitude in the broken, Eq.~(\ref{ampbro}), and unbroken, Eq.~(\ref{ampun}), phases that we had to deal with divergences; for the unbroken phase: \begin{align} \delta m^{2}_{1} =- \frac{\lambda_{R}\mathcal{I}_{1}}{4} \, , \end{align} \begin{align} \frac{\delta\lambda_{1}}{3!} =\frac{\lambda_{R}^{2}}{16} \left( \mathcal{I}_{2}+\frac{1}{2\pi^{2}} \right) \, , \end{align} and for the broken phase: \begin{align} \delta m_{1}^{2} = \frac{\lambda_{R}}{8}\left[ \mathcal{I}_{1} - M_{R}^{2}\left( \frac{3 \mathcal{I}_{2}}{4} + \frac{3}{8\pi^{2}} -\frac{\sqrt{3}}{16\pi}\right) \right] \, , \end{align} \begin{align} \frac{\delta \lambda_{1}}{3!} = \frac{\lambda_{R}^{2}}{48} \left[ \frac{3 \mathcal{I}_{2}}{2} +\frac{3}{4\pi^{2}} - \frac{\sqrt{3}}{16\pi} \right] \, .
\end{align} These divergences appear because, in the loop integrals, we integrate the propagator up to arbitrarily high momentum. This changes when we consider the Higgsplosion mechanism. The full propagator can be substituted for the propagators inside the loops if we reorganize the expansion order by order, as represented in Figure~\ref{rego}. Making this substitution, it becomes clear that we should not integrate over all momenta. The Higgsplosion shuts down the propagator above the scale $E^{*}$, so we integrate up to a sphere of radius $E^{*}$. This means that the theory is finite, and we have a natural scale at which to measure our observables. Doing the running of the coupling, we see that it shuts down when we hit the Higgsploding scale, entering the new phase of the theory, as represented in Figure~\ref{fazer}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{RGscalar} \caption{Representation of the coupling flow in a $\phi^{4}$ theory with Higgsplosion. The $\lambda^{*}_{max}$ is the largest coupling for which we can trust the expressions indicating the Higgsplosion. It could be that even outside this region the theory Higgsplodes at some point; then it would have a barrier where all couplings go to constants, independently of where they started.} \label{fazer} \end{figure} In $\phi^{4}$ the flow makes the coupling grow; this means that if we do not start at an ultra-perturbative value, we will quickly move away from it. The expressions for the decay rate can only be trusted in this limit. If we start at a $\lambda$ that is too big, we do not know if the theory will Higgsplode. This new phase depends on the value of the coupling being reached. Passing through this value, we need to find an $n$ and $\epsilon$ that can generate the growing decay rate of Eq.~(\ref{eqimpor}). This means that for each coupling value there exists a Higgsploding scale. The story could potentially be different if the Higgs of the Standard Model Higgsplodes: the coupling there is decreasing, signaling a potential vacuum decay. The theory, in principle, would always flow to the perturbative regime where we can trust the semiclassical calculation, and would always Higgsplode provided it does not vacuum decay first. This is just a schematic picture, and further calculations are needed to understand the full flow of the coupling. The same behavior occurs for the mass, meaning that with a Higgsplosion mechanism the mass is of order $E^{*}$, not of the heaviest particle that exists in the theory. This shields the scalar from receiving quantum corrections above this scale and could, in principle, be used in the Standard Model to render the Higgs mass natural. The nature of this shielding needs to be understood if this mechanism happens in ordinary Quantum Field Theory, probably indicating an enhancement of symmetry in the system. This feature can be interpreted as a UV fixed point that appears dynamically. We do not know if the theory will stay on it forever. The reorganization of degrees of freedom could generate new dynamics, and the running would need to be properly analyzed. It could be that after the theory Higgsplodes, we cannot trust the calculations done with the scalar field, because the theory ceases to create one-particle states, as represented diagrammatically in Figure~\ref{aa}. \begin{figure}[h!]
\centering \includegraphics[width=15cm]{conjecture} \caption{Representation of the possible phases inside the scalar theory if Higgsplosion occurs but is shut down above a certain scale, preserving the properties that we expect from a normal Quantum Field Theory. It may be an indication that the UV completion of this theory lies at a smaller scale than the Landau pole.} \label{aa} \end{figure} For the rest of the observables, the story is similar. The Higgspersion forces every process to be unitary, and the coupling freezes. This feature is what makes Higgsplosion too good to be true. We can try to imagine what would happen if we implemented something like this in the Standard Model. When we want to apply this to the Standard Model, we have to be really careful. There is no semiclassical result for the case of additional matter fields. We cannot trust the perturbative calculations to say that the Standard Model will Higgsplode, but we can try to extract some qualitative results. The first thing that we want to know is the scale $E^{*}$. We cannot make a reliable prediction for the Standard Model case, but if we trust naturalness arguments for the Higgs mass parameter, the theory should Higgsplode at $10$ to $10^{3}$ TeV. This is an assumption coming from outside the theory, and it may be that this is not the case. If the Higgs in fact Higgsplodes, then any loop involving it would be rendered finite. This, however, does not apply to loops with other fields of the Standard Model. The running of the couplings would be modified above this scale, but there is no reason for them to freeze. If, however, all particles Higgsplode, as proposed in~\cite{Khoze-jun-17}, then the Standard Model would be UV finite. Perturbatively, we can see that there is not much difference between a scalar, a fermion, or even a spin-one particle Higgsploding, as represented in Figure~\ref{hpuniverse}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgsplodeuniv} \caption{Diagrammatic representation of different high multiplicity decays.} \label{hpuniverse} \end{figure} However, the perturbative analysis cannot be trusted for high multiplicity computations, as we will discuss at the end of this chapter. The analysis using the perturbative assumption was made in~\cite{Khoze-jun-17,precision}. While in this thesis we are interested in the Standard Model application of this mechanism, we find it necessary to understand the proposal deeply before trying to use it. The focus now shifts to the problems with this mechanism and some useful discussions that ensued. We need to understand if Higgsplosion can occur in the first place, and this is related to how much of the parameter space of the semiclassical expression in Eq.~(\ref{eqimpor}) we can trust. In the end, we use some results coming from String Theory to study the exponential decay of a propagator in a toy model. \section{Some Questions about Higgsplosion} \subsection{Criticism From ``Problems with Higgsplosion''} We saw how the power to compute the decay rate in a Scalar Field Theory evolved over time. With modern techniques~\cite{Libanov-jul-94}, it was possible to write the decay rate in the exponential form: \begin{align} \Gamma(E,n) \propto e^{\frac{F(\lambda n, \epsilon)}{\lambda}} \, . \end{align} For this expression to work, the conjecture is that at high multiplicity only the non-relativistic energy contributes.
The whole deal with the Higgsplosion framework is to compute this function $F$ in an appropriate regime and analyze its behavior. For small $g$ and $\epsilon$, an expression was found for this holy grail of a function~\cite{Libanov-jul-94}, Eq.~(\ref{smallg}): \begin{align} F(g,\epsilon) = g \ln(\frac{g}{16}) - g + \frac{3g}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} g \epsilon + \frac{ \sqrt{3}}{4\pi} g^{2} \, , \end{align} where we ignore terms like $g^{3}$, $g^{2}\epsilon$ and $g\epsilon^{2}$. We can see that as $g \rightarrow 0$, $F \rightarrow -\infty$, and the decay rate is suppressed. The opposite is also true: increasing the coupling makes the decay rate grow exponentially. However, as pointed out in~\cite{Belyaev-aug-18}, this expression is only valid in the limit: \begin{align} g \ll 1 \, ,\quad \epsilon \ll 1 \, . \end{align} So we cannot trust this expression for couplings of order one: \begin{align} g \approx 1 \, , \quad n \approx \frac{1}{\lambda} \, . \end{align} In this range of validity (small $g$ and $\epsilon$), the expression is always negative, and Higgsplosion would not occur. This changed when new methods for computing $F$ were developed~\cite{Khoze-jun-18} and it was extended to the new region $g \gg 1$: \begin{align} F(g,\epsilon) = g \left( \ln(\frac{g}{4}) +0.85g^{1/2}-1 + \frac{3}{2}\left( \ln(\frac{\epsilon}{3\pi})+1 \right) - \frac{25}{12} \epsilon +\dots \right) \, . \end{align} We can see that for sufficiently large coupling and small $\epsilon$ this function increases. The detailed analysis of this result is done in section~\ref{s10}. This result is one of the most important for the Higgsplosion proposal: it shows that in an appropriate limit the decay rate can indeed grow. However, as pointed out in~\cite{Belyaev-aug-18}, this semiclassical solution exists only for $\phi^{4}$ in the broken phase in (1+3)D. It is by no means a general result for an arbitrary scalar field, and the generalization to the Standard Model is not clear. It is argued in~\cite{Belyaev-aug-18} that the contributions of order $g^{2}\epsilon$ need to be taken into account so we can determine the validity range of $\epsilon$ for this function. Without these terms, we cannot determine what values of $\epsilon$ can be trusted in this approximation. Such mixed terms could prevent the exponential growth of the decay rate and, by consequence, prevent the Higgsplosion from happening. The tricky part of this expression is that it has multiple parameters, and each of them plays a crucial role. On the other hand, at least in the ultra-small limit for $\epsilon$, this expression can be trusted, and we can find a coupling that makes the decay rate grow. Going away from this coupling could kill this growth, so we need to be careful with the RG flow in this theory. If, for instance, we have a small value of $\epsilon$ where the decay rate explodes, then we never probe larger values of $\epsilon$, because the theory would always decay before reaching this point. Nevertheless, the need for a better expression for $F(g,\epsilon)$, as argued in~\cite{Belyaev-aug-18}, is legitimate and something important to focus on for a better understanding of the Higgsplosion claim, and even of Quantum Field Theory in general. In the last part of the paper~\cite{Belyaev-aug-18}, they turn their attention to the full propagator. There it is argued that we cannot resum the propagator because the expression for the 1PI diagram is not convergent. That was already addressed, so we do not comment on it any further here.
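Both regimes of $F(g,\epsilon)$ are easy to probe numerically. The minimal sketch below evaluates the two expressions quoted above at a few points; the choice $\epsilon = 0.1$ and the scan ranges are illustrative choices of ours, not values singled out by the references:
\begin{verbatim}
# Evaluating the two regimes of F(g, epsilon) quoted above.  The value
# epsilon = 0.1 and the scan points are illustrative choices.
import math

def F_small(g, eps):  # valid for g << 1 and eps << 1
    return (g * math.log(g / 16) - g
            + 1.5 * g * (math.log(eps / (3 * math.pi)) + 1)
            - 25 / 12 * g * eps
            + math.sqrt(3) / (4 * math.pi) * g ** 2)

def F_large(g, eps):  # valid for g >> 1 and eps << 1
    return g * (math.log(g / 4) + 0.85 * math.sqrt(g) - 1
                + 1.5 * (math.log(eps / (3 * math.pi)) + 1)
                - 25 / 12 * eps)

eps = 0.1
for g in (0.1, 0.5, 1.0):
    print(f"g = {g:5.1f}   F_small = {F_small(g, eps):9.3f}")
for g in (10.0, 30.0, 100.0):
    print(f"g = {g:5.1f}   F_large = {F_large(g, eps):9.3f}")
\end{verbatim}
In its validity window the small-$g$ expression stays negative, while the large-$g$ expression crosses zero near $g \approx 30$ for this $\epsilon$; with $n = g/\lambda$, this crossover corresponds to $n \approx 300$ for $\lambda = 0.1$, which is the behavior analyzed in detail in section~\ref{s10}.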
Lastly, it is pointed out in~\cite{Belyaev-aug-18} that unitarity needs to be restored somehow if Higgsplosion does not come into play. The behavior of Eq.~(\ref{eqimpor}) shows some potential danger of unitarity violation. From their side, they argue that a better expression for $F$ would play the role of restoring unitarity. From the Higgsplosion side, Higgspersion plays that role in the theory. \subsection{Criticism From ``Inconsistencies with Higgsplosion''} Another source of discussion about Higgsplosion arises with the paper from Monin~\cite{Monin-aug-18}. In this paper, he argues for the impossibility of the Higgsplosion mechanism inside local Quantum Field Theory. To understand it better, let us run through his arguments and then discuss these results. We already saw the persistence of the exponential growth of the amplitude. Usually we talk about decay rates, but it is worth pointing out that, in the same way that the decay rate explodes, the spectral density also explodes: \begin{align} \rho (E,N) \propto \epsilon^{\frac{3N}{2}} e^{\tilde{c} N^{3/2} \sqrt{\lambda}} \left( 1 + O(N\ln(N))\right) \, , \end{align} with $\tilde{c}$ being some constant. This, in the context of Higgsplosion, is related to the exponential decay of the propagator in the UV. While the author argues that this behavior is inconsistent with local Quantum Field Theory, he points out that if these expressions cease to be valid at some point, the natural cutoff of the theory is much smaller than the Landau pole; otherwise, the exponential behavior persists. Now, let us discuss a little why this behavior is unusual in an ordinary Quantum Field Theory. Normally, we have the spectral representation for the propagator (this could be any two insertions of a local operator): \begin{align} \Delta_{F}(p^{2}) = \int \mathrm{d}s\, \frac{1}{p^{2}-s+i\epsilon}\, \rho(s) \, . \end{align} It is expected that this expression is divergent, and we need to do several subtractions to have a finite result. This procedure is related to the usual renormalization, which adds $m$ independent parameters that need to be fixed by experiments. In this context, a non-renormalizable or non-local theory would have infinite subtractions and, by consequence, an infinite number of new parameters that would need to be fixed: \begin{align} \Delta_{F} (p^{2})= P_{m-1}(p^{2}) + p^{2m}\int \mathrm{d}s\, \frac{1}{p^{2}-s+i\epsilon} \frac{\rho(s)}{s^{m}} \, . \end{align} The general form of the propagator allows us to extract the spectral density in terms of its imaginary part: \begin{align} -\frac{1}{\pi} \Im \left( \Delta_{F}(p^{2}) \right) = \rho (p^{2}) \, . \end{align} Up to this point, this is standard and very general Quantum Field Theory. What is done next is to assume what kind of distributions these operators are. We usually use tempered distributions, but they are very limiting, as pointed out in~\cite{Khoze-sep-18}. This happens because we cannot treat a non-renormalizable theory with such distributions~\cite{Jaffe-jan-67}. As a consequence, restricting to tempered distributions would create a possible inconsistency with the Higgsplosion mechanism. These distributions only exist with a finite number of subtractions and, by consequence, cannot fall faster than an arbitrary polynomial. An exponential decay would be inconsistent with this choice.
This is addressed in~\cite{Khoze-sep-18}, where they argue that Higgsplosion could exist within local Quantum Field Theory if we do not restrict to such distributions. They show that, in principle, the Higgsplosion mechanism does not tell us how fast the propagator falls beyond it being exponential, and because of that it can be consistent with local Quantum Field Theory. Something that we find strange in this process is the necessity of doing an infinite number of subtractions in the Higgsplosion scenario. The usual Scalar Field Theory in (1+3)D with potential $\phi^{4}$ is known to be perturbatively renormalizable and local, and somehow this is potentially being lost through this phenomenon. If the theory became non-local above this scale, that would be strange behavior as well. It can happen that most of these parameters are not relevant, so that we need to fix only a finite number of constants, and the theory then is local. That raises a flag about the possibility of such a feature. Maybe such a mechanism is shut down above some scale, saving the behavior at high energies while not making the theory UV finite. At intermediate energies, the theory would enter this new phase. We could see this new phase as a large number of field excitations behaving collectively. This reorganization of the degrees of freedom would happen at the Higgsplosion scale, but after that the new degrees of freedom become the relevant ones. This would by itself generate a new class of interactions that could make the theory sensible again in the UV. In this picture, the Higgsplosion is a phase transition inside the theory. In this interpretation, the Higgsploding scale becomes the cutoff where a UV completion enters. Such a UV completion describes the dynamics of these reorganized degrees of freedom in the high energy limit. This is discussed further in the last section of this chapter. Finally, we think that the discussion in the second appendix of~\cite{Monin-aug-18} is useful for a better understanding of what is happening in the perturbative calculation. We present the discussion here and follow up on it later to dig deeper. There, they propose a toy model, a (0+0)D Quantum Field Theory with quartic interaction: \begin{align} \label{integral0} A(\lambda,N) = \int_{0}^{\infty} \mathrm{d}x\, x^{2N} e^{-\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4}} \, . \end{align} The nice thing about this integral is that we can solve it exactly, to test different approximation schemes. The solution of this integral is: \begin{align} A(\lambda,N) = \lambda^{-(\frac{N}{2} + \frac{1}{4})} \Gamma(N+\frac{1}{2}) F_{1 , 1}\left( \frac{N}{2}+\frac{1}{4},\frac{1}{2}, \frac{1}{4\lambda} \right) \, , \end{align} where $F_{1,1}$ is the confluent hypergeometric function. Doing normal perturbation theory on the integral generates an asymptotic series that is divergent in nature: \begin{align} \label{epepep} A(\lambda, N) \approx 2^{N-\frac{1}{2}} \Gamma(N + \frac{1}{2}) \left( 1 - \frac{\lambda}{4}(2N+3)(2N+1) + \dots \right) \, . \end{align} The interesting thing about this series is that not only does $\lambda$ need to be small: in fact, $\lambda N$ needs to be small for this expression to be a good approximation. Because we are interested in the large $\lambda N$ limit, we need a different kind of approximation.
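Before moving to that approximation, the limited validity of the partial sums is easy to exhibit numerically. The minimal sketch below, which assumes nothing beyond the integral and the two-term series above, compares a direct quadrature of Eq.~(\ref{integral0}) with the truncation of Eq.~(\ref{epepep}); the parameter values are illustrative:
\begin{verbatim}
# Cross-check of the truncated series against direct quadrature of
# the integral A(lambda, N) above.  Parameter choices are illustrative;
# the point is that small lambda alone does not guarantee a good
# partial sum once N grows.
import mpmath as mp

def a_exact(lam, n):
    peak = mp.sqrt(2 * n)  # integrand peaks near x = sqrt(2N)
    f = lambda x: x ** (2 * n) * mp.exp(-x ** 2 / 2 - lam * x ** 4 / 4)
    return mp.quad(f, [0, peak, mp.inf])

def a_two_terms(lam, n):  # first two terms of the series above
    return (mp.mpf(2) ** (n - 0.5) * mp.gamma(n + 0.5)
            * (1 - mp.mpf(lam) / 4 * (2 * n + 3) * (2 * n + 1)))

for lam, n in [(1e-4, 1), (1e-4, 10), (1e-4, 100)]:
    ratio = a_two_terms(lam, n) / a_exact(lam, n)
    print(f"lambda = {lam}, N = {n:3d}, two-term/exact = {mp.nstr(ratio, 4)}")
\end{verbatim}
Even at $\lambda = 10^{-4}$ the two-term sum is essentially exact for $N=1$ but fails completely by $N=100$, in line with the discussion above.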
In this case, we can work out the proposal of~\cite{Monin-aug-18} and use a nontrivial saddle point to do the perturbation theory: \begin{align} x^{2N} = e^{2N \ln(x)} \, . \end{align} Taking this into account, we can arrive at the expansion for $\lambda \rightarrow 0$ and $N \rightarrow \infty$: \begin{align} \label{epep} A(\lambda,N) \approx \frac{\sqrt{\pi}}{(8\lambda N)^{1/4}} e^{\frac{N}{2}(\ln(\frac{2N}{\lambda}) -1 - \sqrt{\frac{2}{\lambda N}})}(1 + O(\frac{1}{\sqrt{\lambda N}})) \, . \end{align} This expression is valid for $\lambda N \gg 1$, and it is exactly what we need to discuss the Higgsplosion. In this toy model we can check both Eq.~(\ref{epepep}) and Eq.~(\ref{epep}) against the exact result. Many parameters play important roles, and the balance between them determines the validity region of each expansion. Both papers discussing problems with Higgsplosion~\cite{Belyaev-aug-18,Monin-aug-18} show the necessity of finding a better expression for the beyond threshold amplitude and decay rate. We discuss in the next section the arguments coming from lattice simulations that could contest the Higgsplosion hypothesis. After that, we try to estimate the value of $E^{*}$ from Eq.~(\ref{eqimpor}). \subsection{Lattice Results About Higgsplosion and Some Comments} The need for a better analytic expression for the decay rate could be bypassed if we could extract information about this process from the lattice. The main problem with this, ignoring the inherent problem of obtaining the result in 4D (the triviality problem~\cite{trivi1,trivi2,trivi3}), is that the objects of interest are highly virtual. It is not possible to calculate the off-shell decay rate or the imaginary part of the inverse of the propagator. With this in mind, Yeo-Yie Charng searched for reasonable bounds on Higgsplosion in $\phi^{4}$ in $3$D and $2$D~\cite{lattice1,lattice2}. Instead of looking at the decay rate, he looked at the cross section of two fermions going to $n$ scalars, as represented in Figure~\ref{fer}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{higgspersiona} \caption{Scattering of two fermions going to $n$ scalars.} \label{fer} \end{figure} Then, with the cross sections $\sigma_{n}$, he defined the inclusive cross section $\sigma$: \begin{align} \sigma = \sum_{n} \sigma_{n} \, . \end{align} For this object we do not expect the factorial growth, because the Higgspersion mechanism would dominate. Because the computation of $\sigma$ is hard, he finds an inequality involving an object that is easy to compute on the lattice: \begin{align} \int \mathrm{d}s\, \sigma(s) \leq \left(\frac{1}{Z'} -1\right) \, , \end{align} where $Z'$ is defined as: \begin{align} \frac{\partial (G^{(2)})^{-1}}{\partial p^{2}} \Big\vert_{p^{2}=0} = \frac{1}{Z'} \, . \end{align} This bound is non-perturbative in nature and constrains how large the integral of the total cross section can be. This is what we expect from unitarity: no growing cross sections at arbitrary energy. He proceeded to simulate $\phi^{4}$ to obtain the values of $Z'$ and see how strong a bound we have. For this object, in $3$D, the result that he gets at the 85\% confidence level is: \begin{align} \int \mathrm{d}s\, \sigma(s) \leq 0.026 \, . \end{align} So we expect the inclusive cross section to be small. By the predictions of Higgsplosion, we expect this cross section to be exponentially suppressed at large $s$, which makes this possible. For the details of the simulation, it is recommended to read the original paper~\cite{lattice2}.
This result on its own does not say much about the possibility of Higgsplosion, only that it is consistent, provided Higgspersion works as intended to restore unitarity. Maybe a more in-depth investigation on the lattice could settle any doubt about the possibility of such a mechanism. Calculating the imaginary part of the inverse of the off-shell propagator could do the trick. Until then, the Higgsplosion mechanism checks out, at least on the lattice. \section{Analysis of the Decay Rate Expression: What We Can Say About $E^{*}$} \label{s10} The focus of this section is trying to find a value for the Higgsplosion scale $E^{*}$. This turns out to be a hard problem, because we do not know in what region of parameter space Eq.~(\ref{eqimpor}) is trustworthy. Specifically, we need to know how large $\epsilon$ can be. Before studying Eq.~(\ref{eqimpor}), let us look at the older result using a semiclassical approximation in the small $g$ limit, Eq.~(\ref{smallg}). Here we want to show that in this limit we do not have any exponential growth. To do that, we can fix a coupling and plot different values of $\epsilon$, trying to find a value of $n$ that makes $F(g,\epsilon)$ positive. We will choose three values of the coupling $\lambda$: $0.1$, $0.01$ and $0.001$. These values are inside the perturbative regime for $\lambda \rightarrow 0$. The problem with the small $g$ limit is that the maximum allowed value of $n$ is such that $g$ is at most of order one. For larger values we cannot say anything, and a more in-depth investigation is necessary. These are the safest values to ensure that the expression stays inside its domain of validity. With that in mind, we have the plots for the different couplings in Figure~\ref{smalll}. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{01small} \caption{Plot of the different values of $\epsilon$ for the small coupling expression of the decay rate. The maximum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda =0.1$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{001small} \caption{Plot of the different values of $\epsilon$ for the small coupling expression of the decay rate. The maximum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.01$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{0001small} \caption{Plot of the different values of $\epsilon$ for the small coupling expression of the decay rate. The maximum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.001$.} \end{subfigure} \caption{The colored region is the trusted region between $\epsilon=0.1$ and $\epsilon= 0.001$; in reality, the curves extend all the way to minus infinity as $\epsilon$ decreases. We chose to plot only the regions closer to zero.} \label{smalll} \end{figure} In these three plots it is clear that the function $F(g,\epsilon)$ never becomes positive. This indicates that there is no exponential growth and, by consequence, no Higgsplosion. The story is different when we go to the strong $g$ limit. In this limit, we need to use Eq.~(\ref{eqimpor}) with values of $n$ such that $\lambda n$ is at minimum one. We can now plot the different values of $\epsilon$ for each choice of $\lambda$, just as before, as represented in Figure~\ref{stronggg}. \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{01strong} \caption{Plot of the different values of $\epsilon$ for the strong coupling expression of the decay rate. The minimum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.1$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{001strong} \caption{Plot of the different values of $\epsilon$ for the strong coupling expression of the decay rate. The minimum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.01$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{0001strong} \caption{Plot of the different values of $\epsilon$ for the strong coupling expression of the decay rate. The minimum allowed value of $n$ is such that $\lambda n = 1$, fixing $\lambda=0.001$.} \end{subfigure} \caption{The colored region is the trusted region between $\epsilon=0.1$ and $\epsilon= 0.001$; in reality, the curves extend all the way to minus infinity as $\epsilon$ decreases. We chose to plot only the regions closer to zero.} \label{stronggg} \end{figure} In the large $g$ region we can get exponential growth. There is always a point where the behavior shifts from exponential suppression to exponential growth. This point is what characterizes the Higgsplosion scale $E^{*}$. For each $\lambda$ we have a different Higgsplosion scale. The precise determination of this scale is only possible if we know the maximum allowed value of $\epsilon$. We chose a region smaller than $\epsilon = 0.1$ to find values, but in principle this could be even smaller. The smaller the $\epsilon$, the larger the value of $E^{*}$. For instance, fixing $\lambda = 0.1$ and $\epsilon = 0.1$, the value of the Higgsploding scale is approximately: \begin{align} E^{*} \approx 300\,m \, . \end{align} For $\lambda = 0.01$ and $\epsilon = 0.1$ the value of the Higgsploding scale is approximately: \begin{align} E^{*} \approx 3000\,m \, . \end{align} Finally, for $\lambda = 0.001$ and $\epsilon = 0.1$ we get: \begin{align} E^{*} \approx 30000\,m \, . \end{align} This behavior is similar to what is predicted in~\cite{precision} for the general form of the Higgsploding scale: \begin{align} E^{*} = C \frac{m}{\lambda} \, . \end{align} In our case we obtain $C \approx 30$, because we fixed $\epsilon = 0.1$. For the naturalness argument to be respected, this constant should be close to one; the value found here comes from the choice of $\epsilon$, and possibly larger choices of $\epsilon$ would give a more natural value for this constant. To obtain more precise values for the Higgsplosion scale, we need to understand the limitations of Eq.~(\ref{eqimpor}). If one obtains the next order in the $\epsilon$ corrections, this becomes an achievable task. Now that we have more or less covered what we can say about the Higgsplosion scale, let us move on to the discussion of the applicability of perturbation theory and a new interpretation of the Higgsplosion mechanism, using two toy models. \section{0-Dimensional Case Study} Let us try to understand better the approximations that we are making. The case of study is a modification of the integral in Eq.~(\ref{integral0}) (we extend the domain of integration, reinstate a mass parameter $m$, and normalize the quartic term with $4!$): \begin{align} Z[m,\lambda] = \int_{-\infty}^{\infty} \mathrm{d}x\, e^{-m\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4!}} \, .
\end{align} Ultimately we are interested in calculating the expectation value of $2N$ ``fields'': \begin{align} A[m,\lambda, N] = \int_{-\infty}^{\infty} \mathrm{d}x\, x^{2N} e^{-m\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4!}} \, . \end{align} These are the ``$2N$-point Green functions'' (moments) of a zero dimensional field theory. We could try to work with the connected amplitudes (cumulants); however, because we are interested in testing the approximation methods, working with either is similar. We can relate $A[m,\lambda,N]$ and $Z[m,\lambda]$ by taking derivatives with respect to $m$. This case is useful because we can in fact solve the integral and compare with the approximations. The first approximation that we will do is small $\lambda$, for both functions. We want to see if this small $\lambda$ approximation can be trusted in the high multiplicity limit. For $Z[m,\lambda]$ the perturbative expansion can be found easily by expanding the exponential and exchanging integration and summation: \begin{align} Z_{p}[m,\lambda] \thicksim \sum_{n=0}^{\infty} \left( \left( \frac{2}{m} \right)^{1/2} \left( -\frac{1}{6m^{2}}\right)^{n} \frac{\Gamma(2n+\frac{1}{2})}{n!} \right) \lambda^{n} \quad \text{as} \quad \lambda \rightarrow 0 \, . \end{align} The expansion for the amplitude is analogous, and we obtain: \begin{align} A_{p}[m,\lambda,N] \thicksim \sum_{n=0}^{\infty} \left( \left( \frac{2}{m}\right)^{\frac{1+2N}{2}} \left( -\frac{1}{6m^{2}} \right)^{n} \frac{\Gamma(2n+N+\frac{1}{2})}{n!} \right) \lambda^{n} \quad \text{as} \quad \lambda \rightarrow 0 \, . \end{align} Both expressions are valid when $\lambda$ is small compared to $m^{2}$. Let us see if the partial sums of these series are a good approximation to the full result, which in this case we can compute: \begin{align} Z[m,\lambda] = \sqrt{\frac{3m}{\lambda}} e^{\frac{3m^{2}}{4\lambda}} K\left(\frac{1}{4},\frac{3m^{2}}{4\lambda}\right) \, , \end{align} \begin{align} A[m,\lambda,N] = 2^{-\frac{3}{4} + \frac{3N}{2}} 3^{\frac{1}{4}+\frac{N}{2}} \lambda^{-\frac{3}{4}-\frac{N}{2}} \Big( &\sqrt{2\lambda}\, \Gamma(\tfrac{1}{4} + \tfrac{N}{2})\, F_{11}\left( \tfrac{1}{4}+\tfrac{N}{2},\tfrac{1}{2},\tfrac{3m^{2}}{2\lambda}\right) \nonumber \\ &- 2\sqrt{3m^{2}}\, \Gamma(\tfrac{3}{4} + \tfrac{N}{2})\, F_{11}\left( \tfrac{3}{4}+\tfrac{N}{2},\tfrac{3}{2},\tfrac{3m^{2}}{2\lambda}\right) \Big) \, , \end{align} where $F_{11}[a,b,z]$ is the confluent hypergeometric function and $K[a,z]$ is the modified Bessel function of the second kind. To test the approximations, we set $m=1$ and scan different scales for the coupling. The first test is of the approximation for the partition function, which is expected to be good for small $\lambda$; some values are listed in Table~\ref{tabela1}. For all the analysis done here we will use the notation $X^{(n)}$, meaning the partial sum up to the $n$th term of the series for $X$. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $\lambda$ & 0.01 & 0.1 & 1 \\ \hline $Z_{ap}^{(1)}$ & 2.50350 & 2.47529 & 2.19329 \\ \hline $Z_{ap}^{(2)}$ & 2.50352 & 2.47758 & 2.42176 \\ \hline $Z_{ap}^{(3)}$ & 2.50352 & 2.47726 & 2.10762 \\ \hline $Z_{ex}$ & 2.50352 & 2.4773 & 2.3033 \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the partition function.} \label{tabela1} \end{table} Considering the approximation for $Z[\lambda]$, we can see that going to larger values of $\lambda$, the partial sums start to become worse.
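This comparison is easy to reproduce numerically; a minimal sketch with $m=1$, using the Bessel-function expression for the exact result and the alternating partial sums written above:
\begin{verbatim}
# Reproducing the comparison in the table above (m = 1): exact Z from
# the Bessel-function formula versus the alternating partial sums.
import mpmath as mp

def z_exact(lam):
    a = 3 / (4 * lam)
    return mp.sqrt(3 / lam) * mp.exp(a) * mp.besselk(mp.mpf(1) / 4, a)

def z_partial(lam, order):  # partial sum up to order lambda^order
    return sum(mp.sqrt(2) * (-lam / 6) ** k * mp.gamma(2 * k + 0.5)
               / mp.factorial(k) for k in range(order + 1))

for lam in (0.01, 0.1, 1.0):
    sums = ", ".join(mp.nstr(z_partial(lam, k), 6) for k in (1, 2, 3))
    print(f"lambda = {lam}:  exact = {mp.nstr(z_exact(lam), 6)}  "
          f"partial = {sums}")
\end{verbatim}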
In this case, it is not so far off from the exact value; let us see if the same holds for the amplitude $A[\lambda,N]$. It is important to remember that this analysis is only for the partial sums, and different summation machinery could improve these results. When we introduce different values of $N$, the approximation starts to break down, signaling that the appropriate coupling is indeed $\lambda N = g$, as shown in Tables~\ref{tabela2},~\ref{tabela3} and~\ref{tabela4}. For all the numerical values we set $m=1$. \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & $0.1$ & $1$ & $10$ & $30$ & $60$ & $100$ \\ \hline $N$ & $1$ & $10$ & $100$ & $300$ & $600$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.1]$ & $2.50663$ & $1.641 \times 10^{9}$ & $1.671 \times 10^{187}$ & $5.0883 \times 10^{703}$ & $3.0313 \times 10^{1587}$ & $1.9279 \times 10^{2867}$ \\ \hline $A_{ap}^{(1)}[0.1]$ & $2.55329$ & $4.944 \times 10^{9}$ & $2.8576 \times 10^{189}$ & $7.6885 \times 10^{706}$ & $1.8251 \times 10^{1591}$ & $3.2199 \times 10^{2871}$ \\ \hline $A_{ap}^{(2)}[0.1]$ & $2.68385$ & $9.5886 \times 10^{9}$ & $2.5401 \times 10^{191}$ & $5.8860 \times 10^{709}$ & $5.3124 \times 10^{1594}$ & $2.699 \times 10^{2875}$ \\ \hline $A_{ex}[0.1]$ & $2.36727$ & $3.6169 \times 10^{8}$ & $4.7914 \times 10^{161}$ & $2.1524 \times 10^{580}$ & $7.1316 \times 10^{1271}$ & $2.5246 \times 10^{2250}$ \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.1$.} \label{tabela2} \end{table} \begin{table}[h!] \centering {\small \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & $0.01$ & $0.1$ & $1$ & $3$ & $6$ & $10$ \\ \hline $N$ & $1$ & $10$ & $100$ & $300$ & $600$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.01]$ & $2.5066$ & $1.6911 \times 10^{9}$ & $1.671 \times 10^{187}$ & $5.0883 \times 10^{703}$ & $3.0313 \times 10^{1587}$ & $1.9279 \times 10^{2867}$ \\ \hline $A_{ap}^{(1)}[0.01]$ & $2.5222$ & $1.9714 \times 10^{9}$ & $3.008 \times 10^{188}$ & $7.7343 \times 10^{705}$ & $1.8278 \times 10^{1590}$ & $3.2216 \times 10^{2870}$ \\ \hline $A_{ap}^{(2)}[0.01]$ & $2.5225$ & $2.0178 \times 10^{9}$ & $2.8123 \times 10^{189}$ & $5.9557 \times 10^{707}$ & $5.5977 \times 10^{1592}$ & $2.7024 \times 10^{2873}$ \\ \hline $A_{ex}[0.01]$ & $2.4911$ & $1.3521 \times 10^{9}$ & $3.1363 \times 10^{181}$ & $5.2283 \times 10^{665}$ & $5.8780 \times 10^{1471}$ & $2.8990 \times 10^{2614}$ \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.01$.} \label{tabela3} \end{table} \begin{table}[h!]
\centering {\small \begin{tabular}{|c|c|c|c|c|c|c|} \hline $g$ & $0.001$ & $0.01$ & $0.1$ & $0.3$ & $0.6$ & $1$ \\ \hline $N$ & $1$ & $10$ & $100$ & $300$ & $600$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.001]$ & $2.50663$ & $1.6911 \times 10^{9}$ & $1.671 \times 10^{187}$ & $5.0883 \times 10^{703}$ & $3.0313 \times 10^{1587}$ & $1.9279 \times 10^{2867}$ \\ \hline $A_{ap}^{(1)}[0.001]$ & $2.50819$ & $1.6741 \times 10^{9}$ & $4.5119 \times 10^{187}$ & $8.1922 \times 10^{704}$ & $1.8551 \times 10^{1589}$ & $3.2389 \times 10^{2869}$ \\ \hline $A_{ap}^{(2)}[0.001]$ & $2.50820$ & $1.6746 \times 10^{9}$ & $7.0239 \times 10^{187}$ & $6.6976 \times 10^{705}$ & $5.7149 \times 10^{1590}$ & $2.7316 \times 10^{2871}$ \\ \hline $A_{ex}[0.001]$ & $2.50506$ & $1.6085 \times 10^{9}$ & $3.2239 \times 10^{186}$ & $5.2109 \times 10^{697}$ & $2.1500 \times 10^{1565}$ & $7.0608 \times 10^{2810}$ \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.001$.} \label{tabela4} \end{table} It is easy to see that for $\lambda = 0.1$ only the $N=1$ case is close to the exact result, and going to larger $N$ only makes things worse. For $\lambda = 0.001$ we can go up to $N=100$ and still get a result of the correct order of magnitude. This shows that besides $\lambda$ being small, $g$ needs to be controlled as well. We can even do an extreme test: \begin{align} \lambda N = 0.0001 \, , \quad \lambda = 0.000001 \, ,\quad N=100 \, . \end{align} For this case, the exact answer and the partial sum including the first correction are close: \begin{align} A_{ex} = 1.66816 \times 10^{187} \, , \end{align} \begin{align} A_{ap}^{(1)} = 1.6994 \times 10^{187} \, , \end{align} showing that the true perturbation parameter is $\lambda N = g$. We can make different approximations for this integral to explore different limits. The first thing that we can do is a strong coupling expansion. For that we need to reorganize the integral; setting $m=1$ again, we do the substitution: \begin{align} z= \lambda^{1/4} x \, , \end{align} followed by a renaming of the coupling: \begin{align} c = \frac{1}{\sqrt{\lambda}} \, , \end{align} such that the integral now is: \begin{align} A[c,N] = \sqrt{c} \int_{-\infty}^{\infty} \mathrm{d}z\, c^{N} z^{2N} e^{- \frac{z^{4}}{4!} - c \frac{z^{2}}{2}} \, . \end{align} The procedure will be the same: we can solve it exactly and do an expansion for small $c$, which is equivalent to large $\lambda$: \begin{align} A[c,N] = \frac{2}{2N+1} 6^{\frac{N}{2}+\frac{1}{4}} c^{N + \frac{1}{2}} \Gamma(\frac{3}{2}+N) U \left( \frac{1}{4}+\frac{N}{2},\frac{1}{2},\frac{3c^{2}}{2} \right) \, , \end{align} \begin{align} A[c,N] \thicksim \sum_{n=0}^{\infty} \left( 2^{\frac{2n+6N-1}{4}} 3^{\frac{1+2N+2n}{4}} \Gamma(\frac{1}{4} + \frac{n+N}{2})(-1)^{n} \right)\frac{c^{n+N+\frac{1}{2}}}{n!} \quad \text{as} \quad c \rightarrow 0 \, , \end{align} where $U(a,b,z)$ is the confluent hypergeometric function of the second kind. We can test this approximation for different values of $c$ and $N$, as shown in Tables~\ref{tabela5},~\ref{tabela6} and~\ref{tabela7}. \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|c|l|} \hline $\lambda$ & 10000 & 100 & 1 & 0.01 \\ \hline $c$ & 0.01 & 0.1 & 1 & 10 \\ \hline $A_{ap}^{(1)}[1]$ & 0.00652336 & 0.172028 & -5.39346 & -3596 \\ \hline $A_{ap}^{(2)}[1]$ & 0.00652635 & 0.181483 & 24.5033 & 90945.6 \\ \hline $A_{ap}^{(3)}[1]$ & 0.00652626 & 0.1786318 & -65.7756 & $-2.76392 \times 10^{6}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[1]$} & \multicolumn{1}{l|}{0.00652484} & \multicolumn{1}{l|}{0.177318} & \multicolumn{1}{l|}{1.72872} & 2.49116 \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $N=1$ in the strong coupling limit.} \label{tabela5} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|l|} \hline $\lambda$ & 10000 & 100 & 1 & 0.01 \\ \hline $c$ & 0.01 & 0.1 & 1 & 10 \\ \hline $A_{ap}^{(1)}[10]$ & $2.9328 \times 10^{-13}$ & $0.0044344$ & $-1.3902 \times 10^{9}$ & $-5.2795 \times 10^{20}$ \\ \hline $A_{ap}^{(2)}[10]$ & $2.9426 \times 10^{-13}$ & $0.0075253$ & $8.3837 \times 10^{9}$ & $ 3.0380 \times 10^{22}$ \\ \hline $A_{ap}^{(3)}[10]$ & $2.9420 \times 10^{-13}$ & $0.0056700$ & $-5.0286 \times 10^{10}$ & $-1.8249 \times 10^{24}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[10]$} & \multicolumn{1}{l|}{$2.9376 \times 10^{-13}$} & \multicolumn{1}{l|}{$0.0057133$} & \multicolumn{1}{l|}{$2.5129 \times 10^{6}$} & $1.1521 \times 10^{9}$ \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $N=10$ in the strong coupling limit.} \label{tabela6} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|l|} \hline $\lambda$ & 10000 & 100 & 1 & 0.01 \\ \hline $c$ & 0.01 & 0.1 & 1 & 10 \\ \hline $A_{ap}^{(1)}[100]$ & $1.51361 \times 10^{-69}$ & $-4.23803 \times 10^{31}$ & $-2.98781 \times 10^{133}$ & $-9.96931 \times 10^{234}$ \\ \hline $A_{ap}^{(2)}[100]$ & $1.56881 \times 10^{-69}$ & $1.32163 \times 10^{32}$ & $5.22077 \times 10^{134}$ & $1.73547 \times 10^{237}$ \\ \hline $A_{ap}^{(3)}[100]$ & $1.55915 \times 10^{-69}$ & $-1.73165 \times 10^{32}$ & $-9.13326 \times 10^{135}$ & $-3.03593 \times 10^{239}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[100]$} & \multicolumn{1}{l|}{$1.53967 \times 10^{-69}$} & \multicolumn{1}{l|}{$ 1.03189 \times 10^{31}$} & \multicolumn{1}{l|}{$1.13733 \times 10^{125}$} & $3.13631 \times 10^{181}$ \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $N=100$ in the strong coupling limit.} \label{tabela7} \end{table} In this case, the approximation is good for small $c$ and moderate multiplicity, as expected. In terms of $\lambda$, this is an approximation for $\lambda \rightarrow \infty$. The approximation parameter in this case is $cN$, so we see that at high multiplicity we can still get a good approximation if we pick values of $cN$ that are small. Here we can explore a different regime, where both $\lambda$ and $N$ are large, provided that $cN$ is small. The semiclassical limit is something like $\lambda \rightarrow 0$ and $N \rightarrow \infty$; if we want to mimic this behavior, we need to make a different approximation. Given these two cases, we can see that the perturbation parameter is not the naive one: $N$ mixes with it and determines how good an approximation the partial sum of the series is. With this toy model, we can see that at high multiplicity the small coupling expansion is useless as a partial sum.
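The strong coupling side can be cross-checked the same way. A minimal sketch comparing a direct quadrature of $A[c,N]$ with the truncated small-$c$ series above; the values of $c$ and $N$ are illustrative:
\begin{verbatim}
# Quadrature check of the small-c (strong coupling) expansion above,
# for illustrative values of c and N; the error is controlled by c*N.
import mpmath as mp

def a_exact(c, n):
    peak = (12 * n + 1) ** 0.25  # integrand peaks near z = (12N)^(1/4)
    f = lambda z: z ** (2 * n) * mp.exp(-z ** 4 / 24 - c * z ** 2 / 2)
    return 2 * mp.sqrt(c) * c ** n * mp.quad(f, [0, peak, mp.inf])

def a_partial(c, n, order):  # partial sum of the series above
    return sum(2 ** ((2 * k + 6 * n - 1) / 4)
               * 3 ** ((1 + 2 * n + 2 * k) / 4)
               * mp.gamma(0.25 + (k + n) / 2) * (-1) ** k
               * mp.mpf(c) ** (k + n + 0.5) / mp.factorial(k)
               for k in range(order + 1))

for c, n in [(0.1, 1), (0.1, 10), (1.0, 10)]:
    ratio = a_partial(c, n, 3) / a_exact(c, n)
    print(f"c*N = {c * n:5.1f}   partial(3)/exact = {mp.nstr(ratio, 4)}")
\end{verbatim}
The ratio stays close to one while $cN \lesssim 1$ and degenerates completely for $cN = 10$, in line with the tables above.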
If we use Pad\'e approximants on the coefficients, we get a better approximation, but this is not the spirit of the interpretation of exponential growth of amplitudes at tree level. Using this example, we can see that it is not possible to draw this conclusion using perturbation theory only. This leaves us with only the semiclassical calculation indicating the exponential growth of the decay rate. If we want to understand this approximation, we can try to make a saddle point approximation of this integral, as suggested in~\cite{Monin-aug-18}, using the effective action: \begin{align} \label{int10} A[m,\lambda, N] = \int_{-\infty}^{\infty} \mathrm{d}x\, e^{-m\frac{x^{2}}{2} - \lambda \frac{x^{4}}{4!} + 2N \ln|x|} \, . \end{align} Because we want to make a saddle point approximation with $N$ as the large parameter, we change variables to arrange the integral into the canonical form: \begin{align} x = \sqrt{N}z \, . \end{align} Then Eq.~(\ref{int10}) becomes: \begin{align} A[m,\lambda , N ] = \sqrt{N} e^{N\ln(N)} \int_{-\infty}^{\infty} \mathrm{d}z\, e^{N W(z)} \, , \end{align} where $W(z)$ is the function whose saddles we will expand around: \begin{align} W(z) = -m \frac{z^{2}}{2} - g \frac{z^{4}}{4!} +2 \ln|z| \, . \end{align} In this action we used the definition of the 't Hooft-like coupling $g$. There is a final step before starting the approximation. The integral is symmetric under parity, which means that the saddle point expansion will only pick up one side; because of that, we can write the integral as two copies of one of the sides. This guarantees that we are making the right approximation for this integral: \begin{align} \label{int20} A[m,\lambda,N] = 2 \sqrt{N}e^{N\ln(N)} \int_{0}^{\infty} \mathrm{d}z\, e^{N W(z)} \, . \end{align} Because we will do a numerical check on this approximation, we use the system where $m=1$, so we can compare with the results obtained so far. Now we need to look for saddle points of $W(z)$: \begin{align} W'(z) = -z - g \frac{z^{3}}{3!} + \frac{2}{z} =0 \, . \end{align} This equation has four solutions; we pick only the positive $z_{0}$ solutions and then choose the one where the function has a maximum: \begin{align} W''(z_{0}) = -1 - g \frac{z^{2}_{0}}{2} - \frac{2}{z_{0}^{2}} < 0 \, . \end{align} Using these conditions, we get the saddle point around which we will expand the integral: \begin{align} z_{0} = \sqrt{\frac{-3 + \sqrt{9 + 12g}}{g}} \, . \end{align} Proceeding with the saddle point approximation, we do a change of variables in Eq.~(\ref{int20}): \begin{align} z= z_{0} + \frac{y}{\sqrt{N}} \, . \end{align} Expanding the exponent around the saddle point gives: \begin{align} N W(z) = NW(z_{0}) + \frac{y^{2}}{2} W''(z_{0}) + \frac{y^{3}}{6\sqrt{N}} W'''(z_{0}) + \dots \, . \end{align} Since we are making a large $N$ expansion, we can expand the exponential in powers of $\frac{1}{\sqrt{N}}$: \begin{align} e^{N W(z)} = e^{N W(z_{0})} e^{\frac{y^{2}}{2}W''(z_{0})} \left(1 + \frac{y^{3}}{6\sqrt{N}}W'''(z_{0}) + \frac{3y^{4}W''''(z_{0}) + y^{6}(W'''(z_{0}))^{2}}{72N} + \dots \right) \, . \end{align} In the large $N$ limit we can extend the domain of integration; this can be done because the integral is localized.
The expansion of Eq.~(\ref{int10}) then becomes: \begin{align} A[\lambda,N] = 2 e^{N(W(z_{0})+\ln(N))} \int_{-\infty}^{\infty} \mathrm{d}y\, e^{\frac{y^{2}}{2}W''(z_{0})} \left(1 + \frac{y^{3}}{6\sqrt{N}}W'''(z_{0}) + \frac{3y^{4}W''''(z_{0}) + y^{6}(W'''(z_{0}))^{2}}{72N} + \dots \right) \, . \end{align} Doing the Gaussian integrals, we get the saddle point approximation. Here we retain terms only up to $N^{-1}$, but the rest of the terms can be easily generated following the steps shown: \begin{align} A[\lambda,N] = 2 e^{N(W(z_{0})+\ln(N))} \sqrt{\frac{2\pi}{-W''(z_{0})}} \left( 1 + \frac{W''''(z_{0})}{8(-W''(z_{0}))^{2}N} + \frac{5 (W'''(z_{0}))^{2}}{24(-W''(z_{0}))^{3}N} + \dots \right) \, . \end{align} The expressions for the different derivatives of $W$ at the point $z_{0}$ are: \begin{align} -W(z_{0}) = \frac{-3 + 2g + \sqrt{9 +12g} - 4g \ln(\frac{-3 +\sqrt{9+12g}}{g})}{4g} \, , \end{align} \begin{align} -W''(z_{0}) = \frac{6 + 8g - 2 \sqrt{9+12g}}{\sqrt{9 + 12g}-3} \, , \end{align} \begin{align} W'''(z_{0}) = \frac{-8g + 6\left( \sqrt{9+12g}-3\right)}{g\left(\frac{-3+ \sqrt{9+12g}}{g}\right)^{3/2}} \, , \end{align} \begin{align} W''''(z_{0} )= -g \left( 1 + \frac{12g}{(-3+ \sqrt{9+12g})^{2}} \right) \, . \end{align} Let us start investigating how good this approximation is in the strong coupling semiclassical limit: \begin{align} N \rightarrow \infty \, , \quad \lambda \rightarrow 0 \, , \quad g \rightarrow \infty \, . \end{align} For this, we construct the comparison for the following values of the coupling: $\lambda = 0.1, 0.01, 0.001$. We scan this approximation in the large $g$ limit, which in principle is ideal to study the Higgsplosion mechanism. The values for these couplings are in Tables~\ref{tabela8},~\ref{tabela9} and~\ref{tabela10}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $N$ & $100$ & $1000$ & $10000$ \\ \hline $g$ & $10$ & $100$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.1]$ & $4.7939 \times 10^{161}$ & $1.1376 \times 10^{125}$ & $4.6363 \times 10^{79}$ \\ \hline $A_{ap}^{(1)}[0.1]$ & $4.7888 \times 10^{161}$ & $1.115 \times 10^{125}$ & $3.9322 \times 10^{79}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[0.1]$} & \multicolumn{1}{l|}{$4.7914 \times 10^{161}$} & \multicolumn{1}{l|}{$2.5246 \times 10^{2250}$} & \multicolumn{1}{l|}{$8.3371 \times 10^{25318}$} \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.1$ in the strong coupling limit of $g$.} \label{tabela8} \end{table} \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline $N$ & $1000$ & $10000$ & $100000$ \\ \hline $g$ & $10$ & $100$ & $1000 $ \\ \hline $A_{ap}^{(0)}[0.01]$ & $2.89919 \times 10^{2614}$ & $2.5247 \times 10^{2250}$ & $ 5.5863 \times 10^{1798}$ \\ \hline $A_{ap}^{(1)}[0.01]$ & $2.89888 \times 10^{2614}$ & $2.5197 \times 10^{2250}$ & $5.5015 \times 10^{1798}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[0.01]$} & \multicolumn{1}{l|}{$2.67897 \times 10^{2614}$} & \multicolumn{1}{l|}{$1.7379 \times 10^{29044}$} & \multicolumn{1}{l|}{$4.3090 \times 10^{293359}$} \\ \hline \end{tabular} \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.01$ in the strong coupling limit of $g$.} \label{tabela9} \end{table} \begin{table}[h!]
\centering {\small \begin{tabular}{|c|c|c|c|} \hline $N$ & $10000$ & $100000$ & $1000000$ \\ \hline $g$ & $10$ & $100$ & $1000$ \\ \hline $A_{ap}^{(0)}[0.001]$ & $1.89717 \times 10^{36142}$ & $7.33370 \times 10^{32503}$ & $3.6033 \times 10^{27989}$ \\ \hline $A_{ap}^{(1)}[0.001]$ & $1.89715 \times 10^{36142}$ & $7.31224 \times 10^{32503}$ & $3.5978 \times 10^{27989}$ \\ \hline \multicolumn{1}{|l|}{$A_{ex}[0.001]$} & \multicolumn{1}{l|}{$1.993025 \times 10^{30940}$} & \multicolumn{1}{l|}{$2.83770 \times 10^{312318}$} & \multicolumn{1}{l|}{$9.7164 \times 10^{3126099}$} \\ \hline \end{tabular} } \caption{Comparison between the partial sums and the exact result for the amplitude with $\lambda=0.001$ in the strong coupling limit of $g$.} \label{tabela10} \end{table} In these results, the huge numbers appear because we are not working with the connected amplitudes only. We can see that the saddle point is a good approximation for $g=10$ with $N=100$ and $1000$. For other values, the approximation is not useful as a partial sum. This indicates that the saddle point approximation is trustworthy in the $g \approx 10$ regime and deviates for larger values. With this result, we obtain three different regions of investigation. Ordinary perturbation theory for high multiplicity works when $\lambda N = g$ is small. Strong coupling perturbation theory in the high multiplicity case works for $\frac{N}{\sqrt{\lambda}}$ small, which means we can explore the high $\lambda$ limit. Finally, the saddle point approximation works best when $N$ is large, $\lambda$ is small, and $g$ is of order ten. If the semiclassical calculation for the decay rate follows this behavior, this could be bad news, because we want to explore larger values of $g$. It is worth remembering that this is just a toy model, and the real limitations of Eq.~(\ref{eqimpor}) cannot be extracted from it alone. What we can take from this is that perturbation theory is not useful as a partial sum in the high multiplicity limit, except in the ultra-perturbative regime. This is not the end of the story, and a useful approximation for large $g$ is still needed. Despite that, for moderate values of $g$ and $N$ the semiclassical approximation can be trusted, and this could be enough to realize the Higgsplosion mechanism. In the next section, we introduce the last toy model and a new interpretation of the Higgsplosion mechanism. \section{Exponential Decay of the Propagator: a String Inspired Toy Model} The appearance of an exponentially suppressed propagator is an unusual feature in a Quantum Field Theory. Most of the known cases have some form of non-locality in them. This does not mean that we cannot use these models to describe some physical system. Generally, in these theories there is a scale $\Lambda$ where the non-locality becomes dominant; below such a scale, the system is causal and behaves like a local theory. If we want to understand the Higgsplosion mechanism near the Higgsplosion scale $E^{*}$, we can try to use a toy model that mimics this exponential behavior. It turns out that there is a physical system where an exponential decay of the propagator occurs. This system appears in String Field Theory~\cite{string3}, and we do not need to fully understand the string side to use it; understanding String Field Theory would be a thesis on its own. We show the origins of this action just for the sake of consistency but, given the theory, we can treat it like any other Quantum Field Theory.
This feature of exponential decay of propagators is a common occurrence in String Theory and String Field Theory~\cite{uv}. The physical system is the Tachyon condensation in open strings~\cite{string1,string2,string3}. If an open string is attached to an unstable $D$-brane, then there is a tachyon in the spectrum. The tachyonic vacuum then describes a configuration where the unstable $D$-brane has decayed and disappeared. This feature is similar to what is proposed in this thesis about Higgsplosion entering a new phase where all the single-particle states are gone. Using String Field Theory, it was possible to show that indeed there was no particle excitation in this vacuum configuration. If we try to describe this process using local fields, in the level 0\footnote{Level expansion is the expansion in the different excitations of the String Field. This is justified because the couplings to the higher excitations decrease exponentially. Level 0 is just the tachyon, level 1 is the tachyon plus a massless vector boson, and so on.} sector of the open bosonic String Field Theory we have the following action:
\begin{align}
S_{tachyon} = \int \dd[26]{x} \left[ \frac{1}{2} \phi e^{-\frac{\Box}{M^{2}}}(\Box + \mu^{2})\phi - \frac{g}{3} \phi^{3} \right] \, ,
\end{align}
where $\phi$ is the tachyonic field with mass $m^{2}=-\mu^{2}$ that describes the instability of the $D$-brane. The exponential factor with the d'Alembertian operator gives the non-locality of this action, and the factor $M^{2}$ is the non-local scale. This action is studied in detail in~\cite{string1}. The propagator has the exponential decay feature:
\begin{align}
\Delta_{F}^{-1} = e^{\frac{p^{2}}{M^{2}}}(p^{2}-\mu^{2}) \, .
\end{align}
Here we study a toy model for the broken phase of the scalar field that has this exponential suppression. The action that we propose to work with is:
\begin{align}
S = \int \dd[4]{x} \left[ \frac{1}{2} \phi e^{-\frac{\Box}{(E^{*})^{2}}}(\Box - m^{2})\phi - \frac{\lambda}{4!} \phi^{4} \right] \, ,
\end{align}
in the broken phase where $m^{2}=-\mu^{2}$. The minimum configuration is the same as in the local version because the field solution is constant:
\begin{align}
\phi_{\min}^{2} = \frac{3! \mu^{2}}{\lambda} \, .
\end{align}
Doing a shift in the field to remove the expectation value:
\begin{align}
\sigma = \phi - \phi_{\min} \, ,
\end{align}
we can write the action as:
\begin{align}
S = \int \dd[4]{x} \left[ \frac{1}{2} \sigma \left( e^{-\frac{\Box}{(E^{*})^{2}}}(\Box + \mu^{2}) -3\mu^{2} \right)\sigma + \mathcal{L}_{int} \right] \, .
\end{align}
We will ignore the interaction part because we want to analyze the spectrum of the free theory. The propagator in this phase is similar to the one obtained before:
\begin{align}
\Delta_{F}^{-1} = e^{\frac{p^{2}}{(E^{*})^{2}}}(-p^{2}+\mu^{2})-3\mu^{2} \, .
\end{align}
We want to look for the pole structure of this propagator. If we have more than one pole, this will mean the appearance of a ghost in the spectrum, and the theory will not be valid above the mass of the ghost. Doing a change of variables to find these poles:
\begin{align}
-k^{2} = x \mu^{2} \, ,
\end{align}
the equation that we need to solve is:
\begin{align} \label{eqeq1}
P(\nu,x)=e^{-\nu x}(x+1)-3 =0 \, .
\end{align}
In this equation we have $\nu = \frac{\mu^{2}}{(E^{*})^{2}}$, and the mass of the excitation is:
\begin{align}
m^{2} = x \mu^{2} \, .
\end{align}
It is possible to see the behavior of this function for varying $\nu$ in Figure~\ref{fig}.
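The pole structure can also be explored numerically. The Python sketch below (illustrative only, not part of the original analysis) locates the real roots of Eq.~(\ref{eqeq1}) for a few values of $\nu$; requiring a double root, $P = \partial_{x}P = 0$, reduces to $e^{\nu-1} = 3\nu$, which fixes the critical scale:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# P(nu, x) = exp(-nu*x)*(x+1) - 3, with nu = mu^2/(E*)^2 and
# m^2 = x*mu^2 at a root (the pole equation above).
def P(x, nu):
    return np.exp(-nu * x) * (x + 1.0) - 3.0

# P is maximal at x_max = 1/nu - 1; the double-root condition
# P(x_max) = 0 reduces to exp(nu - 1) = 3*nu.
nu_crit = brentq(lambda nu: np.exp(nu - 1.0) - 3.0 * nu, 1e-3, 0.33)
print(f"nu_crit ~ {nu_crit:.5f}")         # ~ 0.14123

for nu in (0.10, 0.30):
    x_max = 1.0 / nu - 1.0
    if P(x_max, nu) > 0:   # two real poles: particle and ghost
        particle = brentq(P, 0.0, x_max, args=(nu,))
        ghost = brentq(P, x_max, 1e4, args=(nu,))
        print(f"nu={nu}: particle x={particle:.2f}, ghost x={ghost:.2f}")
    else:                  # no pole: the "Higgsploded" region
        print(f"nu={nu}: no particle in the spectrum")

# Demanding no particles gives E* <= mu/sqrt(nu_crit):
print(f"E* ~ {500.0 / np.sqrt(nu_crit):.0f} GeV for mu = 500 GeV")
\end{verbatim}
The same condition $E^{*} \leq \mu/\sqrt{\nu_{crit}}$ is what produces the scales quoted below.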
It has a distinct feature similar to the tachyonic case. There is a region where there is no pole ($\nu \gtrsim 0.14$). In this region there is no particle in the spectrum, and we can interpret this as the theory having Higgsploded. If we want to describe the theory above this scale, we would need to change the degrees of freedom. There is one special point where there is exactly one particle in the spectrum ($\nu_{crit} \approx 0.14$). Finally, there exists a region where we have a particle and a ghost, the ghost starting near infinity and coming closer as $\nu$ approaches the transition ($0<\nu \lesssim 0.14$).
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{string}
\caption{Plot of Eq.~(\ref{eqeq1}) for different scales $\nu =0.1, 0.14123$ and $0.3$.}
\label{fig}
\end{figure}
This is very similar to the spectrum found in~\cite{string1} that indicates the decay of the $D$-brane. In the Higgsplosion scenario, this could indicate that the system ``decays'' and we need another description above that scale. This is not as strong as UV finiteness but is a remarkable feature nonetheless. It could also explain the similarities to a non-local theory pointed out in~\cite{Monin-aug-18}. Demanding that the theory has no particles, we can estimate the values of $E^{*}$ for $\mu$ of the order of the weak scale:
\begin{align}
E^{*} \approx 1.3 \, \text{TeV} \quad \text{for} \quad \mu = 500 \, \text{GeV} \, ,
\end{align}
\begin{align}
E^{*} \approx 0.66 \, \text{TeV} \quad \text{for} \quad \mu = 250 \, \text{GeV} \, ,
\end{align}
\begin{align}
E^{*} \approx 2.6 \, \text{TeV} \quad \text{for} \quad \mu = 1 \, \text{TeV} \, .
\end{align}
These scales are too low, but this is just a toy model for the behavior proposed in the Higgsplosion mechanism. If we obtain a better expression for the off-shell propagator, we can try to model it inside a non-local theory of this kind to investigate more deeply the dynamics near this scale. This example shows how strange the exponential decay of a propagator is, and it suggests the possible interpretation of the theory ``decaying''. At scales much lower than $E^{*}$, we can treat it using the local Lagrangian without a problem. As we get closer to $E^{*}$, we need to use this Lagrangian to discuss the physics. Finally, past this scale, we would need another description in terms of the remaining degrees of freedom of the theory. The non-local features emerging near the Higgsplosion scale can be interpreted as the $n$ particles behaving collectively to form a structure. In this interpretation, the scalar mass is shielded, despite the absence of UV finiteness, because of the $n$-particle structure formed, much as a composite scalar is protected from the UV.
\chapter{Introduction}
\label{c1}
\pagenumbering{arabic}
The discovery of the Higgs Boson~\cite{higgs1} opened a new era for particle physics~\cite{higgs2}. It was the last fundamental piece necessary for the Standard Model (SM) to work as intended. Since its discovery, we have entered a new phase of precision measurement and confirmation of the Standard Model. Despite its great achievements, it is known that the Standard Model cannot be the full story. This is motivated by the lack of understanding of observational results that come from other sources, such as Dark Matter~\cite{dm1,dm2} and Dark Energy~\cite{DE}, which are not accounted for by the Standard Model.
Even the lack of a theoretical understanding of what Quantum Gravity~\cite{gravity1,gravity2,gravity3,gravity4} looks like can enter as evidence for the need for Beyond Standard Model (BSM) physics. These results indicate that maybe the Standard Model is not the end. Even if one ignores these hints, some unsolved puzzles can be identified already within the Standard Model. One of them is the fine-tuning problem associated with the weak scale~\cite{nat1,nat2}. Naively, it is expected that the squared mass parameter of a scalar particle should be of the order of the cutoff of the theory. In the Standard Model, the cutoff can be assumed to be the Planck scale $\Lambda_{p}$, the scale where Quantum Gravity becomes relevant:
\begin{align}
\Lambda_{p} \approx 10^{19} \, \text{GeV} \, .
\end{align}
This expectation arises because there is no symmetry to protect the theory from receiving large contributions to the squared mass term. The story is different for a fermion, where the presence of chiral symmetry in the $m_{f} \rightarrow 0 $ limit shields the fermions from being quadratically sensitive to the Ultra-Violet (UV). This does not occur for a scalar particle without any additional symmetry. Thus, the measured Higgs mass $m_{h}\approx 125$~GeV~\cite{higgs1}, together with the absence of other states or signs of new physics, surprisingly indicates the presence of a scalar much lighter than the cutoff. That means the occurrence of a fine-tuning of the contributions from BSM physics in such a way that the Higgs mass is small. The theory has large numbers that conspire to give a small physical contribution:
\begin{align}
m_{h}^{2} \approx m_{0}^{2} + \delta m_{BSM}^{2} \, .
\end{align}
There are a few potential solutions to the fine-tuning problem. The most famous are Supersymmetry~\cite{susy1,susy2,susy3,susy4} and Composite Higgs~\cite{comp1,comp2,comp3,comp4,comp5}. In this thesis, we review a new possible mechanism that can render the Higgs mass parameter small naturally and potentially make the Standard Model UV finite. This new mechanism is called Higgsplosion and was proposed in 2017 by Valentin V. Khoze and Michael Spannowsky~\cite{Khoze-higgsplosion,Khoze-jun-17}. This thesis aims to understand the proposal in detail and learn more about Quantum Field Theory at high multiplicity, as well as the applicability of ordinary perturbation theory in such a regime. The study of Higgsplosion is intimately related to the question of what happens to a Quantum Field Theory when we have high multiplicity processes. Inside the Quantum Field Theory framework, powerful tricks were developed~\cite{Brown-nov-92,Voloshin-apr-93} that made it possible to compute high multiplicity processes. In all of these computations, one finds the unusual feature that the leading order already grows exponentially with the number of final states. At the time, this was interpreted as implying that perturbation theory is not valid in this regime~\cite{Goldberg-may-90}. The leading term would not be a good approximation, and any partial sum would not reproduce the correct answer. In this thesis, this claim is reviewed, so we can understand precisely how perturbation theory works in such a regime of a Quantum Field Theory. The picture changed later when Son~\cite{Son-may-95} developed a semiclassical computation to obtain expressions for high multiplicity processes that, in principle, can be trusted in a fixed limit of the theory.
The decay rate for a high multiplicity process obtained in~\cite{Son-may-95} had an exponential form, but in its region of applicability it does not exhibit exponential growth. The next breakthrough came when these semiclassical computations were generalized to the strong 't Hooft-like coupling ($\lambda n$) regime for $\phi^{4}$ in the broken phase in (1+3)D. In such a limit, the same behavior of exponential growth of some objects at high multiplicity appears. This fact is what motivated Valentin V. Khoze and Michael Spannowsky to propose the Higgsplosion mechanism. It was not well understood what the limitations of their results were, and we discuss this in detail. These results gave strong evidence that Higgsplosion may happen at least in this model, which we also discuss. In the Higgsplosion mechanism, these results are used together with some basic Quantum Field Theory to show that unitarity is preserved even with this exponential growth, but at the price of the propagator vanishing exponentially fast at high energies. This exponential suppression renders loops finite, and the theory stays at a UV interacting fixed point. That is a strong claim, and the role of this thesis is to investigate it and understand better whether Higgsplosion can happen in a Quantum Field Theory. If this is true, then there will be consequences for the Higgs sector of the Standard Model that could explain the fine-tuning problem and, in some sense, UV complete the whole theory. Even if it turns out that it does not apply to the Standard Model, it could be right in some limiting case of other models, and we can learn more about Quantum Field Theory in a different regime. The structure of this thesis is the following. In chapter~\ref{c1}, we review the notation and tools that we use. In chapter~\ref{c2}, we calculate some high multiplicity amplitudes at the threshold (the limit where all outgoing particles are at rest) and explore beyond-threshold amplitudes. At the end of chapter~\ref{c2}, we show some recent results that we use later to discuss the possibility of Higgsplosion. We choose to focus more on the perturbative approach to see if Higgsplosion happens in this regime, but these results coming from the semiclassical calculation are useful to understand the current state of the Higgsplosion proposal. In chapter~\ref{c3}, we present the Higgsplosion mechanism itself and what it can bring to the table. After that, we discuss some problems with the claims of Higgsplosion, as well as the known criticism of it, and present some potential solutions. At the end of chapter~\ref{c3}, we present two toy models that are useful to understand the applicability of perturbation theory and a newly proposed interpretation of the Higgsplosion mechanism. We try to point out which directions are worth exploring to settle the open questions that have been raised about this mechanism.
\section{Toolbox}
\subsection{Green Functions}
In this thesis, we use different types of $n$-point correlators, so it is worth defining the notation here. These correlators are used to construct physical amplitudes through the standard LSZ reduction formula~\cite{qft}. Knowing these $n$-point functions, we can construct any S-matrix element of the theory. At this point, there is no mention of perturbation theory aside from the assumption of asymptotic states that enters LSZ\footnote{This excludes theories where we cannot separate the particles from the interaction, for instance, a confined system.}.
First we define the $n$-point function as:
\begin{align} \label{green}
G^{(n)}(x_{1},\dots,x_{n}) \equiv \ev{T \left(\phi(x_{1})\dots \phi(x_{n}) \right)}{\Omega} \, ,
\end{align}
where $T$ is the time ordering operator. We can use a diagrammatic representation for this object inside perturbation theory of the form represented in Figure~\ref{fazer1}.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{pert}
\caption{Diagrammatic representation for $n$-point Green functions inside perturbation theory.}
\label{fazer1}
\end{figure}
Given this Green function, we can define its connected part, in which, diagrammatically, all external points are connected (interacting):
\begin{align}\label{cone}
G_{c}^{(n)}(x_{1},\dots,x_{n}) \equiv G^{(n)}(x_{1},\dots,x_{n}) - \text{disconnected parts} \, .
\end{align}
We will see later that we can define a generating functional for these connected Green functions in such a way that we can work only with $G_{c}^{(n)}$. This definition is a generalization of the concept of cumulants in probability theory~\cite{cumu}. For example, in a theory with $\ev{\phi(x)}{\Omega}=0$, the $n=4$ case reads:
\begin{align}
G_{c}^{(4)}(x_{1},\dots,x_{4}) = G^{(4)}(x_{1},\dots,x_{4}) - G^{(2)}(x_{1},x_{2})G^{(2)}(x_{3},x_{4}) - G^{(2)}(x_{1},x_{3})G^{(2)}(x_{2},x_{4}) - G^{(2)}(x_{1},x_{4})G^{(2)}(x_{2},x_{3}) \, .
\end{align}
In particular, for $n=2$ (the propagator) all diagrams are connected:
\begin{align}
G_{c}^{(2)}(x_{1},x_{2}) = G^{(2)}(x_{1},x_{2}) \, .
\end{align}
Finally, we can define Green functions that cannot be separated into subprocesses by cutting a single line. These are called one-particle irreducible (1PI) Green functions: $G_{1PI}^{(n)}$. We will see how to obtain these objects using functional methods in Section~\ref{sfun}. They are the fundamental blocks that we can use to construct arbitrary processes. For instance, the process represented in Figure~\ref{not1} is not 1PI.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{not1pi}
\caption{Example of a process that is not 1PI.}
\label{not1}
\end{figure}
With these objects we can define their Fourier transform:
\begin{align}\label{furi}
(2\pi)^{4}\delta^{4}(p_{1}+\dots+ p_{n}) G_{c}^{(n)}(p_{1}, \dots, p_{n}) = \int \dd[4]{x_{1}}\dots \dd[4]{x_{n}} e^{-i(p_{1}\cdot x_{1}+\dots +p_{n}\cdot x_{n})}G_{c}^{(n)}(x_{1},\dots,x_{n}) \, .
\end{align}
Usually, we work in Fourier space with the convention that $+p$ are entering momenta. It can be seen above in Eq.~(\ref{furi}) that we have a total momentum conservation delta function. We can use these objects to compute off-shell amplitudes by picking any momentum in a physical amplitude and letting it be virtual. In other words, we work only with the Green function in momentum space and ignore the overall conservation delta function. The full propagator in momentum space is then:
\begin{align} \label{ca}
G^{(2)}(p,-p) \equiv G^{(2)}(p) \, ,
\end{align}
and with this we can define the amputated Green function, which plays an important role in constructing amplitudes:
\begin{align}
G_{amp}^{(n)} (p_{1},\dots,p_{n}) \equiv \prod_{1}^{n} \left( G^{(2)}(p_{i}) \right)^{-1} G_{c}^{(n)}(p_{1},\dots,p_{n}) \, .
\end{align}
Diagrammatically, we are removing all the external legs of an amplitude using the full propagator. This can be used to generalize LSZ to off-shell amplitudes. We will not re-derive LSZ, as this is standard textbook material~\cite{qft}.
Nonetheless, to understand this last statement let us consider the LSZ reduction formula for a real scalar field:
\begin{align} \label{lsz}
i T_{n \rightarrow m} (p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') = \lim_{p^{2}_{i} \rightarrow m^{2}}(2\pi)^{4} \delta^{4} \left( \sum_{1}^{n} p_{i} - \sum_{1}^{m} p_{i}' \right) \left( i\sqrt{Z_{\phi}} \right)^{-(n+m)} \times \\ \times (p_{1}^{2}-m^{2}) \dots (p_{m}'^{2}-m^{2}) G_{c}^{(n+m)}(p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') \, , \nonumber
\end{align}
where $Z_{\phi}$ is the wave function normalization, and $T_{n \rightarrow m}$ are the transfer matrix elements, related to the interacting part of the S-matrix:
\begin{align}
\mathbb{S} =\mathbb{I} + i\mathbb{T} \, .
\end{align}
The object to the right of the delta function in Eq.~(\ref{lsz}) is the invariant amplitude $\mathcal{M}$ for this process:
\begin{align}
T_{n \rightarrow m} (p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') = (2\pi)^{4} \delta^{4} \left( \sum_{1}^{n} p_{i} - \sum_{1}^{m} p_{i}'\right)(-i\mathcal{M}(n \rightarrow m)) \, .
\end{align}
The generalization goes as follows: instead of removing the propagator near the mass shell, we remove the full propagator and generate an off-shell amplitude. This amplitude becomes the physical amplitude when we put all particles on-shell. We can see that in Eq.~(\ref{lsz}) we are almost removing the inverse of the full propagator near the on-shell limit, up to the overall $Z_{\phi}$ factor:
\begin{align}
G^{(2)}(p) \overset{p^{2}\rightarrow m^{2}}{=} \frac{i Z_{\phi}}{p^{2}-m^{2}+i\epsilon} \, ,
\end{align}
where this $m$ is the physical mass, different from the bare mass $m_{0}$ that appears in the free propagator. The inverse is defined in such a way that:
\begin{align}
G^{(2)} \cdot \left( G^{(2)} \right)^{-1} = -i \, .
\end{align}
The off-shell generalization is direct: we replace these propagators near the on-shell limit with the full propagators and use the definition of the amputated Green function:
\begin{align}\label{genlsz}
-i\mathcal{M}(n \rightarrow m) = (i\sqrt{Z}_{\phi})^{(n+m)} G_{amp}^{(n+m)}(p_{1},\dots,p_{n} ; p_{1}', \dots, p_{m}') \, .
\end{align}
It is possible to see that the amputated Green functions are the off-shell amplitudes, aside from an overall normalization factor\footnote{If you work with a theory where the corrections to $Z_{\phi}$ vanish up to some loop order, then the amputated Green function is the amplitude directly up to the same order.}. We can use this definition to work out a case that is used in this thesis, the $1 \to 1$ ``scattering":
\begin{align}
\mathcal{M}(1 \rightarrow 1) = - Z_{\phi} \left( G^{(2)} \right)^{-1} \, .
\end{align}
This case is interesting because we can use the Optical Theorem~\cite{qft} to relate the imaginary part of this amplitude to the total decay rate:
\begin{align}
\Im \left( \mathcal{M}(1 \rightarrow 1) \right) = m \Gamma_{total}(p) = -Z_{\phi} \Im \left( (G^{(2)}(p))^{-1}\right) \, ,
\end{align}
where the first equality comes from the Optical Theorem, while the second one comes from the $1 \to 1$ ``scattering" amplitude obtained through the generalized LSZ, Eq.~(\ref{genlsz}). Using the inverse of the full propagator (we will comment further on this form in Section~\ref{secdy}):
\begin{align} \label{fullprop}
\left( G^{(2)} \right)^{-1} = -\left( p^{2}-m^{2} - \Sigma(p^{2}) \right) \, ,
\end{align}
we arrive at one of the most important relations that we will use in this thesis:
\begin{align}
\Gamma_{total}(p^{2}) = -\frac{Z_{\phi}}{m}\Im \left( \Sigma(p^{2}) \right) \, .
\end{align}
Thus, if the off-shell total decay rate grows exponentially, the imaginary part of $\Sigma(p^{2})$ grows as well. The physical total decay rate can be recovered by going to the mass shell. Working with off-shell quantities lets us gain more information about the theory in general, and it is a powerful tool in Quantum Field Theory. The exponential growth of the imaginary part of $\Sigma(p^{2})$ can in principle suppress the propagator, Eq.~(\ref{fullprop}), at some scale, even when the real part of this function is well behaved (we cannot say much about its real part). Now, let us investigate Eq.~(\ref{fullprop}) further, because this is a central point of this thesis.
\subsection{Dyson Resummation and the Full Propagator}
\label{secdy}
Here we discuss important properties of the full propagator presented in Eq.~(\ref{fullprop}). Using the interacting part of the 1PI two-point function, which we define as:
\begin{align} \label{1pid}
\Sigma (p^{2}) \, ,
\end{align}
we can recover all the information about the full propagator $G^{(2)}$. With $\Sigma(p^{2})$ we can reconstruct the full propagator as a geometric sum of these graphs\footnote{It is interesting to note that Eq.~(\ref{umpa}) is not so straightforward. It almost comes as a definition in Quantum Field Theory. This happens because we cannot reorganize terms in a divergent series, since summation is not infinitely associative and commutative. The 1PI organization of the perturbative series that appears in Quantum Field Theory is not immediate from perturbation theory alone. However, it is possible to justify this ordering using the definition of a Quantum Action, as we show in Section~\ref{sfun}. Therefore, it is a consequence of the meaning of what a Quantum Field Theory is, and not something additional to that.}:
\begin{align}\label{umpa}
G^{(2)}= G_{0}^{(2)}+ G^{(2)}_{0}(-i\Sigma) G^{(2)}_{0} + G^{(2)}_{0}(-i\Sigma) G^{(2)}_{0}(-i\Sigma) G^{(2)}_{0} + \dots \, .
\end{align}
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{1piprop}
\caption{Diagrammatic representation of the 1PI resummation.}
\label{diga}
\end{figure}
Diagrammatically this is represented in Figure~\ref{diga}. If we do this resummation, we get the representation of the full propagator used before in Eq.~(\ref{fullprop}). Indeed, one has:
\begin{align}
G^{(2)}(p^{2})= \frac{G^{(2)}_{0}(p^{2})}{1+i\Sigma(p^{2}) G_{0}^{(2)}(p^{2})} \, ,
\end{align}
and using the usual free propagator:
\begin{align}
G^{(2)}_{0} = \frac{i}{p^{2}-m^{2}_{0}+i\epsilon} \, ,
\end{align}
we get the representation of the full propagator as:
\begin{align}\label{propa}
G^{(2)} = \frac{i}{p^{2}-m^{2}_{0}-\Sigma(p^{2})+i\epsilon} \, .
\end{align}
It is important to note that the standard perturbation theory is inside $\Sigma(p^{2})$. We are free to do this resummation, and any non-perturbative effect is not lost but kept inside the 1PI function $\Sigma(p^{2})$. This resummation can be done even when $\Sigma(p^{2})$ is large, because we can interpret this geometric series as a divergent series representation of the full propagator. Being a divergent series representation with this geometric nature, we can, in this case, find a region where the series converges. For instance, in a given renormalization scheme we can fix $\Sigma(m^{2})=0$; summing this series near the mass shell condition then means that we are inside the convergence radius.
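As a toy illustration of this convergence statement (with made-up numbers, not tied to any particular theory), one can compare partial sums of Eq.~(\ref{umpa}) against the resummed form for a constant self-energy:
\begin{verbatim}
# Toy check of the geometric Dyson series (all numbers invented).
m0sq, Sigma = 1.0, 0.3

def G0(psq):
    return 1j / (psq - m0sq)          # free propagator, epsilon dropped

def G_partial(psq, order):
    # G0 + G0(-i Sigma)G0 + ... with 'order' insertions of Sigma.
    g0 = G0(psq)
    term, total = g0, g0
    for _ in range(order):
        term = term * (-1j * Sigma) * g0
        total += term
    return total

def G_full(psq):
    return 1j / (psq - m0sq - Sigma)  # the resummed closed form

# |Sigma/(psq - m0sq)| = 0.3 (inside radius) vs 3.0 (outside):
for psq in (2.0, 1.1):
    print(psq, abs(G_partial(psq, 50) - G_full(psq)))
\end{verbatim}
Inside the convergence radius the partial sums approach the closed form; outside, they run away even though the resummed expression stays finite, which is exactly the sense in which the series is a divergent representation of the full propagator.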
After we resum this series, the expression can be extended to the whole complex plane, just like the regular geometric series for $x=2$ or any other complex value:
\begin{align}
1+2+4+8+ \dots = \frac{1}{1-2} = -1 \, .
\end{align}
Typically, in a divergent series this is not the whole story, because of non-perturbative effects. For the expansion in $\Sigma(p^{2})$ given in Eq.~(\ref{umpa}), there is no effect of this kind. That does not mean that we solved the theory, because we do not know how to calculate $\Sigma(p^{2})$ exactly. This result, Eq.~(\ref{propa}), is one of the few non-perturbative results in Quantum Field Theory that we currently have. With that information, we can guarantee the relation between the imaginary part of the 1PI function and the total decay rate as defined above, provided that the theory is unitary. This relation is re-derived without reference to this resummation (called Dyson Resummation in the literature) at the end of this section, when we introduce functional methods. With this settled, we can start to investigate what else we can say about the full propagator, Eq.~(\ref{propa}).
\subsection{K\"all\'en-Lehmann Spectral Representation}
The object of interest here is the two-point function:
\begin{align} \label{eqalgo}
\ev{T \left( \phi(x)\phi(y) \right)}{\Omega} \, .
\end{align}
To explore this, we pick one time configuration and then, in the end, recover Eq.~(\ref{eqalgo}) by constructing the time ordering. Choosing an ordering where $x^{0} < y^{0}$, we have:
\begin{align}
\ev{\phi(x)\phi(y)}{\Omega} \, .
\end{align}
Although we cannot compute this exactly in an interacting theory, we can extract much information from it. If we introduce a complete set of states between the operators and use translation invariance, we can write:
\begin{align}
\ev{\phi(x)\phi(y)}{\Omega} = \sum_{n} \mel{\Omega}{\phi(0)}{n }\mel{n}{\phi(0)}{\Omega } e^{-i p_{n} \cdot (x-y)} \, ,
\end{align}
where $n$ runs over all states in the theory, discrete and continuous (the sum becomes an integral over the continuous states). As expected, this object depends only on the difference between the points $x$ and $y$. Now, we can introduce a delta function in a suggestive way to rewrite this expression:
\begin{align}
\ev{\phi(x)\phi(y)}{\Omega} = \int \frac{\dd[4]{p}}{(2\pi)^{4}} e^{-i p \cdot (x-y)} \left( \sum_{n} (2\pi)^{4} \delta^{4}(p-p_{n}) \left| \mel{\Omega}{\phi(0)}{n }\right|^{2} \right) \, .
\end{align}
We can now define the spectral density:
\begin{align}
\tilde{\rho}(p) = \sum_{n} (2\pi)^{4} \delta^{4}(p-p_{n}) \left| \mel{\Omega}{\phi(0)}{n } \right|^{2} \, ,
\end{align}
which measures the contribution to the two-point function from the states with momentum $p$. It receives contributions from bound states as well as multi-particle ones. This density is a Lorentz invariant object and vanishes when $p$ is not in the future lightcone~\cite{qft}. Using this, we can write it as:
\begin{align}
\tilde{\rho}(p) = 2\pi \rho(p^{2}) \theta(p^{0}) \, .
\end{align}
Assuming there are no negative norm states, it follows that the spectral density is positive semi-definite for all $p$ inside the lightcone:
\begin{align}
\rho(p^{2}) \geq 0 \, .
\end{align}
We can write the non-ordered two-point function with this spectral decomposition:
\begin{align}
\ev{\phi(x)\phi(y)}{\Omega} =\int \frac{\dd[4]{p}}{(2\pi)^{3}} e^{-i p \cdot (x-y)} \rho(p^{2})\theta(p^{0}) \, ,
\end{align}
and using the propagator in position space:
\begin{align}
\Delta(x,y,m^{2}) = \int \frac{\dd[4]{p}}{(2\pi)^{3}} e^{-i p \cdot (x-y)} \theta(p^{0})\delta(p^{2}-m^{2}) \, ,
\end{align}
it is possible to write this non-ordered two-point function as:
\begin{align}
\ev{\phi(x)\phi(y)}{\Omega} = \int_{0}^{\infty} \dd{s} \rho(s) \Delta(x,y,s) \, .
\end{align}
To recover the time ordering in this two-point function we can use:
\begin{align}
\ev{T\left(\phi(x)\phi(y)\right)}{\Omega} = \theta(x^{0}-y^{0})\ev{\phi(x)\phi(y)}{\Omega} + \theta(y^{0}-x^{0})\ev{\phi(y)\phi(x)}{\Omega} \, ,
\end{align}
together with the following relation:
\begin{align}
e^{-iE_{p}(x^{0}-y^{0})} \theta(x^{0}-y^{0}) + e^{+iE_{p}(x^{0}-y^{0})}\theta(y^{0}-x^{0}) = \lim_{\epsilon \rightarrow 0 } \frac{- E_{p}}{\pi i} \int_{-\infty}^{\infty} \frac{\dd{E}}{E^{2}-E_{p}^{2}+i\epsilon}e^{i E (x^{0}-y^{0})} \, .
\end{align}
This constructs the time-ordered two-point function, Eq.~(\ref{eqalgo}), i.e., the Feynman propagator:
\begin{align}
\ev{T(\phi(x)\phi(y))}{\Omega} = \int \frac{\dd[4]{p}}{(2\pi)^{4}} e^{i p \cdot (x-y)} i\Delta_{F}(p^{2}) \, ,
\end{align}
where we have the full Feynman propagator in momentum space as\footnote{The $i$ was removed from the definition of this propagator to facilitate the analysis in the complex plane. Everything would be similar if we studied $-i\Delta_{F}$ and kept the initial definition.}
\begin{align} \label{eq1}
\Delta_{F}(p^{2}) = \int_{0}^{\infty} \dd{s} \rho(s) \frac{1}{p^{2}-s+i\epsilon} \, .
\end{align}
Eq.~(\ref{eq1}) is known as the K\"all\'en-Lehmann spectral representation of the full propagator. We did not use any information about the interaction or any expansion. This decomposition is a non-perturbative result. With this representation, we can derive a significant amount of information about the interacting theory. In an interacting theory, the spectral density will have singularities at the locations of physical particles, as illustrated in Figure~\ref{fig:spec}.
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{spectral}
\caption{Example of a spectral density. A typical theory has a single pole at its first excitation and then some bound states before having a continuum spectrum for the two-particle states and above.}
\label{fig:spec}
\end{figure}
Because the spectral density is positive, we can calculate the imaginary part using Cauchy's theorem. Given the analytic structure of the propagator (simple poles for the one-particle state and bound states, branch cuts for multi-particle states), we can write a contour integral representation where the contour is a circle around the real line where the cut lives:
\begin{align}
\Delta_{F}(p^{2}) = \frac{1}{2\pi i} \int_{\gamma} \dd{s} \frac{\Delta_{F}(s)}{s-p^{2}} \, .
\end{align}
This expression holds as long as $p^{2}$ is inside the contour and the contour path does not cross any singularity.
Taking the radius of the contour $R \rightarrow \infty$ and assuming that $\rho(s) \rightarrow 0$ as $s \rightarrow \infty$, we can write:
\begin{align}
\Delta_{F}(p^{2}) = \frac{1}{2\pi i} \int_{s_{0}}^{\infty} \dd{s} \frac{\text{Disc}\left( \Delta_{F}(s)\right) }{s-p^{2}-i\epsilon} \, ,
\end{align}
where $s_{0}$ is the location of the first singularity, which we assume to be the single-particle state, and the discontinuity of the function along the cut is given by its imaginary part:
\begin{align}
\text{Disc}\left( \Delta_{F}(s) \right) = 2i \Im \left( \Delta_{F}(s) \right) \, .
\end{align}
With this information, we can use the definition of the spectral decomposition, Eq.~(\ref{eq1}), to write:
\begin{align} \label{eq4}
\rho(s) = -\frac{1}{\pi} \Im \left( \Delta_{F}(s) \right) \, .
\end{align}
This is a consequence of the fact that the propagator is real everywhere except where the particle can go on-shell. Another important feature is a constraint on the power with which the propagator can vanish at large momentum. Assuming that we can Wick rotate Eq.~(\ref{eq1}) (this is a non-trivial statement in an interacting theory), we get:
\begin{align}
p^{0} \rightarrow ip^{0}_{E} \quad \, , \quad p^{2} \rightarrow - p^{2}_{E} \, .
\end{align}
This means that the modulus of the full propagator is:
\begin{align}
\left| \Delta_{F}(-p^{2}_{E}) \right| = \left| \int_{0}^{\infty} \dd{s} \rho(s) \frac{1}{p^{2}_{E}+s} \right| \, ,
\end{align}
which implies:
\begin{align}
\left| \Delta_{F}(-p^{2}_{E}) \right| \geqslant \left| \int_{0}^{s_{0}} \dd{s} \rho(s) \frac{1}{p^{2}_{E}+s} \right| \, ,
\end{align}
for any $s_{0}$, because the density is positive semi-definite. Taking the Euclidean momentum to infinity at fixed $s_{0}$, we eventually have $p^{2}_{E}> s_{0}$. Then:
\begin{align}
\lim_{p^{2}_{E} \rightarrow \infty} p^{2}_{E} \left| \Delta_{F}(-p^{2}_{E}) \right| \geqslant \lim_{p^{2}_{E} \rightarrow \infty} p^{2}_{E} \left| \int_{0}^{s_{0}} \dd{s} \rho(s) \frac{1}{p^{2}_{E}+s} \right| = \lim_{p^{2}_{E} \rightarrow \infty} \left| \int_{0}^{s_{0}} \dd{s} \rho(s) \frac{p^{2}_{E}}{p^{2}_{E}+s} \right| = C
\end{align}
for some finite positive number $C$:
\begin{align}
C = \int_{0}^{s_{0}} \dd{s} \rho(s) \, .
\end{align}
Thus, with these assumptions, the propagator cannot fall faster than $1/p^{2}$ as $p^{2} \rightarrow \infty$. This does not mean that the propagator cannot ``look'' as if it were falling faster than $p^{-2}$ in intermediate regions (if we fix an $s_{0}$ such that $C=0$). This feature will be important in this thesis because it is a strong non-perturbative constraint on the two-point function. Possible loopholes are discussed in chapter~\ref{c3}. An important aspect of this derivation is that we can use any local operator to get similar non-perturbative information:
\begin{align}
\Delta^{O}(x) = \ev{O(x)O(0)}{\Omega} \, ,
\end{align}
as long as $O(x)$ is invariant under translations:
\begin{align} \label{eq2}
\Delta^{O}(x) = \int \frac{\dd{s}}{2\pi} \int \frac{\dd[3]{p}}{(2\pi)^{3}} \frac{\rho_{O}(s)}{2E_{s,p}} e^{-ip \cdot x} \, ,
\end{align}
where we define the spectral density of this operator as:
\begin{align}
\rho_{O}(s) = \sum_{n} \delta(s-p^{0}) |\mel{\Omega}{O(0)}{n }|^{2} \, .
\end{align}
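A quick numerical cross-check of this spectral logic can be made with an invented toy density (purely illustrative, not a physical model): build $\Im(\Delta_{F})$ from Eq.~(\ref{eq1}) for a pole plus a continuum and verify that $-\frac{1}{\pi}\Im(\Delta_{F})$ returns the continuum density on the cut, Eq.~(\ref{eq4}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Invented toy density: a pole at s = 1 plus a continuum above s = 4.
def rho_cont(s):
    return 0.1 / s**2 if s > 4.0 else 0.0

def im_Delta_F(psq, eps=1e-3):
    # Imaginary part of the spectral integral; the finite i*epsilon
    # smears the pole delta function and the cut into Lorentzians.
    im_pole = -eps / ((psq - 1.0)**2 + eps**2)
    im_cut = quad(lambda s: -eps * rho_cont(s)
                  / ((psq - s)**2 + eps**2),
                  4.0, 1e4, points=[psq])[0]
    return im_pole + im_cut

psq = 9.0   # a point on the cut
print(-im_Delta_F(psq) / np.pi, rho_cont(psq))   # both ~ 0.00123
\end{verbatim}
In the same setup one can also check that $p^{2}_{E}\left|\Delta_{F}(-p^{2}_{E})\right|$ approaches a constant, which is the $1/p^{2}$ falloff bound derived above.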
Now it is important to make some remarks. The first is that Eq.~(\ref{eq1}) and Eq.~(\ref{eq2}) may contain UV divergences; this is always the case for a four-dimensional theory. This ultimately means that Eq.~(\ref{eq1}) and Eq.~(\ref{eq2}) are ill-defined. The usual step to take is to regularize and renormalize these expressions. The nature of the distribution $\rho(s)$ dictates how we handle these divergences. If we assume $\rho(s)$ is a tempered distribution\footnote{Tempered distributions form the continuous dual of the Schwartz space, the space of all infinitely differentiable functions that are rapidly decreasing at infinity along with all their partial derivatives.}, then $\rho(s)$ must grow at $s \rightarrow \infty$ no faster than a polynomial. In this case, we can rearrange the expression by adding $m$ unknown coefficients to improve the behavior of the integral. These coefficients will be fixed by renormalization conditions:
\begin{align} \label{eq3}
\Delta_{F}(p^{2}) = P_{m-1}(p^{2}) + p^{2m} \int \dd{s} \rho(s) \frac{\Delta(p^{2},s)}{s^{m}} \, ,
\end{align}
where $P_{m-1}$ is a polynomial of degree $m-1$. We have to do as many subtractions as needed to render the integral finite. The coefficients of these polynomials are fixed by experimental input. One way to justify Eq.~(\ref{eq3}) is to consider the case where we have a logarithmic divergence. Doing one subtraction for the propagator, Eq.~(\ref{eq1}), we get:
\begin{align}
\Delta_{F}(p^{2})-\Delta_{F}(p_{0}^{2}) = \int_{0}^{\infty} \dd{s} \rho(s) \left( \frac{1}{p^{2}-s+i\epsilon} - \frac{1}{p^{2}_{0}-s+i\epsilon} \right) \, ,
\end{align}
with the object on the right-hand side now finite, and we can then define the once-subtracted propagator as:
\begin{align}
\Delta_{F}(p^{2}) = \Delta_{F}(p_{0}^{2}) + (p^{2}_{0}-p^{2})\int_{0}^{\infty} \dd{s} \rho(s) \frac{1}{(p^{2}-s+i\epsilon)(p^{2}_{0}-s+i\epsilon)} \, .
\end{align}
Then, using one renormalization condition, we can fix the part depending on the arbitrary parameter $p_{0}$, and everything proceeds as in the usual way of doing renormalization. In this language, regularizing the integral with a finite number of subtractions means that the theory is renormalizable. The story changes if we let $\rho(s)$ be a different kind of distribution~\cite{Khoze-sep-18,Jaffe-jan-67}, for instance, one growing exponentially at large $s$. What follows is that we then need to do an infinite number of subtractions. This can mean, in a worst-case scenario, the necessity of fixing an infinite number of constants using boundary or renormalization conditions. Theories with this kind of distribution are normally non-local, quasi-local, or non-renormalizable. It can happen that, after an infinite number of subtractions, only a finite number of constants need to be fixed in these three cases, but this is not a general feature. The behavior of $\rho(s)$ at infinity is fundamental for writing the relation in Eq.~(\ref{eq4}). Doing this step with caution, we use the subtracted propagator, where the new distributions vanish at large $s$. Lastly, we can relate $Z_{\phi}$ to the spectral distribution if we remove the one-particle state from it. Using the fact that the field operators obey canonical commutation relations, we get the constraint that one-particle plus multi-particle states should add up to one:
\begin{align}
1 = Z_{\phi} + \int_{m^{2}}^{\infty} \dd{s} \eta(s) \, ,
\end{align}
with $\eta(s)$ being the spectral density after the removal of the one-particle state. This fixes $Z_{\phi}$ to be a number smaller than 1:
\begin{align}
0 < Z_{\phi} < 1 \, .
\end{align}
The closer $Z_{\phi}$ is to zero, the more the multi-particle states dominate.
In the limit of $Z_{\phi}=0$, we need to change the description of the theory, because we do not have one-particle states anymore. That would appear as a reorganization of the degrees of freedom in the theory. With the previous understanding of the propagator, we need to introduce one more tool to start doing calculations.
\subsection{Functional Methods}
\label{sfun}
Let us introduce important objects that we use throughout the thesis. The first one is the generating functional for the $n$-point Green functions:
\begin{align}\label{ze}
Z[\rho] = \ev{T \left( e^{i\int \dd[4]{x} \rho(x)\phi(x)} \right) }{\Omega} = \braket{\Omega}_{\rho} \, .
\end{align}
With this object, we can generate any Green function by differentiating with respect to the external source $\rho(x)$:
\begin{align}
(-i)^{n} \frac{\delta^{n}}{\delta \rho(x_{1}) \dots \delta \rho(x_{n})} Z[\rho] = \ev{T \left( \phi(x_{1})\dots \phi(x_{n})e^{i\int \dd[4]{x} \rho(x)\phi(x)} \right) }{\Omega} \, ,
\end{align}
in such a way that in the limit where the source goes to zero, we recover the $n$-point function, Eq.~(\ref{green}). Given Eq.~(\ref{ze}), we can construct the generating functional for the connected $n$-point functions defined in Eq.~(\ref{cone}) as:
\begin{align}
Z[\rho] = e^{iW[\rho]} \, .
\end{align}
This means that:
\begin{align}
G_{c}^{(n)} = i (-i)^{n} \frac{\delta^{n}}{\delta \rho(x_{1}) \dots \delta \rho(x_{n})} W[\rho] \Bigg|_{\rho =0} \, .
\end{align}
The last important definition is the quantum action, the generating functional for the 1PI Green functions. To get this object, we do a Legendre transform of $W[\rho]$:
\begin{align} \label{fdv}
\phi(x_{i}) \equiv \fdv{W}{\rho(x_{i})} = \ev{\phi(x_{i})}{\Omega}_{\rho} \, ,
\end{align}
\begin{align}
\rho(x_{i}) \equiv - \fdv{\Gamma[\phi]}{\phi(x_{i})} \, ,
\end{align}
\begin{align}
\Gamma[\phi] \equiv W[\rho] - \int \dd[4]{x} \rho(x)\phi(x) \, ,
\end{align}
where we trade the $\rho$ dependence for a $\phi$ dependence. The $n$-point 1PI Green function is obtained by taking functional derivatives with respect to the field:
\begin{align}
G_{1PI}^{(n)} = \frac{\delta^{n}}{\delta \phi(x_{1}) \dots \delta \phi(x_{n})} \Gamma[\phi] \Bigg|_{\phi =0} \, .
\end{align}
With these objects defined, we can start to analyze some relevant results. The first result is the importance of the expectation value of the field in the presence of a source. If we find a way to compute this in the presence of an arbitrary source, then we have all the information needed to recover the $n$-point functions:
\begin{align}
\ev{\phi(x_{i})}{\Omega}_{\rho} \longleftrightarrow \frac{\delta W[\rho]}{\delta \rho(x_{i})} \, .
\end{align}
The only object that we cannot recover is the vacuum-vacuum amplitude $\bra{\Omega}\ket{\Omega}$, which is irrelevant when dealing with particle physics. It turns out that it is possible to find an equation for this object: it is precisely the expectation value of the classical equation of motion. That is why Eq.~(\ref{fdv}) is called the classical field. It is not, in fact, entirely classical, because non-linearities appear as higher $n$-point functions in this equation for the $1$-point function.
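A zero-dimensional toy (an ordinary integral standing in for the path integral; purely illustrative and not part of the original derivation) makes the cumulant structure behind $W$ explicit:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Toy 'partition function': Z = Int dphi exp(-phi^2/2 - lam*phi^4/24).
# ln Z generates the connected parts (cumulants) of the moments.
lam = 0.5

def moment(n):
    w = lambda phi: phi**n * np.exp(-phi**2 / 2 - lam * phi**4 / 24)
    norm = quad(lambda p: np.exp(-p**2 / 2 - lam * p**4 / 24),
                -20, 20)[0]
    return quad(w, -20, 20)[0] / norm

G2, G4 = moment(2), moment(4)
# Connected 4-point = cumulant: G4_c = <phi^4> - 3 <phi^2>^2
print(G4 - 3 * G2**2)   # nonzero only because of the interaction
\end{verbatim}
At $\lambda = 0$ the printed combination vanishes identically (Gaussian moments factorize), which is the statement that free-field connected functions beyond $n=2$ are zero.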
The next result is about the full propagator and its relation to the 1PI two-point function. Given the connected two-point function:
\begin{align}
G_{c}^{(2)}(x_{1},x_{2}) = -i \frac{\delta^{2}}{\delta \rho(x_{1})\delta\rho(x_{2})} W \, ,
\end{align}
we can relate it to the 1PI two-point function using the identity (with an implicit integration over $x_{k}$):
\begin{align}
\fdv{\rho(x_{i})}{\rho(x_{j})} = \delta^{4}(x_{i}-x_{j}) = \fdv{\rho(x_{i})}{\phi(x_{k})} \fdv{\phi(x_{k})}{\rho(x_{j})} = \\ = - \left(\frac{\delta^{2}\Gamma}{\delta \phi(x_{i})\delta\phi(x_{k})} \right) \left( \frac{\delta^{2}W}{\delta \rho(x_{k})\delta\rho(x_{j})} \right) \, .
\end{align}
This is just the inversion equation for the connected propagator written in terms of the 1PI propagator. It means that the 1PI propagator is the inverse of the connected propagator. In momentum space:
\begin{align}
G_{c}^{(2)}(p^{2}) = -\left( G^{(2)}_{1PI}(p^{2}) \right)^{-1} \, ,
\end{align}
which shows the overall consistency of Eq.~(\ref{fullprop}). The last thing worth pointing out about these objects is that the quantum action at tree level is the classical action, so we can use functional derivatives to derive the Feynman rules of any theory:
\begin{align}
\Gamma[\phi] = S[\phi] + \mathcal{O}(\hbar) \, .
\end{align}
Usually, at this point, we would introduce a path integral representation for these generating functionals and start calculating processes. However, here we go a different route, because we are interested in high multiplicity amplitudes, where usually Feynman diagrams do not help. Even at tree level, there are too many diagrams to count, and this method would not be useful. The method that we use is introduced in the next chapter. Now we are ready to start calculating some high multiplicity amplitudes and try to understand what is happening in this regime. After the exploration of these processes, we introduce the newly proposed mechanism of Higgsplosion and discuss the possibility of its occurrence in a scalar Quantum Field Theory.
\chapter{Perturbative Investigation of High Multiplicity Amplitudes}
\label{c2}
\pagenumbering{arabic}
The focus of this chapter is the study of high multiplicity processes. The primary motivation for it comes from trying to understand the Higgsplosion proposal~\cite{Khoze-higgsplosion}. Nevertheless, this is not the only reason to look at these processes. We typically do not explore this regime in a Quantum Field Theory, and it is not clear what to expect. Maybe the particle interpretation of the field excitation ceases to be valid or useful in this regime. Because this is a complicated problem, we first do a perturbative investigation of this limit. The goal is to obtain enough information to understand the applicability of perturbation theory in this regime. Ultimately, this is answered at the end of chapter~\ref{c3}. With those perturbative results in hand, we can start to explore different approaches to high multiplicity calculations. We do not develop these additional results deeply, because we chose to focus on the perturbative calculations. After we recover most of the essential results for high multiplicity scalar Quantum Field Theory, we start to work out the Higgsplosion framework.
\section{Tree Level Amplitude at Threshold}
We are interested in calculating the decay rate at high multiplicity of final states in a scalar theory. This could, in principle, be calculated with Feynman diagrams. However, the high number of final states makes this a tedious and challenging task.
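One can get a feeling for how fast this task grows with a small counting sketch. Treating the off-shell root line as splitting into three subtrees at each quartic vertex gives a recursion for the number of tree diagrams with labeled final-state legs; the sketch below is a hypothetical reconstruction of that counting (not part of the original computation), and the low-multiplicity values it prints match the explicit FeynArts counts discussed next.
\begin{verbatim}
from math import factorial
from functools import lru_cache

# a(n): number of tree diagrams for 1 -> n in phi^4. The root splits
# into three subtrees (n1 + n2 + n3 = n, all odd), weighted by the
# multinomial distribution of the n labeled legs; dividing by 3!
# removes the ordering of the three branches.
@lru_cache(maxsize=None)
def a(n):
    if n == 1:
        return 1
    total = 0
    for n1 in range(1, n, 2):
        for n2 in range(1, n - n1, 2):
            n3 = n - n1 - n2
            total += (factorial(n)
                      // (factorial(n1) * factorial(n2) * factorial(n3))
                      ) * a(n1) * a(n2) * a(n3)
    return total // 6

for n in (3, 5, 7, 9):
    print(n, a(n))   # 1, 10, 280, 15400
\end{verbatim}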
For example, if we are interested in $1 \to 5$ processes at tree level in an unbroken $\phi^{4}$ theory, we have only ten diagrams, shown in Figure~\ref{fig:1to5}. If we go for $1 \to 7$ processes, we get 280 diagrams, as represented in Figure~\ref{fig:1to7}. Going beyond nine particles in the final state, we rapidly pass 1000 diagrams, and the counting becomes increasingly hard. This counting is only at tree level; adding quantum corrections creates more diagrams, and the Feynman diagrammatic approach becomes almost useless.
\begin{figure}[h!]
\centering
\includegraphics[width=10cm]{1to5}
\caption{Diagrams contributing to the $1 \to 5$ process in a $\phi^{4}$ theory in the unbroken phase. Generated with FeynArts~\cite{feyart}.}
\label{fig:1to5}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=13cm]{1to7}
\caption{Diagrams contributing to the $1 \to 7$ process in a $\phi^{4}$ theory in the unbroken phase. Generated with FeynArts~\cite{feyart}.}
\label{fig:1to7}
\end{figure}
Here we use a different approach, proposed by Brown~\cite{Brown-nov-92}. In this approach, we calculate the amplitude that enters the decay rate by taking advantage of the LSZ reduction formula, Eq.~(\ref{lsz}). The decay rate that we are interested in calculating is of the form:
\begin{align} \label{decay}
\Gamma_{1 \rightarrow n}(p^{2}) = \frac{1}{2m} \int \dd{\Pi_{n}} \abs{\mathcal{A}(1 \rightarrow n)}^{2} \, ,
\end{align}
where $\dd{\Pi_{n}}$ is the Lorentz invariant phase space factor, including the $1/n!$ factor, since the final-state particles are identical. This process is a highly virtual one, because there is only one scalar field and it is stable. However, in the middle of a process, this could contribute, as represented in Figure~\ref{higgspersion}.
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{higgspersion}
\caption{Process where the off-shell amplitude can contribute.}
\label{higgspersion}
\end{figure}
The amplitude as written in Eq.~(\ref{decay}) is the one without amputating the incoming leg. The reduction formula for this case is:
\begin{align}
\mathcal{A}(1\rightarrow n)[x,p_{1},\dots,p_{n}] = (iZ_{\phi})^{-(n+1)/2} \lim_{p_{i}^{2}\rightarrow m^{2}} \int \dd[4]{x_{1}}\ldots \dd[4]{x_{n}} e^{-ip_{1} \cdot x_{1}}\ldots \\ e^{-ip_{n}\cdot x_{n}}(p_{1}^{2}-m^{2})\ldots (p_{n}^{2}-m^{2}) \ev{T \left( \phi(x_{1})\ldots\phi(x_{n})\phi(x) \right) }{\Omega} \nonumber \, .
\end{align}
Here we have to be careful because of the use of a nonstandard notation. The initial off-shell particle is in the position representation, and the rest are in the momentum representation. This amplitude has mixed momentum and position dependence. All of this dependence vanishes at threshold:
\begin{align}
\vec{p}_{i} = 0 \, , \quad \text{for all $i$} \, ,
\end{align}
where $p_{i}$ are the outgoing momenta in the amplitude. We first calculate these amplitudes in the threshold limit and, in the end, try to recover the momentum dependence. Another point to notice is that, if we want to relate this amplitude to the Feynman diagram computation, we need to amputate the virtual particle. The Feynman amplitude would be $\mathcal{M}$, and its relation to $\mathcal{A}$ is:
\begin{align}
\mathcal{M}(p,p_{i})= \left( p^{2}-m^{2}\right) \mathcal{A}(p,p_{i}) \, .
\end{align}
From now on, we can ignore the $Z_{\phi}$ factor, because all the computations are done up to one loop.
The effects of $Z_{\phi}$ enter only at higher loops for the class of theories that we work with in this thesis. We can also drop the overall phase factor for this process, because we square this amplitude in the end. The interesting observation used by Brown is that we can rewrite the $(n+1)$-point correlator in terms of functional derivatives using Eq.~(\ref{fdv}):
\begin{align}\label{tomano}
\ev{T \left( \phi(x_{1})\ldots\phi(x_{n})\phi(x) \right) }{\Omega} =\fdv{\rho(x_{1})}\ldots\fdv{\rho(x_{n})}\ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} \, ,
\end{align}
where the expectation value is taken in the presence of an arbitrary source. Using Eq.~(\ref{tomano}), we write the amplitude as:
\begin{align} \label{amp}
\mathcal{A}(1\rightarrow n) = \prod_{i=1}^{n} \left[ \lim_{p_{i}^{2} \rightarrow m^{2}} \int \dd[4]{x_{i}} e^{-ip_{i}x_{i}}(p_{i}^{2}-m^{2}) \fdv{\rho(x_{i})} \right] \ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} \, .
\end{align}
It is possible to find a differential equation for $\ev{\phi(x)}{\Omega}_{\rho}$ that is just like the classical equation of motion, and then find an analytic solution for $\mathcal{A}(1\rightarrow n)$. This equation simplifies when we want to calculate only the tree-level contribution. Let us now specialize to this case for the $\phi^{4}$ theory (any other interaction or kind of matter would follow a similar path).
\subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Unbroken Phase}
It is known that the tree level approximation for the expectation value is just the classical solution in the presence of an arbitrary source~\cite{Brown-nov-92}:
\begin{align}
\ev{\phi(x)}{\Omega}_{\rho} \rightarrow \phi_{cl}(x)[\rho] \, .
\end{align}
This statement will be made a little more rigorous when we compute loop corrections in Section~\ref{loop}. Choosing the $\phi^{4}$ theory and using the usual particle physics normalization, we get the equation of motion:
\begin{align} \label{eqmov}
(\Box + m^{2}) \phi_{cl}(x) + \frac{\lambda}{3!} \phi_{cl}^{3}(x)=\rho(x) \, .
\end{align}
We want to find a solution of this equation to get the $\rho$ dependence of the field, and then take $n$ functional derivatives to obtain the amplitude, Eq.~(\ref{amp}). Because this is a differential equation, we need boundary conditions. They are set by the Feynman prescription of the propagator, which tells us how to project onto the right vacuum. We have transformed the problem of finding the tree level amplitude into solving a non-linear second order differential equation with an arbitrary source, which is still a difficult problem. The approach developed by Brown is to focus on the threshold limit, where the source can be taken to have a simple enough form that the equation can be solved, as we will show. Note that we want the dependence of the field on the source, not the actual form of the field solution by itself. For this, we need to be careful, because the source needs to be able to excite all modes of the field. Since we want the amplitude at threshold, there is no spatial momentum in the final states. In that limit, the source and the field are homogeneous in space and depend only on time. Hence, the threshold limit simplifies the equation to only one dimension. The next step to find a solution for Eq.~(\ref{eqmov}) is to choose a simple exponential source:
\begin{align} \label{salsa}
\rho(t) = \rho_{0}e^{i\omega t} \, .
\end{align}
The equation of motion then becomes:
\begin{align} \label{socorro}
(\partial_{t}^{2}+m^{2})\phi_{tree}+\frac{\lambda}{3!}\phi_{tree}^{3}= \rho(t) \, .
\end{align}
This is still a non-linear problem, but it is now solvable. We look for a solution in perturbation theory, using $\lambda$ as our deformation parameter that turns the non-linearity on and off. We first consider the free equation:
\begin{align}
(\partial_{t}^{2}+m^{2})\phi_{tree}^{0}=\rho_{0} e^{i\omega t} \, ,
\end{align}
whose solution is:
\begin{align} \label{zdete}
\phi_{tree}^{0} = - \frac{\rho_{0}}{\omega^{2}-m^{2}+i\epsilon} e^{i\omega t} = z(t) \, .
\end{align}
Turning the coupling on generates a series of the form:
\begin{align} \label{pert}
\phi_{tree}= z(t) + \lambda \phi_{tree}^{(1)}+\lambda^{2} \phi_{tree}^{(2)} + \ldots \, .
\end{align}
Plugging this into the equation of motion:
\begin{align}
(\partial_{t}^{2}+m^{2})(z(t) + \lambda \phi_{tree}^{(1)}+\lambda^{2} \phi_{tree}^{(2)} + \ldots)+\frac{\lambda}{3!}(z(t) + \lambda \phi_{tree}^{(1)}+\lambda^{2} \phi_{tree}^{(2)} + \ldots)^{3}= \rho_{0}e^{i\omega t} \, ,
\end{align}
shows that $\phi_{tree}(t)$ can be written only in terms of $z(t)$, and the source dependence comes only through it; for instance, the first term in the expansion is:
\begin{align}\label{fi}
\phi_{tree}^{(1)} = \frac{z(t)^{3}}{3!\,(9\omega^{2}-m^{2})} = -\frac{z_{0}^{3}(\rho_{0},\omega)}{3!\,(9\omega^{2}-m^{2})}\, e^{3i\omega t} \, .
\end{align}
The notation that we used is:
\begin{align}
z_{0}(\rho_{0},\omega) = \frac{\rho_{0}}{\omega^{2}-m^{2}+i\epsilon} \, ,
\end{align}
in such a way that in the double limit $\rho_{0} \rightarrow 0$ and $ \omega \rightarrow m$ this function becomes a constant $z_{0}$. We can trade the functional derivative with respect to the source for ordinary $z(t)$ derivatives in the amplitude, Eq.~(\ref{amp}):
\begin{align} \label{co}
\mathcal{A}(1\rightarrow n) = \prod_{i=1}^{n} \left[ \lim_{p_{i}^{2} \rightarrow m^{2}} \int \dd[4]{x_{i}} e^{-ip_{i}x_{i}}(p_{i}^{2}-m^{2}) \fdv{\rho(x_{i})} \right] \ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} =
\end{align}
\begin{align} \nonumber
= \prod_{i=1}^{n} \left[ \lim_{\omega \rightarrow m} \int \dd[4]{x_{i}} e^{-i\omega t_{i}}(\omega^{2}-m^{2}) \fdv{\rho(t_{i})} \right] \ev{\phi(x)}{\Omega}_{\rho} \Bigg|_{\rho = 0} \, .
\end{align}
Now we can use the dependence on the source from Eq.~(\ref{zdete}) and Eq.~(\ref{pert}):
\begin{align} \label{sour}
\frac{\delta \phi[z]}{\delta \rho(t_{i})} = \frac{\delta z[\rho]}{\delta \rho(t_{i})} \frac{\partial \phi(z)}{\partial z} = -\frac{1}{\omega^{2}-m^{2}} \delta(t-t_{i}) \delta^{3}(x_{i})\frac{\partial \phi(z)}{\partial z} \, .
\end{align}
Using Eq.~(\ref{sour}) in Eq.~(\ref{co}), we can see the simplification of the problem, after doing the delta integrations and ignoring the overall phase:
\begin{align} \label{amp1}
\mathcal{A}(1 \rightarrow n) = \left( \pdv{z} \right)^{n} \phi_{tree}[z(t)] \Bigg|_{z=0} \, ,
\end{align}
where the on-shell condition corresponds to the limit $\omega \rightarrow m$ and $\rho \rightarrow 0$, in such a way that $z_{0}$ remains finite. The solution for $\phi_{tree}$ can be obtained order by order in $\lambda$, like the first term obtained in Eq.~(\ref{fi}). One finds that the perturbative series, Eq.~(\ref{pert}), can be resummed:
\begin{align} \label{unbroken}
\phi_{tree}(t)= \frac{z(t)}{1-z^{2}\frac{\lambda}{48m^{2}}} \, .
\end{align}
It is easy to check that this is indeed a solution in a well-defined limit.
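The check can also be scripted. A minimal sympy sketch (not part of the original derivation), assuming the on-shell substitutions $\ddot{z}=-m^{2}z$ and $\dot{z}^{2}=-m^{2}z^{2}$ valid for $z \propto e^{imt}$:
\begin{verbatim}
import sympy as sp

# Verify that phi = z/(1 - lam*z**2/(48*m**2)) solves the sourceless,
# on-shell (omega -> m) limit of the equation of motion above.
t, m, lam = sp.symbols('t m lambda', positive=True)
z = sp.Function('z')(t)
phi = z / (1 - lam * z**2 / (48 * m**2))

eom = sp.diff(phi, t, 2) + m**2 * phi + sp.Rational(1, 6) * lam * phi**3
# On shell z ~ exp(i m t), so z'' = -m^2 z and (z')^2 = -m^2 z^2:
eom = eom.subs(sp.Derivative(z, t, 2), -m**2 * z)
eom = eom.subs(sp.Derivative(z, t)**2, -m**2 * z**2)
print(sp.simplify(eom))   # prints 0
\end{verbatim}
The same cancellation is now shown by hand, keeping $\omega$ general.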
Defining $\alpha = \frac{\lambda}{48m^{2}}$ we compute:
\begin{align}
\dot{\phi}_{tree} = \dot{z} \frac{(1+\alpha z^{2})}{(1- \alpha z^{2})^{2}} \, ,
\end{align}
\begin{align}
\ddot{\phi}_{tree} =\frac{ \ddot{z}(1-\alpha^{2} z^{4}) + \dot{z}^{2}(6\alpha z +2 \alpha^{2} z^{3})}{(1-\alpha z^{2})^{3}} \, ,
\end{align}
and using the definition of $z(t)$, Eq.~(\ref{zdete}), we have that $z \propto e^{i \omega t}$, so:
\begin{align}
\dot{\phi}_{tree} = i \omega z \frac{(1+\alpha z^{2})}{(1- \alpha z^{2})^{2}} \, ,
\end{align}
\begin{align}
\ddot{\phi}_{tree} =-\omega^{2} \frac{(\alpha^{2}z^{5}+6\alpha z^{3}+ z)}{(1-\alpha z^{2})^{3}} \, .
\end{align}
Putting this in the equation of motion, Eq.~(\ref{socorro}), and simplifying the denominator we get:
\begin{align} \label{direito}
\alpha^{2} z^{5}(m^{2}-\omega^{2}) + z^{3}\left(\frac{\lambda}{3!}-6\omega^{2}\alpha -2m^{2}\alpha\right) + z(m^{2}-\omega^{2})=-(\omega^{2}-m^{2})z(1-\alpha z^{2})^{3} \, .
\end{align}
We can see that this is a solution when $\omega \rightarrow m$: the $z^{3}$ coefficient vanishes by the definition of $\alpha$, and all remaining terms are proportional to $(m^{2}-\omega^{2})$. Here we can take $\rho \rightarrow 0 $ in such a way that $z(t)$ remains finite. Even though we arrived at a solution using perturbation theory, we have obtained a representation, Eq.~(\ref{unbroken}), that can be trivially continued to the full complex $z(t)$ plane. An interesting thing coming from the form of the amplitude, Eq.~(\ref{amp1}), is that we can work without the source if we change the boundary conditions. In the solution this can be seen trivially on the right-hand side of Eq.~(\ref{direito}), since the source contribution vanishes on-shell. This happens because we only want the solution on-shell, and before taking this limit, $\rho_{0} \rightarrow 0$ acts like $z \rightarrow 0$. We can then solve without any source to find these amplitudes. Doing so changes the boundary conditions of the solution, because previously we were solving with a source present at all times. This dictates that the solution, Eq.~(\ref{unbroken}), should vanish in positive Euclidean times:
\begin{align}
\Im(t) \rightarrow\infty \, .
\end{align}
This boundary condition remains the same for the broken case and for loop corrections. Another interesting point is that we started with a real field but got a complex solution. This complexification happens because of the source, Eq.~(\ref{salsa}). If we had chosen a real source, the solution would be real as well. However, we only use the source as a trick to get the scattering amplitude. It is arbitrary and can be chosen to be of this particular form. With that in mind, we can now find the decay amplitude, Eq.~(\ref{amp1}), by taking $n$ derivatives with respect to $z(t)$ in Eq.~(\ref{unbroken}). To facilitate this, we can use a series representation of this expression and pick out the $n$th term in it:
\begin{align}
\frac{z}{1-\alpha z^{2}} = \sum_{n=0}^{\infty} \alpha^{n} z^{2n+1} \, .
\end{align}
The amplitude is then:
\begin{align}
\mathcal{A}(1 \rightarrow n) = \frac{n!}{2\alpha} \left[ \left(\sqrt{\alpha}\right)^{n+1} + \left(-\sqrt{\alpha}\right)^{n+1}\right] \, .
\end{align}
It is possible to draw some conclusions about this amplitude at tree level and threshold. First, it is necessary to remember that this is a partially off-shell amplitude, and because of that it is not a physical object by itself. Nevertheless, we can use this amplitude to construct physical observables. This amplitude has the following interesting features:
\begin{itemize}
\item It vanishes for even final states:
\begin{align}
\mathcal{A}(1 \rightarrow 2k) =0 \, .
\end{align} \item For odd final states it has the factorial growth: \begin{align} \mathcal{A}(1 \rightarrow 2k+1) = (2k+1)! \left( \frac{\lambda}{48m^{2}} \right)^{k} \, . \end{align} \end{itemize} This factorial growth persists in the decay rate even after dividing by the $n!$ coming from the identical nature of the final states. That could be an indication that in the high multiplicity limit the decay rate grows, or that perturbation theory cannot be trusted for a high multiplicity computation. The total final state energy of the system at rest is: \begin{align} E=(2k+1)m \, , \end{align} and the phase space in this case is zero (it is just a point). If we assume an infinitesimal sphere around $E$, and that the momentum dependence is constant in this region, we get only a small overall factor trying to combat the factorial growth, which we will call $V_{n}(E)$: \begin{align} \Gamma_{2k+1}(E) = (2k+1)! \left( \frac{\lambda}{48m^{2}} \right)^{2k} V_{2k+1}(E) \, . \end{align} This is just a naive approximation; if we want to know the phase space contribution, we need to be able to go beyond the threshold in such a way that we can get results around a specific configuration. This is already potentially problematic for the perturbative unitarity of the theory. The squared amplitude divided by the symmetry factor grows factorially and can, in principle, surpass any unitarity bound for these processes: \begin{align} \frac{\left| \mathcal{M}\right|^{2}}{n!} \propto n! \end{align} \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Broken Phase} The $\phi^{4}$ theory has another regime where we can explore these processes. In this phase the mass term is negative, so it is convenient to use the definition: \begin{align} m^{2}=-\mu^{2} \, . \end{align} The reflection symmetry is broken and the configuration that minimizes the potential from Eq.~(\ref{eqmov}) is no longer zero: \begin{align} \label{min} \phi_{min}^{2} = \frac{3!\mu^{2}}{\lambda} \, . \end{align} If we want to find the tree level amplitude at threshold, it is easier to work with the shifted field, which has no expectation value~\cite{Brown-nov-92,Smith-apr-93}: \begin{align} \label{shift} \sigma_{tree}=\phi_{tree}-\phi_{\min} \, . \end{align} Using this definition the equation of motion becomes: \begin{align} \label{eqbro} (\Box + 2\mu^{2}) \sigma_{tree}(x) + \frac{\lambda \phi_{min}}{2!}\sigma_{tree}^{2}(x) + \frac{\lambda}{3!} \sigma_{tree}^{3}(x)=\rho(x) \, . \end{align} The steps from Eq.~(\ref{eqbro}) to Eq.~(\ref{broken}) are essentially the same as detailed in the unbroken phase above. We need to solve this equation to find $\sigma$ as a functional of the source $\rho$. Choosing again the same exponential source, Eq.~(\ref{salsa}), transforms this problem into finding the solution of the sourceless case with the boundary condition that: \begin{align} \sigma \rightarrow 0 \quad \mbox{as} \quad \mathrm{Im}(t) \rightarrow \infty \, . \end{align} Now, we look for a perturbative solution of the spatially homogeneous case of Eq.~(\ref{eqbro}). Doing that, we find the perturbative series in terms of the unperturbed solution: \begin{align} z(t)= z_{0}e^{i \sqrt{2}\mu t} \, , \end{align} where the physical mass is now $\sqrt{2}\mu$ in the on-shell limit. It is easy to check that a solution of Eq.~(\ref{eqbro}), taking the $\rho \rightarrow 0$ limit but keeping $z_{0}$ finite, is: \begin{align} \label{broken} \sigma(t)= \frac{z}{1-\frac{z}{2\phi_{min}}} \, . \end{align} Keep in mind that $\phi_{min}$ carries all the coupling dependence.
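Before checking this by hand, the claim is simple enough to confirm symbolically. The following minimal sympy sketch (our addition, not part of the derivation; it assumes the spatially homogeneous, sourceless limit of Eq.~(\ref{eqbro}) with the on-shell frequency $\sqrt{2}\mu$) verifies that Eq.~(\ref{broken}) solves the equation of motion:
\begin{verbatim}
# Symbolic check that sigma = z/(1 - z/(2 phi_min)) solves the
# homogeneous broken-phase equation of motion on-shell.
import sympy as sp

t = sp.symbols('t', real=True)
mu, lam, z0 = sp.symbols('mu lambda z0', positive=True)

phi_min = sp.sqrt(sp.factorial(3)*mu**2/lam)    # Eq. (min)
z = z0*sp.exp(sp.I*sp.sqrt(2)*mu*t)             # unperturbed solution
sigma = z/(1 - z/(2*phi_min))                   # Eq. (broken)

eom = (sp.diff(sigma, t, 2) + 2*mu**2*sigma
       + lam*phi_min/2*sigma**2 + lam/sp.factorial(3)*sigma**3)
print(sp.simplify(eom))                         # prints 0
\end{verbatim}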
We find such a solution by performing a perturbative expansion, resumming, and analytically continuing to the full complex $z(t)$ plane. Checking that this is a solution is direct: \begin{align} \ddot{\sigma} = -2\mu^{2} \frac{z+\frac{z^{2}}{2\phi_{\min}}}{(1-\frac{z}{2\phi_{\min}})^{3}} \, . \end{align} Plugging this into the equation of motion and clearing the denominator we get: \begin{align} z^{3}\left(-\frac{\lambda}{12}+\frac{\mu^{2}}{2\phi_{min}^{2}}\right)+z^{2}\left(-3\frac{\mu^{2}}{\phi_{\min}}+\frac{\lambda \phi_{\min}}{2}\right) + z(-2\mu^{2}+2\mu^{2})=0 \, , \end{align} where we already used $\omega^{2} \rightarrow 2\mu^{2}$ and set $\rho_{0} \rightarrow 0$. The l.h.s.\ vanishes using Eq.~(\ref{min}). Now that we have this solution, we can compute the tree level threshold amplitude for the broken phase, following the same logic as explained in the unbroken case: \begin{align} \label{ampbroken1} \mathcal{A}_{B}(1 \rightarrow n) = \left( \pdv{z} \right)^{n} \sigma[z] \Bigg|_{z=0} \, . \end{align} As before, we find a series expansion and pick up the $n$th term to get: \begin{align} \mathcal{A}_{B}(1 \rightarrow n) = n! \left( \frac{\lambda}{24\mu^{2}} \right)^{(n-1)/2} \, . \end{align} We can see that the factorial growth is still present in this phase. The difference is that now we can also have even final states. We also get a factorially growing decay rate: \begin{align} \Gamma_{B}(1 \rightarrow n) = n! \left( \frac{\lambda}{24\mu^{2}} \right)^{(n-1)} V_{n}(E) \, . \end{align} As highlighted before, this is a potential danger for the unitarity of the theory. If these processes start to dominate with factorial power, then the cross section for a process of a few particles going to $n$ starts to grow as well. This growth is inconsistent with the perturbative unitarity of the theory. There are a few possible explanations for this feature and ways to tame this growth. We will continue working within the perturbative approach, at threshold, and see if the factorial growth persists after quantum corrections, at the one-loop level. Later on, we explore going beyond threshold, such that the assumption that $V_{n}(E)$ is constant can be checked and a better expression for the decay rate obtained. \section{One-Loop Amplitude at Threshold} \label{loop} Until now, we saw that the tree level amplitude at threshold for the $\phi^{4}$ theory displays a factorial growth already in the first term of the series. We expect the series that appear in Quantum Field Theory to be divergent. However, it is unusual for the first term of the series to already be large. This, in principle, could mean that we cannot even trust the first term of the series as a good approximation. If we want to understand this better, we need to compute quantum corrections for this theory, to see if somehow these factorial growths get tamed. It is possible to implement loop corrections in this formalism if we expand the operators in an $\hbar$ expansion, where the first term corresponds to the tree level. In this expansion we have to be careful because $\hbar$ has dimension, and so far we have been using units such that: \begin{align} \hbar = 1 \, . \end{align} To get around this, we can introduce a deformation parameter $\epsilon$ where $\hbar$ would appear, such that the limit $\epsilon \rightarrow 0$ corresponds to the classical limit $\hbar \rightarrow 0$. The expansion of the field operator is: \begin{align} \hat{\phi}(x)= \phi_{0} \hat{I} + \sqrt{\epsilon} \hat{\phi}_{1/2}+\epsilon \hat{\phi}_{1} + \mathcal{O}(\epsilon^{3/2}) \, .
\end{align} The expansion is in $\sqrt{\epsilon}$ because we are working at the level of the equation of motion. We will see from the calculation that, with this convention, the one-loop correction appears in the $\epsilon$ term. With this definition we can find the expectation value of the field order by order in $\epsilon$, and then the amplitude can be computed up to that same order by an appropriate functional derivative. Now, let us specialize to the cases that we have considered above, to see how quantum corrections modify them. \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Unbroken Phase} The first case of interest is the $\phi^{4}$ theory in the unbroken phase~\cite{loop1}. The equation of motion in terms of the field operator, before expanding in $\epsilon$, is: \begin{align} \ev{T\left[ (\Box + m^{2})\hat{\phi}+ \frac{\lambda}{3!} \hat{\phi}^{3}\right]}{\Omega}_{\rho}=0 \, . \end{align} We will use the notation: \begin{align} \ev{\hat{\phi}_{i}}{\Omega}_{\rho} = \phi_{i}[\rho] \, . \end{align} Expanding the field operator up to $\mathcal{O}(\epsilon^{3/2})$, the equation of motion is\footnote{The source that appears in the equation is the classical external one. It appears after we bring the d'Alembert operator out of the expectation value, passing through the time ordering.} \begin{align} \label{loopexp} \left[ (\Box+m^{2})\phi_{0}(x) +\frac{\lambda}{3!}\phi_{0}^{3}(x)-\rho(x) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon}\left[ \left( \Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(x) \right) \phi_{1/2}(x) \right] + \end{align} \begin{align} \nonumber +\epsilon \left[ \left( \Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(x) \right) \phi_{1}(x) + \frac{\lambda \phi_{0}}{2} \ev{T\left(\hat{\phi}_{1/2}(x)\hat{\phi}_{1/2}(x)\right)}{\Omega}_{\rho} \right]+ \mathcal{O}(\epsilon^{3/2}) =0 \, . \end{align} Now we take the threshold limit. This means that $\phi_{0}$ is as calculated before, Eq.~(\ref{unbroken}): \begin{align} \label{unbroken1} \phi_{0} = \frac{z(t)}{1-\frac{\lambda}{48m^{2}}z(t)^{2}} \, . \end{align} The equation of order $\sqrt{\epsilon}$ defines the two-point function that appears at order $\epsilon$ in Eq.~(\ref{loopexp}). It is the zero mode equation for the differential operator: \begin{align} \label{diamante} \diamondsuit=\Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(t) \, , \end{align} where the tree level and one-loop fields are at threshold, while $\phi_{1/2}$ is allowed to have the spatial dependence that appears in $\diamondsuit$ and its Green function. We will see that the boundary conditions kill all zero modes of $\diamondsuit$, so this equation gives us that $\phi_{1/2}$ is zero, as expected. This is a common feature, since we are treating things at the level of the equation of motion. The order $\epsilon$ equation is the one we are interested in, as it gives us the one-loop contribution to the amplitude. In Eq.~(\ref{loopexp}) there is a contribution from the two-point Green function of $\hat{\phi}_{1/2}$. The operator that we need to invert to find this Green function is Eq.~(\ref{diamante}), setting $x'= x$ in the end. This Green function is taken at coincident points, so we can expect divergences to appear. These divergences are familiar from more standard Quantum Field Theory arguments. Using $\phi_{0}$ we can write the operator, Eq.~(\ref{diamante}), as: \begin{align} \label{carvao} \diamondsuit= \Box+m^{2} + \frac{\lambda}{2}\left( \frac{z(t)}{1-\frac{\lambda}{48m^{2}}z^{2}} \right)^{2} \, .
\end{align} From the start we have a problem: this operator is not Hermitian, because $z(t)$ is complex. This makes our job a little harder. To deal with this fact, we proceed as proposed in~\cite{loop1}. Going to Euclidean time makes the problem simpler. However, in Euclidean time we have a pole on the contour of integration. To address this, we can shift the Euclidean time variable to avoid the pole and then analytically continue the solution to the whole complex Euclidean plane. It is convenient to work in terms of~\cite{loop1}: \begin{align} u(\tau) = e^{m\tau} \, , \end{align} where $u(\tau)$ is defined through: \begin{align} \label{rot1} -\frac{\lambda}{48m^{2}} z(t)^{2} = u(\tau)^{2} \, , \end{align} and the new Euclidean time coordinate is: \begin{align} \tau = i t + \frac{i \pi}{2m} + \frac{\ln(\frac{\lambda z_{0}^{2}}{48m^{2}})}{2m} \, . \end{align} The tree level solution takes the form: \begin{align} \phi_{0}(t) = i \sqrt{\frac{48m^{2}}{\lambda}} \frac{u(\tau)}{1+u(\tau)^{2}} = i \sqrt{\frac{48m^{2}}{\lambda}} \frac{\sech(m\tau)}{2} \, . \end{align} Then the operator to be inverted, Eq.~(\ref{carvao}), reads: \begin{align} \diamondsuit = \left( -\partial_{\tau}^{2} - \nabla^{2} + m^{2} - 6m^{2}\sech^{2}(m\tau) \right) \, . \end{align} Doing a partial Fourier transform only in the spatial directions, we can write the Green function of this operator as: \begin{align} \label{fourier} G(\vec{x},\vec{x}' ; \tau, \tau') = \int \frac{\dd[3]{k}}{(2\pi)^{3}} G_{k}(\tau,\tau') e^{i \vec{k}\cdot(\vec{x}-\vec{x}')} \, , \end{align} with the Green function satisfying: \begin{align} \diamondsuit(\vec{x},\tau) G(\vec{x},\vec{x}' ; \tau, \tau') = \delta^{3}(\vec{x}-\vec{x}')\delta(\tau-\tau') \, . \end{align} Using Eq.~(\ref{fourier}) we focus on the following Green function: \begin{align} \left( -\partial_{\tau}^{2} + \theta^{2} - 6m^{2}\sech^{2}(m\tau) \right)G_{\theta}(\tau,\tau') =\delta(\tau -\tau') \, , \end{align} where we defined $\theta^{2}=\vec{k}^{2}+m^{2}$. From $G_{\theta}$ we can then find the full Green function by doing the momentum integrals using Eq.~(\ref{fourier}). We are doing this computation in $(3+1)D$, but the generalization to other dimensions is straightforward, only changing the number of $k$ integrals. In fact, the Green function that appears in Eq.~(\ref{loopexp}) is evaluated at coincident space-time points, therefore we have: \begin{align} \label{samepoint} \ev{T\left( \hat{\phi}_{1/2}(x)\hat{\phi}_{1/2}(x)\right)}{\Omega}_{\rho} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} G_{\theta}(\tau,\tau) \, . \end{align} Surprisingly, this Green function is very similar to that of a known quantum mechanical potential (the P\"oschl-Teller potential)~\cite{pol}. We will transform this problem into that quantum mechanical problem and then show how to solve this potential exactly. After that, we continue with the solution to find the Green function and, in the end, the loop correction. To look for this Green function, we search for two solutions of the homogeneous equation, one regular at $\tau \rightarrow \infty$ and the other at $\tau \rightarrow -\infty$.
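As a small sanity check on this rotation (our own sketch, not part of the original computation), one can verify symbolically that the rotated tree level solution solves the Euclidean equation of motion and produces exactly the $\sech^{2}$ well quoted above:
\begin{verbatim}
# Check that phi0(tau) solves the Euclidean equation of motion and that
# m^2 + (lambda/2) phi0^2 is the l = 2 Poschl-Teller well.
import sympy as sp

tau = sp.symbols('tau', real=True)
m, lam = sp.symbols('m lambda', positive=True)

phi0 = sp.I*sp.sqrt(48*m**2/lam)*sp.sech(m*tau)/2

eom = -sp.diff(phi0, tau, 2) + m**2*phi0 + lam/6*phi0**3
print(sp.simplify(eom.rewrite(sp.exp)))         # prints 0

well = m**2 + lam/2*phi0**2 - (m**2 - 6*m**2*sp.sech(m*tau)**2)
print(sp.simplify(well.rewrite(sp.exp)))        # prints 0
\end{verbatim}
We now return to the construction of the Green function.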
With both solutions $f_{-}(\tau)$, $f_{+}(\tau)$ and with the Wronskian $\mathcal{W}$: \begin{align} \label{wronk} \mathcal{W} = f_{+}(\tau)f_{-}(\tau)' - f_{+}(\tau)'f_{-}(\tau) \, , \end{align} we then construct the Green function: \begin{align} \label{gf} G_{\theta}(\tau,\tau') = \frac{ f_{+}(\tau)f_{-}(\tau')}{\mathcal{W}} \qquad \textnormal{for} \qquad \tau > \tau' \, , \end{align} \begin{align} G_{\theta}(\tau,\tau') = \frac{ f_{+}(\tau')f_{-}(\tau)}{\mathcal{W}} \qquad \textnormal{for} \qquad \tau' > \tau \, . \end{align} Both solutions can be written in a Schr\"odinger-like form: \begin{align} \left( -\partial_{\tau}^{2} - 6m^{2} \sech^{2}(m\tau) \right) f(\tau) = -\theta^{2} f(\tau) \, , \end{align} where we identify $\tilde{E}_{\theta}=-\theta^{2}$, the Hamiltonian being the operator on the left. We can solve this using only algebra. To facilitate this, we work with dimensionless coordinates via the change of variables $x=m \tau$: \begin{align} \hat{H} = \left( -\partial_{x}^{2} - 6 \sech^{2}(x) \right) \, . \end{align} The dimensionless energy is defined as: \begin{align} E_{\theta}=-\frac{\theta^{2}}{m^{2}} \, , \end{align} such that we need to solve the eigenvalue problem: \begin{align} \label{prob1} \hat{H}f(x)=E_{\theta}f(x) \, . \end{align} It turns out that to find the solution for this Hamiltonian we need to generalize it to: \begin{align} \label{prob} \hat{H}_{l} = p^{2} - l(l+1) \sech^{2}(x) \, . \end{align} Our case is $l=2$. Now we take a small detour to solve this eigenvalue problem, because even though this is an exactly solvable quantum mechanical system, it is not so trivial to find the solution. \subsection{Eigenvalues and Eigenfunctions of the P\"oschl-Teller Potential} This section can be skipped without affecting the core of the thesis. Here we want to solve the quantum mechanical problem, Eq.~(\ref{prob}): \begin{align} \hat{H}_{l} \ket{p,l} = E_{l} \ket{p,l} \, . \end{align} We want to find the eigenfunctions for the special case of $l=2$. For each $l$ we have a quantum system with eigenvalues $E_{l}$. The spectrum that we are interested in is the continuum band of the P\"oschl-Teller potential: \begin{align} \label{banana} V(x)= -l(l+1)\sech^{2}(x) \, . \end{align} The easiest case is $l=0$, where the system is a free particle and the energy is trivially: \begin{align} \hat{H}_{0} \ket{p,0}= p^{2}\ket{p,0} \, . \end{align} The special property of this system is that it belongs to a class of factorizable potentials. This feature is reminiscent of the supersymmetric version of this potential~\cite{pt1}: \begin{align} V_{s}(x)= -l(l+1)\sech^{2}(x) + l^{2} \, . \end{align} Because we are interested in the continuum spectrum of Eq.~(\ref{banana}), we will not introduce Supersymmetric Quantum Mechanics~\cite{susy1,susy2}. Nevertheless, Supersymmetry is the reason why this procedure works and why this potential is solvable. The factorization of the Hamiltonian Eq.~(\ref{prob}) can be achieved using the following operators: \begin{align} \label{caca} a_{l} = p - i l \tanh(x) \, , \end{align} \begin{align} \label{coco} a_{l}^{\dagger} = p + i l \tanh(x) \, . \end{align} They are chosen in this form to satisfy: \begin{align} \comm{a_{l}}{a_{l}^{\dagger}} = 2l \sech^{2}(x) \, . \end{align} With the operators Eq.~(\ref{caca}) and Eq.~(\ref{coco}) we can construct the initial Hamiltonian Eq.~(\ref{prob}): \begin{align} \label{caraca} H_{l}^{+} \equiv a_{l}^{\dagger}a_{l}= H_{l}+l^{2} \, .
\end{align} The object on the l.h.s.\ of Eq.~(\ref{caraca}) is the Hamiltonian of the supersymmetric description of this potential. We can define its partner Hamiltonian by exchanging the order of the operators: \begin{align} \label{caracafermionico} H_{l}^{-} \equiv a_{l}a_{l}^{\dagger}=H_{l-1}+l^{2} \, . \end{align} In this case both supersymmetric partners are related only by a change of constants inside the Hamiltonian. This class of systems is called Shape Invariant Potentials~\cite{shape1}, and this plays a pivotal role in making this potential solvable. With the definitions of Eq.~(\ref{caraca}) and Eq.~(\ref{caracafermionico}) we can see that, if we have an eigenstate of $H^{+}_{l}$ in the continuum spectrum: \begin{align} H_{l}^{+} \ket{E^{+},l}=E^{+}_{l} \ket{E^{+},l} \, , \end{align} there exists an eigenstate of $H^{-}_{l}$ with the same eigenvalue, except for the ground state: \begin{align} \label{mesalva} a_{l}H_{l}^{+} \ket{E^{+},l}= H_{l}^{-} \left( a_{l} \ket{E^{+},l} \right) =E^{+}_{l} \left( a_{l} \ket{E^{+},l} \right) \, . \end{align} This also works the other way around. This is a consequence of the supersymmetry of the system: both spectra are related by the operators $a_{l}$ and $a_{l}^{\dagger}$, as represented in Figure~\ref{supa}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{susy} \caption{Relation between the two partner Hamiltonians.} \label{supa} \end{figure} Because this potential is shape invariant, using Eq.~(\ref{caraca}) and Eq.~(\ref{caracafermionico}) we can see that the spectrum does not depend on $l$. The shape invariance tells us that a change of $l$ relates $H^{+}$ and $H^{-}$. This implies that the eigenvalues are the same as in the free case, $l=0$: \begin{align} E_{l} = p^{2} \, . \end{align} Now, to construct the different eigenstates we can use Eq.~(\ref{mesalva}) to build up all the states of $H_{l}$. For $l=0$ we have: \begin{align} \bra{x}\ket{p,0}=e^{i p x} \, , \end{align} and for $l=1$: \begin{align} \ket{p,1}=a^{\dagger}_{1}\ket{p,0} \, . \end{align} To see that this is indeed an eigenstate of $H_{1}$ we can check directly, using $H_{1}=a_{1}^{\dagger}a_{1}-1$ and $a_{1}a_{1}^{\dagger}=H_{0}+1$: \begin{align} H_{1} \ket{p,1} = (a_{1}^{\dagger}a_{1}-1)a^{\dagger}_{1} \ket{p,0} = a_{1}^{\dagger} \left( a_{1}a_{1}^{\dagger} \right) \ket{p,0} - a_{1}^{\dagger} \ket{p,0} = \\ = (p^{2}+1)a_{1}^{\dagger}\ket{p,0} - a_{1}^{\dagger}\ket{p,0} = p^{2}\, a_{1}^{\dagger}\ket{p,0}=p^{2} \ket{p,1} \, . \end{align} The wave function for the $l=1$ case is: \begin{align} \bra{x}\ket{p,1} = \mel{x}{a_{1}^{\dagger}}{p,0}= \mel{x}{\hat{p}+i \tanh(\hat{x})}{p,0} = \left( p+i \tanh(x) \right) e^{ipx} \, . \end{align} Finally, the case of interest is $l=2$: \begin{align} \bra{x}\ket{p,2} = \mel{x}{a^{\dagger}_{2}}{p,1}=\left( 1+p^{2}+3ip \tanh(x)-3\tanh^{2}(x)\right) e^{ipx} \, . \end{align} Using this wave function, we can extend the solution to the whole complex $p$ plane to get the two functions needed to construct the Green function, Eq.~(\ref{gf}): \begin{align} p =\pm i\frac{\theta}{m} \, . \end{align} Because we need solutions regular at $\pm \infty$, both values of $p$ will be used in this construction: \begin{align} \label{sol1} f_{-}(x)= \left( 1-\frac{\theta^{2}}{m^{2}} +3\frac{\theta}{m}\tanh(x) -3\tanh^{2}(x) \right)e^{\frac{\theta x}{m}} \, , \end{align} \begin{align} \label{sol2} f_{+}(x)= \left(1-\frac{\theta^{2}}{m^{2}} -3\frac{\theta}{m}\tanh(x) -3\tanh^{2}(x)\right) e^{-\frac{\theta x}{m}} \, .
\end{align} \subsection{Back to $\frac{\lambda\phi^{4}}{4!}$ in the Unbroken Phase} Now that we have both solutions, Eq.~(\ref{sol1}) and Eq.~(\ref{sol2}), to construct the Green function of Eq.~(\ref{gf}), we just need to write them in terms of the variables that we are using: \begin{align} u(x) = e^{x} \, , \end{align} \begin{align} f_{-}(x) = \frac{u^{4}(-\frac{\theta^{2}}{m^{2}}+3\frac{\theta}{m}-2) + u^{2}(-2\frac{\theta^{2}}{m^{2}}+8)-\frac{\theta^{2}}{m^{2}}-3\frac{\theta}{m}-2}{(1+u^{2})^{2}} u^{\frac{\theta}{m}} \, , \end{align} \begin{align} f_{+}(x) = \frac{u^{4}(-\frac{\theta^{2}}{m^{2}}-3\frac{\theta}{m}-2) + u^{2}(-2\frac{\theta^{2}}{m^{2}}+8)-\frac{\theta^{2}}{m^{2}}+3\frac{\theta}{m}-2}{(1+u^{2})^{2}} u^{-\frac{\theta}{m}} \, . \end{align} Having these two solutions we can calculate the Wronskian, Eq.~(\ref{wronk}): \begin{align} \mathcal{W} = 2\frac{\theta}{m^{4}}(\theta^{2}-m^{2})(\theta^{2}-4m^{2}) \, . \end{align} With this, we can construct the equal time Green function: \begin{align} G_{\theta}(\tau,\tau) = \frac{f_{+}(m\tau)f_{-}(m\tau)}{\mathcal{W}}= \end{align} \begin{align} \nonumber = \frac{1}{2\theta (1+u^{2})^{4}} \left( (1+u^{8}) + (u^{6}+u^{2})\frac{4(\theta^{2}+2m^{2})}{(\theta^{2}-m^{2})} + u^{4} \frac{6(\theta^{4}-m^{2}\theta^{2}+12m^{4})}{(\theta^{2}-m^{2})(\theta^{2}-4m^{2})} \right) \, . \end{align} The Green function in this form is not so useful. We can use a partial fraction expansion in the $u$ variable to separate the different kinds of contributions to the Green function. Doing so, it is straightforward to see that we get a finite part and a divergent part when doing the Fourier integral in Eq.~(\ref{fourier}). We can separate both parts in the following way, which will facilitate the renormalization later: \begin{align} G_{\theta}(\tau,\tau) = G_{\theta}(\tau,\tau)^{div} + G_{\theta}(\tau,\tau)^{fin} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{div} = \frac{1}{2\theta} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}\frac{1}{\theta^{3}} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{fin} = \frac{6m^{4}}{\theta^{3}(\theta^{2}-m^{2})} \left( \frac{u^{2}+u^{6}}{(1+u^{2})^{4}} \right)+ \frac{6m^{4}u^{4}}{(1+u^{2})^{4}}\left( \frac{14\theta^{2}-8m^{2}}{\theta^{3}(\theta^{2}-m^{2})(\theta^{2}-4m^{2})} \right) \, . \end{align} Now we are ready to integrate this to get the full Green function that enters the one-loop equation: \begin{align} G(\vec{x},\vec{x};\tau,\tau) = \int \frac{\dd[3]{k}}{(2\pi)^{3}} G_{\theta}(\tau,\tau) \, , \end{align} with $\theta^{2}= \vec{k}^{2}+m^{2}$. The divergent part of the two-point function is: \begin{align} G^{div}(\vec{x},\vec{x};\tau,\tau) = \frac{1}{2}\mathcal{I}_{1} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}\mathcal{I}_{2} \, , \end{align} where $\mathcal{I}_{1}$ is quadratically divergent and $\mathcal{I}_{2}$ has a logarithmic divergence: \begin{align} \mathcal{I}_{1} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{1}{(\vec{k}^{2}+m^{2})^{1/2}} \, , \end{align} \begin{align} \mathcal{I}_{2} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{1}{(\vec{k}^{2}+m^{2})^{3/2}} \, . \end{align} To get the finite part we need to do two integrals: \begin{align} \label{int1} \mathcal{I}_{f1} = \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{6m^{4}}{\theta^{3}(\theta^{2}-m^{2})} \, , \end{align} \begin{align} \label{int2} \mathcal{I}_{f2}= \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{6m^{4}(14\theta^{2}-8m^{2})}{\theta^{3}(\theta^{2}-m^{2})(\theta^{2}-4m^{2})} \, .
\end{align} The angular part of these integrals is trivial, and the only difficult part is the radial integration. One important thing to remember is that the fields that generate such propagators have fixed boundary conditions to give the right vacuum projection. The boundary conditions fix the prescription to pass through the poles, and this means that the denominators in Eq.~(\ref{int1}) and Eq.~(\ref{int2}) carry a $-m^{2}+i\epsilon$ term. This information is important only for the poles inside the domain of integration. The next step to solve these integrals is to change to a dimensionless variable: \begin{align} \theta = \omega m \, . \end{align} Doing this transformation, it is straightforward to find the solution of these integrals: \begin{align} \mathcal{I}_{f1} = \frac{3m^{2}}{\pi^{2}} \int_{1}^{\infty} \dd{\omega} \frac{1}{\omega^{2}(\omega^{2}-1)^{1/2}}= \frac{3m^{2}}{\pi^{2}} \, , \end{align} \begin{align} \mathcal{I}_{f2}= \lim_{\epsilon \rightarrow 0} \frac{3m^{2}}{\pi^{2}} \int_{1}^{\infty} \dd{\omega} \frac{(14\omega^{2}-8)}{\omega^{2}(\omega^{2}-1)^{1/2}(\omega^{2}-4+i\epsilon)}= \end{align} \begin{align} \nonumber =\lim_{\epsilon \rightarrow 0} \frac{6m^{2}}{\pi^{2}} \left[ \frac{4i \sqrt{-3+i\epsilon}\sqrt{4i+\epsilon}+\sqrt[4]{-1}(24-7i\epsilon)\sinh^{-1}(\sqrt[4]{-1}\sqrt{4i+\epsilon})}{\sqrt{i\epsilon-3} (\epsilon +4i)^{3/2}} \right] \, . \end{align} Taking the $\epsilon$ limit we get: \begin{align} \mathcal{I}_{f2}= \frac{6m^{2}}{\pi^{2}} +\frac{3im^{2} \sqrt{3}}{\pi} - \frac{3m^{2}\sqrt{3}}{\pi^{2}} \ln(\frac{2+\sqrt{3}}{2-\sqrt{3}}) \, . \end{align} Thus, the full two-point function, Eq.~(\ref{samepoint}), is: \begin{align} G(\vec{x},\vec{x};\tau,\tau) = \frac{1}{2}\mathcal{I}_{1} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}\left( \mathcal{I}_{2}+\frac{1}{2\pi^{2}} \right) - \frac{6m^{2}u^{4}}{(1+u^{2})^{4}} F \, , \end{align} where we use the notation from~\cite{loop1}: \begin{align} F = \frac{\sqrt{3}}{2\pi^{2}} \left( \ln(\frac{2+\sqrt{3}}{2-\sqrt{3}}) - i \pi \right) \, . \end{align} An important point is that the $\epsilon$ limit is evaluated from the right ($\epsilon \rightarrow 0^{+}$), such that these expressions make sense. This result is in accordance with~\cite{loop1}. Now we need to renormalize our theory. We could use dimensional regularization to extract only the divergent part of the integrals using minimal subtraction; however, it is simpler to just absorb the whole divergent part, using the same scheme as in~\cite{loop1}. To better visualize the renormalization, we can rewrite the equation of motion using: \begin{align} \lambda \rightarrow \lambda_{R} + \delta \lambda \, , \end{align} \begin{align} m^{2} \rightarrow m^{2}_{R} + \delta m^{2} \, , \end{align} where the corrections start at order $\epsilon$ and are a power series in this variable.
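As an aside, the first of the finite integrals above is simple enough to cross-check symbolically; a minimal sympy sketch of the radial integral (our addition, in units $m=1$):
\begin{verbatim}
# Radial integral entering I_f1; the quoted closed form follows.
import sympy as sp

w = sp.symbols('omega', positive=True)
radial = sp.integrate(1/(w**2*sp.sqrt(w**2 - 1)), (w, 1, sp.oo))
print(radial)   # prints 1, hence I_f1 = 3 m^2 / pi^2
\end{verbatim}
The $\mathcal{I}_{f2}$ integral can be checked in the same way, keeping a small finite $\epsilon$ and comparing numerically with the closed form above. Returning now to the renormalization.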
To see that these two constants absorb the divergences, we insert the two-point function back into the equation of motion, Eq.~(\ref{loopexp}), to get: \begin{align} \label{qqtofazendo} \left[ (-\partial_{\tau}^{2}+m^{2})\phi_{0}(t) +\frac{\lambda}{3!}\phi_{0}^{3}(t)-\rho(t) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon} \left[ \left( \Box+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(x) \right) \phi_{1/2}(x) \right]+ \end{align} \begin{align} +\epsilon \left[ \left( -\partial_{\tau}^{2}+m^{2} + \frac{\lambda}{2}\phi_{0}^{2}(x) \right) \phi_{1}(x) + \frac{\lambda \phi_{0}}{2} \left( \frac{1}{2}\mathcal{I}_{1} + \frac{6m^{2}u^{2}}{(1+u^{2})^{2}}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) - \frac{6m^{2}u^{4}}{(1+u^{2})^{4}} F \right) \right]+ \mathcal{O}(\epsilon^{3/2}) =0 \, . \end{align} Using the definition of the tree level solution, Eq.~(\ref{unbroken}), the divergent parts take the distinctive forms: \begin{align} \phi_{0} \left( \frac{\lambda \mathcal{I}_{1}}{4} \right) \, , \end{align} \begin{align} \phi_{0}^{3} \left( -\frac{3\lambda^{2}}{48}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) \right) \, . \end{align} So we can see that a redefinition of the mass and coupling can absorb these divergences. We write these constants as: \begin{align} m^{2} = m_{R}^{2} +\epsilon \delta m^{2}_{1} + \mathcal{O}(\epsilon^{2}) \, , \end{align} \begin{align} \lambda = \lambda_{R} +\epsilon \delta \lambda_{1}+ \mathcal{O}(\epsilon^{2}) \, . \end{align} Using this definition in the equation of motion, Eq.~(\ref{qqtofazendo}), we get: \begin{align} \left[ (-\partial_{\tau}^{2}+m^{2}_{R})\phi_{0}(t) +\frac{\lambda_{R}}{3!}\phi_{0}^{3}(t)-\rho(t) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon} \left[ \left( \Box+m^{2}_{R} + \frac{\lambda_{R}}{2}\phi_{0}^{2}(t)\right) \phi_{1/2}(x) \right]+ \end{align} \begin{align} \nonumber +\epsilon \left[ \left( -\partial_{\tau}^{2}+m^{2}_{R} + \frac{\lambda_{R}}{2}\phi_{0}^{2}(\tau) \right) \phi_{1}(\tau) + \phi_{0} \left( \delta m^{2}_{1} + \frac{\lambda_{R}\mathcal{I}_{1}}{4} \right) + \phi_{0}^{3} \left( \frac{\delta\lambda_{1}}{3!}-\frac{\lambda_{R}^{2}}{16}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) \right) - \frac{\lambda_{R}\phi_{0}}{2}\frac{6m^{2}_{R}u^{4}}{(1+u^{2})^{4}} F \right]+ \end{align} \begin{align} \nonumber + \mathcal{O}(\epsilon^{3/2}) =0 \, . \end{align} The obvious choice for the counterterms is: \begin{align} \delta m^{2}_{1} =- \frac{\lambda_{R}\mathcal{I}_{1}}{4} \, , \end{align} \begin{align} \frac{\delta\lambda_{1}}{3!} =\frac{\lambda_{R}^{2}}{16}(\mathcal{I}_{2}+\frac{1}{2\pi^{2}}) \, . \end{align} Choosing this renormalization, we can proceed to solve for the one-loop generator of the amplitudes. The tree level solution stays the same as before, except for the replacement of bare by renormalized quantities. The one-loop equation simplifies to: \begin{align} \left( -\partial_{\tau}^{2}+m^{2}_{R} + \frac{\lambda_{R}}{2}\phi_{0}^{2}(\tau) \right) \phi_{1}(\tau) = \frac{6m_{R}^{2}u^{4}}{(1+u^{2})^{4}}F \frac{\lambda_{R}\phi_{0}}{2} \, . \end{align} Writing the equation in terms of $u(\tau)=e^{m_{R}\tau}$ we get: \begin{align}\label{qqto} \left( -\partial_{\tau}^{2}+m^{2}_{R} -\frac{24m_{R}^{2}u^{2}}{(1+u^{2})^{2}} \right) \phi_{1}(\tau) = 12 i m_{R}^{3} F \sqrt{3\lambda_{R}} \frac{u^{5}}{(1+u^{2})^{5}} \, .
\end{align} From the form of the right-hand side of Eq.~(\ref{qqto}) we can look for solutions of the form: \begin{align} \phi_{1}(\tau) = \alpha \frac{u^{5}}{(1+u^{2})^{3}} \, , \end{align} and confirm by direct computation that this is a solution if: \begin{align} \alpha = -\frac{iF \lambda_{R}}{8} \sqrt{\frac{48m_{R}^{2}}{\lambda_{R}}} \, . \end{align} Having this solution, we can analytically continue back to real time using the relation Eq.~(\ref{rot1}), with the renormalized mass inside $z(t)$. This gives us: \begin{align} \phi_{1}(t) =- \frac{F \lambda_{R}}{8} \left( \frac{\lambda_{R}}{48m_{R}^{2}} \right)^{2} \frac{z^{5}}{(1-\frac{\lambda_{R}}{48m_{R}^{2}}z^{2})^{3}} \, . \end{align} Then the full contribution to the generator of the amplitudes at one-loop is: \begin{align} \phi_{0}(t)+\phi_{1}(t)=\phi_{0+1}(t) = \frac{z}{(1-\frac{\lambda_{R}}{48m_{R}^{2}}z^{2})} \left[ 1 - \epsilon \frac{F\lambda_{R}}{8} \frac{(\frac{\lambda_{R}}{48m_{R}^{2}})^{2}z^{4}}{(1-\frac{\lambda_{R}}{48m_{R}^{2}}z^{2})^{2}} \right] \, . \end{align} To calculate the amplitude from this solution we just need to differentiate it $n$ times with respect to $z(t)$, following Eq.~(\ref{amp1}): \begin{align} \label{ampun} \mathcal{A}_{1loop} (1 \rightarrow 2k+1) = (2k+1)! \left( \frac{\lambda_{R}}{48m_{R}^{2}} \right)^{k} \left[ 1-\frac{F \lambda_{R} k(k-1)}{16} \right] \, , \end{align} where we set $\epsilon$ to one. This expression was first obtained in~\cite{loop1}. Here we can see that the factorial growth persists. The next correction only makes things worse, and this can be seen as an indication that we are using the wrong approximation for this regime. In a large-$k$ approximation of the amplitude, we would expect the corrections to be of order $1/k$. In this case, the true object that needs to be small for this approximation to be useful is $\lambda_{R}k^{2}$. In the regime where this is small, the expression is well defined. It is not known how much we can trust this expression outside this regime, even though we arrived at it only using that $\lambda_{R}$ is small. This is because, for large $n$, the loop correction becomes larger than the tree level one, signaling that the approximation is not good. We discuss this further in the context of a simpler toy model at the end of chapter~\ref{c3}. For now, let us see if this behavior is the same in the broken phase of this theory. \subsection{$\frac{\lambda\phi^{4}}{4!}$ in the Broken Phase} In the case of broken reflection symmetry we need to be more careful with the renormalization, because the shift done at tree level in Eq.~(\ref{shift}) is not the appropriate one. We keep the shift of variables done before, Eq.~(\ref{shift}), and when we get to the renormalization we will make the appropriate adjustments, as in~\cite{Smith-apr-93}. Aside from this, everything else is very similar to the unbroken case. We expand the $\hat{\sigma}$ operator: \begin{align} \hat{\sigma}(x) = \sigma_{0} \hat{I} + \sqrt{\epsilon} \hat{\sigma}_{1/2}(x) +\epsilon \hat{\sigma}_{1}(x) + \mathcal{O}(\epsilon^{3/2}) \, .
\end{align} The equation of motion, Eq.~(\ref{eqbro}), using this expansion is: \begin{align} \left[ (\Box + M^{2})\sigma_{0}(x) + \frac{\lambda \phi_{min}}{2} \sigma_{0}^{2}(x) + \frac{\lambda}{3!}\sigma_{0}^{3}(x) - \rho(x) \right] + \end{align} \begin{align} \nonumber + \sqrt{\epsilon} \left[ \left( \Box + M^{2} + \lambda\phi_{min} \sigma_{0}(x)+ \frac{\lambda}{2}\sigma_{0}^{2} \right)\sigma_{1/2}(x) \right]+ \end{align} \begin{align} + \epsilon \left[ \left( \Box + M^{2} + \lambda\phi_{min} \sigma_{0}(x)+ \frac{\lambda}{2}\sigma_{0}^{2} \right) \sigma_{1}(x) + \frac{\lambda}{2}(\phi_{\min}+\sigma_{0})\ev{T\left( \hat{\sigma}_{1/2}(x)\hat{\sigma}_{1/2}(x) \right)}{\Omega}_{\rho} \right] + \mathcal{O}(\epsilon^{3/2}) =0 \, , \end{align} where $\phi_{min}^{2}= \frac{3M^{2}}{\lambda}$ and $M= \sqrt{2}\mu$ is the mass of the excitation. We are interested in solving the one-loop contribution at threshold. Just like before, the tree level is already solved once we ask for the homogeneous solution, and the next order only gives the information about the two-point function that enters the one-loop equation. The tree level solution is the one we calculated before: \begin{align} \sigma_{0}(t)= \frac{z(t)}{1- \frac{z(t)}{2\phi_{\min}}} \, . \end{align} The operator that we need to invert to find the two-point function is now: \begin{align} \diamondsuit_{b} = \Box + M^{2} +\lambda\phi_{min} \sigma_{0} + \frac{\lambda\sigma_{0}^{2}}{2} \, . \end{align} Using the solution $\sigma_{0}(t)$ we run into the same problem of non-Hermiticity of the operator. Following the same steps as before, we perform a Wick rotation, taking care of the pole on the Euclidean line: \begin{align} -\frac{z(t)}{2\phi_{min}} = u(\tau) = e^{M\tau} \, , \end{align} \begin{align} \tau = it + i \frac{\pi}{M} +\frac{1}{M} \ln(\frac{z_{0}}{2\phi_{min}}) \, . \end{align} Doing this change of variables, the operator that we want to invert becomes: \begin{align} \diamondsuit_{b} = -\partial_{\tau}^{2} -\nabla^{2} + M^{2} -2\phi_{min}^{2} \lambda \frac{u}{(1+u)^{2}} \, . \end{align} We can rewrite the last term in a familiar form: \begin{align} \diamondsuit_{b} = -\partial_{\tau}^{2} -\nabla^{2} + M^{2} - \frac{3M^{2}}{2} \sech^{2}(\frac{M\tau}{2}) \, . \end{align} Now the steps are very similar: we do a partial Fourier transform in the spatial part. The remaining operator that we need to invert is almost what we had before: \begin{align} \left( -\partial_{\tau}^{2} +\theta^{2} - \frac{3M^{2}}{2} \sech^{2}(\frac{M\tau}{2})\right) G_{\theta}(\tau,\tau') = \delta(\tau-\tau') \, , \end{align} where $\theta^{2}=\vec{k}^{2}+M^{2}$. Changing to a dimensionless variable, we can cast the functions in a known form: \begin{align} \frac{M \tau}{2} =\xi \, , \end{align} \begin{align} \left( -\frac{M^{2}}{4}\partial_{\xi}^{2} +\theta^{2}-\frac{3M^{2}}{2} \sech^{2}(\xi) \right) f(\xi)=0 \, . \end{align} This is almost the equation that we had before, Eq.~(\ref{prob1}), just rescaling $\theta$: \begin{align} \theta \rightarrow 2\theta \, . \end{align} In terms of $u(\tau)$ the solution will have different powers because of the definition of $\xi$: \begin{align} u^{2} \rightarrow u \, .
\end{align} We already solved these equations in the last section, so the equal time Green function before integrating is: \begin{align} G_{\theta}(\tau,\tau) = G_{\theta}(\tau,\tau)^{div} + G_{\theta}(\tau,\tau)^{fin} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{div} = \frac{1}{4\theta} + \frac{3M^{2}}{\theta^{3}} \frac{u}{(1+u)^{2}} \, , \end{align} \begin{align} G_{\theta}(\tau,\tau)^{fin} = \frac{3M^{4}}{4\theta^{3}(4\theta^{2}-M^{2})}\frac{u}{(1+u)^{4}} + \frac{3M^{4}(56\theta^{2}-8M^{2})}{4\theta^{3}(4\theta^{2}-M^{2})(4\theta^{2}-4M^{2})}\frac{u^{2}}{(1+u)^{4}} + \end{align} \begin{align} \nonumber +\frac{3 M^{4}}{4 \theta^{3}(4\theta^{2}-M^{2})} \frac{u^{3}}{(1+u)^{4}} \, . \end{align} The same definitions of the divergent integrals $\mathcal{I}_{1}$ and $\mathcal{I}_{2}$ are used to write the divergent part of the two-point function as: \begin{align} G(\vec{x},\vec{x};\tau,\tau)^{div} = \frac{1}{4}\mathcal{I}_{1} + \frac{u}{(1+u)^{2}} \frac{3M^{2} \mathcal{I}_{2}}{4} \, . \end{align} Now for the finite part we need to solve the following two integrals: \begin{align} \mathcal{I}_{f3}= \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{1}{\theta^{3}(4\theta^{2}-M^{2})} \, , \end{align} \begin{align} \mathcal{I}_{f4}= \int \frac{\dd[3]{k}}{(2\pi)^{3}} \frac{56\theta^{2}-8M^{2}}{\theta^{3}(4\theta^{2}-M^{2})(4\theta^{2}-4M^{2})} \, . \end{align} This time there is no pole inside the domain of integration, so we do not need to worry about the Feynman prescription. The finite part in terms of these integrals is: \begin{align} G(\vec{x},\vec{x};\tau,\tau)^{fin} = \frac{3M^{4}}{4} \left( \mathcal{I}_{f3} \frac{u +u^{3}}{(1+u)^{4}} + \mathcal{I}_{f4} \frac{u^{2}}{(1+u)^{4}} \right) \, . \end{align} The integrals can be solved directly; in both cases we use the dimensionless coordinate $\theta=\omega M$: \begin{align} \mathcal{I}_{f3} = \frac{1}{M^{2}} \left( \frac{6 - \sqrt{3}\pi}{12\pi^{2}} \right) \, , \end{align} \begin{align} \mathcal{I}_{f4} = \frac{1}{M^{2}} \left( \frac{6 + \sqrt{3} \pi}{6\pi^{2}} \right) \, . \end{align} Using this information we can construct the two-point function, written in a form that will be convenient to interpret during the renormalization: \begin{align} G(\vec{x},\vec{x};\tau,\tau) = \frac{1}{4} \mathcal{I}_{1} + \frac{u}{(1+u)^{2}} \left( \frac{3M^{2}}{4} \mathcal{I}_{2} + \frac{3 M^{2}}{8\pi^{2}} - \frac{M^{2} \sqrt{3}}{16\pi} \right) + \frac{M^{2} \sqrt{3}}{4\pi} \frac{u^{2}}{(1+u)^{4}} \, . \end{align} We need to deal with the divergent part now. We can absorb it in the renormalization of the constants: \begin{align} M^{2} = M_{R}^{2} +\epsilon \delta M^{2}_{1} + \mathcal{O}(\epsilon^{2}) \, , \end{align} \begin{align} \lambda = \lambda_{R} + \epsilon \delta \lambda_{1} + \mathcal{O}(\epsilon^{2}) \, . \end{align} To do the renormalization properly, let us focus only on the contribution to the one-loop equation, which is of the form: \begin{align} \label{nacaba} \frac{\lambda}{2} (\phi_{min}+\sigma_{0}) G(\vec{x},\vec{x};\tau,\tau) \, . \end{align} As we said before, it is better to work with the unshifted field, because this shift receives quantum corrections.
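Both finite integrals can be cross-checked numerically; a minimal mpmath sketch (our addition, in units $M=1$; the radial forms follow from the $\dd[3]{k}$ measure with $\theta=\omega M$):
\begin{verbatim}
# Numerical cross-check of I_f3 and I_f4 against the quoted closed forms.
import mpmath as mp

I_f3 = mp.quad(lambda w: mp.sqrt(w**2 - 1)/(w**2*(4*w**2 - 1)),
               [1, mp.inf])/(2*mp.pi**2)
I_f4 = mp.quad(lambda w: (56*w**2 - 8)/(w**2*(4*w**2 - 1)*mp.sqrt(w**2 - 1)),
               [1, mp.inf])/(8*mp.pi**2)

print(I_f3, (6 - mp.sqrt(3)*mp.pi)/(12*mp.pi**2))   # agree
print(I_f4, (6 + mp.sqrt(3)*mp.pi)/(6*mp.pi**2))    # agree
\end{verbatim}
We now return to the renormalization of the broken phase.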
Using the definition $\frac{\phi_{0}}{\phi_{\min}}= \gamma$ we can write Eq.~(\ref{nacaba}) in general as: \begin{align} \frac{\lambda \phi_{min}}{2} \gamma \left[ A - \frac{B}{4}(\gamma^{2}-1) +\frac{C}{16}(\gamma^{2}-1)^{2} \right] \, , \end{align} where the constants are: \begin{align} A= \frac{\mathcal{I}_{1}}{4} \, , \end{align} \begin{align} B = \frac{3M^{2}}{4} \mathcal{I}_{2} + \frac{3 M^{2}}{8\pi^{2}} - \frac{M^{2} \sqrt{3}}{16\pi} \, , \end{align} \begin{align} C = \frac{M^{2} \sqrt{3}}{4\pi} \, . \end{align} Only $A$ and $B$ contain divergences. Expanding in powers of $\gamma$, the contribution to the one-loop equation is written as: \begin{align} \gamma \frac{\lambda \phi_{min}}{2} \left( A+\frac{B}{4} \right) - \frac{\lambda \phi_{min}}{2}\gamma^{3}\frac{B}{4} + \frac{\lambda \phi_{min}}{2} C \left( \frac{\gamma^{5}}{16}-\frac{\gamma^{3}}{8}+ \frac{\gamma}{16} \right) \, . \end{align} The renormalization scheme of~\cite{Smith-apr-93} absorbs the first two terms. Because we are using the unshifted field, it is easy to read off the renormalized mass and coupling. We just need to be careful because in the unshifted field the mass has the ``wrong'' sign: \begin{align} m^{2}= m_{R}^{2}+\epsilon \delta m_{1}^{2} + \mathcal{O}(\epsilon^{2}) \, , \end{align} \begin{align} \lambda= \lambda_{R} + \epsilon \delta \lambda_{1} + \mathcal{O}(\epsilon^{2}) \, . \end{align} Using this in the equation of motion, we see that: \begin{align} \delta m_{1}^{2} = \frac{\lambda_{R}}{8} \left(\mathcal{I}_{1} - M_{R}^{2}(\frac{3 \mathcal{I}_{2}}{4} + \frac{3}{8\pi^{2}} -\frac{\sqrt{3}}{16\pi}) \right) \, , \end{align} \begin{align} \frac{\delta \lambda_{1}}{3!} = \frac{\lambda_{R}^{2}}{48} \left( \frac{3\mathcal{I}_{2}}{2} +\frac{3}{4\pi^{2}} - \frac{\sqrt{3}}{16\pi} \right) \, . \end{align} In this renormalization scheme we now have to solve the equation for $\sigma_{1}$, where we shift by the renormalized couplings. The equation for the tree level stays the same, and the one-loop equation becomes: \begin{align} \left( -\partial_{\tau}^{2} + M^{2}_{R} + \lambda_{R}\phi_{min}^{R} \sigma_{0}(\tau)+ \frac{\lambda_{R}}{2}\sigma_{0}^{2}(\tau) \right) \sigma_{1}(\tau) = -\frac{\lambda_{R} \phi_{min}^{R}M_{R}^{2}\sqrt{3}}{8\pi} \left( \frac{\gamma^{5}}{16}-\frac{\gamma^{3}}{8}+ \frac{\gamma}{16} \right) \, . \end{align} We can write everything in terms of $u(\xi)$, where $\xi=M_{R}\tau$ is the dimensionless time coordinate. The equation in these variables is: \begin{align} \label{eqeq} \left( \partial_{\xi}^{2} -1 + \frac{6u}{(1+u)} - \frac{6u^{2}}{(1+u)^{2}} \right)\sigma_{1}(\xi) = \frac{3M_{R}\sqrt{\lambda_{R}} u^{2}(1-u)}{8\pi(1+u)^{5}} \, . \end{align} To solve this equation we look for an ansatz of the form: \begin{align} \sigma_{1} = \alpha \frac{u^{2}}{(1+u)^{3}} \, . \end{align} Plugging this ansatz into Eq.~(\ref{eqeq}), it is straightforward to see that $\alpha =\frac{M_{R}\sqrt{\lambda_{R}}}{8\pi}$. This means that the one-loop solution is: \begin{align} \sigma_{1} = \frac{M_{R}\sqrt{\lambda_{R}}}{8\pi} \frac{u^{2}}{(1+u)^{3}} \, . \end{align} The complete solution for the generator of the threshold amplitudes at one-loop is then: \begin{align} \sigma_{0}+\sigma_{1}=\sigma_{0+1}(\tau)= \frac{-2\phi_{\min}^{R} u}{(1+u)} \left( 1+ \epsilon\frac{\lambda_{R}^{3/2}}{96\pi M_{R}}\left(-2\phi_{min}^{R} \frac{u}{(1+u)^{2}}\right) \right) \, .
\end{align} With this solution we can analytically continue to the whole complex $\tau$ plane to get the answer in real time, in terms of $z(t)$: \begin{align} \sigma_{0+1}(t) = \frac{z}{1-\frac{z}{2\phi_{\min}^{R}}} \left(1 + \epsilon\frac{\lambda_{R}^{3/2}}{96\pi M_{R}} \frac{z}{(1-\frac{z}{2\phi_{min}^{R}})^{2}}\right) \, . \end{align} Having this solution, we can find the one-loop amplitude at threshold by taking $n$ derivatives with respect to $z(t)$, using Eq.~(\ref{ampbroken1}) (and setting $\epsilon$ to one): \begin{align} \label{ampbro} \mathcal{A}_{1loop}^{B}(1 \rightarrow n) = n! \left( \frac{1}{2\phi_{min}^{R}}\right)^{n-1}\left(1+\frac{\lambda_{R}\sqrt{3}}{96\pi} n(n-1)\right) \, . \end{align} We can see that this contribution is real, unlike in the unbroken case, Eq.~(\ref{ampun}), and that it comes with a different sign; the factorial growth is still present already at this first order. It is clear that the relevant coupling is, in this case, $\lambda n^{2}$: this is the object that needs to be small for the approximation to make sense. This is similar to the previous case, so in the high multiplicity limit it is not straightforward to recover information from these initial terms. This discussion will be continued at the end of chapter~\ref{c3}, where we discuss the range of validity of perturbation theory for high multiplicity calculations. If we want to understand this better, we need to try to recover the momentum dependence of these amplitudes. Without it, we cannot say anything meaningful about the decay rate. It is worth pointing out that all the results so far are exact in $0$ spatial dimensions, where the theory is just quantum mechanics. There is no phase space in this case, and we still have the factorial growth of these amplitudes. This could be problematic for the unitarity of the theory if these expressions can be trusted in the region of interest. Next, we show a possible way to understand why this factorial growth happens in the perturbative regime. After that, we start to investigate how we can recover the momentum dependence, working out some simple cases and trying to generalize to high multiplicity. In the end, we review some general results in the literature about this regime, so we can start the exploration of the Higgsplosion proposal. \subsection{Discussion About the Factorial Growth} Usually, the series that we deal with in Quantum Mechanics and Quantum Field Theory are divergent. This divergence is not a problem, because we know how to deal with this kind of series with different summation methods, like Borel or Pad\'e~\cite{bender1,bender2}. To understand this other kind of divergence in the amplitude, we need to understand why the series is divergent in the first place. The analysis is done for $\lambda x^{4}$ in Quantum Mechanics, but we can generalize it to the Field Theory case. There is a connection between the graphs with $N$ vertices and the coefficients of a given series: \begin{align} \sum_{\text{graphs}} \text{all graphs with $N$ vertices} = a_{N} \, . \end{align} Now, if we have $N$ vertices in $x^{4}$ theory, the structure would be something like what is represented in Figure~\ref{xx}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{combi} \caption{Combinatorics for the $\phi^{4}$ interaction.} \label{xx} \end{figure} We have to connect all these vertices. In a usual process we also have to connect some external lines, but there is only a finite number of them, so we can ignore them in the large-$N$ limit.
If we pick one of the lines, we have $(4N-1)$ possible connections, for the next line $(4N-3)$, and so on. Going all the way down we have: \begin{align} a_{N} \sim (4N-1)!! \quad \text{as} \quad N \rightarrow \infty \, . \end{align} In this counting we are overcounting: there are $N!$ different arrangements of the vertices that give the same result. We are still overcounting because the four lines on each vertex are identical, giving a factor of $(4!)^{N}$. In the end, we can write the coefficient of the series as: \begin{align} a_{N} \sim \frac{(4N-1)!!}{N! (24)^{N}} \quad \text{as} \quad N \rightarrow \infty \, . \end{align} It could be that some of these graphs are disconnected and do not contribute to physical processes. In the counting of the graphs the chance that this happens is small, so we do not consider it; in the large-$N$ limit it does not make a difference. Now, in this limit the coefficient of the series behaves like: \begin{align} a_{N} \sim N! \, C^{N} \quad \text{as} \quad N \rightarrow \infty \, . \end{align} The factorial growth of the coefficients indicates the divergent nature of the series. This is just a heuristic argument, but it captures the spirit of what is happening; the actual integrals could give small contributions and change this picture. The series diverges because, at small $N$, the small coefficients dominate; however, at large $N$ the number of diagrams is so large that, no matter how small the coupling is, the terms blow up. The behavior of a typical divergent series can be seen in Figure~\ref{divergent}. \begin{figure}[h!] \centering \includegraphics[width=15cm]{divergent} \caption{Typical behavior of a divergent series. The optimal truncation of a series like this is at its lowest point, by the definition of an asymptotic series.} \label{divergent} \end{figure} Now we can see why these amplitudes grow factorially: we have too many diagrams for large multiplicity amplitudes already at tree level, as shown in Figure~\ref{fig:1to5} and Figure~\ref{fig:1to7}. There is no small contribution that wins in the large-$n$ limit. The picture is significantly different in this case, because there is never a region where the small coefficients dominate. This means that we can still extract information from the series, but any partial sum will give a bad approximation to the function. If we apply summation methods to the loop corrections, we get the real behavior of these amplitudes. In this framework we cannot trust the perturbative expressions even at tree level, but they carry information about the actual function. In high multiplicity processes not only $\lambda$ matters, but $n$ as well. This argument does not hold for different kinds of approximations, like semiclassical calculations, but in those cases it should be possible to do a similar analysis to understand the limitations of the results. \section{Investigation of Beyond Threshold Amplitudes} So far, we have only calculated threshold amplitudes. With this information alone we cannot reconstruct the decay rate, because at threshold the phase space is just a point. Nevertheless, this result is exact in $0$ spatial dimensions, i.e., in the usual $x^{4}$ theory. If we want to see whether the decay rate grows exponentially at high multiplicity, we need to analyze whether the phase space contribution has enough strength to combat the factorial growth of Eq.~(\ref{ampun}) and Eq.~(\ref{ampbro}). Going beyond the threshold at high multiplicity is an incredibly difficult task.
There are too many momenta in the final state. To overcome this problem, we can try to construct these amplitudes in the near-threshold limit, where all final particles are non-relativistic. In this Section, we try to generalize Brown's method to beyond-threshold amplitudes and discuss the difficulties of doing so. After that, we use Feynman rules at small multiplicity to try to understand what we should expect in the near-threshold limit. Finally, we comment on some results in the literature on beyond-threshold amplitudes and what we can take from them. \subsection{Naive Generalization of Brown's Method and its Problems} We saw that Brown's method~\cite{Brown-nov-92} of using the expectation value of the field in the presence of a source works well for the threshold amplitude. If we want to generalize this, a first naive attempt is to try a source of the type: \begin{align} \label{salsa2} \rho(x_{\mu}) = \rho_{0} e^{ik_{\mu}x^{\mu}} \, . \end{align} In the limit $\vec{k} \rightarrow 0$ we recover the previous solution. If we do this at tree level, we need to solve the equation: \begin{align} (\Box + m^{2}) \phi_{0} + \frac{\lambda}{3!} \phi_{0}^{3} = \rho_{0} e^{ik_{\mu}x^{\mu}} \, . \end{align} It turns out that we can solve this equation in the same way as in the threshold case. The only difference is that the mass shell condition is now for the four-momentum. The solution is similar because: \begin{align} \Box e^{n i k_{\mu}x^{\mu}} = -n^{2} k^{2} e^{n i k_{\mu} x^{\mu}} \, . \end{align} Then, in the on-shell limit we get the same result as before: \begin{align} \phi_{0}(x)= \frac{z(x_{\mu})}{\left( 1-\frac{\lambda}{48m^{2}}z(x_{\mu})^{2}\right)} \, , \end{align} where $z(x_{\mu})$ is the analogue of Eq.~(\ref{zdete}): \begin{align} z(x_{\mu})= z_{0}e^{i k_{\mu} x^{\mu}} \, , \end{align} \begin{align} z_{0} = -\frac{\rho_{0}}{k^{2}-m^{2}} \, . \end{align} This is strange, because in the LSZ reduction formula we get something like: \begin{align} \fdv{\phi_{0}[\rho]}{\rho(x_{i})}= \frac{1}{p^{2}_{i}-m^{2}} \delta^{4}(x-x_{i}) \pdv{\phi_{0}}{z} \, . \end{align} The problem is that, in the end, we set $\rho$ to zero and all the momentum dependence vanishes, so we obtain a threshold result again. Even though this is a solution of the equation of motion with a space-time dependent source, it is not enough to find beyond-threshold amplitudes. It seems that this source, Eq.~(\ref{salsa2}), can only excite one frequency mode, and does not carry enough information about the field to construct beyond-threshold amplitudes: \begin{align} \int \dd[4]{x} \phi_{0}(x)\rho(x) \propto \phi_{0}(k) \, . \end{align} The alternative is to find a more comprehensive source, one that we can still solve for, that has the right threshold limit, and that admits at least some non-relativistic generalization. It turns out that this is a hard task because of the non-linearity, and for now we cannot advance further. Before proceeding to the next part, it is worth pointing out one thing. The limit $z=0$ in the LSZ reduction formula may appear strange and, in this case, responsible for the vanishing of the momentum dependence. If we do this computation with caution, we see that in the double limit $k^{2} \rightarrow m^{2}$ and $\rho \rightarrow 0$ we are left with a constant $z_{0}$ term, so it is possible to try instead $z=z_{0}e^{ik_{\mu}x^{\mu}}$.
However, if we do all the work before going to the mass shell, the expression for $z(x)$ is finite, and we can take the limit of the source going to zero there, taking $z_{0}$ to zero first. The order of these limits is essential, and in the LSZ reduction formula the $\rho \rightarrow 0$ limit is the first one, so this should not alter the results. \subsection{Tree Level Investigation of Beyond Threshold ($1 \to 3$)} If we want to recover the momentum dependence of a high multiplicity amplitude, it is worth calculating simpler cases first. The region of interest has all external particles in the non-relativistic limit. Here we work out the first non-trivial case, of 1 particle going to 3 with non-relativistic momenta. This case is useful to check the threshold computation done in the previous Section using Feynman diagrams, and to see the shape of the first momentum correction. The process that we are interested in is represented in Figure~\ref{1t3}. \begin{figure}[h!] \centering \includegraphics[width=8cm]{1to3} \caption{The amputated amplitude that we are interested in: the $1 \to 3$ process. Generated with FeynArts~\cite{feyart}.} \label{1t3} \end{figure} Here we have to remember that we do not amputate the incoming leg, as we usually do. Then the amplitude is: \begin{align} \mathcal{M}(1 \rightarrow 3) = (p^{2}-m^{2})\mathcal{A}(1 \rightarrow 3) \, , \end{align} where $\mathcal{M}(1 \rightarrow 3)$ is constructed from Feynman diagrams and $p_{\mu}$ is the momentum of the incoming particle. Using the usual Feynman rule for the $\phi^{4}$ theory, as represented in Figure~\ref{fe}, \begin{figure}[h!] \centering \includegraphics[width=8cm]{feynrule} \caption{Feynman rule for the $\phi^{4}$ case in the normalization that we are using. Generated with FeynArts~\cite{feyart}.} \label{fe} \end{figure} we get the amplitude: \begin{align} \label{ampamp} \mathcal{A}(1 \rightarrow 3) = \frac{\lambda}{p^{2}-m^{2}} \, . \end{align} To investigate the non-relativistic limit of this expression we need to expand $p^{2}$ in the appropriate manner. Calling the outgoing momenta $q_{i}$, in the non-relativistic limit we have $\abs{\vec{q}_{i}} \ll m$. The definition of the incoming momentum is: \begin{align} p = q_{1}+q_{2}+q_{3} \, , \end{align} where we write each external momentum as: \begin{align} q_{i} = (\gamma_{i} m, \vec{q}_{i}) \, . \end{align} In the non-relativistic limit we expand the first component as: \begin{align} q_{i} = (m + \frac{\vec{q}_{i}^{\hspace{0.1cm}2}}{2m} - \frac{\vec{q}_{i}^{\hspace{0.1cm}4}}{8m^{3}} + \dots, \vec{q}_{i}) \, , \end{align} which means that the denominator of Eq.~(\ref{ampamp}) can be written as: \begin{align} (q_{1}+q_{2}+q_{3})^{2}-m^{2}= 2 \left( m^{2} + q_{1}\cdot q_{2} + q_{1}\cdot q_{3}+ q_{2}\cdot q_{3} \right)= \end{align} \begin{align} \nonumber 2 \Bigg( 4m^{2} + \vec{q}^{\hspace{0.1cm}2}_{1} +\vec{q}^{\hspace{0.1cm}2}_{2} +\vec{q}^{\hspace{0.1cm}2}_{3} - \vec{q}_{1}\vec{q}_{2} - \vec{q}_{1}\vec{q}_{3} - \vec{q}_{2}\vec{q}_{3} + \end{align} \begin{align} \nonumber +\frac{1}{4m^{2}} ( \vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{2} +\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{3}+\vec{q}^{\hspace{0.1cm}2}_{2}\vec{q}^{\hspace{0.1cm}2}_{3}-\vec{q}^{\hspace{0.1cm}4}_{1}-\vec{q}^{\hspace{0.1cm}4}_{2}-\vec{q}^{\hspace{0.1cm}4}_{3}) + \dots \Bigg) \, .
\end{align} The interesting feature of this expression, which persists in the high multiplicity cases, is that at leading order we can write it in terms of the non-relativistic energy of the outgoing particles in a general frame: \begin{align} \label{nonenergy} E = \frac{1}{2m} \sum \vec{q}_{i}^{\hspace{0.1cm}2} -\frac{1}{2mn} \left( \sum \vec{q}_{i} \right)^{2} = \frac{n-1}{2mn} \sum \vec{q}_{i}^{\hspace{0.1cm}2} - \frac{1}{nm} \sum_{i \neq j} \vec{q}_{i}\vec{q}_{j} \, . \end{align} Using Eq.~(\ref{nonenergy}), the denominator of Eq.~(\ref{ampamp}) can be written as: \begin{align} \label{caa} (q_{1}+q_{2}+q_{3})^{2}-m^{2}= 8m^{2}+6mE +\frac{1}{2m^{2}} ( \vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{2} +\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{3}+\vec{q}^{\hspace{0.1cm}2}_{2}\vec{q}^{\hspace{0.1cm}2}_{3}-\vec{q}^{\hspace{0.1cm}4}_{1}-\vec{q}^{\hspace{0.1cm}4}_{2}-\vec{q}^{\hspace{0.1cm}4}_{3}) + \dots \, . \end{align} The first important thing to notice is that at the next order we cannot write the denominator in terms of $E$ only. It is easy to see this because $E^{2}$ would contain odd powers of the individual momenta, which do not appear in Eq.~(\ref{caa}). This feature remains when we increase the number of external particles. We can see that in the large-$n$ limit this problem disappears, leaving only even powers of momentum in Eq.~(\ref{nonenergy}). If we write the amplitude as an expansion in small spatial momenta (small $E$) we get: \begin{align} \label{ampi} \mathcal{A}(1 \rightarrow 3) = \frac{\lambda}{8m^{2}} \Bigg( 1-\frac{3}{4} \frac{E}{m} + \frac{9}{8} \frac{E^{2}}{m^{2}} - \end{align} \begin{align} \nonumber -\frac{1}{8m^{4}}( -\vec{q}^{\hspace{0.1cm}3}_{1}\vec{q}_{2}-\vec{q}^{\hspace{0.1cm}3}_{1}\vec{q}_{3}-\vec{q}^{\hspace{0.1cm}3}_{3}\vec{q}_{1}-\vec{q}^{\hspace{0.1cm}3}_{2}\vec{q}_{1}-\vec{q}^{\hspace{0.1cm}3}_{2}\vec{q}_{3}-\vec{q}^{\hspace{0.1cm}3}_{3}\vec{q}_{2} + 2\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{2}+2\vec{q}^{\hspace{0.1cm}2}_{1}\vec{q}^{\hspace{0.1cm}2}_{3}+2\vec{q}^{\hspace{0.1cm}2}_{2}\vec{q}^{\hspace{0.1cm}2}_{3} )+ \dots \Bigg) \, . \end{align} From this result we can see that, for small momenta, the first correction depends only on the non-relativistic energy. If we go to higher orders in the momentum expansion, it is expected that we cannot describe the system with $E$ alone, and other functions will play an important role. From Eq.~(\ref{nonenergy}) we can see that these other objects are subdominant, and only the non-relativistic energy dominates. However, even with this expression we cannot say much about the decay rate. If we are working in a limit where the kinetic energy is small, we only cover a shell of the phase space, as represented in Figure~\ref{deca}. \begin{figure}[h!] \centering \includegraphics[width=10cm]{decayratehavewant} \caption{Difference between the decay rate that we can get using a non-relativistic approximation and what it would be useful to have.} \label{deca} \end{figure} This means that the decay rate can only be calculated on this shell, which is smaller than $2m$. For a good approximation of the decay rate we would expect to be able to calculate in a broader region. The limited range of validity of this expression does not help us much in understanding the behavior of the decay rate, but it is a step in that direction. Next, we do the same calculation for the $1 \to 5$ case, where a trend in the behavior starts to appear. One thing that we can take from this is that the threshold computation works, matching the leading term obtained from Eq.~(\ref{amp1}).
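As a consistency check (ours, but immediate from the expressions above), setting all spatial momenta to zero in Eq.~(\ref{ampi}) reproduces the tree level threshold amplitude: \begin{align} \mathcal{A}(1\rightarrow 3)\Big|_{\vec{q}_{i}=0} = \frac{\lambda}{8m^{2}} = 3!\left(\frac{\lambda}{48m^{2}}\right)^{1} \, , \end{align} which is $\mathcal{A}(1 \rightarrow 2k+1)$ with $k=1$.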
\subsection{Tree Level Investigation of Beyond Threshold ($1\rightarrow 5$) and its Generalities} Let us go one step further and analyze the process of $1\rightarrow 5$ particles in the same non-relativistic limit. Now a complication appears, because we have ten different combinations of external leg positions for the Feynman diagram, as shown in Figure~\ref{1t5}. These are all the possible combinations that the internal propagator can have, considering that the particles are identical. \begin{figure}[h!] \centering \includegraphics[width=15cm]{1to5a} \caption{Amputated amplitude that we are interested in for the $1\rightarrow 5$ process. Generated with FeynArts~\cite{feyart}.} \label{1t5} \end{figure} The amplitude can be written as: \begin{align} \label{15} \mathcal{A}(1 \rightarrow 5) = \frac{1}{p^{2}-m^{2}} \sum_{ijk} \frac{1}{(q_{i}+q_{j}+q_{k})^{2}-m^{2}} \lambda^{2} \, , \end{align} where the sum over $i,j,k$ runs over the combinations listed in Table~\ref{tab:1}. \begin{table}[h!] \centering \begin{tabular}{|l|l|l|} \hline i & j & k \\ \hline 1 & 2 & 3 \\ \hline 1 & 2 & 4 \\ \hline 1 & 2 & 5 \\ \hline 1 & 3 & 4 \\ \hline 1 & 3 & 5 \\ \hline 1 & 4 & 5 \\ \hline 2 & 3 & 4 \\ \hline 2 & 3 & 5 \\ \hline 2 & 4 & 5 \\ \hline 3 & 4 & 5 \\ \hline \end{tabular} \caption{Possible combinations for the sum over $i$, $j$ and $k$.} \label{tab:1} \end{table} The difficulty in taking the non-relativistic limit is that now we have two propagators: one carrying all the momenta, and the other only three at a time. Taking the non-relativistic limit just like before, we can write the amplitude as: \begin{align} \label{abosa} \mathcal{A}(1 \rightarrow 5) = \left( \frac{\lambda}{2}\right)^{2} \frac{1}{\Delta_{1}+\Delta_{2} t^{2} + \Delta_{3}t^{4}} \sum_{ijk} \frac{1}{\Delta_{4}+\Delta_{5}^{ijk}t^{2} -\frac{1}{4m^{2}}\Delta_{6}^{ijk} t^{4}} \, . \end{align} In this expression $t$ is just a fictitious parameter that helps to organize the expansion in small spatial momenta; it keeps track of the powers of $\left|\vec{q}_{i}\right|$. In the end we expand Eq.~(\ref{abosa}) in $t$ and set $t=1$ to get the non-relativistic approximation.
These $\Delta$'s appear from expanding the denominators of Eq.~(\ref{15}) up to quartic order: \begin{align} \Delta_{1} = 12 m^{2} \, , \end{align} \begin{align} \nonumber \Delta_{2} ={}& 2 ( \vec{q}_{1}^{\hspace{0.1cm}2}+\vec{q}_{2}^{\hspace{0.1cm}2}+\vec{q}_{3}^{\hspace{0.1cm}2}+\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{5}^{\hspace{0.1cm}2}) - \vec{q}_{1}\vec{q}_{2} - \vec{q}_{1}\vec{q}_{3} - \vec{q}_{1}\vec{q}_{4} -\vec{q}_{1}\vec{q}_{5} \\ & - \vec{q}_{2}\vec{q}_{3} - \vec{q}_{2}\vec{q}_{4} - \vec{q}_{2}\vec{q}_{5} - \vec{q}_{3}\vec{q}_{4} - \vec{q}_{3}\vec{q}_{5} - \vec{q}_{4}\vec{q}_{5} \, , \end{align} \begin{align} \nonumber \Delta_{3} ={}& -\frac{1}{2m^{2}} ( \vec{q}_{1}^{\hspace{0.1cm}4}+\vec{q}_{2}^{\hspace{0.1cm}4}+\vec{q}_{3}^{\hspace{0.1cm}4}+\vec{q}_{4}^{\hspace{0.1cm}4}+\vec{q}_{5}^{\hspace{0.1cm}4})+\frac{1}{4m^{2}} ( \vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{2}^{\hspace{0.1cm}2} +\vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{3}^{\hspace{0.1cm}2}+\vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{1}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2} \\ & +\vec{q}_{2}^{\hspace{0.1cm}2}\vec{q}_{3}^{\hspace{0.1cm}2}+\vec{q}_{2}^{\hspace{0.1cm}2}\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{2}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}+\vec{q}_{3}^{\hspace{0.1cm}2}\vec{q}_{4}^{\hspace{0.1cm}2}+\vec{q}_{3}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}+\vec{q}_{4}^{\hspace{0.1cm}2}\vec{q}_{5}^{\hspace{0.1cm}2}) \, , \end{align} \begin{align} \Delta_{4} =4m^{2} \, , \end{align} \begin{align} \Delta_{5}^{ijk} = \vec{q}_{i}^{\hspace{0.1cm}2}+\vec{q}_{j}^{\hspace{0.1cm}2}+\vec{q}_{k}^{\hspace{0.1cm}2} -\vec{q}_{i}\vec{q}_{j}-\vec{q}_{i}\vec{q}_{k}-\vec{q}_{j}\vec{q}_{k} \, , \end{align} \begin{align} \Delta_{6}^{ijk} = \vec{q}_{i}^{\hspace{0.1cm}4} + \vec{q}_{j}^{\hspace{0.1cm}4} + \vec{q}_{k}^{\hspace{0.1cm}4} - \vec{q}_{i}^{\hspace{0.1cm}2}\vec{q}_{j}^{\hspace{0.1cm}2} - \vec{q}_{i}^{\hspace{0.1cm}2}\vec{q}_{k}^{\hspace{0.1cm}2} -\vec{q}_{j}^{\hspace{0.1cm}2}\vec{q}_{k}^{\hspace{0.1cm}2} \, . \end{align} With this information we can do the non-relativistic expansion of the amplitude: \begin{align} \mathcal{A}(1 \rightarrow 5) = \lambda^{2} \sum_{ijk} \left( \frac{1}{192m^{4}} - \frac{1}{2304m^{6}}(\Delta_{2}+3\Delta^{ijk}_{5}) + \dots \right) \, . \end{align} The first term gives the right threshold amplitude; the second term we will rewrite in terms of the non-relativistic energy. Using the definition of the non-relativistic energy, Eq.~(\ref{nonenergy}), it is a direct but tedious computation to show: \begin{align} \sum_{ijk} \Delta_{2} + 3\Delta_{5}^{ijk} = 95mE \, . \end{align} The amplitude up to first order is: \begin{align} \mathcal{A}(1 \rightarrow 5) = 5! \left( \frac{\lambda}{48m^{2}}\right)^{2}\left(1-\frac{19}{24} \frac{E}{m} + \dots \right) \, . \end{align} This result is in agreement with~\cite{Papadopoulos-nov-92,Libanov-jul-94}, and it shows that, indeed, the first correction beyond threshold comes only from the non-relativistic energy. Again, if we try to go to the next order this ceases to be the case, because the square of the energy contains odd powers of momentum that do not appear in the calculation. The next step in this investigation is to show and discuss some results coming from recursion relations and the semiclassical computation; mostly we will comment on them and try to interpret them inside this framework. What we can take from these cases is that the information beyond threshold is tough to get.
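As a quick cross-check of this identity, the following \texttt{sympy} sketch (illustrative only; pair sums run over $i<j$, matching Eq.~(\ref{nonenergy})) verifies both the $95mE$ combination and the resulting $19/24$ coefficient: \begin{verbatim}
import sympy as sp
from itertools import combinations

m = sp.symbols('m', positive=True)
q = [sp.Matrix(sp.symbols(f'q{i}1:4')) for i in range(1, 6)]

S2 = sum(v.dot(v) for v in q)                     # sum of |q_i|^2
P = sum(a.dot(b) for a, b in combinations(q, 2))  # sum over pairs i < j

Delta2 = 2*S2 - P
E = sp.Rational(4, 10)/m * S2 - P/(5*m)           # Eq. (nonenergy), n = 5

total = 0
for trip in combinations(q, 3):                   # the 10 rows of Table 1
    s2 = sum(v.dot(v) for v in trip)
    p2 = sum(a.dot(b) for a, b in combinations(trip, 2))
    total += Delta2 + 3*(s2 - p2)                 # Delta_5^{ijk} = s2 - p2

print(sp.simplify(total - 95*m*E))                # -> 0
# first-order coefficient: (95/2304) / (10/192) = 19/24
print(sp.Rational(95, 2304) / sp.Rational(10, 192))
\end{verbatim}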
\subsection{Recurrence Relations and General Results of Beyond Threshold Amplitudes at High Multiplicity} The focus so far has been on studying these high multiplicity processes in the perturbative regime, and we have already seen why these amplitudes grow factorially. The last piece of information on the perturbative regime comes from recursion relations; here we highlight their history and main results. After that, we comment on some recent results coming from the semiclassical calculation. These results were essential to motivate the studies of high multiplicity processes after the initial wave of results covered so far. The first improvement came in 1992, when E.N. Argyres and Costas G. Papadopoulos used recurrence relations to find amplitudes with one particle of arbitrary momentum in the final state~\cite{Papadopoulos-nov-92}. The general form of the recursion relation is represented in Figure~\ref{rec}. They found the solution for a general monomial interaction of the form: \begin{align} V(\phi) = \lambda_{m} \frac{\phi^{m}}{m!} \, . \end{align} \begin{figure}[h!] \centering \includegraphics[width=15cm]{recursion} \caption{Recursion relation for $\phi^{4}$ theory in the unbroken phase, $n=n_{1}+n_{2}+n_{3}$.} \label{rec} \end{figure} The specialization to our $\phi^{4}$ case gives: \begin{align} \mathcal{A}(1 \rightarrow 2k+1) = (2k)!(2k)(2k+1+\omega) \left( \frac{\lambda_{4}}{48}\right)^{k} \left( k + 1 + \frac{2(3-\omega)}{3+\omega} k + \frac{(3-\omega)(1-\omega)}{(3+\omega)(5+\omega)}(k-1) \right) \, , \end{align} where the mass is set to one and $\omega=2E-1$, with $E$ being the energy of the final particle with nonzero spatial momentum. The threshold result is recovered when $\omega=1$. With these results we can also access $2 \rightarrow n$ processes by continuing to negative $\omega$, which was shown to recover the threshold results: \begin{align} \mathcal{A}(2 \rightarrow n) = 0 \quad n>4 \, . \end{align} In the same paper the authors calculated the case of the broken phase with one particle of arbitrary momentum in the final state: \begin{align} \mathcal{A}^{B} (1 \rightarrow n ) = (n-1)! (n-1)(n+\omega) \left( \frac{\lambda_{4}}{12}\right)^{(n-1)/2} \left( n + 2(n-1)\frac{(1-\omega)}{(2+\omega)} - \frac{\omega (1-\omega)}{(2+\omega)(3+\omega)}\right) \, , \end{align} and again the limit $\omega =1$ checks out. These results seem to show that, in the perturbative region, even momentum dependence cannot tame the behavior of these amplitudes. After these results, people realized that by restricting to the quantum mechanical limit of the problem one can obtain the complete result. This motivated the study of transition amplitudes in the $x^{4}$ oscillator by C.A. Diamantis, B.C. Georgalas, A.B. Lahanas and E. Papantonopoulos~\cite{anarmo}. They looked for bounds on the transition amplitudes generated by an external source, a simplification of our problem to the (1+0)D case. They introduced the concept of the holy grail function $F(\lambda n)$ for the study of these amplitudes: \begin{align} \mathcal{A}(1 \rightarrow n) = \kappa e^{\frac{F(\lambda n)}{\lambda^{2}}} \, . \end{align} In~\cite{anarmo} it was shown that, in the trusted region, these transitions never grow exponentially. This is not a definitive answer, however, because the whole parameter space cannot be reached. This result was significant for understanding the unitarity of the quantum mechanical case, but better expressions for the amplitudes were needed. The situation changed dramatically when M.V. Libanov, V.A. Rubakov, D.T. Son and S.V.
Troitsky published two papers about the exponentiation of these amplitudes~\cite{Libanov-mar-95,Libanov-jul-94}. It was conjectured that the amplitudes at high multiplicity should have the special form: \begin{align} \mathcal{A}(1 \rightarrow n) \propto \sqrt{n!} e^{\frac{F(\lambda n,\epsilon)}{\lambda}} \, , \end{align} inspired by the instantonic cross section. The behavior of the amplitude is completely determined by the holy grail function $F$. To give evidence for this conjecture they redid all the results so far in this formalism and wrote the corresponding holy grail function. For the $\frac{\lambda \phi^{4}}{4}$ case~\cite{Libanov-mar-95} at tree level and threshold: \begin{align} F_{unbroken} = \frac{\lambda n}{2} \ln(\frac{\lambda n}{8}) - \frac{\lambda n}{2} \, , \end{align} \begin{align} F_{broken} = \frac{\lambda n}{2} \ln(\frac{\lambda n}{2}) - \frac{\lambda n}{2} \, . \end{align} For the threshold amplitude at tree level of the O(2) model with interaction $\frac{\lambda (\phi_{1}^{2}+\phi_{2}^{2})^{2}}{4}$~\cite{Libanov-mar-95}: \begin{align} F_{unbroken} = \lambda n \left( \ln (\frac{\lambda n(\sqrt{m_{1}}+\sqrt{m_{2}})^{2}}{8(m_{1}+m_{2})}) -1 \right) \, , \end{align} \begin{align} F_{broken} = \lambda n \left( \ln (\frac{\lambda n(\sqrt{m_{1}}+\sqrt{2m_{2}})^{2}}{2(m_{1}+2m_{2})}) -1 \right) \, . \end{align} Then they introduced new results for beyond threshold and loop corrections. They solved the recursion relations of Figure~\ref{rec} in the large $n$ limit, using the fact that at leading order only the non-relativistic energy contributes. In this region they showed that for the unbroken $\frac{\lambda \phi^{4}}{4}$ case at tree level: \begin{align} \mathcal{A}(1 \rightarrow n) = n! \left( \frac{\lambda}{8} \right)^{(n-1)/2} e^{-\frac{5}{6}E} \, , \end{align} where $m=1$ and $E$ is the non-relativistic kinetic energy of the final particles. Another important result was to show that at leading order the $n$-loop corrections exponentiate to the form: \begin{align} \mathcal{A}(1 \rightarrow n) = \mathcal{A}_{tree}(1 \rightarrow n) e^{B\lambda n^{2}} \, . \end{align} In the second paper, they started to lay the groundwork for the generalization of the WKB method to semiclassical calculations in Quantum Field Theory. For the first time, we have a closed expression for the large multiplicity amplitudes with all the momentum dependence. All of these results were still perturbative, and they kept showing the same behavior. The next year, D.T. Son~\cite{Son-may-95} showed how to generalize the WKB method to calculate semiclassical amplitudes in the regime: \begin{align} \lambda \rightarrow 0 \, , \quad n \rightarrow \infty \, , \quad \lambda n = g = \text{const} \, , \quad \epsilon = \text{const} \, , \end{align} where $\epsilon$ is the non-relativistic energy per particle per unit of mass in the final state: \begin{align} \epsilon = \frac{E-nm}{nm} \, . \end{align} This was one of the missing pieces to understand these processes. The question is now written in terms of the right coupling, $g= \lambda n$. That coupling has a 't Hooft-like form, similar to a large-$N$ Yang--Mills expansion. Finding an expansion for small and large $g$, we can then see whether these processes can grow exponentially. The remarkable thing about this semiclassical calculation is that we can get the decay rate automatically, without the need to integrate over the $n$-particle phase space.
The semiclassical calculation only computes few $\rightarrow$ many processes, but it is expected that the exponential part is independent of the number of initial particles, provided that number is small. At the time only the limit $g \ll 1$ could be explored, which at first glance is not so useful: \begin{align}\label{smallg} F(g,\epsilon) = g \ln(\frac{g}{16}) - g + \frac{3g}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} g \epsilon + \frac{ \sqrt{3}}{4\pi} g^{2} \, . \end{align} This was still not enough, because we need the expression in the limit of large $g$ to say anything sensible. It was, nevertheless, a big step in the right direction. A decade passed before the new generation of results started to appear. First came the generalization of the recurrence relations to different particle content, bringing these results closer to the Standard Model, by Valentin V. Khoze~\cite{Khoze-apr-14}. These results were still perturbative but showed the same behavior as the simpler scalar case. After that, the most important results for understanding high multiplicity processes came from Valentin V. Khoze again~\cite{Khoze-jun-18}. They used the D.T. Son method plus some new tricks to obtain a semiclassical amplitude in the relevant limit of $g \gg 1 $. This result only exists for $\phi^{4}$ in (1+3)D in the broken phase, but it is a revolutionary solution nonetheless: \begin{align} \label{eq10} \Gamma(\epsilon) \propto e^{n \left( \ln(\frac{g}{4}) +0.85\sqrt{g} -1 + \frac{3}{2}(\ln(\frac{\epsilon}{3\pi}) +1) - \frac{25}{12} \epsilon \right)} \, . \end{align} Eq.~(\ref{eq10}) is valid in the regime: \begin{align} \lambda \rightarrow 0 \, , \quad n \rightarrow \infty \, ,\quad \lambda n = g \gg 1 , \quad \epsilon \ll 1 \, . \end{align} This was the first result outside ordinary perturbation theory where the exponential growth appeared. The analysis of this result is done at the end of the next chapter. To conclude, it is worth showing another impressive result, coming from Joerg Jaeckel and Sebastian Schenk~\cite{Jaeckel-jun-18}. They did the full perturbative analysis for the quantum mechanical case, showing that the amplitude indeed exponentiates and that, after resumming the series, there is no unitarity violation. This tells us that in this case we cannot trust the partial sum of the initial terms of the perturbative series, as we expected. \chapter{The path ahead} \pagenumbering{arabic} \section{Discussion of the state of the art} \section{What directions we can go} \section{Conclusion}
{ "timestamp": "2020-12-17T02:21:29", "yymm": "2012", "arxiv_id": "2012.09028", "language": "en", "url": "https://arxiv.org/abs/2012.09028" }
\section{Introduction} \label{intro} Mass loss from massive stars impacts their evolution \citep[e.g.,][]{2012ARA&A..50..107L} as well as the evolution and dynamics of the surrounding interstellar medium \citep[ISM;][]{1975ApJ...200L.107C}. One of the most visible manifestations of stellar mass loss, a bow shock, forms when the stellar wind emanating from a star moving through the ISM reaches supersonic relative velocities \citep[e.g.,][]{Wilkin_1996}. The properties of such stellar wind bow shocks encode information about the mass-loss history of the star \citep[e.g.,][]{2008ApJ...685L.141R,Mackey_Mohamed_Neilson_Langer_Meyer_2012,Gvaramadze_Menten_Kniazev_Langer_Mackey_Kraus_Meyer_Kaminski_2013} and the structure of the surrounding ISM \citep[e.g.,][]{2011ApJ...737..100T}. Most observed bow shocks are associated with massive runaway stars; however, they are also observed around a variety of stellar sources including asymptotic giant branch stars \citep[e.g.,][]{Ueta_2006}, pulsars \citep[e.g.,][]{cordes_romani_1996}, cataclysmic variables \citep[e.g.,][]{Buren_93}, and Algols \citep[e.g.,][]{Mayer_2016}. These bow shocks are typically detected at optical \citep[e.g.,][]{gull_1979} and infrared (IR) wavelengths \citep[e.g.,][]{buren_mccray_1988,Ueta_2006,Ueta_Izumiura_Yamamura_Nakada_Matsuura_Ita_Tanab_Fukushi_2014}, though a few have been detected at X-ray \citep[e.g.,][]{Lopez_2012}, ultraviolet \citep[e.g.,][]{LeBertre_2012}, and radio \citep[e.g.,][]{Benaglia_2010} wavelengths. In recent years, several dedicated surveys have revealed large numbers of bow shock nebulae in the Milky Way \citep[e.g.,][]{Peri_Benaglia_Brookes_Stevens_Isequilla_2011,Peri_2015,Kobulnicky_2016}, opening new avenues of research into stellar winds and ISM characteristics. In this paper, we probe the connections between polarimetric observations and the physics of stellar wind bow shocks. (Hereafter, we will use the term ``bow shock'' to describe not only a true physical shock, but also a region of enhanced density arising from wind-ISM interactions and having the same geometrical shape as a bow shock.) Polarization by scattering samples the opacity of a medium, and encodes information about the relative orientation of a scattering region in relation to illuminating sources and the observer. In the case of electron (Thomson) scattering, interaction of unpolarized incident radiation with a free electron produces scattered radiation that is $100\%$ linearly polarized when the scattering angle is $90\degr$, independent of wavelength; the angle of polarization is perpendicular to the plane defined by the incident and scattered rays \citep{rybicki_1979}. In the case of dust scattering, asymmetric dust grains produce scattered radiation whose linear polarization magnitude and position angle are wavelength-dependent, and which may also be circularly polarized \citep{Henyey_1941,White_1979}. Polarization has been detected in two bow shock sources near the Galactic centre, with magnitudes up to a few percent \citep{Buchholz_2011,2013A&A...551A..35R}. Such values are easily measured with current polarimetric instrumentation, suggesting that polarization may be a valuable technique with which to study the wealth of newly discovered bow shocks. Although many researchers have developed computational models of stellar wind bow shocks \citep[e.g., ][]{Gustafsson_2010,Mohamed_2013,Christie_2016}, polarization signatures have not generally been considered.
However, a few recent papers have modelled the polarization of specific objects with bow shocks. \cite{2013msao.confE.172N} analytically modelled the near-IR polarization from a bow shock around Betelgeuse. \citealt{Shahzamanian_2016} and \citealt{zajacek_2017} used a sophisticated 3-D Monte Carlo radiative transfer (MCRT) code to simulate the polarization behaviour of a dust-scattering bow shock and other possible circumstellar structures around the Dusty S-cluster Object (DSO), an unusual infrared-excess source near the Galactic centre. This contribution is the first of two papers in which we use Monte Carlo numerical methods to explore the polarization signatures arising from generalized stellar wind bow shock structures. Our code (\textit{SLIP}; \citealt{Hoffman_2007}) is related to the one used by \citealt{Shahzamanian_2016} and \citealt{zajacek_2017}, but our implementation is different, as discussed below in Section~\ref{methods}. The MCRT approach is easily adaptable to non-spherical geometries while allowing for consideration of optical depth effects (i.e., the influence of multiple scattering on the polarization of escaping light). Our goal in this paper is to formulate the problem of predicting the polarization produced within an idealized bow shock structure and to investigate the effects of various input parameters on the resulting polarization behaviour, assuming Thomson scattering only for simplicity. The second paper (hereafter Paper~II) will investigate the effects of dust opacity on observed polarization, a scenario with broader applications. Our paper is organized as follows. In Section \ref{methods}, we discuss the \textit{SLIP} code and the features of our models. In Section \ref{analytic}, we present analytic results for our idealized bow shock cases, valid strictly in the optically thin limit. Although limited in applicability, the analytic results provide context for interpreting the numerical results from \textit{SLIP}. In this section we also discuss comparisons between the analytic and numerical simulations. In Section \ref{results}, we present and interpret numerical results for the polarization produced in both resolved and unresolved cases, as functions of the temperature and optical depth of the scattering material in the bow shock. We discuss how our results may aid in interpretation of observed polarization signals in Section \ref{obsimp}. Finally, we offer concluding remarks in Section \ref{conclusion}. \section{Methods} \label{methods} We constructed our simulations using the Supernova LIne Profile (\textit{SLIP}) code (\citealt{Hoffman_2007}). \textit{SLIP} uses the MCRT method (e.g., \citealt{Whitney_2011}) to track photons through a three-dimensional spherical polar grid as in \cite{Whitney_Wolff_2002}. For the axisymmetric simulations presented here, we define a grid with $100$ radial cells and $101$ cells in the polar ($\theta$) direction. At the centre of this grid we place a finite spherical photon source, surrounded by a circumstellar scattering region composed of pure hydrogen in local thermodynamic equilibrium (LTE). We do not assume this circumstellar material (CSM) is heated by the central star. Instead we define its temperature $T$ (which for simplicity we assume is constant throughout the region) as a user-specified input parameter governing the ionization fraction $x$ within the scattering region.
Given a specified reference optical depth $\tau_0$, \textit{SLIP} first calculates the number density of free electrons via the equation~$n_+ = \tau_0/(0.4\,m_H\,\Delta R_0)$, where $m_H$ is the proton mass and $\Delta R_0$ is the radial thickness of the scattering region at the reference location. These quantities are defined in greater detail later in this section. With this value of $n_+$ and the input temperature $T$, we then apply the Saha equation to calculate $n_0$, the number density of neutral atoms: \begin{equation} \frac{n_+}{n_0} = \frac{Z_+}{Z_0} \frac{2}{n_e h^3} (2 \pi m_e k T)^{\frac{3}{2}} e^{\frac{-\chi_i}{kT}} \label{saha} \end{equation} \noindent In this equation, $n_e$ represents the number density of free electrons, $m_e$ the electron mass, and $k$ the Boltzmann constant. $Z_+$ and $Z_0$ represent the partition functions of the ion and neutral atom, respectively, and $\chi_i$ is the ionization potential. From the calculated $n_0$ value, we obtain the ionization fraction $x=n_+/n_{tot}$ and finally the opacity of the CSM, $\kappa=0.4x$. By doing this, we assume a constant ionization fraction and opacity throughout the CSM, which simplifies the Monte Carlo calculations described below.
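As an illustration of this bookkeeping, the following minimal sketch in cgs units (not the \textit{SLIP} implementation itself; the shell thickness and temperature below are representative values only) computes $n_+$ from $\tau_0$ and then applies Eq.~\ref{saha} with $Z_+/Z_0 = 1/2$ for hydrogen: \begin{verbatim}
import numpy as np

h = 6.626e-27          # Planck constant [erg s]
k = 1.381e-16          # Boltzmann constant [erg/K]
m_e = 9.109e-28        # electron mass [g]
m_H = 1.673e-24        # proton mass [g]
chi = 13.6 * 1.602e-12 # hydrogen ionization potential [erg]

def ionization_fraction(tau0, dR0, T):
    """x = n_+/n_tot from the reference optical depth tau0,
    shell thickness dR0 [cm], and temperature T [K]."""
    n_plus = tau0 / (0.4 * m_H * dR0)   # free-electron density
    # Saha: n_+/n_0 = (Z_+/Z_0)(2/n_e h^3)(2 pi m_e k T)^(3/2) e^(-chi/kT)
    ratio = 0.5 * (2.0 / (n_plus * h**3)) \
            * (2*np.pi*m_e*k*T)**1.5 * np.exp(-chi/(k*T))
    n_0 = n_plus / ratio
    return n_plus / (n_plus + n_0)

# e.g. a shell ~0.35 AU thick at T = 10,000 K:
x = ionization_fraction(tau0=0.5, dR0=0.35*1.496e13, T=1.0e4)
print(f"x = {x:.4f}, opacity kappa = {0.4*x:.3f} cm^2/g")
\end{verbatim} At such densities and temperatures near $10^4$ K the gas comes out nearly fully ionized, consistent with the electron-scattering scenarios considered below.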
The code does not take into account any expansion of the CSM, which is a reasonable approximation for the case of a roughly stationary stellar wind bow shock. Following the basic MCRT prescription, \textit{SLIP} emits virtual, initially unpolarized ``photons'' from the central star (or other photon source) and tracks them as they scatter within the CSM. The code determines a photon's behaviour by generating weighted random numbers corresponding to known probability distributions that depend on the optical depth $\tau$ and albedo $a$ of the scattering region \citep{Whitney_2011}. A strength of our implementation is that in addition to the star (or ``central source''), \textit{SLIP} also allows photons to be emitted from within the CSM itself (which we refer to as the ``distributed source''). In the distributed emission case, we allow photons to be emitted isotropically from the volume of the CSM. Because the CSM density is not constant (see the discussion of the bow shock implementation below), we use the rejection method to ensure that the number of emitted photons at a given location is proportional to the local CSM density. In the sections below, we investigate the differences between these two emission scenarios. As photons interact with the scattering region, \textit{SLIP} performs the numerical optical depth integration described in \citet{Code_Whitney_1995} and \citet{Whitney_2011}. After each integration, a random number compared with the photon's albedo determines whether it scatters or becomes absorbed; the photon's Stokes parameters are updated after each scattering event by applying the standard Mueller matrix multiplication \citep{Chandrasekhar_1946,Code_Whitney_1995,Whitney_2011}. Once a photon exits the simulation (i.e., it ``escapes''), its Stokes parameters are combined with those of all previously tracked photons in the appropriate output bin corresponding to the observer's viewing angle. A single \textit{SLIP} run produces results for all viewing angles ($i=0\degr-180\degr$). Within each output bin, we sum the Stokes vectors due to all $N$ photons in the bin and apply normalization factors in $\theta$ and $\phi$ to ensure that output fluxes have the correct units. We determine the $1\sigma$ uncertainty for each Stokes parameter in each bin by calculating the standard deviation of that parameter over all $N$ photons in the bin and normalizing it by $\sqrt{N}$ to account for the Poisson statistics of this counting experiment \citep{Wood_96a,Whitney_2011}. For simplicity, in this paper we consider electron (Thomson) scattering only, both for the case of pure scattering (albedo $a=1$) and for the case of scattering plus hydrogen absorption ($a<1$). Although \textit{SLIP} has the capability to simulate polarized spectra, because electron scattering is a grey process, our results are monochromatic for the pure-scattering case. That is, these results are comparable to polarization observations at any wavelength. When we consider hydrogen absorption, we choose a representative optical wavelength of 6040 \AA~and discuss how absorption effects modify the pure-scattering results. At higher temperatures for which our calculated ionization fraction is very close to 1, these electron-scattering scenarios simulate a fully ionized environment such as a region of shocked gas. This focus on electron scattering and single bow-shock structures distinguishes the simulations in this paper from those of \citealt{Shahzamanian_2016} and \citealt{zajacek_2017}. In Paper~II, we will present wavelength-dependent dust-scattering results from \textit{SLIP} and compare them with the bow-shock contribution to the polarization of the DSO as calculated by \citealt{Shahzamanian_2016} and \citealt{zajacek_2017}. \begin{figure} \includegraphics[width = \columnwidth]{densitymap-jlh} \caption{Cross-section of our model geometry, along with a depiction of the bow shock density as a function of angle (greyscale). The star is at the origin and moving in the direction of the arrow ($+z$). The central green solid line represents the central radius of the bow shock, which in our models we define with the Wilkin analytical solution (Eq.~\ref{radwill}). Due to the difficulty of representing this equation graphically, in this figure we have used a graphical approximation of this function; however, the greyscale image is a discretisation of the actual Wilkin equation. The red and blue outer dashed lines represent our adopted inner and outer CSM radii, separated by a constant radial thickness $f$ as described in Section \ref{methods}. The density decreases from the bow head toward the wings of the shock (Eq.~\ref{rho}); we adopt an exponential decline in density in the far wings of the shock (Eq.~\ref{exp}). The central source is shown exaggerated in size for reference. The angle $\theta$ is the polar angle measured from the $+z$ axis in our model grid, while the angle $i$ is the inclination or viewing angle for a distant observer.} \label{geom} \end{figure} Rather than simulating a particular object (as in \citealt{2013msao.confE.172N}, \citealt{Shahzamanian_2016}, and \citealt{zajacek_2017}), our goal here is to understand the polarization produced by electron scattering within a generalized bow shock. Thus, to describe our scattering region, we adopt the \cite{Wilkin_1996} analytic model of an axisymmetric bow shock formed when a star drives a wind into the stationary ISM while also moving along a straight line. This formulation assumes a spherically symmetric stellar wind and a locally uniform ISM. The resulting bow shock structure and properties depend on the properties of the stellar wind, the speed of the star through the ISM, and the local ISM density.
The solution provides for the shape, mass surface density, and velocity flow in an infinitesimally thin axisymmetric bow shock. The essential properties of this solution are the standoff radius of the bow head, the opening angle of the bow shock, and a characteristic surface density for the structure. The standoff radius $R_0$ is defined as the location along the star's trajectory at which the ram pressures of the ISM and stellar wind are equal, i.e., $\rho_\text{w} V_\text{w}^2 = \rho_{I} V_\star^2$. Here $\rho_\text{w}$ represents the density of the stellar wind, $V_w$ the stellar wind velocity, $V_{\star}$ the stellar velocity, and $\rho_{I}$ the ISM density. With the stellar mass-loss rate represented by $\dot m_w$, this condition yields \begin{equation} R_0=\sqrt[]{\frac{\dot m_w V_w}{4\pi \rho_{I} V_\star^2}} \label{standoff} \end{equation} \noindent \citep{Wilkin_1996}. Using momentum conservation and force balance, the bow shock radius as a function of polar angle is then given by \begin{equation} R(\theta)= \sqrt{3}R_0 \csc \theta \, \sqrt{1-\theta \cot\theta}\; . \label{radwill} \end{equation} We use this equation to define the central radius of our model bow shock structure (Fig.~\ref{geom}). As described in \S~\ref{results}, we choose $R_0$ to give a convenient scale to our simulations. Note that at $\theta=\pi/2$, the extent of the bow shock is $\sqrt{3}R_0$. \cite{Wilkin_1996} also determined the mass surface density $\sigma$ of the idealized, infinitesimally thin bow shock shell as a function of polar angle using conservation of momentum: \begin{equation} \sigma(\theta) = \frac{1}{2}\,R_0 \,\rho_{I} \frac{[2 \alpha (1-\cos\theta)+\tilde{\varpi}^{2}]^{2}}{\tilde{\varpi} \sqrt{(\theta-\sin\theta \cos\theta)^{2}+(\tilde{\varpi}^{2} - \sin^2\theta)^{2}}}\; . \label{sigma} \end{equation} \noindent Here $\tilde{\varpi}$ is a convenient parametrization defined by $\tilde{\varpi}^{2} = 3(1-\theta \cot\theta)$. In the wings of the bow shock, $\tilde{\varpi} \gg 1$, giving $\sigma \propto \tilde{\varpi}$. The symbol $\alpha$ parametrizes the ratio of the translational speed of the star to its stellar wind velocity ($\alpha={V_*}/{V_w}$); in principle, the \cite{Wilkin_1996} model is valid only for $0<\alpha<1$. When $\alpha = 0$, the stellar wind forms a spherical bubble and the standoff radius is undefined, whereas $\alpha > 1$ means the star is travelling faster than its wind. For hot, massive stars with radiation-driven winds \citep{Cassinelli-textbook}, the wind velocity is much faster than that of the star, so that $\alpha \ll 1$. On the other hand, for cool stars, the wind velocity can be slow relative to that of the star. For instance, the value of $\alpha$ for the O star $\zeta$ Pup is 0.1 \citep{Puls_1996}, while for Betelgeuse $\alpha$ is close to unity \citep{Mackey_Mohamed_Neilson_Langer_Meyer_2012}. In our models, we assume $\alpha=0.1$ to represent the hot-star case. Within \textit{SLIP}, it is not possible to encode an infinitesimally thin shell geometry with a divergent surface density. Instead, we construct a finite scattering region that reproduces the mass surface density function from Equation \ref{sigma}. As noted above, we define the shock's mid-region with the Wilkin shape (Eq.~\ref{radwill}). Then we calculate the volume density necessary to match the Wilkin mass surface density (Eq.~\ref{sigma}) via $\rho (\theta) = \sigma(\theta) \,b(\theta)/\Delta R(\theta)$, where $\Delta R(\theta)$ is the radial thickness of the finite bow shock region. 
Here $b(\theta)$ is a geometrical correction factor arising from the $\theta$ dependence of the bow shock's radius; we discuss this factor in detail in Appendix \ref{appendix}. Parametrizing the CSM thickness with the fractional quantity $f$ (where $f$ is constant over the shape and $0<f<1$), we calculate $\Delta R(\theta)$ as follows: \begin{equation} \Delta R(\theta)=R_{\textrm{out}}(\theta)-R_{\textrm{in}}(\theta) \equiv fR(\theta)\;. \label{chi} \end{equation} \noindent In this equation, $R(\theta)$ is the radius of the bow shock at the interface of the ISM and stellar wind, given by Eq.~\ref{radwill}, $R_{\textrm{in}}(\theta)$ is the inner radius of the finite structure, and $R_{\textrm{out}}(\theta)$ is the outer radius. Approximations to these three functions are depicted as coloured lines in Fig.~\ref{geom}, while the actual discretized density is shown in greyscale. For a given value of $\theta$, $R_{\textrm{in}}$ and $R_{\textrm{out}}$ are equidistant from $R_0$. We checked how changing the radial thickness $\Delta R(\theta)$ affects the simulated polarization signatures in the case of pure scattering ($a=1$). For values ranging from $f=0.1$ to $f=0.5$ (representing physically thin shells), we found insignificant variation in the polarization behaviour at any viewing angle. Thus, in our simulations, we assume $f=0.25$, which ensures the thickness of the shell is at least one grid cell within the code structure. With the definitions above, the volume density within our scattering region is given by \begin{equation} \rho(\theta) = \frac{R_0 \rho_{I}b(\theta)}{2\Delta R(\theta)} \left\lbrace \frac{[2 \alpha (1-\cos\theta)+\tilde{\varpi}^{2}]^{2}}{\tilde{\varpi} \sqrt{(\theta-\sin\theta \cos\theta)^{2}+(\tilde{\varpi}^{2} - \sin^2\theta)^{2}}}\right\rbrace . \label{rho} \end{equation} In the models presented here, we vary the density of the CSM by using as an input parameter the optical depth at a convenient arbitrary reference angle, $\theta_0=1.76 \textrm{ rad}=95.4^{\circ}$. We refer to this reference optical depth as $\tau_0$ and scale $\rho(\theta_0)$ to match it (effectively choosing $\rho_{I}$ to give the desired $\tau_0$). We then use Eq.~\ref{rho} to determine the density for other values of $\theta$; this results in a CSM density that is nearly, but not exactly, constant with $\theta$. We then calculate $\tau(\theta)$ based on the density and thickness of the CSM. The variation of density and optical depth as a function of polar angle can be seen in Fig.~\ref{taudens}. The increase in optical depth with $\theta$ is due to the increasing behaviour of both $\sigma(\theta)$ (Eq.~\ref{sigma}; see discussion in \citealt{Wilkin_1996}) and $b(\theta)$ (Appendix~\ref{appendix}). To maintain a finite simulation size, we truncate the bow shock for large values of $\theta$ as described in Section \ref{results} below. \begin{figure} \includegraphics[width=\columnwidth]{taudens_scaled} \caption{Variation in mass density ($\rho$ [g cm$^{-3}$]; \textit{black points, right-hand axis}) and local normalized optical depth ($\tau/\tau_0$; \textit{red points; left-hand axis}) as a function of polar angle $\theta$. For each model, we specify the optical depth $\tau_0$ at the reference angle $\theta_0$ (\textit{dashed lines}; \S~\ref{methods}). The discrete nature of the optical depth is due to the distribution of the analytical bow shock shape across model grid cells. The behaviour of the optical depth shows that the average number of scattering events per photon increases slowly with $\theta$ up to the cutoff angle (\S~\ref{results}) and decreases rapidly thereafter.} \label{taudens} \end{figure}
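To make this construction concrete, the following minimal Python sketch (not the \textit{SLIP} code itself; for simplicity it sets the geometric factor $b(\theta)=1$, so the normalization is only illustrative) evaluates the shape, thickness, and normalized optical depth profile: \begin{verbatim}
import numpy as np

alpha, f, theta0 = 0.1, 0.25, 1.76     # velocity ratio, thickness, ref angle

def R(theta, R0=1.0):
    """Wilkin bow shock radius, Eq. (radwill)."""
    return np.sqrt(3.0)*R0/np.sin(theta) * np.sqrt(1.0 - theta/np.tan(theta))

def sigma(theta, R0=1.0, rho_I=1.0):
    """Wilkin mass surface density, Eq. (sigma)."""
    w2 = 3.0*(1.0 - theta/np.tan(theta))
    num = (2*alpha*(1.0 - np.cos(theta)) + w2)**2
    den = np.sqrt(w2)*np.sqrt((theta - np.sin(theta)*np.cos(theta))**2
                              + (w2 - np.sin(theta)**2)**2)
    return 0.5*R0*rho_I*num/den

theta = np.linspace(0.05, 2.1, 200)    # out to the cutoff angle theta_c
dR = f * R(theta)                      # Eq. (chi)
rho = sigma(theta)/dR                  # Eq. (rho) with b(theta) = 1
tau = rho * dR                         # radial depth ~ rho * Delta R = sigma
tau /= np.interp(theta0, theta, tau)   # normalize: tau(theta_0) = 1
print(np.round(tau[::40], 3))          # slow rise toward the wings
\end{verbatim} Because the radial depth scales as $\rho\,\Delta R \propto \sigma(\theta)$ in this simplification, the sketch reproduces the slow rise of $\tau/\tau_0$ toward the wings seen in Fig.~\ref{taudens}.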
In the geometry of Fig.~\ref{geom}, $+Q$ Stokes vectors correspond to equatorial scattering, vertical polarization vectors (i.e., in the $\pm z$ direction), and polarization position angles near $\Psi=0\degr$. Negative or $-Q$ Stokes vectors correspond to polar scattering, horizontal polarization vectors (i.e., in the plane orthogonal to $\pm z$), and position angles near $\Psi=90\degr$. Stokes $\pm U$ denotes diagonal polarization vectors rotated $45\degr$ from the $\pm Q$ vectors. (In our axisymmetric models, $U$ averages to zero for unresolved cases.) Because we consider only electron scattering, a symmetric process, our models produce no Stokes $V$ (circular) polarization. Thus, the fractional polarization $p$ (usually expressed as a percentage) is defined as \begin{equation} p(\%) = \frac{\sqrt{Q^2+U^2}}{I} \times 100. \end{equation} \section{Results from analytical model}\label{analytic} Before embarking on a parameter study using the MCRT methods of the \textit{SLIP} code, we first consider semi-analytic results for scattering within a bow shock in the optically thin limit. Because the stellar wind bow shock of \cite{Wilkin_1996} is explicitly axisymmetric, the methods of \cite{brown_1977} can be used to determine its expected polarization as a function of viewing angle in the spatially unresolved case. \cite{brown_1977} derived a simple expression for the linear polarization from an axisymmetric and optically thin scattering region illuminated by a central point source. Considering scattered light only, the fractional polarization can be expressed as \begin{equation} p=\frac{\sin^2 i}{h(\gamma)+\sin^2 i}\;, \label{pdef} \end{equation} \noindent where $i$ is the viewing angle measured from the $z$-axis as shown in Fig.~\ref{geom}, $\gamma$ is a ``shape factor'' to be discussed below, and $h(\gamma) = 2(1+\gamma)/(1-3\gamma)$. Brown \& McLean use the symbol $\alpha$ in the expression for $p$ (their Eqn.~17), but we choose to define $h(\gamma)\equiv2\alpha$ because we have already introduced a different $\alpha$ in the context of the bow shock geometry. The shape factor $\gamma$ is given by \begin{equation} \gamma =\frac {\int_{r=0}^{\infty}\int_{\mu = -1}^{1} n(r,\mu) \, \mu^2 \, dr \, d\mu}{\int_{r=0}^{\infty}\int_{\mu = -1}^{1} n(r,\mu) \, dr \, d\mu}\;, \label{gamma} \end{equation} \noindent where $\mu =\cos{\theta}$ (with $\theta$ representing the polar angle measured from the $z$-axis; Fig.~\ref{geom}) and $n(r,\mu)$ is the number density of the scattering region \citep{brown_1977}. Values of $\gamma$ range from 0 to 1, with $\gamma=1/3$ representing a spherical envelope, $\gamma=0$ a planar disk, and $\gamma=1$ a bipolar jet. These geometries produce maximum polarization values (at viewing angles of $90\degr$) of $0\%$, $33\%$, and $100\%$, respectively. In the specific case of the Wilkin model, we have \begin{equation} n(r,\mu) = \frac{\sigma(\mu) }{\Delta R(\mu)}\; . \label{column} \end{equation} \noindent When we substitute our expressions for $\sigma$ from Eq.~(\ref{sigma}) and $\Delta R$ from Eq.~(\ref{chi}) into Eq.~\ref{column}, and then put the resulting expression for $n(r, \mu)$ into Eq.~\ref{gamma}, we determine the shape factor $\gamma$ for our modified Wilkin bow shock. Because the bow shock is not a closed shape, we take the angular integrals from $\theta=0\degr$ to $\theta=131\degr$ only. The resulting $\gamma$ factor depends only on $f$, the fractional thickness of the shell, and $\alpha$, the velocity ratio (both defined in Section \ref{methods}). Numerical evaluation of the integrals in Eq.~\ref{gamma} for $f=0.25$ and values of $\alpha$ between 0.1 and 10 yields $\gamma \approx 0.241-0.295$. Corresponding values of $h(\gamma)$ range from $8.96$ to $22.52$.
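A short numerical sketch of this evaluation follows (hedged: it collapses the radial integral across the thin shell, so $\int n\,dr \approx \sigma$, the $f$-dependence drops out, and only the $\alpha$ trend survives, with $b(\theta)=1$): \begin{verbatim}
import numpy as np
from scipy.integrate import quad

def sigma(theta, alpha=0.1):
    """Wilkin mass surface density, Eq. (sigma), with R0 = rho_I = 1."""
    w2 = 3.0 * (1.0 - theta/np.tan(theta))
    num = (2*alpha*(1 - np.cos(theta)) + w2)**2
    den = np.sqrt(w2) * np.sqrt((theta - np.sin(theta)*np.cos(theta))**2
                                + (w2 - np.sin(theta)**2)**2)
    return 0.5 * num / den

th_max = np.radians(131.0)                  # the open shape is cut here
I2, _ = quad(lambda t: sigma(t)*np.cos(t)**2*np.sin(t), 1e-3, th_max)
I0, _ = quad(lambda t: sigma(t)*np.sin(t), 1e-3, th_max)
gamma = I2 / I0
h = 2*(1 + gamma) / (1 - 3*gamma)
print(f"gamma = {gamma:.3f}, h = {h:.2f}") # near 0.24 and 9 for alpha = 0.1
# optically thin, unresolved polarization of scattered light, Eq. (pdef):
for i in (45, 75, 90):
    s2 = np.sin(np.radians(i))**2
    print(f"i = {i:3d} deg: p = {100*s2/(h + s2):.2f} %")
\end{verbatim}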
Given these generally large values of $h(\gamma)$, we expect that for low scattering optical depths the polarization should scale with viewing inclination as $p \propto \sin^2 i$ (approximating $h(\gamma)+\sin^2 i \approx h(\gamma)$ in Eq.~\ref{pdef}), which is symmetric about $i=90^\circ$. For representative values of $\alpha = 0.1$ and $h(\gamma) = 8.96$, we conclude that the theoretical electron-scattering polarization for an unresolved bow shock structure is \begin{equation} p(\%)=11.16 \, \sin^2 i . \label{analyteq} \end{equation} We constructed a set of \textit{SLIP} models with $f=0.25$, $\alpha=0.1$, and $a=1$, with photons arising from the central source only, to compare with these analytical results (Fig.~\ref{pvopt_pflux}). We considered reference optical depths of $\tau_0\leq 0.07$ only in order to ensure that the average number of scatters per photon was very close to 1. Our simulations show a viewing angle dependence and symmetric behaviour about $90^{\circ}$ in agreement with the prediction of Eq.~\ref{analyteq}, which serves to verify that our numerical approach is valid. The values arising from the simulation are generally consistent with the analytic model for these optical depths, with small differences attributable to our discretisation of the Wilkin function for the \textit{SLIP} models. The symmetry about $90^{\circ}$ begins to break down slightly as $\tau_0$ increases, which is expected given the variation in actual optical depth with viewing angle (Fig.~\ref{taudens}). \begin{figure} \includegraphics[width=\columnwidth]{pvopt_pflux.eps} \caption{Fractional polarization (with respect to scattered light only) as a function of optical depth at the standoff radius ($\tau_0$) for \textit{SLIP} models of an optically thin, unresolved bow shock viewed at $i=90^{\circ}$ (\textit{gold}), $i=75^{\circ}$ and $105^{\circ}$ (\textit{red}), and $i=45^{\circ}$ and $135^{\circ}$ (\textit{blue}). Horizontal lines represent the analytical prediction (symmetric about $i=90^{\circ}$) for each angle (Eq.~\ref{analyteq}). Our numerical simulations reproduce the theoretical predictions well, with some expected deviation from symmetry at larger optical depths. Error bars representing $1\sigma$ uncertainties in each model bin (\S~\ref{methods}) are smaller than the plotted symbols.} \label{pvopt_pflux} \end{figure} \section{Model predictions from \textit{SLIP}} \label{results} In order to perform numerical calculations of the polarization created in a Wilkin bow shock, we must take into account the fact that our simulations involve a grid of finite size, whose maximum extent we set at $R_{\textrm{max}} = 6.68$ AU. Our approach is to modify the density description in the \cite{Wilkin_1996} model to accommodate our finite grid. We use the density of the bow shock as prescribed by Eq.~\ref{rho}, up to a certain cutoff angle $\theta_c$. For $\theta>\theta_c$, we assume the bow shock density declines exponentially rather than being sharply truncated by the outer limit of our simulation (which we found resulted in spurious polarization at the edges).
This modified density in the wings of the bow shock is given functionally by \begin{equation} \rho(\theta > \theta_{c}) = \rho(\theta_c) \exp[-(\theta-\theta_c)/\delta\theta_0]\; , \label{exp} \end{equation} \noindent where $\delta\theta_0$ is a constant angle governing the steepness of the density decline. This modification of the Wilkin density structure does not affect the accuracy of our results, for two reasons. First, an infinitesimally thin shell is not physically realistic, especially at large distances from the bow head, as the shell must spatially ``thicken'' with distance by virtue of gas pressure gradients and Kelvin--Helmholtz instabilities \citep{Mohamed_2012,Mackey_Mohamed_Gvaramadze_Kotak_Langer_Meyer_Moriya_Neilson_2014}. Second, the thin-shell geometry ensures that the solid angle subtended by a shell ring (i.e., a ring about the symmetry axis) decreases with increasing distance from the star. As a consequence, from the perspective of scattering stellar photons, the large-scale wings of the bow shock offer a diminishing cross-section for intercepting and scattering starlight. This also means that the increasing size of the grid cells at larger radii does not significantly affect our results. We investigated the impact on polarization of the cutoff angle $\theta_c$ and the steepness $\delta\theta_0$ of the exponential decay function by varying both parameters in our simulations. We emphasize that in these and all our subsequent models we measure fractional polarization with respect to the total light, rather than scattered light only (as in Eqns. \ref{pdef} and \ref{analyteq}). In testing the effects of $\theta_c$ and $\delta\theta_0$, we used the central photon source with reference optical depth of $\tau_0=0.5$ and a CSM temperature $T$ of 10,000 K. For an unresolved bow shock, we found that as the cutoff angle increases, the peak polarization value and the variation of polarization with viewing angle $i$ remain nearly unchanged. We thus chose a convenient value of $\theta_c=2.1$ rad ($122^{\circ}$) as the cutoff angle for all the other models presented in this paper. This choice for $\theta_c$ ensures that the entire CSM structure is included within our simulation grid. All the values we tested for $\delta\theta_0$ resulted in similar polarization values and behaviour. We chose $\delta\theta_0=0.3$ rad ($17^{\circ}$) for all the models shown hereafter. We also tested the behaviour of the polarization in our simulations as a function of $\alpha$, the velocity ratio defined in Section~\ref{methods}. Fixing the albedo of the scattering region at $a=1$, emitting photons from the central source, and using the same values of $\tau_0$ and $T$ as in our previous test cases, we found that as $\alpha$ increases, the polarization value increases as well. From Equation~\ref{rho}, we see that with a given thickness function $\Delta R(\theta)$, the volume density $\rho$ increases with $\alpha$ for angles greater than $\theta=0$. Thus, increasing the value of $\alpha$ should have a similar effect to increasing the optical depth $\tau_0$ for $a=1$, which does indeed increase polarization overall (Section~\ref{thomson}). For the simulations presented below, we set $\alpha=0.1$ as discussed in Section~\ref{methods}. Finally, we studied how changing the standoff radius $R_0$ of the bow shock changes the polarization behaviour. When the albedo $a$ is fixed at 1 (the pure scattering case), changing $R_0$ does not affect the polarization.
However, when the albedo is not explicitly fixed (the case of scattering with absorption), changing the standoff radius changes the albedo and thus the polarization. This is because $R_0$ is used to calculate the physical thickness $\Delta R(\theta)$ of the bow shock (Eqs.~\ref{radwill} and \ref{chi}), which in turn affects its opacity. When $a$ is not fixed, it is calculated using the opacity of the region (\S~\ref{vara}): a larger value of $R_0$ corresponds to a lower density for a given $\tau_0$, which leads to a larger opacity and a lower albedo. We chose $R_0 = 1.4$ AU for all our models, because for variable $a$ this $R_0$ value produces polarization behaviour as a function of viewing angle similar to the analytical results in the optically thin case (Section~\ref{analytic}). (For comparison, the radius of our central source is $1 R_\odot\approx0.005$ AU; this value has no physical significance other than to make the central star effectively a point source.) With $R_0 = 1.4$ AU and $R_{\textrm{max}} = 6.68$ AU, the density within the bow shock goes to zero between $\theta=134\degr$ and $\theta=140\degr$ (where the bow shock radii intersect the boundary of the simulation). To create our numerical simulations, we used the University of Denver's high-performance computing cluster (HPC), which consists of 180 Intel Xeon processors running at 2.44 GHz. Each of our model runs used 16 CPUs with $10^8$ photons per CPU. This yielded polarization uncertainties on the order of $\sigma_p(\%) \sim 0.01$. Completing each run took $\sim 60-70$ minutes, with slightly longer times for larger values of $\tau_0$. Our simulations can be broadly divided into models assuming pure Thomson scattering with no absorption ($a=1$) and those including some absorption (variable $a$). In each case, we studied the effect of various parameters on the polarization behaviour for both resolved and unresolved cases. In the resolved cases, we preserve spatial information from our simulations, while in the unresolved cases, we combine all photons from a given viewing angle into a single set of polarization values. We present our results below. \subsection{Pure Thomson Scattering} \label{thomson} To simulate the case of pure Thomson scattering, we fixed the albedo of the bow shock environment at 1. In this case, all emitted photons scatter in the bow shock and ultimately escape. We explored the dependence of polarization on CSM temperature, standoff radius, and optical depth for both central and distributed photon sources. We found that for a given source, only the optical depth affects the simulated polarization; varying the CSM temperature and standoff radius produced no change in either polarization magnitude or behaviour as a function of viewing angle. In the rest of this section, we present the detailed behaviour of polarization as a function of optical depth, for both resolved and unresolved scenarios. We investigated three representative optical depths: $\tau_0=0.1, 0.5$, and $2.0$. In all the cases shown here, $T=10,000$ K, $\theta_c = 122^{\circ}$, $\delta\theta_0 = 17^{\circ}$, and $\alpha = 0.1$. In all these simulations, we found polarization position angles very close to $\Psi=0\degr$, so we have not displayed the position angle results. 
\subsubsection{Optical depth dependence -- resolved bow shock} \label{result_opt_r} In Fig.~\ref{map_diffopt}, we display the intensity, percent polarization, and polarized intensity images for a resolved bow shock with three different optical depths at two representative inclination angles symmetric around the $z=0$ plane, $55^{\circ}$ and $125^{\circ}$. (Polarized intensity is calculated by multiplying $\%p$ by intensity; in these maps it represents the polarized light arising from the system.) In the central-source cases (left column), the intensity maps show only a small dot at the location of the star due to our choice of a linear intensity scale that shows the distributed-source behaviour well. The scattered light from the bow shock contributes intensity too faint to be seen on this scale. The central-source polarization maps are similar for the two symmetric inclination angles; they show a generally elliptical polarization pattern, which is created by the combination of all $90^\circ$ scattering paths, as shown schematically in Fig.~\ref{bowsketch_los}. For a given inclination angle, the overall polarization magnitude decreases with increasing optical depth, which is generally expected given that multiple scatters typically randomize the polarization of an ensemble of photons. For a given optical depth, the polarization near the bow head is smaller for the larger inclination angle. Figure~\ref{bowsketch_los} shows that the path length for photons scattering at $90^\circ$ near the bow head at the lower inclination angle (panel $b$, paths 1 and 2) is much smaller than in the case of the higher inclination angle (panel $c$, paths 1 and 2). Because of this, multiple scattering is more important for higher inclinations and optical depths. In this case, because the outgoing photons scatter in the same plane, the dominant effect of multiple scattering is to remove polarized photons from the beam rather than randomizing their position angles. This effect can be seen in the decrease of polarized intensity with inclination angle in the lower panels (Fig.~\ref{map_diffopt}). The central-source polarized intensity maps show that the majority of scattered photons reach our line of sight from locations near the bow head; the scattering material is very tenuous in the outer regions, so very few photons scatter there (but those that do become highly polarized in the process). We note that although the resolved maps look similar in polarization between the two angles, they are quite distinct in polarized intensity, particularly at higher optical depths. This suggests that polarized intensity maps may provide an observational tool for constraining bow shock inclinations. In the distributed-source case (photons arising only from the CSM; \textit{right side}), Fig.~\ref{map_diffopt} shows that the total intensity is concentrated near the bow head because the CSM density is higher in that region and thus more photons are emitted from there. In this case, photons are emitted with an isotropic distribution of initial directions from within the volume of the CSM. Thus, photons scatter more times on average than in the central-source model with the same input parameters.
This increased scattering, combined with cancellation from neighbouring photon origins and the contribution from ``surface'' photons (those arising from the outer edge of the bow shock) that reach the observer directly, causes a significant decrease in the polarization arising from any given location in the CSM, compared with the central-source case (middle panels of Fig.~\ref{map_diffopt}). The polarization is highest at the edge of the CSM because of a scattering asymmetry. In most parts of the CSM, polarization angles are highly randomized, so photons that reach the viewer can have any polarization angle. However, limb photons cannot scatter in all directions and thus tend to have a preferred polarization angle. The difference in polarization morphology between central-source and distributed-source models suggests that observational polarization maps \citep[such as those of][]{2013A&A...551A..35R} can be useful for constraining the photon origin and thus the relative brightnesses of the star and the CSM. \begin{figure*} \includegraphics[scale = 0.8]{maps_diffopt} \caption{Intensity, polarization, and polarized intensity maps for resolved bow shocks illuminated by a central source (\textit{left}) and the distributed source (\textit{right}; photons arise from within the CSM as described in \S~\ref{methods}). In the central-source intensity maps, arrows indicate the location of the star. We show two inclination angles symmetric about $90^{\circ}$. Optical depth increases from left to right in each row. Intensities are in arbitrary units. } \label{map_diffopt} \end{figure*} \begin{figure*} \includegraphics[scale=0.7]{bow_qmap} \caption{Sketch showing the $90^{\circ}$ scattering paths for central-source photons at four different viewing angles $i$. The numbered arrows indicate the limiting paths that produce negative $q$ polarization as seen by an observer in the $i$ direction (polar scattering). In each panel, there will also be $90^{\circ}$ scattering paths for photons initially directed out of the page, defining the width of the scattering ellipses; these paths, which produce positive $q$ polarization (equatorial scattering), are not shown in the sketch. Dashed lines indicate the direction to the observer; short dotted segments mark the location of the density falloff in the wings of the bow shock (Section~\ref{results}). Small coloured images for each inclination angle depict the distribution of $q$ polarization as seen by the observer, for $\tau_0 = 0.1$ (\textit{left}) and $\tau_0 = 2.0$ (\textit{right}). The colours range from $-100\%$ (\textit{darkest blue}) to $+100\%$ (\textit{darkest red}).} \label{bowsketch_los} \end{figure*} By contrast, the distributed-source polarized intensity maps look very similar to those produced by the central-source models and show similar variations with inclination and optical depth. Thus, observed polarized intensity maps would not be able to distinguish reliably between photons emitted from the central star and photons emitted from the bow shock. \subsubsection{Optical depth dependence -- unresolved bow shock} \label{result_opt_ur} \begin{figure*} \includegraphics[width=\textwidth]{pva_diffopt} \caption{Polarization as a function of inclination angle for an unresolved bow shock with different values of $\tau_0$, for photons arising from the central source (\textit{left}) and from the CSM (distributed-source; \textit{right}). All other parameters are held constant as described in \S~\ref{thomson}.
Error bars representing $1\sigma$ uncertainties in each model bin (\S~\ref{methods}) are smaller than the plotted symbols.} \label{pva_opt} \end{figure*} In Fig.~\ref{pva_opt}, we display the polarization variation as a function of viewing angle for the unresolved case, considering four different values of the reference optical depth $\tau_0$. For both central and distributed emission cases, all models show a primary peak in percent polarization at an inclination angle of $90\degr$, as well as a secondary peak at angles greater than $130\degr$ whose exact location depends on $\tau_0$. In Fig.~\ref{pva_opt}, the maximum polarization occurs at an inclination angle of $90^{\circ}$ for all optical depths and both photon sources. This can be understood in terms of the analytical models of \citet{brown_1977}, who showed that for the optically thin case, the polarization produced by scattering in an axisymmetric envelope is proportional to $\sin^2 i$. For higher $\tau_0$ values, however, our models depart from the theoretical $\sin^2 i$ dependence of the polarization, particularly at higher viewing angles. As the optical depth increases, the secondary peak becomes enhanced with respect to the primary peak, and even exceeds it at larger optical depths than we display here. (We tested a range of $\tau_0$ values to establish this behaviour, but only display a few in Fig.~\ref{pva_opt} for clarity.) We hypothesize that this effect is due to multiple scattering becoming more common at higher optical depths. In order to understand the effect of multiple scattering on the polarization behaviour, we created central-source and distributed-source simulations for $\tau_0 = 0.5$ and $\tau_0 = 2.0$ in which we disaggregated the results by number of scatters; we display the results in Fig.~\ref{pva_le}. Indeed, we see from this figure that the polarization of singly scattered photons is consistent with the theoretical $\sin^2 i$ dependence (with a slight ``shoulder'' at low $\tau_0$ due to the onset of the density falloff, Eq.~\ref{exp}; other slight departures from the idealized function are due to the discretisation effects discussed in \S~\ref{analytic}). The multiply scattered photons diverge from this behaviour more strongly as $\tau_0$ increases, particularly at larger viewing angles where the path length through the CSM is longer (Fig.~\ref{bowsketch_los}). We also see that the overall width of the polarization curve decreases for larger numbers of scatters (Fig.~\ref{pva_le}), particularly at higher optical depths. We attribute this to the increasing contribution from scattering paths producing negative $q$ (``polar scattering'') polarization in these cases. (Stokes $u$ is zero on average for these axisymmetric models, so $q$ is the dominant contributor to the total polarization $p$.) In the central-source case, the scattering paths producing positive $q$ polarization (``equatorial scattering'') have a constant average initial (pre-scattering) path length through the CSM independent of viewing angle; thus the $+q$ polarization varies as $\sin^2i$ due to projection effects. (These positive-$q$ paths are not shown in Fig.~\ref{bowsketch_los}: they initially run from the central source directly out of the page, then scatter toward the observer in the direction indicated by the arrows. They create the red regions in the inset $q$ maps.)
By contrast, the negative-$q$ paths shown in Fig.~\ref{bowsketch_los} have path lengths through the CSM that vary with inclination angle, and these are longer than the $+q$ paths for most angles. This means that increasing optical depth results in a higher magnitude of $-q$ polarization, as shown explicitly in Fig.~\ref{qva_opt}. With no absorption, more photons scatter into other lines of sight, while the few that escape toward the observer have scattered multiple times in the same plane and are thus more highly polarized \citep[as discussed in][]{Wood_96a}. On the other hand, higher optical depths and more scatters produce more negative $q$ polarization and smaller values of $p$ in Fig.~\ref{pva_le}. For the viewing angles with negative $q$ values, the polarization position angle $\Psi$ flips from $0\degr$ to $90\degr$. We therefore conclude that the secondary peak near $i=130\degr$ in the unresolved, central-source models with higher optical depths (Fig.~\ref{pva_opt}) is caused by a strong increase in $-q$ polarization when multiple scattering becomes important. Most of our models also show a polarization peak near $150\degr$, because at this angle the line of sight no longer intersects the near side of the CSM, owing to our simulation boundary (\S~\ref{results}). At such angles, paths that would pass through the near side of the CSM are very long, so almost no photons escape there; the resulting polarization is primarily due to photons that are singly scattered from the interior far wall of the CSM (path 3 in Fig.~\ref{bowsketch_los}, panel $d$). In the distributed case, the polarization predominantly arises from the limb of the bow shock and from the wings farthest from the bow head (Fig.~\ref{map_diffopt}). Photons from the limb tend to produce $+q$ polarization (in addition to some $u$, which cancels out in the unresolved case) because they are most likely to reach the observer by singly scattering near the edge of the CSM, producing the familiar tangential polarization pattern. Photons arising from the surface facing the observer produce zero net polarization because they are equally likely to escape after scattering in any direction, and thus cancellation is high. In the wings, however, this symmetry breaks due to the density falloff; in this case photons are most likely to escape after singly scattering in the regions farthest from the bow head, producing negative $q$ values. For the unresolved distributed models (Fig.~\ref{pva_opt}), the polarization as a function of viewing angle behaves very similarly to the case of the central-source models, as expected because the bow-shock geometry of the CSM is the same in the two cases \citep{brown_1977}. We see the same $\sin^2 i$ behaviour, modified by increasing contributions from $-q$ polarization at higher viewing angles (Fig.~\ref{qva_opt}) as the far side of the bow shock contributes more. The secondary peak in the distributed case occurs at larger viewing angles than in the central-source case because the CSM density falloff translates into fewer photons emitted from those angles. Interestingly, although the central-source and distributed models show very similar polarization behaviour as a function of optical depth (Fig.~\ref{pva_opt}), they behave quite differently as a function of number of scatters for a given optical depth (Fig.~\ref{pva_le}). In the distributed models, multiple scattering \textit{increases} the polarization over single scattering at intermediate viewing angles.
We attribute this effect to the fact that polarization in the distributed cases arises primarily from the limb, where column densities are high. Although this polarization is likely dominated by singly scattered photons originating near the outer surface, a few multiply scattered photons reaching us through the dense material at the limb can create large polarization percentages due to scattering in the same plane \citep{Wood_96a}. For higher optical depths and more scatterings, however, the two emission cases become quite similar, as expected once the photon source becomes ``forgotten.'' \begin{figure*} \includegraphics[width=\textwidth]{pva_diffscat} \caption{Polarization as a function of inclination angle for the same models as in Fig.~\ref{pva_opt}, with different curves for photons scattered different numbers of times. In the legends, ``NOS'' denotes number of scatters. ``All'' refers to the photons that have been scattered any number of times. The red dotted line in each panel traces the theoretical $\sin^2 i$ function \citep{brown_1977}, normalized to the peak of the single-scattering curve in each panel. Error bars representing $1\sigma$ uncertainties in each model bin (\S~\ref{methods}) are smaller than the plotted symbols.} \label{pva_le} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{qva_diffopt} \caption{Percent Stokes $q$ polarization as a function of inclination angle for four different values of optical depth $\tau_0$, for photons arising from the central source (\textit{left}) and from the CSM (distributed source; \textit{right}). Black points and lines represent optically thin cases, while red points and lines represent higher optical depths. Red dotted lines represent the theoretical $\sin^2 i$ function normalized to the peak of the $\tau_0 = 2.0$ curves. Error bars representing $1\sigma$ uncertainties in each model bin (\S~\ref{methods}) are smaller than the plotted symbols. Positive values of $q$ correspond to polarization position angles of $\Psi=0\degr$, while negative values correspond to $\Psi=90\degr$.} \label{qva_opt} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{pvopt} \caption{Polarization as a function of optical depth $\tau_0$ at three different inclination angles (labelled in degrees), for photons arising from the central source (\textit{left}) and from the CSM (distributed source; \textit{right}). Error bars representing $1\sigma$ uncertainties in each model bin (\S~\ref{methods}) are smaller than the plotted symbols.} \label{pvtau_opt} \end{figure*} In Fig.~\ref{pvtau_opt}, we compare the variation of polarization with optical depth for three different inclination angles and the two photon sources. As expected based on previous results, the central-source and distributed cases show similar behaviour. For the lower viewing angles, we see the ``peaking'' effect described by \citet{Wood_96a}, in which polar scattering begins to dominate over equatorial scattering for higher optical depths. At $i=45^\circ$, the polarization magnitude is relatively low for all $\tau_0$ values due to large contributions from $-q$ scattering paths (Fig.~\ref{bowsketch_los}). At $i=90^\circ$, the location of the first polarization peak in all our models, the polarization is at its maximum for all optical depths due to the loss of paths 3 and 4 combined with a very short path length through the CSM at the bow head for paths 1 and 2 (which allows more photons to escape without scattering).
At $i=130^\circ$, the location of the second polarization peak for the central-source case, the behaviour is quite different: our models show a dramatic \textit{increase} in polarization magnitude as a function of optical depth for $\tau_0>1$, with central-source models increasing more steeply than distributed models. At this inclination angle, the path lengths for scattering producing $-q$ polarization are at their longest (Fig.~\ref{bowsketch_los}$c$); increasing optical depth increases the number of scatterings photons undergo in the same plane, while filtering out photons with lower polarization; this increases the $-q$ contribution as discussed above. Hence, the polarization increases with increasing optical depth, and the effect is more pronounced for the central-source models because the path lengths through the CSM are longer in these cases. Our results can be used along with observational data to constrain the inclination angle and optical depth of a given bow shock nebula, assuming electron scattering is the primary polarizing mechanism. An unresolved bow shock would be observed at a single value of $i$ and $\tau_0$. Once corrected for interstellar polarization (and for orientation on the sky in the case of $q$, e.g. via proper motion measurements), observed values of $p$ and $q$ for such an object would yield horizontal lines in Figs.~\ref{pva_opt}, \ref{qva_opt}, and \ref{pvtau_opt}. These lines would nearly always intersect the model curves in at least two places for Figs.~\ref{pva_opt} and \ref{qva_opt}, but this would place limits on the possible values of the inclination angle, especially in cases where the optical depth can be estimated from other measurements. Also, if the observed Stokes $q$ parameter were negative, we could say based on Fig.~\ref{qva_opt} that the bow shock was optically thick and viewed at an inclination angle greater than $90^{\circ}$. With an observed value of $p$, using Fig.~\ref{pvtau_opt} we could constrain the inclination angle if we had spectral information that probed the CSM optical depth, or constrain the optical depth if we had radial and transverse velocity information that limited possible inclination angles. \subsection{Thomson Scattering with Absorption} \label{vara} In this section, we investigate cases in which the albedo $a$ of the CSM is not unity (that is, at each interaction, photons have a chance of being absorbed rather than scattering). The \textit{SLIP} code can assign a user-specified albedo to the scattering material, but it also has the capability to calculate a self-consistent albedo using the input temperature and optical depth. In our simulations, the CSM is composed of pure hydrogen, both ionized and neutral. Thus, in the case of variable albedo, we assume photons may be absorbed by hydrogen atoms via both bound-free and free-free processes. The resulting absorption opacity is a function of photon wavelength. Although \textit{SLIP} can consider any range of wavelengths, for simplicity we assume a single optical wavelength of 6040 \AA; this represents an intermediate value in the hydrogen opacity curve and avoids absorption edges. With this wavelength, the combinations of temperature and optical depth we consider give rise to albedo values that span the possible range from 0 to 1 (Table~\ref{tab:albedo}). When we allow the albedo to vary, we first calculate the hydrogen absorption opacity $\kappa_H$ for 6040 \AA~via Eq.~2 in \citet{wood_96}. 
Using the ionization fraction $x$ found as described in \S~\ref{methods}, we then set the albedo to be the ratio of scattering to total opacity: $a=0.4x/(0.4x+\kappa_H)$. Because we assume $x$ to be constant throughout the CSM for computational simplicity, $a$ is constant also. Table \ref{tab:albedo} presents the calculated albedo values for different temperatures and optical depths for our assumed wavelength of 6040 \AA. For a given optical depth, the albedo increases with CSM temperature. In the subsections below, we discuss our model predictions of the polarization behaviour as a function of optical depth and temperature when the albedo is allowed to vary. As in the pure-scattering case, position angles for these models are generally near $\Psi=0\degr$. \begin{table} \caption{Albedo values calculated by \textit{SLIP} when $a$ is not constrained to be 1, for an assumed wavelength of 6040 \AA~and different CSM temperatures and reference optical depths (\S~\ref{vara}).} \label{tab:albedo} \centering \begin{tabular} {|c|c|c|c|c|} \hline $\tau_0$ & 5,000 K & 8,000 K & 10,000 K & 20,000 K\\ \hline 0.5 & 0.468 & 0.862 & 0.927 & 0.985\\ 2.0 & 0.180 & 0.609 & 0.761 & 0.942\\ \hline \end{tabular} \end{table} \subsubsection{Temperature dependence -- resolved bow shock} \label{result_temp_r} As the CSM temperature increases, the albedo increases for a constant input optical depth, as shown in Table \ref{tab:albedo}. This causes our results to deviate from the pure Thomson-scattering results (\S~\ref{thomson}), especially at lower temperatures. \begin{figure*} \includegraphics[width=\textwidth]{maps_difftemp_05} \caption{Intensity, polarization, and polarized intensity maps for resolved bow shocks illuminated by a central source (\textit{left}) and the distributed source (\textit{right}) for the case of CSM albedo $a<1$ (\S~\ref{result_temp_r}) and an optical depth of $\tau_0=0.5$. We show two inclination angles symmetric about $90^{\circ}$. CSM temperature increases from left to right in each row. Intensities are in arbitrary units. In the central-source intensity maps, arrows indicate the location of the star. A higher-resolution version of this figure can be found \href{http://portfolio.du.edu/hoffman-group/page/66722}{here}.} \label{map_05_difftemp} \end{figure*} Fig.~\ref{map_05_difftemp} shows maps of intensity, percent polarization, and polarized intensity for two different viewing angles and three different temperatures for $\tau_0 = 0.5$. In the central-source case (\textit{left side}), the scattered intensity is too faint to be seen on this linear scale, as discussed above in \S~\ref{result_opt_r}. In this case we also see little change in polarization as the temperature increases (corresponding to increasing albedo; Table~\ref{tab:albedo}). This is because the overall number of photon interactions is small at this low optical depth. As in the pure scattering case, the polarization near the bow head is lower for the higher viewing angle. Here, photons are removed from the beam by absorption in addition to scattering, but the result is the same. Polarized intensity is concentrated near the bow head as in the pure scattering case; it increases with increasing temperature as the photons undergo more scattering events relative to absorption events, which increases their likelihood of escaping. In the distributed case (\textit{right side}), there is little variation in polarization with respect to either temperature or viewing angle, again due to the low number of interactions.
The polarized intensity maps show a very similar behaviour to those of the central-source case, with more polarized intensity at higher temperatures. When absorption is present, the relation between the polarization and polarized intensity maps for central-source and distributed cases is quite similar to that discussed above for the pure-scattering scenario (\S~\ref{result_opt_r}). As we noted there, the difference in polarization maps suggests a possible observational diagnostic for the CSM:star brightness ratio. By contrast, if we compare the maps including absorption to the corresponding pure-scattering maps in Fig.~\ref{map_diffopt} (\textit{middle column}), we see very little difference, suggesting that polarization observations may not be able to constrain the albedo of the scattering material in cases of low optical depth. In Fig.~\ref{map_2_difftemp}, we present the intensity, polarization, and polarized intensity maps for the case of variable albedo and an optical depth of $\tau_0 = 2.0$. These maps were created using models with the same number of input photons as Figs.~\ref{map_diffopt} and \ref{map_05_difftemp}, but look grainy because so many of the emitted photons become absorbed in the case of higher optical depth. Because of the relationship between albedo and temperature (Table \ref{tab:albedo}), absorption effects are strongest for $T=5000$ K (\textit{left column of each set}). In the central-source case (\textit{left side}), we once again find a very small intensity contribution from scattered light (\S~\ref{result_opt_r}). At lower temperatures, the polarization maps show a ``dark belt'' at mid-latitudes that is not present at higher temperatures. This belt delineates the region of highest optical depth in the CSM, with $\theta$ values slightly less than the cutoff angle (Fig.~\ref{taudens}; Fig.~\ref{bowsketch_los}). In this region, photons that would normally reach the observer via multiple scattering are instead being absorbed. As the temperature increases, photons are again more likely to scatter at each interaction, so the dark belt disappears. At the higher viewing angle, the polarization is highest in the lower portion of the image. This can be attributed to the increased importance of photons backscattering from the CSM interior (Fig.~\ref{bowsketch_los}, cases $c$ and $d$), combined with a lower density in the CSM facing the observer. Like the polarization, the polarized intensity is concentrated towards the lower portion of the image for the higher inclination angle, whereas for the lower angle the polarized intensity is highest near the bow head. These differences are explained by the longer line of sight for higher angles (described in \S~\ref{result_opt_r}), which greatly increases the probability of absorption. Polarized intensity increases with temperature, as expected due to the decreasing importance of absorption at higher temperatures. In the distributed case (\textit{right side}), the intensity images for the first time show a significant contribution from the interior of the bow shock at higher inclination angles, as emission from the front side is suppressed by absorption. The polarization is more widely distributed across the shape for lower temperatures, but becomes more concentrated near the edges (similar to the cases of pure scattering and absorption at low optical depth) as temperature increases. 
At lower temperatures, most of the scattered photons become absorbed and very few escape, making cancellation effects less efficient and allowing a polarization signal to arise from regions other than the edges. At higher temperatures, more scatters increase cancellation and we approach the previously considered cases. The polarized intensity maps of the distributed case behave similarly to those of the central-source case. Taken together, Figs.~\ref{map_diffopt}, \ref{map_05_difftemp}, and \ref{map_2_difftemp} suggest that observational constraints on the temperature of the bow shock (in cases where electron scattering dominates) may be possible, but only in cases of higher density/optical depth. For less dense shock structures, the resolved polarization and polarized intensity maps appear similar whether or not absorption is included. However, at higher densities, new features appear when absorption is important, such as the dark belt in polarization and the interior of the shock cone in intensity and polarized intensity. These features could serve as temperature and density indicators in actual observations. \begin{figure*} \includegraphics[width=\textwidth]{maps_difftemp_2} \caption{As in Fig.~\ref{map_05_difftemp}, but for $\tau_0 = 2.0$. ``Ringing'' patterns are not physical, but rather due to the discrete model grid (Fig.~\ref{taudens}). A higher-resolution version of this figure can be found \href{http://portfolio.du.edu/hoffman-group/page/66722}{here}.} \label{map_2_difftemp} \end{figure*} \subsubsection{Temperature dependence -- unresolved bow shock} \label{result_temp_ur} \begin{figure*} \includegraphics[width=\textwidth]{pva_difftemp} \caption{Polarization as a function of inclination angle for an unresolved bow shock with different CSM temperatures, for the case of CSM albedo $a<1$ (\S~\ref{result_temp_r}). Photons arise from the central source (\textit{left}) or from the CSM (distributed-source; \textit{right}). Low optical depths are shown in the top row and higher optical depths in the bottom row. Error bars representing $1\sigma$ uncertainties in each model bin (\S~\ref{methods}) are smaller than the plotted symbols.} \label{pva_difftemp} \end{figure*} In Fig.~\ref{pva_difftemp}, we display the polarization variation as a function of viewing angle for models with absorption in the unresolved case, varying both optical depth (\textit{rows}) and temperature (\textit{columns}). In the lower optical depth regime (\textit{top row}), the increase in albedo with temperature (Table \ref{tab:albedo}) causes the degree of polarization to increase at most viewing angles for both central and distributed photon sources. When the albedo is low, photons tend to be absorbed rather than scattered, which lowers the overall degree of polarization (as seen in \citealt{wood_96}). As the albedo increases, photons that have been scattered and thus polarized are more likely to escape the bow shock. Hence we see an increase in polarization for higher temperatures. At high optical depths, the albedo is generally small, and increases with increasing temperature (Table \ref{tab:albedo}). Thus at lower temperatures, only small numbers of photons can escape from the bow shock, and those that escape tend to be highly polarized. As the temperature increases, more photons can escape without scattering; this decreases the overall fractional polarization value.
We see these effects in the case of the optically thick CSM illuminated by a central source (Fig.~\ref{pva_difftemp}, \textit{lower left panel}), where polarization values are very high (up to 45\%) and the peak near $90\degr$ is suppressed for all temperatures. There is a prominent second peak near $i=130\degr$; as the temperature increases, the degree of polarization decreases at this higher viewing angle. We attribute the suppression of the $90\degr$ peak to the combination of higher optical depths and lower albedos, which together increase the chance for a photon to be absorbed. Inspection of the flux characteristics of these models shows that most of the photons escape in the wings of the bow shock, where the optical depth is lower due to our cutoff angle. Thus the secondary peak we discussed in the pure-scattering case (\S~\ref{result_opt_ur}) dominates the polarization in these models. The secondary peak is also prominent in the optically thick, distributed-source cases (\textit{lower right}), although the polarization values are smaller than for the central-source models because more photons escape directly from near the surface of the CSM. The $90\degr$ peak is still present for most temperatures. At $T=5000$ K, however, only the secondary peak contributes, while the $90\degr$ peak is completely suppressed by absorption (Fig.~\ref{map_2_difftemp}, right-hand side). The polarization is almost entirely due to photons arising and scattering near the interior surface of the CSM. Because very little polarized intensity arises from the outer surface, in this extreme scenario the secondary peak shifts to a viewing angle of $\approx 110\degr$, at which the interior first begins to be visible. In the high-density cases for both photon sources, the models with the highest temperatures approach the behaviour of the pure scattering case as $a\rightarrow 1$. \subsubsection{Optical depth dependence -- resolved bow shock} \label{result_opt_r_vara} Using Figs.~\ref{map_05_difftemp} and \ref{map_2_difftemp}, we can also assess our resolved results as a function of optical depth. The intensity maps vary significantly with optical depth in the case of the distributed source. At the higher inclination angle, the intensity is concentrated near the bow head for $\tau_0 = 0.5$, whereas for $\tau_0 = 2.0$ the intensity arises primarily from the wings and interior of the bow shock structure. For all temperatures, the degree of polarization decreases with increasing optical depth. We attribute this behaviour to the decrease in albedo with $\tau_0$ shown in Table \ref{tab:albedo}. For the central source at the lower temperature of $5000$ K, the ``dark belt'' effect occurs for higher optical depths only, due to a lower albedo combined with increased photon interactions. For the distributed source, the polarization is primarily concentrated near the edges as in the pure scattering case. However, in the lower-temperature case viewed from $i = 125\degr$, some polarization arises from the upper portion of the bow shock for $\tau_0=2.0$, which is not seen at $\tau_0=0.5$. This occurs because frequent absorption at $\tau_0=2.0$ prevents the cancellation of Stokes vectors from operating as efficiently as at $\tau_0=0.5$, so some net polarization remains. In polarized intensity, the two optical depths produce very different maps.
For the central-source case, at $\tau_0 = 0.5$ the polarized intensity is concentrated near the bow head for both viewing angles, while for $\tau_0 = 2.0$ at the higher viewing angle, the polarized intensity is concentrated towards the lower portion. This is because when the density near the bow head is high and $a<1$, photons have a better chance of being absorbed in those regions. In the lower portion of the map, for $\theta$ values greater than the cutoff angle, the density is much lower; thus most of the photons that are polarized can escape the bow shock. These photons arise primarily from the interior of the shock cone, which is visible at the higher angle. We see a similar effect in the distributed-source case. Because of these optical depth variations, observed polarized intensity maps can potentially constrain the optical depth of the bow shock material as well as the structure's inclination angle. Comparison of observed maps with these predictions can also help identify the source of illumination and thus the relative brightnesses of star and CSM, as discussed above (\S~\ref{result_opt_r}). \subsubsection{Optical depth dependence -- unresolved bow shock} \label{result_opt_ur_vara} We can isolate optical-depth-dependent behaviour for unresolved cases by comparing the top and bottom panels in Fig.~\ref{pva_difftemp}. For a constant temperature, the location of the polarization peak is different for the two optical depths. In the optically thin case, the peak is near $90\degr$ \citep[as predicted by analytic models, e.g.][]{brown_1977} for both the central and distributed cases. In the optically thick case, the peak shifts to higher inclination angles for both photon sources. For a constant temperature, increasing optical depth leads to decreasing albedo. Thus, when $\tau_0$ is high, very few photons can escape from the denser central regions of the bow shock. Instead, they escape in directions corresponding to higher viewing angles, giving rise to the secondary peaks for higher optical depth. In the central-source case, the model with $T=5000$ K and high optical depth produces the highest polarization of any of our models, because it has the lowest albedo. As discussed in Section \ref{result_temp_ur}, this scenario results in a low number of escaping photons (mainly those scattering from the interior surface) and thus high polarization magnitudes. At $90\degr$, instead of a polarization peak, this extreme case shows a small ``notch'' that we attribute to the prominence of the ``dark belt'' discussed in \S~\ref{result_temp_r}: at edge-on inclinations, this belt will dominate the polarization signal, with very few photons escaping from either the bow head or the interior. In the distributed-source case, the models evolve from single-peaked to double-peaked shapes as $\tau_0$ increases. At higher optical depths, the $90\degr$ peak is suppressed and the secondary peak begins to dominate, because scattered photons can more easily escape at higher inclinations once absorption is present. At the lowest temperature, for which the albedo is close to 0, the $90\degr$ peak completely disappears and the polarization is due entirely to photons arising and scattering near the interior surface of the CSM (\S~\ref{result_temp_ur}). \section{Observational implications} \label{obsimp} We close by discussing potential observational implications of the electron-scattering results presented here (subject to the model limitations discussed below in \S~\ref{conclusion}).
These are useful as limiting cases and to lay the groundwork for future models that will include both electrons and dust as polarizing mechanisms. In the case of a resolved bow shock, detailed polarization maps are rare in the literature, so it is not currently possible to compare our image predictions with actual observations. (The observations by \citealt{2013A&A...551A..35R} provide a notable exception, but these authors observed a known dusty source and obtained only nine polarization measurements across the bow shock.) Our results show that in future observational efforts, both polarization and polarized intensity maps may provide useful diagnostics. Polarization maps are relatively insensitive to viewing angle except in the case where absorption is significant (Fig.~\ref{map_2_difftemp}). However, because the differences between central- and distributed-source models are greatest in polarization (Figs.~\ref{map_diffopt}, \ref{map_05_difftemp}, and \ref{map_2_difftemp}), these maps may provide information about the relative brightnesses of source and bow shock. This could lead to more realistic models for individual stars that consider both central and distributed photon sources (\S~\ref{conclusion}). Polarization maps can also reveal information about the temperature of the bow shock when absorption is important. In particular, an observed ``dark belt'' (Fig.~\ref{map_2_difftemp}) would indicate a relatively low CSM temperature and high density. Polarized intensity maps can distinguish between two symmetric viewing angles in the case of higher optical depths (Figs.~\ref{map_diffopt} and \ref{map_2_difftemp}). Although we have not presented them here, \textit{SLIP} can also produce position angle maps for comparison with observations. The position angles in our models are consistently $\approx0\degr$ for most viewing angles, but flip to near $90\degr$ at high inclinations and optical depths when $q$ is negative. For unresolved bow shocks (or cases in which a bow shock is predicted to exist, e.g. \citealt{Neilson_Ignace_Smith_Henson_Adams_2014}), we measure a single polarization value corresponding to a single viewing angle. This corresponds to a horizontal line in figures such as Figs.~\ref{pva_opt}, \ref{qva_opt}, \ref{pvtau_opt}, and \ref{pva_difftemp}. If interstellar polarization can reliably be removed, this could place constraints on the viewing angle if the optical depth can be estimated (Figs.~\ref{pva_opt} and \ref{qva_opt}), or vice versa (Fig.~\ref{pvtau_opt}). A measurement of a negative value of Stokes $q$ (accounting for the orientation of the bow shock on the sky, e.g. using the proper motion of the star) would provide a particularly strong viewing-angle constraint (Fig.~\ref{qva_opt}). Finally, a polarization measurement compared with the curves in Fig.~\ref{pva_difftemp} could provide constraints on the CSM temperature, particularly at low optical depths or for centrally illuminated shocks. \section{Conclusions and future work} \label{conclusion} We investigated the polarization arising from electron scattering within an idealized stellar wind bow shock, for cases of illumination by a central star and self-illumination by the shock region. We studied how different parameters affect the polarization behaviour in both the pure-scattering and scattering-with-absorption cases. As expected, polarization is highly dependent on viewing angle for all models.
Multiple scattering significantly modifies the behaviour of the polarization with respect to analytical predictions assuming single scattering. For very low optical depths, our simulations reproduce the analytical $\sin^2 i$ dependence of \citet{brown_1977}, but many of our models show a secondary peak at higher inclination angles attributable to increased $-q$ polarization caused by multiple scattering. In the case of pure scattering (albedo $a=1$), we find that the optical depth of the bow shock significantly affects the resulting polarization behaviour, while its temperature does not. In addition, while changing the photon source (light arising from the central star vs. from within the bow shock) does not drastically modify the polarization curves for the unresolved case, it does change the appearance of the polarization and polarized intensity maps for resolved bow shocks. We have presented the central- and distributed-source cases separately here for clarity, but typically both should contribute simultaneously to the observed polarization. \textit{SLIP} has the capability to combine the two cases by specifying the relative brightnesses of the star and CSM; we will investigate these cases in the future when modeling particular bow shocks. When the albedo is not fixed at 1, but instead calculated using input parameters, we find that the polarization depends on both temperature and optical depth. In this case, absorption effects cause dramatic departures from $\sin^2 i$ behaviour, particularly for higher optical depths and lower temperatures. These effects also produce resolved polarization maps that differ from those of the pure-scattering and low optical depth cases. We have adopted a representative optical wavelength of 6040 \AA~for these cases, but this can be changed to correspond to specific observed scenarios. We made several simplifying assumptions in creating these models, which should be kept in mind when interpreting the results. First, we chose a specific value of $\alpha={V_*}/{V_w}=0.1$ to correspond to winds from hot stars (\S~\ref{results}). For cooler stars, $\alpha$ will be larger, and this will increase the density of the bow shock via Eq.~\ref{rho} \citep[see also Fig.~4 of][]{Wilkin_1996}. Thus, we expect that the results for cooler stars will be similar to those of the high optical-depth cases we discuss here. We also chose a specific standoff radius $R_0$ (\S~\ref{results}) for consistency in the models presented here. In the pure-scattering case, polarization behaviour does not depend on $R_0$, but for the more realistic case of variable albedo, the polarization may differ from the results presented here. This is due to the way we defined the thickness and density of the \citet{Wilkin_1996} bow shock, as discussed in \S~\ref{results}. A study investigating the use of polarization as a diagnostic of the stellar mass-loss rate or ISM density would need to assume or measure a value for $R_0$ in order to generate models with the appropriate CSM opacity and albedo. Such a study could be undertaken with \textit{SLIP}, but is beyond the scope of this paper because of the wide range of possible $R_0$ values. In Paper II, we plan to compare \textit{SLIP} models with polarization measurements of bow-shock sources with measured $R_0$ values, and will adjust the models accordingly.
We have not investigated the effect of ionized stellar wind material filling the interior of the bow shock, but we expect this would decrease the overall polarization magnitude without significantly affecting its behaviour as a function of viewing angle (particularly in the case of photons arising from the central source). We will explore the polarization contributions of interior scattering material in Paper II. We also note that the bow shock solution presented by \citet{Wilkin_1996} is an idealization that assumes a stable and highly evolved bow shock, as shown by hydrodynamic models \citep{Mohamed_2012}. Resolved polarization or polarized intensity maps that show bow shock shapes similar to those in our models would thus provide information about the age of the observed bow shock, which in turn can reveal the evolutionary state of the star, as discussed in \citet{Mohamed_2012}. Younger bow shocks, or bow shocks with instabilities due to a high-density region of the ISM \citep{Meyer_2014} or a star moving with a high space velocity \citep{Meyer_2015}, will show morphologies different from the idealized shape considered here. We expect these cases will display broadly similar polarization features, but detailed studies will require additional modeling. We plan to investigate clumpy shock structures in a future contribution. We recognize that dust scattering, which we have not treated here, is an important contributor to the observed polarization of actual bow shocks. In fact, most observations of stellar wind bow shocks have been obtained using IR data \citep[e.g.,][]{Kobulnicky_2016,Ueta_2006,Ueta_Izumiura_Yamamura_Nakada_Matsuura_Ita_Tanab_Fukushi_2014,Peri_Benaglia_Brookes_Stevens_Isequilla_2011}. The \textit{SLIP} code can treat dust scattering, and we will investigate its behaviour in Paper II. We will discuss the variation in polarization behaviour at different wavelengths as well as for different dust grain models. \section{Acknowledgements} We thank Dr. D. Meyer, Dr. T. Ueta, and our anonymous referee for thoughtful comments that have greatly improved the paper. We also thank Dr. B. Whitney for helpful code consultations. This work has been supported by NSF awards AST-0807477 and AST-1210372 and by a Sigma Xi Grant-In-Aid of Research. \newpage \bibliographystyle{mnras}
\section{Introduction} \noindent In the beginning of the 1950s the stellar triple-$\alpha$ process was proposed as the production mechanism for $^{12}\mathrm{C}$~\cite{Opik1951,Salpeter1952}. A main role in this process is played by the first excited $0^+$ state in $^{12}\mathrm{C}$, also known as the Hoyle state~\cite{Hoyle1953}. It was realised early on that the Hoyle state might have a peculiar structure, strongly influenced by $\alpha$-particle clusterisation~\cite{Morinaga1956}. Studying the decay of the Hoyle state to the $3\alpha$ continuum is one of the methods that has been used to probe its structure. In particular, it has been shown that the ratio of its probability for decaying directly to the $3\alpha$ continuum to that for sequential decay through the $^8\mathrm{Be}$ ground state has an impact on the calculated production rate for $^{12}\mathrm{C}$ in stellar environments~\cite{Ogata2009,Garrido2011,Ishikawa2013}. Consequently, the three-body breakup of the Hoyle state and the phase-space distribution of the emitted $\alpha$ particles have been the subject of an extended experimental campaign, stretching over the past twenty-five years~\cite{Freer1994,Raduta2011,Manfredi2012,Kirsebom2012,Rana2013,Itoh2014,Smith2017,DellAquila2017}. As a result, upper bounds on the direct decay branch have been obtained, the most recent, and also most restrictive, limits being \num{4.7e-4}~\cite{Smith2017} and \num{4.2e-4}~\cite{DellAquila2017} at \SI{95}{\percent} confidence level. In contrast to these results are a couple of measurements that give non-zero values of the direct decay branch, namely \cite{Raduta2011} and \cite{Rana2013}, which put the direct decay branch at \num{1.7 \pm 0.5e-1} and \num{9.1 \pm 1.4e-3}, respectively. These results are based on particular models for the sequential and direct decays: the sequential branch is modelled using a $\delta$ function to describe the $^8\mathrm{Be}$ resonance. This approach ignores the freedom to populate the $^8\mathrm{Be}$ system off resonance, and a significant portion of the three-body phase space is thereby excluded. The direct decay is assumed to be a uniform phase-space decay, an assumption which does not take the Coulomb interaction between the $\alpha$ particles into account. Because the Hoyle state decays through emission of low-energy $\alpha$ particles, far below the Coulomb barrier, Coulomb effects should heavily influence the phase-space distribution in a direct decay. In this paper we employ a sequential $R$-matrix model to address the shortcomings of the simpler models. The main justification for using the sequential model is that it has, in several cases, been shown to describe three-body decays at least as well as more sophisticated theoretical calculations~\cite{Fynbo2003,Kirsebom2010}. We develop a toy model of the final-state Coulomb interaction and show that three-body effects are important for our interpretation of the decay spectrum of the Hoyle state. Furthermore, we use the model to mock up the decay spectrum of a hypothetical direct decay and discuss the possibility of interference between sequential and direct decay channels.
\section{The sequential model} \noindent It is possible to regard the three-body decay of $^{12}\mathrm{C}^*$ as either direct or sequential, by which we mean \begin{align*} &^{12}\mathrm{C}^* \rightarrow \alpha + \alpha + \alpha \quad &&\text{(Direct)} \\ &^{12}\mathrm{C}^* \rightarrow {^8\mathrm{Be}}^* + \alpha \rightarrow \alpha + \alpha + \alpha \quad &&\text{(Sequential)}. \end{align*} The sequential interpretation was proposed in 1936 in order to explain the angular and energy distributions of $\alpha$ particles from the ${^{11}\mathrm{B}}(p,3\alpha)$ reaction observed in early cloud chamber experiments. In the sequential picture, the dynamics of the breakup are determined by the properties of the intermediate nucleus, and the $\alpha$-particle distributions were used to deduce the energies and widths of the lowest states of the unstable $^8\mathrm{Be}$ nucleus~\cite{Dee1936,Bethe1937,Wheeler1941}. Later, several theoretical frameworks appeared that could be used to analyse the three-body breakup as a sequential process~\cite{Watson1952,Migdal1955,Phillips1960,Duck1964,Schaefer1970}. \subsection{General formalism} \noindent We use a sequential model to calculate the expected phase-space distribution of the $\alpha$ particles emitted from the unstable Hoyle state in {$^{12}\mathrm{C}$}. For a particular permutation of the $\alpha$ particles, the decay amplitude is given by \begin{align} \label{eq:single_amp} f_c^{m_a}&(123) = \sum_{m_b} \langle J_b l_1 m_b (m_a-m_b) \vert J_a m_a \rangle \nonumber \\ &\times \bigl[i^{l_1} Y_{l_1}^{m_a-m_b}(\Omega_1)\bigr] \bigl[i^{l_2} Y_{l_2}^{m_b}(\Omega_{23})\bigr] \nonumber \\ &\times \gamma_c \bigl(2P_{l_1} / \rho_1\bigr)^{\frac{1}{2}}\exp\bigl[i(\omega_{l_1} - \phi_{l_1})\bigr] F_c(E_{23}), \end{align} where $F_c(E_{23})$ is a factor describing the resonant strength of the intermediate system. In the single-level approximation we have \begin{align} \label{eq:single_level} F_c(E_{23}) = \frac{\gamma_{\lambda_b l_2}\bigl(2P_{l_2} / \rho_{23}\bigr)^{\frac{1}{2}}\exp\bigl[i(\omega_{l_2} - \phi_{l_2})\bigr]}{E_{\lambda_b} - E_{23} - \bigl[S_{l_2} - B_{l_2} + iP_{l_2}\bigr] \gamma_{\lambda_b l_2}^2} . \end{align} To obtain the total decay weight the expression is symmetrised in the permutation of the $\alpha$ particles: \begin{align} \label{eq:total_weight} W = \sum_{m_a} \Bigl\lvert \sum_c \Bigl\lbrace f_c^{m_a}(123) + f_c^{m_a}(231) + f_c^{m_a}(312) \Bigr\rbrace \Bigr\rvert^2 . \end{align} The various symbols appearing in eqs. \eqref{eq:single_amp}--\eqref{eq:total_weight} are explained in \tref{tab:notation}. When we later use eq. \eqref{eq:total_weight} to calculate decay weights, we refer to it as \emph{Model I}. \begin{table}[htbp] \centering \caption{Explanation of the parameters appearing in eqs. \eqref{eq:single_amp}--\eqref{eq:total_weight}.} \medskip \small \label{tab:notation} \begin{tabular}{r p{6.5cm}} \hline \T $J_a, m_a$ & Angular momentum quantum numbers for the initial state. \\ $J_b, m_b$ & Same for the intermediate state. \\ $l_1, l_2$ & Orbital angular momentum in the primary and secondary breakup, respectively \\ $\lambda_b$ & The level populated in the intermediate system. Implicitly specifies $J_b$ and $l_2$.\\ $c$ & Decay channel specifying $\lbrace l_1, \lambda_b \rbrace$.\\ $\gamma_c$ & Reduced width amplitude for decay of the initial state through channel $c$. \\ $\gamma_{\lambda_b l_2}$ & Same for decay of the intermediate state. 
\\ $\Omega_1$ & Direction of the first emitted $\alpha$ in the rest frame of the initial state. \\ $\Omega_{23}$ & Direction of the second emitted $\alpha$ in the rest frame of the intermediate state. \\ $E_{23}$ & Relative energy between $\alpha_2$ and $\alpha_3$ \\ $\rho_1$ & $=k_1 a_1$, where $k_1$ is the wave number and $a_1$ is the channel radius for the primary breakup channel. \\ $\rho_{23}$ & Same for the secondary breakup channel.\\ $P_{l_1}, P_{l_2}$ & Penetrability for the primary and secondary breakup channels. \\ $\omega_{l_1}, \omega_{l_2}$ & Coulomb phase shifts. \\ $\phi_{l_1}, \phi_{l_2}$ & Hard-sphere phase shifts. \\ $E_{\lambda_b}$ & Level energy of $\lambda_b$ in the intermediate system. \\ $S_{l_2}, B_{l_2}$ & Shift function and boundary condition for the secondary breakup channel. \B \\ \hline \end{tabular} \end{table} \emph{Model I} has some desirable features: First, the amplitude is determined by standard $R$-matrix level parameters, which can be obtained from $\alpha\alpha$ scattering or from the analysis of $\beta$-delayed $\alpha$ spectra from the decay of $^8\mathrm{Li}$ and/or $^8\mathrm{B}$. Second, the model takes the identity of the $\alpha$ particles into account by treating them as bosons and symmetrising with respect to their labelling. Finally, it has been shown to fit the phase-space distributions of the $\alpha$ particles emitted by several excited states in ${^{12}\mathrm{C}}^*$, for instance the $J^\pi = 1^+$ state at $E_x = \SI{12.71}{\mega\electronvolt}$~\cite{Balamuth1974,Fynbo2003,Kirsebom2010}, the $2^+$ state at $E_x=\SI{16.11}{\mega\electronvolt}$~\cite{Schaefer1970,Laursen2016} and the $2^-$ state at $E_x = \SI{16.57}{\mega\electronvolt}$~\cite{Cockburn1970}, as well as observations of the $^3\mathrm{H}(^3\mathrm{H},nn\alpha)$ reaction at low energy~\cite{Brune2015}. \subsection{Final state Coulomb interactions} \label{sec:fsci} \noindent \emph{Model I} takes final-state Coulomb interactions (FSCI) into account by including the penetrabilities for the primary and secondary breakup channels. This is only correct if the $\alpha_1 + {^8\mathrm{Be}}$ and $\alpha_2 + \alpha_3$ pairs are allowed to propagate to infinity in their relative coordinates. When the lifetime of the intermediate ${^8\mathrm{Be}}$ state becomes very short, however, that picture breaks down, and the treatment using only two-body Coulomb interactions becomes inaccurate, a point which has also been discussed by others~\cite{Cockburn1970,Fynbo2003}. A phenomenological approach to improving the description of FSCI has been proposed and tested against data from the decay of the $1^+$ state at $E_x = \SI{12.71}{\mega\electronvolt}$, which proceeds through the short-lived $2^+$ state at $E_x = \SI{3.0}{\mega\electronvolt}$ in $^8\mathrm{Be}$~\cite{Fynbo2003}. The idea is to let the fragments of the primary decay, initially separated by the channel radius $a_1$, propagate as usual out to some distance, $\tilde{r}$. At this point we replace the penetration factor of the $\alpha_1+{^8\mathrm{Be}}$ pair by the product of penetration factors for the $\alpha_1 + \alpha_2$ and $\alpha_1 + \alpha_3$ pairs. Formally, we make the following substitution in eq. 
\eqref{eq:single_amp}, \begin{align} \label{eq:correction} \frac{P_{l_1}}{\rho_1} \; \rightarrow \; \frac{P_{l_1}}{\rho_1} \Biggl[\frac{\tilde{\rho}_1}{\tilde{P}_{l_1}}\frac{\tilde{P}_{l_2}(E_{12})}{\tilde{\rho}_{12}}\frac{\tilde{P}_{l_2}(E_{13})}{\tilde{\rho}_{13}}\Biggr], \end{align} where the tilde functions are the usual $R$-matrix functions evaluated at $\tilde{r}$ and $E_{ij}$ is the relative energy between $\alpha_i$ and $\alpha_j$. In this way the Coulomb interactions of each $\alpha$ pair are treated symmetrically. This modified version of the sequential decay model is our \emph{Model II}. A slightly different modification of the penetration factor was made in \cite{Brune2015}. While their modification may give the correct behaviour for small $E_{12}$ and $E_{13}$, its interpretation in terms of transmission probabilities is not as clear as eq. \eqref{eq:correction}. \subsection{Lifetime of the intermediate state} \label{sec:lifetime} \noindent By using a single $\tilde{r}$ throughout the entire phase space we implicitly assume that the lifetime of the intermediate system is constant and independent of the division of energy between the decay fragments. It has been shown that the lifetime of a nuclear resonance is in fact energy dependent and can be calculated from the resonant phase shift~\cite{Wigner1954,Smith1960,Smith1962,Baz1965}: \begin{align} \label{eq:lifetime} \tau_2 = \hbar \frac{d\delta_2}{dE_{23}} + \frac{a_2}{v_{23}}, \end{align} where $\delta_2$ is the $\alpha\alpha$ scattering phase shift, $a_2$ is the channel radius for the secondary breakup channel and $v_{23}$ is the relative velocity of the $\alpha$ particles emitted in the secondary breakup, which we approximate by its asymptotic value for $r \rightarrow \infty$. From this result we see that the lifetime is largest if the intermediate system is populated on resonance, where the phase shift increases sharply. Off resonance the lifetime is shorter, and it may even become negative. We use the lifetime from eq. \eqref{eq:lifetime} and a simple, classical picture to estimate the distance between the primary fragments when the secondary breakup takes place: suppose that the primary fragments are formed on the channel surface of the primary system, i.e. they are initially separated by a distance $a_1$. The fragments now separate at a velocity $v_1$ determined by their relative kinetic energy. Since at this point the fragments are tunnelling through the Coulomb barrier, the relative kinetic energy is, in a classical picture, not a well-defined quantity, but we assume it to be equal to the relative kinetic energy for $r \rightarrow \infty$. An estimate for $\tilde{r}$ in terms of the lifetime of the secondary resonance, $\tau_2$, then becomes \begin{align} \label{eq:rtilde} \tilde{r} = a_1 + v_1 \tau_2 . \end{align} In order to get a feeling for the magnitude and behaviour of $\tilde{r}$ we look at two examples: the decay of the Hoyle state proceeding through the $0^+$ ground state in $^8\mathrm{Be}$ with a $Q$-value of \SI{379}{\kilo\electronvolt}, and the decay of the $1^+$ state at $E_x = \SI{12.71}{\mega\electronvolt}$, which proceeds through the $2^+$ first excited state in $^8\mathrm{Be}$ with a $Q$-value of \SI{5434}{\kilo\electronvolt}. We use the $R$-matrix parameters listed in \tref{tab:parameters}. The resulting value of $\tilde{r}$ is shown in \fref{fig:rtilde} as a function of the internal energy in the intermediate system.
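As a numerical illustration of eqs.~\eqref{eq:lifetime} and \eqref{eq:rtilde}, the Python sketch below reproduces the Hoyle-state estimate with the parameter values of \tref{tab:parameters}. It replaces the full $R$-matrix phase shift by a constant-width Breit--Wigner form, so the sharp on-resonance behaviour is captured while off-resonance values are only indicative.
\begin{verbatim}
import numpy as np

HBAR = 6.582e-22   # MeV s
C = 2.998e23       # fm / s
M_ALPHA = 3727.4   # alpha mass, MeV / c^2

def tau2(E23, E_r=0.09184, Gamma=5.57e-6, a2=4.5):
    """Eq. (lifetime) with a constant-width Breit-Wigner phase shift,
    delta_2 = arctan[(Gamma/2) / (E_r - E23)] -- a crude stand-in for
    the full R-matrix phase shift. Energies in MeV, radii in fm."""
    ddelta_dE = (Gamma / 2) / ((E_r - E23) ** 2 + (Gamma / 2) ** 2)
    v23 = C * np.sqrt(2 * E23 / (M_ALPHA / 2))   # asymptotic velocity
    return HBAR * ddelta_dE + a2 / v23

def r_tilde(E23, Q=0.379, a1=5.1):
    """Eq. (rtilde): separation of the primary fragments at breakup."""
    mu1 = 2 * M_ALPHA / 3                        # alpha + 8Be reduced mass
    v1 = C * np.sqrt(2 * (Q - E23) / mu1)        # asymptotic velocity
    return a1 + v1 * tau2(E23)

print(r_tilde(0.09184))   # on resonance: ~1e6 fm
print(r_tilde(0.150))     # off resonance: ~10 fm
\end{verbatim}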
\begin{table}[htbp] \centering \caption{$R$-matrix parameters for the relevant levels in $^8\mathrm{Be}$. $E$ and $\Gamma_\mathrm{obs}$ of the $0^+$ level are taken from \cite{Tilley2004} while $E$ and $\gamma^2$ of the $2^+$ level are taken from \cite{Bhattacharya2006}. The other values were calculated using a channel radius of \SI{4.5}{\femto\meter} and standard $R$-matrix formulas~\cite{Lane1958}.} \medskip \begin{tabular}{c c c c} \hline $J^\pi$ & $E$ (\si{\kilo\electronvolt}) & $\Gamma_\mathrm{obs}$ (\si{\kilo\electronvolt}) & $\gamma^2$ (\si{\kilo\electronvolt}) \T\B \\ \hline $0^+$ & \num{91.84 \pm 0.04} & \num{5.57 \pm 0.25 e-3} & \num{830 \pm 38} \T \\ $2^+$ & \num{3129 \pm 6} & \num{1477 \pm 13} & \num{1075 \pm 9} \B\\ \hline \end{tabular} \label{tab:parameters} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{plotdistance} \includegraphics[width=0.9\columnwidth]{plotdistance2} \caption{Distance travelled by the fragments of the primary breakup before the breakup of the intermediate system, plotted against the relative energy of the fragments in the secondary breakup; calculated from eq. \eqref{eq:rtilde} using the parameter values in \tref{tab:parameters}. The upper graph is relevant for the breakup of the Hoyle state through $^8\mathrm{Be(0^+)}$, while the lower graph shows the situation for breakup of the \SI{12.7}{\mega\electronvolt} $1^+$ state through $^8\mathrm{Be(2^+)}$.} \label{fig:rtilde} \end{figure} For the Hoyle-state decay we see that for most values of $E_{23}$, the resulting value of $\tilde{r}$ is in fact quite small, and we should expect FSCI to have a pronounced effect on the breakup. If the intermediate system is populated on resonance, however, it lives long enough to travel $\simeq\SI{e6}{\femto\meter}$ before breaking up. In this case we expect that the approximations of \emph{Model I} are very good. For the decay of the $1^+$ state, $\tilde{r}$ shows only small variations around an average of approximately \SI{15}{\femto\meter}. In previous studies \emph{Model II}, using a constant $\tilde{r}$ of around \SI{15}{\femto\meter}, has been shown to provide a reasonable fit to experimental data for the $1^+$ state~\cite{Fynbo2003,Refsgaard2016} and for the $2^+$ state at $E_x=\SI{16.11}{\mega\electronvolt}$, which also decays through the $2^+$ resonance in $^{8}\mathrm{Be}$~\cite{Laursen2016}\footnote{Due to a calculational error in Refs.~\cite{Fynbo2003} and \cite{Laursen2016} the value quoted in these references ($\tilde{r} = \SI{10}{\femto\meter}$) is too small. Better agreement with data is found for a somewhat larger value of $\tilde{r}$.}. It is remarkable that the simple estimate of eq. \eqref{eq:rtilde}, which does not include any adjustable parameters (except for the channel radii), is in agreement with the empirical values. We conclude that for some decays \emph{Model II} is a good approximation, but also that we cannot assume it to be generally applicable. Therefore we introduce \emph{Model III}, where the decay weight is calculated from eq.~\eqref{eq:total_weight} and the correction for FSCI in eq.~\eqref{eq:correction} is applied using a variable $\tilde{r}$, found from eq.~\eqref{eq:rtilde}. Based on the considerations in the preceding paragraph we expect \emph{Model III} to perform as well as, or better than, \emph{Model II}.
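To make the substitution of eq.~\eqref{eq:correction} concrete, the sketch below evaluates the bracketed correction factor numerically, computing the penetrabilities $P_\ell = \rho/(F_\ell^2+G_\ell^2)$ from Coulomb wavefunctions with \texttt{mpmath}. The energies in the example call are arbitrary illustrative values, not fitted quantities.
\begin{verbatim}
import mpmath as mp

HBARC = 197.327            # MeV fm
ALPHA_FS = 1 / 137.036     # fine-structure constant
M_ALPHA = 3727.4           # MeV / c^2

def P_over_rho(E, mu, Z1Z2, r, l=0):
    """P_l / rho = 1 / (F_l^2 + G_l^2) at radius r (MeV, MeV/c^2, fm)."""
    k = mp.sqrt(2 * mu * E) / HBARC                # wave number (1/fm)
    eta = Z1Z2 * ALPHA_FS * mp.sqrt(mu / (2 * E))  # Sommerfeld parameter
    F = mp.coulombf(l, eta, k * r)
    G = mp.coulombg(l, eta, k * r)
    return 1 / (F**2 + G**2)

def fsci_factor(E1, E12, E13, r_tilde=16.0, l1=0, l2=0):
    """Bracketed factor of eq. (correction); tilde functions at r_tilde."""
    mu1 = 2 * M_ALPHA / 3      # alpha + 8Be reduced mass
    mu2 = M_ALPHA / 2          # alpha-alpha reduced mass
    return (P_over_rho(E12, mu2, 4, r_tilde, l2)
            * P_over_rho(E13, mu2, 4, r_tilde, l2)
            / P_over_rho(E1, mu1, 8, r_tilde, l1))

# Illustrative energies (MeV); the factor multiplies P_l1 / rho_1.
print(fsci_factor(E1=0.287, E12=0.150, E13=0.230))
\end{verbatim}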
\begin{table}[h] \centering \caption{Overview and description of the various models presented in the text.} \label{tab:overview} \medskip \small \begin{tabular}{r p{6.5cm}} \hline \T \emph{Model I} & Sequential $R$-matrix model. Symmetric with respect to exchange of any $\alpha$ pair. Decay weight calculated directly from eqs.~\eqref{eq:single_amp}--\eqref{eq:total_weight}. \\ \emph{Model II} & Similar to \emph{Model I}, but the change shown in eq.~\eqref{eq:correction} has been made in order to accommodate the finite lifetime and travel length of the intermediate fragment. \\ \emph{Model III} & Similar to \emph{Model II}, but with variable lifetime of the intermediate fragment. \B \\ \hline \end{tabular} \end{table} \subsection{The Dalitz plot} \label{sec:comparison} \noindent Often the Dalitz plot is used to represent the three-body final states that are observed in experiments and to visualise the predictions of theoretical models~\cite{Dalitz1953}. The coordinates of the plot are defined by \begin{align} \label{eq:Dalitz} x = \frac{\sqrt{3}(E_1-E_3)}{Q} \quad \text{and} \quad y = \frac{2E_2 - E_1 - E_3}{Q}, \end{align} where $E_i$ is the kinetic energy of the $i$th $\alpha$ particle in the rest frame of the decaying nucleus, ordered such that $E_1 > E_2 > E_3$, and $Q=\sum_{i}E_i$. All decays fulfilling energy and momentum conservation can be represented by a point inside the pie-wedge-shaped region seen in Figs.~\ref{fig:hoyle} and \ref{fig:direct}. A point near the origin represents a decay where the available energy is shared equally between the three breakup fragments, while a point near the bottom right corner represents a decay with a small relative energy between the two lowest-energy fragments. Points near the top right corner of the plot represent decays where two of the fragments are emitted in opposite directions, leaving only very little energy for the third fragment. \section{Sequential breakup} \noindent Let us assume that the Hoyle state decays sequentially through the $0^+$ ground state of $^8\mathrm{Be}$. In \fref{fig:hoyle} we show the decay weight calculated with \emph{Model I} and \emph{Model III}, where we have chosen channel radii $a_1 = \SI{5.1}{\femto\meter}$ and $a_2 = \SI{4.5}{\femto\meter}$, corresponding to a radius parameter of $r_0 = \SI{1.42}{\femto\meter}$. We see that the weight is sharply peaked on a diagonal line corresponding to a relative energy between the two lowest-energy $\alpha$ particles of $E_{23}=\SI{91.84}{\kilo\electronvolt}$. This line is usually interpreted as a signature of the sequential decay through $^8\mathrm{Be}(0^+)$; however, we also note that the models predict a low-intensity tail stretching from the diagonal line towards the apex of the Dalitz plot, \emph{Model III} most significantly so. \begin{figure}[htbp] \centering \includegraphics[width=0.48\columnwidth]{hoyle_raw.pdf} \hspace*{\fill} \includegraphics[width=0.48\columnwidth]{hoyle_const.pdf} \caption{Expected phase space distribution of $\alpha$ particles emitted in a sequential breakup of the Hoyle state as calculated with \emph{Model I} (left) and \emph{Model III} (right), plotted on a linear color scale. Note that the peak value is several orders of magnitude higher than the color scale limit.} \label{fig:hoyle} \end{figure} The intensity outside the ${^8\mathrm{Be}}$ ground-state peak is related to the so-called \emph{ghost anomaly}, which appears for nuclear levels near thresholds~\cite{Beckner1961,Barker1962}.
In fact, if we consider a normalised $R$-matrix lineshape \begin{align} w(E) = \pi^{-1} \frac{\Gamma_\lambda/2}{\bigl[E_{\lambda} - E - \gamma^2 (S - B)\bigr]^2 + \bigl[\Gamma_\lambda/2\bigr]^2} \end{align} then we can approximate the area under a narrow peak as \begin{align} \int_{E_\lambda - \delta E}^{E_\lambda + \delta E} w(E)dE \,\simeq\, \biggl[1+\gamma^2\Bigl(\frac{dS}{dE}\Bigr)_{E_\lambda} \biggr]^{-1}. \end{align} From this expression we see that the peak area is dependent on both the reduced width and, through the derivative of the shift function, on the channel radius. For $a_2 = \SI{4.5}{\femto\meter}$ we find that only \SI{57 \pm 2}{\percent} of the ${^8\mathrm{Be}}$ ground-state strength appears in the observed ground state peak, the uncertainty coming from the quoted uncertainty on the partial width of the ground state. Since we are free to choose other values for the channel radius, it should be noted that the estimate is quite sensitive to this parameter. With the larger radius $a_2 = \SI{7.0}{\femto\meter}$ the peak area increases to \SI{86 \pm 1}{\percent}. The strength we see outside the peak in \fref{fig:hoyle} is the hint of a ghost anomaly, although heavily suppressed by Coulomb-barrier effects in the primary decay channel. To quantify how large a fraction of the decays we expect to observe outside the ground-state peak, we use a Monte-Carlo routine to integrate the decay weight over the region where $E_{23}>E_|gs| + \delta E$. The resulting fractional intensities are listed in \tref{tab:results} for a few values of $\delta E$. The values vary within \SI{\pm 10}{\percent} when the radius parameter $r_0$ is varied between \SI{1.42}{\femto\meter} and \SI{2}{\femto\meter}. The same order of sensitivity is seen for variations of $\Gamma_|gs|$ within the experimental uncertainties. \begin{table}[htbp] \centering \caption{Fractional intensity of decays with $E_{23} > E_|gs| + \delta E$, calculated using Monte-Carlo integration of the three models listed in \tref{tab:overview}. In \emph{Model II} we have used $\tilde{r}=\SI{16}{\femto\meter}$. Also shown are the values, $I_F$, obtained from a more sophisticated calculation, involving the solution of the Faddeev equations for the $3\alpha$ system~\cite{Ishikawa2017}.} \label{tab:results} \medskip \small \begin{tabular}{c c c c c} \hline {$\delta E \;(\si{\kilo\electronvolt})$} & {$I_|I|$} & {$I_|II|$} & {$I_|III|$} & $I_F$ \T\B \\ \hline 10 & \num{2.3e-4} & \num{8.8e-4} & \num{1.1e-2} & \num{5.2e-4} \T \\ 20 & \num{1.1e-4} & \num{7.2e-4} & \num{8.4e-3} & \num{3.1e-4} \\ 30 & \num{6.0e-5} & \num{5.6e-4} & \num{6.8e-3} & \num{2.2e-4} \\ 50 & \num{1.7e-5} & \num{3.4e-4} & \num{4.0e-3} & \num{1.4e-4} \B \\ \hline \end{tabular} \end{table} Looking at \fref{fig:hoyle} we note that the result of \emph{Model I} has a striking visual similarity with the prediction in Fig.~1(a) of Ref.~\cite{Ishikawa2014}, which was obtained by solving the Faddeev equations using $\alpha\alpha$ and $3\alpha$ interactions. In \tref{tab:results} we see that, also quantitatively, the three-body calculation is in closer accord with the results from \emph{Model I} and \emph{Model II} than those from \emph{Model III}. Experimentally, upper limits for the fractional intensity for $\delta E \approx \SI{50}{\kilo\electronvolt}$ of \num{4.7e-4} and \num{4.2e-4} were recently reported~\cite{Smith2017,DellAquila2017} at \SI{95}{\percent}~C.L., which is consistent with \emph{Model I} and \emph{Model II}.
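Returning to the peak-area estimate, it is easy to reproduce numerically. The sketch below (our illustration; it assumes a single $s$-wave $\alpha$--$\alpha$ channel and uses \texttt{mpmath} for the Coulomb wave functions entering the standard definitions of the shift function) evaluates $S(E)$ and the factor $[1+\gamma^2(dS/dE)_{E_\lambda}]^{-1}$ at $a_2=\SI{4.5}{\femto\meter}$:
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30                 # extra precision for the derivatives

HBARC  = 197.327               # MeV fm
MU     = 3727.379 / 2.0        # MeV/c^2, alpha-alpha reduced mass
ALPHA  = 1.0 / 137.036
A2     = 4.5                   # fm, channel radius
GAMMA2 = 0.830                 # MeV, reduced width of 8Be(0+)

def shift(E):
    """R-matrix shift function S(E) for the l=0 alpha-alpha channel."""
    k   = mp.sqrt(2 * MU * E) / HBARC       # wave number, fm^-1
    eta = 4 * ALPHA * MU / (HBARC * k)      # Sommerfeld parameter
    rho = k * A2
    F = lambda r: mp.coulombf(0, eta, r)
    G = lambda r: mp.coulombg(0, eta, r)
    num = F(rho) * mp.diff(F, rho) + G(rho) * mp.diff(G, rho)
    return rho * num / (F(rho)**2 + G(rho)**2)

dSdE = mp.diff(shift, 0.09184)              # MeV^-1, at the resonance
print(1 / (1 + GAMMA2 * dSdE))              # expect roughly 0.57
\end{verbatim}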
It is remarkable that \emph{Model III} predicts a value which is an order of magnitude larger than the experimental upper limit. \emph{Model III} clearly fails to describe the Hoyle state decay. This is surprising, since \emph{Model III}, which is based on the $R$-matrix framework and a physically motivated model of the three-body Coulomb interaction, reproduces values of $\tilde{r}$ found from the analysis of decay spectra of higher-lying states in $^{12}\mathrm{C}$, as discussed in Sec. \ref{sec:lifetime}. One major difference between the Hoyle state and the higher-lying states is that the Hoyle state sits behind a Coulomb barrier around \SI{35}{\femto\meter} wide, while the barrier for the $1^+$ state at $E_x =\SI{12.71}{\mega\electronvolt}$ is only a few \si{\femto\meter} wide. As was mentioned when \emph{Model III} was introduced, it treats all relative motion classically using asymptotic values of the kinetic energies. This approach is clearly problematic, in particular when the particles are moving inside classically forbidden regions, where the concept of velocity becomes ill-defined. Both theoretical and experimental investigations suggest that the effective velocity of a particle tunnelling through a wide barrier is, if anything, significantly larger than the asymptotic value~\cite{Winful2006}. Taking this into account we expect the values of $\tilde{r}$ shown in \fref{fig:rtilde} to be somewhat underestimated for the Hoyle-state decay. Using larger values of $\tilde{r}$ would tend to diminish the importance of three-body Coulomb interactions and to bring the results of our \emph{Model III} into better agreement with both theoretical and experimental results. \section{Direct breakup} \label{sec:direct} \noindent Would it be possible to tweak the sequential model and make it predict the phase-space distribution of a direct decay? It is indeed possible to describe direct reactions in the $R$-matrix framework, but it requires the inclusion of infinitely many levels in the compound nucleus~\cite{Wigner1947,Lane1958}. In practice, however, what is most often done is to include a single \emph{background pole}; a very broad level at high excitation energy. Therefore, we attempt to calculate the phase space distribution of a direct breakup of the Hoyle state by replacing the $^8\mathrm{Be}$ ground state with a $0^+$ resonance at $E_|bg| = \SI{20}{\mega\electronvolt}$ with a width of $\Gamma_|bg| = \SI{200}{\mega\electronvolt}$. The result is shown in \fref{fig:direct}. Note that the distribution is not sensitive to our particular choice of $E_|bg|$ and $\Gamma_|bg|$, as long as the level energy is far outside the range of energies that are relevant for the Hoyle state decay. \begin{figure}[htbp] \centering \includegraphics[width=0.48\columnwidth]{direct_raw.pdf} \hspace*{\fill} \includegraphics[width=0.48\columnwidth]{direct_const.pdf} \caption{Prediction of the phase space distribution for the direct breakup of the Hoyle state, calculated using \emph{Model I} (left) and \emph{Model III} (right).} \label{fig:direct} \end{figure} We see a relative suppression of the decay weight near the lower right corner, which represents decays with a small $E_{23}$. The main difference between \emph{Model I} and \emph{Model III} is that \emph{Model III} also predicts a suppression near the top right corner of the Dalitz plot. Intuitively this is a sensible result, since the FSCI would tend to suppress decays where any of the $\alpha$-particle pairs appear with a small relative energy.
It seems that we should expect a hypothetical direct decay of the Hoyle state to show up as a sharp peak near the apex of the Dalitz plot. We believe that our model for the direct decay is more accurate than the na\"{i}ve estimates based on a uniform phase-space decay used in~\cite{Raduta2011,Manfredi2012,Kirsebom2012,Rana2013,Itoh2014,Smith2017,DellAquila2017}, since we, at least in some approximation, include Coulomb interactions among all $\alpha$ particles in the final state. An alternative way to predict the phase-space distribution of a direct $3\alpha$ decay is presented in Ref.~\cite{Smith2017b}, where a uniform phase-space decay is combined with a Coulomb-barrier transmission probability calculated using the WKB approximation in hyperspherical coordinates~\cite{Garrido2005}. The obtained phase space distribution is very similar to the result of our \emph{Model III}. The transmission probability derived in \cite{Garrido2005} can be calculated for both direct and sequential breakup, but the method does not predict which approximation is the most suitable. It is one strength of \emph{Model III} that the penetration factor, through the variable $\tilde{r}$, can be modified continuously between the sequential and direct limits, and that each part of phase space can be treated in the appropriate approximation. \subsection{Interference between decay channels} \noindent We know from experiment that the Hoyle state has a sizeable sequential branch ($\simeq\SI{100}{\percent}$). Therefore we will never observe a pure direct decay, but only a mixture of sequential and direct decay, which means that we need to revise the single-level approximation of eq. \eqref{eq:single_level}. A procedure for treating multiple levels in the intermediate system of sequential reactions has been proposed in \cite{Barker1967,Barker1988}, and we replace eq. \eqref{eq:single_level} with \begin{align} \label{eq:multi_level} F_c(E_{23}) =\sum_{\mu_b}\bigl[ A_{\lambda_b \mu_b} \gamma_{\mu_b l_2} \bigr] \bigl(2P_{l_2} / \rho_{23}\bigr)^{\frac{1}{2}}\exp\bigl[i(\omega_{l_2} - \phi_{l_2})\bigr], \end{align} where $A_{\lambda_b \mu_b}$ is the level matrix for the intermediate system, defined by the relation \begin{align} (A^{-1})_{\lambda_b \mu_b} = (E_{\lambda_b}-E_{23})\delta_{\lambda_b \mu_b} - \sum_{c}(S_c - B_c + iP_c) \gamma_{\lambda_b c}\gamma_{\mu_b c} . \end{align} With this modification it is straightforward to calculate the theoretical phase-space distribution for various mixtures of sequential and direct decay. The reduced width amplitude, $\gamma_c$, of eq. \eqref{eq:single_amp} is the parameter which specifies the contribution of each decay channel (see also \tref{tab:notation}). If we consider the possibility that the Hoyle state can decay through both the ground state of $^8\mathrm{Be}$ and through the background pole introduced in Sec. \ref{sec:direct} we need two reduced width amplitudes, which we label $\gamma_|gs|$ and $\gamma_|bg|$. The mixing ratio $\delta = \gamma_|bg| / \gamma_|gs|$ determines the phase-space distribution of the decay products. In order to make a quantitative assessment of the effect of interference between the two decay channels we evaluate the decay weight using \emph{Model III} and find the fractional intensity for decays with $E_{23} > E_|gs| + \SI{10}{\kilo\electronvolt}$. The result is shown in \fref{fig:interference}.
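To illustrate how eq.~\eqref{eq:multi_level} and the level matrix are evaluated in practice, a minimal sketch follows. The shift and penetration functions are supplied by the caller, and the level energies and reduced-width amplitudes used as defaults are illustrative placeholders (roughly the $^8\mathrm{Be}$ ground state and the background pole above), not fitted values:
\begin{verbatim}
# Sketch of the two-level amplitude of eq. (multi_level): ground state
# plus background pole in 8Be, one s-wave alpha-alpha channel. S(E) and
# P(E) must be supplied (e.g. from Coulomb functions); the defaults for
# the level parameters below are illustrative only.
import numpy as np

def F_c(E23, S, P, B, rho23,
        E_lev=(0.09184, 20.0),   # MeV: 8Be(0+) and background pole
        gamma=(0.911, 4.0),      # MeV^(1/2): gamma_gs, gamma_bg
        lam=0, omega=0.0, phi=0.0):
    E_lev = np.asarray(E_lev, dtype=float)
    g = np.asarray(gamma, dtype=float)
    # inverse level matrix for a single channel c:
    Ainv = np.diag(E_lev - E23) - (S(E23) - B + 1j * P(E23)) * np.outer(g, g)
    A = np.linalg.inv(Ainv)
    amp = A[lam, :] @ g          # sum over intermediate levels mu_b
    return amp * np.sqrt(2.0 * P(E23) / rho23) * np.exp(1j * (omega - phi))

# toy usage with constant S and P (structure only, not physics):
val = F_c(0.1, S=lambda E: 0.0, P=lambda E: 1.0e-5, B=0.0, rho23=0.42)
\end{verbatim}
The mixing ratio $\delta$ enters simply through the ratio of the two entries of \texttt{gamma}, and the sign of $\delta$ controls the relative phase of the two terms in the coherent sum.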
\begin{figure}[htbp] \centering \includegraphics[width=0.92\columnwidth]{integral_2} \caption{Fraction of Hoyle state decays with $E_{23} > E_|gs| + \SI{10}{\kilo\electronvolt}$ calculated using \emph{Model III}. Two levels have been included in the intermediate $^8\mathrm{Be}$ system: The narrow ground state and a broad background pole. The mixing ratio, $\delta$, is defined in the text.} \label{fig:interference} \end{figure} Intuitively we should expect interference effects to be important only in the region of the Dalitz plot where both decay channels have an appreciable amplitude, which, judging from Figs. \ref{fig:hoyle} and \ref{fig:direct}, is near the apex of the Dalitz plot. The sign of $\delta$ determines whether the amplitudes in this part of the plot interfere constructively or destructively. It is clear from \fref{fig:interference} that destructive interference occurs for $\delta\simeq +60$, where the fraction of Hoyle state decays with $E_{23} > E_|gs| + \SI{10}{\kilo\electronvolt}$ is diminished by an order of magnitude. \section{Conclusion} \noindent We have presented a schematic model to describe the effect of Coulomb interactions in the $3\alpha$ continuum and combined it with a well-established $R$-matrix formalism for sequential processes. We applied the model to the $3\alpha$ breakup of the Hoyle state in $^{12}\mathrm{C}$ and attempted to predict the phase-space distribution of the emitted $\alpha$ particles in a purely sequential decay. We observed considerable strength outside the $^8\mathrm{Be}$ ground-state peak, and our results suggest that the current experimental limit on the direct decay branch is very close to the point where we should start to observe this strength. The spectrum was seen to be quite sensitive to the way in which final-state Coulomb interactions are taken into account, and we expect that a careful measurement of the Hoyle-state decay will provide information on how to effectively treat three-body Coulomb interactions. We also presented a model which we believe contains the most important physics for direct three-body decay, as opposed to the simplistic assumptions of uniform phase-space decays, collinear decays, etc., which appear in the literature. The Dalitz plot of the direct decay model showed an intensity peak near the origin, corresponding to decays with equal sharing of energy between the three $\alpha$ particles. Finally we showed that interference between the sequential and a possible direct decay channel could significantly alter the decay spectrum of the Hoyle state. \section*{Acknowledgements} \noindent We thank Prof. Souichi Ishikawa for making his results available to us. OSK acknowledges support from the Villum Foundation. We also acknowledge financial support from the European Research Council under ERC starting grant LOBENA, No.\ 307447.
{ "timestamp": "2018-02-19T02:05:04", "yymm": "1712", "arxiv_id": "1712.05251", "language": "en", "url": "https://arxiv.org/abs/1712.05251" }
\section{Introduction} \label{sec:Introduction} The prospect of pileup-induced backgrounds at the High Luminosity LHC (HL-LHC) has stimulated interest in developing technology for charged particle timing at high rates~\cite{White:2013taa}. Since the hermetic timing approach (where a large fraction of tracks are used to time interaction vertices) requires a large area coverage, it is natural to investigate both MicroPattern Gas and Silicon structures as candidate detector technologies to address this approach. However, since the necessary time resolution for pileup mitigation is of the order of 20-30 picoseconds (ps), both technologies require significant modification to reach the desired performance. Photodetectors and charged particle detectors with time resolutions in the sub-nanosecond regime continue to have an impact in both High Energy physics and medical imaging. In High Energy physics the most widespread application is for particle identification, wherein the mass of particles of known momentum is determined with the time-of-flight (TOF) technique. The current state-of-the-art technology has recently been reviewed in Ref.~\cite{Vavra:2017jv}. Existing collider experiments (e.g. ALICE) now employ large scale TOF systems with performance at the sub-100\,ps level, but several promising new technologies have demonstrated $\sim10$\,ps performance or better (for example the Microchannel PMT we use as a Cherenkov detector during the test beam measurements discussed below). An early incentive for the development of MicroPattern Detectors (see e.g.\,\cite{Heijne:1979ub}) was the promise of faster response and improved timing precision - a consequence of the more rapid signal collection and lower sensor capacitance. Except for early work at CERN (e.g. 1970-80s in the case of~\cite{Heijne:1979ub}), however, the emphasis in MicroPattern Silicon detectors rapidly moved to spatial resolution rather than temporal precision. Similarly, there has been little emphasis, in the 20 years since the GEM~\cite{Sauli:1997qp} and Micromegas~\cite{Giomataris:1995fq} MicroPattern Gas Detectors (MPGD) were introduced, on exploiting their potential for fast timing. Nevertheless, in one of the original Micromegas papers~\cite{Giomataris:1998rc}, it was shown that sub-nanosecond time jitter could be obtained for single electrons photo-produced at the cathode surface. This is the approach that will be followed in the current paper, i.e. the timing attributes of Micromegas are used in a photodetector. In 2015, our collaboration proposed a structure detecting extreme ultraviolet (UV) Cherenkov light produced in a MgF$_2$ crystal coupled to a semitransparent CsI photocathode and a two-stage Micromegas amplifying structure with electron amplification in both stages. This detector has several advantages compared to a single-stage structure: higher gain, a reduced ion-backflow and a better separation between the electron-peak and the ion-tail of the signal. In Ref.~\cite{Papaevangelou:2016knm}, we reported encouraging results on single-photoelectron timing resolution using fast laser pulses, obtained with a chamber operated in \textit{sealed mode}. Subsequently, we improved the chamber integrity and the detector grounding in order to guarantee stable operation. For the results presented here we concentrate on a single-pixel PICOSEC chamber (1\,cm diameter), and use a bulk Micromegas amplification structure with a woven stainless steel mesh~\cite{Giomataris:2004aa}.
Beam tests with 150 GeV muons at CERN were carried out using several Micromegas detectors with various photocathode materials, gases, different discharge protection schemes, and read-out elements. In this paper we present the main results obtained, demonstrating the capability of our detector to reach a time resolution of tens of ps, an improvement by two orders of magnitude compared to the standard MPGD detector performance. Detailed analysis methods and simulations were developed to better understand the single-electron detector response and time resolution. The manuscript is organized as follows: the PICOSEC detection concept is presented in Sec.~\ref{sec:Concept}, while the technical description of the first prototype is given in Sec.~\ref{sec:Prototype}. The experimental setups used to measure the time resolution of single photoelectrons with the laser and 150\,GeV muons in the beam are detailed in Sec.~\ref{sec:LaserSetup} and Sec.~\ref{sec:BeamSetup}, respectively. After a description of the individual waveform analysis in Sec.~\ref{sec:PulseAnalysis}, the laser tests results are presented in Sec.~\ref{sec:ResultsLaser}, while those from beam tests are summarized in Sec.~\ref{sec:ResultsBeam}. The conclusions in Sec.~\ref{sec:Conclusions} complete this paper. \section{The PICOSEC detection concept and the experimental setup} \label{sec:Concept} The detection concept presented here consists of a ``two-stage'' Micromegas detector coupled to a front window that acts as a Cherenkov radiator and is coated with a photocathode, as shown in Fig.~\ref{fig:DetectionConcept}. A MgF$_2$ crystal is a typical radiator with light transmission down to a wavelength of~115\,nm; a typical photocathode is CsI~\cite{SEGUINOT1990133}, with high quantum efficiency for photons below 200\,nm. This configuration provides a large bandwidth for Cherenkov light production and detection in the extreme UV. The drift region is very thin (100-300\,\textmu m), which minimizes diffusion effects on the signal timing (which would amount to several ns in a typical MPGD-based drift region). Due to the high electric field, photoelectrons also undergo pre-amplification in the drift region. The readout is a bulk Micromegas~\cite{Giomataris:2004aa}, which consists of a woven mesh (18\,\textmu m-diameter wires) and an anode plane, separated by a gap of 128\,\textmu m, mechanically defined by pillars. This type of readout, operated in Neon- or CF$_4$-based gas mixtures, can reach gains of $10^5-10^6$, high enough to detect single photoelectrons~\cite{Derre:1999wh}. Moreover, the electron drift velocity is high enough for a fast charge collection: ranging from 9 to 16\,cm/\textmu s for drift fields between 10 and 20\,kV/cm in the ``COMPASS'' gas (80\%Ne + 10\%C$_2$H$_6$ + 10\%CF$_4$), according to simulations with Magboltz~\cite{BIAGI1999234}. In normal operation, a relativistic charged particle traversing the radiator produces UV photons, which are simultaneously (RMS less than 10\,ps) converted into primary (photo)~electrons at the photocathode. These primary electrons are preamplified in the drift region due to the high electric field ($\sim$20\,kV/cm); then, they partially traverse the mesh ($\sim$25\% due to the field configuration), and are finally amplified in the amplification gap, where a high electric field ($\sim$40\,kV/cm) is applied. \begin{figure}[htb!] \centering \includegraphics[width=0.99\textwidth]{PicosecDetectionConcept.png} \caption{The PICOSEC detection concept.
The passage of a charged particle through the Cherenkov radiator produces UV photons, which are then absorbed at the photocathode and partially converted into electrons. These electrons are subsequently preamplified and then amplified in the two high-field drift stages, and induce a signal which is measured between the anode and the mesh.} \label{fig:DetectionConcept} \end{figure} The arrival of the amplified electrons at the anode produces a fast signal (with a risetime of $\sim$0.5\,ns) referred to as the electron-peak, while the movement of the ions produced in the amplification gap generates a slower component, the ion-tail ($\sim$100\,ns). A typical waveform is shown in Fig.~\ref{fig:PicosecSignal}. The maximum drift time of ions is below 630\,ns, which is low enough not to affect the detector rate capability. \begin{figure}[htb!] \centering \includegraphics[width=0.80\textwidth]{BeamData_Pulse02.pdf} \caption{An example of an induced signal from the PICOSEC detector generated by 150\,GeV muons (blue points), recorded together with the timing reference of the microchannel plate MCP signal (red points) discussed in the text. The PICOSEC signal contains a fast component produced by the electrons, and a slower component generated by the ion drift. The fast electron-peak amplitude and the Signal Arrival Time, defined in the waveform analysis of Sec.~\ref{sec:PulseAnalysis}, are also shown.} \label{fig:PicosecSignal} \end{figure} It should be noted that, due to the preamplification in the thin drift gap, the relative contribution of direct ionization produced by the traversing particle to the overall signal is negligible. In the ``COMPASS gas'' and for the conditions described in Sec.~\ref{sec:BeamSetup}, relativistic muons create $\sim 21$~ion clusters/cm with a few ionization electrons per cluster. The probability to produce enough ionization charge that undergoes the same amplification (i.e. in the first $\sim$30\,\textmu m) as the typical 10 photoelectrons from the Cherenkov signal is only a few percent. \subsection{Prototype description} \label{sec:Prototype} A sketch of the first PICOSEC prototype is presented in Fig.~\ref{fig:PrototypeSchema}. The readout (Fig.~\ref{fig:MMReadout}) is a bulk Micromegas detector built on top of a 1.6\,mm thick Printed Circuit Board (PCB) with a single anode (1\,cm diameter, 18\,\textmu m copper thickness) and an amplification gap of 128\,\textmu m, i.e. the distance between the anode and the mesh wires. The mesh used for the prototype is formed by 18\,\textmu m-diameter wires and has a 51\% optical transparency. At the bottom part of the PCB, there is an 18\,\textmu m thick copper layer used as ground reference. The amplification gap (between the mesh and the anode in Fig.~\ref{fig:DetectionConcept}) is defined by only six 200\,\textmu m-diameter pillars to minimize their influence on the two amplification stages. The pillars are arranged in a circle and are fully contained in the sensitive area. Four kapton rings, of 50\,\textmu m thickness each, are placed between the mesh and the crystal (i.e.~radiator) to define the height of the drift gap of 200\,\textmu m. During the laser tests, the crystal was a 3\,mm thick MgF$_2$, on which a 5.5\,nm chromium layer was deposited serving as photocathode\footnote{A 1\,mm thick quartz crystal with a deposit of 10\,nm Aluminum, or 100\,nm diamond was also tested with comparable results.}.
During beam tests, an additional 18\,nm-thick CsI layer was deposited over the chromium substrate in order to increase the number of photoelectrons produced by charged particles. In both configurations, a 10\,nm-thick metallic ring is placed on the crystal to establish the potential of the photocathode. \begin{figure}[htb!] \centering \includegraphics[width=0.70\textwidth]{MMReadout.jpg} \caption{Photograph of the readout structure of the bulk Micromegas detector in the first PICOSEC prototype. Six pillars, arranged in a hexagonal pattern, support the mesh in the central region of the amplification gap. The mesh and anode voltages are supplied by the two visible strip-lines onto which two coaxial cables are soldered outside the active volume. Their shielding is soldered to the solid copper ground layer on the lower side of the readout PCB.} \label{fig:MMReadout} \end{figure} The whole detector is installed inside a stainless steel chamber, which is then filled with the ``COMPASS gas'' at 1\,bar absolute pressure. Other gases, like 80\%CF$_4$+20\%C$_2$H$_6$ at 0.5\,bar absolute pressure, have also been used but this article will focus on the results obtained with the ``COMPASS gas''. The vessel has a transparent (quartz) entrance window to allow the passage of either UV~light or laser pulses; it has two gas valves for gas circulation, as well as a large vacuum port for evacuating the vessel. \begin{figure}[htb!] \centering \includegraphics[width=0.80\textwidth]{20171206_Schema_psMM.pdf} \caption{Sketch of the first prototype of the PICOSEC detector, described in detail in the text. The scale of some components is exaggerated for clarity.} \label{fig:PrototypeSchema} \end{figure} Referring to Fig.~\ref{fig:PrototypeSchema}, the cathode, anode and mesh elements are electrically connected by SMA or SHV feedthroughs, as indicated. The cathode is connected to one CAEN High Voltage Supply (HVS) channel, the anode to a CIVIDEC preamplifier\footnote{\url{https://cividec.at/index.php?module=public.product&idProduct=34&scr=0}} (2\,GHz, 40\,dB, a gain of 100), biased by a separate channel of the HVS, and the mesh is connected to ground by a 50\,m long BNC cable, terminated with a $50$\,Ohm resistor in order to avoid signal reflections. Special attention was paid to proper grounding throughout the electronics design, with all grounds referenced to the ground layer of the Micromegas readout. Each of the high voltage lines has a dedicated low-pass filter to suppress ripples from the HVS. \subsection{Laser test: single photoelectron measurements} \label{sec:LaserSetup} The time response of the PICOSEC detector for single photoelectrons was measured at the Saclay Laser-matter Interaction Center (IRAMIS/SLIC, CEA). The experimental setup (Fig.~\ref{fig:LaserSchema}) includes a femtosecond laser with a pulse rate ranging from 9\,kHz to 4.7\,MHz at 267-288\,nm wavelength and a focal length of $\sim$1\,mm. The laser beam is split into two equal parts, one arriving directly at the prototype and the other at a fast photodiode (PD0). The PD0 signal serves as the time reference and is recorded simultaneously with the PICOSEC detector signal during the measurements. The high laser intensity at the PD0 and the fast risetime of this device result in a reference time accuracy of approximately 13\,ps.
The intensity of the laser arriving at the detector is reduced by a series of light attenuators: electroformed fine nickel meshes (100-2000\,LPI) with optical transmission varying between 10\% and 25\%, yielding attenuation factors of 4, 5, 10, and their combinations. The Micromegas detector signal goes through a CIVIDEC preamplifier before being digitized and registered together with the PD0 signal by a 2.5\,GHz oscilloscope at a rate of 20\,GSamples/s (i.e. one sample every 50\,ps). \begin{figure}[htb!] \centering \includegraphics[width=0.90\textwidth]{LaserSchema.jpg} \caption{Schematic of the experimental setup during the laser tests, described in detail in the text.} \label{fig:LaserSchema} \end{figure} The PICOSEC detector was operated with the ``COMPASS gas'' at 1\,bar absolute pressure. The anode voltage (HV2 in Fig.~\ref{fig:DetectionConcept}) was scanned between 450\,V and 525\,V in steps of 25\,V, while the drift voltage (HV1) was varied in steps of 25\,V in different ranges, depending on the anode voltage. These experimental conditions, the voltages used and the measured time resolutions are summarized in Table~\ref{tab:LaserTimeResCOMPASS}. In general, the lowest voltage used corresponds to a detector gain high enough to distinguish the signal from the noise level (gain $\sim 10^5$), while the highest voltage is the maximum value for which the detector operates in stable conditions (up to gains of $\sim 10^6$). For each voltage configuration, more than $10^4$ events were recorded with the oscilloscope, and subsequently analyzed offline. During the first data-taking campaign, the photocathode efficiency was less than 0.5\%, which led us to derive the trigger directly from the PICOSEC signal, in the interest of data collection efficiency. The results presented here are based on this first campaign, as its runs contain enough events for the analysis. The PICOSEC trigger threshold varied between 10 and 90\,mV, as shown in Table~\ref{tab:LaserTimeResCOMPASS}, leading to a bias of the recorded PICOSEC pulse height spectrum. In a later data-taking campaign, the photocathode was replaced to increase the signal efficiency up to $\sim$5\%; this allowed deriving the trigger decision from the fast photodiode. The detector was operated at the same voltage settings as for the data sets with a trigger bias on the PICOSEC amplitude, so the unbiased runs could be used to confirm that the charge distribution follows a Polya function~\cite{Zerguerras:2014yra}, as discussed in Sec.~\ref{sec:ResultsLaser}. \subsection{Beam tests with 150\,GeV muons} \label{sec:BeamSetup} The time response of the detector to 150\,GeV muons was measured during several beam periods in 2016 and 2017 at the CERN SPS H4 secondary beamline. The experimental setup (Fig.~\ref{fig:BeamSchema}) allows the characterization of up to three PICOSEC detectors, situated at the positions Pos0, Pos1 and Pos2. Two trigger scintillators of $5 \times 5$\,mm$^2$ operate in anti-coincidence with a veto scintillator whose aperture (hole) matches the same area. This trigger configuration efficiently selects muons that do not undergo scattering and suppresses triggers from particle showers. One Hamamatsu MCP PMT\footnote{Model R3809U-50: \url{https://www.hamamatsu.com/jp/en/R3809U-50.html}} is used as time reference; its entrance window (3\,mm thick quartz) is placed perpendicular to the beam and serves as Cherenkov radiator. From the time jitter between two identical MCPs we determine a reference time accuracy of 5.4$\,\pm$\,0.2\,ps.
A telescope of three tracking GEM detectors with two-dimensional strip readout is used to reconstruct the trajectory of each muon with a combinatorial Kalman filter based algorithm, and to determine its impact position at each detector. \begin{figure}[htb!] \centering \includegraphics[width=0.90\textwidth]{BeamSchema4.pdf} \caption{Layout of the experimental setup (not to scale) during the beam tests. The incoming beam enters from the right side of the figure; events are triggered by the coincidence of two $5\times 5$\,mm$^2$ scintillators in anti-coincidence with a ``veto'' scintillator. Three GEM detectors provide tracking information of the incoming charged particles, and the timing information is measured in three PICOSEC detectors (Pos0, Pos1, Pos2). Details are given in the text.} \label{fig:BeamSchema} \end{figure} The PICOSEC (and MCP reference) waveforms are recorded in the beam tests in the same way as in the laser tests using an oscilloscope with 20\,GSamples/s and 2.5\,GHz bandwidth, while the tracking (GEM detector) data are recorded simultaneously in an APV25 based SRS DAQ~\cite{Martoiu:2013sm}. To ensure event alignment in the two DAQ systems, the internal SRS event number is sent as a bit stream to one oscilloscope channel. The DAQ trigger is generated by the scintillators. Each PICOSEC detector is operated at different anode and drift voltages, each scanned in steps of 25\,V. As in the case of laser tests, the minimum and maximum drift voltages respectively correspond to the cases where the signal is distinguishable from the noise level and the detector operates in stable conditions. However, the detector gain ($10^4-10^5$) was lower than in laser tests because the initial number of electrons was higher. For a fixed anode voltage, the drift voltage was $\sim$100\,V lower than in laser tests. For each voltage configuration, more than 4000 events are recorded with the oscilloscope, and subsequently analyzed offline. During periods without beam or long accesses, each detector is illuminated with a UV lamp to measure the response to single photoelectrons at different voltage settings. This information is later analyzed to estimate the mean number of photoelectrons produced by muons during beam tests. \subsection{Waveform analysis} \label{sec:PulseAnalysis} In this section, we briefly describe the analysis performed on both laser and beam test data. For each PICOSEC signal, the baseline offset and noise level are determined using the 75\,ns precursor of the pulse. Then, the ``electron-peak'' amplitude ($V_\mathrm{max}$) is defined as the difference between the highest point of the waveform and the baseline. For the timing measurement, a Constant Fraction (CF) method based on a sigmoid function is used to minimize the contribution of the noise. Other algorithms to determine the CF have been used with similar results. In this approach, a sigmoid function is fit to the leading edge of the electron-peak. This function is defined as \begin{equation} V(t) = \frac{P_0}{1 + \exp\left(-P_2 \times (t - P_1)\right)} + P_3 \label{eq:sigmoidfit} \end{equation} where $P_0$ and $P_3$ are respectively the maximum and the minimum values, $P_1$ is the inflection time (i.e. where the second derivative changes sign), and $P_2$ quantifies the speed of the sigmoid change (i.e. it is correlated with the signal risetime).
The time corresponding to a 20\% CF is calculated as follows: \begin{equation} t_{z} = P_1 - \frac{1}{P_2} \ \log\Bigg[\frac{P_0}{0.2 \times V_{\mathrm{max}} - P_3} - 1\Bigg] \label{eq:sigmoidrep} \end{equation} For the photodetectors used as the time reference (i.e. MCP, or PD0 in the case of laser tests) a simpler approach is applied, as their signals are almost immune to noise: after the calculation of the pulse baseline and amplitude, a cubic interpolation between four points around CF=20\% is used to extract the temporal position of the signal with better precision. The ``Signal Arrival Time'' (SAT) is then defined as the difference between the PICOSEC CF time and that of the reference detector, as illustrated in Fig.~\ref{fig:PicosecSignal}. The ``electron-peak charge'' is defined as the integral of the waveform between the start and the end points, defined as the first points situated before and after the maximum whose amplitude is less than one standard deviation away from the baseline offset. For those pulses with no clear separation between the electron-peak and the ion-tail, the end point has been alternatively defined as the time when the pulse derivative changes sign. The resulting value is then converted to charge using the $50$\,Ohm input impedance. The measured electron-peak charge-to-amplitude ratio is 0.0033\,pC/mV. \section{Laser test results} \label{sec:ResultsLaser} Two aspects of the PICOSEC time response in the laser measurements are discussed below. Firstly, we discuss the dependence of the time response on signal amplitude as this dependence (particularly concerning the role of the drift field and fluctuations in the preamplification at a given field) elucidates the physical origin of the PICOSEC time resolution. Secondly, we convolve this amplitude dependence with the actual amplitude distribution corresponding to a single photoelectron. Using this convolution, i.e. the full ``single photoelectron time response'', we can then estimate the PICOSEC response for the case of many photoelectrons produced in the Cherenkov signal from 150 GeV muons discussed in the next section. Since the experimental SAT distributions are approximately Gaussian, we could simply report the standard deviation as the time resolution of the PICOSEC detector. However, there is a small tail at high SAT values, due to small charge (or amplitude) signals with late arrival time, which accounts for a small fraction of the total events. This results in a correlation between the SAT and the electron-peak charge (or amplitude). This correlation is quantified for each voltage setting by sampling the PICOSEC signals in narrow ranges of electron-peak charge and fitting the corresponding SAT values with a Gaussian distribution. A typical dependence of the resulting mean and standard deviation values on the electron-peak charge is shown in Fig.~\ref{fig:DelayTResCharge}. For both variables, there is a decrease with the charge, which can be described by the following parametric function: \begin{equation} y = \frac{b}{x^w} + a \label{eq:slewing} \end{equation} where $y$ is either the mean or standard deviation of the SAT, $x$ is the electron-peak charge; and \textit{a}, \textit{b} and \textit{w} are three free parameters.
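As a brief aside, the constant-fraction extraction of Sec.~\ref{sec:PulseAnalysis}, eqs.~\eqref{eq:sigmoidfit} and \eqref{eq:sigmoidrep}, can be condensed into a short sketch. This is our illustration, not the analysis code used for the paper: the leading-edge selection and the initial guesses are heuristic, and the baseline is subtracted before the fit so that $P_3\approx 0$.
\begin{verbatim}
# Sketch of the sigmoid-based constant-fraction timing
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, p0, p1, p2, p3):
    return p0 / (1.0 + np.exp(-p2 * (t - p1))) + p3

def cf_time(t, v, frac=0.20, n_base=1500):
    """Constant-fraction arrival time; t in ns. n_base baseline samples
    correspond to the 75 ns precursor at 20 GSamples/s."""
    v = v - v[:n_base].mean()              # subtract the baseline offset
    vmax, ipk = v.max(), int(v.argmax())   # electron-peak amplitude, position
    lo = int(np.argmax(v > 0.1 * vmax))    # start of leading edge (heuristic)
    p, _ = curve_fit(sigmoid, t[lo:ipk + 1], v[lo:ipk + 1],
                     p0=[vmax, t[ipk], 5.0, 0.0])
    p0, p1, p2, p3 = p
    # CF formula for the 20% crossing of the fitted sigmoid:
    return p1 - np.log(p0 / (frac * vmax - p3) - 1.0) / p2
\end{verbatim}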
For the mean values, the function of Eq.~\ref{eq:slewing} is used to fit the experimental points, with the constraint that the parameters \textit{b} and \textit{w} must be the same for all datasets with the same anode voltage, while the parameter $a$ could take different values for each drift voltage setting. As shown in Fig.~\ref{fig:DelayTResCharge} (left), this simultaneous fit describes the data well. The values for the standard deviation of the SAT (i.e. the time resolution) follow a common curve for all data with the same anode voltage and can thus be fitted with the same function, Eq.~\ref{eq:slewing}. Fig.~\ref{fig:DelayTResCharge} (right) demonstrates that the fit works well. The data points deviate slightly from the curve at low electron-peak signal amplitudes. Overall, Fig.~\ref{fig:DelayTResCharge} (right) shows an improved time resolution for an earlier onset of the avalanche (i.e. a higher signal charge). In summary, these results indicate: a) a decrease of the parameter $a$ with drift voltage (cf. Fig.~\ref{fig:DelayTResCharge} (left)), which reflects the dependence of the electron drift velocity on the drift field, and b) that the time resolution properties of the PICOSEC detector are described by a single function and are mainly determined by the electron-peak charge. In fact there are two physical parameters of the drift region which could, through their dependence on the applied drift voltage, affect the timing performance: a) the longitudinal diffusion coefficient (i.e. $\sigma_\mathrm{t} \sim D^{1/2}_\mathrm{L} / v_\mathrm{drift}$), and b) the mean free path to the first ionizing collision. The observed scaling with the pulse amplitude for fixed anode voltage suggests that the improvement of the time resolution is driven by the latter effect (b), while the contribution from the variation in longitudinal diffusion is less significant in this regime. \begin{figure}[htb!] \centering \includegraphics[width=0.49\textwidth]{fig6a.pdf} \includegraphics[width=0.49\textwidth]{fig6b.pdf} \caption{Laser test: Mean of the SAT values (left) and time resolution (right) as a function of the electron-peak charge in case of single photoelectron data, for an anode voltage of 525\,V and drift voltages between 200 and 350\,V. The solid curves in the left distribution are the result of fitting the functional form (see text, Eq.~\ref{eq:slewing}) to the experimental points for each drift voltage, with the constraint that the parameters \textit{b} and \textit{w} must be the same for all drift voltages. Meanwhile, the solid curve in the right distribution is the result of fitting the same equation to all experimental points, without any distinction of the drift voltage. Statistical uncertainties are shown.} \label{fig:DelayTResCharge} \end{figure} Yet another indication of the dominant role of the drift region can be derived from Fig.~\ref{fig:TResVsAnode}, where the dependence of the time resolution on electron-peak charge is shown for different anode voltages. To preserve the same level of signal amplitudes at lower anode voltages, the drift fields have been correspondingly increased. Signals with the same electron-peak charge and lower anode voltage (and thus necessitating a higher preamplification) show a better time resolution, i.e., pulses with a higher preamplification gain have better timing properties than those with a higher amplification gain. \begin{figure}[htb!]
\centering \includegraphics[width=0.70\textwidth]{fig7.pdf} \caption{Laser test: Dependence of the time resolution on the electron-peak charge for anode voltages of 450\,V (red circles, drift voltages between 300 and 425\,V), 475\,V (green squares, drift voltages between 300 and 400\,V), 500\,V (blue triangles, drift voltages between 275 and 400\,V), and 525\,V (magenta inverted triangles, drift voltages between 200 and 350\,V). The continuous lines are the result of fitting the functional form of Eq.~\ref{eq:slewing} to the experimental points for the same anode voltage (see text). Statistical uncertainties are shown.} \label{fig:TResVsAnode} \end{figure} The CF algorithm discussed above is used to eliminate the correlation between signal amplitude and SAT expected for signals with similar shapes but different amplitudes (the ``time walk''~\cite{Delagnes:2016hdo}), which is normally observed when timing is derived from a fixed threshold. Nevertheless, there are also well known examples where both amplitude and signal risetime can vary from pulse to pulse, requiring an ``amplitude and risetime correction'' for the SAT determination. We also considered this hypothesis, since the time resolution varies by several hundred picoseconds for different signal amplitudes, even with the CF method. However, as shown in Fig.~\ref{fig:AveragePulse}, the average electron-peak shape remains essentially identical for different electron-peak charges. For this reason, the correlation between electron-peak charge and the signal arrival time observed in Fig.~\ref{fig:DelayTResCharge} must be a consequence of the physical mechanism generating the PICOSEC signal rather than an artifact of the timing algorithm. \begin{figure}[htb!] \centering \includegraphics[width=0.90\textwidth]{fig8.pdf} \caption{Laser test: Average of the electron-peak shape normalized to unity for electron-peak charges of 1.0-1.1\,pC (continuous red line), 2.0-2.5\,pC (segmented green line) and 3-4\,pC (dashed blue line). The figure shows a zoom of the leading edge, while the inset shows the complete electron-peak component. The detector was operated at an anode voltage of 450\,V and a drift voltage of 350\,V with the ``COMPASS gas'' at 1\,bar absolute pressure.} \label{fig:AveragePulse} \end{figure} \subsection{Derivation of the overall ``single photoelectron time distribution function''} \label{sec:singlephotoelectron} As described in Sec.~\ref{sec:LaserSetup}, data are collected with an electronic trigger generated by the PICOSEC detector for part of the dataset. The threshold level was in some cases high in comparison to the Root Mean Square (RMS) baseline noise (typically $\sim$2.5\,mV), as detailed in Table~\ref{tab:LaserTimeResCOMPASS}. Supposing that the derived dependences of the mean and standard deviation of the SAT on the electron-peak charge are also valid for pulse amplitudes lower than the threshold, the time resolution of the PICOSEC detector signal at a given operating point is estimated by the equation: \begin{equation} \sigma^2 = \sum^n_{i = 1} a^2_i \ \sigma^2_i + \sum^n_{i = 1} \sum^n_{j = i + 1} a_i \times a_j \times \bigg(\sigma^2_i + \sigma^2_j + \left(\mu_i - \mu_j\right)^2\bigg) \label{eq:slewthrcorrection} \end{equation} where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the SAT in an interval $i$ ($i=1,n$) of the electron-peak charge ($Q_i$), and $a_i$ is the probability density function (PDF) for a given charge $Q_i$, i.e. $a_i = A(Q_i)$, and $\sum^n_{i = 1} a_i = 1$.
The term $(\mu_i - \mu_j)$ removes the difference in SAT at different electron-peak charges, caused by the measured correlation between these two variables. Indeed, Eq.~\ref{eq:slewthrcorrection} is simply the variance of the charge-weighted mixture of the per-interval SAT distributions. For all cases, the minimum electron-peak charge $Q_1$ is set to 0.033\,pC (i.e. 10\,mV in amplitude), equivalent to four times the typical RMS baseline noise. Meanwhile, the $A(Q)$ PDFs are obtained by fitting each electron-peak charge distribution by a Polya function~\cite{Zerguerras:2014yra} which is expressed as: \begin{equation} A(Q|N,Q_e,\theta) = \frac{(\theta + 1)^{N (\theta + 1)}}{\Gamma(N (\theta + 1))} \left(\frac{Q}{Q_e}\right)^{N (\theta + 1) -1} \exp\Bigg[- (\theta + 1) \frac{Q}{Q_e}\Bigg] \label{eq:Polyafunction} \end{equation} where $Q_e$ is the mean charge per single photoelectron, $N$ is the number of photoelectrons, and $\theta$ is the Polya shape parameter. This function describes well the single electron-peak charge response of the PICOSEC detector ($N = 1$), as shown in Fig.~\ref{fig:PolyaFitting}, including also the dataset without a PICOSEC trigger threshold bias (Fig.~\ref{fig:PolyaFitting}, right). In each fit, bin sizes and fitting regions were varied in order to estimate the systematic errors, which were then combined with the statistical uncertainties. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{fig9b.pdf} \includegraphics[width=0.48\textwidth]{fig9a.pdf} \caption{Laser test: Two examples of the electron-peak charge distributions generated by single photoelectrons: one biased by the PICOSEC detector threshold (left), and another unbiased, using only the reference photodetector in the trigger chain (right). The voltage settings in both cases are 450\,V for the anode and 350\,V for the drift. In both cases, the charge distribution is fit by a Polya function (red line), and with a separate noise contribution (blue line in the right plot). Statistical and systematic uncertainties are shown.} \label{fig:PolyaFitting} \end{figure} The raw and corrected time resolution values of the PICOSEC detector for single photoelectron detection at the different operation settings are shown in Table~\ref{tab:LaserTimeResCOMPASS}. Two uncertainties are included in the calculation: the uncertainty in the parametrization of the mean and of the standard deviation of the SAT with the charge, and the uncertainty in the Polya parametrization. The best measured value of the time resolution is 76.0 $\pm$ 0.4\,ps, which is obtained for the lowest applied anode voltage (450\,V). Meanwhile, as can be seen from the dependence of the time resolution on the drift and anode voltages (Fig.~\ref{fig:TResVsDrift}), the time resolution is better for higher drift voltage. We did not explore the whole parameter space but a further improvement is expected if the drift voltage can be further increased, while keeping the gain almost constant (by reducing the anode voltage correspondingly). In fact, a simulation of the detector response~\cite{Paraschou:2017kp} has shown that the detector time jitter is mainly defined by the drift (pre-amplification) stage, while the contribution of the amplification stage is negligible. \begin{figure}[htb!] \centering \includegraphics[width=0.70\textwidth]{fig10.pdf} \caption{Laser test: Dependence of the corrected time resolution on the drift voltage for anode voltages of 450\,V (red circles), 475\,V (green squares), 500\,V (blue triangles) and 525\,V (magenta inverted triangles). Statistical uncertainties are shown.} \label{fig:TResVsDrift} \end{figure} \begin{table*}[htb!]
\centering \caption{Laser tests: The experimental conditions, voltages used and the time resolution (raw and corrected values) of the PICOSEC detector, operated with the ``COMPASS gas'' at 1\,bar absolute pressure. The threshold is the amplitude of the smallest signal recorded at the trigger level and the RMS is the standard deviation of the baseline. The raw time resolution values are estimated using the CF algorithm, while the corrected values are estimated after correcting the correlation of the SAT with the electron-peak charge, as discussed in detail in Sec.~\ref{sec:singlephotoelectron}.} \begin{tabular}{cc|cc|cc} \hline Anode & Drift & Threshold & RMS & \multicolumn{2}{c}{Time resolution (ps)}\\ (V) & (V) & (mV) & (mV) & Raw & Corrected\\ \hline 450 & 300 & 14.3 $\pm$ 0.3 & 2.5 $\pm$ 0.2 & 164.0 $\pm$ 4.2 & 184.6 $\pm$ 1.4\\ & 325 & 15.8 $\pm$ 0.3 & 2.5 $\pm$ 0.2 & 147.0 $\pm$ 4.0 & 169.5 $\pm$ 1.1\\ & 350 & 16.1 $\pm$ 0.4 & 2.5 $\pm$ 0.2 & 121.6 $\pm$ 1.8 & 140.1 $\pm$ 1.0\\ & 375 & 26.9 $\pm$ 0.6 & 2.5 $\pm$ 0.2 & 88.4 $\pm$ 0.5 & 108.7 $\pm$ 0.6\\ & 400 & 37.5 $\pm$ 1.1 & 2.9 $\pm$ 0.2 & 77.0 $\pm$ 0.5 & 90.3 $\pm$ 0.5\\ & 425 & 79.4 $\pm$ 2.7 & 5.6 $\pm$ 0.3 & 69.5 $\pm$ 0.6 & 76.0 $\pm$ 0.4\\ \hline 475 & 300 & 11.5 $\pm$ 0.3 & 2.5 $\pm$ 0.2 & 180.0 $\pm$ 6.0 & 187.8 $\pm$ 0.8\\ & 325 & 16.1 $\pm$ 0.4 & 2.5 $\pm$ 0.2 & 140.0 $\pm$ 1.0 & 160.3 $\pm$ 0.7\\ & 350 & 30.3 $\pm$ 0.7 & 2.6 $\pm$ 0.2 & 90.7 $\pm$ 0.6 & 123.8 $\pm$ 1.0\\ & 375 & 31.6 $\pm$ 1.1 & 2.6 $\pm$ 0.2 & 89.0 $\pm$ 0.6 & 105.3 $\pm$ 0.5\\ & 400 & 44.6 $\pm$ 2.2 & 2.9 $\pm$ 0.2 & 79.1 $\pm$ 0.5 & 86.0 $\pm$ 0.3\\ \hline 500 & 275 & 20.6 $\pm$ 0.4 & 3.1 $\pm$ 0.3 & 175.0 $\pm$ 3.1 & 230.0 $\pm$ 3.0\\ & 300 & 21.1 $\pm$ 0.5 & 3.4 $\pm$ 0.4 & 150.8 $\pm$ 1.8 & 186.0 $\pm$ 2.0\\ & 325 & 30.6 $\pm$ 0.8 & 3.1 $\pm$ 0.2 & 115.8 $\pm$ 1.2 & 145.5 $\pm$ 1.0\\ & 350 & 41.8 $\pm$ 1.2 & 3.4 $\pm$ 0.3 & 98.3 $\pm$ 0.9 & 121.2 $\pm$ 1.0\\ & 375 & 87.9 $\pm$ 2.6 & 5.9 $\pm$ 0.3 & 85.3 $\pm$ 0.5 & 92.6 $\pm$ 0.6\\ & 400 & 93.7 $\pm$ 4.7 & 5.7 $\pm$ 0.2 & 78.8 $\pm$ 0.5 & 83.8 $\pm$ 0.3\\ \hline 525 & 200 & 11.1 $\pm$ 0.2 & 2.6 $\pm$ 0.2 & 290.0 $\pm$ 7.0 & 337.5 $\pm$ 2.0\\ & 225 & 11.1 $\pm$ 0.2 & 2.7 $\pm$ 0.2 & 261.8 $\pm$ 3.0 & 278.0 $\pm$ 1.2\\ & 250 & 15.6 $\pm$ 0.3 & 2.6 $\pm$ 0.2 & 210.3 $\pm$ 3.0 & 254.2 $\pm$ 2.0\\ & 275 & 15.7 $\pm$ 0.4 & 2.6 $\pm$ 0.2 & 180.2 $\pm$ 2.0 & 208.4 $\pm$ 1.0\\ & 300 & 29.9 $\pm$ 0.6 & 2.7 $\pm$ 0.2 & 133.0 $\pm$ 1.4 & 174.8 $\pm$ 1.0\\ & 325 & 41.9 $\pm$ 0.4 & 3.0 $\pm$ 0.2 & 111.9 $\pm$ 0.8 & 141.6 $\pm$ 0.7\\ & 350 & 43.1 $\pm$ 1.6 & 3.0 $\pm$ 0.2 & 100.7 $\pm$ 0.9 & 110.5 $\pm$ 0.5\\ \hline \end{tabular} \label{tab:LaserTimeResCOMPASS} \end{table*} \section{Beam tests results with 150\,GeV muons} \label{sec:ResultsBeam} The same analysis as in Sec.~\ref{sec:ResultsLaser} is applied to the SAT distributions of 150\,GeV muons, as a correlation with the electron-peak charge is expected. However, as shown in Fig.~\ref{fig:DelayTResChargeBeam150GeV} (left), the mean of the SAT distribution is almost constant for each setting; this is explained by the high drift fields (and preamplification gains) at which the PICOSEC detector is operated. Meanwhile, the time resolution improves (decreases) as the electron-peak charge increases (Fig.~\ref{fig:DelayTResChargeBeam150GeV}, right). \begin{figure}[htb!]
\centering \includegraphics[width=0.49\textwidth]{TimeDistMeanCharge_BeamJuly2017_Normal.pdf} \includegraphics[width=0.49\textwidth]{TimeResMeanCharge_BeamJuly2017_Normal.pdf} \caption{Beam test: Dependence of the signal arrival time (left) and the time resolution (right) on the electron-peak charge for 150\,GeV muons, for anode (A) voltages between 250\,V and 300\,V and drift voltages (D) between 400\,V and 500\,V. Statistical uncertainties are shown.} \label{fig:DelayTResChargeBeam150GeV} \end{figure} As the mean of the SAT distribution is almost independent of the electron-peak charge, each SAT distribution generated by 150\,GeV muons is fit by a two-Gaussian function (both Gaussians centered at the same value) and the time resolution is reported as the standard deviation of the full distribution\footnote{A single Gaussian fit has also been used with similar results.}. The time resolution results obtained are as low as 24.0 $\pm$ 0.3\,ps, as shown in Fig.~\ref{fig:BeamSATBest} for anode and drift voltages of 275\,V and 475\,V, respectively. \begin{figure}[htb!] \centering \includegraphics[width=0.80\textwidth]{BeamSATBest5.pdf} \caption{Beam test: An example of the signal arrival time distribution for 150\,GeV muons, and the superimposed fit with a two-Gaussian function (red line for the combination and dashed blue and magenta lines for each Gaussian function), for an anode and drift voltage of 275\,V and 475\,V, respectively. Statistical uncertainties are shown.} \label{fig:BeamSATBest} \end{figure} From a scan over a wide range of voltage settings we obtain the dependence of the time resolution on the drift and anode voltages, as shown in Fig.~\ref{fig:BeamTResVsDrift}. This figure clearly shows that the time resolution improves for higher drift voltages, while the gain is kept constant by reducing the anode voltage in the same proportion. The optimal time resolution is reached for drift voltages of 450-475\,V, which are the maximum settings at which the detector can be stably operated, i.e. there is no discharge during the beam run. \begin{figure}[htb!] \centering \includegraphics[width=0.80\textwidth]{TimeResDriftJuly2017.pdf} \caption{Beam test: Dependence of the time resolution on the drift and anode voltage for a PICOSEC detector irradiated by 150\,GeV muons. For each curve at a given anode voltage, the maximum drift voltage corresponds to the maximum gain at which the detector can work in stable conditions. Statistical uncertainties are shown.} \label{fig:BeamTResVsDrift} \end{figure} The mean number of photoelectrons ($N$) is estimated for those voltage settings from the single-photoelectron calibration performed with the UV lamp. In a first step, the electron-peak charge distribution of the UV lamp runs is fit by Eq.~\ref{eq:Polyafunction} (where $N=1$), in order to estimate the parameters $Q_e$ and $\theta$. The electron-peak charge values $(Q_i)^n_{i = 1}$ of the 150~GeV muon run (where $n$ is the number of values) were then used to define a likelihood function \begin{equation} \mathscr{L}(N|(Q_i)^n_{i = 1}) = \prod^n_{i = 1} \Bigg(\sum^\infty_{j = 0} \frac{N^j e^{-N}}{j!} \times A(Q_i|j,Q_e,\theta)\Bigg) \label{eq:Likelihood} \end{equation} where $Q_e$ and $\theta$ are the results of the previous fit, $N$ is the mean number of photoelectrons per muon including the geometrical acceptance and $A(Q)$ is the Polya function defined in Eq.~\ref{eq:Polyafunction}. This function was then maximized in order to estimate $N$.
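A compact numerical sketch of this two-step procedure is given below. It is our illustration, not the analysis code of the paper: the truncation of the Poisson sum, the grid bounds, and the omission of the $j=0$ term (which corresponds to events with no signal) are simplifications.
\begin{verbatim}
# Sketch of the photoelectron-number estimate of eq. (Likelihood):
# a Poisson mixture of j-fold Polya charge densities, maximised over N.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def log_polya(q, n, q_e, theta):
    """log of A(q | n, q_e, theta), eq. (Polyafunction), as a density in q."""
    a = n * (theta + 1.0)
    return (a * np.log(theta + 1.0) - gammaln(a)
            + (a - 1.0) * np.log(q / q_e)
            - (theta + 1.0) * q / q_e - np.log(q_e))

def neg_loglike(N, charges, q_e, theta, jmax=50):
    js = np.arange(1, jmax + 1)                        # j = 0 term dropped
    log_pois = js * np.log(N) - N - gammaln(js + 1.0)  # Poisson log-weights
    z = log_pois[:, None] + np.array(
        [log_polya(charges, j, q_e, theta) for j in js])
    m = z.max(axis=0)                                  # log-sum-exp over j
    return -(m + np.log(np.exp(z - m).sum(axis=0))).sum()

# usage: q_e, theta from the UV-lamp fit; charges from the muon run
# res = minimize_scalar(lambda N: neg_loglike(N, charges, q_e, theta),
#                       bounds=(1.0, 30.0), method="bounded")
# N_hat = res.x
\end{verbatim}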
An example of the results of these two fits is shown in Fig.~\ref{fig:PolyaMuonFitting}, while the value obtained is $N=10.4 \pm 0.4$. The uncertainty of this estimation is dominated by the fit uncertainty of the UV lamp runs. \begin{figure}[htb!] \centering \includegraphics[width=0.80\textwidth]{MuonLEDChargeDist.pdf} \caption{Beam test: An example of the electron-peak charge distribution (black points) generated by 150\,GeV muons and compared to the statistical prediction (red line) obtained from a maximum likelihood method. Inset: the electron-peak distribution generated by a signal from the UV lamp (black points) is fit by the single electron-peak distribution (red line) described by Eq.~\ref{eq:Polyafunction} plus a noise contribution (blue line). The settings are 275\,V for the anode and 475\,V for the drift voltages, in both cases. Statistical uncertainties are shown.} \label{fig:PolyaMuonFitting} \end{figure} \section{Conclusions} \label{sec:Conclusions} In this paper, we present a new detector concept, called PICOSEC, composed of a ``two-stage'' Micromegas detector coupled to a Cherenkov radiator and equipped with a photocathode. The good time resolution achieved for single photoelectrons ($\sigma_t\sim 76$\,ps) and for 150\,GeV muons ($\sigma_t\sim 24$\,ps) is promising and motivates further development towards practical applications. Among the significant issues to be addressed to ensure suitability for a large area detector to be used in a high-rate experiment are: 1) the development of efficient and robust photocathodes (or secondary emitters), and 2) scalability, including the development of the corresponding readout electronics. \section*{Acknowledgements} We acknowledge the support of the RD51 collaboration, in the framework of RD51 common projects. We also thank K. Kordas for valuable suggestions concerning the analysis of the data. J.~Bortfeldt acknowledges the support from the COFUND-FP-CERN-2014 program (grant number 665779). M.~Gallinaro acknowledges the support from the Funda\c{c}\~ao para a Ci\^{e}ncia e a Tecnologia (FCT), Portugal. D.~Gonz\'alez-D\'iaz acknowledges the support from MINECO (Spain) under the Ramon y Cajal program (contract RYC-2015-18820). F.J.~Iguaz acknowledges the support from the Enhanced Eurotalents program (PCOFUND-GA-2013-600382). S. White acknowledges partial support through the US CMS program under DOE contract No. DE-AC02-07CH11359. \bibliographystyle{JHEP}
{ "timestamp": "2018-03-15T01:09:54", "yymm": "1712", "arxiv_id": "1712.05256", "language": "en", "url": "https://arxiv.org/abs/1712.05256" }
\section{Introduction} Spintronics concerns the generation, detection and manipulation of the spin degree of freedom of particles. Most early studies focused on electron spins \cite{DasSarma}. However, an electron spin current is normally accompanied by an electric charge current, which consumes much energy through Joule heating. Joule heating has become a critical problem in nanoelectronics and spintronics despite many efforts to mitigate it. Recently, magnon spintronics, or magnonics, in which magnons are the spin carriers, has attracted much attention because of its fundamental interest \cite{xiansi,hubin} and its lower energy consumption in comparison with electron spintronics \cite{book1,magnonics1,magnonics2}. The Nernst effect commonly refers to the generation of a transverse voltage/current by a thermal gradient in an electronic system under a perpendicular magnetic field. In a ferromagnetic metal and in the absence of an external magnetic field, a thermal gradient can generate a transverse charge current or voltage proportional to the vector product of the thermal gradient and the magnetization in the linear response regime. This is the anomalous Nernst effect, the thermoelectric manifestation of the anomalous Hall effect \cite{ANE}. It is natural to ask whether there is a similar effect for magnons. Although magnons are charge-neutral quasiparticles that do not experience the Lorentz force, moving magnons experience gyroscopic forces because of the nonzero Berry curvature of a magnetic system. As a result, a transverse magnon current is generated when magnons are driven by a longitudinal force, such as a thermal gradient, in the absence of a magnetic field; this is termed the anomalous magnon Nernst effect (AMNE). In this paper, we focus on a perpendicularly magnetized honeycomb lattice with the nearest-neighbor pseudodipolar interaction and the next-nearest-neighbor Dzyaloshinskii-Moriya interaction (DMI), whose magnon bands can be topologically nontrivial with various topological phases \cite{ours}. We investigate the magnon transport of this system in the presence of a thermal gradient using the semiclassical equations of motion of magnons and the Boltzmann equation in the linear transport regime. We find that the system has topologically nontrivial magnon bands and that it changes from one topologically nontrivial phase to another as the DMI strength varies. The AMNE coefficient depends on temperature nonmonotonically: it starts from 0 at 0\,K and returns to 0 in the high-temperature limit, with a maximum at an intermediate temperature. This nonmonotonic temperature dependence of the AMNE is due to the non-trivial Berry curvature distribution of a given band in the momentum space and the thermally activated magnon population in the bands. In a certain parameter region, there is a sign reversal of the AMNE at low temperature because the magnon Berry curvature near the band bottom at the $\Gamma$ point has small non-zero values of opposite sign to the much larger values near the band top at the K and K$^\prime$ points. In the presence of staggered anisotropy on the A and B sublattices, the system can also be topologically trivial, and the K and K$^\prime$ valleys contribute opposite transverse magnon currents due to their opposite Berry curvatures. However, the total transverse magnon current does not vanish. The boundary at which the AMNE coefficient changes sign is also determined numerically.
\begin{figure}[htb] \centering \includegraphics[width=8.5cm]{Fig1.eps}\\ \caption{(a) Schematic illustration of a perpendicularly magnetized honeycomb lattice. The red and green arrows denote the nearest-neighbor and the next-nearest-neighbor vectors, respectively. (b) Magnon spectrum $\omega(\mathbf{k})$ of an infinite system for $K=10J$, $F=5J$, and $D=\Delta=0$. The Brillouin zone is indicated by the black hexagon. } \label{system} \end{figure} \section{Model and Results} We consider classical magnetic moments on a honeycomb lattice in the $xy$ plane as illustrated in Figure 1(a), and the Hamiltonian is \begin{multline} \mathcal{H}=-\frac{J}{2}\sum_{\left\langle i,j \right\rangle} \mathbf{m}_i\cdot \mathbf{m}_j-\frac{F}{2}\sum_{\left\langle i,j \right\rangle} (\mathbf{m}_i\cdot \mathbf{e}_{ij})(\mathbf{m}_j\cdot \mathbf{e}_{ij}) -D\sum_{\langle\langle i,j\rangle\rangle}\nu_{ij} \hat{\mathbf{z}}\cdot\left(\mathbf{m}_i\times\mathbf{m}_j\right)-\sum_{i}\frac{K_i}{2}m_{iz}^2, \label{Hami} \end{multline} where the first term is the nearest-neighbor ferromagnetic Heisenberg exchange interaction ($J>0$). The second and third terms arise from the spin-orbit coupling (SOC) \cite{pseudo,DMI}. $\mathbf{e}_{ij}$ is the unit vector pointing from site $i$ to $j$. $F$ is the strength of the nearest-neighbor pseudodipolar interaction, which is a second-order effect of the SOC [the nearest-neighbor Dzyaloshinskii-Moriya interaction (DMI) would be the first-order effect of the SOC if it existed, but it vanishes because the center of the A-B bond is an inversion center of the honeycomb lattice]. The next-nearest-neighbor DMI, measured by $D$, is in general nonzero. $\nu_{ij}=\frac{2}{\sqrt{3}}\hat{\mathbf{z}}\cdot(\mathbf{e}_{li} \times\mathbf{e}_{lj})=\pm 1$, where $l$ is the nearest-neighbor site of $i$ and $j$. The last term is the sublattice-dependent anisotropy whose easy axis is along the $z$ direction, with anisotropy coefficients $K_i=K+\Delta$ for $i\in$ A and $K-\Delta$ for $i\in$ B. $\mathbf{m}_i$ is the unit vector of the magnetic moment at site $i$. The spin dynamics is governed by the Landau-Lifshitz-Gilbert (LLG) equation \cite{LLG,ours}, \begin{equation} \frac{\mathrm{d}\mathbf{m}_i}{\mathrm{d} t}=-\gamma\mathbf{m}_i\times \mathbf{H}^ \text{eff}_i+\alpha \mathbf{m}_i\times \frac{\mathrm{d}\mathbf{m}_i}{\mathrm{d} t}, \label{LLG} \end{equation} where $\gamma$ is the gyromagnetic ratio and $\alpha$ is the Gilbert damping constant. $\mathbf{H}^\text{eff}_i=-\frac{1}{\mu_0\mu}\frac{\partial\mathcal{H}} {\partial\mathbf{m}_i}$ is the effective field at site $i$. The lattice constant $a$ and the exchange constant $J$ are used as the units of length and energy for the five parameters in \eqref{Hami}. The magnetic field and time are in units of $\sqrt{J\mu_0/a^3}$ and $\sqrt{a^3\mu_0/(J\gamma^2)}$, respectively, where $\mu_0$ is the vacuum permeability. When the anisotropy is sufficiently large, the spins are perpendicularly magnetized \cite{split}. To obtain the spin wave spectrum, we linearize the LLG equation following the standard procedures \cite{ours}.
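For completeness, a brief sketch of this linearization step in our notation (the detailed procedure, whose conventions may differ, is given in Ref.~\cite{ours}): writing \begin{equation*} \mathbf{m}_i \simeq \hat{\mathbf{z}}+\delta\mathbf{m}_i,\qquad \psi_i=\delta m_{i,x}+i\,\delta m_{i,y},\qquad \psi_i\propto e^{i(\mathbf{k}\cdot\mathbf{r}_i-\omega t)}, \end{equation*} and keeping only terms linear in $\psi_i$ (with $\alpha=0$), the pseudodipolar term couples $\psi$ and $\psi^*$, so the four amplitudes $(\psi_\mathrm{A},\psi^*_\mathrm{A},\psi_\mathrm{B},\psi^*_\mathrm{B})$ on the two sublattices close into the $4\times4$ eigenvalue problem given below.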
The spin wave spectrum and wavefunctions are obtained by solving the eigenvalue problem $gH(\mathbf{k})\psi_n=\omega_n(\mathbf{k})\psi_n$, where $H(\mathbf{k})$ is a $4\times 4$ Hermitian matrix \begin{equation}H= \left(\begin{matrix} M_\mathrm{A}^{+} & 0 & -\ell(\mathbf{k}) & -g_{+}(\mathbf{k})\\ 0 & M_\mathrm{A}^{-} & -g_{-}(\mathbf{k}) & -\ell(\mathbf{k})\\ -\ell^{*}(\mathbf{k}) & g_{-}^{*}(\mathbf{k}) & M_\mathrm{B}^{-} & 0\\ -g_{+}^{*}(\mathbf{k}) & \ell^{*}(\mathbf{k}) & 0 & M_\mathrm{B}^{+} \end{matrix}\right), \label{eigen} \end{equation} with $\ell(\mathbf{k})=\left(J+\frac{F}{2}\right) \sum_{j=1,2,3}e^{i\mathbf{k}\cdot\mathbf{a}_j}$ and $g_{\pm} (\mathbf{k})=\frac{F}{2}\sum_{j=1,2,3}e^{\pm2i\theta_{j}} e^{i\mathbf{k}\cdot\mathbf{a}_j}$ ($\theta_{j}$ is the angle between $\mathbf{a}_j$ and the $x$ direction). $M_\mathrm{A}^{\pm}=M+\Delta\pm d(\mathbf{k})$ and $M_\mathrm{B}^{\pm}=M-\Delta\pm d(\mathbf{k})$ with $M=K+3J$ and $d(\mathbf{k})=2D\sum_{j=1,3,5}\sin(\mathbf{k}\cdot\mathbf{b}_j)$. $g=\sigma_0\otimes\sigma_3$ (with $\sigma_0$ being the $2\times2$ identity matrix and $\sigma_3$ the Pauli matrix). $\psi_n$ is the $n$th eigenvector with eigenfrequency $\omega_n$, satisfying the generalized orthogonality $\psi_i^\dagger g \psi_j=\delta_{ij}$. At K and K$^\prime$, the frequencies of the two magnon bands are, respectively, \begin{eqnarray} \omega_{1}^\mathrm{K(K^\prime)}=M-3\sqrt{3}D+(-)\Delta,\label{gap1}\\ \omega_{2}^\mathrm{K(K^\prime)}=\sqrt{(M+3\sqrt{3}D)^2-\frac{9}{4}F^2}-(+)\Delta, \label{gap2}\end{eqnarray} where ``$(+)$" and ``$(-)$" on the right-hand side are for K$^\prime$. The magnon bands for $K=10J$, $F=5J$ and $D=\Delta=0$ are shown in Figure 1(b), with a direct gap of $\Delta_g=M-\sqrt{M^2-9F^2/4}$ at both K and K$^\prime$ (valleys for the upper band and peaks for the lower band). The direct gap at the valleys can close and reopen as $D$ and $\Delta$ vary, resulting in topological phase transitions. The Berry curvature $\boldsymbol{\Omega}_{n}$ of the $n$th band and the corresponding Chern number $\mathcal{C}_n$ can be calculated by using a gauge-invariant formula \cite{Shindou}, \begin{gather} \boldsymbol{\Omega}_{n}=i\nabla_\mathbf{k}\times\left(\psi_n^\dagger g \nabla_\mathbf{k}\psi_n\right);\\ \mathcal{C}_n=\frac{1}{2\pi}\iint_{\mathbf{k}\in \mathrm{BZ}}\Omega_n d^2\mathbf{k}, \end{gather} where the integration is over the Brillouin zone (BZ), and $\Omega_n=\boldsymbol{\Omega}_n\cdot\hat{\mathbf{z}}$ is the $z$ component of the Berry curvature, which is also given by a gauge-invariant projector formula similar to that in electronic systems \cite{chern} \begin{equation} \boldsymbol{\Omega}_{n}=i\mathrm{Tr}\left[P_n\left(\frac{\partial P_n} {\partial k_x}\frac{\partial P_n}{\partial k_y}-\frac{\partial P_n}{\partial k_y}\frac{\partial P_n}{\partial k_x}\right)\right]\hat{\mathbf{z}}, \end{equation} where $P_n$ is the projection matrix of the $n$th band, defined as $P_n=\psi_n \psi_n^\dagger g$ (a numerical sketch evaluating these formulas is given after the discussion of Figure 2). \begin{figure} \centering \includegraphics[width=8.cm]{Fig2.eps}\\ \caption{(a) Phase diagram in $D/J-\Delta/J-F/J$ space for $K=10J$. Phases are classified by the Chern numbers of the upper and lower magnon bands. Three phases of $C_l=-C_u=1$; $C_l=-C_u=-1$; and $C_l=C_u=0$ are separated by two orange boundary surfaces. The green line, $D=\frac{\sqrt{3}F^2}{16M}$ and $\Delta=0$, is the intersection of the two boundary surfaces. The magenta lines, $\Delta=\pm\Delta_g/2$ and $D=0$, are the intersections of the boundary surfaces with the $D=0$ plane.
$O_1$ ($F=5J$, $D=\Delta=0$), $O_2$ ($F=5J$, $D=0.4J$, $\Delta=0$), and $O$ ($F=D=0$, $\Delta=-1.5J$) are three representative points in the topologically nontrivial phases of $C_l=-C_u=-1$ and $C_l=-C_u=1$, and in the topologically trivial phase $C_l=C_u=0$, respectively. (b) The $z$ component of the Berry curvature $\Omega=\boldsymbol{\Omega}\cdot\hat{\mathbf{z}}$ for $O_1$, $O$, $O_2$ (from top to bottom). The left panels are for the lower magnon band and the right panels for the upper magnon band. The color bars are shown in the middle. The contour line of $\Omega=0$ is shown by the dashed circles. The white hexagons indicate the first Brillouin zone. } \label{berry} \end{figure} Figure 2(a) is the phase diagram in $D/J-\Delta/J-F/J$ space for $K=10J$. The various topological phases are classified by the Chern numbers $\mathcal{C}_l$ and $\mathcal{C}_u$ of the lower and upper magnon bands. $\mathcal{C}_l+\mathcal{C}_u=0$ satisfies the ``zero sum rule" \cite{chern,book_niu}. The magnon band Chern number changes its value when the magnon band gap closes and reopens at valley K or K$^\prime$. Thus, the band gap closing at K or K$^\prime$ defines two phase boundary surfaces, $\omega_1^{\mathrm{K}^\prime}=\omega_2^{\mathrm{K}^\prime}$ and $\omega_1^\mathrm{K}=\omega_2^\mathrm{K}$ (see Eqs. \eqref{gap1} and \eqref{gap2}). For convenience, we define \begin{equation} \Delta_c=\frac{1}{2}\left[\sqrt{(M+3\sqrt{3}D)^2-\frac{9}{4}F^2}-(M-3\sqrt{3}D)\right], \end{equation} and the two phase boundary surfaces are $\Delta=\pm \Delta_c$, denoted by the orange surfaces. They divide the whole space into four regions. In the region of $\Delta_c<0$ and $\Delta_c<\Delta<-\Delta_c$, $C_u$ is $1$. The density plot of $\Omega$ for $F=5J$, $\Delta=D=0$ ($O_1$ in Fig. 2(a)) is shown in the top panel of Fig. 2(b). Interestingly, the lower band has two contour curves of $\Omega=\Omega_l=0$ around $\Gamma$, denoted by black dashed lines. The two contour curves divide the first Brillouin zone into three parts. $\Omega$ is slightly positive inside the inner contour curve around $\Gamma$ for the lower band, as shown in the top left panel. Between the two contour curves, $\Omega$ is slightly negative. $\Omega$ is positive outside the outer contour curve, as shown in the top left panel of Fig. 2(b), but significant non-zero $\Omega$ occurs only around K and K$^\prime$. In the region of $\Delta_c>0$ and $-\Delta_c<\Delta<\Delta_c$, the upper magnon band has Chern number $-1$. The bottom panel of Fig. 2(b) is the density plot of $\Omega$ of the lower (left panel) and upper (right panel) bands for a representative point, $F=5J$, $\Delta=0$, $D=0.4J$ ($O_2$ in Fig. 2(a)), in this topologically nontrivial phase. The lower band has only one contour curve of $\Omega=0$ (black dashed curve) around $\Gamma$, which divides the first Brillouin zone into two parts. Inside the contour curve, $\Omega$ is slightly positive, as shown in the bottom left panel of Fig. 2(b). It is negative outside the contour curve, with significant non-zero values around K and K$^\prime$. The system is in a topologically trivial phase for both the lower and upper bands in the other two regions. $\Omega$ around the K and K$^\prime$ valleys has opposite signs, so that the Chern numbers are 0 for both bands. We consider $O$ in Fig. 2(a) ($F=D=0$, $\Delta=-1.5J$) as a representative point in this phase. The middle panel of Fig. 2(b) shows the density plot of $\Omega$ at $O$ for the two bands. Indeed, the Berry curvatures $\Omega$ at K and K$^\prime$ have opposite values, and the Chern numbers are zero.
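The band structure and Chern numbers above can be reproduced with a short numerical sketch (not the authors' code). The honeycomb geometry vectors, the Brillouin-zone conventions and the parameter values below are our own assumptions, and no care is taken with band degeneracies; for the $O_2$ parameters one expects $|\mathcal{C}_{l,u}|=1$, with the overall sign depending on these conventions.
\begin{verbatim}
import numpy as np

J, K, F, D, Delta = 1.0, 10.0, 5.0, 0.4, 0.0  # the point O_2, units of J
M = K + 3.0 * J
avec = [np.array([0.0, 1.0]),                 # assumed NN vectors a_j
        np.array([-np.sqrt(3) / 2, -0.5]),
        np.array([np.sqrt(3) / 2, -0.5])]
th = [np.arctan2(v[1], v[0]) for v in avec]   # angles theta_j
bvec = [avec[0] - avec[1], avec[1] - avec[2], avec[2] - avec[0]]  # NNN
g = np.diag([1.0, -1.0, 1.0, -1.0])           # sigma_0 (x) sigma_3

def Hk(k):
    # entries transcribed from the 4x4 matrix H(k) above
    l = (J + F / 2) * sum(np.exp(1j * k @ v) for v in avec)
    gp = (F / 2) * sum(np.exp(2j * t + 1j * k @ v) for t, v in zip(th, avec))
    gm = (F / 2) * sum(np.exp(-2j * t + 1j * k @ v) for t, v in zip(th, avec))
    d = 2 * D * sum(np.sin(k @ v) for v in bvec)
    MA, MB = M + Delta, M - Delta
    return np.array([[MA + d, 0, -l, -gp],
                     [0, MA - d, -gm, -l],
                     [-l.conj(), gm.conj(), MB - d, 0],
                     [-gp.conj(), l.conj(), 0, MB + d]])

def bands(k):
    # positive-norm eigenvectors of gH, para-normalized psi^dag g psi = 1
    w, v = np.linalg.eig(g @ Hk(k))
    out = []
    for i in range(4):
        nrm = (v[:, i].conj() @ g @ v[:, i]).real
        if nrm > 1e-12:
            out.append((w[i].real, v[:, i] / np.sqrt(nrm)))
    out.sort(key=lambda pair: pair[0])
    return out            # [(omega_lower, psi_lower), (omega_upper, psi_upper)]

def curvature(band, k, dk=1e-4):
    # Omega_n = i Tr[P (dP/dkx dP/dky - dP/dky dP/dkx)], P = psi psi^dag g
    def P(q):
        psi = bands(q)[band][1]
        return np.outer(psi, psi.conj()) @ g
    dPx = (P(k + [dk, 0]) - P(k - [dk, 0])) / (2 * dk)
    dPy = (P(k + [0, dk]) - P(k - [0, dk])) / (2 * dk)
    return (1j * np.trace(P(k) @ (dPx @ dPy - dPy @ dPx))).real

def chern(band, N=90):
    # integrate the (periodic) curvature over one reciprocal unit cell
    G1 = 2 * np.pi * np.array([1 / np.sqrt(3), 1 / 3.0])
    G2 = 2 * np.pi * np.array([-1 / np.sqrt(3), 1 / 3.0])
    area = abs(G1[0] * G2[1] - G1[1] * G2[0])
    tot = sum(curvature(band, ((i + 0.5) * G1 + (j + 0.5) * G2) / N)
              for i in range(N) for j in range(N))
    return tot * area / N**2 / (2 * np.pi)

# print(chern(0), chern(1))
\end{verbatim}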
For $\Delta=0$, the band gaps at K and K$^\prime$ close and reopen at the same time, and the Chern number of the upper band changes from $-1$ to $+1$ if we tune the DMI across the line of $D=\frac {\sqrt{3}F^2}{16M}$ and $\Delta=0$ [the green line in Figure 2(a)]. The system changes from one topologically nontrivial phase to another. The features of the phase diagram discussed above are preserved as long as the ground state of the system is the perpendicular ferromagnetic state. Let us consider the magnon transport in an infinite system. Applying a thermal gradient along the $x$ direction, the motion of a magnon wavepacket is governed by the semiclassical equations \cite{Niu,Murakami}, \begin{eqnarray} \dot{\mathbf{r}}=\frac{1}{\hbar}\frac{\partial \varepsilon}{\partial \mathbf{k}}-\dot{\mathbf{k}}\times\boldsymbol{\Omega}; \label{c1}\\ \dot{\mathbf{k}}=\frac{1}{\hbar}\mathbf{F}=-\frac{1}{\hbar}\frac {\partial \varepsilon}{\partial \mathbf{r}}+\frac{q}{\hbar} \dot{\mathbf{r}}\times\mathbf{B}, \label{c2} \end{eqnarray} where $\varepsilon(\mathbf{r},\mathbf{k})=\hbar\omega(\mathbf{k})+\phi( \mathbf{r})$ is the energy of the magnon with $\phi(\mathbf{r})$ being the potential energy, and $\mathbf{F}$ is the total force on the magnon. $q$ is the charge of the particle, and $q=0$ for a magnon. In the presence of a thermal gradient, the Boltzmann equation of the magnon is \begin{equation} \dot{\mathbf{r}}\cdot \frac{\partial f}{\partial \mathbf{r}}= -\frac{f-f_0}{\tau}\equiv-\frac{f_1}{\tau}, \label{B.E.} \end{equation} where $f(\mathbf{r},\mathbf{k})$ is the magnon distribution function, $f_0=1/(e^{\beta\hbar\omega}-1)$ is the Bose-Einstein distribution of zero chemical potential at the local temperature $T$ [$\beta=(k_BT)^{-1}$], $\tau$ is the magnon relaxation time, and $f_1$ is the deviation of the distribution function from its equilibrium value. In the linear response regime where the thermal gradient is small, Eq. \eqref{B.E.} can be written as \begin{equation} \dot{\mathbf{r}}\cdot \frac{\partial f_0}{\partial \mathbf{r}}=-\frac{f_1}{\tau}. \end{equation} Since $f_0$ depends on $\mathbf{r}$ only through the local temperature $T(\mathbf{r})$ and on $\mathbf{k}$ only through $\omega(\mathbf{k})$, both derivatives reduce to derivatives of $f_0$ with respect to $\beta\hbar\omega$, and one can prove the following identity, \begin{equation} \dot{\mathbf{r}}\cdot \frac{\partial f_0}{\partial \mathbf{r}} =\left(-\frac{\hbar\omega}{T}\nabla T\right)\cdot\frac{\partial f_0}{\hbar\partial \mathbf{k}}. \end{equation} Substituting Eq. (14) into the left-hand side of Eq. (13) yields \begin{equation} \left(-\frac{\hbar\omega}{T}\nabla T\right)\cdot\frac{\partial f_0}{\hbar\partial \mathbf{k}}=-\frac{f_1}{\tau}. \end{equation} Thus, one can identify a thermal force $\mathbf{F}_T=\left(- \frac{\hbar\omega}{T}\nabla T\right)$ proportional to the magnon frequency and the thermal gradient \cite{thermalforce}. Inserting \eqref{c2} into \eqref{c1} with $\mathbf{F}=\mathbf{F}_T$, we obtain \begin{equation} \dot{\mathbf{r}}=\frac{\partial \omega}{\partial \mathbf{k}}+\frac{\omega}{T} \nabla T\times\boldsymbol{\Omega}. \end{equation} The magnon current density is given by $\mathbf{j}_m= \sum_{n,\mathbf{k}}\left[\dot{\mathbf{r}}f(n,\mathbf{k})\right]$, where the summation is over all magnon states.
Keeping terms linear in the thermal gradient and converting the summation to an integration, we have \begin{equation} \mathbf{j}_m=\tensor{\boldsymbol{\kappa}}(-k_B\nabla T), \end{equation} where the longitudinal heat conductance $\kappa_{xx}$ and the anomalous Nernst coefficient $\kappa_{xy}$ are \begin{eqnarray} \kappa_{xx}=\frac{\tau}{(2\pi)^2}\sum_{n}\iint \beta\left( \frac{\partial \omega_n}{\partial k_x}\right)^2 \rho\left(\beta\hbar\omega_n \right) d^2\mathbf{k},\label{kxx}\\ \kappa_{xy}=\frac{1}{(2\pi)^2}\sum_{n}\iint\beta\omega_n \Omega_nf_0\left(\beta\hbar\omega_n\right)d^2\mathbf{k}\label{kxy}, \end{eqnarray} where $\rho(x)=\frac{xe^x}{(e^x-1)^2}$, $f_0(x)=\frac{1}{e^x-1}$, and $n=1,2$ labels the lower and upper magnon bands (a numerical sketch of the evaluation of \eqref{kxy} is given at the end of this discussion). Figure 3(a) shows the temperature dependence of $\kappa_{xx}$ and $\kappa_{xy}$ in the two different topologically nontrivial phases specified by $O_1$ and $O_2$ in Fig. 2(a). To give a quantitative sense of the results, we use $\mathrm{Sr_2IrO_4}$ parameters of $a=0.55$ nm \cite{para}, $J=19.6\mu_0\mu_B^2/a^3$ \cite{pseudo}, and $\gamma=2.21 \times10^5$ rad/s/(A/m) in all the following discussions. The longitudinal heat conductance $\kappa_{xx}$ is always positive, as expected from the thermodynamic law that magnons move from the hot side to the cold side. Eq. \eqref{kxy} shows that the AMNE coefficient is determined by the Berry curvature distribution in the momentum space and the magnon equilibrium distribution function. Since the magnon population of the lower band is larger than that of the upper band according to the Bose-Einstein distribution, the sign of the AMNE coefficient is always determined by the Berry curvature of the lower magnon band. At very low temperature, only the magnons near the $\Gamma$ point [band bottom (top) of the lower (upper) band] are excited. The sign of the AMNE coefficient is then determined by $\Omega$ around $\Gamma$, and its value is small because the Berry curvature $\Omega$ is very close to zero there, if not exactly zero, and the magnon number is also small. At higher temperatures, the magnon populations near the K and K$^\prime$ points [band top (bottom) of the lower (upper) band] become large enough to dominate the AMNE, owing to the significant non-zero values of the Berry curvature located only near those points. At even higher temperatures, when the equipartition theorem applies so that $f_0\approx k_BT/(\hbar\omega)$, the AMNE coefficient is close to zero because $\kappa_{xy}$ is approximately proportional to $(\mathcal{C}_u+\mathcal{C}_l)=0$ \cite{chern,book_niu}, i.e. the contributions from the two bands cancel each other. The general behavior of the AMNE coefficient $\kappa_{xy}$ mentioned above can be illustrated by the two representative points in the two distinct topologically nontrivial phases of $C_l=-1$ (for $O_1$) and $C_l=1$ (for $O_2$). For $O_1$, whose Berry curvature distribution is given in the top panel of Fig. 2(b), $\kappa_{xy}$ is always positive, giving a transverse magnon current along $\mathbf{m}_0\times (-\nabla T)$, because $\Omega$ is positive near both the $\Gamma$ and K (K$^\prime$) points. For $O_2$, at very low temperatures, when the magnon numbers around K and K$^\prime$ are negligible and only the magnons near the $\Gamma$ point are excited, $\kappa_{xy}$ initially decreases and becomes more and more negative with increasing temperature, because $\Omega$ is negative near the $\Gamma$ point.
However, when magnons near the K and K$^\prime$ points are excited, $\kappa_{xy}$ starts to increase with temperature, and becomes positive above an intermediate temperature because $\Omega$ has large positive values near K and K$^\prime$. Thus, in this phase the sign of the AMNE coefficient reverses at the intermediate temperature. The numerical results for $\kappa_{xx}$ and $\kappa_{xy}$ at higher temperature are shown in the inset of Figure 3(a). The longitudinal heat conductance $\kappa_{xx}$ saturates at high temperature. The AMNE coefficient $\kappa_{xy}$ at $O_1$ ($O_2$) increases from 0 to a maximum positive (negative) value as the temperature increases, and then gradually goes back to 0 when magnons in the upper band are thermally excited. This indicates that there is an optimal temperature for the maximal AMNE coefficient. If this temperature does not exceed the Curie temperature, it should be used to obtain the largest AMNE. \begin{figure} \centering \includegraphics[width=8.5cm]{Fig3.eps}\\ \caption{(a) The longitudinal magnon conductance $\kappa_{xx}$ (left axis) and the AMNE coefficient $\kappa_{xy}$ (right axis) for parameters at $O_1$ and $O_2$. The inset shows the high-temperature values of the same quantities. (b) (Panels 1 to 3) The density plots of $\kappa_{xy}$ [in units of $\mathrm{eV^{-1}s^{-1}}$] in the $D/J$-$\Delta/J$ plane for $K=10J$ and $F=5J$ at different temperatures. The black solid lines are the topological phase boundaries, and the black dashed lines are the contour lines of $\kappa_{xy}=0$. (Panel 4) The sign of the maximum value of $\kappa_{xy}$. The red region is for positive $\kappa_{xy}$ and the blue region is for negative $\kappa_{xy}$. The dashed line is the contour curve of $\kappa_{xy}=0$. } \label{results} \end{figure} In the topologically trivial phase, the Berry curvature $\Omega$ of the same band has opposite values near the K and K$^\prime$ points. Thus the contributions to the AMNE from the two valleys cancel each other, and the net transverse magnon current can be in either direction, depending on the parameters. Figure 3(b) shows the density plots of $\kappa_{xy}$ as a function of $D/J$ and $\Delta/J$ at different temperatures (for $K=10J$ and $F=5J$). Because of the featured distribution of the Berry curvature near $\Gamma$ discussed above, the sign change of $\kappa_{xy}$ happens at larger $D$ at lower temperatures, and differs from the topological phase boundaries shown by the black solid lines. However, the sign change of $\kappa_{xy}$ is closely related to the topological phase transition, as shown in the last panel of Figure 3(b). The sign change of $\kappa_{xy}$ coincides with the topological phase transition line of $D=\frac{\sqrt{3}F^2}{16M}$ and $\Delta=0$. Tuning the DMI can drive the system from one topologically nontrivial phase to another at $\Delta=0$; the sign of $\kappa_{xy}$ at its maximum changes at the same time, due to the sign reversal of the Berry curvatures. This also means that, for the parameters with negative $\kappa_{xy}$ in the last panel, there is a temperature-induced sign reversal of $\kappa_{xy}$. In the above discussions, we studied the magnon Nernst effect, a transverse magnon current generated by a longitudinal thermal gradient.
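As promised above, a minimal numerical sketch of Eq. \eqref{kxy} (not the authors' code), reusing bands() and curvature() from the previous snippet; it sets $\hbar=k_B=1$, measures $T$ in units of $J$, and omits the overall unit conversions:
\begin{verbatim}
def kappa_xy(T, N=60):
    # Eq. (kxy) evaluated on an N x N grid over one reciprocal unit cell
    G1 = 2 * np.pi * np.array([1 / np.sqrt(3), 1 / 3.0])
    G2 = 2 * np.pi * np.array([-1 / np.sqrt(3), 1 / 3.0])
    area = abs(G1[0] * G2[1] - G1[1] * G2[0])
    tot = 0.0
    for i in range(N):
        for j in range(N):
            k = ((i + 0.5) * G1 + (j + 0.5) * G2) / N
            for band in (0, 1):
                w = bands(k)[band][0]
                tot += (w / T) * curvature(band, k) / np.expm1(w / T)
    return tot * area / N**2 / (2 * np.pi) ** 2

# e.g. scan T to reproduce the nonmonotonic trend of Fig. 3(a):
# for T in (1.0, 3.0, 10.0, 30.0): print(T, kappa_xy(T))
\end{verbatim}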
Similar to electronic systems, there are other related effects, such as a transverse magnon current induced by a longitudinal chemical potential gradient (the magnon Hall and anomalous magnon Hall effects) and a transverse magnon heat current induced by a longitudinal chemical potential gradient (the magnon Peltier effect). These effects can be investigated in the same way as done here, since the same Berry curvature physics applies. Similar topological phase transitions and a sign reversal of the AMNE were also predicted in pyrochlore lattices \cite{mook}. In the calculation of the thermal transport coefficients, the thermal energy $k_BT$ is allowed to be much higher than $J$. In real materials, the temperature is limited by the Curie temperature, which is of order $J/k_B$. For example, $J=20$ meV ($2.5\times10^5 \mu_0\mu^2/a^3$) and the Curie temperature is about 240 K \cite{exp2012} for $\mathrm{Sr_2IrO_4}$. The sign-reversal temperature is $9.5J/k_B$, as shown in Figure 3(a). Thus, in this case any accessible temperature is much smaller than the sign-reversal temperature, so the AMNE coefficient should always be positive. The reason why the Berry curvature near the $\Gamma$ point has the opposite sign, and the factors that affect the Berry curvature distribution, are still open questions. \section{Conclusion} In conclusion, we studied the thermal magnon transport of a perpendicularly magnetized honeycomb lattice with the nearest-neighbor pseudodipolar interaction and the next-nearest-neighbor DMI. We show that the system has various topologically nontrivial phases. Due to the nontrivial Berry curvature, a transverse magnon current appears when a thermal gradient is applied, resulting in an anomalous magnon Nernst effect. The sign of the anomalous magnon Nernst effect can be reversed by tuning the DMI and the temperature. \section*{Acknowledgements} This work was supported by the National Natural Science Foundation of China (Grant No. 11374249) and Hong Kong RGC (Grants No. 16300117 and 16301816). X.S.W. acknowledges support from UESTC and the China Postdoctoral Science Foundation (Grant No. 2017M612932). \section*{References}
{ "timestamp": "2017-12-15T02:01:58", "yymm": "1712", "arxiv_id": "1712.05027", "language": "en", "url": "https://arxiv.org/abs/1712.05027" }
\section{} \citet{ros64,ros73} and \citet{ros89} conducted a multi-decade (1955 -- 1986) imaging survey for novae in M31 using the 1.22m reflector at the Asiago observatory, supplemented from 1973 onwards with observations from the 1.82m telescope at Mount Ekar. During the course of the survey, Rosino discovered a total of 142 nova candidates in M31. As described in \citet{sha15}, based on spatial coincidence, a total of six of these outbursts (R029 = M31N 1960-12a, R039 = 1962-11a, R048 = 1963-09c, R066 = 1966-08a, R079 = 1968-09a, R081 = 1968-10c) were found to be associated with four recurrent nova candidates (M31N 1960-12a = 2013-05b, M31N 1926-06a = 1962-11a, M31N 1963-09c = 1968-09a = 2010-10e = 2015-10c, M31N 1966-08a = 1968-10c). Perhaps the most interesting of these eruptions is the last pair, M31N 1966-08a = 1968-10c (R066 = R081), which were observed on 12 August 1966 and 25 October 1968, respectively. Available observations restrict the duration of the eruptions to less than 2 days. Not only is the interval between eruptions ($\sim2.2$ yr) extremely short -- shorter than that of any known recurrent nova with the exception of the remarkable system M31N 2008-12a with a $\sim1$~yr recurrence time \citep[e.g., see][and references therein]{dar17a, dar17b, hen17} -- but unlike 2008-12a (where an eruption has been detected every year for the past 9 years), M31N 1966-08a was only seen in eruption twice in the past half century. It would be surprising if M31N 1966-08a were a recurrent nova in M31 or a foreground Galactic dwarf nova, as such systems typically remain near maximum light for several days or more. Given that the object lies just 16$'$ from the center of M31 in a region of the galaxy that is routinely monitored down to $m\sim18$ by a number of amateur and professional astronomers alike, missing additional eruptions would be particularly unlikely. On the other hand, the less predictable and much shorter duration (hours) flares from dMe stars could have more easily escaped detection. To explore the possibility that M31N 1966-08a might be a foreground Galactic flare star, we examined the deep photographic images of M31 from \citet{mas06}. A stellar object, J004123.75+411459.6, with $V=20.3$ and $U-B=1.9$, $V-R=1.5$, $R-I=2.0$, was found just 0.8$''$ south of the nominal position of M31N 1966-08a ({\it R.A.\/} = $00^{\mathrm h}~41^{\mathrm m}~23.75^{\mathrm s}$, {\it Decl.\/} = $+41^{\circ}~15'~00.4''$). The star has also been detected in the infrared as 2MASS~00412375+4114596, with $J=15.3$, $H=14.9$, and $K=14.4$ \citep{sku06}. Given its proximity, we have assumed that this star is the progenitor of M31N 1966-08a and 1968-10c. The implied outburst amplitude of $\sim3-4$ mag would be unusually low for a recurrent nova (without precedent, in fact), but not unprecedented for a dMe star, which can have flares of up to $5-6$ magnitudes \citep[e.g., see][for the bright flares in AD Leo and YZ CMi, respectively]{haw91,kow10}. To confirm the flare star hypothesis, on 2017 Nov. 23.27 UT we obtained a spectrum of J004123.75+411459.6 with the LRS2 spectrograph on the Hobby-Eberly Telescope (HET). The spectrum, shown in Figure~1, is that of a classic dMe flare star. The strength of the TiO bandheads at $\sim$5430\AA\ and $\sim$6150\AA\ suggests a spectral type of approximately M3 \citep[see][]{boc07}, and the visibility of the CaH feature at $\sim$6380\AA\ suggests that the star is a dwarf, not a giant.
Finally, we note that the measured emission-line Balmer decrement (H$\alpha$:H$\beta$) is $\sim$4.2, which is typical of dMe flare stars in quiescence. In summary, given the brevity of the two observed outbursts, the relatively low outburst amplitude, and especially the spectrum of the quiescent optical counterpart, we conclude that M31N 1966-08a = 1968-10c is neither a recurrent nova in M31 nor a Galactic dwarf nova, but rather a Galactic dMe flare star projected against the Andromeda galaxy. \begin{figure} \begin{center} \includegraphics[scale=0.65,angle=0]{m31n196608a.pdf} \caption{The HET spectrum of the quiescent counterpart of M31N 1966-08a, J004123.75+411459.6 (the absolute flux calibration is approximate). The spectrum is typical of a dMe flare star, displaying strong TiO bandheads at $\sim$5430\AA\ and $\sim$6150\AA, along with narrow Balmer emission lines. Detection of the CaH absorption feature at $\sim$6380\AA\ suggests that the star is a dwarf. } \end{center} \end{figure}
{ "timestamp": "2017-12-15T02:01:55", "yymm": "1712", "arxiv_id": "1712.05023", "language": "en", "url": "https://arxiv.org/abs/1712.05023" }
\section{Introduction} The ubiquitous deployment of nanocrystals (NCs) in photonics stems from the impressive tunability of their physical and chemical properties, combined with the nano-positioning opportunities offered by support-free colloids and the possibility of mass production at low cost \cite{Kovalenko2015}. These features have also promoted NCs as efficient biological markers for imaging \cite{Zheng2015, Lyu2016}, color filters in liquid crystal displays \cite{Kim2013a}, and functionalizing elements in light-emitting and light-harvesting devices \cite{Talapin2010}. On the other hand, advanced nanophotonic applications are emerging based on the generation, manipulation and detection of single photons \cite{OBrien2009, Aharonovich2016}. Indeed, leveraging single-photon statistics and quantum coherence for sub-diffraction imaging \cite{GattoMonticone2014}, quantum cryptography \cite{Sangouard2012}, simulation \cite{Tillmann2015}, enhanced precision measurements and information processing \cite{Knill2001} have become roadmap targets for the next 10-20 years \cite{Qmanifesto}. Single-photon sources based on quantum emitters hold promise for these applications because of their on-demand operation \cite{Lounis2005, Chu2017, Loredo2017, Sipahigil2014, Lettow2010a}. However, despite great efforts in recent years to attain controllable sources by coupling solid-state emitters to nanophotonic structures, each platform privileges either freedom in the device design \cite{Bermudez-Urena2015, Schroeder2011, Liebermeister2014, Riedrich2014, Schell2013, Shi2016} or the quality of the single-photon emission \cite{Somaschi2016, Sapienza2015}. Deterministic positioning and control of quantum emitters remains elusive for epitaxial quantum dots \cite{Arcari2014, Daveau2017, Zadeh2016, Davanco2017}, color centers in bulk diamond \cite{Hausmann2012, Mouradian2015} and organic molecules in crystalline matrices \cite{Turschmann2017, Lombardi2017, Checcucci2016, Skoff2016}. On the other hand, versatile approaches based on currently available NCs present important shortcomings with respect to single-photon applications. Photoinduced charge rearrangements in the passivation layer and in the environment of inorganic semiconductor quantum-dot NCs \cite{Pisanello2013, Liu2017} lead to spectral instability of the exciton line \cite{Empedocles1997}, hindering basic quantum optics operations with the emitted photons. Moreover, intermittence in the photoluminescence \cite{Efros2016}, known as blinking, seriously affects the average fluorescence quantum yield and hence the photon state purity. Although important results have been obtained by improving synthesis protocols \cite{Chandrasekaran2017} or introducing perovskite materials \cite{Park2015, Raino2016}, the emitter photostability in time or frequency is still below expectations. Notably, similar issues characterize the emission of color centres in nanodiamonds, including those which possess superb optical properties in bulk, such as the widely studied negatively charged silicon vacancy \cite{Jantzen2016, Sipahigil2014} or chromium-related defects \cite{Tran2017}. Hence, despite the wealth of materials and protocols, there are still fundamental limitations to the use of NCs in single-photon applications. We here propose and report on self-assembled and support-free organic NCs (hundreds of \SI{}{\nano\meter} in size) of anthracene doped with single fluorescent dibenzoterrylene molecules (DBT:Ac).
We demonstrate that the remarkable features of the bulk system \cite{Toninelli2010a, Nicolet2007, Trebbia2009}, belonging to the family of Polycyclic Aromatic Hydrocarbons (PAH), are preserved in a nanocrystalline environment. In particular, DBT:Ac NCs exhibit bright and photostable single-photon emission at room temperature that is spectrally stable and almost lifetime-limited (\SI{50}{MHz}) at cryogenic temperatures. The combination of such properties is unique and opens the way to the use of organic NCs for quantum technologies and for single-photon applications in general. \section{Results and discussion} We adapted a simple, cost-effective and well-established reprecipitation method \cite{Horn2001, Kasai1992, Kang2004, Baba2011} to grow Ac NCs doped with a controlled concentration of DBT molecules (for details see the Experimental Section). In this procedure, a dilute solution of the compounds prepared with a water-soluble solvent (acetone, in our case) is injected into sonicating water, where it divides into many droplets. The solvent gradually dissolves into the water and, correspondingly, the concentration in the micro-droplets becomes super-saturated until the compounds, which are not water-soluble, reprecipitate in the form of NCs. The size and shape of the resulting NCs can ideally be controlled by varying the thermodynamic conditions \cite{Chung2006}. For morphological and optical characterization, a drop of the suspension of NCs in water is deposited on a coverglass substrate and dried in a desiccator. Typical scanning electron microscopy (SEM) and atomic force microscopy (AFM) images are displayed in Figures \ref{fig1}a and \ref{fig1}b, respectively. For some NCs it is possible to identify peculiar features of crystalline Ac - such as the hexagonal-like morphology - while others exhibit a round-like shape, possibly due to a few-\SI{}{\nano\meter} acetone-rich solvent cage. By analysing the AFM images of 92 NCs (Figure \ref{fig1}d), we deduce an average equivalent diameter of ($113 \pm 64$) \SI{}{\nano\meter} and an average thickness of ($65 \pm 13$) \SI{}{\nano\meter}, compatible with a platelet-like shape. Such values and shape are particularly promising for coupling to evanescent fields in proximity to surfaces \cite{Skoff2016}. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Fig/fig1-new3.png} \caption{\textbf{Morphology of DBT:Ac NCs grown via reprecipitation}: Typical SEM \textbf{(a)} and AFM topography \textbf{(b)} images. \textbf{(c)} Cross section showing the NC thickness profile along the white line in panel \textbf{(b)}. \textbf{(d)} Statistical analysis of the AFM images, yielding an NC equivalent diameter of ($113 \pm 64$)\SI{}{\nano\meter} and an average thickness of ($65 \pm 13$)\SI{}{\nano\meter}. \textbf{(e)} Normalized XRD pattern in which only the peaks from the (001) plane and higher order reflections are well resolved, due to the NCs' platelet-like morphology (with c-axis perpendicular to the substrate).} \label{fig1} \end{figure} The crystalline nature suggested by the clear-cut edges and flat surfaces (see Figure \ref{fig1}c) is verified by X-ray diffraction (XRD) measurements. The XRD pattern shown in Figure \ref{fig1}e exhibits a strong diffraction peak at \SI{9.17}{\degree} - corresponding to the (001) plane - and other equivalent periodic peaks corresponding to the (002), (003), and (004) planes, matching the crystallographic data for the anthracene monoclinic system \cite{Brock1990}.
This also reveals that the Ac NCs, once deposited on the substrate, are mainly iso-oriented with the c-axis perpendicular to the substrate. Let us note that the transition dipole moment of a DBT molecule in the main insertion site of an Ac crystal is mostly oriented along the b-axis \cite{Nicolet2007b}, and is thus parallel to the substrate. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Fig/fig2-bis.png} \caption{\textbf{Photophysics of the NCs at RT}: \textbf{(a)} Off-resonant pumping scheme employed for single-molecule microscopy at RT. \textbf{(b)} Comparison between white light and fluorescence wide-field EMCCD images of the same region, demonstrating that about 90\% of the Ac NCs are successfully doped with DBT. \textbf{(c)} Measured photon anti-bunching ($g^2(0)=0.05$ from the fit, red solid line) from the emission of a single NC without any background correction (black dots). \textbf{(d)} Time-resolved measurement of the fluorescence decay from a single DBT:Ac NC (black line). A single exponential fit (red line) yields an excited-state lifetime of $(4.2 \pm 0.1)$\SI{}{\nano\second}, as a free fitting parameter with its standard error. The inset shows the distribution of the excited-state lifetime collected from 40 NCs. The red curve is a Gaussian fit centered at \SI{4.2}{\nano\second} with FWHM = \SI{0.4}{\nano\second}. \textbf{(e)} Saturation measurement performed on a single DBT:Ac NC (black dots). The fitted red curve yields a saturation intensity of $I_\textup{s} = (80\pm 6)$ \SI{}{\kilo\watt\per\square\centi\metre} and a maximum detected count rate $R_{\infty}= (1.50 \pm 0.02)$ Mcps. In the inset, a typical BFP image of a single NC.} \label{fig2} \end{figure} The sub-micrometric size of the crystalline matrix may compromise the optical properties of the DBT molecules embedded therein, due to strain within the crystal and imperfections at the interfaces. Indeed, besides the case of quantum dots, it has been reported for other nanocrystalline systems that such effects cause fluorescence instability and linewidth broadening \cite{Meltzer2001, Jantzen2016}. We thus perform single-molecule microscopy and spectroscopy on DBT:Ac NCs with a home-built epifluorescence scanning confocal microscope - described in detail in the Experimental Section \ref{exp} - that allows for investigation at both room and cryogenic temperatures. At room temperature (RT), the sample is illuminated with a \SI{767}{\nano\meter} continuous wave (CW) diode laser to pump the DBT molecules into the vibrational band of the first singlet electronic excited state (see the simplified Jablonski diagram in Figure \ref{fig2}a). After a fast (ps-timescale) non-radiative relaxation process to the lowest level of the vibrational manifold, the molecules decay to the electronic (singlet) ground state. The resulting red-shifted fluorescence light around \SI{785}{\nano\meter} is detected with an electron-multiplying charge-coupled device (EMCCD). Typical white light and wide-field fluorescence images are compared in Figure \ref{fig2}b, showing that more than 90\% of the Ac NCs are successfully doped with DBT. To prove that the detected fluorescence stems from individual DBT molecules, single isolated crystals are illuminated in confocal mode with an excitation intensity of \SI{15}{\kilo\watt\per\square\centi\metre} (well below saturation) and the correlation between photon arrival times is measured with the Hanbury Brown-Twiss (HBT) setup (see the Experimental Section).
Figure \ref{fig2}c shows the histogram of the observed coincidences from a single NC, featuring a strong antibunching dip. The experimental data are fitted at short time delays $\tau$ with the function $g^{2}(\tau)=1-b\cdot \exp(-|\tau|/\Delta t)$, where $\Delta t$ accounts for the excitation and spontaneous emission rates \cite{Trebbia2009} and $b$, the dip depth, is found to be $(95\pm 1)$\%. Among 40 analyzed NCs, 73\% display an antibunching dip larger than 50\%, demonstrating that the proposed recipe reliably grows individual Ac NCs, doped with single DBT molecules in about two thirds of cases. The purity of this system, \textit{i.e.} the second-order correlation function at zero time delay $g^{2}(0)$, can be as low as $0.05 \pm 0.01$ without any background correction. To gain further information on the emitter properties, we study the relaxation dynamics by means of time-correlated single-photon counting (TCSPC) measurements, collecting photons emitted after Ti:Sa-pulsed excitation (average intensity equal to \SI{20}{\kilo\watt\per\square\centi\metre}) with a single-photon avalanche diode (SPAD). Figure \ref{fig2}d shows a typical measured fluorescence decay curve from which the lifetime $\tau_{f}$ of the excited state can be derived via a single exponential fit in the presence of a constant background. The fit (red curve) yields an excited-state lifetime of $(4.2 \pm 0.1)$\SI{}{\nano\second}. Repeating the measurement on 40 NCs, we obtain the distribution for the excited-state lifetime shown in the inset of Figure \ref{fig2}d, which can be fitted with a Gaussian centered at \SI{4.2}{\nano\second} with a full width at half maximum (FWHM) of \SI{0.4}{\nano\second}, in agreement with previous studies on the bulk system \cite{Toninelli2010a, Mazzamuto2014, Polisseni2016}. The brightness of the NC-based single-photon source is quantified by studying the saturation behavior of the system, non-resonantly pumped with the \SI{767}{\nano\meter} CW laser. Measurements are performed at different excitation intensities, scanning the sample under the confocal laser spot in the small region where the NC is located and detecting the red-shifted fluorescence with a single SPAD. From the obtained fluorescence maps, the mean value within an area around the brightest pixel is extracted and corrected for the background counts, estimated as the mean value within an area outside the NC, which scales linearly with the laser power. The data are plotted as a function of the laser intensity $I$ (black dots in Figure \ref{fig2}e) and fitted with the function describing the saturation of the photon detection rate $R(I)$ \cite{Moerner2003}: \begin{equation}\label{sat} R(I) = R_{\infty}\frac{I}{I+I_s} \end{equation} with $I_s$ the saturation intensity and $R_{\infty}$ the maximum detected count rate. For the molecule reported in Figure \ref{fig2}e, the fit yields, as free fitting parameters with their standard errors, $I_s = (80\pm 6)$ \SI{}{\kilo\watt\per\square\centi\metre} and a maximum detected count rate $R_{\infty}= (1.50 \pm 0.02)$ Mcps. These can be considered typical values. Accounting for the quantum efficiency of the SPAD, $\eta_{det}=50\%$, the measured $R_{\infty}$ corresponds to a collected photon rate of \SI{3}{\mega\hertz} at the detector.
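As an illustration, a minimal sketch of such a saturation fit follows (not the analysis code used here); the data arrays are placeholders generated to resemble the quoted values:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def saturation(I, R_inf, I_s):
    # Eq. (sat): R(I) = R_inf * I / (I + I_s)
    return R_inf * I / (I + I_s)

# placeholder data: intensity in kW/cm^2, detected rate in counts/s
I_kW = np.array([5, 10, 20, 40, 80, 160, 320, 640], dtype=float)
R_cps = np.array([0.09, 0.17, 0.30, 0.50, 0.75, 1.00, 1.20, 1.33]) * 1e6

popt, pcov = curve_fit(saturation, I_kW, R_cps, p0=(1.5e6, 80.0))
R_inf, I_s = popt
R_err, I_err = np.sqrt(np.diag(pcov))   # standard errors of the fit
\end{verbatim}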
Moreover, comparing this value with the theoretical one of \SI{240}{\mega\hertz}, related to the measured lifetime through the relation $R_{\infty}\simeq (1/\tau_{f})$ and assuming unitary quantum yield, we estimate the total collection efficiency of our setup at RT to be around 1\%, ascribed to the limited numerical aperture of the optics and their transmission, combined with the molecule emission profile \cite{Checcucci2016}. In order to determine the alignment of the DBT molecules within the Ac NCs, the emission from single molecules is detected by imaging the objective back focal plane (BFP), from which the angular radiation pattern can be deduced \cite{Lieb2004}. A typical BFP image is shown in the inset of Figure \ref{fig2}e, where the emission pattern features two side lobes facing each other beyond the critical angle, corresponding to the coupling between the evanescent wave in air and the propagating wave in the coverglass. The geometry and the direction of the two lobes confirm a horizontally aligned molecule, compatible with the XRD observations. To conclude on the observed photophysical properties of the DBT:Ac NCs at RT, let us note that the repeated excitation of the same molecule to study its saturation behavior is a qualitative proof of the stability of its fluorescence. After several hours of measurements at RT, though, some molecules start exhibiting fluorescence blinking, typically before they stop fluorescing completely. This so-called photobleaching is most probably due to chemical reactions of the dye molecule with ambient oxygen \cite{Kozankiewicz2014}, a process that is more likely to occur in conjunction with the sublimation of Ac at RT. For sub-micrometric crystals we observe that sublimation at RT takes place on a time scale of about one day, but it is completely suppressed when covering the sample with a thin layer of a water-soluble polymer, such as poly(vinyl alcohol). \begin{figure*}[h!] \centering \includegraphics[width=0.8\textwidth]{Fig/fig3.png} \caption{\textbf{Photophysics of the NCs at \SI{2.9}{K}}: \textbf{(a)} Resonant excitation spectrum of a single DBT molecule in an Ac NC. The pumping scheme is sketched in the top inset. Data (black circles) are fitted with a Lorentzian profile (red curve) and an average over four consecutive measurements yields a FWHM $=(51 \pm 10)$ \SI{}{\mega\hertz}. The bottom inset shows the linewidth distribution of 35 molecules. \textbf{(b)} 2D plot of the excitation spectrum in time, in a frequency range where two molecules within the same NC are excited. The difference between the two peak central frequencies is plotted as white circles in the map, while the corresponding distributions are shown as histograms in the top panel. \textbf{(c)} Saturation curve (red circles) and power broadening (blue circles) of the ZPL are displayed with the theoretical fits (solid lines), yielding a maximum number of detected photons $R_{\infty}=(16.8 \pm 0.4)$ kcps. \textbf{(d)-top} Excitation spectrum collected from a single NC within a frequency range of \SI{800}{\giga\hertz} around \SI{784.6}{\nano\meter}. \textbf{(d)-bottom} Inhomogeneous distribution of the ZPLs collected from 20 NCs.} \label{fig3} \end{figure*} At cryogenic temperatures, highly doped DBT:Ac NCs are studied under resonant excitation of the so-called 00-Zero Phonon Line (ZPL) ($\ket{S_{0,\nu_0}} \rightarrow \ket{S_{1, \nu_0}}$).
In this pumping scheme, sketched in the inset of Figure \ref{fig3}a, single DBT molecules can be addressed spectrally one at a time by tuning the frequency of a narrow-band laser and exploiting the inhomogeneous distribution of the molecular resonances. In fact, depending on the host matrix, the ZPLs of PAH molecules can be distributed over a frequency range that can be smaller than \SI{1}{\giga\hertz} in unstressed sublimated crystals and as large as \SI{10}{\tera\hertz} in polymers or amorphous materials \cite{Veerman1999, Kramer2002, Kozankiewicz1994}. We found that a DBT concentration about six orders of magnitude higher than the one proposed for the RT characterization allows us to spectrally select single molecules within our experimental full range of about \SI{800}{\giga\hertz} around \SI{784.6}{\nano\meter}. This will be discussed further on in the manuscript. Figure \ref{fig3}a shows the excitation spectrum of a single DBT molecule illuminated in confocal mode at \SI{0.3}{\watt\per\square\centi\metre} (below saturation), recording the red-shifted fluorescence as a function of the laser frequency (black circles). The spectral line is fitted with a Lorentzian profile (red curve), yielding a FWHM of \SI{51}{\mega\hertz}, with an uncertainty of \SI{10}{\mega\hertz} given by the standard deviation of four consecutive measurements of the same spectrum. Repeating this procedure on 35 molecules in different NCs leads to the distribution displayed in the inset of Figure \ref{fig3}a, with a low-width cutoff consistent with the lifetime-limited value of \SI{40}{\mega\hertz}. The presence of molecules with broader linewidths can be explained in terms of the reduced size of the NC, which provides a less homogeneous environment for the DBT molecules than that of bulk Ac. Moreover, interface effects on fluorescence stability and linewidth broadening are more likely to occur. However, let us note that the observed linewidth distribution is narrower than that measured by Gmeiner \textit{et al.} \cite{Gmeiner2016}, confirming the high crystallinity of the Ac NCs grown via reprecipitation. Spectral diffusion has so far hindered the deployment of traditional (inorganic) NCs for narrow-band applications, such as single-photon sources for quantum technologies. We hence carefully analyze the spectral stability of the molecular transition frequency. In Figure \ref{fig3}b, the NC fluorescence counts detected by a SPAD are displayed in a 2D color map, obtained by repeatedly scanning the excitation frequency of the pump laser over \SI{10}{\giga\hertz} for 1 hour. The excitation of two different molecules can be recognized. The mean values and standard deviations of the two molecules' ZPL linewidths over all measurements are $(65 \pm 6)$\SI{}{\mega\hertz} for peak 1 and $(59 \pm 4)$\SI{}{\mega\hertz} for peak 2. However, the common-mode fluctuation of the peak central frequencies is a clear indication of the non-negligible contribution of the pump laser instability (the laser diode is thermally stabilized but not referenced to any absolute frequency standard). To get rid of this contribution and highlight possible spectral diffusion, we analyze the distribution of the difference between the two peak central frequencies (see the top panel in Figure \ref{fig3}b), plotted as white circles in the map. The maximum variation of this differential value is \SI{17}{\mega\hertz}, which is well within the molecular linewidth and suggests negligible spectral diffusion for DBT:Ac NCs at \SI{3}{\kelvin}.
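A minimal sketch of this differential analysis follows (not the analysis code used here; the input sweeps and starting guesses are placeholders): each frequency sweep is fitted with a sum of two Lorentzians, and the spread of the fitted peak separation bounds the spectral diffusion.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, f1, f2, w1, w2, a1, a2, bg):
    L = lambda f0, w, a: a * (w / 2)**2 / ((f - f0)**2 + (w / 2)**2)
    return L(f1, w1, a1) + L(f2, w2, a2) + bg

def peak_separations(sweeps, p0):
    # sweeps: list of (frequency_MHz, counts) pairs, one per laser scan
    seps = []
    for f, c in sweeps:
        popt, _ = curve_fit(two_lorentzians, f, c, p0=p0)
        seps.append(abs(popt[1] - popt[0]))
        p0 = popt            # warm-start the fit of the next sweep
    return np.array(seps)

# the separation is immune to common-mode laser drift:
# print(peak_separations(sweeps, p0).ptp())  # max variation, cf. 17 MHz
\end{verbatim}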
The same analysis has been carried out on pairs of molecules in 8 different NCs. We observed sizable fluctuations only in one case, where the ZPL central frequency over time exhibited a standard deviation of \SI{54}{\mega\hertz}. For comparison, we recall that aromatic molecules in polymers or other amorphous hosts present linewidths as large as a few GHz, accompanied by large spectral jumps (of the order of tens of \SI{}{\giga\hertz}) \cite{Walser2009, Kozankiewicz1994, Boiron1999}. Also, when molecules are embedded in a poor crystalline environment, a broadening of both the linewidth and the inhomogeneous distribution is observed, and spectral jumps, even if in a narrow frequency range of tens of \SI{}{\mega\hertz}, are more likely to occur \cite{Gmeiner2016}. Figure \ref{fig3}c shows, in logarithmic scale, a typical saturation profile of a single molecule and its line broadening at low temperatures, obtained by measuring the excitation spectrum for several pump powers and plotting the detected count rates at resonance (blue circles) and the FWHMs (red circles) as a function of the excitation intensity. The detected counts are fitted with Eq. \ref{sat}, providing a saturation intensity $I_s=(0.73 \pm 0.03)$\SI{}{\watt\per\square\centi\metre} and a maximum number of detected photons $R_{\infty}=(16.8 \pm 0.4)$ kcps (free fitting parameters with their standard errors), or equivalently $R_{\infty}=33.6$ kcps accounting for the detection efficiency $\eta_{det}$. This count rate is compatible with the collection efficiency of our experimental setup for low-temperature measurements, about $0.3\times10^{-3}$, mainly due to the orientation of the emissive dipole and the low numerical aperture of the collecting optics. The power broadening of the homogeneous spectral line $\gamma_{hom}(I)$ fits perfectly with the expected saturation law (blue line in Figure \ref{fig3}c) given by the equation \cite{Ambrose1991}: \begin{equation} \gamma_{hom}(I) = \gamma_{hom}(0)\left(1+\frac{I}{I_{s}}\right)^{1/2} \label{broadening} \end{equation} which assumes negligible spectral diffusion, as previously demonstrated. The inhomogeneous broadening of DBT molecules in Ac NCs is studied by tuning the excitation laser over the available frequency range of about \SI{800}{\giga\hertz}. In the top panel of Figure \ref{fig3}d a typical excitation spectrum collected at \SI{2.9}{\kelvin} from a single NC is displayed, where we can distinguish about 80 peaks, each corresponding to a single molecule. The same measurement, performed simultaneously on 20 NCs illuminated in wide-field for two orthogonal polarizations of the pump laser, allows us to estimate the inhomogeneous distribution of the ZPLs of DBT molecules in Ac NCs. The result of this analysis is plotted in the histogram at the bottom of Figure \ref{fig3}d. We deduce a mean value of the transition wavelength equal to \SI{785.1}{\nano\meter}, with a standard deviation of \SI{0.4}{\nano\meter}, which is in agreement with the inhomogeneous broadening measured for other dyes in crystalline systems \cite{Moerner2003}. Finally, we observe that DBT:Ac NCs are ideal for deterministic integration into nanophotonic devices, opening new perspectives on the use of molecules in the development of real-world quantum technologies. \section{Conclusions} In this work we demonstrate organic nanocrystals doped with quantum emitters, performing as efficient, photostable and scalable single-photon sources at both room and cryogenic temperatures.
In particular, DBT:Ac crystals with an average size of a few hundred nanometers are presented. The growth procedure is based on reprecipitation, an inexpensive method that is adapted here for precise tuning of the DBT concentration. Atomic force microscopy shows that the crystals grown under our experimental conditions present an average thickness of about \SI{60}{nm} and an average size of \SI{100}{nm}. The reported values can be controlled and reduced by varying the reprecipitation conditions, such as the water temperature, droplet size, injected solution concentration and addition of surfactants. X-ray diffraction confirms the crystallinity of the nanoparticles and their platelet-like morphology. At room temperature, single DBT molecules in NCs show a maximum detected count rate of \SI{1.5}{\mega\hertz}, a multi-photon probability lower than 5\% and a well-defined dipole orientation. At \SI{2.9}{\kelvin}, the vast majority of molecules exhibit linewidths close to the lifetime-limited value and a relatively narrow inhomogeneous distribution of \SI{180}{\giga\hertz} around \SI{785}{\nano\metre}. Accurate investigation of their photostability demonstrates that each NC embeds several molecules with stable fluorescence lines, with no signs of blinking or spectral diffusion on time scales of hours. These results may be extended to different molecular host-guest systems, functionalization protocols and purposes, making active organic nanocrystals a new toolbox for the integration of quantum emitters in photonic and optoelectronic circuits, as well as in complex hybrid devices. \section{Experimental Section}\label{exp} \textbf{DBT:Ac NCs growth protocol.} The DBT:Ac NC growth procedure consists of injecting \SI{250}{\micro\liter} of a $1:10^6$ mixture of 1mM DBT-toluene and 5mM Ac-acetone solutions into \SI{5}{\milli\liter} of water. While the system is continuously sonicated for \SI{30}{\minute}, the solvents dissolve in the water and DBT:Ac crystals form in aqueous suspension. The solvents and Ac are purchased from Sigma Aldrich, the water is deionized by a Milli-Q Advantage A10 System (\SI{18.2}{\mega\ohm\centi\metre} at \SI{25}{\celsius}) and DBT is purchased from Mercachem.\\ \textbf{Morphological characterization.} The crystal size is evaluated by scanning electron microscopy (SEM, Phenom Pro, PhenomWorld) and atomic force microscopy (AFM, Pico SPM from Molecular Imaging in AC mode, equipped with a silicon probe NSG01 (NT-MDT) with a \SI{210}{\kilo\hertz} resonant frequency). XRD measurements were performed at CRIST, the Crystallographic Centre of the University of Florence (Italy), with a Bruker New D8 diffractometer on a sample made of a few \SI{}{\micro\liter} of suspension desiccated on a silicon low-background sample holder (Bruker AXS).\\ \textbf{Optical setup.} The optical characterization of DBT molecules within the sub-\SI{}{\micro\meter} Ac crystalline matrix was performed with a versatile home-built scanning fluorescence confocal microscope. The setup is equipped with a closed-cycle helium cryostat (Cryostation by Montana Instruments), capable of cooling samples down to \SI{2.9}{\kelvin}. Molecules can be excited at \SI{767}{\nano\meter} either by a continuous wave laser (CW, Toptica DL110-DFB) or by a pulsed Ti:Sapphire (\SI{200}{\femto\second} pulse width, \SI{81.2}{\mega\hertz} repetition rate) laser.
Alternatively, at cryogenic temperature, resonant excitation is performed with a narrowband fiber-coupled CW laser (Toptica, LD-0785-0080-DFB-1) centered at \SI{784.6}{\nano\meter}, whose frequency can be scanned continuously over a range of \SI{800}{\giga\hertz}. All laser sources are linearly polarized, and a half-wave plate in the excitation path allows optimal coupling to the transition dipole of individual DBT molecules. The laser intensities reported in the main text are calculated from the power measured at the objective entrance divided by the area of the confocal spot measured on the bare substrate (in both cases larger than the diffraction-limited spot). For low-temperature measurements, the excitation light is focused onto the sample by a long-working-distance air objective (Mitutoyo $100\times$ Plan Apochromat, NA $= 0.7$, WD = \SI{6}{\milli\meter}) and can be scanned over the sample through a telecentric system and a dual-axis galvo mirror. For room-temperature measurements, a high-NA oil immersion objective (Zeiss Plan Apochromat, $100\times$, NA$=1.4$) is used to focus light on the sample, which is mounted on a piezoelectric nanopositioner (NanoCube by Physik Instrumente). The Stokes-shifted fluorescence is collected by the same microscope objective used in excitation, separated from the excitation light through a dichroic mirror (Semrock FF776-Di01) and a longpass filter (Semrock RazorEdge 785RS-25), and detected by either an EM-CCD camera (Andor iXon 885, $1004\times1002$ pixels, pixel size \SI{8}{\micro\meter}$\times$\SI{8}{\micro\meter}) or two single-photon avalanche diodes ($\tau$-SPAD-50 Single Photon Counting Modules by PicoQuant). The SPADs can be used independently or in a Hanbury Brown-Twiss (HBT) configuration, using a time-correlated single-photon counting (TCSPC) card (PicoHarp, PicoQuant). A converging lens can be inserted in the excitation path to switch between confocal and wide-field illumination, while a converging lens added in the detection path before the EM-CCD camera allows us to study the wave-vector distribution of the light emitted by single DBT molecules via back focal plane (BFP) imaging. \section{Notes} The authors declare no competing financial interest. \begin{acknowledgement} The authors would like to thank S. Ciattini and L. Chelazzi (CRIST) for helping with the XRD measurements, M. Mamusa for dynamic light scattering experiments, F. Intonti for the microinfiltration setup, D. S. Wiersma for access to clean room facilities, M. Bellini and C. Corsi for Ti:sapphire operation, and K.G. Sch\"{a}dler and F.H.L. Koppens for helpful feedback on the NC properties and useful discussions about integration in hybrid devices. This work benefited from the COST Action MP1403 (Nanoscale Quantum Optics). The authors acknowledge financial support from the Fondazione Cassa di Risparmio di Firenze (GRANCASSA) and the MIUR program Q-Sec Ground Space Communications. \end{acknowledgement}
{ "timestamp": "2017-12-18T02:05:42", "yymm": "1712", "arxiv_id": "1712.05178", "language": "en", "url": "https://arxiv.org/abs/1712.05178" }
\section{Introduction} \label{sec:intro} The discovery of a quantized conductivity in the Quantum Hall Effect (QHE)~\cite{Klitzing1980} and its subsequent interpretation in terms of a topological invariant, the TKNN invariant~\cite{Thouless1982}, was (retrospectively) one of the first examples of topological phases of matter. Other examples, like Haldane's honeycomb model~\cite{Haldane1988}, showed that, in contrast to the QHE, a quantization phenomenon can be present even if the magnetic field vanishes on average. These fields are the main mechanism behind the breaking of time-reversal (TR) invariance, which in the presence of inversion symmetry leads to QHE-like states. In more recent years, the relevance of the spin-orbit interaction was recognized and led to the prediction of topologically non-trivial states~\cite{Kane2005} (these states are TR invariant). A prominent feature of topological insulators is the bulk-boundary correspondence and the related existence of edge states, which are protected in the presence of TR invariance~\cite{Hasan2010a}. All these discoveries have led to a very general topological band theory that allows one to classify different phases of quantum matter according to dimensionality and symmetry class. This also includes the class of topological superconductors, for which particle-hole symmetry plays a role analogous to that of TR symmetry for topological insulators, and where Majorana zero modes play a fundamental role~\cite{Hasan2010a,Kitaev2001}. Furthermore, the recognition of a dependence of the ground state degeneracy on the topology of space for the Fractional QHE, as well as for chiral spin states~\cite{Wen1990}, eventually led to our current understanding, according to which different phases of matter cannot always be distinguished in terms of symmetry considerations. Topology is nowadays recognized to play a fundamental role in our understanding of quantum phases of matter~\cite{Asorey2016}. In the present work we show how the introduction of orthogonal complex structures in the description of fermionic systems allows one to eliminate the Hilbert space redundancy which is familiar from Hamiltonians in the Bogoliubov-de Gennes (BdG) form. We also establish a direct link between the description of such systems in terms of Clifford and fermionic algebras. In section \ref{S:2} we review the structures that are most relevant for the description of fermionic systems using complex structures. In section \ref{S:3} we then establish different connections between fermionic, Clifford and self-dual algebras. An explicit description of the $\mathbb Z_2$-topological invariant in terms of complex structures is also given. We conclude with a discussion of the results and provide an outlook on future work. \section{Fermionic systems and orthogonal complex structures} \label{S:2} In the standard formalism of second quantization, a fermionic system is described in terms of creation and annihilation operators obeying the canonical anticommutation relations (CAR) \begin{equation} \label{eq:1} \{a_{i},a_{j}^{\dagger}\} = \delta_{ij}, \;\; \{a_{i}^{\dagger},a_{j}^{\dagger}\} =\{a_{i},a_{j}\}=0, \end{equation} in accordance with the Pauli exclusion principle. These operators act on a fermionic Fock space $\pazocal{F}=\bigwedge^{\raisebox{-0.4ex}{\scriptsize $\bullet$}}\pazocal{H}$, where $(\pazocal{H},\langle \cdot,\cdot \rangle)$ denotes the Hilbert space of 1-particle states.
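Since the constructions below are representation-theoretic, it may help to keep a concrete finite-dimensional realization in mind. The following Python/NumPy sketch (an illustration only; the Jordan-Wigner construction is one convenient choice and is not used elsewhere in this paper) builds matrices satisfying the CAR (\ref{eq:1}) and verifies the relations numerically.
\begin{verbatim}
# Sketch: a concrete matrix representation of the CAR (eq. 1) via the
# Jordan-Wigner construction on n sites; the operators act on the
# 2^n-dimensional space (C^2)^{(x)n}.
import numpy as np

def jordan_wigner(n):
    """Return the annihilation operators a_1, ..., a_n as 2^n x 2^n matrices."""
    sz = np.diag([1.0, -1.0]).astype(complex)       # Jordan-Wigner string factor
    sm = np.array([[0, 1], [0, 0]], dtype=complex)  # |0><1|, lowers occupation
    I2 = np.eye(2, dtype=complex)
    ops = []
    for i in range(n):
        factors = [sz] * i + [sm] + [I2] * (n - i - 1)
        a = factors[0]
        for f in factors[1:]:
            a = np.kron(a, f)
        ops.append(a)
    return ops

def anticomm(x, y):
    return x @ y + y @ x

n = 3
a = jordan_wigner(n)
for i in range(n):
    for j in range(n):
        assert np.allclose(anticomm(a[i], a[j].conj().T),
                           np.eye(2**n) * (i == j))   # {a_i, a_j^dag} = delta_ij
        assert np.allclose(anticomm(a[i], a[j]), 0)   # {a_i, a_j} = 0
print("CAR relations verified for n =", n)
\end{verbatim}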
When the number of degrees of freedom of the system is infinite (that is, in the quantum field theory limit), the CAR algebra (\ref{eq:1}) can be realized in many \emph{inequivalent} ways. For this reason, it is convenient to distinguish the algebraic properties defining the CAR relations from any particular realization through a Hilbert space representation. This is similar to what happens in group theory: A given abstract group will have in general many inequivalent representations. Given a 1-particle Hilbert space $(\pazocal H,\langle\cdot,\cdot\rangle)$, the corresponding CAR algebra, denoted $\pazocal A_{\mbox{{\tiny CAR}}}(\pazocal H,\langle\cdot,\cdot\rangle)$, is defined~\cite{Bratteli1997} as an algebra with generators of the form $a(u)$ and $a^\dagger(u)$ (for $u\in \pazocal H$), subject to the relations \begin{eqnarray} &\lbrace a(u),a^\dagger(v)\rbrace = \langle u,v\rangle,&\nonumber\\ &\lbrace a(u),a(v)\rbrace=\lbrace a^\dagger(u),a^\dagger(v)\rbrace=0.&\label{CAR} \end{eqnarray} The diagonalization of a Hamiltonian that is quadratic in the fermionic operators is usually performed by means of a Bogoliubov transformation. In the context of abstract CAR algebras, Araki~\cite{Araki1968} developed a formalism that makes use of a ``doubled'' Hilbert space in order to diagonalize quadratic Hamiltonians describing systems with an infinite number of degrees of freedom. This formalism has several points in common with the Nambu approach~\cite{Fujikawa2016}. Nevertheless, there is an alternative approach that makes use of orthogonal complex structures and which, as shown below, provides a setting that is ideal for discussions about Majorana fermions. Before considering the physics of that approach, let us turn to a brief account of the main mathematical facts that we need. For details we refer to \cite{Plymen1994,Gracia-Bond'ia2001}. Consider a real vector space $V$ with $\mathrm{dim}_{\mathbb{R}}(V)=2n$. Let $g(\cdot,\cdot)$ be a positive, symmetric bilinear form on $V$. An \textit{orthogonal complex structure} is a real linear operator $J:V\rightarrow V$ such that $J^{2}=-1$ and $g(Ju,Jv)=g(u,v)$ for all $u,v\in V$. The idea is to use $J$ in order to construct a complexification of $V$, which is different from the ordinary one, $V^\mathbb{C}=V\otimes_{\mathbb{R}}\mathbb{C}$. We define, then, the \textit{complex vector space} $V_{J}$ as the one obtained from $V$, but with multiplication by (complex) scalars given by $(\alpha+i\beta)v\coloneqq\alpha v+\beta Jv$ for $v\in V$ and $\alpha,\beta\in\mathbb{R}$. In other words, multiplication by $i$ on $V_J$ is given by $iv\coloneqq Jv$. If we define an inner product in $V_{J}$ by \begin{equation}\label{eq:<,>_J} \braket{u,v}_{J}\coloneqq g(u,v)+ig(Ju,v), \end{equation} we obtain a complex Hilbert space $(V_{J},\braket{\cdot,\cdot}_{J})$ with complex dimension $n$. The last claim arises from the fact that if we have an orthonormal basis $\{u_{1},\cdots,u_{n}\}$ for $(V_{J},\braket{\cdot,\cdot}_{J})$ then $\{u_{1},Ju_{1},\dots,u_{n},Ju_{n}\}$ is an orthonormal basis for $(V,g(\cdot,\cdot))$. The (complex) Clifford algebra $\mathbb{C}\ell(V)$~\cite{Plymen1994} acts naturally on the exterior algebra $\bigwedge^{\raisebox{-0.4ex}{\scriptsize $\bullet$}} V^{\mathbb C}$, but the resulting representation is not irreducible~\cite{Gracia-Bond'ia2001}. If instead we consider an orthogonal complex structure $J$ on $V$, we obtain an irreducible representation on the Fock space $\pazocal{F}_J(V):=\bigwedge^{\raisebox{-0.4ex}{\scriptsize $\bullet$}} V_J$. 
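The linear-algebraic ingredients just introduced are easy to experiment with numerically. The following sketch (illustrative only; it uses the Euclidean form and the standard complex structure on $\mathbb R^{2n}$, not tied to any particular Hamiltonian) verifies that $J^2=-1$, that $J$ is $g$-orthogonal, and that (\ref{eq:<,>_J}) behaves as a Hermitian inner product with respect to the $J$-induced scalar multiplication.
\begin{verbatim}
# Sketch: the standard orthogonal complex structure on V = R^{2n} and
# the inner product <u,v>_J = g(u,v) + i g(Ju,v) of eq. (<,>_J).
import numpy as np

n = 3
Z, I = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, -I], [I, Z]])
g = np.eye(2 * n)                            # Euclidean bilinear form

assert np.allclose(J @ J, -np.eye(2 * n))    # J^2 = -1
assert np.allclose(J.T @ g @ J, g)           # g(Ju, Jv) = g(u, v)

def ip_J(u, v):
    return u @ g @ v + 1j * ((J @ u) @ g @ v)

rng = np.random.default_rng(1)
u, v = rng.normal(size=2 * n), rng.normal(size=2 * n)
# Multiplication by i on V_J is v -> Jv; the form is antilinear in the
# first slot and linear in the second:
assert np.isclose(ip_J(J @ u, v), -1j * ip_J(u, v))
assert np.isclose(ip_J(u, J @ v), 1j * ip_J(u, v))
assert ip_J(u, u).real > 0                   # positivity
print("orthogonal complex structure and <.,.>_J verified")
\end{verbatim}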
As Clifford and CAR algebras are closely related~\cite{Araki1968,Plymen1994,Gracia-Bond'ia2001,Gracia-Bondia1994}, we also obtain an irreducible representation of the CAR algebra $\pazocal A_{\mbox{{\tiny CAR}}}(V_J,\langle\cdot,\cdot\rangle_J)$. In this representation, creation and annihilation operators $a_{J}(v)$ and $a_{J}^{\dagger}(v)$ acting on $\pazocal{F}_{J}(V)$ are given by: \begin{align} &a_{J}^{\dagger}(v)(u_{1}\wedge\dots\wedge u_{k})=v\wedge u_{1}\wedge\dots\wedge u_{k},\nonumber\\ &a_{J}(v)(u_{1}\wedge\dots\wedge u_{k})=\sum_{j=1}^{k}(-1)^{j-1}\braket{v,u_{j}}_{J}u_{1}\wedge\dots\wedge \hat{u}_j\wedge\dots\wedge u_{k},\label{eq:aj} \end{align} for $v\in V$ and $u_1,\ldots,u_k\in V_J$. These operators satisfy the CAR relations $\{a_{J}(u),a_{J}^{\dagger}(v)\}=\langle u,v\rangle_{J}$, $\{a^{\dagger}_{J}(u),a^{\dagger}_{J}(v)\}=\{a_{J}(u),a_{J}(v)\}=0$, and give rise to a representation of the (real) Clifford algebra $C\ell(V)$ on $\pazocal{F}_{J}(V)$. Explicitly, the Clifford generators are given by \begin{equation}\label{eq:Clifford-generators} \pi_{J}(v)\coloneqq a_{J}^{\dagger}(v)+a_{J}(v). \end{equation} The vacuum in $\pazocal{F}_{J}(V)$ can also be characterized as a gaussian state $\omega_J$ with a two-point function given by \begin{equation} \langle 0_J|a_J(u)a_J^\dagger(v)|0_J\rangle\equiv\omega_J(a_J(u)a_J^\dagger(v))= \langle u, v\rangle_J. \end{equation} In fact, this representation can be obtained from $\omega_J$ (regarded as an algebraic state, cf.~\cite{Balachandran2013}) through the Gelfand-Naimark-Segal (GNS) construction. A most important fact is the possibility (when $\mathrm{dim}V=\infty$) of having inequivalent representations. A very useful characterization of the vacuum state $|0_J\rangle$ in the $J$-induced representation is obtained if we extend all operators from $V$ to $V^\mathbb C$, as explained below. The Clifford generators (\ref{eq:Clifford-generators}), as well as the creation/annihilation operators (\ref{eq:aj}) can be regarded as real linear maps from $V$ to $\mathcal L (\pazocal F_J(V))$, the space of bounded linear operators on Fock space. These can be extended to complex linear maps \begin{equation}\label{eq:complex-linear-extension} \tilde\pi_{J},\; \tilde a_{J}, \;\tilde a^\dagger_{J}: V^{\mathbb C} \longrightarrow \mathcal L (\pazocal F_J(V)), \end{equation} which means that, for $\lambda \in \mathbb C$ and $w$ in $V^\mathbb C$, we have $\tilde a_{J}(\lambda w)=\lambda \tilde a_{J}(w)$, as well as $\tilde a^{\dagger}_{J}(\lambda w)=\lambda \tilde a^{\dagger}_{J}(w)$. But also notice that, since the complex structure on $\pazocal F_J(V)$ is determined by $J$, we also have (for $v$ in $V$): \begin{equation} a^\dagger_J(Jv) = i a^\dagger_J(v),\;\; a_J(Jv) = -i a_J(v). \end{equation} The minus sign can be traced back to equations (\ref{eq:<,>_J}) and (\ref{eq:aj}). Summarizing, we have the following important identities ($v\in V$): \begin{eqnarray} \tilde a^\dagger_J (i v) = i a^\dagger_J (v) \equiv J a^\dagger_J (v),\label{eq:identity1}\\ \tilde a_J (i v) = i a_J (v) \equiv J a_J (v),\label{eq:identity2}\\ a_J^\dagger (J v) = i a^\dagger_J (v) \equiv J a_J^\dagger (v),\label{eq:identity3}\\ a_J (J v) = -i a_J (v) \equiv -J a_J (v).\label{eq:identity4} \end{eqnarray} The complex structure $J$ can also be linearly extended to an operator acting on $V^\mathbb C$. Given that $J^2=-1$, it is only on this space that we can consider the eigenvalue problem for $J$. 
In fact, the space $V^\mathbb C$ turns out to be the direct sum of the eigenspaces for $J$, with eigenvalues $\pm i$. More concretely, consider the projection operators in $V^\mathbb C$ \begin{equation} P_{\pm J} := \frac{1}{2}(1\mp i J), \end{equation} and define $W_{\pm J} := P_{\pm J} (V^\mathbb C)$. Denote with $g_{\mathbb{C}}$ the complex linear extension of $g$ to $V^\mathbb{C}$. Then, using $\langle\braket{w,z}\rangle\coloneqq 2 g_{\mathbb C}(\overline{w},z)$ as the inner product for $V^\mathbb{C}$, we obtain $W_{-J}=W_{J}^{\perp}$, so that \begin{equation} \label{eq:W+W_bar} V^{\mathbb C}= W_J \oplus W_J^\perp. \end{equation} Furthermore, restricting $\langle\braket{\cdot,\cdot}\rangle$ to $W_J$, we obtain the following unitary isomorphism~\cite{Gracia-Bond'ia2001}: \begin{equation} \label{eq:V_J=W_J} (V_{J},\braket{\cdot,\cdot}_{J})\cong(W_{J},\langle\braket{\cdot,\cdot}\rangle). \end{equation} Let now $u$ be a vector in $W_J^\perp$. Then we have $P_{-J}(u)= u$ or, equivalently, $u= v + i Jv$, for some $v$ in $V$. Using the identities (\ref{eq:identity1})-(\ref{eq:identity4}) we then obtain $\tilde a^\dagger_J(u)=0$. This, in turn, implies $\tilde \pi_J(u) |0_J\rangle =0.$ It can be shown~\cite{Gracia-Bond'ia2001} that the opposite is also true. The resulting ``vacuum condition'' \begin{equation} \label{eq:vacuum-condition} \tilde \pi_J(u) |0_J\rangle =0 \; \Longleftrightarrow\; u\in W_J^\perp, \end{equation} thus provides a full characterization of the vacuum $|0_J\rangle$. There are several aspects of the above construction that are quite relevant from a physical point of view. The first observation is that for every choice of complex structure $J$ we obtain a vacuum $|0_J\rangle$. What we really mean by this is that every choice of $J$ gives rise to an irreducible representation of the CAR algebra on a Fock space $\pazocal F_J(V)$. Suppose we start with a given, fixed complex structure $J_0$ and now want to find the spectrum of a quadratic Hamiltonian which is given to us in terms of the corresponding creation and annihilation operators $a^{(\dagger)}_{J_0}(v)$. The standard way of solving this problem consists in considering linear combinations of such creation and annihilation operators, in such a way that (i) the CAR relations are preserved and (ii) the Hamiltonian becomes diagonal in the new basis. Given any element $h$ of the orthogonal group $O(V,g)$, we obtain a new orthogonal complex structure $J_h:= h J_0 h^{-1}$. Moreover, by the universal property of Clifford algebras~\cite{Plymen1994}, such an $h$ induces an automorphism of the Clifford algebra, which is nothing but a Bogoliubov transformation. As the new complex structure is again orthogonal, condition (i) is automatically satisfied. Since the action of $O(V,g)$ on the space $\pazocal J$ of orthogonal complex structures is transitive, condition (ii) is accomplished once we have found a suitable orthogonal transformation. The vacuum condition (\ref{eq:vacuum-condition}) is a condition, fulfilled by the extended Clifford generators, in terms of an additional, auxiliary space $W_J^\perp$. But notice that the \emph{whole structure} of the CAR algebra depends only on the Hilbert space $V_J$, which is unitarily equivalent to $W_J$ and has been obtained from a triple $(V, g, J)$. This is behind the apparent ``Hilbert space redundancy'' so often found in the literature. In the following section we show that there is actually no redundancy whatsoever. The role played by $J$ in the definition of the topological $\mathbb Z_2$-index will also be discussed.
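In finite dimensions, the decomposition (\ref{eq:W+W_bar}) and the characterization of $W_J^\perp$ entering the vacuum condition (\ref{eq:vacuum-condition}) are easy to check numerically. The following sketch (illustrative only, for the standard complex structure on $\mathbb R^{2n}$) verifies that $P_{\pm J}$ are complementary projections, that $J$ acts as $\pm i$ on their ranges, and that vectors of the form $v+iJv$ lie in $W_J^\perp$.
\begin{verbatim}
# Sketch: the projections P_{+J} = (1 - iJ)/2 and P_{-J} = (1 + iJ)/2
# on V^C and the splitting V^C = W_J + W_J^perp of eq. (W+W_bar).
import numpy as np

n = 3
Z, I2 = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, -I2], [I2, Z]]).astype(complex)
I = np.eye(2 * n, dtype=complex)
Pp, Pm = 0.5 * (I - 1j * J), 0.5 * (I + 1j * J)

assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, 0) and np.allclose(Pp + Pm, I)
assert np.allclose(J @ Pp, 1j * Pp)     # J acts as +i on W_J
assert np.allclose(J @ Pm, -1j * Pm)    # J acts as -i on W_J^perp

# Vectors u = v + iJv lie in W_J^perp, as in the vacuum condition:
rng = np.random.default_rng(2)
v = rng.normal(size=2 * n)
u = v + 1j * (J @ v)
assert np.allclose(Pm @ u, u)
print("V^C = W_J + W_J^perp verified")
\end{verbatim}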
\section{The self-dual formalism and Hilbert space redundancy} \label{S:3} \subsection{Quadratic Hamiltonians and self-dual algebra} Consider a fermionic Hamiltonian of the form \begin{equation} \label{eq:quadratic-H} H = \sum_{i,j=1}^N \left[ a_i^\dagger A_{ij}a_j +\frac{1}{2}\left( a_i^\dagger B_{ij}a_j^\dagger - a_i \overline{B}_{ij} a_j \right)\right], \end{equation} where $A$ is a Hermitian matrix, and $B$ a skew-symmetric one. If $B=0$, the spectrum of $H$ can be readily found upon diagonalizing $A$. If $U$ is a unitary matrix such that $U A U^{\dagger}$ is diagonal, then the Bogoliubov transformation determined by $c_i=\sum_{j=1}^N U_{ij} a_j$ brings $H$ to a diagonal form. Thus, in this case, diagonalization of the fermionic quadratic form $H=a^\dagger A\, a$ is tantamount to diagonalization of $A$. If $B\neq 0$, we can still regard $H$ as a quadratic form, but only if we ``mix'' creation and annihilation operators. Using a self-explanatory notation, it is easy to check that (up to a constant term) the Hamiltonian can be written in the form \begin{equation} \label{eq:q-form} H=\frac{1}{2}(a^\dagger, a)\left( \begin{array}{cc} A & B \\ -\overline{B} & -\overline{A} \\ \end{array} \right)\left( \begin{array}{c} a \\ a^\dagger \\ \end{array} \right). \end{equation} Araki's construction of the self-dual CAR algebra~\cite{Araki1968} has been devised as an efficient tool to diagonalize quadratic Hamiltonians like (\ref{eq:q-form}), especially in the case $N\rightarrow\infty$. For simplicity, here we will only consider the case of $N$ finite and merely remark that all the arguments presented remain valid in the quantum field theory limit. For $N<\infty$ fixed, let $V=\mathbb R^{2N}$ and let $g$ denote the standard Euclidean metric on $V$. Fix an orthonormal basis $\lbrace e_1,e_2,\ldots,e_{2N} \rbrace$ for $V$ and consider the orthogonal complex structure $J$ defined on basis vectors by $J e_k:= e_{N+k}$, $J e_{N+k}:=-e_k$ $(k=1,\ldots, N)$. The 1-particle Hilbert space corresponding to the Fock representation (\ref{eq:aj}) of the CAR algebra $\pazocal A_{\mbox{{\tiny CAR}}}(V_J,\langle\cdot,\cdot\rangle_J)$, which is just $(V_J,\langle\cdot,\cdot\rangle_J)$, will be denoted here with $(\pazocal{H},\braket{\cdot,\cdot})$. Using the convention \begin{equation} \label{eq:a_ks} a_k \equiv a_J(e_k),\;\; a_k^{\dagger}\equiv a_J^{\dagger}(e_k), \quad k=1,\ldots,N, \end{equation} for creation (annihilation) operators, we may recover the basis vectors $e_k$ from the vacuum $|0_J\rangle$ through \begin{equation} \label{eq:correspondence} e_k= a_k^\dagger |0_J\rangle, \quad k=1,\ldots,N. \end{equation} We will regard the Hamiltonian (\ref{eq:quadratic-H}) as being written in terms of the fermionic operators (\ref{eq:a_ks}). The idea behind the construction of the self-dual CAR algebra is that, since the ground state of $H$ will in general be different from $|0_J\rangle$, it is more convenient to treat both creation and annihilation operators in a symmetric way. In view of the correspondence (\ref{eq:correspondence}), one possibility is to consider the ``bras'' $\tilde e_k = \langle 0_J|a_k\in \pazocal H^*$, where $\lbrace \tilde e_k\rbrace_k$ is the basis dual to $\lbrace e_k\rbrace_k$. It is then natural to regard (\ref{eq:q-form}) as a quadratic form on $\pazocal H\oplus \pazocal H^*$.
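As an illustration of this doubled quadratic form, the following sketch (synthetic matrices; only the structure of (\ref{eq:q-form}) is used) assembles the $2N\times 2N$ matrix for a random admissible pair $(A,B)$ and checks that its spectrum comes in $\pm\epsilon$ pairs, which is the hallmark of the doubling.
\begin{verbatim}
# Sketch: the doubled matrix of eq. (q-form) for a random quadratic
# Hamiltonian; A (Hermitian) and B (skew-symmetric) are synthetic.
# Every quasi-particle energy appears twice, once with each sign:
# this is the "doubling" discussed in the text.
import numpy as np

rng = np.random.default_rng(3)
N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = 0.5 * (A + A.conj().T)                  # Hermitian
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B = 0.5 * (B - B.T)                         # skew-symmetric

H_bdg = np.block([[A,          B],
                  [-B.conj(), -A.conj()]])
assert np.allclose(H_bdg, H_bdg.conj().T)   # Hermitian quadratic form

evals = np.linalg.eigvalsh(H_bdg)
assert np.allclose(np.sort(evals), np.sort(-evals))
print("doubled spectrum (pairs +e, -e):", np.round(evals, 3))
\end{verbatim}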
But in order not to lose track of the algebra of observables, the self-dual formalism proposes to start anew by considering a complex Hilbert space $(\pazocal K,\langle\cdot ,\cdot\rangle_{\pazocal K})$ together with a conjugation $\Gamma$, that is, an anti-unitary operator in $\pazocal K$ such that $\Gamma^2=1$. The \textit{self-dual} CAR algebra $\pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal K,\Gamma)$ is then defined as the $*$-algebra generated by the identity and operators $B^{\dagger}(u)$, $B(v)$, subject to the following relations ($u, v\in\pazocal K;\; \lambda,\mu \in \mathbb C$): \begin{align} &\{B(u),B^\dagger(v)\}=\braket{u,v}_{\pazocal K},\label{eq:sd1}\\ &B^{\dagger}(\lambda u + \mu v)= \lambda B^{\dagger}(u)+ \mu B^{\dagger}(v),\label{eq:sd2}\\ &B^{\dagger}(u)=B(\Gamma u).\label{eq:sd3} \end{align} Although these relations resemble the usual fermionic anti-commutation relations, they are different. In fact, $\Gamma^2=1$, together with (\ref{eq:sd1}) and (\ref{eq:sd3}), implies $\lbrace B(u), B(v) \rbrace = \langle u,\Gamma v \rangle_{\pazocal K}$. A first important remark is that it is possible to construct an isomorphism between the self-dual CAR algebra $\pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal K,\Gamma)$ and the CAR algebra associated to a certain subspace of $\pazocal K$. In fact, if $\pazocal K$ is either even or infinite dimensional, it is possible~\cite{Araki1968} to construct a projection operator $E$ with the following property: \begin{equation} \label{eq:Gamma-E} \Gamma\, E\, \Gamma=1-E. \end{equation} A projection operator satisfying (\ref{eq:Gamma-E}) is called a \emph{basis projection}~\cite{Araki1968}. It follows that there is an algebra isomorphism $\pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal K,\Gamma)\cong \pazocal A_{\mbox{{\tiny CAR}}} (E\pazocal K)$, defined on generators by \begin{eqnarray} \label{eq:psi} \psi:\pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal K,\Gamma) & \longrightarrow & \pazocal A_{\mbox{{\tiny CAR}}} (E\pazocal K)\nonumber \\ B(u)\;\;\;\; & \longmapsto & a(Eu)+a^{\dagger}(\Gamma(1-E)u).\qquad\label{eq:Isom-Psi} \end{eqnarray} Using the fact that $E$ is an orthogonal projection and that $\Gamma$ is anti-unitary, one checks that $\psi$ preserves the algebraic structure: \begin{eqnarray} \lbrace \psi(B(u)), \psi (B(v)) \rbrace &=& \langle u,\Gamma v \rangle_{\pazocal K}\nonumber\\ &=& \psi (\lbrace B(u), B(v) \rbrace).\quad \end{eqnarray} In spite of the fact that $E\pazocal K$ is a (proper) subspace of $\pazocal K$, the algebraic structures they give rise to are completely equivalent. Notice also how the self-dual algebra allows us to codify both creation and annihilation operators using a single type of operator. According to the isomorphism (\ref{eq:Isom-Psi}), any operator of the form $B(Eu)$ can be regarded as a fermionic \emph{annihilation} operator, whereas (because of (\ref{eq:Gamma-E})) $B(\Gamma E u)$ can be regarded as a fermionic \emph{creation} operator. \subsection{Lifting of the Hilbert space redundancy and $\mathbb Z_2$-index} Of greater relevance for our purposes is the inverse map to (\ref{eq:psi}), which, to a given CAR algebra $\pazocal A_{\mbox{{\tiny CAR}}} (\pazocal H)$, associates a self-dual CAR algebra $\pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal K,\Gamma)$. In order for these two algebras to be related, the Hilbert space $\pazocal H$ should, somehow, be the image of a projection on a bigger Hilbert space $\pazocal K$.
But since in this case our initial data is given only by $\pazocal H$, the construction of $\pazocal K$ must involve some enlargement of $\pazocal H$. As we now show, this corresponds to the usual ``doubling'' of the Hilbert space. Consider, then, a Hilbert space $(\pazocal H,\langle \cdot,\cdot\rangle)$. Define now $\pazocal K:=\pazocal{H}\oplus\pazocal{H}$ and introduce the projection operator $E$ on $\pazocal K$ given by $E(x,y)=(x,0)$, so that $\pazocal H= E \pazocal K$. Choose a complex conjugation $T$ on $\pazocal{H}$ and define for $(u,v)\in \pazocal K$ a conjugation $\Gamma(u,v)\coloneqq(Tv,Tu)$. Then we obtain an isomorphism $\pazocal A_{\mbox{{\tiny CAR}}} (\pazocal H)\cong \pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal H \oplus\pazocal H,\Gamma)$, defined on generators by \begin{eqnarray} \label{eq:phi} \phi: \pazocal A_{\mbox{{\tiny CAR}}} (\pazocal H) & \longrightarrow & \pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal H \oplus\pazocal H,\Gamma) \nonumber\\ a(u)& \longmapsto & B^{\dagger}((0,Tu))\equiv B((u,0)).\quad \end{eqnarray} This map is an algebra isomorphism which, in the case $\pazocal K=\pazocal H\oplus \pazocal H$, is the inverse of the map $\psi$ defined in (\ref{eq:psi}). The isomorphism (\ref{eq:psi}) (along with its inverse (\ref{eq:phi})) shows in an explicit way why the doubling of the Hilbert space appearing in the diagonalization of quadratic Hamiltonians like (\ref{eq:q-form}) does not introduce any kind of redundancy. In fact, whereas it is natural to consider the space $\pazocal H \oplus\pazocal H$ as a convenient mathematical step in bringing the Hamiltonian to diagonal form, the corresponding algebra of observables is the self-dual algebra $\pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal H \oplus\pazocal H,\Gamma)$, which is completely equivalent to the original fermionic algebra $\pazocal A_{\mbox{{\tiny CAR}}} (\pazocal H)$. As the latter only contains information about $\pazocal H$, this proves our statement. We now establish a connection between the self-dual formalism and the one presented in section \ref{S:2}. For this purpose consider the \textit{conjugate Hilbert space} $\overline{\pazocal{H}}$. It has the same underlying set as $\pazocal{H}$ but with scalar multiplication given by $\lambda\cdot x\coloneqq \overline{\lambda}x$ for $x\in\overline{\pazocal{H}}$ and $\lambda\in\mathbb{C}$. We also have $\braket{x,y}_{\overline{\pazocal{H}}}\coloneqq{\overline{\braket{x,y}}}=\braket{y,x}$. Set now $\pazocal K\coloneqq\pazocal{H}\oplus\overline{\pazocal{H}}$ with $x\in \pazocal K$ written as $x=(x_{1},x_{2})$. The inner product is defined for $x,y\in \pazocal K$ by $\braket{x,y}_{\pazocal K}\coloneqq\braket{x_{1},y_{1}}+\braket{x_{2},y_{2}}_{\overline{\pazocal{H}}}$. Consider also the projection $Px\coloneqq (x_{1},0)$ and the complex conjugation \begin{equation} \label{eq:complex-conjugation} \Gamma(x)\coloneqq(x_{2},x_{1}). \end{equation} This is indeed a conjugation, since $\Gamma(\lambda x)=\overline{\lambda}\,\Gamma (x)$ and $\Gamma^2=1$. The operator $P$ is a basis projection on $\pazocal K$ with respect to this conjugation. Consider now the \textit{real subspace} $\mathrm{Re}_{\Gamma}\pazocal K\coloneqq\{x\in \pazocal K:\Gamma (x)=x\}$. An element $x\in\mathrm{Re}_{\Gamma}\pazocal K$ must be of the form $x=(x_{1},x_{1})$. It follows that for $x,y\in\mathrm{Re}_{\Gamma}\pazocal K$ we have $\braket{x,y}_{\pazocal K}=2\mathrm {Re}\braket{x_{1},y_{1}}$.
Therefore, the generators $\pi_{P}(x)\coloneqq a^{\dagger}(Px)+a( P\Gamma x)$ ($x\in \mathrm{Re}_{\Gamma}\pazocal K $) match exactly the Clifford generators (\ref{eq:Clifford-generators}). Furthermore, for arbitrary $x\in \pazocal K$, we readily check that $\pi_{P}(x)=\psi(B^{\dagger}(x))$, with $\psi$ defined as in (\ref{eq:psi}), with $E=P$. If we now put $V=\mathrm{Re}_{\Gamma}\pazocal K$, $J=i(2P-1)$ and $g(u,v)= \mathrm {Re}\braket{u,v}$, we obtain an isomorphism $(V_{J},\braket{\cdot,\cdot}_{J})\cong (\pazocal H, \braket{\cdot,\cdot})$. On the other hand, starting from a triple $(V,g,J)$ as in section \ref{S:2}, and making use of the fact that $W_{-J}\cong \overline{W}_J$, we obtain, for $\pazocal K = W_J\oplus \overline{W}_J$ and $\Gamma$ as above, an isomorphism $\mathrm{Re}_\Gamma \pazocal K \cong V$. We can summarize our discussion as follows: Let $(V,g,J)$ be as in section \ref{S:2}. Let $\pazocal H$ be the complex Hilbert space $V_J$, with scalar product given by (\ref{eq:<,>_J}). Endow $\pazocal H\oplus \overline{\pazocal H}$ with the complex conjugation (\ref{eq:complex-conjugation}). Then we have the following equivalences: \begin{equation} \label{eq:equiv} \mathbb{C}\ell(V) \cong \pazocal A_{\mbox{{\tiny CAR}}} (\pazocal H) \cong \pazocal A^{\mathrm{ sd}}_{\mbox{\tiny CAR}}(\pazocal H \oplus\overline{\pazocal H},\Gamma). \end{equation} The last equivalence in (\ref{eq:equiv}) not only confirms that there is no redundancy in the description of the system, but also provides a direct link between (i) the Bogoliubov transformation that diagonalizes $H$, (ii) the corresponding orthogonal complex structure, and (iii) the topological $\mathbb Z_2$-index. In fact, let now $h\in O(V,g)$ be such that the Hamiltonian (\ref{eq:q-form}) becomes diagonal in the fermionic operators \begin{equation}\label{eq:Bogoliubov} c (v) = a(p_h v) +a^\dagger (q_h v), \end{equation} where \begin{eqnarray} p_h&=&\frac{1}{2}(h-JhJ),\label{eq:p_h}\\ q_h&=&\frac{1}{2}(h+JhJ), \label{eq:q_h} \end{eqnarray} are the linear/antilinear transformations giving rise to the corresponding Bogoliubov transformation. Then, the ground state of $H$ is characterized by the vacuum condition (\ref{eq:vacuum-condition}) corresponding to the complex structure $J_h\coloneqq hJh^{-1}$. Furthermore, the map~\cite{Gracia-Bond'ia2001} \begin{eqnarray}\label{eq:index} \mathrm{index}: \pazocal J &\longrightarrow & \mathbb Z_2\nonumber\\ J_h & \longmapsto & (-1)^{\frac{1}{2}\mathrm{dim}\ker (J +J_h)} \end{eqnarray} gives exactly the topological $\mathbb Z_2$-index (Pfaffian invariant). In the next section we illustrate this assertion with explicit examples. \section{Examples} \label{sec:examples} \subsection{Two-site chain}\label{example1} Let $V=\mathbb R^4$ with $g_{\mbox{\tiny E}}(\cdot,\cdot)$ the standard Euclidean metric. For $e_1,\ldots,e_4$ the standard basis vectors, introduce the following complex structure: \begin{equation} J=\left( \begin{array}{cccc} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right) \end{equation} Notice that we have $e_3 = Je_1$ and $e_4=Je_2$. Consider now the following Hamiltonian (two-site Kitaev chain): \begin{equation} H=t (a_1^\dagger a_2 +a_2^\dagger a_1) +\Delta (a_1^\dagger a_2^\dagger-a_1a_2)-2\mu (a_1^\dagger a_1 + a_2^\dagger a_2).
\end{equation} Introducing parameters $\alpha=\sqrt{\Delta^2 +4\mu^2}$, $\beta_{\pm}=\sqrt{( \alpha\pm \Delta)/(2\alpha)}$ and $\sigma = \mathrm{sgn}(\alpha-t)$, one readily checks that the Bogoliubov transformation that diagonalizes $H$ is induced by the orthogonal transformation \begin{equation} h= \left( \begin{array}{cc} \Phi & 0 \\ 0 & \Psi \\ \end{array} \right), \end{equation} where \begin{equation} \Phi=\left( \begin{array}{cc} \beta_+ & \beta_- \\ -\beta_- & \beta_+ \\ \end{array} \right),\qquad \Psi=\left( \begin{array}{cc} \sigma \beta_- & \sigma \beta_+ \\ -\beta_+ & \beta_- \\ \end{array} \right). \end{equation} For the real maps $p_h,q_h:V\rightarrow V$ (cf. (\ref{eq:p_h}) and (\ref{eq:q_h})), expressed in block form, we find: \begin{equation} p_h =\left( \begin{array}{cc} g & 0 \\ 0 & g \\ \end{array} \right), \quad q_h =\left( \begin{array}{cc} f & 0 \\ 0 & -f \\ \end{array} \right), \quad \end{equation} where $g=(1/2)(\Phi+\Psi)$ and $f=(1/2)(\Phi-\Psi)$ (cf. \cite{Reyes-Lega2016}). For the orthogonal complex structure we obtain \begin{equation} J_h= hJh^\intercal=\frac{1}{\sqrt{\Delta^2 +4\mu^2}}\left( \begin{array}{cccc} 0 & 0 & -2\sigma\mu & \Delta \\ 0 & 0 & -\sigma \Delta & -2\mu \\ 2\sigma\mu & \sigma \Delta & 0 & 0 \\ -\Delta & 2\mu & 0 & 0 \\ \end{array} \right). \end{equation} Finally, the $\mathbb Z_2$-index is given by \begin{equation}\label{eq:theindex} \mathrm{index}(h) := (-1)^{\frac{1}{2}\mathrm{dim}\ker (J +J_h)} = \det h=\sigma. \end{equation} The index defines two regions on the $t$-$\mu$ plane according to the value of $\sigma$. The boundary between these two regions is determined by the condition $\alpha=t$, which can also be written as \begin{equation} \Delta^2 = t^2 -4\mu^2. \end{equation} The result is displayed in figure \ref{fig:subfig4}, where we have taken $\Delta = 1$. The shaded region corresponds to $\sigma=-1$ and the unshaded to $\sigma=1$. \subsection{$N$-site Kitaev chain } Finally, we turn our attention to Majorana fermions. A fermionic operator $f$ can be regarded as a superposition of two Majorana fermions, obtained from a splitting into real and imaginary parts: $f=\gamma^{(A)}+i\gamma^{(B)}$, with $\gamma^{(A)}$ and $\gamma^{(B)}$ self-adjoint. We can describe Majorana fermions in terms of orthogonal complex structures as follows. Starting with a triple $(V,g,J)$ as above, we describe fermions as elements of $\pazocal A_{\mbox{{\tiny CAR}}}(V_J,\langle\cdot,\cdot\rangle_J)$. We want to split the generators as $a_J(v) = \gamma^{(A)}(v) +i\gamma^{(B)}(v)$. Taking into account that $a_J(v)$ depends anti-linearly on $v$ (see equation (\ref{eq:identity4})), we obtain the following characterization of the Majorana operators in terms of the Clifford algebra generators: \begin{equation} \gamma^{(A)}(v) = \frac{1}{2}\pi_J(v), \quad \gamma^{(B)}(v) = \frac{1}{2}\pi_J(Jv). \end{equation} We see that the orthogonal complex structure allows us to identify the Majorana modes. Consider the generalization of example \ref{example1} to the case of a Kitaev chain with $N$ sites: \begin{equation}\label{eq:Kitaevchain} H=\sum_{i=1}^N t (a_i^\dagger a_{i+1} +a_{i+1}^\dagger a_i) +\Delta (a_i^\dagger a_{i+1}^\dagger-a_ia_{i+1})-2\mu a_i^\dagger a_i. \end{equation} The matrices $\Phi$ and $\Psi$ will now be $N\times N$ matrices. For open boundary conditions, the exact solution involves the solution of a transcendental equation~\cite{Reyes-Lega2016}.
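The index computation itself is elementary to implement numerically. The following Python/NumPy sketch (illustrative only) rebuilds $h=\mathrm{diag}(\Phi,\Psi)$ for the two-site chain and evaluates the index formula (\ref{eq:index}) directly, recovering $\mathrm{index}(h)=\sigma$ as in (\ref{eq:theindex}).
\begin{verbatim}
# Sketch: numerical evaluation of the index formula (eq. (index)) for
# the two-site Kitaev chain, rebuilding h = diag(Phi, Psi) from the
# expressions above and checking index(h) = sigma, eq. (theindex).
import numpy as np

def z2_index(t, mu, Delta=1.0, tol=1e-9):
    alpha = np.sqrt(Delta**2 + 4 * mu**2)
    bp = np.sqrt((alpha + Delta) / (2 * alpha))
    bm = np.sqrt((alpha - Delta) / (2 * alpha))
    s = np.sign(alpha - t)                  # sigma (assume alpha != t)
    Phi = np.array([[bp, bm], [-bm, bp]])
    Psi = np.array([[s * bm, s * bp], [-bp, bm]])
    Z = np.zeros((2, 2))
    h = np.block([[Phi, Z], [Z, Psi]])
    J = np.block([[Z, -np.eye(2)], [np.eye(2), Z]])
    Jh = h @ J @ h.T
    k = int(np.sum(np.abs(np.linalg.eigvals(J + Jh)) < tol))  # dim ker(J+J_h)
    return (-1) ** (k // 2), s

for t, mu in [(0.5, 0.0), (2.0, 0.0), (2.0, 1.2), (1.5, 0.3)]:
    idx, s = z2_index(t, mu)
    assert idx == s                          # index(h) = det(h) = sigma
    print(f"t = {t}, mu = {mu}: Z_2-index = {idx}")
\end{verbatim}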
The Majorana edge modes can nevertheless still be obtained numerically, and are given by \begin{equation} \gamma_{\mbox{\tiny edge}}^{(A)} = \sum_{i=1}^N\Phi_{1i}\gamma_i^{(A)},\quad \gamma_{\mbox{\tiny edge}}^{(B)} = \sum_{i=1}^N\Psi_{1i}\gamma_i^{(B)}. \end{equation} \begin{figure} \centering \subfloat[$N=4,r=0$]{ \includegraphics[width=0.33\textwidth]{fig1} \label{fig:subfig1}} \subfloat[$N=4,r=0.1$]{ \includegraphics[width=0.33\textwidth]{fig2} \label{fig:subfig2}} \subfloat[$N=4,r=0.2$]{ \includegraphics[width=0.33\textwidth]{fig3} \label{fig:subfig3}} \subfloat[$N=2,r=0$ ]{ \includegraphics[width=0.33\textwidth]{fig4} \label{fig:subfig4}} \subfloat[$N=6,r=0$ ]{ \includegraphics[width=0.33\textwidth]{fig5} \label{fig:subfig5}} \subfloat[$N=8,r=0$ ]{ \includegraphics[width=0.33\textwidth]{fig6} \label{fig:subfig6}} \caption{The shaded regions correspond to nontrivial values of the $\mathbb Z_2$-index associated to the Kitaev chain Hamiltonian eq.~(\ref{eq:Kitaevchain}), with $\Delta=1$. $N$ denotes the number of sites in the chain and $r$ is a parameter that allows us to interpolate between open boundary conditions ($r=0$) and periodic ones ($r=1$).} \label{fig} \end{figure} The $\mathbb Z_2$-invariant can still be computed using formula (\ref{eq:index}). This holds even in the limit $N\rightarrow \infty$, because $J_h-J$ is a Hilbert-Schmidt operator. Notice that in this approach the $\mathbb Z_2$-invariant is computed in the same way (as the index of a Fredholm operator) irrespective of whether we are using periodic or open boundary conditions. However, it is important to remark that the correspondence between the non-trivial value of the invariant and the actual appearance of edge modes only occurs if we use periodic boundary conditions. This is illustrated in figure \ref{fig}, where the shaded regions in the $\mu$-$t$ plane correspond to the value $-1$ for the $\mathbb Z_2$-index. Introducing a real parameter $r\in [0,1]$ such that $r=0$ corresponds to open boundary conditions and $r=1$ to periodic boundary conditions, we see from figures \ref{fig:subfig1}, \ref{fig:subfig2} and \ref{fig:subfig3} that the region where the $\mathbb Z_2$-index takes the value $-1$ will only coincide with the region in parameter space supporting edge states when periodic boundary conditions ($r=1$) are being used. Otherwise, the index provides information on the parity of the ground state. It is interesting, nevertheless, that for open boundary conditions the index formula (\ref{eq:index}) can still be applied, as depicted in figures \ref{fig:subfig4}, \ref{fig:subfig5} and \ref{fig:subfig6}. \section{Discussion}\label{S:5} The approach presented here, which is based on the use of orthogonal complex structures for the description of fermionic systems, has allowed us to give a unified account of several aspects of relevance in the context of topological phases of matter. We have shown that, once correctly incorporated, an orthogonal complex structure produces a reduction in the dimension of the Hilbert space, accounting for the ``redundancy'' usually discussed in the context of Hamiltonians of the BdG form. This has also been highlighted by constructing an explicit isomorphism between the fermionic CAR algebra and a variation of Araki's self-dual CAR algebra. Furthermore, we have shown in an explicit way how the $\mathbb Z_2$-invariant is related to the complex structure.
In particular, our formalism allows for a direct computation of the $\mathbb Z_2$-invariant independently of the boundary conditions chosen. This might be of help in the search for a rigorous proof of the bulk-boundary correspondence in cases where it still remains at the level of a conjecture~\cite{Bourne2016}. An additional advantage of our approach is that it allows for the inclusion of disorder, so that the stability of the topological invariant can be explicitly tested under more realistic conditions. We hope to report on these issues in future work. \section*{Acknowledgments} The authors would like to thank A.P. Balachandran for fruitful discussions during different stages of this work. Financial support from the Faculty of Science and the Vice Rectorate for Research of Universidad de los Andes, through project No. P13.700022.005, is gratefully acknowledged.
{ "timestamp": "2018-02-23T02:03:29", "yymm": "1712", "arxiv_id": "1712.05069", "language": "en", "url": "https://arxiv.org/abs/1712.05069" }
\section{Applications}\label{sec:ApplicationsGlueing} \subsection{Rational tangle detection} \begin{observation}\label{obs:AlexGradingOfCFTdLoops} The Alexander grading on $\CFTd(T,M)$ implies that for a tangle $T$, each loop in $L_T$ lies in the kernel of \[\pi_1(\partial M\smallsetminus \partial T)\rightarrow\pi_1(M\smallsetminus \nu(T))\rightarrow H_1(M\smallsetminus \nu(T)),\] where the first map is induced by the inclusion and the second is the Abelianization map. \end{observation} \begin{theorem}\label{thm:CFTdDetectsRatTan} A 4-ended tangle \(T\) in the 3-ball is rational iff \(L_T\) is a single embedded loop with the unique 1-dimensional local system. \end{theorem} \begin{proof} The only-if direction is simply a calculation, see Example~\ref{exa:CFTdRatTang}. Conversely, suppose $L_T$ is a single loop which corresponds to an embedded loop on the 4-punctured sphere. It divides the sphere into two disc components, each of which has at least one puncture, since the loop is not nullhomotopic. By Observation~\ref{obs:AlexGradingOfCFTdLoops}, there are exactly two punctures in each disc, so $L_T$ agrees with $L_{T'}$ for some rational tangle $T'$. Then also $L_{\mr(T)}$ agrees with $L_{\mr(T')}$. Let $L=L(T_1,T_2)$ be the link obtained by pairing $T_2=T$ with $T_1=\mr(T)$. If we pair $T'$ with $\mr(T')$, we obtain the 2-component unlink $\bigcirc\amalg\bigcirc$. So by Theorem~\ref{thm:CFTdGlueingAsMorphism}, \[ \HFL(L)\cong\LagrangianFH(L_T,L_T)\cong\LagrangianFH(L_{T'},L_{T'})\cong\HFL(\bigcirc\amalg\bigcirc). \] We now apply the fact that link Floer homology detects unlinks \cite{OSHFLThurston}, so $L$ is the 2-component unlink. The following lemma finishes the proof. \end{proof} \begin{lemma}\label{lem:RatTanDet} Let \(T_1\) and \(T_2\) be two 4-ended tangles without closed components that glue together to the 2-component unlink. Then either \(T_1\) or \(T_2\) is a rational tangle. \end{lemma} \begin{proof} Let $S$ be the 4-punctured sphere along which we glue $T_1$ and $T_2$. Let $U$ be the sphere that separates the two unknot components and assume that $S$ and $U$ intersect transversely in a disjoint union of circles. We now proceed by induction on the number of circles in $S\cap U$. First of all, this intersection is non-empty, since $U$ is separating. So we can always find a curve $\gamma$ that bounds a disc $D$ in $U$ which does not contain any other curves in $U\cap S$. If $\gamma$ bounds a disc $D'$ in $S$, $D\cup D'$ bounds a 3-ball, which we can use as a homotopy for $U$ to remove $\gamma$ (along with any other components of $S\cap U$ in $D'$), so we are done by the induction hypothesis. If $\gamma$ does not bound a disc in $S$, it separates two punctures from the other two. So $D$ separates the two strands in $T_1$ or $T_2$. These strands must be unknotted, since the connected sum of two knots is the unknot iff both knots are trivial. Thus either $T_1$ or $T_2$ is rational.
\end{proof} \begin{wrapfigure}{r}{0.3333\textwidth} \centering \psset{unit=0.35}\vspace*{-25pt} \begin{subfigure}[b]{0.116\textwidth}\centering $n\left\{\raisebox{-1.1cm}{ \begin{pspicture}(-1.1,-3.3)(1.1,3.3) \psset{linewidth=\stringwidth} \psecurve(1,5)(-1,3)(1,1)(-1,-1) \psecurve[linecolor=white,linewidth=\stringwhite](-1,5)(1,3)(-1,1)(1,-1) \psecurve(-1,5)(1,3)(-1,1)(1,-1) \psline[linestyle=dotted,dotsep=0.4](0,-0.6)(0,0.6) \psecurve(-1,-5)(1,-3)(-1,-1)(1,1) \psecurve[linecolor=white,linewidth=\stringwhite](1,-5)(-1,-3)(1,-1)(-1,1) \psecurve(1,-5)(-1,-3)(1,-1)(-1,1) \psline{->}(1,3)(1.07,3.2) \psline{->}(-1,3)(-1.07,3.2) \end{pspicture}}\right.\quad $ \caption{$T_{n}$}\label{fig:OrientedSkeinRelationTn} \end{subfigure} \begin{subfigure}[b]{0.116\textwidth}\centering $n\left\{\raisebox{-1.1cm}{ \begin{pspicture}(-1.1,-3.3)(1.1,3.3) \psset{linewidth=\stringwidth} \psecurve(-1,5)(1,3)(-1,1)(1,-1) \psecurve[linecolor=white,linewidth=\stringwhite](1,5)(-1,3)(1,1)(-1,-1) \psecurve(1,5)(-1,3)(1,1)(-1,-1) \psline[linestyle=dotted,dotsep=0.4](0,-0.6)(0,0.6) \psecurve(1,-5)(-1,-3)(1,-1)(-1,1) \psecurve[linecolor=white,linewidth=\stringwhite](-1,-5)(1,-3)(-1,-1)(1,1) \psecurve(-1,-5)(1,-3)(-1,-1)(1,1) \psline{->}(1,3)(1.07,3.2) \psline{->}(-1,3)(-1.07,3.2) \end{pspicture}}\right.\quad $ \caption{$T_{-n}$}\label{fig:OrientedSkeinRelationTmn} \end{subfigure} \begin{subfigure}[b]{0.085\textwidth}\centering \begin{pspicture}(-1.1,-3.3)(1.1,3.3) \psset{linewidth=\stringwidth} \psecurve{<-}(-2,6)(-1,3.1)(-1,-3)(-2,-6) \psecurve{<-}(2,6)(1,3.1)(1,-3)(2,-6) \end{pspicture} \caption{$T_0$}\label{fig:OrientedSkeinRelationT0} \end{subfigure} \caption{Basic tangles.}\label{fig:OrientedSkeinRelationTangles}\vspace*{-25pt} \end{wrapfigure} \subsection{Skein exact sequences} We start with a slight generalisation of Ozsv\'{a}th and Szab\'{o}'s exact triangle \cite{OSHFK} which categorifies the oriented skein relation for the Alexander polynomial. However, we remind the reader that all gradings on link Floer homology should be regarded as relative, see Remark~\ref{rmk:RelativeGradings}, so the graded version of the following theorem is not quite as strong as Ozsv\'{a}th and Szab\'{o}'s result in the case $n=1$. \begin{figure}[b] \centering \psset{unit=0.3} \begin{subfigure}[b]{0.48\textwidth} {\normalsize \[\begin{tikzcd}[row sep=0.7cm, column sep=-0.5cm] \CFTd(T_{n}) \arrow{rr}{\varphi_n} & & \CFTd(T_{-n}) \arrow{dl} \\ & \CFTd(T_{0})\otimes V \arrow{lu} \end{tikzcd}\] } \caption{}\label{fig:nTwistSkeinRelationTangle} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} {\normalsize \[\begin{tikzcd}[row sep=0.7cm, column sep=-0.8cm] \HFL(L_{n})\otimes V^{l_{n}} \arrow{rr} & & \HFL(L_{-n})\otimes V^{l_{-n}} \arrow{dl} \\ & \HFL(L_{0})\otimes V^{l_0+1} \arrow{lu} \end{tikzcd}\] } \caption{}\label{fig:nTwistSkeinRelationLinks} \end{subfigure} \caption{The skein exact triangles from Theorem~\ref{thm:nTwistSkeinRelation}.}\label{fig:OrientedSkeinRelation} \end{figure} \begin{theorem}[($n$-twist skein exact triangle)]\label{thm:nTwistSkeinRelation} Let \(T_n\) be the positive \(n\)-twist tangle, \(T_{-n}\) the negative \(n\)-twist tangle and \(T_0\) the trivial tangle, see Figure~\ref{fig:OrientedSkeinRelationTangles}. Furthermore, let \(V\) be a 2-dimensional vector space supported in degrees \(\delta^0 t^{n}\) and \(\delta^0 t^{-n}\), where \(t\) is the colour of the two open strands. Then there is an exact triangle shown in Figure~\ref{fig:nTwistSkeinRelationTangle}. 
\(\varphi_n\) preserves the (univariate) Alexander grading and changes \(\delta\)- and homological gradings by \(+1\) and \(-1\), respectively; the other two maps preserve all three gradings. Moreover, given three links \(L_{n}\), \(L_{-n}\) and \(L_0\) in \(S^3\), which agree outside a closed 3-ball and in this closed 3-ball agree with the 4-ended tangles \(T_{n}\), \(T_{-n}\) and \(T_{0}\), respectively, then the above triangle together with the Glueing Theorem induces an exact triangle shown in Figure~\ref{fig:nTwistSkeinRelationLinks}, where for \(i\in\{n,-n,0\}\), \(l_i\) is either \(0\) or \(1\), depending on whether the two strands in \(T_i\) belong to different or the same components in \(L_i\), respectively. \end{theorem} \begin{remark}\label{rem:nTwistSkeinRelation} Similar results hold for other orientations; for $n$ even, also multivariate Alexander gradings are preserved. \end{remark} \begin{figure}[t] \centering \begin{subfigure}[b]{0.23\textwidth}\centering {\psset{unit=1} \begin{pspicture}(-1.6,-1.7)(1.6,1.7) \psecurve(1.2,-0.95)(0,-1.5)(-1.5,-1)(1.25,1.15)(-1.2,-1.05)(1.2,-0.95)(0,-1.5)(-1.5,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \psset{dotsize=5pt} \psdot[linecolor=red](-1,0.18) \psdot[linecolor=red](-1,-0.48) \psdot[linecolor=gold](0,1) \psdot[linecolor=blue](0.04,-1) \psdot[linecolor=darkgreen](1,0.65) \psdot[linecolor=darkgreen](1,-0.78) \end{pspicture} } \caption{$L_{T_2}$}\label{fig:OrientSkeinRatTangleII} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth}\centering {\psset{unit=1} \begin{pspicture}(-1.6,-1.7)(1.6,1.7) \psecurve(-1.2,-0.95)(0,-1.5)(1.5,-1)(-1.25,1.15)(1.2,-1.05)(-1.2,-0.95)(0,-1.5)(1.5,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \psset{dotsize=5pt} \psdot[linecolor=darkgreen](1,0.18) \psdot[linecolor=darkgreen](1,-0.48) \psdot[linecolor=gold](0,1) \psdot[linecolor=blue](-0.04,-1) \psdot[linecolor=red](-1,0.65) \psdot[linecolor=red](-1,-0.78) \end{pspicture} } \caption{$L_{T_{-2}}$}\label{fig:OrientSkeinRatTanglemII} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth}\centering {\psset{unit=1} \begin{pspicture}(-1.6,-1.7)(1.6,1.7) \psecurve(1.2,-0.95)(0,-1.5)(-1.5,-1)(1.2,1.05)(-1.2,0.95)(0,1.5)(1.5,1)(-1.2,-1.05)(1.2,-0.95)(0,-1.5)(-1.5,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \psset{dotsize=5pt} \psdot[linecolor=red](-1,0.78) \psdot[linecolor=red](-1,-0.31) \psdot[linecolor=red](-1,-0.54) \psdot[linecolor=gold](-0.04,1) \psdot[linecolor=blue](0.04,-1) \psdot[linecolor=darkgreen](1,0.54) \psdot[linecolor=darkgreen](1,0.31) 
\psdot[linecolor=darkgreen](1,-0.78) \end{pspicture} } \caption{$L_{T_3}$}\label{fig:OrientSkeinRatTangleIII} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth}\centering {\psset{unit=1} \begin{pspicture}(-1.6,-1.7)(1.6,1.7) \psecurve(-1.2,-0.95)(0,-1.5)(1.5,-1)(-1.2,1.05)(1.2,0.95)(0,1.5)(-1.5,1)(1.2,-1.05)(-1.2,-0.95)(0,-1.5)(1.5,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \psset{dotsize=5pt} \psdot[linecolor=darkgreen](1,0.78) \psdot[linecolor=darkgreen](1,-0.31) \psdot[linecolor=darkgreen](1,-0.54) \psdot[linecolor=gold](0.04,1) \psdot[linecolor=blue](-0.04,-1) \psdot[linecolor=red](-1,0.54) \psdot[linecolor=red](-1,0.31) \psdot[linecolor=red](-1,-0.78) \end{pspicture} } \caption{$L_{T_{-3}}$}\label{fig:OrientSkeinRatTanglemIII} \end{subfigure} \caption{Immersed curves of $T_n$ and $T_{-n}$ for $n=2$ and $n=3$.}\label{fig:OrientSkeinRatTangle} \end{figure} \begin{figure}[t] \centering $\begin{tikzcd}[row sep=0.3cm, column sep=1cm] & \delta^{-\frac{1}{2}}c^\Red{1-n} \arrow[pos=0.35]{rrrr}{1} \arrow[-, dotted,in=90,out=-90]{dddddddl} \arrow[-, dashed,in=90,out=-90]{dddd} & & & & \delta^{\frac{1}{2}}c^\Red{1-n} \arrow[-, dashed,in=90,out=-90]{dddd} \\ \delta^0 b^\Red{-n} \arrow[crossing over,pos=0.65]{rrrr}{p_{12}+q_{43}} \arrow[bend left=10,leftarrow]{ru}{p_3} \arrow[bend right=10,swap,pos=0.55]{ru}{p_{412}} \arrow[bend left=10,leftarrow]{dd}{q_{2}} \arrow[bend right=10,swap]{dd}{q_{143}} & & & & \delta^0 d^\Red{-n} \arrow[bend left=10,leftarrow,pos=0.2]{ru}{p_{123}} \arrow[bend right=10,swap]{ru}{p_{4}} \arrow[bend left=10,leftarrow]{dd}{q_{432}} \arrow[bend right=10,swap]{dd}{q_{1}} \\ & \phantom{\vdots} & & & & \phantom{\vdots} \\ \delta^{-\frac{1}{2}}a^\Red{1-n} \arrow[-, dashed,in=90,out=-90]{dddd} \arrow[crossing over,pos=0.65]{rrrr}{1} \arrow[-, dotted]{dr} & & & & \delta^{\frac{1}{2}}a^\Red{1-n} \arrow[-, dotted]{dr} \\ & \delta^{-\frac{1}{2}}c^\Red{n-1} \arrow[pos=0.35]{rrrr}{1} & & & & \delta^{\frac{1}{2}}c^\Red{n-1} \\ \phantom{\vdots} & & & & \phantom{\vdots} \\ & \delta^0 d^\Red{n} \arrow[bend left=10,leftarrow,pos=0.3]{ld}{p_1} \arrow[bend right=10,swap,pos=0.3]{ld}{p_{234}} \arrow[bend left=10,leftarrow]{uu}{q_{4}} \arrow[bend right=10,swap]{uu}{q_{321}} \arrow[pos=0.35]{rrrr}{p_{34}+q_{21}} & & & & \delta^0 b^\Red{n} \arrow[bend left=10,leftarrow]{ld}{p_{341}} \arrow[bend right=10,swap,pos=0.7]{ld}{p_{2}} \arrow[bend left=10,leftarrow]{uu}{q_{214}} \arrow[bend right=10,swap]{uu}{q_{3}} \\ \delta^{-\frac{1}{2}}a^\Red{n-1} \arrow[pos=0.65]{rrrr}{1} & & & & \delta^{\frac{1}{2}}a^\Red{n-1} \arrow[-, dashed,in=-90,out=90,crossing over]{uuuu} \arrow[-, dotted,in=-90,out=90]{uuuuuuur} \end{tikzcd}$ \caption{The morphism $\varphi_n:\CFTd(T_{n})\rightarrow\CFTd(T_{-n})$.}\label{fig:OrientSkeinBoundaryMap} \end{figure} \begin{proof} $T_n$ and $T_{-n}$ are rational tangles. As such, their immersed curve invariants are straightforward to compute from genus 0 Heegaard diagrams, since (as explained in Example~\ref{exa:CFTdRatTang}) the $\beta$-curves of such Heegaard diagrams \emph{are} the invariants. Figure~\ref{fig:OrientSkeinRatTangle} illustrates this for the cases $n=2$ and $n=3$. 
The peculiar modules $\CFTd(T_{n})$ and $\CFTd(T_{-n})$ for general $n$ are shown in Figure~\ref{fig:OrientSkeinBoundaryMap}, the former on the left, the latter on the right. If $n$ is odd, the dashed lines denote a sequence of alternating generators in sites $a$ and $c$, connected by pairs of morphisms, labelled alternatingly by $p_i$s and $q_i$s. For even $n$, the two components are connected by similar sequences along the dotted lines. The horizontal arrows in Figure~\ref{fig:OrientSkeinBoundaryMap} describe $\varphi_n$. By cancelling all identity components of the mapping cone of $\varphi_n$, we get two copies of \[\begin{tikzcd}[row sep=0.3cm, column sep=1cm] \nmathphantom{\CFTd(T_0)=}\CFTd(T_0)=\delta^0b^\Red{0} \arrow[bend left=10,leftarrow]{r}{p_{34}+q_{21}} \arrow[bend right=10,swap,pos=0.55]{r}{p_{12}+q_{43}} & \delta^0d^\Red{0}. \end{tikzcd}\] Thus, we can write $\CFTd(T_0)\otimes V$ as a cone of $\CFTd(T_n)$ and $\CFTd(T_{-n})$, which gives rise to the exact triangle of the required form. \end{proof} Next, we give a new proof of a theorem by Manolescu \cite[Theorem~1]{Manolescu}. Like Manolescu's triangle, ours does not preserve any gradings. Note that Manolescu uses slightly different conventions from ours, so the two triangles only look the same after reversing the direction of the three arrows. \begin{figure}[t] \centering \begin{subfigure}[b]{0.48\textwidth} {\normalsize \[\begin{tikzcd}[row sep=0.7cm, column sep=-0.5cm] \CFTd(\!\! \raisebox{-6pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \psline(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} } \!\!) \arrow{rr}{\varphi} & & \CFTd(\!\! \raisebox{-6pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \psline(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0,0.7071){0.5}{-135}{-45} \psarc(0,-0.7071){0.5}{45}{135} \end{pspicture} } \!\!) \arrow{dl} \\ & \CFTd(\!\! \raisebox{-6pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \pscircle*[linecolor=white](0,0){0.3} \psline(0.9,-0.9)(-0.9,0.9) \end{pspicture} } \!\!) \arrow{lu} \end{tikzcd}\] } \caption{}\label{fig:ResolutionExactTriangleTangle} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} {\normalsize \[\begin{tikzcd}[row sep=0.7cm, column sep=-1.1cm] \HFL(L_{0})\otimes V^{l_{0}} \arrow{rr} & & \HFL(L_{1})\otimes V^{l_{1}} \arrow{dl} \\ & \HFL(L_{X})\otimes V^{l_X} \arrow{lu} \end{tikzcd}\] } \caption{}\label{fig:ResolutionExactTriangleLinks} \end{subfigure} \caption{The skein exact triangles from Theorem~\ref{thm:ResolutionExactTriangle}.}\label{fig:ResolutionExactTriangle} \end{figure} \begin{theorem}[(resolution skein exact triangle)]\label{thm:ResolutionExactTriangle} There is an exact triangle as shown in Figure~\ref{fig:ResolutionExactTriangleTangle}. 
Moreover, given three links \(L_{0}\), \(L_{1}\) and \(L_X\) in \(S^3\) which agree outside a closed 3-ball and in this closed 3-ball agree with the 4-ended tangles \(T_0=\!\!\raisebox{-5pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \psline(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} }\!\!,\) \(T_1=\!\!\raisebox{-5pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \psline(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0,0.7071){0.5}{-135}{-45} \psarc(0,-0.7071){0.5}{45}{135} \end{pspicture} }\)\!\! and \(T_X= \!\!\raisebox{-5pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \pscircle*[linecolor=white](0,0){0.3} \psline(0.9,-0.9)(-0.9,0.9) \end{pspicture} }\!\!,\) respectively, then the above triangle, together with the Glueing Theorem induces the exact triangle from Figure~\ref{fig:ResolutionExactTriangleLinks}, where for \(i\in\{0,1,X\}\), \(l_i\) is either \(0\) or \(1\), depending on whether the two strands in \(T_i\) belong to different or the same components in \(L_i\), respectively. \end{theorem} \begin{proof} The map $\varphi$ is given by (the horizontal arrows in) the following diagram on the left: \[\begin{tikzcd}[row sep=2cm, column sep=2.8cm] b \arrow[bend left=10,leftarrow]{d}{p_{34}+q_{21}} \arrow[bend right=10,swap]{d}{p_{12}+q_{43}} \arrow{r}{q_3} \arrow[pos=0.3]{rd}{p_2} & c \arrow[bend left=10,leftarrow]{d}{p_{41}+q_{32}} \arrow[bend right=10,swap]{d}{p_{23}+q_{14}} \\ d \arrow{r}{q_1} \arrow[pos=0.3,swap]{ru}{p_4} & a \end{tikzcd} \quad\cong\quad \begin{tikzcd}[row sep=2cm, column sep=2.8cm] b \arrow[bend left=10,leftarrow,pos=0.8]{dr}{p_{341}} \arrow[bend right=10,swap,pos=0.2]{dr}{p_{2}} \arrow[bend left=10,leftarrow]{r}{q_{214}} \arrow[bend right=10,swap]{r}{q_{3}} & c \\ d \arrow[bend right=10,swap,leftarrow,pos=0.8]{ru}{p_{123}} \arrow[bend left=10,pos=0.2]{ru}{p_{4}} \arrow[bend right=10,swap,leftarrow]{r}{q_{432}} \arrow[bend left=10]{r}{q_{1}} & a \end{tikzcd}\] Using the Clean-Up Lemma (\ref{lem:AbstractCleanUp}) twice with $h=(a\oplus c\xrightarrow{(q_2~p_3)}b)$ and $h=(a\oplus c\xrightarrow{(p_1~q_4)}d)$, respectively, we see that it is chain isomorphic to the diagram on the right, which is $\CFTd(\!\!\raisebox{-5pt}{ \psset{unit=0.2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline(-0.9,-0.9)(0.9,0.9) \pscircle*[linecolor=white](0,0){0.3} \psline(0.9,-0.9)(-0.9,0.9) \end{pspicture} }\!\!)$. Now apply the same arguments as in the proof of Theorem~\ref{thm:nTwistSkeinRelation}. 
\end{proof} \begin{figure}[t] \begin{subfigure}[b]{0.19\textwidth}\centering \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psrotate(0,0){180}{ \psecurve(-1.2,0.6)(-1.2,1.2)(1.2,0.6)(1.2,1.2)(-1.2,0.6)(-1.2,1.2)(1.2,0.6) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) % \psset{dotsize=5pt} \psdot[linecolor=darkgreen](-1,0.507) \psdot[linecolor=blue](0.155,1) \psdot[linecolor=blue](-0.155,1) \psdot[linecolor=red](1,0.507) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} } \end{pspicture} \caption{}\label{fig:figure8loop1} \end{subjigure} \caption{}\label{fig:figure8loop1} \end{subfigure} \begin{subfigure}[b]{0.29\textwidth} { \[ \begin{tikzcd}[row sep=1.5cm, column sep=1.2cm,crossing over clearance=3pt] \delta^{0}a^{\red 0} \arrow[bend left=10,leftarrow,pos=0.5]{d}{q_{143}} \arrow[bend right=10,swap,pos=0.5]{d}{q_2} \arrow[bend left=10,leftarrow,pos=0.5]{r}{p_2} \arrow[bend right=10,swap,pos=0.5]{r}{p_{341}} & \delta^{-\frac{1}{2}}b^{\red +1} \\ \delta^{\frac{1}{2}}b^{\red -1} & \delta^{0}c^{\red 0} \arrow[bend left=10,leftarrow,pos=0.5]{u}{q_{3}} \arrow[bend right=10,swap,pos=0.5]{u}{q_{214}} \arrow[bend left=10,leftarrow,pos=0.5]{l}{p_{412}} \arrow[bend right=10,swap,pos=0.5]{l}{p_3} \end{tikzcd} \] } \caption{}\label{fig:figure8loop1pm} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth}\centering \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psecurve(-1.2,0.6)(-1.2,1.2)(1.2,0.6)(1.2,1.2)(-1.2,0.6)(-1.2,1.2)(1.2,0.6) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) % \psset{dotsize=5pt} \psdot[linecolor=red](-1,0.507) \psdot[linecolor=gold](0.155,1) \psdot[linecolor=gold](-0.155,1) \psdot[linecolor=darkgreen](1,0.507) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} \caption{}\label{fig:figure8loop2} \end{subfigure} \begin{subfigure}[b]{0.29\textwidth} { \[ \begin{tikzcd}[row sep=1.5cm, column sep=1.2cm,crossing over clearance=3pt] \delta^{0}a^{\red 0} \arrow[bend left=10,leftarrow,pos=0.5]{d}{q_1} \arrow[bend right=10,swap,pos=0.5]{d}{q_{432}} \arrow[bend left=10,leftarrow,pos=0.5]{r}{p_{234}} \arrow[bend right=10,swap,pos=0.5]{r}{p_1} & \delta^{\frac{1}{2}}d^{\red +1} \\ \delta^{-\frac{1}{2}}d^{\red -1} & \delta^{0}c^{\red 0} \arrow[bend left=10,leftarrow,pos=0.5]{u}{q_{321}} \arrow[bend right=10,swap,pos=0.5]{u}{q_4} \arrow[bend left=10,leftarrow,pos=0.5]{l}{p_4} \arrow[bend right=10,swap,pos=0.5]{l}{p_{123}} \end{tikzcd} \] } \caption{}\label{fig:figure8loop2pm} \end{subfigure} \caption{Two figure-8 loops (a) and (c) and their peculiar modules (b) and (d).}\label{fig:figure8loops} \end{figure} \begin{proposition}\label{prop:singularcrossing} There are two morphisms \[ \delta^{-\frac{1}{2}}t^{\pm1}\CFTd\left(\raisebox{-5pt}{ \psset{unit=0.3} \begin{pspicture}(-0.91,-0.91)(0.91,0.91) \psline{->}(-0.9,-0.9)(0.9,0.9) \psline{->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} }\right)\rightarrow \CFTd\left(\raisebox{-5pt}{ \psset{unit=0.3} \begin{pspicture}(-0.91,-0.91)(0.91,0.91)
\psline{->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.3} \psline{->}(-0.9,-0.9)(0.9,0.9) \end{pspicture} }\right) \] whose mapping cones are homotopic to the peculiar modules represented by the ``figure-8'' loops shown in Figure~\ref{fig:figure8loops}. Since these loops are invariant under taking the mirror, they agree with the mapping cones of maps from the negative crossing to the trivial tangle. \end{proposition} \begin{figure}[t] \begin{subfigure}[b]{0.48\textwidth} { \[ \begin{tikzcd}[row sep=0.5cm, column sep=0.22cm,crossing over clearance=3pt] & \delta^{-\frac{1}{2}}d^{\red +1} \arrow[out=0,in=180]{rrrr}{1} &&&& \delta^{\frac{1}{2}}d^{\red +1} \arrow[bend left=10,leftarrow,pos=0.5]{dd}{q_4} \arrow[bend right=10,swap,pos=0.6]{dd}{q_{321}} \\ &&&& \delta^{0}a^{\red 0} \arrow[crossing over, bend left=10,leftarrow,pos=0.35]{ur}{p_{234}} \arrow[crossing over, bend right=10,swap,pos=0.4]{ur}{p_1} \\ &&&&& \delta^{0}c^{\red 0} \\ \delta^{-\frac{1}{2}}b^{\red +1} \arrow[out=30,in=180,pos=0.3]{rrrruu}{p_2} \arrow[swap,out=20,in=180,pos=0.5]{rrrrru}{q_3} \arrow[bend left=10,leftarrow,pos=0.5]{uuur}{p_{34}+q_{21}} \arrow[bend right=10,swap,pos=0.85]{uuur}{p_{12}+q_{43}} &&&& \delta^{\frac{1}{2}}b^{\red -1} \arrow[bend left=10,leftarrow,pos=0.65]{ur}{p_3} \arrow[bend right=10,swap,pos=0.65]{ur}{p_{412}} \arrow[crossing over, bend left=10,leftarrow,pos=0.55]{uu}{q_2} \arrow[crossing over, bend right=10,swap,pos=0.75]{uu}{q_{143}} \end{tikzcd} \] } \caption{} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} { \[ \begin{tikzcd}[row sep=0.5cm, column sep=0.22cm,crossing over clearance=3pt] & \delta^{-\frac{1}{2}}d^{\red -1} \arrow[out=-20,in=180,pos=0.2]{rrrrdd}{p_4} \arrow[crossing over, out=-30,in=180,swap,pos=0.3]{rrrd}{q_1} &&&& \delta^{\frac{1}{2}}d^{\red +1} \arrow[bend left=10,leftarrow,pos=0.5]{dd}{q_4} \arrow[bend right=10,swap,pos=0.6]{dd}{q_{321}} \\ &&&& \delta^{0}a^{\red 0} \arrow[crossing over, bend left=10,leftarrow,pos=0.35]{ur}{p_{234}} \arrow[crossing over, bend right=10,swap,pos=0.4]{ur}{p_1} \\ &&&&& \delta^{0}c^{\red 0} \\ \delta^{-\frac{1}{2}}b^{\red -1} \arrow[out=0,in=180,swap]{rrrr}{1} \arrow[bend left=10,leftarrow,pos=0.5]{uuur}{p_{34}+q_{21}} \arrow[bend right=10,swap,pos=0.5]{uuur}{p_{12}+q_{43}} &&&& \delta^{\frac{1}{2}}b^{\red -1} \arrow[bend left=10,leftarrow,pos=0.65]{ur}{p_3} \arrow[bend right=10,swap,pos=0.65]{ur}{p_{412}} \arrow[crossing over, bend left=10,leftarrow,pos=0.55]{uu}{q_2} \arrow[crossing over, bend right=10,swap,pos=0.75]{uu}{q_{143}} \end{tikzcd} \] } \caption{} \end{subfigure} \caption{The two maps from Proposition~\ref{prop:singularcrossing}.}\label{fig:singularcrossing} \end{figure} \begin{proof} The two maps between the peculiar invariants of the two tangles are shown in Figure~\ref{fig:singularcrossing}. After cancelling the identity arrows in both mapping cones, we obtain the two peculiar modules shown in Figure~\ref{fig:figure8loops}. Also, reversing all arrows, swapping $p_i$ and $q_i$, and reversing the Alexander grading leaves both of them invariant. Doing this to the mapping cone gives us maps between the mirrors of the two tangles, but in the opposite direction.
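Here, cancelling an identity arrow is ordinary Gaussian elimination, which we recall as a sketch in the sign-free setting over $\mathbb{F}_2$: if the differential of a complex has a component $x\xrightarrow{1}y$, then deleting the generators $x$ and $y$ and adding, for every pair of components $u\xrightarrow{\alpha}y$ and $x\xrightarrow{\beta}v$ with $u\neq x$ and $v\neq y$, a new component
\[
u\xrightarrow{\;\beta\alpha\;}v
\]
(the new label being the product of the two old labels) yields a chain homotopy equivalent complex. In each of the two mapping cones in Figure~\ref{fig:singularcrossing}, the two generators joined by the arrow labelled $1$ disappear, leaving the four generators of the corresponding figure-8 loop.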
\end{proof} \begin{figure}[t]\centering { $ \begin{tikzcd}[row sep=0.5cm, column sep=0.45cm,crossing over clearance=3pt] & b^{\red 2-n} \arrow[bend left=10,leftarrow,pos=0.35]{dd}{q_{214}} \arrow[bend right=10,swap,pos=0.75]{dd}{q_3} &&&& b^{\red 4-n} \arrow[bend left=10,leftarrow,pos=0.35]{dd}{q_{214}} \arrow[bend right=10,swap,pos=0.75]{dd}{q_3} && \!\!\!\cdots\!\!\! && b^{\red n} \arrow[bend left=10,leftarrow,pos=0.65]{dd}{q_{214}} \arrow[bend right=10,swap,pos=0.75]{dd}{q_3} &&&& d^{\red n} \\ a^{\red 1-n} \arrow[bend left=10,leftarrow]{ru}{p_2} \arrow[bend right=10,swap,pos=0.15]{ru}{p_{341}} \arrow[crossing over,in=180,out=0,swap,pos=0.1]{rrrrrd}{p_{41}} &&&& a^{\red 3-n} \arrow[bend left=10,leftarrow]{ru}{p_2} \arrow[bend right=10,swap,pos=0.15]{ru}{p_{341}} \arrow[dotted,crossing over,in=180,out=0,swap,pos=0.1]{rrrrrd}{p_{41}} && \!\!\!\cdots\!\!\! && a^{\red n-1} \arrow[bend left=10,leftarrow]{ru}{p_2} \arrow[bend right=10,swap,pos=0.15]{ru}{p_{341}} \arrow[crossing over,pos=0.9,in=180,out=0]{urrrrr}{p_1} \\ & c^{\red 1-n} \arrow[bend left=10,leftarrow]{ld}{p_{412}} \arrow[bend right=10,swap,pos=0.3]{ld}{p_3} \arrow[in=180,out=0,pos=0.03,swap]{rrru}{q_{14}} &&&& c^{\red 3-n} \arrow[bend left=10,leftarrow]{ld}{p_{412}} \arrow[bend right=10,swap,pos=0.3]{ld}{p_3} \arrow[dotted,in=180,out=0,pos=0.03,swap]{rrru}{q_{14}} && \!\!\!\cdots\!\!\! && c^{\red n-1} \arrow[bend left=10,leftarrow]{ld}{p_{412}} \arrow[bend right=10,swap,pos=0.4]{ld}{p_3} \arrow[in=-150,out=0,pos=0.6]{uurrrr}{q_4} \\ b^{\red -n} \arrow[crossing over,bend left=10,leftarrow,pos=0.5]{uu}{q_2} \arrow[crossing over,bend right=10,swap,pos=0.55]{uu}{q_{143}} &&&& b^{\red 2-n} \arrow[crossing over,bend left=10,leftarrow,pos=0.65]{uu}{q_2} \arrow[crossing over,bend right=10,swap,pos=0.65]{uu}{q_{143}} \arrow[swap,in=0,out=180,pos=0.8,leftarrow]{llluuu}{1} && \!\!\!\cdots\!\!\! && b^{\red n-2} \arrow[crossing over,bend left=10,leftarrow,pos=0.65]{uu}{q_2} \arrow[crossing over,bend right=10,swap,pos=0.65]{uu}{q_{143}} \arrow[swap,dotted,in=0,out=180,pos=0.8,leftarrow]{llluuu}{1} &&&& b^{\red n} \arrow[swap,in=0,out=180,pos=0.8,leftarrow]{llluuu}{1} \arrow[bend left=10,leftarrow,pos=0.2]{uuur}{p_{34}+q_{21}} \arrow[bend right=10,swap,pos=0.3]{uuur}{p_{12}+q_{43}} \end{tikzcd}$ } \caption{A complex of peculiar modules homotopic to $\CFTd(T_n)$.}\label{fig:ntwistascomplex} \end{figure} \begin{remark}\label{rem:singularcrossings} It is interesting to compare the ``figure-8'' curve to the local Heegaard diagram for a singular crossing $\!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline{->}(0.9,-0.9)(-0.9,0.9) \psline{->}(-0.9,-0.9)(0.9,0.9) \psdot(0,0) \end{pspicture} }\!\!$ in \cite{OSSHFS}, as the number of generators agrees with that for the second loop, up to an additional tensor factor. Also note that the proposition above gives rise to an exact triangle similar to the one in \cite{OSrescube}.
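Explicitly, this uses only the standard fact that a morphism, together with its target and its mapping cone, fits into an exact triangle: with the tangles from Proposition~\ref{prop:singularcrossing} and writing ``figure-8 loop'' for the peculiar module represented by the corresponding loop from Figure~\ref{fig:figure8loops}, we obtain the triangle
\[\begin{tikzcd}[row sep=0.7cm, column sep=-0.5cm] \delta^{-\frac{1}{2}}t^{\pm1}\CFTd(\!\! \raisebox{-5pt}{ \psset{unit=0.3} \begin{pspicture}(-0.91,-0.91)(0.91,0.91) \psline{->}(-0.9,-0.9)(0.9,0.9) \psline{->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} } \!\!) \arrow{rr} & & \CFTd(\!\! \raisebox{-5pt}{ \psset{unit=0.3} \begin{pspicture}(-0.91,-0.91)(0.91,0.91) \psline{->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.3} \psline{->}(-0.9,-0.9)(0.9,0.9) \end{pspicture} } \!\!) \arrow{dl} \\ & \text{(figure-8 loop)} \arrow{lu} \end{tikzcd}\]
where, as with the other triangles in this section, the connecting maps need not preserve any gradings.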
Moreover, we can write the $n$-twist tangle $T_{n}$ from Figure~\ref{fig:OrientedSkeinRelationTn}, with both strands oriented upwards, as a complex in the objects $\!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline{->}(0.9,-0.9)(-0.9,0.9) \psline{->}(-0.9,-0.9)(0.9,0.9) \psdot(0,0) \end{pspicture} }\!\!$ and $ \!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline{->}(-0.9,-0.9)(0.9,0.9) \psline{->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} }\!\! $. Indeed, cancelling the identity components in the complex from Figure~\ref{fig:ntwistascomplex} gives us a loop representing $T_{n}$. \pagebreak[3] Similarly, we can obtain a complex for $T_{-n}$ by applying the mirror operation. Furthermore, it is also easy to find such complexes for other orientations of $T_{-n}$ and $T_n$. Note that these complexes look very much like the ones we get in Bar-Natan's Khovanov homology of tangles \cite{BarNatanKhT}. We expect that every tangle can be written as a complex in the two objects $\!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline{->}(0.9,-0.9)(-0.9,0.9) \psline{->}(-0.9,-0.9)(0.9,0.9) \psdot(0,0) \end{pspicture} }\!\!$ and $ \!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline{->}(-0.9,-0.9)(0.9,0.9) \psline{->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} }\!\! $, or the two objects $ \!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psline{-}(-0.9,-0.9)(0.9,0.9) \psline{<->}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} \end{pspicture} }\!\! $ and $ \!\!\raisebox{-5pt}{ \psset{unit=0.2} \psset{arrowsize=1.5pt 2} \begin{pspicture}(-1.05,-1.45)(1.05,1.05) \psrotate(0,0){90}{ \psline{<->}(-0.9,-0.9)(0.9,0.9) \psline{-}(0.9,-0.9)(-0.9,0.9) \pscircle*[linecolor=white](0,0){0.5} \psarc(0.7071,0){0.5}{135}{225} \psarc(-0.7071,0){0.5}{-45}{45} } \end{pspicture} }\!\! $, depending on the orientation. In fact, we can iteratively use the type AA glueing structure from Theorem~\ref{thm:CFTdGeneralGlueing} together with the skein exact sequence from Theorem~\ref{thm:nTwistSkeinRelation} to locally modify tangles until we obtain a complex of peculiar modules of trivial and 1-crossing tangles, up to a large number of tensor factors from glueing. Then one only needs to get rid of these extra tensor factors. For example, in the case of the $(2,-3)$-pretzel tangle, this is indeed possible. It is also interesting to compare our ``figure-8'' curve to the curve that Hedden, Herald, and Kirk associate with a trivial tangle in \cite[Figure~10]{HHK13} in the context of instanton tangle Floer homology in the pillowcase.
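Schematically, the mirror operation just used acts on a loop by reversing every arrow, exchanging $p_i$s and $q_i$s, and reversing the Alexander grading; on two generators joined by a pair of arrows, this reads (a sketch, with $p_I$ and $q_J$ standing for arbitrary products of $p_i$s and $q_j$s):
\[ \begin{tikzcd}[column sep=1.4cm] x \arrow[bend left=10]{r}{p_I} & y \arrow[bend left=10]{l}{q_J} \end{tikzcd} \qquad\longmapsto\qquad \begin{tikzcd}[column sep=1.4cm] x \arrow[bend left=10,leftarrow]{r}{q_I} & y \arrow[bend left=10,leftarrow]{l}{p_J} \end{tikzcd} \]
Applying this operation term by term to the complex in Figure~\ref{fig:ntwistascomplex} is what produces the corresponding complex for $T_{-n}$.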
\end{remark} \begin{wrapfigure}{r}{0.3333\textwidth} \centering \psset{unit=0.3} \begin{pspicture}(-5.6,-5.6)(5.6,5.6) \psset{linewidth=\stringwidth} \pscircle[linestyle=dotted](0,0){5.5} \rput(0,0){$\left\{\textcolor{white}{\rule[-0.8cm]{1.9cm}{1.8cm}}\right\}$} \rput{-90}(4.45,0){$2m+1$} \rput{90}(-4.45,0){$2n$} \psecurve(-1,1)(-3,-1)(0,-2.4)(3,-1)(1,1) \pscustom{ \psline{<-}(-3,4.5)(-3,3) \psecurve(-1,5)(-3,3)(-1,1)(-3,-1) } \rput(-2,0.3){$\vdots$} \psline[linestyle=dotted,dotsep=0.4](-2,-0.6)(-2,0.6) \psecurve[linecolor=white,linewidth=\stringwhite](-1,-5)(-3,-3)(-1,-1)(-3,1) \pscustom{ \psline(-3,-4.5)(-3,-3) \psecurve(-1,-5)(-3,-3)(-1,-1)(-3,1) } \pscustom{ \psline{<-}(3,4.5)(3,3) \psecurve(1,5)(3,3)(1,1)(3,-1) } \psline[linestyle=dotted,dotsep=0.4](2,-0.6)(2,0.6) \psecurve[linecolor=white,linewidth=\stringwhite](1,-5)(3,-3)(1,-1)(3,1) \pscustom{ \psline(3,-4.5)(3,-3) \psecurve(1,-5)(3,-3)(1,-1)(3,1) } \psecurve[linecolor=white,linewidth=\stringwhite](-1,-1)(-3,1)(0,2.4)(3,1)(1,-1) \psecurve(-1,-1)(-3,1)(0,2.4)(3,1)(1,-1) \rput(-2,4){$t_1$} \rput(2,4){$t_2$} \rput(-2.2,-4){$t_1$} \rput(2.2,-4){$t_2$} \end{pspicture} \caption{The pretzel tangle from Theorem~\ref{thm:pretzeltangleCalc}.}\label{fig:pretzeltangle2nm2mp1}\vspace*{-20pt} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \subsection{\texorpdfstring{Peculiar modules of $(2n,-(2m+1))$-pretzel tangles}{Peculiar modules of (2n,-(2m+1))-pretzel tangles}}\label{subsec:pretzels} \begin{theorem}\label{thm:pretzeltangleCalc} The peculiar modules of \((2n,-(2m+1))\)-pretzel tangles for \(n,m>0\), oriented as in Figure~\ref{fig:pretzeltangle2nm2mp1}, are equal to those shown in Figure~\ref{fig:ResultPretzels}. \end{theorem} \begin{remark} For $n=m=1$, this calculation was already done in Example~\ref{exa:HFTdpretzeltangle} directly from the definition of peculiar modules. The general case uses the combinatorial algorithm for computing peculiar modules from Corollary~\ref{cor:PeculiarModulesFromNiceDiagrams}. The Mathematica package \cite{PQM.m} allows us to easily confirm Theorem~\ref{thm:pretzeltangleCalc} for fixed pairs $(n,m)$. The cases $(n,m)=(3,4)$ and $(5,2)$ are included as examples in the manual for~\cite{PQM.m}. \end{remark} \begin{theorem}\label{thm:GeneralMutationInvariance} Let \(T\) be a tangle in the closed 3-ball \(B^3\) and \(T'\) the tangle obtained by relabelling the sites such that \(a\) and \(c\), and \(b\) and \(d\) are interchanged. If \(T\) is oriented, orient \(T'\) such that the orientation at the first tangle ends of \(T\) and \(T'\) (and hence any others) agree, by either changing the orientation of all strands or leaving them all the same. Then, if \(\CFTd(T)\) and \(\CFTd(T')\) are (graded) chain homotopic, mutation of these tangles preserves (graded) link Floer homology. \end{theorem} \begin{proof} This follows directly from the definition of mutation and the Glueing Theorem. 
\end{proof} \begin{figure}[p] \centering \psset{unit=0.5} \begin{subfigure}[b]{0.99\textwidth}\centering \begin{pspicture}(-13,-5.5)(13,5.5) \rput(-12.5,-4){ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(7.5,-0.5)(8.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) } \psline(1,0)(0,1) \psline(3,0)(2.25,1) \psline(5,0)(4.25,1) \psline(7,0)(6.25,1) \psline(9,0)(8.25,1) \psline(2,0)(1.75,1) \psline(4,0)(3.75,1) \psline(6,0)(5.75,1) \psline(8,0)(7.75,1) \psline(0,2)(1,2) \psline(2,2)(2.75,2) \psline(3.25,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2,4)(3,4) \psline(4,4)(4.75,4) \psline(5.25,4)(4,5) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1,0)(2,0) \psline(3,0)(4,0) \psline(5,0)(6,0) \psline(7,0)(8,0) \psline(0,1)(0,2) \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(7.75,1)(6.75,2) \psline(8.25,1)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psline(9,0)(9.5,0) } \psdots[linecolor=red]% (0,1)% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (7.75,1)(8.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) \psdots[linecolor=darkgreen] (1,0)% (3,0)% (5,0)% (7,0)% (9,0)% (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% \psdots[linecolor=gold] (2,0)(4,0)(6,0)(8,0)% (0,2)(2,2)% (2,4)(4,4)% } \rput(6,-4){ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(-2.5,-0.5)(-1.5,0.5) \psframe*(-4.5,-0.5)(-3.5,0.5) \psframe*(-6.5,-0.5)(-5.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(-1.5,0.5)(-0.5,1.5) \psframe*(-3.5,0.5)(-2.5,1.5) \psframe*(-5.5,0.5)(-4.5,1.5) \psframe*(-7.5,0.5)(-6.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(-2.5,1.5)(-1.5,2.5) \psframe*(-4.5,1.5)(-3.5,2.5) \psframe*(-6.5,1.5)(-5.5,2.5) \psframe*(-1.5,2.5)(-0.5,3.5) \psframe*(-3.5,2.5)(-2.5,3.5) \psframe*(-5.5,2.5)(-4.5,3.5) \psframe*(-2.5,3.5)(-1.5,4.5) \psframe*(-4.5,3.5)(-3.5,4.5) \psframe*(-3.5,4.5)(-2.5,5.5) } \psline(0,0)(0,1) \psline(-1,0)(-1.75,1) \psline(-2,0)(-2.25,1) \psline(-3,0)(-3.75,1) \psline(-4,0)(-4.25,1) \psline(-5,0)(-5.75,1) \psline(-6,0)(-6.25,1) \psline(0,2)(-0.25,3) \psline(1,2)(0.25,3) \psline(-0.75,2)(-1.75,3) \psline(-1.25,2)(-2.25,3) \psline(-2.75,2)(-3.75,3) \psline(-3.25,2)(-4.25,3) \psline(-0.75,4)(-1.75,5) \psline(-1.25,4)(-2.25,5) \psline(-7,0)(-7.5,0.5) \psline(-4.75,2)(-5.25,2.5) \psline(-5.25,2)(-5.75,2.5) \psline(-2.75,4)(-3.25,4.5) \psline(-3.25,4)(-3.75,4.5) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1.5,2)(1,2) \psline(0,0)(-1,0) \psline(-2,0)(-3,0) \psline(-1.75,1)(-2.75,2) \psline(-2.25,1)(-3.25,2) \psline(-3.75,1)(-4.75,2) \psline(-4.25,1)(-5.25,2) \psline(-1.75,3)(-2.75,4) \psline(-2.25,3)(-3.25,4) \psline(0.25,3)(-0.75,4) \psline(-0.25,3)(-1.25,4) \psline(0,2)(-0.75,2) \psline(0,1)(-1.25,2) \psline(-4,0)(-5,0) \psline(-6,0)(-7,0) \psline(-1.75,5)(-2.25,5.5) \psline(-2.25,5)(-2.75,5.5) \psline(-3.75,3)(-4.25,3.5)
\psline(-4.25,3)(-4.75,3.5) \psline(-5.75,1)(-6.25,1.5) \psline(-6.25,1)(-6.75,1.5) } \psdots[linecolor=blue] (0,0)(0,2) \psdots[linecolor=red]% (0,1)% (-1.75,1)(-2.25,1)% (-3.75,1)(-4.25,1)% (-5.75,1)(-6.25,1)% (0.25,3)(-0.25,3)% (-1.75,3)(-2.25,3)% (-3.75,3)(-4.25,3)% (-1.75,5)(-2.25,5)% \psdots[linecolor=darkgreen] (1,2)% (-1,0)(-3,0)(-5,0)(-7,0)% (-0.75,2)(-1.25,2)% (-2.75,2)(-3.25,2)% (-4.75,2)(-5.25,2)% (-0.75,4)(-1.25,4)% (-2.75,4)(-3.25,4)% \psdots[linecolor=gold] (-2,0)(-4,0)(-6,0) } \rput(-6,4){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(-2.5,-0.5)(-1.5,0.5) \psframe*(-4.5,-0.5)(-3.5,0.5) \psframe*(-6.5,-0.5)(-5.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(-1.5,0.5)(-0.5,1.5) \psframe*(-3.5,0.5)(-2.5,1.5) \psframe*(-5.5,0.5)(-4.5,1.5) \psframe*(-7.5,0.5)(-6.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(-2.5,1.5)(-1.5,2.5) \psframe*(-4.5,1.5)(-3.5,2.5) \psframe*(-6.5,1.5)(-5.5,2.5) \psframe*(-1.5,2.5)(-0.5,3.5) \psframe*(-3.5,2.5)(-2.5,3.5) \psframe*(-5.5,2.5)(-4.5,3.5) \psframe*(-2.5,3.5)(-1.5,4.5) \psframe*(-4.5,3.5)(-3.5,4.5) \psframe*(-3.5,4.5)(-2.5,5.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(0,0)(0,1) \psline(-1,0)(-1.75,1) \psline(-2,0)(-2.25,1) \psline(-3,0)(-3.75,1) \psline(-4,0)(-4.25,1) \psline(-5,0)(-5.75,1) \psline(-6,0)(-6.25,1) \psline(0,2)(-0.25,3) \psline(1,2)(0.25,3) \psline(-0.75,2)(-1.75,3) \psline(-1.25,2)(-2.25,3) \psline(-2.75,2)(-3.75,3) \psline(-3.25,2)(-4.25,3) \psline(-0.75,4)(-1.75,5) \psline(-1.25,4)(-2.25,5) \psline(-7,0)(-7.5,0.5) \psline(-4.75,2)(-5.25,2.5) \psline(-5.25,2)(-5.75,2.5) \psline(-2.75,4)(-3.25,4.5) \psline(-3.25,4)(-3.75,4.5) } \psline(1.5,2)(1,2) \psline(0,0)(-1,0) \psline(-2,0)(-3,0) \psline(-1.75,1)(-2.75,2) \psline(-2.25,1)(-3.25,2) \psline(-3.75,1)(-4.75,2) \psline(-4.25,1)(-5.25,2) \psline(-1.75,3)(-2.75,4) \psline(-2.25,3)(-3.25,4) \psline(0.25,3)(-0.75,4) \psline(-0.25,3)(-1.25,4) \psline(0,2)(-0.75,2) \psline(0,1)(-1.25,2) \psline(-4,0)(-5,0) \psline(-6,0)(-7,0) \psline(-1.75,5)(-2.25,5.5) \psline(-2.25,5)(-2.75,5.5) \psline(-3.75,3)(-4.25,3.5) \psline(-4.25,3)(-4.75,3.5) \psline(-5.75,1)(-6.25,1.5) \psline(-6.25,1)(-6.75,1.5) \psdots[linecolor=gold] (0,0)(0,2) \psdots[linecolor=red]% (0,1)% (-1.75,1)(-2.25,1)% (-3.75,1)(-4.25,1)% (-5.75,1)(-6.25,1)% (0.25,3)(-0.25,3)% (-1.75,3)(-2.25,3)% (-3.75,3)(-4.25,3)% (-1.75,5)(-2.25,5)% \psdots[linecolor=darkgreen] (1,2)% (-1,0)(-3,0)(-5,0)(-7,0)% (-0.75,2)(-1.25,2)% (-2.75,2)(-3.25,2)% (-4.75,2)(-5.25,2)% (-0.75,4)(-1.25,4)% (-2.75,4)(-3.25,4)% \psdots[linecolor=blue] (-2,0)(-4,0)(-6,0) }} \rput(12.5,4){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(7.5,-0.5)(8.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1,0)(0,1) \psline(3,0)(2.25,1) \psline(5,0)(4.25,1) \psline(7,0)(6.25,1) \psline(9,0)(8.25,1) \psline(2,0)(1.75,1) \psline(4,0)(3.75,1) \psline(6,0)(5.75,1) \psline(8,0)(7.75,1) \psline(0,2)(1,2) \psline(2,2)(2.75,2) \psline(3.25,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) 
\psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2,4)(3,4) \psline(4,4)(4.75,4) \psline(5.25,4)(4,5) } \psline(1,0)(2,0) \psline(3,0)(4,0) \psline(5,0)(6,0) \psline(7,0)(8,0) \psline(0,1)(0,2) \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(7.75,1)(6.75,2) \psline(8.25,1)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psline(9,0)(9.5,0) \psdots[linecolor=red]% (0,1)% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (7.75,1)(8.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) \psdots[linecolor=darkgreen] (1,0)% (3,0)% (5,0)% (7,0)% (9,0)% (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% \psdots[linecolor=blue] (2,0)(4,0)(6,0)(8,0)% (0,2)(2,2)% (2,4)(4,4)% }} \rput[r](11.5,5){\small$t_1^{-n}t_2^{n+2m}$} \psecurve{->}(10.5,4)(11.5,5)(12.5,4)(12.5,3.25) \rput[l](-11.5,-5){\small$\,t_1^{n}t_2^{-n-2m}$} \psecurve{->}(-10.5,-4)(-11.5,-5)(-12.5,-4)(-12.5,-3.25) \rput[l](-5,5){\small$\,t_1^{-n}t_2^{n-2m-2}$} \psecurve{->}(-4,4)(-5,5)(-6,4.25)(-6,3.25) \rput[r](5,-5){\small$t_1^{n}t_2^{-n+2m+2}$} \psecurve{->}(4,-4)(5,-5)(6,-4.25)(6,-3.25) \end{pspicture} \caption{$n\leq m+1$}\label{fig:ResultPretzelsCase1} \end{subfigure} \\ \begin{subfigure}[b]{0.99\textwidth}\centering \begin{pspicture}(-13,-8.5)(13,8.5) \rput(-12.5,-7){ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(7.5,-0.5)(8.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) } \psline(1,0)(0,1) \psline(3,0)(2.25,1) \psline(5,0)(4.25,1) \psline(7,0)(6.25,1) \psline(9,0)(8.25,1) \psline(2,0)(1.75,1) \psline(4,0)(3.75,1) \psline(6,0)(5.75,1) \psline(8,0)(7.75,1) \psline(0,2)(1,2) \psline(2,2)(2.75,2) \psline(3.25,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2,4)(3,4) \psline(4,4)(4.75,4) \psline(5.25,4)(4,5) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1,0)(2,0) \psline(3,0)(4,0) \psline(5,0)(6,0) \psline(7,0)(8,0) \psline(0,1)(0,2) \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(7.75,1)(6.75,2) \psline(8.25,1)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psline(9,0)(9.5,0) } \psdots[linecolor=red]% (0,1)% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (7.75,1)(8.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) \psdots[linecolor=darkgreen] (1,0)% (3,0)% (5,0)% (7,0)% (9,0)% (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% \psdots[linecolor=gold] (2,0)(4,0)(6,0)(8,0)% (0,2)(2,2)% (2,4)(4,4)% } \rput(2.5,-7){ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(-2.5,-0.5)(-1.5,0.5) \psframe*(-4.5,-0.5)(-3.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(-1.5,0.5)(-0.5,1.5) \psframe*(-3.5,0.5)(-2.5,1.5) \psframe*(-5.5,0.5)(-4.5,1.5)
\psframe*(-0.5,1.5)(0.5,2.5) \psframe*(-2.5,1.5)(-1.5,2.5) \psframe*(-4.5,1.5)(-3.5,2.5) \psframe*(-1.5,2.5)(-0.5,3.5) \psframe*(-3.5,2.5)(-2.5,3.5) \psframe*(-2.5,3.5)(-1.5,4.5) } \psline(0,0)(0,1) \psline(-1,0)(-1.75,1) \psline(-2,0)(-2.25,1) \psline(-3,0)(-3.75,1) \psline(-4,0)(-4.25,1) \psline(0,2)(-0.25,3) \psline(1,2)(0.25,3) \psline(-0.75,2)(-1.75,3) \psline(-1.25,2)(-2.25,3) \psline(-2.75,2)(-3.75,3) \psline(-3.25,2)(-4.25,3) \psline(-0.75,4)(-1.75,5) \psline(-1.25,4)(-2.25,5) \psline(-4.75,2)(-5.25,2.5) \psline(-5.25,2)(-5.75,2.5) \psline(-2.75,4)(-3.25,4.5) \psline(-3.25,4)(-3.75,4.5) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1.5,2)(1,2) \psline(0,0)(-1,0) \psline(-2,0)(-3,0) \psline(-1.75,1)(-2.75,2) \psline(-2.25,1)(-3.25,2) \psline(-3.75,1)(-4.75,2) \psline(-4.25,1)(-5.25,2) \psline(-1.75,3)(-2.75,4) \psline(-2.25,3)(-3.25,4) \psline(0.25,3)(-0.75,4) \psline(-0.25,3)(-1.25,4) \psline(0,1)(-0.75,2) \pscurve(0,2)(-0.25,1.8)(-1,1.8)(-1.25,2) \psline(-4,0)(-4.5,0) \psline(-1.75,5)(-2.25,5.5) \psline(-2.25,5)(-2.75,5.5) \psline(-3.75,3)(-4.25,3.5) \psline(-4.25,3)(-4.75,3.5) } \psdots[linecolor=blue] (0,0)(0,2) \psdots[linecolor=red]% (0,1)% (-1.75,1)(-2.25,1)% (-3.75,1)(-4.25,1)% (0.25,3)(-0.25,3)% (-1.75,3)(-2.25,3)% (-3.75,3)(-4.25,3)% (-1.75,5)(-2.25,5)% \psdots[linecolor=darkgreen] (1,2)% (-1,0)(-3,0)% (-0.75,2)(-1.25,2)% (-2.75,2)(-3.25,2)% (-4.75,2)(-5.25,2)% (-0.75,4)(-1.25,4)% (-2.75,4)(-3.25,4)% \psdots[linecolor=gold] (-2,0)(-4,0) } \rput(-8.5,-2){ {\psset{linecolor=lightgray} \psframe*(3.5,-2.5)(4.5,-1.5) \psframe*(2.5,-1.5)(3.5,-0.5) \psframe*(4.5,-1.5)(5.5,-0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(6.5,2.5)(7.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(5.5,3.5)(6.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) \psframe*(4.5,4.5)(5.5,5.5) } \psline(0.5,2)(1,2) \psline(2,2)(2.75,2) \psline(3.25,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2.75,0)(1.75,1) \psline(3.25,0)(2.25,1) \psline(4.75,0)(3.75,1) \psline(5.25,0)(4.25,1) \psline(2,4)(3,4) \pscurve(4,4)(4.25,4.2)(5,4.2)(5.25,4) \psline(4.75,4)(4,5) \psline(4.25,-1.5)(3.75,-1) \psline(4.75,-1.5)(4.25,-1) \psline(6.25,0.5)(5.75,1) \psline(6.75,0.5)(6.25,1) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(3.75,-1)(2.75,0) \psline(4.25,-1)(3.25,0) \psline(5.25,-0.5)(4.75,0) \psline(5.75,-0.5)(5.25,0) \psline(7.25,1.5)(6.75,2) \psline(7.75,1.5)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) } \psdots[linecolor=red]% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) (3.75,-1)(4.25,-1)% \psdots[linecolor=darkgreen] (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% (2.75,0)(3.25,0)% (4.75,0)(5.25,0)% \psdots[linecolor=gold] (2,2)% (2,4)(4,4 } \rput(-2.5,7){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(-2.5,-0.5)(-1.5,0.5) \psframe*(-4.5,-0.5)(-3.5,0.5) 
\psframe*(0.5,0.5)(1.5,1.5) \psframe*(-1.5,0.5)(-0.5,1.5) \psframe*(-3.5,0.5)(-2.5,1.5) \psframe*(-5.5,0.5)(-4.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(-2.5,1.5)(-1.5,2.5) \psframe*(-4.5,1.5)(-3.5,2.5) \psframe*(-1.5,2.5)(-0.5,3.5) \psframe*(-3.5,2.5)(-2.5,3.5) \psframe*(-2.5,3.5)(-1.5,4.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(0,0)(0,1) \psline(-1,0)(-1.75,1) \psline(-2,0)(-2.25,1) \psline(-3,0)(-3.75,1) \psline(-4,0)(-4.25,1) \psline(0,2)(-0.25,3) \psline(1,2)(0.25,3) \psline(-0.75,2)(-1.75,3) \psline(-1.25,2)(-2.25,3) \psline(-2.75,2)(-3.75,3) \psline(-3.25,2)(-4.25,3) \psline(-0.75,4)(-1.75,5) \psline(-1.25,4)(-2.25,5) \psline(-4.75,2)(-5.25,2.5) \psline(-5.25,2)(-5.75,2.5) \psline(-2.75,4)(-3.25,4.5) \psline(-3.25,4)(-3.75,4.5) } \psline(1.5,2)(1,2) \psline(0,0)(-1,0) \psline(-2,0)(-3,0) \psline(-1.75,1)(-2.75,2) \psline(-2.25,1)(-3.25,2) \psline(-3.75,1)(-4.75,2) \psline(-4.25,1)(-5.25,2) \psline(-1.75,3)(-2.75,4) \psline(-2.25,3)(-3.25,4) \psline(0.25,3)(-0.75,4) \psline(-0.25,3)(-1.25,4) \psline(0,1)(-0.75,2) \pscurve(0,2)(-0.25,1.8)(-1,1.8)(-1.25,2) \psline(-4,0)(-4.5,0) \psline(-1.75,5)(-2.25,5.5) \psline(-2.25,5)(-2.75,5.5) \psline(-3.75,3)(-4.25,3.5) \psline(-4.25,3)(-4.75,3.5) \psdots[linecolor=gold] (0,0)(0,2) \psdots[linecolor=red]% (0,1)% (-1.75,1)(-2.25,1)% (-3.75,1)(-4.25,1)% (0.25,3)(-0.25,3)% (-1.75,3)(-2.25,3)% (-3.75,3)(-4.25,3)% (-1.75,5)(-2.25,5)% \psdots[linecolor=darkgreen] (1,2)% (-1,0)(-3,0)% (-0.75,2)(-1.25,2)% (-2.75,2)(-3.25,2)% (-4.75,2)(-5.25,2)% (-0.75,4)(-1.25,4)% (-2.75,4)(-3.25,4)% \psdots[linecolor=blue] (-2,0)(-4,0) }} \rput(8.5,2){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(3.5,-2.5)(4.5,-1.5) \psframe*(2.5,-1.5)(3.5,-0.5) \psframe*(4.5,-1.5)(5.5,-0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(6.5,2.5)(7.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(5.5,3.5)(6.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) \psframe*(4.5,4.5)(5.5,5.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(0.5,2)(1,2) \psline(2,2)(2.75,2) \psline(3.25,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2.75,0)(1.75,1) \psline(3.25,0)(2.25,1) \psline(4.75,0)(3.75,1) \psline(5.25,0)(4.25,1) \psline(2,4)(3,4) \pscurve(4,4)(4.25,4.2)(5,4.2)(5.25,4) \psline(4.75,4)(4,5) \psline(4.25,-1.5)(3.75,-1) \psline(4.75,-1.5)(4.25,-1) \psline(6.25,0.5)(5.75,1) \psline(6.75,0.5)(6.25,1) } \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(3.75,-1)(2.75,0) \psline(4.25,-1)(3.25,0) \psline(5.25,-0.5)(4.75,0) \psline(5.75,-0.5)(5.25,0) \psline(7.25,1.5)(6.75,2) \psline(7.75,1.5)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psdots[linecolor=red]% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) (3.75,-1)(4.25,-1)% \psdots[linecolor=darkgreen] (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% (2.75,0)(3.25,0)% (4.75,0)(5.25,0)% \psdots[linecolor=blue] (2,2)% (2,4)(4,4)% }}
\rput(12.5,7){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(7.5,-0.5)(8.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1,0)(0,1) \psline(3,0)(2.25,1) \psline(5,0)(4.25,1) \psline(7,0)(6.25,1) \psline(9,0)(8.25,1) \psline(2,0)(1.75,1) \psline(4,0)(3.75,1) \psline(6,0)(5.75,1) \psline(8,0)(7.75,1) \psline(0,2)(1,2) \psline(2,2)(2.75,2) \psline(3.25,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2,4)(3,4) \psline(4,4)(4.75,4) \psline(5.25,4)(4,5) } \psline(1,0)(2,0) \psline(3,0)(4,0) \psline(5,0)(6,0) \psline(7,0)(8,0) \psline(0,1)(0,2) \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(7.75,1)(6.75,2) \psline(8.25,1)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psline(9,0)(9.5,0) \psdots[linecolor=red]% (0,1)% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (7.75,1)(8.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) \psdots[linecolor=darkgreen] (1,0)% (3,0)% (5,0)% (7,0)% (9,0)% (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% \psdots[linecolor=blue] (2,0)(4,0)(6,0)(8,0)% (0,2)(2,2)% (2,4)(4,4)% }} \rput[r](11.5,8){\small$t_1^{-n}t_2^{n+2m}$} \psecurve{->}(10.5,7)(11.5,8)(12.5,7)(12.5,6.25) \rput[l](-11.5,-8){\small$\,t_1^{n}t_2^{-n-2m}$} \psecurve{->}(-10.5,-7)(-11.5,-8)(-12.5,-7)(-12.5,-6.25) \rput[l](-1.5,8){\small$\,t_1^{-n}t_2^{n-2m-2}$} \psecurve{->}(-0.5,7)(-1.5,8)(-2.5,7.25)(-2.5,6.25) \rput[r](1.5,-8){\small$t_1^{n}t_2^{-n+2m+2}$} \psecurve{->}(0.5,-7)(1.5,-8)(2.5,-7.25)(2.5,-6.25) \rput[r](-7.75,2){\small$t_1^{n-2m-2}t_2^{-n}$} \psline{->}(-7.75,2)(-6.75,2) \rput[l](7.75,-2){\small$\,t_1^{-n+2m+2}t_2^{n}$} \psline{->}(7.75,-2)(6.75,-2) \end{pspicture} \caption{$n>m+1$}\label{fig:ResultPretzelsCase2} \end{subfigure} \caption{The peculiar module of the $(2n,-(2m+1))$-pretzel tangle, shown in Figure~\ref{fig:pretzeltangle2nm2mp1}. We use the same conventions as in Example~\ref{exa:HFTdpretzeltangle}; see also Figures~\ref{fig:firstcomplex} and~\ref{fig:examplesimplifiedgraph}. All generators for sites $a$ and $c$, ie the red and green vertices, are in the same $\delta$-grading. The diagonals connecting pairs of red and green generators with the same Alexander grading should be continued in such a way that they do not intersect each other.
}\label{fig:ResultPretzels} \end{figure} \begin{figure}[p] \begin{subfigure}[b]{\textwidth}\centering \psset{unit=0.45,linearc=0.25} \begin{pspicture}(-19.5,-17.5)(9.5,17.5) \rput(0,10){ \pscustom*[linecolor=lightgray,linewidth=0pt]{ \psline[liftpen=1](-3,0)(-3,-4)(-4,-4) \psline[liftpen=1](-4,-4)(-4,-5) \psline[liftpen=1](-4,-5)(3,-5) \psline[liftpen=1](3,-5)(3,-4) \psline[linecolor=violet,liftpen=1](3,-4)(-2,-4)(-2,0)(-3,0) } \psline[linecolor=blue](4,-5)(-6,-5)(-6,3)(0,3)(0,-2)(5,-2)(5,6)(-9,6)(-9,-8)(-3,-8) \psline[linecolor=violet](-3,0)(-3,-4)(-5,-4)(-5,2)(-1,2)(-1,-3)(6,-3)(6,7)(-10,7)(-10,-9)(-3,-9) \psline[linecolor=violet](-3,0)(-2,0)(-2,-4)(4,-4) \psline[linecolor=violet](4,-6)(-7,-6)(-7,4)(1,4)(1,-1)(4,-1)(4,5)(-8,5)(-8,-7)(-3,-7) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](4.5,-4)(4,-4) \psline[linecolor=blue,linestyle=dotted,dotsep=1pt](4.5,-5)(4,-5) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](4.5,-6)(4,-6) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](-2.5,-7)(-3,-7) \psline[linecolor=blue,linestyle=dotted,dotsep=1pt](-2.5,-8)(-3,-8) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](-2.5,-9)(-3,-9) \psline[linecolor=red](-3,0)(-3,1)(3,1)(3,0) \psline[linecolor=red](-3,0)(-4,0)(-4,-10) \psline[linecolor=red](3,0)(3,-7) \pscircle[fillstyle=solid,fillcolor=white](-3,0){0.5} \pscircle[fillstyle=solid,fillcolor=white,linecolor=darkgreen](3,0){0.5} {\psset{dotstyle=|} \psdot[dotangle=-45](-1,1)\uput{0.2}[45](-1,1){$\overline{d}$} \psdot[dotangle=-45](0,1)\uput{0.2}[45](0,1){$d$} \psdot[dotangle=-45](1,1)\uput{0.2}[45](1,1){$\underline{d}$} \psdot[dotangle=45](3,-1)\uput{0.2}[-45](3,-1){$\underline{c}_1$} \psdot[dotangle=45](3,-2)\uput{0.2}[-45](3,-2){$c_1$} \psdot[dotangle=45](3,-3)\uput{0.2}[-45](3,-3){$\overline{c}_1$} \psdot[dotangle=45](3,-4)\uput{0.2}[-45](3,-4){$\overline{c}_2$} \psdot[dotangle=45](3,-5)\uput{0.2}[-45](3,-5){$c_2$} \psdot[dotangle=45](3,-6)\uput{0.2}[-45](3,-6){$\underline{c}_2$} \psdot[dotangle=-45](-4,-4)\uput{0.2}[-135](-4,-4){$\overline{y}_1$} \psdot[dotangle=-45](-4,-5)\uput{0.2}[-135](-4,-5){$y_1$} \psdot[dotangle=-45](-4,-6)\uput{0.2}[-135](-4,-6){$\underline{y}_1$} \psdot[dotangle=-45](-4,-7)\uput{0.2}[-135](-4,-7){$\underline{y}_2$} \psdot[dotangle=-45](-4,-8)\uput{0.2}[-135](-4,-8){$y_2$} \psdot[dotangle=-45](-4,-9)\uput{0.2}[-135](-4,-9){$\overline{y}_2$} \uput{0.7}[65](3,0){$q_4$} \uput{0.7}[180](3,0){$p_4$} } } \rput(0,-10){ \pscustom*[linecolor=lightgray,linewidth=0pt]{ \psline[liftpen=1](3,7)(3,8) \psline[liftpen=1](3,8)(8,8)(8,-5)(-6,-5)(-6,2)(-4,2) \psline[liftpen=1](-4,2)(-4,1) \psline[liftpen=1](-4,1)(-5,1)(-5,0)(-4,0) \psline[liftpen=1](-4,0)(-4,-4)(7,-4)(7,7)(3,7) } \psline[linecolor=blue](2,8)(8,8)(8,-5)(-6,-5)(-6,2)(0,2)(0,-2)(5,-2)(5,5)(-5,5) \psline[linecolor=violet](-4,0)(-5,0)(-5,1)(-1,1)(-1,-3)(6,-3)(6,6)(-5,6) \psline[linecolor=violet](-4,0)(-4,-4)(7,-4)(7,7)(2,7) \psline[linecolor=violet](2,9)(9,9)(9,-6)(-7,-6)(-7,3)(1,3)(1,-1)(4,-1)(4,4)(-5,4) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](-5.5,4)(-5,4) \psline[linecolor=blue,linestyle=dotted,dotsep=1pt](-5.5,5)(-5,5) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](-5.5,6)(-5,6) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](1.5,7)(2,7) \psline[linecolor=blue,linestyle=dotted,dotsep=1pt](1.5,8)(2,8) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](1.5,9)(2,9) \psline[linecolor=red](-4,0)(3,0) \psline[linecolor=red](-4,0)(-4,7) \psline[linecolor=red](3,0)(3,10) \psline[linestyle=dotted,linecolor=red](-4,10)(-4,7) 
\psline[linestyle=dotted,linecolor=red](3,13)(3,10) \pscircle[fillstyle=solid,fillcolor=white](-4,0){0.5} \pscircle[fillstyle=solid,fillcolor=white,linecolor=darkgreen](3,0){0.5} {\psset{dotstyle=|} \psdot[dotangle=-45](-1,0)\uput{0.2}[-135](-1,0){$\underline{b}$} \psdot[dotangle=-45](0,0)\uput{0.2}[-135](0,0){$b$} \psdot[dotangle=-45](1,0)\uput{0.2}[-135](1,0){$\overline{b}$} \psdot[dotangle=-45](3,4)\uput{0.2}[-135](3,4){$\overline{c}_{2m+1}$} \psdot[dotangle=-45](3,5)\uput{0.2}[-135](3,5){$c_{2m+1}$} \psdot[dotangle=-45](3,6)\uput{0.2}[-135](3,6){$\underline{c}_{2m+1}$} \psdot[dotangle=-45](3,7)\uput{0.2}[-135](3,7){$\underline{c}_{2m}$} \psdot[dotangle=-45](3,8)\uput{0.2}[-135](3,8){$c_{2m}$} \psdot[dotangle=-45](3,9)\uput{0.2}[-135](3,9){$\overline{c}_{2m}$} \psdot[dotangle=-45](-4,1)\uput{0.2}[45](-4,1){$\underline{y}_{2m+1}$} \psdot[dotangle=-45](-4,2)\uput{0.2}[45](-4,2){$y_{2m+1}$} \psdot[dotangle=-45](-4,3)\uput{0.2}[45](-4,3){$\overline{y}_{2m+1}$} \psdot[dotangle=-45](-4,4)\uput{0.2}[45](-4,4){$\overline{y}_{2m}$} \psdot[dotangle=-45](-4,5)\uput{0.2}[45](-4,5){$y_{2m}$} \psdot[dotangle=-45](-4,6)\uput{0.2}[45](-4,6){$\underline{y}_{2m}$} \uput{0.7}[65](3,0){$q_3$} \uput{0.7}[135](3,0){$p_3$} } } \rput(-16,10){ \pscustom*[linecolor=lightgray,linewidth=0pt]{ \psline[liftpen=1](-1,-1)(-1,1)(0,1) \psline[liftpen=1](0,1)(0,0)(1,0) \psline[liftpen=1](1,0)(1,-1)(-1,-1) } \psline[linecolor=violet](1,0)(0,0)(0,2)(3,2)(3,-2)(-2,-2) \psline[linecolor=violet](1,0)(1,-1)(-2,-1)(-2,3)(4,3)(4,-3)(1,-3) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](0.5,-3)(1,-3) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](-2.5,-2)(-2,-2) \psline[linecolor=red](-1,0)(-1,1)(1,1)(1,0) \psline[linecolor=red](-1,0)(-1,-3) \psline[linecolor=red](1,0)(2,0)(2,-4) \pscircle[fillstyle=solid,fillcolor=white,linecolor=darkgreen](-1,0){0.5} \pscircle[fillstyle=solid,fillcolor=white](1,0){0.5} \psline[linecolor=darkgreen](-0.75,0)(-0.25,0) {\psset{dotstyle=|} \psdot[dotangle=45](0,1)\uput{0.2}[135](0,1){$d'$} \psdot[dotangle=45](2,-2)\uput{0.2}[-45](2,-2){$x_1$} \psdot[dotangle=45](2,-3)\uput{0.2}[-45](2,-3){$x_2$} \psdot[dotangle=-45](-1,-1)\uput{0.2}[-135](-1,-1){$a_1$} \psdot[dotangle=-45](-1,-2)\uput{0.2}[-135](-1,-2){$a_2$} \uput{0.7}[115](-1,0){$q_1$} } } \rput(-16,-10){ \pscustom*[linecolor=lightgray,linewidth=0pt]{ \psline[liftpen=1](-1,2)(-1,0)(0,0) \psline[liftpen=1](0,0)(0,-1)(2,-1)(2,0)(3,0)(3,-2)(-2,-2)(-2,2)(-1,2) } \psline[linecolor=violet](2,0)(2,-1)(0,-1)(0,1)(4,1)(4,-3)(-3,-3)(-3,3)(0,3) \psline[linecolor=violet](2,0)(3,0)(3,-2)(-2,-2)(-2,2)(3,2) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](0.5,3)(0,3) \psline[linecolor=violet,linestyle=dotted,dotsep=1pt](3.5,2)(3,2) \psline[linecolor=red](-1,0)(2,0) \psline[linecolor=red](-1,0)(-1,4) \psline[linecolor=red](2,0)(2,3) \psline[linecolor=red,linestyle=dotted](-1,17)(-1,4) \psline[linecolor=red,linestyle=dotted](2,16)(2,3) \pscircle[fillstyle=solid,fillcolor=white,linecolor=darkgreen](-1,0){0.5} \pscircle[fillstyle=solid,fillcolor=white](2,0){0.5} \rput(-1,0){\psline[linecolor=darkgreen](0.75;-135)(0.25;-135)} {\psset{dotstyle=|} \psdot[dotangle=-45](0,0)\uput{0.2}[45](0,0){$b'$} \psdot[dotangle=-45](2,1)\uput{0.2}[45](2,1){$x_{2n}$} \psdot[dotangle=-45](2,2)\uput{0.2}[45](2,2){$x_{2n-1}$} \psdot[dotangle=-45](-1,2)\uput{0.2}[45](-1,2){$a_{2n}$} \psdot[dotangle=-45](-1,3)\uput{0.2}[45](-1,3){$a_{2n-1}$} \uput{0.7}[65](-1,0){$p_2$} } } \end{pspicture} \caption{A niceified Heegaard diagram for a $(2n,-(2m+1))$-pretzel tangle with 
$n,m>0$.}\label{fig:CalculationPretzelsStep1HD} \end{subfigure} \begin{subfigure}[b]{0.97\textwidth}\centering \bigskip \begin{tabular}{cccccccc} & $\textcolor{red}{a_iy_j}$ & $\textcolor{blue}{b'y_j}$ & $\textcolor{blue}{x_ib}$ & $\textcolor{darkgreen}{x_ic_j}$ && $\textcolor{gold}{d'y_j}$ & $\textcolor{gold}{x_id}$ \\ & & $\textcolor{blue}{\underline{b}y_j}$ & $\textcolor{blue}{\underline{y}_jb}$ & $\textcolor{darkgreen}{\underline{c}_jy_{k}}$ & $\textcolor{darkgreen}{\underline{y}_kc_{j}}$ & $\textcolor{gold}{\underline{d}y_j}$ & $\textcolor{gold}{\underline{y}_jd}$ \\ & & $\textcolor{blue}{\overline{b}y_j}$ & $\textcolor{blue}{\overline{y}_jb}$ & $\textcolor{darkgreen}{\overline{c}_jy_{k}}$ & $\textcolor{darkgreen}{\overline{y}_kc_{j}}$ & $\textcolor{gold}{\overline{d}y_j}$ & $\textcolor{gold}{\overline{y}_jd}$ \end{tabular} \medskip \caption{Generators of the Heegaard diagram above, where $1\leq i\leq 2n$ and $1\leq j,k\leq 2m+1$. The generators of the second and third row can be cancelled. }\label{fig:CalculationPretzelsStep1Gens} \end{subfigure} \caption{The first step of the calculation of the peculiar modules of $(2n,-(2m+1))$-pretzel tangles.}\label{fig:CalculationPretzelsStep1} \end{figure} \begin{figure}[h] \centering \begin{subfigure}[b]{0.37\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{darkgreen}{x_{i+1}c_{j-1}} \arrow{dr}{q_{14}} \\ & \textcolor{red}{\ovalbox{$a_iy_j$}} \\ && \textcolor{darkgreen}{x_{i-1}c_{j+1}} \arrow[swap]{ul}{p_{23}} \end{tikzcd} \] } \caption{$1<i<2n$ and $1<j<2m+1$}\label{fig:CalculationPretzelsStep2Redij} \end{subfigure} \begin{subfigure}[b]{0.28\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{gold}{x_{i+1}d} \arrow{d}{q_{1}} \\ \textcolor{red}{\ovalbox{$a_iy_1$}} \\ & \textcolor{darkgreen}{x_{i-1}c_{2}} \arrow[swap]{ul}{p_{23}} \end{tikzcd} \] } \caption{$1<i<2n$ and $j=1$}\label{fig:CalculationPretzelsStep2Redi1} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{darkgreen}{x_{i+1}c_{2m}} \arrow{rd}{q_{14}} \\ & \textcolor{red}{\ovalbox{$a_{i}y_{2m+1}$}} \\ & \textcolor{blue}{x_{i-1}b} \arrow[swap]{u}{p_{2}} \end{tikzcd} \] } \caption{$1<i<2n$ and $j=2m+1$}\label{fig:CalculationPretzelsStep2Redi2mp1} \end{subfigure} \\ \begin{subfigure}[b]{0.37\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{red}{a_{i+1}y_{j-1}} \\ & \textcolor{darkgreen}{\ovalbox{$x_{i}c_{j}$}} \arrow[swap]{ul}{p_{23}} \arrow{dr}{q_{14}} \\ && \textcolor{red}{a_{i-1}y_{j+1}} \end{tikzcd} \] } \caption{$1<i<2n$ and $1<j<2m+1$}\label{fig:CalculationPretzelsStep2Greenij} \end{subfigure} \begin{subfigure}[b]{0.28\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{gold}{x_{i}d} \arrow{r}{p_4} & \textcolor{darkgreen}{\ovalbox{$x_{i}c_{1}$}} \arrow{dr}{q_{14}} \\ && \textcolor{red}{a_{i-1}y_2} \end{tikzcd} \] } \caption{$1<i<2n$ and $j=1$}\label{fig:CalculationPretzelsStep2Greeni1} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{red}{a_{i+1}y_{2m}} \\ & \textcolor{darkgreen}{\ovalbox{$x_{i}c_{2m+1}$}} \arrow[swap]{lu}{p_{23}} & \textcolor{blue}{x_{i}b} \arrow[swap]{l}{q_{3}}\\ \phantom{x_1} \end{tikzcd} \] } \caption{$1<i<2n$ and $j=2m+1$}\label{fig:CalculationPretzelsStep2Greeni2mp1} \end{subfigure} \\ \begin{subfigure}[b]{0.27\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] & \textcolor{red}{a_{i+1}y_{2m+1}} \\ 
\textcolor{darkgreen}{x_{i}c_{2m+1}} & \textcolor{blue}{\ovalbox{$x_{i}b$}} \arrow[swap]{l}{q_{3}} \arrow{u}{p_{2}} \end{tikzcd} \] } \caption{$1\leq i<2n$}\label{fig:CalculationPretzelsStep2Bluei} \end{subfigure} \begin{subfigure}[b]{0.27\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm] \textcolor{gold}{\ovalbox{$x_{i}d$}} \arrow[swap]{d}{q_{1}} \arrow{r}{p_{4}} & \textcolor{darkgreen}{x_{i}c_{1}} \\ \textcolor{red}{a_{i-1}y_{1}} \end{tikzcd} \] } \caption{$1< i\leq 2n$}\label{fig:CalculationPretzelsStep2Goldi} \end{subfigure} \caption{Some differentials for the computation of the $(2n,-(2m+1))$-pretzel tangle in non-extremal $t_1$-Alexander grading.}\label{fig:CalculationPretzelsAStep2generic} \end{figure} \begin{figure}[p] \begin{subfigure}[b]{0.95\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,...,F} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-27pt,yshift=9pt]A) [rounded corners=5pt] % -- ([xshift=-27pt,yshift=-12pt]A) % -- ([xshift=21pt,yshift=-12pt]B) % -- ([xshift=21pt,yshift=9pt]B) % -- cycle; \fill[lightgray!40] ([xshift=-21pt,yshift=12pt]C) [rounded corners=5pt] % -- ([xshift=-21pt,yshift=-12pt]C) % -- ([xshift=10pt,yshift=-12pt]D) % -- ([xshift=10pt,yshift=12pt]D) % -- cycle; \fill[lightgray!40] ([xshift=-18pt,yshift=9pt]E) [rounded corners=5pt] % -- ([xshift=-18pt,yshift=-9pt]E) % -- ([xshift=30pt,yshift=-9pt]F) % -- ([xshift=30pt,yshift=9pt]F) % -- cycle; \end{pgfonlayer} } ] |[alias=E]| \textcolor{darkgreen}{x_2c_{2m}} \arrow[swap]{drr}{q_{14}} & \textcolor{darkgreen}{x_3c_{2m-1}} \arrow[dashed,swap,pos=0.25]{drr}{q_{14}} & |[alias=F]| \textcolor{darkgreen}{\underline{y}_{2m+1}c_{2m-1}} \arrow[dotted]{dr}{q_{14}} \\ && |[alias=A]| \textcolor{red}{\ovalbox{$a_1y_{2m+1}$}} & |[alias=B]| \textcolor{red}{\ovalbox{$a_2y_{2m}$}} \\ \textcolor{darkgreen}{x_{1}c_{2m}} \arrow{r}{q_4} & |[alias=C]| \textcolor{gold}{d'y_{2m+1}} & \textcolor{blue}{\ovalbox{$\overline{y}_{1}b$}} \arrow{r}{1} \arrow{u}{p_2} & |[alias=D]| \textcolor{blue}{\overline{b}y_{1}} & \textcolor{darkgreen}{\ovalbox{$x_{1}c_{2m+1}$}} \arrow[swap]{l}{p_3} \arrow[swap]{ul}{p_{23}} & \textcolor{blue}{x_1b} \arrow[swap]{l}{q_3} \end{tikzcd} \] } \caption{}\label{fig:CalculationPretzelsStep2BottomEnd} \end{subfigure} \\ \begin{subfigure}[b]{0.95\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,...,H} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-25pt,yshift=9pt]A) [rounded corners=5pt] % -- ([xshift=-25pt,yshift=-13pt]A) % -- ([xshift=19pt,yshift=-13pt]B) % -- ([xshift=19pt,yshift=9pt]B) % -- cycle; \fill[lightgray!40] ([xshift=-16pt,yshift=9pt]G) [rounded corners=5pt] % -- ([xshift=-16pt,yshift=-9pt]G) % -- ([xshift=29pt,yshift=-9pt]H) % -- ([xshift=29pt,yshift=9pt]H) % -- cycle; \fill[lightgray!40] ([xshift=-25pt,yshift=11pt]C) [rounded corners=5pt] % -- ([xshift=-25pt,yshift=-13pt]C) % -- ([xshift=24pt,yshift=-13pt]D) % -- ([xshift=24pt,yshift=11pt]D) % -- cycle; \fill[lightgray!40] ([shift=(145:23pt)]E) % [rounded corners=5pt]-- ([shift=(35:23pt)]E) % -- ([shift=(-35:23pt)]F) % -- ([shift=(-145:23pt)]F) % -- cycle; \end{pgfonlayer} } ] |[alias=G]| \textcolor{darkgreen}{x_2c_{2j}} \arrow[swap]{drr}{q_{14}} & \textcolor{darkgreen}{x_3c_{2j-1}} \arrow[dashed,swap,pos=0.25]{drr}{q_{14}} & |[alias=H]| \textcolor{darkgreen}{\underline{y}_{2m+1}c_{2j-1}} 
\arrow[dotted]{dr}{q_{14}} \\ && |[alias=A]| \textcolor{red}{\ovalbox{$a_1y_{2j+1}$}} & |[alias=B]| \textcolor{red}{\ovalbox{$a_2y_{2j}$}} &&& |[alias=E]| \textcolor{gold}{\overline{d}y_{2j+3}} \\ & \textcolor{darkgreen}{x_{1}c_{2j}} \arrow{r}{q_4} & \textcolor{gold}{d'y_{2j+1}} & |[alias=C]| \textcolor{darkgreen}{\ovalbox{$\overline{y}_{1}c_{2j+2}$}} \arrow{r}{1} \arrow{ul}{p_{23}} \arrow[in=180,out=90,looseness=0.3,pos=.7]{rrru}{q_4} & \textcolor{darkgreen}{\overline{c}_{2j+2}y_{1}} & |[alias=D]| \textcolor{darkgreen}{\ovalbox{$x_{1}c_{2j+1}$}} \arrow[swap]{l}{1} \arrow{ull}{p_{23}} \arrow{r}{q_4} & |[alias=F]| \textcolor{gold}{d'y_{2j+2}} \end{tikzcd} \] } \caption{$1\leq j< m$}\label{fig:CalculationPretzelsStep2BottomOdd} \end{subfigure} \\ \begin{subfigure}[b]{0.95\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,...,D} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-16pt,yshift=9pt]A) [rounded corners=5pt] % -- ([xshift=-16pt,yshift=-13pt]A) % -- ([xshift=25pt,yshift=-13pt]B) % -- ([xshift=25pt,yshift=9pt]B) % -- cycle; \fill[lightgray!40] ([xshift=-21pt,yshift=9pt]C) [rounded corners=5pt] % -- ([xshift=-21pt,yshift=-9pt]C) % -- ([xshift=29pt,yshift=-9pt]D) % -- ([xshift=29pt,yshift=9pt]D) % -- cycle; \end{pgfonlayer} }] |[alias=C]| \textcolor{darkgreen}{x_3c_{2j-2}} \arrow[dashed,swap,pos=0.25]{drr}{q_{14}} & |[alias=D]| \textcolor{darkgreen}{\underline{y}_{2m+1}c_{2j-2}} \arrow[dotted]{dr}{q_{14}} \\ & |[alias=A]| \textcolor{red}{a_1y_{2j}} & |[alias=B]| \textcolor{red}{\ovalbox{$a_2y_{2j-1}$}} \\ \textcolor{darkgreen}{x_{1}c_{2j-1}} \arrow{r}{q_4} & \textcolor{gold}{d'y_{2j}} & & \textcolor{darkgreen}{\ovalbox{$x_{1}c_{2j}$}} \arrow{ul}{p_{23}} \arrow{r}{q_4} & \textcolor{gold}{d'y_{2j+1}} \end{tikzcd} \] } \caption{$1\leq j\leq m$}\label{fig:CalculationPretzelsStep2BottomEven} \end{subfigure} \\ \begin{subfigure}[b]{0.3\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,...,D} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-17pt,yshift=11pt]A) [rounded corners=5pt]% -- ([xshift=17pt,yshift=11pt]A) % -- ([xshift=17pt,yshift=-11pt]B) % -- ([xshift=-17pt,yshift=-11pt]B) % -- cycle; \fill[lightgray!40] ([xshift=-16pt,yshift=13pt]C) [rounded corners=5pt]% -- ([xshift=16pt,yshift=13pt]C) % -- ([xshift=16pt,yshift=-11pt]D) % -- ([xshift=-16pt,yshift=-11pt]D) % -- cycle; \end{pgfonlayer} } ] \textcolor{gold}{x_{2}d} \arrow[swap]{d}{q_{1}} \\ \textcolor{red}{\ovalbox{$a_{1}y_1$}} \\ & |[alias=A]| \textcolor{darkgreen}{\ovalbox{$\overline{y}_1c_2$}} \arrow[swap]{d}{1} \arrow[swap]{lu}{p_{23}} \\ |[alias=C]| \textcolor{gold}{\ovalbox{$d'y_1$}} \arrow{r}{p_{4}} & \textcolor{darkgreen}{\ovalbox{$\overline{c}_2y_1$}} \\ |[alias=D]| \textcolor{gold}{\ovalbox{$x_1d$}} \arrow{r}{p_{4}} \arrow{u}{1} & |[alias=B]| \textcolor{darkgreen}{\ovalbox{$x_1c_1$}} \arrow{u}{1} \arrow{r}{q_4} & \textcolor{gold}{d'y_2} \end{tikzcd} \] } \caption{}\label{fig:CalculationPretzelsStep2GoldCancel} \end{subfigure} \caption{Some differentials for the computation of the $(2n,-(2m+1))$-pretzel tangle at generators in maximal $t_1$-Alexander grading before cancellation. 
The dotted arrows appear iff $n=1$, the dashed arrows iff $n>1$.}\label{fig:CalculationPretzelsAStep2MaximumP} \end{figure} \begin{figure}[b] \begin{subfigure}[b]{0.95\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,...,D} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-23pt,yshift=8pt]C) [rounded corners=5pt] % -- ([xshift=-23pt,yshift=-10pt]C) % -- ([xshift=21pt,yshift=-10pt]D) % -- ([xshift=21pt,yshift=8pt]D) % -- cycle; \fill[lightgray!40] ([xshift=-25pt,yshift=9pt]A) [rounded corners=5pt] % -- ([xshift=-25pt,yshift=-12pt]A) % -- ([xshift=20pt,yshift=-12pt]B) % -- ([xshift=20pt,yshift=9pt]B) % -- cycle; \end{pgfonlayer} } ] \textcolor{gold}{x_{2n}d} \arrow{r}{p_4} & \textcolor{darkgreen}{x_{2n}c_{1}} \arrow{rrd}{q_{14}} \arrow[swap]{rd}{q_{14}} & & \textcolor{blue}{b'y_1} \arrow{d}{q_{143}} & \textcolor{darkgreen}{x_{2n}c_{2}} \arrow[swap]{l}{p_3} \\ && |[alias=A]| \textcolor{red}{a_{2n-1}y_{2}} & |[alias=B]| \textcolor{red}{a_{2n}y_1} \\ &&&& |[alias=C]| \textcolor{darkgreen}{x_{2n-2}c_{3}} \arrow[dashed,swap,pos=0.25]{llu}{p_{23}} & |[alias=D]| \textcolor{darkgreen}{x_{2n-1}c_{2}} \arrow[swap]{llu}{p_{23}} \end{tikzcd} \] } \caption{}\label{fig:CalculationPretzelsStep2TopEnd} \end{subfigure} \\ \begin{subfigure}[b]{\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.35cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,...,D} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-33pt,yshift=9pt]A) [rounded corners=5pt] % -- ([xshift=-33pt,yshift=-13pt]A) % -- ([xshift=27pt,yshift=-13pt]B) % -- ([xshift=27pt,yshift=9pt]B) % -- cycle; \fill[lightgray!40] ([xshift=-30pt,yshift=9pt]C) [rounded corners=5pt] % -- ([xshift=-30pt,yshift=-9pt]C) % -- ([xshift=30pt,yshift=-9pt]D) % -- ([xshift=30pt,yshift=9pt]D) % -- cycle; \end{pgfonlayer} } ] \textcolor{blue}{b'y_{2j}} & \textcolor{darkgreen}{x_{2n}c_{2j+1}} \arrow[swap]{l}{p_3} \arrow[swap]{rd}{q_{14}} \arrow{rrd}{q_{14}} & & \textcolor{blue}{b'y_{2j+1}} \arrow{d}{q_{143}} & \textcolor{darkgreen}{x_{2n}c_{2j+2}} \arrow[swap]{l}{p_3} \\ && |[alias=A]| \textcolor{red}{a_{2n-1}y_{2j+2}} & |[alias=B]| \textcolor{red}{a_{2n}y_{2j+1}} \\ &&&& |[alias=C]| \textcolor{darkgreen}{x_{2n-2}c_{2j+3}} \arrow[dashed,swap,pos=0.25]{llu}{p_{23}} & |[alias=D]| \textcolor{darkgreen}{x_{2n-1}c_{2j+2}} \arrow[swap]{llu}{p_{23}} \end{tikzcd} \] } \caption{$1\leq j<m$}\label{fig:CalculationPretzelsStep2TopOdd} \end{subfigure} \\ \begin{subfigure}[b]{0.95\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm,% execute at end picture={ \begin{pgfonlayer}{background} \foreach \Nombre in {A,B,C,D} {\coordinate (\Nombre) at (\Nombre.center);} \fill[lightgray!40] ([xshift=-33pt,yshift=9pt]A) [rounded corners=5pt] % -- ([xshift=-33pt,yshift=-13pt]A) % -- ([xshift=18pt,yshift=-13pt]B) % -- ([xshift=18pt,yshift=9pt]B) % -- cycle; \end{pgfonlayer} } ] \textcolor{blue}{b'y_{2j-1}} & \textcolor{darkgreen}{x_{2n}c_{2j}} \arrow[swap]{l}{p_3} \arrow{rd}{q_{14}} & & \textcolor{blue}{b'y_{2j}} \arrow{d}{q_{143}} & \textcolor{darkgreen}{x_{2n}c_{2j+1}} \arrow[swap]{l}{p_3} \\ && |[alias=A]| \textcolor{red}{a_{2n-1}y_{2j+1}} & |[alias=B]| \textcolor{red}{a_{2n}y_{2j}} \\ &&&& \textcolor{darkgreen}{x_{2n-2}c_{2j+2}} \arrow[dashed,swap,pos=0.25]{llu}{p_{23}} \end{tikzcd} \] } \caption{$1\leq j\leq m$}\label{fig:CalculationPretzelsStep2TopEven} \end{subfigure} \\ 
\begin{subfigure}[b]{0.4\textwidth} { \[ \begin{tikzcd}[row sep=0.6cm, column sep=0.4cm ] \textcolor{blue}{b'y_{2m}} & \textcolor{darkgreen}{x_{2n}c_{2m+1}} \arrow[swap]{l}{p_{3}} \arrow{rd}{q_{14}} \\ && \textcolor{red}{a_{2n}y_{2m+1}} \\ && \textcolor{blue}{x_{2n-1}b} \arrow[swap]{u}{p_{2}} \end{tikzcd} \] } \caption{}\label{fig:CalculationPretzelsStep2BlueCancel} \end{subfigure} \caption{Some differentials for the computation of the $(2n,-(2m+1))$-pretzel tangle at generators in minimal $t_1$-Alexander grading after cancellation. The dashed arrows appear iff $n>1$.}\label{fig:CalculationPretzelsAStep2MinimumP} \end{figure} \begin{figure}[t] \centering \psset{unit=0.5} \begin{pspicture}(-13,-7.5)(13,7.5) \rput(-12.5,-7){ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(7.5,-0.5)(8.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) } \psline(1,0)(0,1) \psline(3,0)(2.25,1) \psline(5,0)(4.25,1) \psline(7,0)(6.25,1) \psline(9,0)(8.25,1) \psline(2,0)(1.75,1) \psline(4,0)(3.75,1) \psline(6,0)(5.75,1) \psline(8,0)(7.75,1) \psline(0,2)(1,2) \pscurve(2,2)(2.25,2.2)(3,2.2)(3.25,2) \psline(2.75,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2,4)(3,4) \pscurve(4,4)(4.25,4.2)(5,4.2)(5.25,4) \psline(4.75,4)(4,5) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1,0)(2,0) \psline(3,0)(4,0) \psline(5,0)(6,0) \psline(7,0)(8,0) \psline(0,1)(0,2) \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(7.75,1)(6.75,2) \psline(8.25,1)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psline(9,0)(9.5,0) } \pscurve{->}(5,0)(4.75,0.1)(4,0.8)(3.75,1) \pscurve{->}(9,0)(8.75,0.1)(8,0.8)(7.75,1) \pscurve{->}(4.25,1)(4.2,0.5)(4,0) \pscurve{->}(8.25,1)(8.2,0.5)(8,0) \psdots[linecolor=red]% (0,1)% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (7.75,1)(8.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) {\tiny \uput{0.15}[180](1.75,1){oe} \uput{0.15}[0](2.25,1){eo} \uput{0.15}[180](3.75,1){oo} \uput{0.15}[0](4.25,1){ee} \uput{0.15}[180](5.75,1){oe} \uput{0.15}[0](6.25,1){eo} \uput{0.15}[180](7.75,1){oo} \uput{0.15}[0](8.25,1){ee} \uput{0.15}[180](2,3){oo} \uput{0.15}[180](4,5){oo} } \psdots[linecolor=darkgreen] (1,0)% (3,0)% (5,0)% (7,0)% (9,0)% (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% {\tiny \uput{0.15}[90](1,2){eo} \uput{0.15}[90](3,4){eo} } \psdots[linecolor=gold] (2,0)(4,0)(6,0)(8,0)% (0,2)(2,2)% (2,4)(4,4)% {\tiny \uput{0.15}[-90](2,0){e} \uput{0.15}[-90](4,0){o} \uput{0.15}[-90](6,0){e} \uput{0.15}[-90](8,0){o} \uput{0.15}[135](0,2){e} \uput{0.15}[135](2,2){o} \uput{0.15}[135](2,4){e} \uput{0.15}[135](4,4){o} } } \rput(2.5,-7){ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(-2.5,-0.5)(-1.5,0.5) \psframe*(-4.5,-0.5)(-3.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(-1.5,0.5)(-0.5,1.5) \psframe*(-3.5,0.5)(-2.5,1.5) \psframe*(-5.5,0.5)(-4.5,1.5)
\psframe*(-0.5,1.5)(0.5,2.5) \psframe*(-2.5,1.5)(-1.5,2.5) \psframe*(-4.5,1.5)(-3.5,2.5) \psframe*(-1.5,2.5)(-0.5,3.5) \psframe*(-3.5,2.5)(-2.5,3.5) \psframe*(-2.5,3.5)(-1.5,4.5) } \psline(0,0)(0,1) \psline(-1,0)(-1.75,1) \psline(-2,0)(-2.25,1) \psline(-3,0)(-3.75,1) \psline(-4,0)(-4.25,1) \psline(0,2)(-0.25,3) \psline(1,2)(0.25,3) \psline(-0.75,2)(-1.75,3) \psline(-1.25,2)(-2.25,3) \psline(-2.75,2)(-3.75,3) \psline(-3.25,2)(-4.25,3) \psline(-0.75,4)(-1.75,5) \psline(-1.25,4)(-2.25,5) \psline(-4.75,2)(-5.25,2.5) \psline(-5.25,2)(-5.75,2.5) \psline(-2.75,4)(-3.25,4.5) \psline(-3.25,4)(-3.75,4.5) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1.5,2)(1,2) \psline(0,0)(-1,0) \psline(-2,0)(-3,0) \psline(-1.75,1)(-2.75,2) \psline(-2.25,1)(-3.25,2) \psline(-3.75,1)(-4.75,2) \psline(-4.25,1)(-5.25,2) \psline(-1.75,3)(-2.75,4) \psline(-2.25,3)(-3.25,4) \psline(0.25,3)(-0.75,4) \psline(-0.25,3)(-1.25,4) \psline(0,1)(-0.75,2) \pscurve(0,2)(-0.25,1.8)(-1,1.8)(-1.25,2) \psline(-4,0)(-4.5,0) \psline(-1.75,5)(-2.25,5.5) \psline(-2.25,5)(-2.75,5.5) \psline(-3.75,3)(-4.25,3.5) \psline(-4.25,3)(-4.75,3.5) } \pscurve{->}(-1,0)(-1.25,0.1)(-2,0.8)(-2.25,1) \pscurve{->}(-1.75,1)(-1.8,0.5)(-2,0) \psdots[linecolor=blue] (0,0)(0,2) {\tiny \uput{0.15}[-45](0,0){o} \uput{0.15}[-45](0,2){e} } \psdots[linecolor=red]% (0,1)% (-1.75,1)(-2.25,1)% (-3.75,1)(-4.25,1)% (0.25,3)(-0.25,3)% (-1.75,3)(-2.25,3)% (-3.75,3)(-4.25,3)% (-1.75,5)(-2.25,5)% {\tiny \uput{0.15}[0](-1.75,1){ee} \uput{0.15}[180](-2.25,1){oo} \uput{0.15}[0](-3.75,1){eo} \uput{0.15}[180](-4.25,1){oe} \uput{0.15}[0](0,1){eo} } \psdots[linecolor=darkgreen] (1,2)% (-1,0)(-3,0)% (-0.75,2)(-1.25,2)% (-2.75,2)(-3.25,2)% (-4.75,2)(-5.25,2)% (-0.75,4)(-1.25,4)% (-2.75,4)(-3.25,4)% {\tiny \uput{0.15}[-90](1,2){oo} } \psdots[linecolor=gold] (-2,0)(-4,0) {\tiny \uput{0.15}[-90](-2,0){o} \uput{0.15}[-90](-4,0){e} } } \rput(-8.5,-2){ {\psset{linecolor=lightgray} \psframe*(3.5,-2.5)(4.5,-1.5) \psframe*(2.5,-1.5)(3.5,-0.5) \psframe*(4.5,-1.5)(5.5,-0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(6.5,2.5)(7.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(5.5,3.5)(6.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) \psframe*(4.5,4.5)(5.5,5.5) } \psline(0.5,2)(1,2) \pscurve(2,2)(2.25,2.2)(3,2.2)(3.25,2) \psline(2.75,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2.75,0)(1.75,1) \psline(3.25,0)(2.25,1) \psline(4.75,0)(3.75,1) \psline(5.25,0)(4.25,1) \psline(2,4)(3,4) \pscurve(4,4)(4.25,4.2)(5,4.2)(5.25,4) \psline(4.75,4)(4,5) \psline(4.25,-1.5)(3.75,-1) \psline(4.75,-1.5)(4.25,-1) \psline(6.25,0.5)(5.75,1) \psline(6.75,0.5)(6.25,1) {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(3.75,-1)(2.75,0) \psline(4.25,-1)(3.25,0) \psline(5.25,-0.5)(4.75,0) \psline(5.75,-0.5)(5.25,0) \psline(7.25,1.5)(6.75,2) \psline(7.75,1.5)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) } \psdots[linecolor=red]% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (2,3)% 
(3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) (3.75,-1)(4.25,-1)% {\tiny \uput{0.15}[180](2,3){oo} \uput{0.15}[180](4,5){oo} } \psdots[linecolor=darkgreen] (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% (2.75,0)(3.25,0)% (4.75,0)(5.25,0)% {\tiny \uput{0.15}[90](1,2){eo} \uput{0.15}[90](3,4){eo} } \psdots[linecolor=gold] (2,2)% (2,4)(4,4)% {\tiny \uput{0.15}[135](2,2){o} \uput{0.15}[135](2,4){e} \uput{0.15}[135](4,4){o} } } \rput(-2.5,7){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(-2.5,-0.5)(-1.5,0.5) \psframe*(-4.5,-0.5)(-3.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(-1.5,0.5)(-0.5,1.5) \psframe*(-3.5,0.5)(-2.5,1.5) \psframe*(-5.5,0.5)(-4.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(-2.5,1.5)(-1.5,2.5) \psframe*(-4.5,1.5)(-3.5,2.5) \psframe*(-1.5,2.5)(-0.5,3.5) \psframe*(-3.5,2.5)(-2.5,3.5) \psframe*(-2.5,3.5)(-1.5,4.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(0,0)(0,1) \psline(-1,0)(-1.75,1) \psline(-2,0)(-2.25,1) \psline(-3,0)(-3.75,1) \psline(-4,0)(-4.25,1) \psline(0,2)(-0.25,3) \psline(1,2)(0.25,3) \psline(-0.75,2)(-1.75,3) \psline(-1.25,2)(-2.25,3) \psline(-2.75,2)(-3.75,3) \psline(-3.25,2)(-4.25,3) \psline(-0.75,4)(-1.75,5) \psline(-1.25,4)(-2.25,5) \psline(-4.75,2)(-5.25,2.5) \psline(-5.25,2)(-5.75,2.5) \psline(-2.75,4)(-3.25,4.5) \psline(-3.25,4)(-3.75,4.5) \pscurve{->}(-1,0)(-1.25,0.1)(-2,0.8)(-2.25,1) \pscurve{->}(-1.75,1)(-1.8,0.5)(-2,0) } \psline(1.5,2)(1,2) \psline(0,0)(-1,0) \psline(-2,0)(-3,0) \psline(-1.75,1)(-2.75,2) \psline(-2.25,1)(-3.25,2) \psline(-3.75,1)(-4.75,2) \psline(-4.25,1)(-5.25,2) \psline(-1.75,3)(-2.75,4) \psline(-2.25,3)(-3.25,4) \psline(0.25,3)(-0.75,4) \psline(-0.25,3)(-1.25,4) \psline(0,1)(-0.75,2) \pscurve(0,2)(-0.25,1.8)(-1,1.8)(-1.25,2) \psline(-4,0)(-4.5,0) \psline(-1.75,5)(-2.25,5.5) \psline(-2.25,5)(-2.75,5.5) \psline(-3.75,3)(-4.25,3.5) \psline(-4.25,3)(-4.75,3.5) \psdots[linecolor=gold] (0,0)(0,2) {\tiny \uput{0.15}[-45]{180}(0,0){e} \uput{0.15}[-45]{180}(0,2){o} } \psdots[linecolor=red]% (0,1)% (-1.75,1)(-2.25,1)% (-3.75,1)(-4.25,1)% (0.25,3)(-0.25,3)% (-1.75,3)(-2.25,3)% (-3.75,3)(-4.25,3)% (-1.75,5)(-2.25,5)% {\tiny \uput{0.15}[0]{180}(-1.75,1){oe} \uput{0.15}[180]{180}(-2.25,1){eo} \uput{0.15}[0]{180}(-3.75,1){oo} \uput{0.15}[180]{180}(-4.25,1){ee} \uput{0.15}[0]{180}(0,1){oo} } \psdots[linecolor=darkgreen] (1,2)% (-1,0)(-3,0)% (-0.75,2)(-1.25,2)% (-2.75,2)(-3.25,2)% (-4.75,2)(-5.25,2)% (-0.75,4)(-1.25,4)% (-2.75,4)(-3.25,4)% {\tiny \uput{0.15}[-90]{180}(1,2){eo} } \psdots[linecolor=blue] (-2,0)(-4,0) {\tiny \uput{0.15}[-90]{180}(-2,0){o} \uput{0.15}[-90]{180}(-4,0){e} } }} \rput(8.5,2){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(3.5,-2.5)(4.5,-1.5) \psframe*(2.5,-1.5)(3.5,-0.5) \psframe*(4.5,-1.5)(5.5,-0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(6.5,2.5)(7.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(5.5,3.5)(6.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) \psframe*(4.5,4.5)(5.5,5.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(0.5,2)(1,2) \pscurve(2,2)(2.25,2.2)(3,2.2)(3.25,2) \psline(2.75,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3)
\psline(7.25,2)(6.25,3) \psline(2.75,0)(1.75,1) \psline(3.25,0)(2.25,1) \psline(4.75,0)(3.75,1) \psline(5.25,0)(4.25,1) \psline(2,4)(3,4) \pscurve(4,4)(4.25,4.2)(5,4.2)(5.25,4) \psline(4.75,4)(4,5) \psline(4.25,-1.5)(3.75,-1) \psline(4.75,-1.5)(4.25,-1) \psline(6.25,0.5)(5.75,1) \psline(6.75,0.5)(6.25,1) } \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(3.75,-1)(2.75,0) \psline(4.25,-1)(3.25,0) \psline(5.25,-0.5)(4.75,0) \psline(5.75,-0.5)(5.25,0) \psline(7.25,1.5)(6.75,2) \psline(7.75,1.5)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psdots[linecolor=red]% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) (3.75,-1)(4.25,-1)% {\tiny \uput{0.15}[180]{180}(2,3){eo} \uput{0.15}[180]{180}(4,5){eo} } \psdots[linecolor=darkgreen] (1,2)% (2.75,2)(3.25,2)% (4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% (2.75,0)(3.25,0)% (4.75,0)(5.25,0)% {\tiny \uput{0.15}[90]{180}(1,2){oo} \uput{0.15}[90]{180}(3,4){oo} } \psdots[linecolor=blue] (2,2)% (2,4)(4,4)% {\tiny \uput{0.15}[135]{180}(2,2){e} \uput{0.15}[135]{180}(2,4){o} \uput{0.15}[135]{180}(4,4){e} } }} \rput(12.5,7){\psrotate(0,0){180}{ {\psset{linecolor=lightgray} \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(3.5,-0.5)(4.5,0.5) \psframe*(5.5,-0.5)(6.5,0.5) \psframe*(7.5,-0.5)(8.5,0.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(4.5,0.5)(5.5,1.5) \psframe*(6.5,0.5)(7.5,1.5) \psframe*(-0.5,1.5)(0.5,2.5) \psframe*(1.5,1.5)(2.5,2.5) \psframe*(3.5,1.5)(4.5,2.5) \psframe*(5.5,1.5)(6.5,2.5) \psframe*(0.5,2.5)(1.5,3.5) \psframe*(2.5,2.5)(3.5,3.5) \psframe*(4.5,2.5)(5.5,3.5) \psframe*(1.5,3.5)(2.5,4.5) \psframe*(3.5,3.5)(4.5,4.5) \psframe*(2.5,4.5)(3.5,5.5) } {\psset{linestyle=dashed,dash=2pt 1pt} \psline(1,0)(0,1) \psline(3,0)(2.25,1) \psline(5,0)(4.25,1) \psline(7,0)(6.25,1) \psline(9,0)(8.25,1) \psline(2,0)(1.75,1) \psline(4,0)(3.75,1) \psline(6,0)(5.75,1) \psline(8,0)(7.75,1) \psline(0,2)(1,2) \pscurve(2,2)(2.25,2.2)(3,2.2)(3.25,2) \psline(2.75,2)(2,3) \psline(4.75,2)(3.75,3) \psline(5.25,2)(4.25,3) \psline(6.75,2)(5.75,3) \psline(7.25,2)(6.25,3) \psline(2,4)(3,4) \pscurve(4,4)(4.25,4.2)(5,4.2)(5.25,4) \psline(4.75,4)(4,5) \pscurve{->}(5,0)(4.75,0.1)(4,0.8)(3.75,1) \pscurve{->}(9,0)(8.75,0.1)(8,0.8)(7.75,1) \pscurve{->}(4.25,1)(4.2,0.5)(4,0) \pscurve{->}(8.25,1)(8.2,0.5)(8,0) } \psline(1,0)(2,0) \psline(3,0)(4,0) \psline(5,0)(6,0) \psline(7,0)(8,0) \psline(0,1)(0,2) \psline(1.75,1)(1,2) \psline(2.25,1)(2,2) \psline(3.75,1)(2.75,2) \psline(4.25,1)(3.25,2) \psline(5.75,1)(4.75,2) \psline(6.25,1)(5.25,2) \psline(7.75,1)(6.75,2) \psline(8.25,1)(7.25,2) \psline(2,3)(2,4) \psline(3.75,3)(3,4) \psline(4.25,3)(4,4) \psline(5.75,3)(4.75,4) \psline(6.25,3)(5.25,4) \psline(4,5)(4,5.5) \psline(9,0)(9.5,0) \psdots[linecolor=red]% (0,1)% (1.75,1)(2.25,1)% (3.75,1)(4.25,1)% (5.75,1)(6.25,1)% (7.75,1)(8.25,1)% (2,3)% (3.75,3)(4.25,3)% (5.75,3)(6.25,3)% (4,5) {\tiny \uput{0.15}[180]{180}(1.75,1){ee} \uput{0.15}[0]{180}(2.25,1){oo} \uput{0.15}[180]{180}(3.75,1){eo} \uput{0.15}[0]{180}(4.25,1){oe} \uput{0.15}[180]{180}(5.75,1){ee} \uput{0.15}[0]{180}(6.25,1){oo} \uput{0.15}[180]{180}(7.75,1){eo} \uput{0.15}[0]{180}(8.25,1){oe} \uput{0.15}[180]{180}(2,3){eo} \uput{0.15}[180]{180}(4,5){eo} } \psdots[linecolor=darkgreen] (1,0)% (3,0)% (5,0)% (7,0)% (9,0)% (1,2)% (2.75,2)(3.25,2)%
(4.75,2)(5.25,2)% (6.75,2)(7.25,2)% (3,4)% (4.75,4)(5.25,4)% {\tiny \uput{0.15}[90]{180}(1,2){oo} \uput{0.15}[90]{180}(3,4){oo} } \psdots[linecolor=blue] (2,0)(4,0)(6,0)(8,0)% (0,2)(2,2)% (2,4)(4,4)% {\tiny \uput{0.15}[-90]{180}(2,0){e} \uput{0.15}[-90]{180}(4,0){o} \uput{0.15}[-90]{180}(6,0){e} \uput{0.15}[-90]{180}(8,0){o} \uput{0.15}[135]{180}(0,2){o} \uput{0.15}[135]{180}(2,2){e} \uput{0.15}[135]{180}(2,4){o} \uput{0.15}[135]{180}(4,4){e} } }} \end{pspicture} \caption{The last step of the calculation of the peculiar modules of $(2n,-(2m+1))$-pretzel tangles. Some of the vertices are labelled according to the parity of the indices of the generators they correspond to, where ``e'' stands for ``even'' and ``o'' for ``odd''. }\label{fig:PreResultPretzels} \end{figure} \begin{figure} \centering \psset{unit=1.2} \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}(-2.8,-1)(2.8,1) \rput(-1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline(-0.5,-0.5)(-0.75,0.5) \psline(0.5,-0.5)(-0.25,0.5) \pscurve{<-}(0.5,-0.5)(0.25,-0.4)(-0.5,0.3)(-0.75,0.5) \pscurve{<-}(-0.25,0.5)(-0.3,0)(-0.5,-0.5) \pscurve[linestyle=dotted,dotsep=1pt]{->}(-0.5,-0.5)(-0.25,-0.6)(0.25,-0.6)(0.5,-0.5) \psdots[linecolor=red](-0.75,0.5)(-0.25,0.5) \psdots[linecolor=darkgreen](0.5,-0.5) \psdots[linecolor=gold](-0.5,-0.5) \rput[t](0,-0.65){$h=p$} \rput[c](-0.25,-0.35){$p^3$} \rput[c](-0.45,0.6){$p^2$} } \rput(0,0){$=$} \rput(1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline(-0.5,-0.5)(-0.75,0.5) \psline(0.5,-0.5)(-0.25,0.5) \psdots[linecolor=red](-0.75,0.5)(-0.25,0.5) \psdots[linecolor=darkgreen](0.5,-0.5) \psdots[linecolor=gold](-0.5,-0.5) } \end{pspicture} \caption{}\label{fig:CalculationPretzelsBasicHomotopies1} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}(-2.8,-1)(2.8,1) \psset{linestyle=dashed,dash=4pt 2pt} \rput{180}(-1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline(-0.5,-0.5)(-0.75,0.5) \psline(0.5,-0.5)(-0.25,0.5) \pscurve{<-}(0.5,-0.5)(0.25,-0.4)(-0.5,0.3)(-0.75,0.5) \pscurve{<-}(-0.25,0.5)(-0.3,0)(-0.5,-0.5) \pscurve[linestyle=dotted,dotsep=1pt]{->}(-0.5,-0.5)(-0.25,-0.6)(0.25,-0.6)(0.5,-0.5) \psdots[linecolor=red](-0.75,0.5)(-0.25,0.5) \psdots[linecolor=darkgreen](0.5,-0.5) \psdots[linecolor=blue](-0.5,-0.5) \rput[b]{180}(0,-0.65){$h=q$} \rput[c]{180}(-0.25,-0.35){$q^3$} \rput[c]{180}(-0.45,0.6){$q^2$} } \rput(0,0){$=$} \rput{180}(1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline(-0.5,-0.5)(-0.75,0.5) \psline(0.5,-0.5)(-0.25,0.5) \psdots[linecolor=red](-0.75,0.5)(-0.25,0.5) \psdots[linecolor=darkgreen](0.5,-0.5) \psdots[linecolor=blue](-0.5,-0.5) } \end{pspicture} \caption{}\label{fig:CalculationPretzelsBasicHomotopies2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}(-2.8,-1)(2.8,1) \rput(-1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline(0.25,-0.5)(-0.5,0.5) \pscurve(-0.5,-0.5)(-0.25,-0.4)(0.5,-0.4)(0.75,-0.5) \pscurve{<-}(0.25,-0.5)(0,-0.55)(-0.25,-0.55)(-0.5,-0.5) \pscurve{->}(0.75,-0.5)(0.5,0)(0,0.4)(-0.5,0.5) \pscurve[linestyle=dotted,dotsep=1pt]{<-}(-0.5,-0.5)(-0.6,-0.25)(-0.6,0.25)(-0.5,0.5) \psdot[linecolor=red](-0.5,0.5) \psdot[linecolor=darkgreen](0.25,-0.5) \psdot[linecolor=darkgreen](0.75,-0.5) \psdot[linecolor=gold](-0.5,-0.5) \rput[r](-0.65,0.3){$h=p$} \rput[c](0.5,0.4){$p^2$} \rput[t](-0.125,-0.65){$p$} } \rput(0,0){$=$}
\rput(1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline{->}(0.25,-0.5)(-0.5,0.5) \pscurve{->}(-0.5,-0.5)(-0.25,-0.4)(0.5,-0.4)(0.75,-0.5) \pscurve(0.25,-0.5)(0,-0.55)(-0.25,-0.55)(-0.5,-0.5) \pscurve(0.75,-0.5)(0.5,0)(0,0.4)(-0.5,0.5) \psdot[linecolor=red](-0.5,0.5) \psdot[linecolor=darkgreen](0.25,-0.5) \psdot[linecolor=darkgreen](0.75,-0.5) \psdot[linecolor=gold](-0.5,-0.5) \rput[r](-0.3,0){$p^2$} \rput[b](0.35,-0.3){$p$} } \end{pspicture} \caption{}\label{fig:CalculationPretzelsBasicHomotopies3} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}(-2.8,-1)(2.8,1) \psset{linestyle=dashed,dash=4pt 2pt} \rput(0,0){$=$} \rput{180}(1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline{->}(0.25,-0.5)(-0.5,0.5) \pscurve{->}(-0.5,-0.5)(-0.25,-0.4)(0.5,-0.4)(0.75,-0.5) \pscurve(0.25,-0.5)(0,-0.55)(-0.25,-0.55)(-0.5,-0.5) \pscurve(0.75,-0.5)(0.5,0)(0,0.4)(-0.5,0.5) \psdot[linecolor=red](-0.5,0.5) \psdot[linecolor=darkgreen](0.25,-0.5) \psdot[linecolor=darkgreen](0.75,-0.5) \psdot[linecolor=blue](-0.5,-0.5) \rput[l]{180}(-0.3,0){$q^2$} \rput[t]{180}(0.35,-0.3){$q$} } \rput{180}(-1.2,0){ {\psset{linecolor=lightgray} \psframe*(0,0)(1,1) \psframe*(0,0)(-1,-1) } \psline(0.25,-0.5)(-0.5,0.5) \pscurve(-0.5,-0.5)(-0.25,-0.4)(0.5,-0.4)(0.75,-0.5) \pscurve{<-}(0.25,-0.5)(0,-0.55)(-0.25,-0.55)(-0.5,-0.5) \pscurve{->}(0.75,-0.5)(0.5,0)(0,0.4)(-0.5,0.5) \pscurve[linestyle=dotted,dotsep=1pt]{<-}(-0.5,-0.5)(-0.6,-0.25)(-0.6,0.25)(-0.5,0.5) \psdot[linecolor=red](-0.5,0.5) \psdot[linecolor=darkgreen](0.25,-0.5) \psdot[linecolor=darkgreen](0.75,-0.5) \psdot[linecolor=blue](-0.5,-0.5) \rput[l]{180}(-0.65,0.3){$h=q$} \rput[c]{180}(0.5,0.4){$q^2$} \rput[b]{180}(-0.125,-0.65){$q$} } \end{pspicture} \caption{}\label{fig:CalculationPretzelsBasicHomotopies4} \end{subfigure} \caption{The morphisms $h$ for the final step of the proof of Theorem~\ref{thm:pretzeltangleCalc}.}\label{fig:CalculationPretzelsBasicHomotopies} \end{figure} \begin{corollary}\label{cor:pretzeltangleMutation} Mutation about \((2n,-(2m+1))\)-pretzel tangles for \(n,m>0\), oriented as in Figure~\ref{fig:pretzeltangle2nm2mp1}, preserves bigraded link Floer homology, after identifying the Alexander gradings of the two open strands. If we reverse the orientation of one of the two strands, mutation in general only preserves \(\delta\)-graded link Floer homology. \end{corollary} \begin{proof}[of Corollary~\ref{cor:pretzeltangleMutation}] The invariants of the \((2n,-(2m+1))\)-pretzel tangles simply have the desired symmetry. This can be seen as follows: in terms of the loops on our infinite chessboard, a relabelling of the sites as in Theorem~\ref{thm:GeneralMutationInvariance} corresponds to a recolouring of the vertices: $\textcolor{red}{\bullet}\leftrightarrow\textcolor{darkgreen}{\bullet}$ and $\textcolor{blue}{\bullet}\leftrightarrow\textcolor{gold}{\bullet}$. An orientation reversal of both tangle strands corresponds to a rotation of the chessboard by $\pi$. After identifying the Alexander gradings of the two tangle strands, all generators on the diagonals from bottom-left to top-right have the same Alexander grading. Finally, if we reverse the orientation of one strand, we do not need to rotate the curves, but the generators in the same Alexander gradings now sit on the diagonals that go from top-left to the bottom-right. 
\end{proof} \begin{proof}[of Theorem~\ref{thm:pretzeltangleCalc}] The generators of the peculiar module can already be determined from the decategorified invariants and from the observation of two obvious differentials that can be cancelled as in Example~\ref{exa:HFTdpretzeltangle}. Thus, the vertices of the graphs in Figure~\ref{fig:ResultPretzels} are fixed. What we need to decide is how they are connected. Because of the restrictions given by the gradings, in most cases there is only one way to connect them such that the result is a peculiar module. The only question is how the diagonal strings of red and green generators connect the generators on the top left to the generators on the bottom right of each of the subfigures of Figure~\ref{fig:ResultPretzels}. For this, we are going to apply the algorithm from Corollary~\ref{cor:PeculiarModulesFromNiceDiagrams}, setting $p_1=0$ and $q_2=0$. We start with a Heegaard diagram obtained by glueing together two Heegaard diagrams for the rational tangles with $2n$ and $-(2m+1)$ twists, respectively. We can niceify the diagram by doing two handleslides of the $\beta$-curve for the $2n$-twist rational tangle across the other $\beta$-curve. The result is shown in Figure~\ref{fig:CalculationPretzelsStep1HD}. The generators of this diagram are shown in Figure~\ref{fig:CalculationPretzelsStep1Gens}. The generators in the second and third rows are the ones created during the first and second handleslide, respectively, and thus can be cancelled along the identity arrows connecting those generators of the same site and with the same indices. Note that the nice Heegaard diagram has the same symmetry as the tangle, and thus the complex inherits this symmetry. More precisely, the Heegaard diagram remains invariant under the following operation: in the names of the generators, exchange the letters $d$ and $b$, exchange underlining and overlining, replace $i$ by $2n+1-i$, and $j$ and $k$ by $2m+2-j$ and $2m+2-k$, respectively. Finally, in the algebra, exchange $p_2$ and $q_1$, as well as $p_3$ and $q_4$, and $q_3$ and $p_4$. This symmetry corresponds to mutation about the horizontal axis, which leaves the tangle invariant up to exchanging the two sites $b$ and $d$. We will use this symmetry in the following to simplify some parts of the computation. In Figures~\ref{fig:CalculationPretzelsAStep2generic} and \ref{fig:CalculationPretzelsAStep2MaximumP}, we compute all differentials that start/end at some selected generators, which are enclosed in those figures by boxes. Note that in all figures, generators in the same shaded regions share the same Alexander bigrading. Since the Heegaard diagram is nice, the only contributing domains are bigons and squares, so the computation is purely combinatorial and straightforward. We therefore ask the reader to check for themselves that the differentials starting/ending at all marked generators in those figures are indeed all included. (There are two observations that one might find useful when determining the contributing differentials: firstly, the only bigons in the diagram contribute arrows labelled by the elementary algebra elements $q_3$ and $p_4$; secondly, all other contributions come from squares, which necessarily have boundary on both $\beta$-curves.) Next, we consider the effect of cancelling generators, first those corresponding to undoing the handleslides and then any remaining identity arrows. Obviously, the pictures in Figure~\ref{fig:CalculationPretzelsAStep2generic} do not change.
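Before going through the individual figures, let us briefly recall what cancellation does to the remaining arrows; this is the usual Gaussian elimination over $\mathbb{F}_2$, recorded here only as a reminder. If the differential has a component $1\co x\rightarrow y$ between two generators, the complex is homotopy equivalent to the complex on the remaining generators whose differential agrees with the old one, except that every pair of arrows $u\co e\rightarrow y$ and $v\co x\rightarrow f$ contributes an additional arrow \[uv\co e\rightarrow f,\] whose label is the product of the two labels, taken in path order.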
In Figure~\ref{fig:CalculationPretzelsStep2BottomEnd}, cancellation only contributes an arrow $p_{23}\co\textcolor{darkgreen}{x_{1}c_{2m+1}}\rightarrow\textcolor{red}{a_{1}y_{2m+1}}$. The only possible arrow labelled by a power of $p$ leaving $\textcolor{gold}{d'y_{2m+1}}$ can go to $\textcolor{red}{a_{1}y_{2m+1}}$. Because of the $\partial^2$-relation in the peculiar module, this arrow has to be there. Similarly, we can argue for Figure~\ref{fig:CalculationPretzelsStep2BottomOdd}. Cancellation contributes an arrow $p_{23}\co\textcolor{darkgreen}{x_{1}c_{2j+1}}\rightarrow\textcolor{red}{a_{1}y_{2j+1}}$. It might also contribute another arrow, $p_{3}\co\textcolor{darkgreen}{x_{1}c_{2j+1}}\rightarrow\textcolor{gold}{d'y_{2j+2}}$, stemming from the arrow $q_{4}\co\textcolor{darkgreen}{\overline{y}_{1}c_{2j+2}}\rightarrow\textcolor{gold}{\overline{d}y_{2j+3}}$; however, the $\partial^2$-relation at $\textcolor{darkgreen}{x_{1}c_{2j+1}}$ in the peculiar module tells us that this does not happen. Again, there has to be a contribution $p_{234}\co\textcolor{gold}{d'y_{2j+1}}\rightarrow\textcolor{red}{a_{1}y_{2j+1}}$. In Figure~\ref{fig:CalculationPretzelsStep2TopEven}, there is no arrow $\textcolor{darkgreen}{x_{1}c_{2j}}\rightarrow\textcolor{red}{a_{1}y_{2j}}$, but again, the cancellation must contribute an arrow $p_{234}\co\textcolor{gold}{d'y_{2j}}\rightarrow\textcolor{red}{a_{1}y_{2j}}$. Finally, in Figure~\ref{fig:CalculationPretzelsStep2GoldCancel}, we cancel two arrows, namely $\textcolor{gold}{x_1d}\rightarrow\textcolor{gold}{d'y_1}$ and $\textcolor{darkgreen}{\overline{y}_1c_2}\rightarrow\textcolor{darkgreen}{\overline{c}_2y_1}$. This only contributes one arrow, namely $p_{23}\co\textcolor{darkgreen}{x_{1}c_1}\rightarrow\textcolor{red}{a_1y_{1}}$. Similarly, we argue for those generators in minimal Alexander grading corresponding to the colour $t_1$. Alternatively, we may apply the symmetry of the Heegaard diagram. The corresponding subcomplexes after cancellation are shown in Figure~\ref{fig:CalculationPretzelsAStep2MinimumP}. We have now obtained a reduced complex. To this, we add some arrows labelled by basic algebra elements that lie in the kernel of $\Ad\rightarrow\Ad/(p_1=0=q_2)$ to turn the complex into the peculiar module shown in Figure~\ref{fig:PreResultPretzels}. To see that this is indeed the result of putting all pieces together, one might for example start at the ends of Figure~\ref{fig:CalculationPretzelsStep2GoldCancel} and~\ref{fig:CalculationPretzelsStep2BlueCancel} which sit at the bottom left and top right corners of Figure~\ref{fig:PreResultPretzels}, respectively, and then connect these subcomplexes. To see how the diagonal strings of red and green generators connect the generators on the top left to the generators on the bottom right of Figure~\ref{fig:PreResultPretzels}, one can consider the parity of the generator indices, some of which are marked in the figure. We can now apply the Clean-Up Lemma (\ref{lem:AbstractCleanUp}) using the morphisms $h$ shown in Figure~\ref{fig:CalculationPretzelsBasicHomotopies} to obtain the desired result. \end{proof} \section*{Introduction}\label{sec:intro}\addcontentsline{toc}{section}{Introduction} \subsection{Peculiar modules} Let $L$ be an oriented link in the 3-sphere~$S^3$. Consider an embedded closed 3-ball~$B^3\subset S^3$ whose boundary intersects~$L$ transversely. Then, modulo a parametrization of the boundary $\partial B^3$, the embedding $L\cap B^3\hookrightarrow B^3$ is essentially what we call an oriented tangle. 
In \cite{HDsForTangles}, I introduced a set of Alexander polynomials $\nabla_T^s$ and a Heegaard Floer theory $\HFT(T)$ for such tangles~$T$. They should be regarded as generalisations of the classical multivariate Alexander polynomial \cite{Alexander} and Ozsváth and Szabó's and J.\,Rasmussen's knot and link Floer homology \cite{OSHFK,Jake,OSHFL}, respectively. Indeed, both tangle invariants have similar properties to their corresponding link invariants. In particular, the graded Euler characteristic of $\HFT(T)$ recovers the polynomial invariants $\nabla_T^s$. Moreover, the polynomials $\nabla_T^s$ satisfy a simple glueing theorem which allows one to prove results about the classical multivariate Alexander polynomial of links, such as invariance under Conway mutation~\cite[Corollary~3.6]{HDsForTangles}. Unfortunately, we do not have a similar glueing theorem for the categorified invariants $\HFT(T)$. The main objective of this paper is to resolve this problem in the case of 4-ended tangles, ie tangles with four ends on $\partial B^3$, see Figure~\ref{fig:INTRO2m3pt}. For this purpose, we upgrade the tangle Floer homology $\HFT(T)$ of 4-ended tangles $T$ to an invariant which we call the peculiar module of $T$ and denote by $\CFTd(T)$. This is done by adding some more structure maps. In fact, we construct an even more general invariant $\CFTminus(T):=\CFTminus(T,M)$, a generalised peculiar module, for tangles $T$ in homology 3-balls $M$ with spherical boundary. As algebraic objects, both generalised and ordinary peculiar modules are curved type~D structures over certain algebras, the generalised and ordinary peculiar algebras $\Aminus$ and $\Ad$, respectively. The algebra~$\Ad$ is a quotient of $\Aminus$, obtained by setting certain variables equal to 0. Similarly, $\CFTd(T)$ can be recovered from $\CFTminus(T)$, just as the hat version $\CFL$ of link Floer homology can be recovered from its $-$-version $\CFLminus$. Both generalised and ordinary peculiar modules are Heegaard Floer type invariants and, as such, rely on some choice of Heegaard diagram. So the first goal is to prove that the invariants are independent of this choice. However, this follows essentially from multi-pointed Heegaard Floer theory. \begin{theorem}[(\ref{thm:PecMod})]\label{thm:INTROPecMod} Given a 4-ended oriented tangle \(T\) in a homology 3-ball with spherical boundary, the bigraded chain homotopy types of \(\CFTminus(T)\) and \(\CFTd(T)\) are invariants of~\(T\).
\end{theorem} \begin{wrapfigure}{r}{0.3333\textwidth} \centering \psset{unit=0.6} \begin{pspicture}[showgrid=false](-4.2,-3.1)(2.2,3.1) \psset{linewidth=\stringwidth} \psecurve{c-c}(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve{c-c}(2,2)(0.97,2.24)(0,2)(-0.75,1)(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1) \psecurve{c-c}(-6,1.5)(-3.3,1.85)(-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psecurve[linecolor=white,linewidth=\stringwhite](0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5) \psecurve{c-c}(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5) \psecurve[linecolor=white,linewidth=\stringwhite](0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve[linecolor=white,linewidth=\stringwhite](0,2)(-0.75,1)(0.75,-1)(0,-2) \psecurve[linecolor=white,linewidth=\stringwhite](-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1) \psecurve{c-c}(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve{c-c}(0,2)(-0.75,1)(0.75,-1)(0,-2) \psecurve{c-c}(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1) \psecurve[linecolor=white,linewidth=\stringwhite](-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psecurve{c-c}(-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \pscircle[linestyle=dotted](-1,0){3} \end{pspicture} \caption{A diagram of a 4-ended tangle; in this case, the $(2,-3)$-pretzel tangle $T_{2,-3}$.}\label{fig:INTRO2m3pt} \vspace{-35pt} \end{wrapfigure} We do not have a glueing theorem for the generalised peculiar modules \(\CFTminus(T)\), except that one can recover $\CFLminus$ of certain closures of the tangle $T$ from \(\CFTminus(T)\) (see Remark~\ref{rem:LazyClosing}). Nonetheless, we do have a glueing formula for peculiar modules \(\CFTd(T)\). Its proof relies on Zarev's Glueing Theorem for his bordered sutured invariants~\cite{ZarevThesis} and an identification of some structure maps of certain bordered sutured invariants for tangles and peculiar modules. The precise statement of our Glueing Theorem uses the $\boxtimes$-tensor product between type~A and type~D structures familiar from bordered Heegaard Floer homology; for details, see Definition~\ref{def:PairingTypeDandA}.
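To orient the reader, let us recall the shape of this operation in its simplest instance, namely the pairing of a right type~A structure $(N,\{m_{k+1}\}_{k\geq 0})$ with a left type~D structure $(M,\delta^1)$ over the same algebra, at least one of which is bounded: the vector space $N\otimes M$ becomes a chain complex $N\boxtimes M$ with differential \[\partial^{\boxtimes}(n\otimes x)=\sum_{k\geq 0}\left(m_{k+1}\otimes\operatorname{id}\right)\left(n\otimes\delta^{k}(x)\right),\] where $\delta^{k}$ denotes the $k$-fold iterate of $\delta^1$; boundedness guarantees that this sum is finite. The pairing with the type~AA structure $\mathcal{P}$ in the Glueing Theorem below is a two-sided version of this construction.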
\begin{wrapfigure}{r}{0.3333\textwidth} \centering \psset{unit=0.2} \vspace*{20pt} \begin{pspicture}[showgrid=false](-11,-7)(11,7) \rput{-45}(0,0){ \pscircle[linecolor=lightgray](4,4){2.5} \pscircle[linecolor=lightgray](-4,-4){2.5} \psline[linecolor=white,linewidth=\stringwhite](1,-4)(-2,-4) \psline[linewidth=\stringwidth,linecolor=gray](1,-4)(-2,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,1)(-4,-2) \psline[linewidth=\stringwidth,linecolor=gray](-4,1)(-4,-2) \psline[linecolor=white,linewidth=\stringwhite](-6,-4)(-9,-4) \psline[linewidth=\stringwidth,linecolor=gray](-6,-4)(-9,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,-6)(-4,-9) \psline[linewidth=\stringwidth,linecolor=gray](-4,-6)(-4,-9) \psline[linecolor=white,linewidth=\stringwhite](-1,4)(2,4) \psline[linewidth=\stringwidth,linecolor=gray](-1,4)(2,4) \psline[linecolor=white,linewidth=\stringwhite](4,-1)(4,2) \psline[linewidth=\stringwidth,linecolor=gray](4,-1)(4,2) \psline[linecolor=white,linewidth=\stringwhite](6,4)(9,4) \psline[linewidth=\stringwidth,linecolor=gray](6,4)(9,4) \psline[linecolor=white,linewidth=\stringwhite](4,6)(4,9) \psline[linewidth=\stringwidth,linecolor=gray](4,6)(4,9) \pscircle[linestyle=dotted,linewidth=\stringwidth](4,4){5} \pscircle[linestyle=dotted,linewidth=\stringwidth](-4,-4){5} \psecurve[linewidth=\stringwidth]{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve[linewidth=\stringwidth]{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve[linewidth=\stringwidth]{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwidth]{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) \rput{45}(-4,-4){$T_1$} \rput{45}(4,4){$T_2$} } \end{pspicture} \caption{A link obtained from two 4-ended tangles.}\label{fig:INTROglueing2tangles} \vspace*{10pt} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \begin{theorem}[(\ref{thm:CFTdGeneralGlueing}, \ref{prop:reversedmirror})]\label{thm:INTROCFTdGeneralGlueing} Let \(T_1\) and \(T_2\) be two 4-ended tangles and \(L\) the link obtained by glueing them together as illustrated in Figure~\ref{fig:INTROglueing2tangles}. Then the link Floer homology \(\HFL(L)\) can be computed from \(\CFTd(T_1)\) and \(\CFTd(T_2)\). More precisely, there exists a bounded, strictly unital type~AA structure \(\mathcal{P}\) such that \[\CFL(L)\otimes V^{i}\cong\rr(\CFTd(T_1))\boxtimes\,\mathcal{P}\boxtimes\,\CFTd(T_2),\] where \(V\) is some 2-dimensional vector space, \(i\in\{0,1\}\) and $\rr(\cdot)$ is the operation on peculiar modules which reverses Alexander gradings (see Definition~\ref{def:reversedmirror}). \end{theorem} \subsection{Classification of peculiar modules} In sections~\ref{sec:classification} and~\ref{sec:glueingrevisited}, we classify peculiar modules in terms of immersed curves on the 4-punctured sphere. \begin{definition}[(\ref{def:curves})]\label{def:INTROcurves} A \textbf{loop} on the 4-punctured sphere $S=S^2\smallsetminus 4D^2$ is a pair $(\gamma, X)$, where $\gamma$ is an immersion of an oriented circle into $S$ representing a non-trivial primitive element of $\pi_1(S)$ and $X\in \GL_n(\mathbb{F}_2)$ for some integer $n$. For such loops $(\gamma, X)$, we call $X$ the \textbf{local system} of the loop. A \textbf{collection of loops} is a set of loops \(\{(\gamma_i,X_i)\}_{i\in I}\) such that the immersed curves $\gamma_i$ are pairwise non-homotopic. 
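Note that $\GL_1(\mathbb{F}_2)=\{(1)\}$, so every loop admits exactly one 1-dimensional local system; this is what is meant by \emph{the} unique 1-dimensional local system in the figures and statements below.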
\end{definition} \begin{theorem}[(\ref{exa:pqModSpecialCaseofCC}, \ref{cor:SplittingCatsForEquivalenceComplexes}, \ref{thm:PairingMorLagrangianFH}, \ref{thm:CompleteClassification}, \ref{thm:ImmersedCurveInvariants})]\label{thm:classificationPecMod} With every peculiar module \((C,\partial)\), we can associate a collection of loops \(L(C,\partial)=\{(\gamma_i,X_i)\}_{i\in I}\) such that if \((C',\partial')\) is another peculiar module with \(L(C',\partial')=\{(\gamma'_{i'},X'_{i'})\}_{i'\in I'}\), then \((C,\partial)\) and \((C',\partial')\) are homotopic iff there is a bijection \(\iota\co I\rightarrow I'\) such that \(\gamma_i\)~is homotopic to~\(\gamma'_{\iota(i)}\) and \(X_i\)~is similar to~\(X'_{\iota(i)}\) for all $i\in I$. Moreover, the homology of the space of morphisms between two peculiar modules is isomorphic to the Lagrangian intersection Floer homology of their associated collections of immersed curves: \[H_\ast(\Mor((C,\partial),(C',\partial')))\cong\LagrangianFH(L(C,\partial),L(C',\partial')).\] \end{theorem} \begin{wrapfigure}{r}{0.3333\textwidth}\centering \psset{unit=1.4} \begin{pspicture}(-1.6,-1.7)(1.6,1.7) \psecurve(-1.2,0.95)(0,1.5)(1.5,1)(-1.16,-1.13)(1.2,1.05)(-1.2,0.95)(0,1.5)(1.5,1) \psecurve[linecolor=red](1.3,0.9)(0,1.6)(-1.3,0.9)(1.4,1)(0,0.4)(-1.4,1)(1.3,0.9)(0,1.6)(-1.3,0.9) \psrotate(0,0){180}{ \psecurve[linecolor=darkgreen](1.3,0.9)(0,1.6)(-1.3,0.9)(1.4,1)(0,0.4)(-1.4,1)(1.3,0.9)(0,1.6)(-1.3,0.9) } \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} \caption{The three loops of $L_{T_{2,-3}}$ (with the unique 1-dimensional local systems) on the 4-punctured sphere for the $(2,-3)$-pretzel tangle $T_{2,-3}$ from Figure~\ref{fig:INTRO2m3pt}. }\label{fig:INTROmutationexamplefinalresult} \vspace*{-30pt} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \begin{definition}[(\ref{def:ImmersedCurveInvariantspqMod})] In particular, with any 4-ended tangle~\(T\) in a homology 3-ball $M$ with spherical boundary, we can associate a collection of loops, denoted by $L_T:=L_{T,M}$, which is a tangle invariant up to homotopy of the underlying curves and similarity of the local systems. \end{definition} \begin{question}\label{que:GeometricMeaningOfTheNumberOfCurves} The number of curves in $L_T$ is obviously a tangle invariant. What is its geometric meaning? \end{question} The proof of Theorem~\ref{thm:classificationPecMod} is based on an algorithm due to Hanselman, Rasmussen and Watson~\cite{HRW}, which they use to classify bordered Heegaard Floer invariants for 3-manifolds with torus boundary. There are striking similarities between their bordered Heegaard Floer invariants and peculiar modules, and it would be interesting to see if there exists a closer connection between them apart from their formal properties. Theorem~\ref{thm:classificationPecMod} allows us to restate the Glueing Theorem as follows.
\begin{corollary}[(\ref{thm:CFTdGlueingAsMorphism})]\label{cor:INTROCFTdGlueingAsMorphism} With the same notation as in Theorems~\ref{thm:INTROCFTdGeneralGlueing} and~\ref{thm:classificationPecMod}, $$\HFL(L)\otimes V^{i}\cong H_\ast(\Mor(\CFTd(\mr(T_1)),\CFTd(T_2)))\cong\LagrangianFH(L_{\mr(T_1)},L_{T_2}),$$ where \(\mr(T_1)\) denotes the reversed mirror of \(T_1\) (see Definition~\ref{def:reversedmirror}). \end{corollary} Section~\ref{sec:glueingrevisited} contains a variety of examples illustrating this version of the Glueing Theorem. We actually prove Theorem~\ref{thm:classificationPecMod} for all categories of curved complexes over arbitrary marked surfaces, see section~\ref{sec:classification}. As a consequence, we can show that by passing to certain quotients of the peculiar algebra $\Ad$, we do not lose information. This rather abstract observation has the following very practical consequence. \begin{theorem}[(\ref{cor:PeculiarModulesFromNiceDiagrams})]\label{thm:INTROPeculiarModulesFromNiceDiagrams} Peculiar modules for 4-ended tangles can be computed combinatorially. \end{theorem} We illustrate this result in section~\ref{subsec:pretzels} by computing $\CFTd(T)$ for an infinite family of 2-stranded pretzel tangles. Moreover, the general algorithm is implemented in the Mathematica package~\cite{PQM.m}, which can be used to compute the peculiar module and the corresponding immersed curves of any 4-ended tangle. \begin{definition}\label{def:INTROlooptype} Inspired by \cite[Definition~3.2]{HanselmanWatson}, we call a tangle $T$ \textbf{loop-type} if all local systems in $L_T$ are similar to permutation matrices. \end{definition} All tangles for which I have so far computed the invariants are loop-type. This prompts the following question, whose counterpart for 3-manifolds with torus boundary is likewise open~\cite[section~2.4]{HRW}. \begin{question}\label{que:LocalSystemsForTangles} Are all tangles loop-type? \end{question} \subsection{Applications} Peculiar modules detect rational tangles. In fact, this is true for any invariant of 4-ended tangles for which there exists a glueing theorem recovering link Floer homology. This is an easy consequence of unlink detection for link Floer homology, and rational tangle detection should be regarded as the corresponding analogue for invariants of 4-ended tangles. However, the description of peculiar modules of rational tangles is particularly simple, and hence so is rational tangle detection for our invariants: \begin{theorem}[(\ref{thm:CFTdDetectsRatTan})]\label{thm:INTROCFTdDetectsRatTan} A 4-ended tangle \(T\) in the 3-ball is rational iff \(L_T\) is a single \textit{embedded} loop with the unique 1-dimensional local system. \end{theorem} As another application, we reprove a result originally due to Manolescu~\cite{Manolescu}, namely the existence of an unoriented skein exact sequence, see Theorem~\ref{thm:ResolutionExactTriangle}. Similarly, we obtain the following slight generalisation of Ozsváth and Szabó's oriented skein exact sequence \cite{OSHFK}.
\begin{wrapfigure}{r}{0.3333\textwidth} \centering \vspace*{10pt} \psset{unit=0.35} \begin{subfigure}[b]{0.116\textwidth}\centering $n\left\{\raisebox{-1cm}{ \begin{pspicture}(-1.1,-3.1)(1.1,3.1) \psset{linewidth=\stringwidth} \psecurve(1,5)(-1,3)(1,1)(-1,-1) \psecurve[linecolor=white,linewidth=\stringwhite](-1,5)(1,3)(-1,1)(1,-1) \psecurve(-1,5)(1,3)(-1,1)(1,-1) \psline[linestyle=dotted,dotsep=0.4](0,-0.6)(0,0.6) \psecurve(-1,-5)(1,-3)(-1,-1)(1,1) \psecurve[linecolor=white,linewidth=\stringwhite](1,-5)(-1,-3)(1,-1)(-1,1) \psecurve(1,-5)(-1,-3)(1,-1)(-1,1) \end{pspicture}}\right. $ \caption{$T_{n}$}\label{fig:INTROOrientedSkeinRelationTn} \end{subfigure} \begin{subfigure}[b]{0.116\textwidth}\centering $n\left\{\raisebox{-1cm}{ \begin{pspicture}(-1.1,-3.1)(1.1,3.1) \psset{linewidth=\stringwidth} \psecurve(-1,5)(1,3)(-1,1)(1,-1) \psecurve[linecolor=white,linewidth=\stringwhite](1,5)(-1,3)(1,1)(-1,-1) \psecurve(1,5)(-1,3)(1,1)(-1,-1) \psline[linestyle=dotted,dotsep=0.4](0,-0.6)(0,0.6) \psecurve(1,-5)(-1,-3)(1,-1)(-1,1) \psecurve[linecolor=white,linewidth=\stringwhite](-1,-5)(1,-3)(-1,-1)(1,1) \psecurve(-1,-5)(1,-3)(-1,-1)(1,1) \end{pspicture}}\right. $ \caption{$T_{-n}$}\label{fig:INTROOrientedSkeinRelationTmn} \end{subfigure} \begin{subfigure}[b]{0.085\textwidth}\centering $\left.\raisebox{-1cm}{ \begin{pspicture}(-1.1,-3.1)(1.1,3.1) \psset{linewidth=\stringwidth} \psecurve(-2,6)(-1,3)(-1,-3)(-2,-6) \psecurve(2,6)(1,3)(1,-3)(2,-6) \end{pspicture}}\right. $ \caption{$T_0$}\label{fig:INTROOrientedSkeinRelationT0} \end{subfigure} \vspace*{-20pt} \caption{Basic tangles.}\label{fig:INTROOrientedSkeinRelation} \vspace*{-50pt} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \begin{theorem}[(\ref{thm:nTwistSkeinRelation}, see also \ref{rem:nTwistSkeinRelation})]\label{thm:INTROnTwistSkeinRelation} Let \(T_n\) be the positive \(n\)-twist tangle, \(T_{-n}\) the negative \(n\)-twist tangle and \(T_0\) the trivial tangle, see Figure~\ref{fig:INTROOrientedSkeinRelation}. Then there is an exact triangle $$\begin{tikzcd}[row sep=0.9cm, column sep=-0.5cm] \CFTd(T_{-n}) \arrow{rr} & & \CFTd(T_{n}) \arrow{dl} \\ & \CFTd(T_{0})\otimes V \arrow{lu} \end{tikzcd}$$ where \(V\) is a 2-dimensional vector space. If the tangles are oriented and coloured consistently, one obtains (bi)graded versions of this triangle. Furthermore, it gives rise to an exact triangle relating the (appropriately stabilised) link Floer homologies of links that differ in these three tangles. 
\end{theorem} \begin{wrapfigure}{r}{0.3333\textwidth} \centering\vspace*{-5pt} {\psset{unit=0.2} \begin{pspicture}(-12,-5.5)(12,5.5) \rput{-45}(-1,0){ \pscircle[linecolor=gray](-4,-4){2.5} \psline[linecolor=white,linewidth=\stringwhite](1,-4)(-2,-4) \psline[linewidth=\stringwidth,linecolor=black](1,-4)(-2,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,1)(-4,-2) \psline[linewidth=\stringwidth,linecolor=black](-4,1)(-4,-2) \psline[linecolor=white,linewidth=\stringwhite](-6,-4)(-9,-4) \psline[linewidth=\stringwidth,linecolor=black](-6,-4)(-9,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,-6)(-4,-9) \psline[linewidth=\stringwidth,linecolor=black](-4,-6)(-4,-9) \pscircle[linestyle=dotted,linewidth=\stringwidth](-4,-4){5} \rput{45}(-4,-4){$R$} } \rput(0,0){$\rightarrow$} \rput{-45}(1,0){ \pscircle[linecolor=gray](4,4){2.5} \psline[linecolor=white,linewidth=\stringwhite](-1,4)(2,4) \psline[linewidth=\stringwidth,linecolor=black](-1,4)(2,4) \psline[linecolor=white,linewidth=\stringwhite](4,-1)(4,2) \psline[linewidth=\stringwidth,linecolor=black](4,-1)(4,2) \psline[linecolor=white,linewidth=\stringwhite](6,4)(9,4) \psline[linewidth=\stringwidth,linecolor=black](6,4)(9,4) \psline[linecolor=white,linewidth=\stringwhite](4,6)(4,9) \psline[linewidth=\stringwidth,linecolor=black](4,6)(4,9) \pscircle[linestyle=dotted,linewidth=\stringwidth](4,4){5} \rput{-135}(4,4){$R$} } \end{pspicture} \caption{Conway mutation.}\label{fig:MutationIntro}}\bigskip \end{wrapfigure} \myfixwrapfigdouble \begin{definition}[(Conway mutation)]\label{def:INTROmutation} Given a link $L$ in $S^3$, let $L'$ be the link obtained by cutting out a tangle diagram~$R$ with four ends from a diagram of~$L$ and glueing it back in after a half-rotation, see Figure~\ref{fig:MutationIntro} for an illustration. We say $L'$ is a \textbf{Conway mutant} of~$L$ and we call~$R$ the \textbf{mutating tangle} in this mutation. If $L$ is oriented, we choose an orientation of $L'$ that agrees with the one for~$L$ outside of~$R$. If this means that we need to reverse the orientation of the two open components of~$R$, then we also reverse the orientation of all other components of~$R$ during the mutation; otherwise we do not change any orientation. \end{definition} Ozsváth and Szabó showed in~\cite{OSmutation} that knot and link Floer homology is not invariant under mutation in general. But in~\cite[Conjecture~1.5]{BaldwinLevine}, Baldwin and Levine conjectured the following. \begin{conjecture}\label{conj:MutInvHFL} Let \(L\) be a link and let \(L'\) be obtained from \(L\) by Conway mutation. Then \(\HFL(L)\) and \(\HFL(L')\) agree after collapsing the bigrading to a single \(\mathbb{Z}\)-grading, known as the \(\delta\)-grading. More colloquially, \(\delta\)-graded link Floer homology is mutation invariant. 
\end{conjecture} \begin{wrapfigure}{r}{0.3333\textwidth} \centering \psset{unit=0.3} \vspace*{-5pt} \begin{pspicture}(-5.5,-5.5)(5.5,5.5) \psset{linewidth=\stringwidth} \pscircle[linestyle=dotted](0,0){5.5} \rput(0,0){$\left\{\textcolor{white}{\rule[-0.8cm]{1.9cm}{1.8cm}}\right\}$} \rput{-90}(4.45,0){$2m+1$} \rput{90}(-4.45,0){$2n$} \psecurve(-1,1)(-3,-1)(0,-2.4)(3,-1)(1,1) \pscustom{ \psline{<-}(-3,4.5)(-3,3) \psecurve(-1,5)(-3,3)(-1,1)(-3,-1) } \rput(-2,0.3){$\vdots$} \psline[linestyle=dotted,dotsep=0.4](-2,-0.6)(-2,0.6) \psecurve[linecolor=white,linewidth=\stringwhite](-1,-5)(-3,-3)(-1,-1)(-3,1) \pscustom{ \psline(-3,-4.5)(-3,-3) \psecurve(-1,-5)(-3,-3)(-1,-1)(-3,1) } \pscustom{ \psline{<-}(3,4.5)(3,3) \psecurve(1,5)(3,3)(1,1)(3,-1) } \psline[linestyle=dotted,dotsep=0.4](2,-0.6)(2,0.6) \psecurve[linecolor=white,linewidth=\stringwhite](1,-5)(3,-3)(1,-1)(3,1) \pscustom{ \psline(3,-4.5)(3,-3) \psecurve(1,-5)(3,-3)(1,-1)(3,1) } \psecurve[linecolor=white,linewidth=\stringwhite](-1,-1)(-3,1)(0,2.4)(3,1)(1,-1) \psecurve(-1,-1)(-3,1)(0,2.4)(3,1)(1,-1) \end{pspicture} \caption{An infinite family of pretzel tangles for $n,m>0$.}\label{fig:INTROpretzeltangle2nm2mp1} \vspace*{-45pt} \end{wrapfigure} In this paper, we prove a slightly stronger version of this conjecture for an infinite family of mutating tangles. \begin{theorem}[(\ref{cor:pretzeltangleMutation})]\label{thm:INTROpretzeltangleMutation} Mutation of \((2n,-(2m+1))\)-pretzel tangles for \(n,m>0\), oriented as in Figure~\ref{fig:INTROpretzeltangle2nm2mp1}, preserves bigraded link Floer homology, after identifying the Alexander gradings of the two open strands. If we reverse the orientation of one of the two strands, mutation of these tangles preserves \(\delta\)-graded link Floer homology. \end{theorem} This generalises an earlier result from my thesis~\cite{MyThesis} for the $(2,-3)$-pretzel tangle. The result again simply follows from an observation that the peculiar invariants for the mutating tangles have a certain symmetry. However, the calculation of the invariants for general \(n,m>0\) is more involved and relies on Theorem~\ref{thm:INTROPeculiarModulesFromNiceDiagrams}. \subsection{Similar work by other people} It is interesting to compare the ideas described in this paper to the combinatorial tangle Floer theory by Petkova and Vértesi \cite{cHFT} and the algebraic tangle homology theory by Ozsváth and Szabó \cite{OSKauffmanStates1,OSKauffmanStates2} as well as their corresponding decategorifications in terms of the representation theory of $\mathcal{U}_q(\mathfrak{gl}(1\vert 1))$ \cite{DecatCTFH,ManionDecat}. In fact, the definition of our generalised peculiar modules $\CFTminus$ is primarily inspired by the invariants from~\cite{OSKauffmanStates2}, see Remark~\ref{rem:ComparisonOS}. In~\cite{EftekharyAlishahiTangles}, Eftekhary and Alishahi define a Heegaard Floer theory for tangles using a suitable generalisation of sutured Floer homology~\cite{EftekharyAlishahiSFT}. They study cobordism maps between their tangle invariants, but they do not discuss glueing properties. It would be interesting to see how these two approaches can be merged. Finally, I also want to mention some impressive work of Lambert-Cole \cite{LambertCole1,LambertCole2}, where he confirms Conjecture~\ref{conj:MutInvHFL} for various families of mutant pairs (different from the one in Theorem~\ref{thm:INTROpretzeltangleMutation}), using entirely different techniques. 
\begin{acknowledgements}\label{ackref} This paper has grown out of the final chapter of my PhD thesis~\cite{MyThesis}. I would therefore like to take the opportunity to thank my PhD supervisor Jake Rasmussen for his generous support. I consider myself very fortunate to have been his student. My PhD was funded by an EPSRC scholarship covering tuition fees and a DPMMS grant for maintenance, for which I thank the then Head of Department Martin Hyland. Some parts of this paper were written during my stay at the Isaac Newton Institute during the programme \textit{Homology Theories in Low Dimensions} (EPSRC grant number EP/K032208/1). The paper was completed during my time as a CIRGET postdoctoral fellow at the Université de Sherbrooke. I thank my PhD examiners Ivan Smith and András Juhász for many valuable comments on and corrections to my thesis. I also thank Jonathan Hanselman, Robert Lipshitz, Andy Manion, Allison Moore, Ina Petkova, Vera Vértesi and Marcus Zibrowius for helpful conversations. I am immensely grateful to the anonymous referees who provided many valuable and detailed comments on and corrections to an earlier version of this paper. My special thanks go to Liam Watson for his continuing interest in my work. \end{acknowledgements} \section{\texorpdfstring{The invariant $\CFTd$}{The invariant CFTᵈ}}\label{sec:CFTd} \subsection{Heegaard diagrams for tangles} \begin{wrapfigure}{r}{0.3333\textwidth} \centering \psset{unit=0.55} \vspace*{-20pt} \begin{pspicture}[showgrid=false](-5.1,-4.1)(3.1,4.1) \psellipticarc[linestyle=dotted](-1,0)(4,1){0}{180} \psecurve[linecolor=white,linewidth=\stringwhite]{c-c}(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve[linecolor=white,linewidth=\stringwhite]{c-c}(2,2)(0.97,2.24)(0,2)(-0.75,1)(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1) { \psset{linewidth=\stringwidth} \psecurve{c-c}(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve{<-c}(2,2)(0.97,2.24)(0,2)(-0.75,1)(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1) \psecurve{<-c}(-6,1.5)(-3.3,1.85)(-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psecurve[linecolor=white,linewidth=\stringwhite](0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5) \psecurve{c-c}(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5) \psecurve[linecolor=white,linewidth=\stringwhite](0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve[linecolor=white,linewidth=\stringwhite](0,2)(-0.75,1)(0.75,-1)(0,-2) \psecurve[linecolor=white,linewidth=\stringwhite](-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1) \psecurve{c-c}(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve{c-c}(0,2)(-0.75,1)(0.75,-1)(0,-2) \psecurve{c-c}(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1) \psecurve[linecolor=white,linewidth=\stringwhite](-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psecurve{c-c}(-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) } \psellipticarc[linewidth=4pt,linecolor=white](-1,-0.09)(4,1){180}{0} \pscircle[linecolor=red](-1,0){3} \psdots(-3.3,1.85)(-3.3,-1.85)(0.97,-2.24)(0.97,2.24) \psellipticarc(-1,0)(4,1){180}{0} \pscircle(-1,0){4} \uput{3.2}[140](-1,0){1} \uput{3.2}[-140](-1,0){2} \uput{3.2}[-50](-1,0){3} \uput{3.2}[50](-1,0){4} \uput{3.2}[180](-1,0){\red$a$} \uput{3.2}[-90](-1,0){\red$b$} \uput{3.2}[0](-1,0){\red$c$} \uput{3.2}[90](-1,0){\red$d$} \end{pspicture} \caption{The $(2,-3)$-pretzel tangle in $B^3$.}\label{fig:2m3pt} \vspace*{-10pt} \end{wrapfigure} Let us start by recalling the basic definitions from~\cite[sections~1 and~4]{HDsForTangles}, adapted to 4-ended 
tangles. \begin{definition} A \textbf{4-ended tangle} $T$ in a homology 3-ball~$M$ with spherical boundary is an embedding \[T\co\left(I \amalg I\amalg \coprod S^1,\partial\right)\hookrightarrow \left(M,{\red S^1}\subset S^2=\partial M\right),\] such that the endpoints of the two intervals lie on a fixed oriented circle ${\red S^1}$ on the boundary of~$M$, together with a choice of distinguished tangle end. Starting at this distinguished ($=$first) tangle end and following the orientation of the fixed circle ${\red S^1}$, we number the tangle ends and label the arcs ${\red S^1}\smallsetminus \im(T)$ by $a$, $b$, $c$ and $d$, in that order. Sometimes, we will find it more convenient to use the labels $s_1$ for $a$, $s_2$ for $b$, $s_3$ for $c$ and $s_4$ for $d$ instead. We call a choice of a single arc a \textbf{site} of the tangle~$T$. \\\indent We consider tangles up to ambient isotopy which fixes the distinguished tangle end and the orientation of ${\red S^1}$ (and thus preserves the labelling of the tangle ends and arcs). The images of the two intervals are called the \textbf{open components}, the images of any circles are called the \textbf{closed components} of the tangle. We label these tangle components by variables $t_1$ and $t_2$ for the open components and $t'_1,t'_2,\dots$ for the closed components. We call those variables the \textbf{colours} of~$T$. An orientation of a tangle is a choice of orientation of the two intervals and the circles. \\\indent Note that the orientation of ${\red S^1}$ enables us to distinguish between the two components of $\partial M\smallsetminus{\red S^1}$. The \textbf{back} component of $\partial M\smallsetminus{\red S^1}$ is the one whose boundary orientation agrees with the orientation of ${\red S^1}$, using the right-hand rule and a normal vector field pointing into $M$. We call the other one the \textbf{front} component. \end{definition} \begin{definition}\label{def:HDsfortangles} A \textbf{Heegaard diagram $\mathcal{H}_T$ for a 4-ended tangle} $T$ with $n$ closed components in a $\mathbb{Z}$-homology 3-ball with spherical boundary $M$ is a tuple $(\Sigma_g,\A=\Ac\cup\Aa,\B)$, where \begin{itemize} \item $\Sigma_g$ is an oriented surface of genus $g$ with $2n+4$ boundary components, denoted by~$\Gamma$, which are partitioned into $(n+2)$ pairs, \item $\Ac$ is a set of $(g+n)$ pairwise disjoint circles $\alpha_1,\dots, \alpha_{g+n}$ on $\Sigma_g$, \item $\Aa$ is a set of $4$ pairwise disjoint arcs on $\Sigma_g$, labelled $a$, $b$, $c$, $d$, which are disjoint from~$\Ac$ and whose endpoints lie on~$\Gamma$, and \item $\B$ is a set of $(g+n+1)$ pairwise disjoint circles $\beta_1,\dots, \beta_{g+n+1}$ on~$\Sigma_g$. \end{itemize} We impose the following condition on the data above: the 3-manifold obtained by attaching 2-handles to $\Sigma_g\times [0,1]$ along $\Ac\times\{0\}$ and $\B\times\{1\}$ is equal to the tangle complement~$M\smallsetminus \nu(T)$ such that under this identification, \begin{itemize} \item each pair of circles in $\Gamma$ is a pair of meridional circles for the same tangle component, and each tangle component belongs to exactly one such pair, and \item $\Aa\times\{0\}$ is equal to ${\red S^1}\smallsetminus \nu(\partial T)\subset \partial M$. \end{itemize} If the tangle $T$ is oriented, we also orient the boundary components of $\Sigma_g$ as oriented meridians of the tangle components, using the right-hand rule. 
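For instance, for a 4-ended tangle without closed components ($n=0$), the surface $\Sigma_g$ has four boundary components, partitioned into two pairs of meridians for the two open components, and the diagram consists of $g$ $\alpha$-circles, the four $\alpha$-arcs and $(g+1)$ $\beta$-circles.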
\end{definition} \begin{remark}\label{rem:conventions1} As in~\cite{HDsForTangles}, our \textbf{convention on the orientation of the Heegaard surface} is that its normal vector (determined using the right-hand rule) points in the direction of positive gradient of the defining Morse function, ie in the direction of the $\beta$-handlebody. However, we usually draw the Heegaard surfaces such that the normal vector points into the projection plane. \end{remark} \begin{definition} Let $T$ be a 4-ended tangle. A \textbf{peculiar Heegaard diagram} for $T$ is obtained from a tangle Heegaard diagram for $T$ by a local modification around the punctures, as illustrated in Figure~\ref{fig:HD4ended}: we collapse the four boundary components of $\Sigma$ which meet the $\alpha$-arcs, thereby joining the four $\alpha$-arcs to a single $\alpha$-circle $\red S^1$. Then, for each tangle end, we add marked points on either side of $\red S^1$ and connect them by an arc which intersects $\red S^1$ exactly once and no other curve. As illustrated in Figure~\ref{fig:HD4ended}, we label the marked points on the front component of $\partial M\smallsetminus{\red S^1}$ by $p_i$ and those on the back by $q_i$. Furthermore, for each closed component, we contract the corresponding boundary components to points $z_j$ and $w_j$. Thus, we obtain a multi-pointed Heegaard diagram, whose underlying Heegaard surface is now closed and carries basepoints $p_i$, $q_i$, $z_j$ and $w_j$. If the tangle $T$ is oriented, we choose corresponding orientations for the basepoints. By convention, we choose the label $z_j$ for a basepoint corresponding to a closed component iff its orientation agrees with the orientation of the normal vector field of the Heegaard surface. This agrees with Ozsváth and Szabó's conventions in~\cite[Definitions~2.1]{OSHFLThurston} and in~\cite[section~3.5]{OSHFL} noting that their gradient flowlines flow upwards~\cite[section~3.1]{OSHFL}; for an illustration, see Figure~\ref{fig:HDBasepointszw}. As in~\cite{HDsForTangles}, we need to restrict ourselves to \textbf{admissible} diagrams, ie diagrams whose non-zero periodic domains avoiding all basepoints have both negative and positive multiplicities, see~\cite[Definition~4.16]{HDsForTangles}. \begin{figure}[t] \psset{unit=0.07} \centering {\psset{unit=0.6} \begin{subfigure}{0.33\textwidth}\centering \begin{pspicture}(-40,-30)(40,30) \psline[linecolor=red](40,0)(-40,0) \pscircle[fillstyle=solid,fillcolor=white,linecolor=darkgreen](0,0){10} \rput(0,0){$i$} \rput(0,20){back} \rput(0,-20){front} \psframe[linecolor=darkgray](-40,-30)(40,30) \end{pspicture} \caption{An original tangle end.}\label{fig:HD4endedoriginal} \end{subfigure} \quad \begin{subfigure}{0.33\textwidth}\centering \begin{pspicture}(-40,-30)(40,30) \psline[linestyle=dotted, linecolor=red,dotsep=1pt](0,15)(0,-15) \psline[linecolor=red](-40,0)(40,0) \psline(-2,17)(2,13) \psline(-2,13)(2,17) \psline(-2,-17)(2,-13) \psline(-2,-13)(2,-17) \rput(7,15){$q_i$} \rput(7,-15){$p_i$} \psframe[linecolor=darkgray](-40,-30)(40,30) \end{pspicture} \caption{A new tangle end.}\label{fig:HD4endednew} \end{subfigure} } \caption{The difference between peculiar and ordinary Heegaard diagrams for tangles. The former are obtained from the latter by local changes near the boundary of the Heegaard surface.}\label{fig:HD4ended} \end{figure} \end{definition} \begin{remark}\label{rem:PeculiarHDmoves} It is obvious that we can go from a peculiar Heegaard diagram back to an ordinary tangle Heegaard diagram. 
The only reason for introducing peculiar Heegaard diagrams is to avoid any bordered Heegaard Floer theory, so the proof of invariance of the algebraic structures we are about to define is a minor adaptation of the one for link Floer homology. In particular, note that the number of $\alpha$-circles and $\beta$-circles in a peculiar Heegaard diagram is the same. Obviously, the Heegaard moves from \cite[Lemma~4.13]{HDsForTangles} are equivalent to the following moves for peculiar Heegaard diagrams: \begin{itemize} \item isotopies of the $\alpha$- and $\beta$-curves away from the marked points and the connecting arcs, \item handleslides of $\beta$-curves over $\beta$-curves and handleslides of $\alpha$-curves over $\alpha$-curves other than $\red S^1$, and \item stabilisation. \end{itemize} \end{remark} \begin{remark} The attribute ``peculiar'' should be considered as a homophone of ``$p$-$q$-lier'', a reference to the labels $p_i$ and $q_i$ we have chosen for the marked points. This choice of variables is loosely related to the notation used in \cite{Abouzaid} for describing the wrapped Fukaya category of the $n$-punctured sphere in terms of twisted complexes. From a Heegaard Floer perspective, we will find it more natural to use curved complexes instead, which are in some sense dual to twisted complexes; see also Remark~\ref{rem:comparisonToKontsevich} as well as~\cite[section~III.5]{MyThesis}. \end{remark} Let us briefly recall the definition of the Heegaard Floer homology $\HFT(T)$ from~\cite[section~5]{HDsForTangles}, adapted to 4-ended tangles. \begin{definition}\label{def:RecallCFT} Given a peculiar Heegaard diagram $\mathcal{H}_T$ for a 4-ended tangle $T$, let $\mathbb{T}:=\mathbb{T}(\mathcal{H}_T)$ be the set of tuples of intersection points, also called generators, such that each $\alpha$- and each $\beta$-curve is occupied by exactly one point. In particular, each generator contains an intersection point that occupies the special $\alpha$-circle ${\red S^1}$, which replaced the four $\alpha$-arcs in the original Heegaard diagram from Definition~\ref{def:HDsfortangles}. So each generator $\x\in\mathbb{T}$ occupies exactly one of these four arcs. We denote the corresponding site by $s(\x)$ and write $\mathbb{T}_s$ for the set of all generators $\x$ with $s=s(\x)$. We obtain a partition $$\mathbb{T}=\coprod_{i=1,2,3,4}\mathbb{T}_{s_i}.$$ Let $\Id$ be the ring of idempotents defined as the direct product of four copies of $\mathbb{F}_2$ and write $\iota_i$ for the $i^\text{th}$ element of the standard basis of $\Id$. Then, we define a right $\Id$-module $\CFT(T)$ by letting $\CFT(T).\iota_i$ be the vector space over $\mathbb{F}_2$ freely generated by elements in $\mathbb{T}_{s_i}$. Furthermore, for any two generators $\x$ and $\y$ of $\mathbb{T}$, we can consider the space of domains from $\x$ to $\y$, denoted by $\pi_2(\x,\y)$. For each element $\phi\in\pi_2(\x,\y)$, we can define a moduli space $\mathcal{M}(\phi)$ of holomorphic curves in $\Sigma\times [0,1]\times\mathbb{R}$ representing $\phi$. (Alternatively, one may count holomorphic discs in $\Sym^{g+n+1}(\Sigma_g)$; however, by the main result of \cite{CylindricalReformulation}, this gives the same theory, see also \cite[section~5.2]{OSHFL}.) The dimension of this moduli space is given by the Maslov index $\mu(\phi)$. 
The differential in $\CFT(T)$ is defined as \[\partial \x=\sum_{\y}\sum_{\substack{\phi\in\pi_2^0(\x,\y)\\ \mu(\phi)=1}}\#\left(\frac{\mathcal{M}(\phi)}{\mathbb{R}}\right)\y,\] where $\pi_2^0(\x,\y)$ denotes the domains avoiding all basepoints and $\#\left(\tfrac{\mathcal{M}(\phi)}{\mathbb{R}}\right)$ denotes the number of points in the quotient of the 1-dimensional moduli spaces $\mathcal{M}(\phi)$ by a natural $\mathbb{R}$-action. The \textbf{non-glueable Heegaard Floer homology $\HFT(T)$ of the tangle $T$} is defined to be the homology of $\CFT(T)$. \end{definition} \begin{definition}\label{def:matching} A \textbf{matching} $P$ is a partition $\{\{i_1,o_1\},\{i_2,o_2\}\}$ of $\{1,2,3,4\}$ into pairs. An \textbf{ordered matching} is a matching in which the pairs are ordered. A 4-ended tangle $T$ gives rise to a matching $P_T$ as follows: the first pair consists of the two endpoints of the open component with colour $t_1$, the second consists of the two endpoints of the second open component of $T$, the one labelled $t_2$. Given an orientation of the two open components of $T$, we order each pair of points such that the inward pointing end comes first, the outward pointing end second. For an illustration, see Figure~\ref{fig:HDBasepointspq}. \end{definition} \begin{figure}[t] \centering {\psset{unit=0.075} \begin{subfigure}{0.48\textwidth}\centering \begin{pspicture}(-40,-30)(40,30) \psline[linecolor=darkgray](33,-2.5)(33,7.5) \rput(0,-1){ \psbezier[linewidth=\stringwidth,ArrowInside=->,ArrowInsidePos=0.96]{c-c}(-15,0)(-15,-27)(15,-27)(15,0) \psdot[linewidth=\stringwidth](0,-20.2) } \psline[linecolor=white,linewidth=\stringwhite](0,-10)(20,-10)(40,10)(-20,10)(-40,-10)(0,-10) \psline[linecolor=darkgray](0,-10)(20,-10)(40,10)(-20,10)(-40,-10)(0,-10) \rput(0,1){ \psbezier[linecolor=white,linewidth=\stringwhite](-15,0)(-15,27)(15,27)(15,0) \psbezier[linewidth=\stringwidth,ArrowInside=->,ArrowInsidePos=0.96]{c-c}(15,0)(15,27)(-15,27)(-15,0) \psdot[linewidth=\stringwidth](0,20.2) } \rput(-15,0){\psset{unit=0.75} \psline(2,-1)(-2,1)\psline(4,1)(-4,-1)} \rput(15,0){\psset{unit=0.75} \psline(2,-1)(-2,1)\psline(4,1)(-4,-1)} \rput(-32,-7.5){$\textcolor{darkgray}{\Sigma_g}$} \rput(-21,0){$w_j$} \rput(21,0){$z_j$} \rput(0,25){index $3$} \rput(0,-25){index $0$} \rput[t](33,-4.5){$\textcolor{red}{\alpha}$} \rput[b](33,18.5){$\textcolor{blue}{\beta}$} \psline[linecolor=white,linewidth=\stringwhite](33,7.5)(33,17.5) \psline[linecolor=darkgray]{->}(33,7.5)(33,17.5) \rput(33,7.5){\psset{unit=0.5} \psline[linecolor=darkgray](2,-1)(-2,1)\psline[linecolor=darkgray](4,1)(-4,-1)} \end{pspicture} \caption{}\label{fig:HDBasepointszw} \end{subfigure} \quad \begin{subfigure}{0.48\textwidth}\centering \begin{pspicture}(-40,-20)(40,40) \psline[linestyle=dotted, linecolor=red,dotsep=1pt](12,-3)(18,3) \psline[linestyle=dotted, linecolor=red,dotsep=1pt](-12,3)(-18,-3) \psline[linecolor=darkgray](33,-2.5)(33,7.5) \psline[linecolor=white,linewidth=\stringwhite](0,-10)(20,-10)(40,10)(-20,10)(-40,-10)(0,-10) \psline[linecolor=red]{c-c}(-30,0)(30,0) \psline[linecolor=darkgray](0,-10)(20,-10)(40,10)(-20,10)(-40,-10)(0,-10) \rput(-18,-3){\psset{unit=0.75} \psline(2,-1)(-2,1)\psline(4,1)(-4,-1)} \rput(-12,3){\psset{unit=0.75} \psline(2,-1)(-2,1)\psline(4,1)(-4,-1)} \rput(18,3){\psset{unit=0.75} \psline(2,-1)(-2,1)\psline(4,1)(-4,-1)} \rput(12,-3){\psset{unit=0.75} \psline(2,-1)(-2,1)\psline(4,1)(-4,-1)} \rput(0,1){ \psbezier[linecolor=white,linewidth=\stringwhite](12,-3)(12,17)(18,23)(18,3) 
\psbezier[linewidth=\stringwidth]{c-c}(12,-3)(12,17)(18,23)(18,3) \psbezier[linecolor=white,linewidth=\stringwhite](-12,3)(-12,23)(-18,17)(-18,-3) \psbezier[linewidth=\stringwidth,ArrowInside=->,ArrowInsidePos=0.8]{c-c}(-12,3)(-12,23)(-18,17)(-18,-3) \psbezier[linewidth=\stringwidth,ArrowInside=->,ArrowInsidePos=0.8]{c-c}(-18,-3)(-18,17)(-12,23)(-12,3) } \rput(0,16){ \psbezier[linewidth=\stringwidth,ArrowInside=->,ArrowInsidePos=0.8]{c-c}(15,0)(15,20.4)(-15,20.4)(-15,0) \psdot[linewidth=\stringwidth](0,15.2) } \rput(-32,-7.5){$\textcolor{darkgray}{\Sigma_g}$} \rput(-6,2){$q_{o_k}$} \rput(-24,-4){$p_{o_k}$} \rput(24,2){$q_{i_k}$} \rput(18,-4){$p_{i_k}$} \rput(0,35){index $3$} \rput[t](33,-4.5){$\textcolor{red}{\alpha}$} \rput[b](33,18.5){$\textcolor{blue}{\beta}$} \psline[linecolor=white,linewidth=\stringwhite](33,7.5)(33,17.5) \psline[linecolor=darkgray]{->}(33,7.5)(33,17.5) \rput(33,7.5){\psset{unit=0.5} \psline[linecolor=darkgray](2,-1)(-2,1)\psline[linecolor=darkgray](4,1)(-4,-1)} \end{pspicture} \caption{}\label{fig:HDBasepointspq} \end{subfigure} } \caption{A summary of our orientation conventions. The two schematic pictures show Heegaard diagrams near the basepoints corresponding to a closed (a) and an open (b) tangle component. If we think of the Heegaard diagram as a combinatorial description of a Morse function on the $\mathbb{Z}$-homology sphere $M$, the tangle components are the gradient flowlines connecting the basepoints $w_j$, $z_j$, $p_{o_k}/q_{o_k}$ and $p_{i_k}/q_{i_k}$ to the index~3 and index~0 critical points. The arrow in the top right corner of each picture shows a normal vector determining the orientation of $\Sigma_g$ via the right-hand rule; see Remark~\ref{rem:conventions1}.}\label{fig:HDBasepoints} \end{figure} \begin{definition}\label{def:RecallGradingsFromHDsForTangles} Given a domain $\phi\in\pi_2(\x,\y)$ between two generators $\x$ and $\y$ in $\mathbb{T}$ and a basepoint $x=p_i,q_i,z_j,w_j$, let $x(\phi)$ denote the multiplicity of $\phi$ at $x$. Then, we define three gradings on the generators of $\CFT(T)$: the \textbf{$\delta$-grading} $\delta$ is a relative $\frac{1}{2}\mathbb{Z}$-grading and defined by \[\delta(\y)-\delta(\x)=\mu(\phi)-\sum_{i=1,2,3,4}\tfrac{1}{2}(p_i(\phi)+q_i(\phi))-\sum_{i=1}^n(z_i(\phi)+w_i(\phi)).\] When comparing this to~\cite[Definition~5.13]{HDsForTangles}, note that we now use peculiar Heegaard diagrams to compute the Maslov index $\mu$. Furthermore, for every component $t$ of the tangle, there is a relative $\mathbb{Z}$-grading $A_t$, which is called the \textbf{Alexander grading}, see \cite[Definition~5.6]{HDsForTangles}. Given an ordered matching $P=\{\{i_1,o_1\},\{i_2,o_2\}\}$, we define $A_{t_k}$ for $k=1,2$ by \[A_{t_k}(\y)-A_{t_k}(\x):=A_{t_k}(\phi):=p_{o_k}(\phi)+q_{o_k}(\phi)-p_{i_k}(\phi)-q_{i_k}(\phi).\] For $j=1,\dots,n$, we define $A_{t'_j}$ by \[A_{t'_j}(\y)-A_{t'_j}(\x):=A_{t'_j}(\phi):=2w_j(\phi)-2z_j(\phi).\] By taking the sum of all Alexander gradings, we obtain a relative $\mathbb{Z}$-grading, the \textbf{reduced Alexander grading} $\overline{A}$. Finally, the \textbf{homological grading} $h$, a relative $\mathbb{Z}$-grading, is defined as \[h=\tfrac{1}{2}\overline{A}-\delta.\] We sometimes denote the Alexander grading on generators by a superscript list of integers (or half-integers, if eg, we want to achieve the same symmetry present in the decategorified invariants from~\cite{HDsForTangles}), like $a^\Red{+1}$ for the univariate or $a^{(\frac{3}{2},-\frac{1}{2})}$ for the multivariate grading. 
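To illustrate how these gradings interact, suppose for instance that $\phi\in\pi_2(\x,\y)$ is a domain with $\mu(\phi)=1$ whose only non-zero basepoint multiplicity is $p_{i_1}(\phi)=1$, say a bigon covering the basepoint $p_{i_1}$ exactly once. Then the formulas above give \[\delta(\y)-\delta(\x)=1-\tfrac{1}{2}=\tfrac{1}{2},\qquad A_{t_1}(\y)-A_{t_1}(\x)=-1,\qquad h(\y)-h(\x)=\tfrac{1}{2}\cdot(-1)-\tfrac{1}{2}=-1;\] in particular, $h$ shifts by an integer, as it should for a relative $\mathbb{Z}$-grading.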
\end{definition} \subsection{Peculiar algebras} \begin{definition} For $n\geq0$, let $\Rpren$ be the free polynomial ring generated by the variables $p_i$, $q_i$, $U'_j$ and $V'_j$, where $i=1,2,3,4$ and $j=1,\dots,n$. Let $\Apre_n$ be the $\Id$-$\Id$-algebra whose underlying $\Id$-$\Id$-bimodule structure is given by $\iota_{s'}.\Apre_n.\iota_{s}:=\Rpren$ for all pairs $(s,s')$ of sites and whose algebra multiplication is defined by the unique $\Id$-$\Id$-bimodule homomorphism $\Apre_n\otimes_{\Id}\Apre_n\rightarrow \Apre_n$ which, for all triples $(s,s',s'')$ of sites, restricts to the multiplication map in $\Rpren$: \[\underbrace{\iota_{s''}.\Apre_n.\iota_{s'}}_{\Rpren}\otimes_{\Id}\underbrace{\iota_{s'}.\Apre_n.\iota_{s}}_{\Rpren}\rightarrow \underbrace{\iota_{s''}.\Apre_n.\iota_{s}}_{\Rpren}.\] We define a $\tfrac{1}{2}\mathbb{Z}$-grading on $\Apre_n$, called the $\delta$-grading, by setting \[\delta(\iota_i):=0,\quad \delta(p_i)=\delta(q_i):=\tfrac{1}{2}\quad\text{ and }\quad\delta(U'_j)=\delta(V'_j):=1,\] where $i=1,2,3,4$ and $j=1,\dots,n$, and then extending linearly to all of $\Apre_n$. Similarly, given an ordered matching $P=\{\{i_1,o_1\},\{i_2,o_2\}\}$, we define relative $\mathbb{Z}$-gradings $A_{t_k}$ for $k=1,2$, called Alexander gradings, by \[A_{t_k}(\iota_s):=0,\quad A_{t_k}(p_{o_k})=A_{t_k}(q_{o_k}):=-1\quad\text{ and }\quad A_{t_k}(p_{i_k})=A_{t_k}(q_{i_k}):=1,\] and similarly Alexander gradings $A_{t'_j}$ for $j=1,\dots,n$ by \[A_{t'_j}(\iota_s):=0,\quad A_{t'_j}(U'_{j}):=-2\quad\text{ and }\quad A_{t'_j}(V'_{j}):=2,\] and then extend linearly to $\Apre_n$. These gradings give rise to a reduced Alexander grading and a homological grading as in Definition~\ref{def:RecallGradingsFromHDsForTangles}. \end{definition} \begin{definition} Let $\Rn$ be the free polynomial ring in the variables $U_i$, $U'_j$ and $V'_j$ for $i=1,2,3,4$ and $j=1,\dots,n$. Via the inclusion $$\Rn\hookrightarrow \Rpren,\quad U_i\mapsto p_iq_i,\quad U'_j\mapsto U'_j,\quad V'_j\mapsto V'_j,$$ we can regard $\Apre_n$ as an $\Rn$-algebra. Let $\Aminus$ be the subalgebra of $\Apre_n$ generated as an $\Rn$-algebra by the idempotents in $\Id$ and \[p_i:=\iota_{i-1}.p_i.\iota_i\quad\text{ and }\quad q_i:=\iota_{i}.q_i.\iota_{i-1},\] where we take the indices $i=1,2,3,4$ modulo 4 with an offset of 1. Note that $p_iq_i=\iota_{i-1}.U_i$ and $q_ip_i=\iota_i.U_i$ as elements in $\Aminus$. Thus, any element in $\Aminus$ can be written uniquely as a sum of elements of the form \[\iota_i.r, \quad p_ip_{i+1}\dots p_{k-1}p_{k}.r\quad\text{ and }\quad q_iq_{i-1}\dots q_{k+1}q_{k}.r,\] where $r\in \Rn$ is a monomial. This defines the standard basis of $\Aminus$ as a vector space over $\mathbb{F}_2$. For convenience, we sometimes write the elements $p_ip_{i+1}\dots p_{k-1}p_{k}$ as $p_{i(i+1)\cdots (k-1)k}$ and $q_iq_{i-1}\dots q_{k+1}q_{k}$ as $q_{i(i-1)\cdots (k+1)k}$, where, again, we take the indices modulo 4 with an offset of 1. Furthermore, to simplify notation, we set \[ p=p_1+p_2+p_3+p_4\in\Aminus \quad\text{ and }\quad q=q_1+q_2+q_3+q_4\in\Aminus, \] so we can write for example $p^4=p_{1234}+p_{2341}+p_{3412}+p_{4123}$. We call $\Aminus$ the \textbf{generalised peculiar algebra}.
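To illustrate this notation, here are some sample computations in $\Aminus$, which follow directly from the idempotent bookkeeping: \[p_2\cdot p_3=p_{23}\in\iota_1.\Aminus.\iota_3,\qquad p_3\cdot p_2=(p_3.\iota_3)\cdot(\iota_1.p_2)=0,\qquad q_3\cdot q_2=q_{32}\in\iota_3.\Aminus.\iota_1,\] and, using the identities above, \[p_{23}\cdot q_{32}=p_2.(p_3q_3).q_2=p_2.(\iota_2.U_3).q_2=(p_2q_2).U_3=\iota_1.U_2U_3.\]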
\end{definition} \begin{wrapfigure}{r}{0.3333\textwidth} \centering {\vspace*{-5pt} $ \begin{tikzcd}[row sep=0.5cm, column sep=0.5cm] & \overset{\iota_4}{\bullet} \arrow[bend left=10,leftarrow]{dr}[inner sep=1pt]{q_4} \arrow[bend right=10,swap]{dr}[inner sep=1pt]{p_4} & \\ \!\!\!\raisebox{7pt}{$\underset{\iota_1}{~}$}\,\bullet \arrow[bend left=10,leftarrow]{ur}[inner sep=1pt]{q_1} \arrow[bend right=10,swap]{ur}[inner sep=1pt]{p_1} & & \bullet\,\raisebox{7pt}{$\underset{\iota_3}{~}$}\!\!\! \arrow[bend left=10,leftarrow]{dl}[inner sep=1pt]{q_3} \arrow[bend right=10,swap]{dl}[inner sep=1pt]{p_3} \\ & \underset{\iota_2}{\bullet} \arrow[bend left=10,leftarrow]{ul}[inner sep=1pt]{q_2} \arrow[bend right=10,swap]{ul}[inner sep=1pt]{p_2} \end{tikzcd} \vspace{-5pt} $ } \caption{The quiver for an alternative definition of~$\Ad$.}\label{fig:quiverForAd}\vspace*{-20pt} \end{wrapfigure} For most of this paper, we will only be concerned with a certain quotient of $\Aminus$, which was already defined in~\cite{MyThesis}. \begin{definition} Let $\Ad$ be the quotient of $\Aminus$ by the relations $U_i=0$, $U'_j=0$ and $V'_j=0$. This algebra can be interpreted as the path algebra of the quiver in Figure~\ref{fig:quiverForAd} with relations $p_iq_i=0=q_ip_i$, see also Example~\ref{exa:pqModSpecialCaseofCC} and Figure~\ref{fig:nutshelll}. We call $\Ad$ the \textbf{peculiar algebra}. \end{definition} \subsection{Peculiar modules} For the differential in $\CFT$, we only consider holomorphic curves which stay away from the basepoints in our peculiar Heegaard diagrams. We claim that we also obtain a tangle invariant if we add those curves to our differential, recording their multiplicities at the basepoints by elements of the algebras $\Aminus$ or $\Ad$. However, the resulting complex does not satisfy the relation $\partial^2=0$. Instead, we obtain a slightly modified $\partial^2$-relation which enables us to promote $\CFT$ to more sophisticated homological invariants, namely certain \emph{curved} type D structures. As abstract algebraic structures, we defined curved type D structures in Example~\ref{exa:HighBrowDefTypeDoverI}. Let us recall this definition here in slightly more down-to-earth terms. \begin{definition}\label{def:curvedTypeDStructure} Let $I$ be a ring of idempotents and $A$ a $\mathbb{Z}$-graded algebra over $I$. Also fix a central element $a_c\in Z(A)$ of degree $-2$. A \textbf{(right) curved type D structure} over $A$ is a $\mathbb{Z}$-graded $I$-module $M$ (with a preferred choice of basis) together with a (right) $I$-module homomorphism $\partial\co M\rightarrow M\otimes_I A$ of degree $-1$ satisfying \[(1_M\otimes \mu)\circ(\partial\otimes 1_A)\circ\partial=1_M\otimes a_c,\] where $\mu$ denotes composition in $A$. We call $a_c$ the \textbf{curvature} of $M$. A morphism between two curved type D structures $(M,\partial_M)$ and $(N,\partial_N)$ is an $I$-module homomorphism \linebreak$M\rightarrow N\otimes_I A$. For two such morphisms $f$ and $g$, their composition is defined as \[(g\circ f)=(1\otimes\mu)\circ(g\otimes 1_A)\circ f.\] We endow the space of morphisms $\Mor(M,N)$ with a differential~$D$ defined by \[D(f)= \partial_N\circ f+f\circ\partial_M.\] Then indeed $D^2=0$, since we have chosen $a_c$ to be central. This gives us an enriched category over $\Com$, the category of ordinary chain complexes over $\mathbb{F}_2$. 
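Explicitly, the claim $D^2=0$ amounts to the following short computation, valid since we work over $\mathbb{F}_2$: \[D^2(f)=\partial_N\circ\partial_N\circ f+\partial_N\circ f\circ\partial_M+\partial_N\circ f\circ\partial_M+f\circ\partial_M\circ\partial_M=(1_N\otimes a_c)\circ f+f\circ(1_M\otimes a_c)=0.\] Here, the two middle terms cancel each other, the curvature relation identifies $\partial_N\circ\partial_N$ with $1_N\otimes a_c$ and $\partial_M\circ\partial_M$ with $1_M\otimes a_c$, and the final cancellation uses the centrality of $a_c$, which allows us to move $a_c$ past the algebra component of $f$.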
The underlying ordinary category is obtained by restricting the morphism spaces to degree 0 elements in the kernel of $D$, giving us the usual notions of chain map and chain homotopy, see Definition~\ref{def:UnderlyingOrdinaryCat} and Example~\ref{exa:UnderlyingOrdCat}. \end{definition} \begin{remark}\label{rem:MatFakAsCurvedComplexes} It is interesting to compare curved type D structures to matrix factorisations as studied by Khovanov--Rozansky \cite{KhRoz}. Given an algebra~$A$ over some field $k$, a matrix factorisation of a potential $w\in A$ consists of two free $A$-modules $M_0$ and $M_1$ with two maps \begin{equation}\label{eqn:matrixfac} \begin{tikzcd}[row sep=0.5cm, column sep=1cm] M_0 \arrow[bend left=10]{r}[inner sep=1pt]{d_0} & M_1 \arrow[bend left=10]{l}[inner sep=1pt]{d_1} \end{tikzcd} \end{equation} such that $d_1d_0=w.\id_{M_0}$ and $d_0d_1=w.\id_{M_1}$. If $\overline{M_0}$ and $\overline{M_1}$ denote the $k$-vector spaces generated by an $A$-basis of $M_0$ and $M_1$, respectively, we can regard $d_0$ and $d_1$ as maps \[\overline{d_0}\co\overline{M_0}\rightarrow \overline{M_1}\otimes_kA\quad\text{ and }\quad\overline{d_1}\co\overline{M_1}\rightarrow \overline{M_0}\otimes_kA.\] Then $(\overline{M_0}\oplus \overline{M_1},\overline{d_0}+\overline{d_1})$ defines a curved type D structure over the $k$-algebra $A$. In general, we cannot go in the other direction. For example, curved complexes associated with manifolds with torus boundary do not, in general, admit a splitting of the form~\eqref{eqn:matrixfac}. This is for the simple reason that the total number of generators can be odd, see for example~\cite[Figure~5]{HRW}. However, for the curved type D structure invariants of tangles, such splittings exist, which is an easy corollary of the classification in terms of immersed curves on the 4-punctured sphere, see sections~\ref{sec:classification} and~\ref{sec:glueingrevisited}. \end{remark} \begin{definition} Given an (ordered) matching $P=\{\{i_1,o_1\},\{i_2,o_2\}\}$, let $\pqMod:=\pqMod_P$ be the category of $\delta$-graded (and Alexander graded) curved complexes over $\Ad$ with curvature \[p^4+q^4.\] We call the objects of this category \textbf{peculiar modules}. Furthermore, let $\gpqMod_{P,n}$ be the category of $\delta$-graded (and Alexander graded) curved complexes over $\Aminus$ with curvature \begin{equation}\label{eqn:generalcurvature} p^4+q^4+U_{i_1}U_{o_1}+U_{i_2}U_{o_2}. \end{equation} We call the objects of this category \textbf{generalised peculiar modules}. \end{definition} \begin{definition}\label{def:CFTd} Given a 4-ended tangle $T$ with $n$ closed components in a $\mathbb{Z}$-homology 3-ball $M$ with spherical boundary and an (admissible) peculiar Heegaard diagram for $T$, let us define a generalised peculiar module $\CFTminus(T):=\CFTminus(T,M)$ in $\gpqMod_{P_T,n}$ whose underlying relatively bigraded right $\Id$-module agrees with $\CFT(T)$. However, the differential $\partial$ on $\CFTminus(T)$ is defined by \begin{equation}\label{eqn:differentialOnCFTd} \partial \x=\sum_{\y\in\mathbb{T}}\sum_{\substack{\phi\in\pi_2(\x,\y)\\ \mu(\phi)=1}}\#\left(\frac{\mathcal{M}(\phi)}{\mathbb{R}}\right) \cdot \y\otimes_{\Id} a(\phi), \end{equation} where for $\phi\in\pi_2(\x,\y)$, $a(\phi)$ is the preimage of \[\iota_{s(\y)}.\prod_{i=1,2,3,4} p_i^{p_i(\phi)}\cdot q_i^{q_i(\phi)}\cdot\prod_{j=1}^n (U'_j)^{w_j(\phi)}\cdot (V'_j)^{z_j(\phi)}.\iota_{s(\x)},\] under the inclusion map $\Aminus\hookrightarrow\Apre_n$.
We call $(\CFTminus(T),\partial)$ the \textbf{generalised peculiar module of~$T$}. Its image under the functor \[\gpqMod_{P_T,n}\rightarrow\pqMod\] induced by the quotient map $\Aminus\rightarrow\Ad$ is denoted by $\CFTd(T)$, which we call the \textbf{peculiar module} of $T$. \end{definition} \begin{theorem}\label{thm:PecMod} \(\CFTminus(T)\) is indeed a well-defined generalised peculiar module. Furthermore, its relatively bigraded chain homotopy type is an invariant of the tangle~\(T\). Hence \(\CFTd(T)\) is a well-defined peculiar module, whose relatively bigraded chain homotopy type is also an invariant of the tangle~\(T\). \end{theorem} \begin{remark}\label{rmk:RelativeGradings} In link Floer homology, one can promote both Alexander and $\delta$-gradings to absolute gradings via symmetries and the spectral sequence to $\HF(S^3)$~\cite{OSHFL}. I expect that something similar can be done for our tangle invariants. Alternatively, one could simply fix absolute gradings on a specific test tangle, say a trivial tangle, and then define absolute gradings on all other tangles via the pairing with this test tangle (using Theorem~\ref{thm:CFTdGlueingAsMorphism}) and the absolute gradings on $\HFL$. However, in this paper, we only work with the relative gradings on $\CFTd$ and $\CFTminus$ inherited from those on $\CFT$, which were defined in~\cite{HDsForTangles}. So, throughout this paper, all gradings on $\CFL$ should be regarded as relative, too. \end{remark} \begin{remark}\label{rem:ComparisonOS} The generalised algebra $\Aminus$ and the generalised invariant $\CFTminus$ are inspired by Ozsv\'{a}th and Szab\'{o}'s algebra and tangle invariant from~\cite{OSKauffmanStates2}. Computations suggest that their invariant for one-sided 4-ended tangles is closely related to $\CFTminus$. Conceptually, it might also be interesting to set up their theory for an odd number of tangle strands and then compare $\CFTminus$ to their invariants of $(1,3)$-tangles. \end{remark} \begin{lemma} For any \(\phi\in\pi_2(\x,\y)\), \(a(\phi)\) lies in the image of the inclusion map \(\Aminus\hookrightarrow\Apre_n\). \end{lemma} \begin{proof} This follows from the observation that $\partial\phi$ intersected with the $\alpha$-circle ${\red S^1}$ is a path on ${\red S^1}$ connecting the two points of $\x$ and $\y$ on ${\red S^1}$. \end{proof} \begin{lemma}\label{lem:CFTdGradings} \(\partial\) increases the \(\delta\)-grading by 1 and preserves the Alexander grading. (As~usual, the grading on a tensor product is given by the sum of the gradings of the tensor factors.) \end{lemma} \begin{proof} Both statements follow directly from the definitions of the gradings of generators and algebra elements. \end{proof} \begin{lemma} For each \(\x\in\CFTminus\), the sum on the right-hand side of \eqref{eqn:differentialOnCFTd} is finite. \end{lemma} \begin{proof} The proof is essentially the same as in link Floer homology, see \cite[Lemma~4.2]{OSHFL}. Since $\CFTminus$ is finitely generated, it is sufficient to show that the coefficient of each $\y\in\CFTminus$ is a finite sum. Note that by the previous lemma, the difference of the $\delta$-gradings of $\x$ and $\y$ determines the $\delta$-grading of $a(\phi)$. Thus, there are only finitely many choices for the coefficients $a(\phi)$. So let us also fix the multiplicities of $\phi$ at the basepoints. We can now argue as in the proof of \cite[Lemma~4.13]{OSHF3mfds}, using admissibility of the underlying Heegaard diagram.
\end{proof} \pagebreak[3] In the following, we need two analytical facts from \cite{OSHFL}. \begin{fact}\label{fact:OSHFL_lemma_5_4} \cite[Lemma~5.4]{OSHFL} Given a homology class \(\phi\) of an \(\alpha\)-injective boundary degeneration, write \(\phi\) as a linear combination of connected components of \(\Sigma\smallsetminus \A\). Then its Maslov index \(\mu(\phi)\) is equal to twice the sum of the coefficients. The same holds for \(\beta\)-injective boundary degenerations. \end{fact} \begin{fact}\label{fact:OSHFL_lemma_5_5} \cite[Theorem~5.5]{OSHFL} Let \(\Sigma\) be a surface of genus \(g\), equipped with a set \(\A\) of \((g+r)\) attaching circles for a handlebody, where $r\geq1$. Let \(\phi\) be a homology class of boundary degenerations whose domain is non-negative and whose Maslov index is equal to 2. Then the domain of \(\phi\) is equal to one of the connected components of \(\Sigma\smallsetminus \A\). Moreover, the number of pseudo-holomorphic boundary degenerations (considered up to reparametrisation) whose domains are equal to the same connected component of \(\Sigma\smallsetminus \A\) is odd. The same holds for \(\beta\)-injective boundary degenerations. \end{fact} \begin{proof}[of Theorem~\ref{thm:PecMod}] Checking the $\partial^2$-identity is analogous to the link case; we can follow \cite[proof of Lemma~4.3]{OSHFL} and count ends of moduli spaces of Maslov index 2 curves. We fix two generators $\x$ and $\z$ and consider the disjoint union of moduli spaces $\mathcal{M}(\phi)$, where $\phi$ varies over those domains in $\pi_2(\x,\z)$ with $\mu(\phi)=2$ and $a(\phi)=a$ for some fixed $a\in \Aminus$. (In particular, this fixes the multiplicities of $\phi$ at the basepoints.) If there are no boundary degenerations, there is an even number of ends, so the $\z\otimes a$-component of $\partial^2 \x$ vanishes. If there are boundary degenerations, then by Fact~\ref{fact:OSHFL_lemma_5_4} above, they contribute at least 2 to the Maslov index, so the remaining curve has to be constant, hence $\x=\z$. By Fact~\ref{fact:OSHFL_lemma_5_5}, we get a boundary degeneration for each component of $\Sigma\smallsetminus\A$ and $\Sigma\smallsetminus\B$. For closed tangle components, these boundary degenerations come in pairs which cancel each other. The remaining two $\alpha$-injective boundary degenerations contribute the first two terms of~\eqref{eqn:generalcurvature} and the remaining two $\beta$-injective boundary degenerations contribute the last two terms. All other ends appear in pairs again, so their contributions cancel. It remains to show that the peculiar module is an invariant of the tangle $T$. Now, $\CFTminus(T)$ is essentially the chain complex associated with a multi-pointed Heegaard diagram in Heegaard Floer theory. Thus, we obtain invariance as a (curved) type D structure over the free polynomial ring~$\Rpren$. The same proof also works over $\Aminus$, since we only allow handleslides of $\alpha$-curves over $\alpha$-curves other than the special $\alpha$-circle $\red S^1$.
\end{proof} \begin{figure} \centering \psset{unit=0.15} \begin{subfigure}[b]{0.26\textwidth}\centering \psset{unit=4} \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psarc[linecolor=violet, linewidth=\stringwidth]{<-c}(2,0){1.41432}{-225}{-135} \psarc[linecolor=darkgreen, linewidth=\stringwidth]{c->}(-2,0){1.41432}{-45}{45} \pscircle[linestyle=dotted, linewidth=\stringwidth](0,0){1.45} \rput[c](1.75;135){$\textcolor{darkgreen}{t_1}$} \rput[c](1.75;-135){$\textcolor{darkgreen}{t_1}$} \rput[c](1.75;45){$\textcolor{violet}{t_2}$} \rput[c](1.75;-45){$\textcolor{violet}{t_2}$} \end{pspicture} \end{subfigure} \quad \begin{subfigure}[b]{0.32\textwidth}\centering \psset{unit=4} \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psline[linecolor=violet, linewidth=\stringwidth]{c->}(1,-1)(-1,1) \psline[linecolor=white, linewidth=\stringwhite](-1,-1)(1,1) \psline[linecolor=darkgreen, linewidth=\stringwidth]{c->}(-1,-1)(1,1) \pscircle[linestyle=dotted, linewidth=\stringwidth](0,0){1.45} \rput[c](1.75;45){$\textcolor{darkgreen}{t_1}$} \rput[c](1.75;-135){$\textcolor{darkgreen}{t_1}$} \rput[c](1.75;135){$\textcolor{violet}{t_2}$} \rput[c](1.75;-45){$\textcolor{violet}{t_2}$} \end{pspicture} \end{subfigure} \quad \begin{subfigure}[b]{0.34\textwidth}\centering \psset{unit=4} \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psline[linecolor=darkgreen, linewidth=\stringwidth]{c->}(-1,-1)(1,1) \psline[linecolor=white, linewidth=\stringwhite](1,-1)(-1,1) \psline[linecolor=violet, linewidth=\stringwidth]{c->}(1,-1)(-1,1) \pscircle[linestyle=dotted, linewidth=\stringwidth](0,0){1.45} \rput[c](1.75;45){$\textcolor{darkgreen}{t_1}$} \rput[c](1.75;-135){$\textcolor{darkgreen}{t_1}$} \rput[c](1.75;135){$\textcolor{violet}{t_2}$} \rput[c](1.75;-45){$\textcolor{violet}{t_2}$} \end{pspicture} \end{subfigure} \bigskip\\ \begin{subfigure}[b]{0.26\textwidth}\centering \begin{pspicture}(-10.3,-10.3)(10.3,10.3) \psrotate(0,0){90}{ \psecurve[linecolor=blue](7,0)(0,0)(-7,0)(-8.3,8.3)(0,10)(8.3,8.3)(7,0)(0,0)(-7,0) } \psarc[linecolor=red](0,0){7}{0}{360} \rput(7;45){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(7;135){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \rput(7;225){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \rput(7;315){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \psdot(0,7) \psdot(0,-7) \rput[c](4.5;135){$\textcolor{darkgreen}{p_1}$} \rput[c](9.5;135){$\textcolor{darkgreen}{q_1}$} \rput[c](4.5;-135){$\textcolor{darkgreen}{p_2}$} \rput[c](9.5;-135){$\textcolor{darkgreen}{q_2}$} \rput[c](4.5;45){$\textcolor{violet}{p_4}$} \rput[c](9.5;45){$\textcolor{violet}{q_4}$} \rput[c](4.5;-45){$\textcolor{violet}{p_3}$} \rput[c](9.5;-45){$\textcolor{violet}{q_3}$} \rput[bl](0,7.5){$d$} \rput[tl](0,-7.5){$b$} \end{pspicture} \end{subfigure} \quad \begin{subfigure}[b]{0.32\textwidth}\centering \begin{pspicture}(-10.3,-10.3)(10.3,10.3) \psrotate(0,0){90}{ \psecurve[linecolor=blue](8,-0.2)(7,0)(2,2)(0,7)(-0.2,8) \psecurve[linecolor=blue](-4,4)(0,7)(-8.3,8.3)(-7,0)(-4,4) \psecurve[linecolor=blue](-8,0.2)(-7,0)(-2,-2)(0,-7)(0.2,-8) \psecurve[linecolor=blue](4,-4)(0,-7)(8.3,-8.3)(7,0)(4,-4) \psarc[linecolor=red](0,0){7}{0}{360} \rput(7;45){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(7;135){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \rput(7;225){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(7;315){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} } \psdot(0,7) \psdot(7,0) \psdot(0,-7) \psdot(-7,0) 
\rput[c](4.5;135){$\textcolor{violet}{p_1}$} \rput[c](9.5;135){$\textcolor{violet}{q_1}$} \rput[c](4.5;-135){$\textcolor{darkgreen}{p_2}$} \rput[c](9.5;-135){$\textcolor{darkgreen}{q_2}$} \rput[c](4.5;45){$\textcolor{darkgreen}{p_4}$} \rput[c](9.5;45){$\textcolor{darkgreen}{q_4}$} \rput[c](4.5;-45){$\textcolor{violet}{p_3}$} \rput[c](9.5;-45){$\textcolor{violet}{q_3}$} \rput[br](0,7.5){$d$} \rput[tl](7.5,0){$c$} \rput[tl](0,-7.5){$b$} \rput[br](-7.5,0){$a$} \end{pspicture} \end{subfigure} \quad \begin{subfigure}[b]{0.34\textwidth}\centering \begin{pspicture}(-10.3,-10.3)(10.3,10.3) \psecurve[linecolor=blue](8,-0.2)(7,0)(2,2)(0,7)(-0.2,8) \psecurve[linecolor=blue](-4,4)(0,7)(-8.3,8.3)(-7,0)(-4,4) \psecurve[linecolor=blue](-8,0.2)(-7,0)(-2,-2)(0,-7)(0.2,-8) \psecurve[linecolor=blue](4,-4)(0,-7)(8.3,-8.3)(7,0)(4,-4) \psarc[linecolor=red](0,0){7}{0}{360} \rput(7;45){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \rput(7;135){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(7;225){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \rput(7;315){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \psdot(0,7) \psdot(7,0) \psdot(0,-7) \psdot(-7,0) \rput[c](4.5;135){$\textcolor{violet}{p_1}$} \rput[c](9.5;135){$\textcolor{violet}{q_1}$} \rput[c](4.5;-135){$\textcolor{darkgreen}{p_2}$} \rput[c](9.5;-135){$\textcolor{darkgreen}{q_2}$} \rput[c](4.5;45){$\textcolor{darkgreen}{p_4}$} \rput[c](9.5;45){$\textcolor{darkgreen}{q_4}$} \rput[c](4.5;-45){$\textcolor{violet}{p_3}$} \rput[c](9.5;-45){$\textcolor{violet}{q_3}$} \rput[bl](0,7.5){$d$} \rput[bl](7.5,0){$c$} \rput[tr](0,-7.5){$b$} \rput[tr](-7.5,0){$a$} \end{pspicture} \end{subfigure} \bigskip\\ \begin{subfigure}[b]{0.26\textwidth}\centering $\begin{tikzcd}[row sep=1.4cm, column sep=0.5cm] \delta^{0}b^{(\textcolor{darkgreen}{0},\textcolor{violet}{0})} \arrow[leftarrow,bend left=12]{r}{p_{34}+q_{21}} & \delta^{0}d^{(\textcolor{darkgreen}{0},\textcolor{violet}{0})} \arrow[leftarrow,bend left=10]{l}{p_{12}+q_{43}} \end{tikzcd}$ \end{subfigure} \quad \begin{subfigure}[b]{0.32\textwidth}\centering $\begin{tikzcd}[row sep=1.4cm, column sep=0.5cm] \delta^{0}a^{(\textcolor{darkgreen}{\frac{1}{2}},\textcolor{violet}{-\frac{1}{2}})} \arrow[leftarrow,bend right=7,swap]{r}{p_{234}} \arrow[leftarrow,bend left=12]{d}{q_{143}} & \delta^{\frac{1}{2}}d^{(\textcolor{darkgreen}{\frac{1}{2}},\textcolor{violet}{\frac{1}{2}})} \arrow[leftarrow,bend right=7,swap]{l}{p_1} \arrow[leftarrow,bend left=12]{d}{q_4} \\ \delta^{\frac{1}{2}}b^{(\textcolor{darkgreen}{-\frac{1}{2}},\textcolor{violet}{-\frac{1}{2}})} \arrow[leftarrow,bend right=7,swap]{r}{p_3} \arrow[leftarrow,bend left=12]{u}{q_2} & \delta^{0}c^{(\textcolor{darkgreen}{-\frac{1}{2}},\textcolor{violet}{\frac{1}{2}})} \arrow[leftarrow,bend right=7,swap]{l}{p_{412}} \arrow[leftarrow,bend left=12]{u}{q_{321}} \end{tikzcd}$ \end{subfigure} \quad \begin{subfigure}[b]{0.34\textwidth}\centering $\begin{tikzcd}[row sep=1.4cm, column sep=0.35cm] \delta^{0}a^{(\textcolor{darkgreen}{-\frac{1}{2}},\textcolor{violet}{\frac{1}{2}})} \arrow[leftarrow,bend left=5,pos=0.8]{r}{q_1} \arrow[leftarrow,bend right=12,swap]{d}{p_2} & \delta^{-\frac{1}{2}}d^{(\textcolor{darkgreen}{-\frac{1}{2}},\textcolor{violet}{-\frac{1}{2}})} \arrow[leftarrow,bend left=5,pos=0.2]{l}{q_{432}} \arrow[leftarrow,bend right=12,swap]{d}{p_{123}} \\ \delta^{-\frac{1}{2}}b^{(\textcolor{darkgreen}{\frac{1}{2}},\textcolor{violet}{\frac{1}{2}})} \arrow[leftarrow,bend left=5]{r}{q_{214}} \arrow[leftarrow,bend 
right=12,swap]{u}{p_{341}} & \delta^{0}c^{(\textcolor{darkgreen}{\frac{1}{2}},\textcolor{violet}{-\frac{1}{2}})} \arrow[leftarrow,bend left=7]{l}{q_3} \arrow[leftarrow,bend right=12,swap]{u}{p_4} \end{tikzcd}$ \end{subfigure} \caption{Basic rational tangles, their Heegaard diagrams and peculiar modules. The superscripts of the generators specify the Alexander grading. Compare this to \protect\cite[Figure~17]{HDsForTangles}. }\label{fig:CFTdForSomeRatTangles} \end{figure} \begin{sidewaysfigure}[p] \vspace*{435pt} \centering \begin{subfigure}[b]{0.35\textwidth}\centering \psset{unit=0.8, linewidth=1.1pt} { \begin{pspicture}[showgrid=false](-5.2,-3.1)(3.2,3.1) \psset{linewidth=\stringwidth} \psecurve[linecolor=violet]{c-c}(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve[linecolor=violet]{<-c}(2,2)(0.97,2.24)(0,2)(-0.75,1)(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1) \psecurve[linecolor=darkgreen]{<-c}(-6,1.5)(-3.3,1.85)(-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psecurve[linecolor=white,linewidth=\stringwhite](0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5) \psecurve[linecolor=violet]{c-c}(0.75,-1)(0,-2)(-2.5,-1.5)(-3.25,0)(-2.5,1.5) \psecurve[linecolor=white,linewidth=\stringwhite](0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve[linecolor=white,linewidth=\stringwhite](0,2)(-0.75,1)(0.75,-1)(0,-2) \psecurve[linecolor=white,linewidth=\stringwhite](-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1) \psecurve[linecolor=violet]{c-c}(0.75,1)(-0.75,-1)(0,-2)(0.97,-2.24)(2,-2) \psecurve[linecolor=violet]{c-c}(0,2)(-0.75,1)(0.75,-1)(0,-2) \psecurve[linecolor=violet]{c-c}(-2.5,-1.5)(-3.25,0)(-2.5,1.5)(0,2)(0.75,1)(-0.75,-1) \psecurve[linecolor=white,linewidth=\stringwhite](-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psecurve[linecolor=darkgreen]{c-c}(-2.5,1.5)(-1.85,0)(-2.5,-1.5)(-3.3,-1.85)(-6,-1.5) \psline[linecolor=violet]{<-}(-3.25,-0.1)(-3.25,0.1) \pscircle[linestyle=dotted](-1,0){3.05} \uput{0.2}[45](0.97,2.24){$\textcolor{violet}{t_2}$} \uput{0.2}[-45](0.97,-2.24){$\textcolor{violet}{t_2}$} \uput{0.2}[135](-3.3,1.85){$\textcolor{darkgreen}{t_1}$} \uput{0.2}[-135](-3.3,-1.85){$\textcolor{darkgreen}{t_1}$} \uput{2.5}[180](-1,0){$\textcolor{red}{a}$} \uput{2.3}[-90](-1,0){$\textcolor{blue}{b}$} \uput{2.1}[0](-1,0){$\textcolor{darkgreen}{c}$} \uput{2.3}[90](-1,0){$\textcolor{gold}{d}$} \end{pspicture} } \caption{A tangle diagram for the $(2,-3)$-pretzel tangle.}\label{fig:HFTdmutationpretzeltangleT} \end{subfigure} \quad \begin{subfigure}[b]{0.6\textwidth}\centering \psset{unit=0.25} { \begin{pspicture}(-24,-11)(24,11) \rput(-12,0){\psrotate(0,0){-90}{ \psarc[linecolor=red](0,0){8}{0}{360} \SpecialCoor \rput(8;45){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(8;135){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(8;225){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \rput(8;315){\pscircle*[linecolor=white]{1}\pscircle[linecolor=darkgreen]{1}} \psecurve[linecolor=blue]% (10;130)(10;140)(8;155)% (8;-95)(11;-45)% (10.5;45)(8;70)% (8;-10)% (10;-45)(8;-70)% (8;120)(10;130)(10;140)(8;155)% \psdot(8;155)% \psdot(8;120)% \psdot(8;70)% \psdot(8;-10)% \psdot(8;-70)% \psdot(8;-95)% } \rput[b](8.7;65){$d$} \rput[l](8;30){~$x_1$} \rput[l](8;-20){~$x_2$} \rput[b](7.3;-100){$b$} \rput[l](8;-160){~$a_2$} \rput[l](8;-185){~$a_1$} \footnotesize \rput[t](7.1;110){$\textcolor{darkgreen}{p_1}$} \rput[t](9.8;110){$\textcolor{darkgreen}{q_1}$} \rput[c](8.8;-123){$\textcolor{darkgreen}{q_2}$} 
\rput[l](2;-90){$\textcolor{darkgreen}{p_2}$} \rput[c](3.5;90){\texttt{4}} \rput[c](7;-60){\texttt{2}} \rput[c](9;-80){\texttt{1}} \rput[c](9.5;42){\texttt{3}} } \rput(12,0){\psrotate(0,0){-90}{ \psarc[linecolor=red](0,0){8}{0}{360} \SpecialCoor \rput(8;45){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(8;135){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(8;225){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \rput(8;315){\pscircle*[linecolor=white]{1}\pscircle[linecolor=violet]{1}} \psecurve[linecolor=blue]% (10;-130)(10;-140)(8;-155)% (8;95)(11;45)% (10.5;-45)(8;-80)% (8;60) (9.5;45)(8;20) (8;-60)(9.5;-45)(10;45) (8;75)% (5.5;100)% (8;-120)(10;-130)(10;-140)(8;-155)% \psdot(8;-155)% \psdot(8;-120)% \psdot(8;-80)% \psdot(8;-60)% \psdot(8;20)% \psdot(8;60)% \psdot(8;75)% \psdot(8;95)% } \rput[b](8.7;-245){$d'$} \rput[r](8;-210){$y_1$~} \rput[r](8;-170){$y_2$~} \rput[b](7;-154){$y_3$} \rput[b](7.3;-70){$b'$} \rput[r](8;-29){$c_3$~} \rput[r](8;-15){$c_2$~} \rput[l](8;5){~$c_1$} \footnotesize \rput[t](7.5;70){$\textcolor{violet}{p_4}$} \rput[t](9.8;70){$\textcolor{violet}{q_4}$} \rput[t](3;-90){$\textcolor{violet}{p_3}$} \rput[l](9.5,-6){~$\textcolor{violet}{q_3}$} \rput[l](9.5,-7.5){~\texttt{1}} \rput[l](9.5,-9){~\texttt{5}} \psline[linestyle=dotted,dotsep=1pt](9.1;-45)(9.5,-6) \psline[linestyle=dotted,dotsep=1pt](8.7;-80)(9.5,-7.5) \psline[linestyle=dotted,dotsep=1pt](9.9;-80)(9.5,-9) \rput[c](4;90){\texttt{4}} \rput[c](1;90){\texttt{6}} \rput[c](7;-120){\texttt{2}} \rput[c](9.5;138){\texttt{3}} } \psline[linestyle=dashed](-6.25,5.65685424949236)(6.25,5.65685424949236) \psline[linestyle=dashed](-6.25,-5.65685424949236)(6.25,-5.65685424949236) \end{pspicture} } \caption{A Heegaard diagram for the tangle on the left.} \label{fig:HFTdmutationpretzeltangleHD} \end{subfigure} \\ \begin{subfigure}[b]{\textwidth} \centering \includegraphics[height=200pt]{pictures/ihatetikzcrop.pdf} \caption{The peculiar module for the pretzel tangle above.}\label{fig:firstcomplex} \end{subfigure} \caption{The computation of a peculiar module for a non-rational tangle, see example~\ref{exa:HFTdpretzeltangle}.}\label{fig:HFTdmutationexample} \end{sidewaysfigure} \begin{example}[(rational tangles)]\label{exa:CFTdRatTang} Figure~\ref{fig:CFTdForSomeRatTangles} shows the peculiar modules of some very simple 4-ended tangles. As shown in \cite[Example~4.3]{HDsForTangles}, every rational tangle $T$ has a tangle Heegaard diagram with just a single $\beta$-curve. Thus, we only count bigons in the differential of the peculiar invariant, and only those that do not occupy both~$p_i$ and~$q_j$. By tightening the $\beta$-curve, we can assume that there are no honest differentials in $\CFTd(T)$, ie that every bigon covers some $p_i$ or $q_i$. Then $\CFTd(T)$ can be read off from this single $\beta$-curve as follows: the vertices of the graph of $\CFTd(T)$ correspond to intersection points of this $\beta$-curve with the $\alpha$-arcs. Its arrows come in pairs labelled by powers of $p$ or $q$. More precisely, for each component of the $\beta$-curve minus the $\alpha$-arcs, we obtain an arrow pair connecting the vertices corresponding to the ends of this component; this arrow pair is labelled by powers of $p$ if the component goes via the front component of the 4-punctured sphere minus the $\alpha$-arcs and by powers of $q$ otherwise. Conversely, the $\beta$-curve can be read off from $\CFTd(T)$. 
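As a consistency check, one can verify the curvature relation for these modules directly. For the leftmost module in Figure~\ref{fig:CFTdForSomeRatTangles}, say, the differential is given by $\partial b=d\otimes(p_{12}+q_{43})$ and $\partial d=b\otimes(p_{34}+q_{21})$; recalling that $b$ and $d$ occupy the sites $s_2$ and $s_4$, respectively, we compute \[\partial^2 b=b\otimes(p_{34}+q_{21})\cdot(p_{12}+q_{43})=b\otimes(p_{3412}+\iota_2.U_3U_4+\iota_2.U_1U_2+q_{2143})=b\otimes\iota_2.(p^4+q^4)\] in $\Ad$, since the two $U$-terms vanish there; similarly, $\partial^2 d=d\otimes\iota_4.(p^4+q^4)$.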
So in this case, we might actually view the $\beta$-curve as the invariant associated with the tangle. \end{example} \begin{figure}[t]\centering \begin{subfigure}[b]{\textwidth}\centering \psset{unit=1.5} { \begin{pspicture}(-3.6,-1.6)(3.6,1.6) {\psset{linecolor=lightgray} \psframe*(-2.5,0.5)(-3.5,1.5) \psframe*(-0.5,0.5)(-1.5,1.5) \psframe*(0.5,0.5)(1.5,1.5) \psframe*(2.5,0.5)(3.5,1.5) \psframe*(-1.5,-0.5)(-2.5,0.5) \psframe*(-0.5,-0.5)(0.5,0.5) \psframe*(1.5,-0.5)(2.5,0.5) \psframe*(-2.5,-0.5)(-3.5,-1.5) \psframe*(-0.5,-0.5)(-1.5,-1.5) \psframe*(0.5,-0.5)(1.5,-1.5) \psframe*(2.5,-0.5)(3.5,-1.5) } \psline(3,0)(3,-1) \psline(2,1)(1,1) \psline(0,1)(-1,1) \psline(-2,1)(-3,1) \psline(2,-1)(1.25,0) \psline(1,-1)(0.75,0) \psline(0,-1)(-0.75,0) \psline(-1,-1)(-1.25,0) \psline(-2,-1)(-3,0) {\psset{linestyle=dashed,dash=4pt 2pt} \psline(-3,0)(-3,1) \psline(-2,-1)(-1,-1) \psline(0,-1)(1,-1) \psline(2,-1)(3,-1) \psline(-2,1)(-1.25,0) \psline(-1,1)(-0.75,0) \psline(0,1)(0.75,0) \psline(1,1)(1.25,0) \psline(2,1)(3,0) } \psdots[linecolor=red](-3,0)(-1.25,0)(-0.75,0)(0.75,0)(1.25,0)(3,0) \psdots[linecolor=blue](-1,1)(1,1)(3,-1) \psdots[linecolor=gold](-1,-1)(1,-1)(-3,1) \psdots[linecolor=darkgreen](-2,1)(-2,-1)(0,1)(0,-1)(2,1)(2,-1) { \uput{0.1}[180](-3,0){$-\tfrac{3}{2}$} \uput{0.1}[205](-1.25,0){$-\tfrac{3}{2}$} \uput{0.1}[25](-0.75,0){$-\tfrac{3}{2}$} \uput{0.1}[205](0.75,0){$-\tfrac{3}{2}$} \uput{0.1}[25](1.25,0){$-\tfrac{3}{2}$} \uput{0.1}[0](3,0){$-\tfrac{3}{2}$} \uput{0.1}[65](-2,1){$-\tfrac{3}{2}$} \uput{0.1}[-115](-2,-1){$-\tfrac{3}{2}$} \uput{0.1}[65](0,1){$-\tfrac{3}{2}$} \uput{0.1}[-115](0,-1){$-\tfrac{3}{2}$} \uput{0.1}[65](2,1){$-\tfrac{3}{2}$} \uput{0.1}[-115](2,-1){$-\tfrac{3}{2}$} \uput{0.1}[90](-1,1){$-1$} \uput{0.1}[90](1,1){$-1$} \uput{0.1}[-45](3,-1){$-2$} \uput{0.1}[-90](-1,-1){$-1$} \uput{0.1}[-90](1,-1){$-1$} \uput{0.1}[135](-3,1){$-2$} } \end{pspicture} } \caption{A schematic picture of the result. Generators correspond to vertices, arranged according to their Alexander grading and labelled by their $\delta$-grading. 
The dotted edges correspond to pairs of arrows labelled by powers of $q$, the solid ones to pairs of arrows labelled by powers of $p$.}\label{fig:examplesimplifiedgraph} \end{subfigure} \\ \psset{unit=1.3} \begin{subfigure}[b]{0.3\textwidth}\centering { \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psecurve(1.2,1)(0,1.4)(-1.2,1)(1.4,1)(0,0.6)(-1.4,1)(1.2,1)(0,1.4)(-1.2,1) { \psframe*[linecolor=white](-2,-2)(-1,2) \psframe*[linecolor=white](1,-2)(2,2) \psframe*[linecolor=white](-2,1)(2,2) \psframe*[linecolor=white](-2,-2)(2,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) } \psecurve[linestyle=dashed,dash=4pt 2pt](1.2,1)(0,1.4)(-1.2,1)(1.4,1)(0,0.6)(-1.4,1)(1.2,1)(0,1.4)(-1.2,1) \psset{dotsize=5pt} \psdot[linecolor=red](-1,0.8) \psdot[linecolor=red](-1,0.663) \psdot[linecolor=gold](0.125,1) \psdot[linecolor=gold](-0.125,1) \psdot[linecolor=darkgreen](1,0.8) \psdot[linecolor=darkgreen](1,0.663) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } \caption{}\label{fig:loopbottom} \end{subfigure} \quad \begin{subfigure}[b]{0.3\textwidth}\centering \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psecurve(-0.8,-1.2)(-1.2,-1.2)(1.2,1.1)(-1.2,0.9)(0,1.4)(1.4,1.2)(-0.8,-1.2)(-1.2,-1.2)(1.2,1.1) { \psframe*[linecolor=white](-2,-2)(-1,2) \psframe*[linecolor=white](1,-2)(2,2) \psframe*[linecolor=white](-2,1)(2,2) \psframe*[linecolor=white](-2,-2)(2,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) } \psecurve[linestyle=dashed,dash=4pt 2pt](-0.8,-1.2)(-1.2,-1.2)(1.2,1.1)(-1.2,0.9)(0,1.4)(1.4,1.2)(-0.8,-1.2)(-1.2,-1.2)(1.2,1.1) \psset{dotsize=5pt} \psdot[linecolor=red](-1,0.724) \psdot[linecolor=red](-1,-0.635) \psdot[linecolor=gold](-0.027,1) \psdot[linecolor=darkgreen](1,0.51) \psdot[linecolor=darkgreen](1,-0.03) \psdot[linecolor=blue](-0.38,-1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} \caption{}\label{fig:loopmiddle} \end{subfigure} \quad \begin{subfigure}[b]{0.3\textwidth}\centering \begin{pspicture}(-1.5,-1.5)(1.5,1.5) \psrotate(0,0){180}{ \psecurve(1.2,1)(0,1.4)(-1.2,1)(1.4,1)(0,0.6)(-1.4,1)(1.2,1)(0,1.4)(-1.2,1) { \psframe*[linecolor=white](-2,-2)(-1,2) \psframe*[linecolor=white](1,-2)(2,2) \psframe*[linecolor=white](-2,1)(2,2) \psframe*[linecolor=white](-2,-2)(2,-1) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) } \psecurve[linestyle=dashed,dash=4pt 2pt](1.2,1)(0,1.4)(-1.2,1)(1.4,1)(0,0.6)(-1.4,1)(1.2,1)(0,1.4)(-1.2,1) \psset{dotsize=5pt} \psdot[linecolor=darkgreen](-1,0.8) \psdot[linecolor=darkgreen](-1,0.663) \psdot[linecolor=blue](0.125,1) \psdot[linecolor=blue](-0.125,1) \psdot[linecolor=red](1,0.8) \psdot[linecolor=red](1,0.663) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} } \end{pspicture} \caption{}\label{fig:looptop} \end{subfigure} 
\caption{The final result of the computation from Example~\ref{exa:HFTdpretzeltangle} and Figure~\ref{fig:HFTdmutationexample}. Subfigures (b)--(d) show the three loops from (a) separately on 4-punctured spheres.}\label{fig:mutationexamplefinalresult} \end{figure} \begin{example}[(the $(2,-3)$-pretzel tangle)]\label{exa:HFTdpretzeltangle} Figure~\ref{fig:HFTdmutationexample} shows the computation of $\CFTd(T)$ for the $(2,-3)$-pretzel tangle from Figure~\ref{fig:2m3pt}. In subsection~\ref{subsec:pretzels}, we will compute the peculiar modules for more general pretzel tangles, using more advanced methods, which we develop in section~\ref{sec:glueingrevisited} as a corollary of the general classification of peculiar modules; here, we compute everything from the definition, which works surprisingly well. First, we compute the generators of the complex. They are arranged according to their Alexander grading on an infinite chessboard: the generators in each of its fields have the same Alexander grading, and moving one field down, respectively to the right, increases the Alexander grading corresponding to the colour $\textcolor{darkgreen}{t_1}$, respectively $\textcolor{violet}{t_2}$, by 1. Next, we compute bigons and squares. Those correspond to the labelled arrows in Figure~\ref{fig:firstcomplex}. (The numbers in parentheses indicate which of the regions appear in the domain with which multiplicity. For example, the label $q_{432}(\texttt{15})$ of the arrow $\textcolor{red}{a_1y_1}\rightarrow\textcolor{gold}{x_2d'}$ means that the corresponding domain is given by the regions in Figure~\ref{fig:HFTdmutationpretzeltangleHD} labelled $q_4$, $q_3$, $q_2$, $\texttt{1}$ and $\texttt{5}$, each with multiplicity 1.) But there are also other contributing domains. Grading constraints tell us that we can only get additional morphisms between those generators which are connected by the other arrows. In principle, those could point in both directions. However, in each case, the connecting domains in one direction either have negative multiplicities or occupy both $p_i$s and $q_i$s, so we can only get arrows in one direction. From this and the $\partial^2$-relation, we can deduce that all solid arrows contribute. There are only eight remaining arrows (the dotted ones) and they can only appear in pairs. But it is easy to see that we can homotope those dotted arrows away (using the Clean-Up Lemma for curved type D structures), so in any case, the complex is homotopic to the invariant consisting of the solid arrows only. We can then apply the Cancellation Lemma. We obtain a complex in which every arrow is paired with another one going in the opposite direction and every generator is connected along the arrows to exactly two other generators -- just as for rational tangles! A schematic picture of this complex is shown in Figure~\ref{fig:examplesimplifiedgraph}, where these arrow pairs have been replaced by single unoriented edges, such that we obtain a collection of loops. In Figures~\ref{fig:mutationexamplefinalresult}(b)--(d), the loops have been transferred onto separate 4-punctured spheres in such a way that the vertices lie on the four arcs that connect the punctures and the unoriented edges lie on the front or back of the spheres, depending on whether they correspond to arrow pairs labelled by powers of $p$ or $q$. The meaning of both of these representations as loops will be discussed in section~\ref{sec:glueingrevisited}.
For the moment, they are just a convenient way to see certain symmetries. \end{example} \begin{figure} \centering \begin{subfigure}[b]{0.34\textwidth}\centering \psset{unit=0.3} \bigskip \begin{pspicture}(-8.01,-3.01)(8.01,3.01) \pscircle[linestyle=dotted, linewidth=\stringwidth](0,0){3} \pscircle[fillcolor=white,fillstyle=solid,linecolor=lightgray](0,0){1.5} \rput(0,0){$T$} \rput(2.25;180){$a$} \rput(2.25;0){$c$} \psecurve[linecolor=white,linewidth=\stringwhite](0,0)(1.1;-135)(3;-140)(4;-150)(-4.5,0)(4;150)(3;140)(1.1;135)(0,0) \psecurve[linecolor=gray, linewidth=\stringwidth](0,0)(1.1;-135)(3;-140)(4;-150) \psecurve[linecolor=gray, linewidth=\stringwidth](4;150)(3;140)(1.1;135)(0,0) \psecurve[linecolor=white,linewidth=\stringwhite](0,0)(1.1;45)(3;40)(4;30)(4.5,0)(4;-30)(3;-40)(1.1;-45)(0,0) \psecurve[linecolor=gray, linewidth=\stringwidth](0,0)(1.1;45)(3;40)(4;30) \psecurve[linecolor=gray, linewidth=\stringwidth](4;-30)(3;-40)(1.1;-45)(0,0) \psecurve[linewidth=\stringwidth]{C-C}(1.1;-135)(3;-140)(4;-150)(-4.5,0)(4;150)(3;140)(1.1;135) \psecurve[linewidth=\stringwidth]{C-C}(1.1;45)(3;40)(4;30)(4.5,0)(4;-30)(3;-40)(1.1;-45) \end{pspicture} \caption{A link obtained as the closure of a tangle $T$.}\label{fig:ClosureDiagram} \end{subfigure} \quad \begin{subfigure}[b]{0.6\textwidth}\centering {\vspace*{-5pt} $ \begin{tikzcd}[row sep=0.5cm, column sep=0.7cm] & \textcolor{gold}{d}\phantom{\delta^{\frac{1}{2}}}\nmathphantom{\delta^{\frac{1}{2}}} \arrow[bend left=10,leftarrow]{dr}[inner sep=1pt]{q_4} \arrow[bend right=10,swap]{dr}[inner sep=1pt]{p_4} \\ \textcolor{red}{a}\phantom{t^{A(p_{12})}}\nmathphantom{t^{A(p_{12})}} \arrow[bend left=10,leftarrow]{ur}[inner sep=1pt]{q_1} \arrow[bend right=10,swap]{ur}[inner sep=1pt]{p_1} & & \textcolor{darkgreen}{c}\phantom{t^{A(p_{12})}}\nmathphantom{t^{A(p_{12})}} \arrow[bend left=10,leftarrow]{dl}[inner sep=1pt]{q_3} \arrow[bend right=10,swap]{dl}[inner sep=1pt]{p_3} \\ & \textcolor{blue}{b}\phantom{\delta^{\frac{1}{2}}}\nmathphantom{\delta^{\frac{1}{2}}} \arrow[bend left=10,leftarrow]{ul}[inner sep=1pt]{q_2} \arrow[bend right=10,swap]{ul}[inner sep=1pt]{p_2} \end{tikzcd} \quad\raisebox{-3pt}{$\longrightarrow$}\quad \begin{tikzcd}[row sep=0.5cm, column sep=-0.6cm] & t^{(0,0)}\delta^{1}\mathbb{F}_2 \\ t^{A(p_{2})}\delta^{\frac{1}{2}}\mathbb{F}_2 \arrow{ur}{1} && t^{A(q_{3})}\delta^{\frac{1}{2}}\mathbb{F}_2 \arrow[leftarrow]{dl}{1} \arrow[swap]{ul}{1} \\ & t^{(0,0)}\delta^{0}\mathbb{F}_2 \arrow{ul}{1} \end{tikzcd} $ } \caption{The functor $\Omega'$.}\label{fig:ClosureFunctor} \end{subfigure} \caption{Illustration of Proposition~\ref{prop:LazyClosing}.}\label{fig:Closure} \end{figure} To state the next proposition, we first need to introduce some notation: let \(T\) be a 4-ended tangle and \(L\) the link obtained by closing \(T\) at the sites \(a\) and \(c\) as shown in Figure~\ref{fig:ClosureDiagram}. We distinguish two cases: either the two open components of \(T\) belong to distinct components of \(L\) (case 1) or they belong to the same component of \(L\) (case 2). Consider the quotient homomorphism \begin{align*} \omega\co\Ad\rightarrow\mathbb{F}_2,&\quad p_1,p_2,q_3,q_4\mapsto 1,\quad q_1,q_2,p_3,p_4\mapsto 0. 
\end{align*} We can promote this map to a functor $\Omega'$ from $\Ad$ to the category of bigraded vector spaces which on objects is defined by \begin{align*} \textcolor{red}{a}&\mapsto t^{A(p_2)}\delta^{\frac{1}{2}}\mathbb{F}_2& \textcolor{blue}{b}&\mapsto t^{(0,0)}\delta^{0}\mathbb{F}_2& \textcolor{darkgreen}{c}&\mapsto t^{A(q_3)}\delta^{\frac{1}{2}}\mathbb{F}_2& \textcolor{gold}{d}&\mapsto t^{(0,0)}\delta^{1}\mathbb{F}_2, \end{align*} where $$t^{(a_1,a_2)}= \begin{cases*} t_1^{a_1}t_2^{a_2} & \text{(case 1)}\\ t^{a_1+a_2} & \text{(case 2).} \end{cases*} $$ For an illustration, see Figure~\ref{fig:ClosureFunctor}. Clearly, the functor $\Omega'$ preserves the $\delta$-grading of morphisms. It also preserves their Alexander grading, since in case~1, $$A(p_1)+A(p_2)=(0,0)=A(q_3)+A(q_4),$$ and in case 2, $t_1$ and $t_2$ are identified. So $\Omega'$ preserves the bigrading. Therefore, it induces a well-defined functor between the categories of peculiar modules and bigraded chain complexes, which we denote by $\Omega$. \begin{proposition}\label{prop:LazyClosing} With the notation as above, $$ \Omega(\CFTd(T))= \begin{cases*} \CFL(L)& \text{(case 1)}\\ \CFL(L)\otimes V & \text{(case 2)}\\ \end{cases*} $$ where \(V\) is the 2-dimensional vector space over \(\mathbb{F}_2\) supported in Alexander gradings \(t\) and~\(t^{-1}\) and in identical \(\delta\)-gradings. Similar formulas hold for cyclic permutations of the indices. \end{proposition} \begin{proof} If we delete those basepoints in a peculiar Heegaard diagram of $T$ that correspond to the variables that $\omega$ sends to 1, we obtain a Heegaard diagram for $L$. In $\CFL(L)$, we only count those holomorphic curves that stay away from the remaining basepoints, so we need to set those algebra elements equal to 0. On the relative gradings of the generators, this has the same effect as the functor $\Omega$. The additional tensor factor $V$ in case 2 comes from the fact that the new closed component has four instead of the usual two basepoints. \end{proof} \begin{remark}\label{rem:LazyClosing} One can obtain the minus version of link Floer homology from the generalised peculiar modules using the same idea as in the proof of the previous proposition. \end{remark} \begin{definition}\label{def:reversedmirror} Let $T$ be an oriented 4-ended tangle in a $\mathbb{Z}$-homology 3-ball $M$. Let $\m(T)$ be the tangle obtained by reversing the orientation of $M$, while preserving the labelling and orientation of $T$. We call $\m(T)$ the \textbf{mirror} of $T$. Note that the front and back components of $\partial M\smallsetminus\textcolor{red}{S^1}$ are swapped. If $T$ is a tangle in $B^3$, a diagram of $\m(T)$ is obtained from one of $T$ by switching all crossings. Furthermore, let $\rr(T)$ be the tangle obtained by reversing the orientation of all components of $T$. We write $\mr(T)$ for $\m(\rr(T))=\rr(\m(T))$ and call it the \textbf{reversed mirror} of $T$. If $X$ is a peculiar module, let $\m(X)$ be the peculiar module obtained from $X$ by reversing the direction of all arrows, inverting the gradings of all generators and switching the labels $p_i$ and $q_i$ for each $i=1,2,3,4$. Furthermore, if $X$ is bigraded, let $\rr(X)$ be the bigraded peculiar module obtained from $X$ by reversing the Alexander gradings of the generators, switching the algebra elements $U'_j$ and $V'_j$ and reversing the Alexander gradings $A_{t_1}$ and $A_{t_2}$ of the underlying algebra. We write $\mr(X)$ for $\m(\rr(X))=\rr(\m(X))$. 
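To illustrate these operations on the level of graphs (a schematic example, not tied to any particular tangle): a single arrow $x\xrightarrow{\,p_1\,}y$ in $X$ corresponds to an arrow $y\xrightarrow{\,q_1\,}x$ in $\m(X)$, with the gradings of $x$ and $y$ inverted, whereas $\rr(X)$ keeps the arrow as it is and only reverses the Alexander gradings.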
\end{definition} The following proposition should be compared to the analogous results \cite[Propositions~6.5 and~6.6, Proposition~2.3 and Corollary~2.7]{HDsForTangles} for the non-glueable Heegaard Floer theory~$\HFT$ and the polynomial invariant~$\nabla^s_T$. \begin{proposition}\label{prop:reversedmirror} For any 4-ended tangle \(T\), $$\CFTminus(\m(T))=\m(\CFTminus(T)) \text{ and hence } \CFTd(\m(T))=\m(\CFTd(T)).$$ Similarly, $$\CFTminus(\rr(T))=\rr(\CFTminus(T)) \text{ and hence } \CFTd(\rr(T))=\rr(\CFTd(T)).$$ \end{proposition} \begin{proof} The first part follows from the usual arguments by changing the orientation of the Heegaard surface. If we embed the Heegaard surface in $\mathbb{R}^3$, we may think of this operation as a reflection along a plane. Since the front and back components of $\partial M\smallsetminus\textcolor{red}{S^1}$ are swapped, so are the labels $p_i$ and $q_i$. If $\x$ and $\y$ are two generators, the moduli space $\mathcal{M}(\x,\y)$ for the original Heegaard diagram becomes $\mathcal{M}(\y,\x)$ for the new diagram. This has the effect of changing the direction of the arrows in the complexes computed from these diagrams. Naturally, this reverses the relative $\delta$-grading on generators; the Alexander gradings are reversed as well, since the orientation of the components of $T$ is preserved. The second part follows from the fact that orientation reversal of the tangle components simply reverses the orientation of the basepoints in a Heegaard diagram of $T$. Orientation reversal of the open components of $T$ changes the ordered matching associated with $T$, which amounts to reversing the corresponding Alexander gradings in the algebra. Orientation reversal of the closed components of $T$ also involves switching the labels $w_j$ and $z_j$ of those components, so we need to switch the algebra elements $U'_j$ and $V'_j$. \end{proof} We end this section with some simple observations about $\CFTd$. \begin{observation} By definition, the Alexander grading corresponding to a closed tangle component vanishes on $\Ad$. Also, the differential of a peculiar module preserves the Alexander grading by Lemma~\ref{lem:CFTdGradings}. Thus, $\CFTd(T)$ decomposes as a direct sum over the Alexander gradings of the closed tangle components. \end{observation} \begin{observation} In \cite[Theorem~1.3]{OSHFKalt} and \cite[Theorem~1.3]{OSHFL}, Ozsv\'{a}th and Szab\'{o} showed that the link Floer homology $\HFL$ of alternating links is completely determined by the Alexander polynomial (and, to be precise, the signature, but this is only needed to fix the absolute homological and $\delta$-grading). The proof generalises immediately to $\HFT$, using the generalised clock Theorem~\cite[Theorem~1.13]{MyEssay}. \end{observation} \begin{question} Given an alternating tangle \(T\), is the bigraded chain homotopy type of \(\CFTd(T)\) determined by \(\nabla_T^s\)? \end{question} \section{Preliminaries: Algebraic structures from dg categories}\label{sec:AlgStructFromGDCats} In this paper, we often work in categories of various algebraic structures, namely type~D and curved type~D structures, but also type~A structures and various bimodules. In all settings, we often want to simplify these structures by replacing them by homotopy equivalent ones. The main goal of this section is to develop some tools for dealing with this problem, namely the Cancellation Lemma (\ref{lem:AbstractCancellation}) and the Clean-Up Lemma (\ref{lem:AbstractCleanUp}). 
The former can be used to reduce the number of generators of an algebraic structure, essentially by doing Gaussian elimination as in~\cite[Lemma~3.2]{BarNatanBurgosSoto}; the latter can be used to make the structure maps ``look nicer'', essentially by changing the basis. In the category of ordinary chain complexes, both tools will be familiar to the reader as easy exercises in linear algebra. So it might not be too surprising that they also work in quite general settings. We will spend the first part of this section explaining a general construction which turns any differential graded category into another such category in which the lemmas hold in sufficient generality for our purposes, see Definitions~\ref{def:CatOfMatrices} and~\ref{def:CatOfComplexes}. Next, we show that the various different algebraic structures mentioned above arise naturally from this general construction. We also study its functoriality properties and interpret the $\boxtimes$-tensor product between type~A and type~D structures in this framework. Finally, we state and prove the Cancellation and Clean-Up Lemmas. For simplicity, we only work over the field $\mathbb{F}_2=\mathbb{Z}/2$, so we do not need to keep track of signs. However, with the correct sign conventions, most (if not all) statements should also hold over fields of arbitrary characteristic. \subsection{The general construction} \begin{definition} Let $\Com$ be the category of $\mathbb{Z}$-graded chain complexes over $\mathbb{F}_2$ and grading preserving chain maps between them. A \textbf{differential graded (dg) category}~$\mathcal{C}$ over~$\mathbb{F}_2$ is an enriched category over $\Com$. To spell this out more explicitly, the hom-objects are $\mathbb{Z}$-graded $\mathbb{F}_2$-vector spaces, \[\Mor(A,B)=\bigoplus_{i\in\mathbb{Z}}\Mor_i(A,B)\] endowed with differentials \[\partial_i\co\Mor_i(A,B)\rightarrow \Mor_{i-1}(A,B),\] ie vector space homomorphisms satisfying $\partial_{i-1}\partial_i=0$ and \begin{equation}\label{eqn:CompatibleWithComposition} \partial\circ m=m\circ(\partial\otimes \id+\id\otimes \partial), \end{equation} where \[m\co \Mor_j(B,C)\otimes\Mor_i(A,B)\rightarrow\Mor_{i+j}(A,C)\] denotes composition in $\mathcal{C}$, which is associative and unital. In other words, $\partial$ satisfies the Leibniz rule with respect to composition: $\partial(g\circ f)=\partial(g)\circ f+g\circ\partial(f)$ for any composable morphisms $f$ and $g$. For more details on enriched categories, see for example \cite{cathtpy}. Note that the identity morphisms have degree zero and lie in the kernel of~$\partial$. \end{definition} \begin{definition}\label{def:UnderlyingOrdinaryCat}\cite[Definition~3.4.5]{cathtpy} Given an enriched category $\mathcal{C}$ over some monoidal category $\mathcal{V}$, the \textbf{underlying ordinary category} $\mathcal{C}_0$ of $\mathcal{C}$ has the same objects as $\mathcal{C}$ and its hom-sets are defined by \[\mathcal{C}_0(A,B):=\Mor_\mathcal{V}(1_\mathcal{V},\mathcal{C}(A,B)).\] \end{definition} \begin{example}\label{exa:UnderlyingOrdCat} Let $\mathcal{C}$ be a dg category. The unit in $\Com$ is the complex $0\rightarrow\mathbb{F}_2\rightarrow0$, supported in homological degree 0, and the morphisms in $\Com$ are grading preserving. Hence, the hom-sets of $\mathcal{C}_0$ consist of the elements in the kernel of $\partial_0$. Next, consider the enriched category $H_\ast(\mathcal{C})$ over the category of graded vector spaces and grading preserving morphisms between them, obtained from $\mathcal{C}$ by replacing the hom-objects by their homologies with respect to the differential $\partial$. By passing to the underlying ordinary category, we pick out the degree~0 morphisms in $H_\ast(\mathcal{C})$. 
Therefore, we denote this category by $H_0(\mathcal{C})$. Since the hom-sets in $H_0(\mathcal{C})$ are just quotients of those in $\mathcal{C}_0$, we now get the usual notions of chain homotopy of morphisms and of chain homotopy equivalence of objects. The reason why we need to pass to the underlying category is that otherwise, two objects could be (chain) isomorphic through grading shifting morphisms. \end{example} \begin{definition}\label{def:CatOfMatrices} (cp.~\cite[section~6]{BarNatanKhT} and~\cite[section~I.3k]{Seidel}) Given a dg category~$\mathcal{C}$, we define another dg category $\Mat(\mathcal{C})$ as follows. Its objects are formal direct sums \[\bigoplus_{i\in I}O_i[n_i],\] where $I$ is some finite index set and $O_i[n_i]$ denotes the object $O_i\in\ob(\mathcal{C})$ with a formal grading shift by an integer $n_i$. Morphisms are given by \[\Mor_n(\bigoplus_{i\in I}O_i[n_i],\bigoplus_{j\in J}O_j[n_j]):=\bigoplus_{(i,j)\in I\times J}\Mor_{n+n_i-n_j}(O_i,O_j).\] Compositions and differentials in $\Mat(\mathcal{C})$ are induced by those in $\mathcal{C}$. \end{definition} \begin{definition}\label{def:CatOfComplexes}(cp.~\cite[section~I.3l]{Seidel}) Given a differential graded category $\mathcal{C}$, we define an auxiliary category $\Cxpre(\mathcal{C})$, \textbf{the category of pre-complexes}, which is an enriched category over the category of $\mathbb{Z}$-graded vector spaces and grading preserving morphisms between them. Its objects are pairs $(O,d_O)$, where $O\in\ob(\mathcal{C})$ and $d_O\in\Mor_{-1}(O,O)$. The hom-objects are the same as in $\mathcal{C}$, \[\Mor((O,d_O),(O',d_{O'}))=\Mor(O,O'),\] viewed as $\mathbb{Z}$-graded vector spaces. On these, we can define a map \[D\co\Mor_{i}((O,d_O),(O',d_{O'}))\rightarrow \Mor_{i-1}((O,d_O),(O',d_{O'}))\] by setting \[D(f):=d_{O'}\circ f+f \circ d_O+\partial(f).\] We would like $D$ to be a differential in order to turn $\Cxpre(\mathcal{C})$ into a dg category. However, this only works in general if we restrict ourselves to a full subcategory of $\Cxpre(\mathcal{C})$. It is easy to check that $D$ is always compatible with multiplication in the sense of (\ref{eqn:CompatibleWithComposition}). So $D$ is a differential iff \[D^2(f)=(d_{O'}^2+\partial(d_{O'}))\circ f +f\circ (d_O^2+\partial(d_O))\] vanishes. (This identity follows by expanding $D^2(f)$ and observing that over $\mathbb{F}_2$, the mixed terms $d_{O'}\circ f\circ d_O$, $d_{O'}\circ\partial(f)$ and $\partial(f)\circ d_O$ each appear twice and hence cancel.) This is, of course, the case for the full subcategory $\Cx^{0}(\mathcal{C})$ of $\Cxpre(\mathcal{C})$ consisting of those objects $(O,d_O)$ for which \begin{equation} d^2_O+\partial(d_O) \tag{$\ast$}\label{eqn:d2term} \end{equation} vanishes. However, in some situations, other conditions on (\ref{eqn:d2term}) also work. For example, if we replace $\Com$ by the category of $\mathbb{Z}/2$-graded chain complexes, we can restrict to those objects $(O,d_O)$ for which (\ref{eqn:d2term}) is equal to the identity. Also, if the hom-objects are bimodules over an algebra~$\mathcal{A}$, we can ask (\ref{eqn:d2term}) to be equal to $a\cdot\id_O$ for a fixed central algebra element $a$ (of degree $-2$) which commutes with all morphisms~$f$. In both cases, $D$ will be a differential. We call any such full subcategory a \textbf{category of complexes}, denoted by $\Cx^{\ast}(\mathcal{C})$, where $\ast\in\{0,1,a\}$ is the value of~(\ref{eqn:d2term}). By construction, $\Cx^{\ast}(\mathcal{C})$ is a dg category. \end{definition} \begin{remark}\label{exa:GraphsForAlgebraicStructures} As usual, we can associate a directed graph to a category, where objects correspond to vertices and arrows to morphisms. 
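More explicitly, after fixing bases of the hom-objects, the vertices of this graph are the objects of the category and each basis element of $\Mor(A,B)$ contributes an arrow from the vertex $A$ to the vertex $B$, labelled by that basis element.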
In the same way, we can think of complexes in $\Cx^{\ast}(\Mat(\mathcal{C}))$ as graphs. \end{remark} The point of the construction above is that we can interpret the categories of type~D, type~A, type~AA and curved type~D structures as instances of $\Cx^{\ast}(\Mat(\mathcal{C}))$ for suitable choices of relatively simple differential graded categories $\mathcal{C}$. But let us start with an even simpler example: ordinary chain complexes.\bigskip \textbf{Note of warning.} In the following examples, our definitions only coincide with the usual ones after passing to the underlying ordinary categories, see Example~\ref{exa:UnderlyingOrdCat}. The advantage of our point of view is that the conditions we usually impose on morphism and chain homotopies for various algebraic structures arise naturally by viewing those morphisms as elements of chain complexes. \begin{example}[(ordinary chain complexes over $\mathbb{F}_2$)] \label{exa:HighBrowDefChainCxs} Let $\mathcal{C}$ be the category with a single object $\bullet$ in grading 0, $\Mor(\bullet,\bullet)=\mathbb{F}_2$ and vanishing differential. Then (the underlying ordinary category of) $\Cx^{0}(\Mat(\mathcal{C}))$ is isomorphic to the dg category of $\mathbb{Z}$-graded chain complexes over $\mathbb{F}_2$ with a preferred choice of basis via the identification of $\bullet$ with $\mathbb{F}_2$, considered as the 1-dimensional vector space over $\mathbb{F}_2$. Thus, $\Cx^{0}(\Mat(\mathcal{C}))$ is equivalent to $\Com$. \end{example} \begin{example}[(type~D structures over dg $\mathbb{F}_2$-algebras)]\label{exa:HighBrowDefTypeDoverF2} Let $\mathcal{A}$ be a differential graded algebra over $\mathbb{F}_2$. Let $\mathcal{C}$ be the category with a single object~$\bullet$ and morphisms being elements in $\mathcal{A}$. Composition is multiplication in $\mathcal{A}$ and the differential~$\partial$ is induced by the differential on $\mathcal{A}$. We define the category of type~D structures over $\mathcal{A}$ by $\Cx^{0}(\Mat(\mathcal{C}))$. (Again, note that we need to pass to the underlying ordinary category to obtain the definitions in \cite{Zarev} and \cite{LOT}.) \end{example} \begin{example}[(type~D structures over dg $\mathcal{I}$-algebras)]\label{exa:HighBrowDefTypeDoverI} Let us assume that $\mathcal{A}$ is an algebra over some ring $\mathcal{I}\subseteq\mathcal{A}$ of idempotents and fix a basis $\{i_j\}_{j\in J}$ of idempotents of $\mathcal{I}$, where $J$ is some index set. Let $\mathcal{C}^D_\mathcal{A}$ be the category with one object for each basis element of $\mathcal{I}$, and for any two such elements $i_1$ and $i_2$, let $\Mor(i_1,i_2):=i_2.\mathcal{A}.i_1$. Again, composition is multiplication in $\mathcal{A}$ and the differential $\partial$ is induced by the differential on $\mathcal{A}$. We call $\Cx^{0}(\Mat(\mathcal{C}^D_\mathcal{A}))$ the category of (right) type~D structures over the dg $\mathcal{I}$-algebra $\mathcal{A}$. To obtain the category of \emph{left} type~D structures, we only need to change the morphism spaces to $\Mor(i_1,i_2):=i_1.\mathcal{A}.i_2$ with the obvious multiplication; however, we usually work with right type D structures in this paper, since we interpret algebra elements as functions and thus read them from right to left. \end{example} \begin{remark}\label{rem:HighBrowNoBasisForI} \textit{A priori}, the definition in the previous example depends on a choice of basis for~$\mathcal{I}$. In all examples in this paper, there is a natural choice of such a basis, so this is not an issue. 
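(For instance, the peculiar algebra $\Ad$ comes with the natural basis of idempotents corresponding to the four sites $\textcolor{red}{a}$, $\textcolor{blue}{b}$, $\textcolor{darkgreen}{c}$ and $\textcolor{gold}{d}$.)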
However, we can replace $\mathcal{C}^D_\mathcal{A}$ above by the enlarged category $\overline{\mathcal{C}^D_\mathcal{A}}$, where there is an object for \textit{every} element in $\mathcal{I}$. Then $\mathcal{C}^D_\mathcal{A}$ is a full subcategory of $\overline{\mathcal{C}^D_\mathcal{A}}$ and it is not hard to see that $\Mat(\mathcal{C}^D_\mathcal{A})$ and $\Mat(\overline{\mathcal{C}^D_\mathcal{A}})$ are equivalent. Now, the construction of the category of complexes is functorial (in the category of dg categories), so after all, the definition above does not depend on a basis for $\mathcal{I}$. \end{remark} \begin{example}[(curved type~D structures over dg $\mathcal{I}$-algebras)]\label{exa:HighBrowDefcurvedTypeD} We start with the same category $\mathcal{C}^D_\mathcal{A}$ as in the previous example, but we fix a central element $a_c\in Z(\mathcal{A})$, the curvature, and define the category of curved (right) type~D structures over $\mathcal{A}$ with curvature $a_c$ as $\Cx^{a_c}(\Mat(\mathcal{C}^D_\mathcal{A}))$. For a more explicit, but less concise definition, see Definition~\ref{def:curvedTypeDStructure}. \end{example} \begin{example}[(type~A structures over an $A_\infty$-algebra over $\mathcal{I}$)]\label{exa:HighBrowDefTypeAoverI} Let $\mathcal{A}$ be an $A_\infty$-algebra over a ring of idempotents $\mathcal{I}$ over $\mathbb{F}_2$. As in Example~\ref{exa:HighBrowDefTypeDoverI}, fix a basis $\{i_k\}_{k\in I}$ of idempotents of $\mathcal{I}$, where $I$ is some index set. Let $\mathcal{C}^A_\mathcal{A}$ be the category with one object for each basis element in $\mathcal{I}$, just as for type~D structures. However, a morphism in a hom-object $\Mor(i_1,i_2)$ of $\mathcal{C}^A_\mathcal{A}$ is given by a sequence of vector space homomorphisms $$(f_i\co i_2.\mathcal{A}^{\otimes i}.i_1\rightarrow \mathbb{F}_2)_{i\geq0}$$ where composition is defined by $$(f\circ g)_i(a_i\otimes\cdots\otimes a_{1}):= \sum_{j+k=i}f_k(a_{i}\otimes\cdots\otimes a_{j+1}) \cdot g_j(a_{j}\otimes\cdots\otimes a_{1}).$$ For $i\in\{i_k\}_{k\in I}$, the identity morphism $\id_{i}=(\id_{i,l})_{l\geq0}\in\Mor(i,i)$ is given by $\id_{i,0}(i.1.i)=1$ and $\id_{i,l}=0$ for all $l>0$. The differential $\partial$ is given by $$(\partial(f))_i(a_i\otimes\cdots\otimes a_{1}):=\sum_{j+k=i+1}\sum_{l=0}^{i-k} f_j(a_i\otimes\dots\otimes \mu_k(a_{l+k}\otimes \dots \otimes a_{l+1})\otimes \dots \otimes a_{1}).$$ We call $\Cx^{0}(\Mat(\mathcal{C}^A_\mathcal{A}))$ the category of (left) type~A structures over $\mathcal{A}$. We define the category of strictly unital type~A structures by restricting to those objects $(O,d_O)$ with the identity action \begin{equation}\label{eqn:identityAction} d_O(\cdot,1)=\id_O \end{equation} and with the property that \[d_O(\cdot,a_i\otimes\dots\otimes a_1)=0\text{ if $i>1$ and $a_j=1$ for some $j=1,\dots,i$}\] and restricting to those morphisms satisfying \[f(\cdot,a_i\otimes\dots\otimes a_1)=0\text{ if $i>0$ and $a_j=1$ for some $j=1,\dots,i$}.\] When we talk about type A structures in this paper, we will always implicitly assume them to be strictly unital. \emph{Right} type~A structures are defined in an analogous way; we only define the hom-objects of the underlying dg category to be given by sequences of vector space homomorphisms \[(f_i\co i_1.\mathcal{A}^{\otimes i}.i_2\rightarrow \mathbb{F}_2)_{i\geq0},\] and adapt the multiplication maps accordingly. In this paper, we restrict ourselves to left type~A structures. 
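To illustrate the formulas above in the simplest case (a sanity check which we will not need later): if $f$ and $g$ are morphisms whose only non-zero components are $f_1$ and $g_1$, then the only potentially non-zero component of their composition is $$(f\circ g)_2(a_2\otimes a_1)=f_1(a_2)\cdot g_1(a_1),$$ and the differential is given by $(\partial(f))_i(a_i\otimes\cdots\otimes a_1)=f_1(\mu_i(a_i\otimes\cdots\otimes a_1))$ for all $i\geq1$.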
\end{example} \begin{remark}\label{exa:GraphsForTypeAstructures} When we describe type~A structures as directed graphs, it is useful to fix a basis of the algebra $\mathcal{A}$. Then, we label an arrow corresponding to a morphism $f$ by the formal sum of those tuples/tensor products of basis elements of the algebra $\mathcal{A}$ on which $f$ is non-zero. In particular, the identity morphisms are the length 0 (ie ``empty'') labels. In this language, composition of two morphisms $f$ and $g$ can be described as the sum of all concatenations of labels for $f$ and $g$ (modulo~2). Now consider a morphism $f_a$ whose only label is a tuple of basic algebra elements $a=(a_i,\dots,a_1)$. Then the arrow of $\partial(f_a)$ is labelled by the formal sum (modulo 2) of all labels obtained from $a$ by replacing a single entry $a_i$ by sequences $(a'_j,\dots, a'_1)$ such that \[a_i=\mu_j(a'_j\otimes\cdots\otimes a'_1).\] Note that in accordance with our conventions from the previous definition, we will always omit the arrows corresponding to the identity action~\eqref{eqn:identityAction}, ie arrows from each vertex to itself labelled by the length one sequence consisting of the identity of the algebra $\mathcal{A}$. \end{remark} \begin{example}[(bimodules of various kinds)] We can form the categories of type~DD, type~DA, type~AD and type~AA bimodules as follows. We start with the dg category where objects correspond to idempotents as before, but where the hom-objects are defined as the corresponding products of the hom-objects of $\mathcal{C}^A_\mathcal{A}$ and $\mathcal{C}^D_\mathcal{A}$ in Examples~\ref{exa:HighBrowDefTypeDoverI} and~\ref{exa:HighBrowDefTypeAoverI}. Multiplication is defined as the product on the two factors and the differential is defined as usual by the Leibniz rule. Likewise, multi-modules can be defined, but we will not need those in the current paper. We sometimes include the algebras over which the modules are defined in our notation, following the usual convention to use subscripts for type A and superscripts for type D sides. For example, the notation $\typeA{\mathcal{A}}{M}^\mathcal{B}$ means that $M$ is a type~AD $\mathcal{A}$-$\mathcal{B}$-bimodule where $\mathcal{A}$ acts on the left and $\mathcal{B}$ on the right. \end{example} \subsection{Functoriality and pairing for type A and type D structures} In the proofs of the glueing formula in section~\ref{sec:Pairing}, we need to change the underlying algebras and rings of idempotents of the algebraic structures involved. \begin{definition}\label{def:InducedFunctors} Suppose we have a subring $\mathcal{J}$ of the ring of idempotents~$\mathcal{I}$. Let us also fix an $\mathbb{F}_2$-basis $\{\iota_i\mid i\in J\}$ of $\mathcal{J}$ and extend it to a basis $\{\iota_i\mid i\in I\}$ of~$\mathcal{I}$. Let $\mathcal{A}$ be a dg $\mathcal{I}$-algebra and $\mathcal{B}$ be a dg $\mathcal{J}$-algebra. Note that via the inclusion $\mathcal{J}\hookrightarrow\mathcal{I}$, we can regard $\mathcal{A}$ also as a dg $\mathcal{J}$-algebra. Let $\pi\co\mathcal{B}\rightarrow\mathcal{A}$ be a dg $\mathcal{J}$-algebra homomorphism. Let us first consider the construction for (curved) type~D structures: consider the category $\mathcal{C}^D_{\mathcal{A}}$ corresponding to the $\mathcal{I}$-algebra $\mathcal{A}$ from Example~\ref{exa:HighBrowDefTypeDoverI}. Similarly, let $\mathcal{C}^D_{\mathcal{B}}$ be the category corresponding to the $\mathcal{J}$-algebra $\mathcal{B}$. 
The inclusion $\mathcal{J}\hookrightarrow\mathcal{I}$ and the $\mathcal{J}$-algebra homomorphism $\pi$ induce a functor $$\tilde{\mathcal{F}}^D_\pi\co\mathcal{C}^D_{\mathcal{B}}\rightarrow\mathcal{C}^D_{\mathcal{A}}$$ and hence also a functor $$\mathcal{F}^D_\pi\co\Cx^{\ast}(\Mat(\mathcal{C}^D_{\mathcal{B}}))\rightarrow\Cx^{\ast}(\Mat(\mathcal{C}^D_{\mathcal{A}})),$$ both of which respect the differentials on both sides. There is also a dual construction for type~A structures: consider the category $\mathcal{C}^A_{\mathcal{A}}$ corresponding to the $\mathcal{I}$-algebra $\mathcal{A}$ from Example~\ref{exa:HighBrowDefTypeAoverI}. Similarly, let $\mathcal{C}^A_{\mathcal{B}}$ be the category corresponding to the $\mathcal{J}$-algebra $\mathcal{B}$. However, to define an induced functor, we need to slightly modify $\mathcal{C}^A_{\mathcal{B}}$ by adding a zero object. Let us call this new category $\mathcal{C}^{A,0}_{\mathcal{B}}$. Then, we can define a functor $$\tilde{\mathcal{F}}^A_\pi\co\mathcal{C}^A_{\mathcal{A}}\rightarrow\mathcal{C}^{A,0}_{\mathcal{B}}$$ as follows: an object $\iota_i$ is sent to $\iota_i$ if $i\in J$ and to the zero-object otherwise. A basic morphism $f\in \Mor(\iota_1,\iota_2)$ of the form $\iota_2.\mathcal{A}^{\otimes i}.\iota_1\rightarrow \mathbb{F}_2$ is sent to $$\left(f\circ \iota_2.\pi^{\otimes i}.\iota_1\co\iota_2.\mathcal{B}^{\otimes i}.\iota_1\rightarrow \iota_2.\mathcal{A}^{\otimes i}.\iota_1\rightarrow \mathbb{F}_2\right)\in \Mor(\iota_1,\iota_2)$$ if $\iota_1$ and $\iota_2$ are in $J$ and to $0\in\Mor\left(\tilde{\mathcal{F}}^A_\pi(\iota_1),\tilde{\mathcal{F}}^A_\pi(\iota_2)\right)$ otherwise. This functor is well-defined and respects the differential if $\mathcal{A}$ and $\mathcal{B}$ are $A_\infty$-algebras over $\mathcal{I}$ and $\mathcal{J}$, respectively, and $\pi$ is an $A_\infty$-algebra homomorphism. Thus, it induces a functor $$\mathcal{F}^A_\pi\co\Cx^{0}(\Mat(\mathcal{C}^A_{\mathcal{A}}))\rightarrow\Cx^{0}(\Mat(\mathcal{C}^{A,0}_{\mathcal{B}}))$$ that likewise respects the differentials on both sides. Note that the category $\Cx^{0}(\Mat(\mathcal{C}^{A,0}_{\mathcal{B}}))$ agrees with the category of type~A structures $\Cx^{0}(\Mat(\mathcal{C}^A_{\mathcal{B}}))$, since adding a zero-object in the underlying dg category does not change the result after applying $\Cx^{0}(\Mat(\cdot))$. Similarly, we may define functors for bimodules of various types. \end{definition} \begin{remark}\label{rem:InducedFunctors} In the previous definition, non-injective homomorphisms $\mathcal{J}\rightarrow\mathcal{I}$ also induce functors $$\mathcal{F}^D_\pi\co\Cx^{\ast}(\Mat(\mathcal{C}^D_{\mathcal{B}}))\rightarrow\Cx^{\ast}(\Mat(\mathcal{C}^D_{\mathcal{A}}))$$ between the categories of (curved) type~D structures. All we need to change in its construction is to add a zero-object to $\mathcal{C}^D_{\mathcal{A}}$. Note, however, that in all examples that we are concerned with in this paper, either $\mathcal{J}=\mathcal{I}$ and $\pi$ is a quotient map, or $\mathcal{B}=\mathcal{J}.\mathcal{A}.\mathcal{J}$ and $\pi$ is the inclusion. \end{remark} \begin{observation}\label{obs:InducedFunctorsForGraphs} Let \(\mathcal{J}\), \(\mathcal{I}\), \(\mathcal{A}\), \( \mathcal{B}\) and \(\pi\co\mathcal{B}\rightarrow\mathcal{A}\) be as in Definition~\ref{def:InducedFunctors}. Choose a basis of the kernel of~$\pi$ and extend it to a basis of~$\mathcal{B}$. This induces a basis on the image of~$\pi$, which we can then extend to a basis of $\mathcal{A}$. 
Suppose we have an oriented labelled graph representing a type~D structure $N$ over $\mathcal{B}$ (with respect to the basis chosen above). Then the graph representing $\mathcal{F}^D_{\pi}(N)$ is obtained by replacing all algebra elements by their images under $\pi$. Similarly, if we have an oriented labelled graph representing a type~A structure $M$ over $\mathcal{A}$, the graph representing $\mathcal{F}^A_{\pi}(M)$ is obtained by replacing each label $a=(a_n,\dots,a_1)$ by the formal sum of all labels whose images under $\pi^{\otimes n}$ are equal to $a$. \end{observation} \begin{definition}[(pairing type~D and type~A structures)]\label{def:PairingTypeDandA} Let $M$ be a (right) type~D structure and $N$ a (left) type~A structure over the same dg algebra $\mathcal{A}$ over a ring $\mathcal{I}$ of idempotents, together with a fixed basis of $\mathcal{I}$ and $\mathcal{A}$. We now reformulate the definition of the chain complex $(M\boxtimes N,\partial^\boxtimes)$ from \cite[Definition~7.4]{Zarev} and \cite[section~2.4]{LOT} in terms of the graphs associated with $M$ and $N$. The generators of $M\boxtimes N$ are defined by pairs of vertices in $M$ and $N$ labelled by the same idempotents. Given two such pairs $(v_1,w_1)$ and $(v_2,w_2)$, the $(v_2,w_2)$-component of $\partial^\boxtimes(v_1,w_1)$ is equal to the number of sequences $s$ of labels of consecutive arrows along a path from~$v_1$ to~$v_2$ in~$M$ such that $s$ agrees with a label on the arrow from $w_1$ to $w_2$ in~$N$, all modulo 2. (For instance, with our right-to-left reading conventions, if $M$ contains a path $v_1\xrightarrow{\,a\,}v\xrightarrow{\,b\,}v_2$ and the arrow from $w_1$ to $w_2$ in $N$ carries the label $(b,a)$, then this path contributes 1 to this component.) Note that for the differential to be well-defined, we need to make sure that this number is finite. This is usually done by requiring that at least one of $M$ or $N$ is \textbf{bounded}: for type~D structures, this means that there are no loops in the graph; for type~A structures, this means that there are only finitely many labels. If we start with a type~AD or type~DD bimodule~$M$ and a type~AA or type~AD bimodule $N$, the pairing is defined in the same way, except that we need to record the labels for the remaining type~A or type~D sides: the first component is equal to the product (ie algebra product or concatenation) of the first components of the arrows along the corresponding path from~$v_1$ to~$v_2$ in~$M$; the second component of a label on the arrow from $(v_1,w_1)$ to $(v_2,w_2)$ is equal to the second component of the corresponding label of the arrow from $w_1$ to $w_2$ in~$N$. Similarly, we can define a pairing between other types of bimodules, as long as we pair type~A sides with type~D sides and left structures with right structures. For details, see~\cite[Definition~2.3.9]{LOTBimodules}. \end{definition} \begin{remark}\label{rem:PairingIsFunctorial} The result of the pairing operation described above is a well-defined object in the corresponding category, ie a chain complex, type~A, type~D structure or a bimodule, see \cite[Proposition~2.3.10]{LOTBimodules}. Furthermore, since the pairing is functorial, it is invariant under homotopy: replacing either factor by a homotopy equivalent one changes the result only up to chain homotopy equivalence, see \cite[Lemma~2.3.13]{LOTBimodules}. \end{remark} \begin{question} Is it possible to interpret the pairing on the level of the categories $\mathcal{C}^D_\mathcal{A}$ and $\mathcal{C}^A_\mathcal{A}$? \end{question} \begin{theorem}[(Pairing Adjunction)]\label{thm:PairingAdjunction} Let \(\mathcal{J}\), \(\mathcal{I}\), \(\mathcal{A}\), \( \mathcal{B}\) and \(\pi\co\mathcal{B}\rightarrow\mathcal{A}\) be as in Definition~\ref{def:InducedFunctors}. 
Let \(M\) be a (right) type~D structure over~\(\mathcal{B}\) and \(N\) a (left) type~A structure over~\(\mathcal{A}\). Then we have an identification $$ \mathcal{F}^D_{\pi}(M)^\mathcal{A}\boxtimes \typeA{\mathcal{A}}{N}\cong M^\mathcal{B}\boxtimes \typeA{\mathcal{B}}{\mathcal{F}^A_{\pi}(N)}.$$ The same holds true for bimodules of various types. \end{theorem} \begin{proof} This follows almost tautologically from the interpretation of the induced functors in terms of oriented labelled graphs in Observation~\ref{obs:InducedFunctorsForGraphs}. On the level of generators this identity is clear, since by definition, $M$, $\mathcal{F}^D_{\pi}(M)$ and $\mathcal{F}^A_{\pi}(N)$ only possess generators belonging to idempotents in $\mathcal{J}$, so only those generators of $N$ that belong to idempotents in $\mathcal{J}$ survive the pairing. Next, fix a label $a=(a_m,\dots,a_1)$ from a vertex $w_1$ to $w_2$ in $N$, along with two vertices $v_1$ and $v_2$ in $M$ such that $v_1$ and $w_1$ belong to the same idempotent in~$\mathcal{J}$ and so do $v_2$ and $w_2$. The contribution of $a$ to the differential from $(v_1, w_1)$ to $(v_2, w_2)$ on the left-hand side is equal to the number of all sequences $b=(b_n,\dots,b_1)$ of labels from $v_1$ to $v_2$ such that $\pi(b):=(\pi(b_n),\dots,\pi(b_1))=a$. The label $a$ in $N$ corresponds to labels $b'=(b'_n,\dots,b'_1)$ from $w_1$ to $w_2$ in $\mathcal{F}^A_{\pi}(N)$ such that $\pi(b'):=(\pi(b'_n),\dots,\pi(b'_1))=a$. Those labels contribute 1 to the differential from $(v_1, w_1)$ to $(v_2, w_2)$ on the right-hand side iff they agree with some $b$ and do not contribute otherwise. So the contributions agree. For bimodules, note that the functors only act on one component of the morphism spaces. \end{proof} \subsection{Cancellation and Cleaning-up} \begin{wrapfigure}{r}{0.3333\textwidth} \centering $\begin{tikzcd}[row sep=1.5cm, column sep=0.3cm] & (Z,\zeta) \arrow[bend left=12]{ld}{a} \arrow[bend left=12]{rd}{c} \\ (Y_1,\varepsilon_1)\arrow[bend left=12]{rr}{f} \arrow[bend left=12]{ru}{b} & & (Y_2,\varepsilon_2) \arrow[bend left=12]{ll}{e}\arrow[bend left=12]{lu}{d} \end{tikzcd}$ \caption{The object~$(X,\delta)$ for Lemma~\ref{lem:AbstractCancellation}.}\label{fig:AbstractCancellation}\vspace*{-35pt} \end{wrapfigure} We now state and prove the two central lemmas mentioned at the beginning of this section. \begin{lemma}[(Cancellation Lemma)]\label{lem:AbstractCancellation} Let \((X,\delta)\) be an object of \(\Cx^{\ast}(\Mat(\mathcal{C}))\) for some differential graded category \(\mathcal{C}\) and suppose it has the form shown in Figure~\ref{fig:AbstractCancellation}, where \((Y_1,\varepsilon_1)\), \((Y_2,\varepsilon_2)\), \((Z,\zeta)\in\ob(\Cxpre(\Mat(\mathcal{C})))\) and \(f\) is an isomorphism with inverse \(g\). Then \((X,\delta)\) is chain homotopic to \((Z,\zeta+bgc)\). 
\end{lemma} \begin{figure}[b] \centering \begin{subfigure}{0.48\textwidth} \centering $\begin{tikzcd}[row sep=1cm, column sep=0cm] (Z,\zeta+bgc)\arrow{rrr}{1}\arrow[bend right=12]{rrdd}{gc} &~~~~~~~ && (Z,\zeta) \arrow[bend left=12,pos=0.3]{ldd}{a} \arrow[bend left=12]{rrd}{c}\\ & &&&~& (Y_2,\varepsilon_2) \arrow[bend left=12]{llld}{e}\arrow[bend left=12]{llu}{d}\\ & &(Y_1,\varepsilon_1) \arrow[bend left=10,pos=0.7]{rrru}{f} \arrow[bend left=12]{ruu}{b} \end{tikzcd}$ \caption{The chain map $F$.}\label{fig:ProofAbstractCancellationF} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering $\begin{tikzcd}[row sep=1cm, column sep=0cm] & (Z,\zeta) \arrow[bend left=12,pos=0.3]{ldd}{a} \arrow[bend left=12]{rrd}{c}\arrow{rrrr}{1}&&&~~~~~ &(Z,\zeta+bgc)\\ &&~& (Y_2,\varepsilon_2) \arrow[bend left=12]{llld}{e}\arrow[bend left=12]{llu}{d}\arrow[bend right=12]{rru}{bg}\\ (Y_1,\varepsilon_1) \arrow[bend left=10,pos=0.7]{rrru}{f} \arrow[bend left=12]{ruu}{b} \end{tikzcd}$ \caption{The chain map $G$.}\label{fig:ProofAbstractCancellationG} \end{subfigure} \begin{subfigure}{0.95\textwidth} \centering $\begin{tikzcd}[row sep=1cm, column sep=0cm] & (Z,\zeta) \arrow[bend left=12,pos=0.3]{ldd}{a} \arrow[bend left=12]{rrd}{c}\arrow{rrrr}{1}&&&~~~~~ &(Z,\zeta+bgc) \arrow{rrr}{1}\arrow[bend right=12]{rrdd}{gc} &~~~~~~~ && (Z,\zeta) \arrow[bend left=12,pos=0.3]{ldd}{a} \arrow[bend left=12]{rrd}{c} \\ &&~& (Y_2,\varepsilon_2) \arrow[bend left=12]{llld}{e}\arrow[bend left=12]{llu}{d}\arrow[bend right=12]{rru}{bg}\arrow[dashed,swap, bend right=12]{rrrrd}{g}&& & &&&~& (Y_2,\varepsilon_2) \arrow[bend left=12]{llld}{e}\arrow[bend left=12]{llu}{d} \\ (Y_1,\varepsilon_1) \arrow[bend left=12,pos=0.7]{rrru}{f} \arrow[bend left=12]{ruu}{b}&&&&& & &(Y_1,\varepsilon_1) \arrow[bend left=12,pos=0.7]{rrru}{f} \arrow[bend left=12]{ruu}{b} \end{tikzcd}$ \caption{The composition of $F$ and $G$ and the homotopy $H$ (dashed arrow).}\label{fig:ProofAbstractCancellationHomotopy} \end{subfigure} \caption{Maps for the proof of Lemma~\ref{lem:AbstractCancellation}.}\label{fig:ProofAbstractCancellation} \end{figure} \begin{remark} We usually apply this lemma to the case where $(Y_1,\varepsilon_1)=(Y_2,\varepsilon_2)$ and $f$ is the identity map. \end{remark} \begin{proof} First of all, let us check that $(Z,\zeta+bgc)$ is indeed an object of $\Cx^{\ast}(\Mat(\mathcal{C}))$: \begin{align*} (\zeta+bgc)^2+\partial(\zeta+bgc)=&~\zeta\zeta+\zeta bgc+bgc \zeta +bgcbgc+\partial(\zeta)+\partial(b)gc+b\partial(g)c+bg\partial(c)\\ =&~\zeta\zeta+(\zeta b +\partial(b))gc+bg(c \zeta +\partial(c)) +\partial(\zeta)+b\partial(g)c+bgcbgc\\ =&~\zeta\zeta+(b\varepsilon_1 +df)gc+bg(\varepsilon_2 c +fa) +\partial(\zeta)+b\partial(g)c+bgcbgc\\ =&~\zeta\zeta+dc+ba+\partial(\zeta)+\cancel{b\left(D(g)+gcbg\right)c}=\ast. \end{align*} For the last step, we observe that $$gcbg=gD(f)g=D(gfg)+D(g)fg+gfD(g)=D(g).$$ Next, we consider the two chain maps $$F\co(Z,\zeta+bgc)\rightarrow(X,\delta)\qquad\text{and}\qquad G\co(X,\delta)\rightarrow(Z,\zeta+bgc)$$ defined in Figure~\ref{fig:ProofAbstractCancellationF} and \ref{fig:ProofAbstractCancellationG}, respectively. One easily checks that $D(F)=0$ and $D(G)=0$. 
Indeed, the only non-trivial terms we need to compute are \begin{align*} gc(\zeta+bgc)+\varepsilon_1gc+\partial(gc)+a=& ~g(c\zeta+\partial(c))+(\varepsilon_1g+\partial(g))c+a+gcbgc\\ =&~g(fa+\varepsilon_2c)+(\varepsilon_1g+\partial(g))c+a+gcbgc\\ =&~\left(D(g)+gcbg\right)c=0 \end{align*} for the first identity and similarly \begin{align*} (\zeta+bgc)bg+bg\varepsilon_2+\partial(bg)+d=& ~(\zeta b+\partial(b))g+b(g\varepsilon_2+\partial(g))+d+bgcbg\\ =&~(df+b\varepsilon_1)g+b(g\varepsilon_2+\partial(g))+d+bgcbg\\ =&~b\left(D(g)+gcbg\right)=0 \end{align*} for the second identity. Now, $GF=\id_{Z}$ and conversely, it is not hard to check that $$FG=\id_{X}+D(H),$$ where $H$ is the homotopy given by the dashed line in Figure~\ref{fig:ProofAbstractCancellationHomotopy}. \end{proof} \begin{lemma}[(Clean-Up Lemma)]\label{lem:AbstractCleanUp} Let \((O,d_O)\) be an object in \(\Cx^{\ast}(\mathcal{C})\) for some differential graded category \(\mathcal{C}\). Then for any morphism \(h\in\Mor_0((O,d_O),(O,d_O))\) for which $$h^2, \quad hD(h) \quad\text{ and } \quad D(h)h$$ vanish, \((O,d_O)\) is chain isomorphic to \((O,d_O+D(h))\). \end{lemma} \begin{proof} We can easily check that $(O,d_O+D(h))$ is an object in $\Cx^{\ast}(\mathcal{C})$: \begin{align*} (d_O+D(h))^2+\partial(d_O+D(h)) = &\left(d_O^2+\partial(d_O)\right)+\left(d_OD(h)+D(h)d_O^{\phantom{2}\!\!}+\partial(D(h))\right)\\ &+D(h)D(h). \end{align*} The first term on the right gives ($\ast$) and the last term vanishes, which can be seen by applying the differential $D$ to $h D(h)=0$. The middle term also vanishes, which can be seen by expanding $D(h)=d_Oh+hd_O+\partial(h)$ and using the fact that a term ($\ast$) commutes with any morphism. The chain isomorphisms between the two objects are given by \begin{center} $\begin{tikzcd}[row sep=1.5cm, column sep=0.6cm] (O,d_O) \arrow{rr}{1+h} && (O,d_O+D(h)) \end{tikzcd}$ \quad and \quad $\begin{tikzcd}[row sep=1.5cm, column sep=0.6cm] (O,d_O+D(h)) \arrow{rr}{1+h} && (O,d_O). \end{tikzcd}$ \end{center} Indeed, these two morphisms lie in the kernel of $D$, since $hD(h)$ and $D(h)h$ vanish. Their composition is equal to $1+h^2=1$. \end{proof} \section{Curved complexes for marked surfaces}\label{sec:classification} In this section, we prove the main classification results which allow us to interpret peculiar modules and morphisms between them geometrically in terms of Lagrangian intersection Floer theory. These classification results are not particular to peculiar modules. They actually hold in a more general framework, namely for curved complexes over certain algebras constructed from \textit{arbitrary} surfaces with boundary (plus some extra data). Therefore, we will work in this general framework throughout this section. In section~\ref{sec:glueingrevisited}, we will return to the discussion of our tangle invariants and summarize the consequences of our main classification results from this section for peculiar modules. Many ideas in this section originate from~\cite{HRW} and~\cite{Kontsevich}. In the first paper, Hanselman, Rasmussen and Watson classify extendable type D structures in the context of a bordered Heegaard Floer theory for 3-manifolds with torus boundary. While our tangle invariants are classified by immersed curves on the 4-punctured sphere, their immersed curves live on the once-punctured torus. 
The reader should feel encouraged to read the relevant passages in their paper in parallel to this section, as our discussion will stay rather close to theirs, see also Remark~\ref{rem:comparisonToHRW}. In~\cite{Kontsevich}, Haiden, Katzarkov and Kontsevich classify twisted complexes over marked surfaces. Their results, unlike the ones from~\cite{HRW}, do not include a correspondence between Lagrangian intersection Floer theory and morphism spaces. Nonetheless, the main ideas about how to set up the correct framework in which a classification result for curved complexes could hold come from this paper. For a slightly more detailed comparison between their setting and ours, see Remark~\ref{rem:comparisonToKontsevich}. The broad outline of this section is as follows: after setting up the general framework of curved complexes over marked surfaces with arc systems, we introduce the category of precurves. The notion of precurves serves as a useful intermediary between the algebraic nature of curved complexes and the geometric nature of immersed curves with local systems. In subsections~\ref{subsec:precurves}, \ref{subsec:precurves_geometry} and~\ref{subsec:simplify_precurves}, we show that the category of curved complexes and the category of precurves over a fixed marked surface with arc system are equivalent and we discuss how the simplification algorithm from~\cite{HRW} can be adapted to precurves. In subsections~\ref{subsec:ClassificationMor} and~\ref{subsec:formulaMor}, we classify the homology of morphism spaces for a particularly simple class of precurves in terms of Lagrangian intersection theory, before finally proving the full classification of precurves in subsection~\ref{subsec:complete_classification}. \subsection{Curved complexes over marked surfaces with arc systems} \begin{definition}\label{def:MarkedSurface} A \textbf{marked surface} is a pair $(S,M)$ where $S$ is a connected oriented surface with non-empty boundary and $M$ is a (possibly empty) set of basepoints on $\partial S$. An \textbf{arc} $a$ of a marked surface $(S,M)$ is (the image of) an embedding of an oriented closed interval \[(I, \partial I) \hookrightarrow (S,\partial S\smallsetminus M)\] such that its class $[a]\in\pi_1(S,\partial S\smallsetminus M)$ is non-trivial. Suppose $A$ is a non-empty set of pairwise disjoint arcs on a marked surface $(S,M)$. For each arc $a\in A$, choose a closed neighbourhood $N(a)$ of $a$ such that $N(a)\cap N(a')=\emptyset$ for all $a'\in A\smallsetminus \{a\}$. Let $s_1(a)$ and $s_2(a)$ be the closures of the two components of $\partial N(a)\smallsetminus\partial S$ such that $s_1(a)$ lies to the right of the oriented arc $a$ and $s_2(a)$ to its left; we call $s_1(a)$ and $s_2(a)$ the two \textbf{sides} of $a$. We also fix a foliation $\mathcal{F}_a$ of $N(a)\cong I\times I$ such that $s_1(a)$, $s_2(a)$ and $a$ are leaves of $\mathcal{F}_a$. Furthermore, we call the closures of the connected components of $S\smallsetminus \bigcup_{a\in A}N(a)$ \textbf{faces} and denote the set of all faces by $F(S,M,A)$. We call~$A$ (together with a choice of fixed~$N(a)$ and foliation~$\mathcal{F}_a$ as above) an \textbf{arc system} if each face $f\in F(S,M,A)$ is a topological disc containing at most one point in $M$. If it does contain a point in $M$, we call it an \textbf{open face}. Otherwise, we call it \textbf{closed}. Given a face $f\in F(S,M,A)$, we denote the set of all sides adjacent to $f$ by $S(f)$ and write $n_f$ for the cardinality of $S(f)$. 
Note that $n_f\geq2$ for closed faces $f$, because $n_f=0$ would imply $A=\emptyset$ and $n_f=1$ would imply that the arc adjacent to $f$ can be pushed into $\partial S\smallsetminus M$. \end{definition} \begin{figure}[t] \centering \begin{subfigure}[b]{\textwidth} \centering \psset{unit=0.15} \begin{pspicture}(-41,-21)(41,21) \pnode(-20,0){A} \pnode(20,0){C} \pnode([nodesep=4,angle=30]A){A30} \pnode([nodesep=4,angle=150]A){A150} \pnode([nodesep=4,angle=-30]A){Am30} \pnode([nodesep=4,angle=-150]A){Am150} \pnode([nodesep=4,angle=30]C){C30} \pnode([nodesep=4,angle=150]C){C150} \pnode([nodesep=4,angle=-30]C){Cm30} \pnode([nodesep=4,angle=-150]C){Cm150} \pscustom*[linecolor=lightred]{ \pscurve(-40,5)(-33,4.5)(-28,3.6)(A150) \psline(A150)(Am150) \pscurve(Am150)(-28,-3.6)(-33,-4.5)(-40,-5) } \psline[linecolor=red]{<-}(-24,0)(-40,0) \pscurve{->}(-40,5)(-33,4.5)(-28,3.6)(A150) \pscurve{->}(-40,-5)(-33,-4.5)(-28,-3.6)(Am150) \uput{0.4}[90](-31,0){\textcolor{red}{$a_1$}} \uput{4.7}[90](-31,0){$s_2(a_1)$} \uput{-4.7}[-90](-31,0){$s_1(a_1)$} \pscustom*[linecolor=lightred]{ \pscurve(4;150)(-7,3)(-13,3)(A30) \psline(A30)(Am30) \pscurve(Am30)(-13,-3)(-7,-3)(4;-150) } \psline[linecolor=red]{<-}(-4,0)(-16,0) \pscurve{<-}(4;150)(-7,3)(-13,3)(A30) \pscurve{<-}(4;-150)(-7,-3)(-13,-3)(Am30) \uput{0.4}[90](-10,0){\textcolor{red}{$a_2$}} \uput{3.6}[90](-10,0){$s_2(a_2)$} \uput{-3.6}[-90](-10,0){$s_1(a_2)$} \pscustom*[linecolor=lightred]{ \pscurve(4;30)(7,3)(13,3)(C150) \psline(C150)(Cm150) \pscurve(Cm150)(13,-3)(7,-3)(4;-30) } \psline[linecolor=red]{->}(4,0)(16,0) \pscurve{->}(4;30)(7,3)(13,3)(C150) \pscurve{->}(4;-30)(7,-3)(13,-3)(Cm150) \uput{0.4}[90](10,0){\textcolor{red}{$a_3$}} \uput{3.6}[90](10,0){$s_2(a_3)$} \uput{-3.6}[-90](10,0){$s_1(a_3)$} \pscustom*[linecolor=lightred]{ \pscurve(5,-20)(4.5,-13)(3.6,-8)(4;-60) \psline(4;-60)(4;-120) \pscurve[liftpen=1](4;-120)(-3.6,-8)(-4.5,-13)(-5,-20) } \psline[linecolor=red]{<-}(0,-4)(0,-20) \pscurve{->}(5,-20)(4.5,-13)(3.6,-8)(4;-60) \pscurve{->}(-5,-20)(-4.5,-13)(-3.6,-8)(4;-120) \uput{0.4}[0](0,-12){\textcolor{red}{$a_4$}} \uput{4.8}[180](0,-12){$s_2(a_4)$} \uput{-4.8}[0](0,-12){$s_1(a_4)$} \pscustom*[linecolor=lightred]{ \pscurve(5,20)(4.5,13)(3.6,8)(4;60) \psline(4;60)(4;120) \pscurve[liftpen=1](4;120)(-3.6,8)(-4.5,13)(-5,20) } \psline[linecolor=red]{->}(0,4)(0,20) \pscurve{<-}(5,20)(4.5,13)(3.6,8)(4;60) \pscurve{<-}(-5,20)(-4.5,13)(-3.6,8)(4;120) \uput{0.4}[0](0,12){\textcolor{red}{$a_5$}} \uput{4.8}[180](0,12){$s_2(a_5)$} \uput{-4.8}[0](0,12){$s_1(a_5)$} \psarc[fillcolor=white,fillstyle=solid,linecolor=darkgreen](0,0){4}{0}{360} \psarc[fillcolor=white,fillstyle=solid,linecolor=darkgreen](20,0){4}{0}{360} \psarc[fillcolor=white,fillstyle=solid,linecolor=darkgreen](-20,0){4}{0}{360} \psline[linecolor=darkgreen,linearc=8,cornersize=absolute](0,-20)(-40,-20)(-40,20)(40,20)(40,-20)(0,-20) \psline[linecolor=darkgreen](-20,-21)(-20,-19) \rput(-20,-18){\textcolor{darkgreen}{$m_1$}} \psline[linecolor=darkgreen](3;-45)(5;-45) \rput(7;-45){\textcolor{darkgreen}{$m_2$}} \psarc[linecolor=darkgreen]{->}(-20,0){4}{-150}{-30} \psarc[linecolor=darkgreen]{->}(-20,0){4}{30}{150} \psarc[linecolor=darkgreen]{->}(0,0){4}{30}{60} \psarc[linecolor=darkgreen]{->}(0,0){4}{120}{150} \psarc[linecolor=darkgreen]{->}(0,0){4}{-150}{-120} \psarc[linecolor=darkgreen]{->}(20,0){4}{-150}{150} \psline[linecolor=darkgreen,linearc=8,cornersize=absolute]{->}(-40,5)(-40,20)(-5,20) \uput{4}[-45](-40,20){$\textcolor{darkgreen}{p_1}$} \uput{5}[90](-20,0){$\textcolor{darkgreen}{p_2}$} 
\uput{5}[-90](-20,0){$\textcolor{darkgreen}{p_3}$} \uput{5}[45](0,0){$\textcolor{darkgreen}{p_4}$} \uput{5}[135](0,0){$\textcolor{darkgreen}{p_5}$} \uput{5}[-135](0,0){$\textcolor{darkgreen}{p_6}$} \uput{5}[0](20,0){$\textcolor{darkgreen}{p_7}$} \rput(38,0){\textcolor{darkgreen}{$p_8$}} \psline[linecolor=darkgreen]{->}(10,-20)(5,-20) \end{pspicture} \caption{An arc system $A$ on the marked surface $(S,M)=(D^2\smallsetminus 3D^2, \{m_1, m_2\})$. The boundary components are labelled by the elementary algebra elements $p_i$, which correspond to the arrows in the quiver $Q(S,M,A)$ below.}\label{fig:MarkedSurfacePic} \end{subfigure} \begin{subfigure}[b]{\textwidth} \[\begin{tikzcd}[row sep=0.7cm, column sep=1.2cm] & & a_5 \arrow{ld}[description]{p_5} \arrow[out=0, in=0,looseness=5]{dd}[description]{p_8} \\ a_1 \arrow[bend left=25]{rru}[description]{p_1} \arrow[bend right=20]{r}[description]{p_3} & a_2 \arrow[bend right=20]{l}[description]{p_2} \arrow{rd}[description]{p_6} && a_3 \arrow{ul}[description]{p_4} \arrow[out=-30, in=30, loop]{lr}[description]{p_7} \\ && a_4 \end{tikzcd}\] \caption{The quiver~$Q(S,M,A)$ for the arc system $A$ on the marked surface $(S,M)$ above.}\label{fig:MarkedSurfaceQuiver} \end{subfigure} \caption{An illustration of Definitions~\ref{def:MarkedSurface} and~\ref{def:MarkedSurfaceQuiver}.}\label{fig:ExampleMarkedSurface} \end{figure} \begin{remark} Our conventions are slightly different from those in~\cite{Kontsevich}: their markings $M$ correspond to the subsets $\partial S\smallsetminus M$. Also, their arc systems allow faces of arbitrary genus. Our arc systems correspond to their admissible arc systems, except that they require $n_f\geq3$. \end{remark} \begin{example}\label{exa:MarkedSurface} Figure~\ref{fig:MarkedSurfacePic} shows an example of a marked surface $(S,M)$ with an arc system~$A$. We usually draw the boundary of the surface in green and mark elements in~$M$ by dashes through the boundary, in this case labelled by~$\textcolor{darkgreen}{m_1}$ and~$\textcolor{darkgreen}{m_2}$. The oriented arcs in $A$ are drawn in red, labelled by $\textcolor{red}{a_i}$, $i=1,\dots,5$. The neighbourhood of each arc $\textcolor{red}{a_i}$ is shaded in light red, bounded by solid arrows representing the two sides of $\textcolor{red}{a_i}$. In this particular example, the arcs cut the surface into three components, which are the faces in $F(S,M,A)$. For each face $f$, we have labelled the components of $\partial S\cap f$ which do not contain the two points $\textcolor{darkgreen}{m_1}$ and $\textcolor{darkgreen}{m_2}$ by variables $\textcolor{darkgreen}{p_i}$. These correspond to the elementary algebra elements from the next definition. Note that this particular marked surface will be irrelevant for our tangle invariants; it simply serves as an example to illustrate the generality of the arguments in this section. 
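Note also that, since each face of an arc system contains at most one basepoint, the faces containing $\textcolor{darkgreen}{m_1}$ and $\textcolor{darkgreen}{m_2}$ are two distinct open faces, while the third face is closed.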
\end{example} \begin{wrapfigure}{r}{0.4\textwidth} \centering \psset{unit=0.8}\medskip \begin{pspicture}(-3,-1)(3,1) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-2,0.3)(2,0.3) \psline(2,-0.3)(-2,-0.3) } \psline[linecolor=red](-2,0)(2,0) \psline(-2,0.3)(2,0.3) \psline(-2,-0.3)(2,-0.3) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-0.2,0.2)(0.2,0.2) \psline(0.2,-0.2)(-0.2,-0.2) } \rput[c](0,0){\textcolor{red}{$a$}} \rput[r](-2.2,0){\textcolor{darkgreen}{$\partial S$}} \rput[l](2.2,0){\textcolor{darkgreen}{$\partial S$}} \psline[linecolor=darkgreen,linewidth=1pt](-2,0.5)(-2,-0.5) \psline[linecolor=darkgreen,linewidth=1pt](2,0.5)(2,-0.5) \psline[linecolor=darkgreen,linewidth=1pt]{->}(-2,0.3)(-2,1)\rput[l](-1.8,0.7){$\textcolor{darkgreen}{p_{1}}$} \psline[linecolor=darkgreen,linewidth=1pt]{->}(-2,-1)(-2,-0.3)\rput[l](-1.8,-0.7){$\textcolor{darkgreen}{q_{1}}$} \psline[linecolor=darkgreen,linewidth=1pt]{<-}(2,0.3)(2,1)\rput[r](1.8,0.7){$\textcolor{darkgreen}{p_{2}}$} \psline[linecolor=darkgreen,linewidth=1pt]{<-}(2,-1)(2,-0.3)\rput[r](1.8,-0.7){$\textcolor{darkgreen}{q_{2}}$} \end{pspicture} \caption{A typical neighbourhood of an arc $a$.}\label{fig:TypicalNeighbourhoodOfArc} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \begin{definition}\label{def:MarkedSurfaceQuiver} Given an arc system $A$ on a marked surface $(S,M)$, consider the graph $Q(S,M,A)$ obtained by contracting the closed neighbourhood of each arc in $A$ to a single point (which defines the vertices of $Q(S,M,A)$) and removing the interior of each face $f$ as well as any component of $\partial S\cap f$ containing a basepoint in $M$ (which defines the edges of $Q(S,M,A)$). By choosing the induced boundary orientation of $\partial f$, the 1-cells inherit an orientation from $(S,M)$, which turns $Q(S,M,A)$ into a quiver. Figure~\ref{fig:MarkedSurfaceQuiver} shows this quiver for the triple $(S,M,A)$ from Example~\ref{exa:MarkedSurface}. \\\indent A typical neighbourhood of an arc $a\in A$ is shown in Figure~\ref{fig:TypicalNeighbourhoodOfArc}. As we can see, each arc $a$ is the source of at most two arrows and the target of at most two arrows. Using the notation from Figure~\ref{fig:TypicalNeighbourhoodOfArc}, we associate with the triple $(S,M,A)$ the path algebra \[\mathcal{A}:=\mathbb{F}_2 Q(S,M,A)/\{p_1q_1=0=q_2p_2\mid\text{arcs }a\in A\}.\] Note that we follow the convention to read algebra elements from right to left. Every arrow in the quiver corresponds to an algebra element; we call these the \textbf{elementary algebra elements}. For each closed face $f\in F(S,M,A)$, we write $U_f$ for the sum of all paths that start on a side of $f$ and go around $\partial f$ exactly once. We write $U$ for the sum of $U_f$ over all closed faces. For each arc $a\in A$, denote the idempotent corresponding to~$a$ by~$\iota_a$ and let $\mathcal{I}$ be the ring of all idempotents. We often consider $\mathcal{A}$ as a category: the underlying objects are given by the arcs in $A$. Then for any $a, b\in A$, \[\Mor_{\mathcal{A}}(a, b):=\iota_b.\mathcal{A}.\iota_a\] and multiplication is given by algebra multiplication. \end{definition} \begin{definition} Let $A$ be an arc system on a marked surface $(S,M)$. An $\mathbb{R}^{\geq0}$-grading on $\mathcal{A}$ is called a \textbf{$\delta$-grading} if for all closed faces $f\in F(S,M,A)$, $U_f$ is homogeneous of degree 2 and the subalgebra of grading $0$ is $\mathcal{I}$. 
(Note that such gradings always exist: for example, if $n=\text{lcm}(n_f)$, we can define a $\frac{2}{n}\mathbb{Z}$-grading $\delta$ by setting $\delta(p)=\frac{2}{n_f}$ for any elementary algebra element $p$ of a face $f$.) \end{definition} \begin{definition}\label{def:CCMarkedSurfaces} Let $A$ be an arc system on a marked surface $(S,M)$. We denote the dg category $\Cx^{U}(\Mat(\mathcal{A}(S,M,A)))$ by $\CC(S,M,A)$ and call it the \textbf{category of curved complexes} associated with $(S,M,A)$. \end{definition} \begin{wrapfigure}{r}{0.4\textwidth} \centering \psset{unit=1.7} \begin{pspicture}(-1.52,-1.52)(1.52,1.52) \psline[linecolor=red](-1,-1)(-1,1)(1,1)(1,-1)(-1,-1) \rput(-1,0){\fcolorbox{white}{white}{\red $a$}} \rput(0,-1){\fcolorbox{white}{white}{\red $b$}} \rput(1,0){\fcolorbox{white}{white}{\red $c$}} \rput(0,1){\fcolorbox{white}{white}{\red $d$}} \rput(-1.25,0){$\iota_1$} \rput(0,-1.25){$\iota_2$} \rput(1.25,0){$\iota_3$} \rput(0,1.25){$\iota_4$} \pscircle[fillstyle=solid, fillcolor=white,linecolor=darkgreen](1,1){0.25} \pscircle[fillstyle=solid, fillcolor=white,linecolor=darkgreen](1,-1){0.25} \pscircle[fillstyle=solid, fillcolor=white,linecolor=darkgreen](-1,1){0.25} \pscircle[fillstyle=solid, fillcolor=white,linecolor=darkgreen](-1,-1){0.25} \rput[c](-1,1){1} \rput[c](-1,-1){2} \rput[c](1,-1){3} \rput[c](1,1){4} \rput(-1,1){ \psarcn{<-}(0,0){0.5}{-5}{-85} \psarcn{<-}(0,0){0.5}{-95}{5} \pscircle*[linecolor=white](0.5;-45){8pt} \rput(0.5;-45){$p_1$} \pscircle*[linecolor=white](0.5;135){8pt} \rput(0.5;135){$q_1$} } \rput(-1,-1){ \psarcn{<-}(0,0){0.5}{85}{5} \psarcn{<-}(0,0){0.5}{-5}{95} \pscircle*[linecolor=white](0.5;45){8pt} \rput(0.5;45){$p_2$} \pscircle*[linecolor=white](0.5;-135){8pt} \rput(0.5;-135){$q_2$} } \rput(1,-1){ \psarcn{<-}(0,0){0.5}{175}{95} \psarcn{<-}(0,0){0.5}{85}{-175} \pscircle*[linecolor=white](0.5;135){8pt} \rput(0.5;135){$p_3$} \pscircle*[linecolor=white](0.5;-45){8pt} \rput(0.5;-45){$q_3$} } \rput(1,1){ \psarcn{<-}(0,0){0.5}{-95}{-175} \psarcn{<-}(0,0){0.5}{175}{-85} \pscircle*[linecolor=white](0.5;-135){8pt} \rput(0.5;-135){$p_4$} \pscircle*[linecolor=white](0.5;45){8pt} \rput(0.5;45){$q_4$} } \end{pspicture} \caption{The marked surface and arc system which are relevant for peculiar modules, see Example~\ref{exa:pqModSpecialCaseofCC}.}\label{fig:nutshelll}\vspace*{-20pt} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \begin{example}\label{exa:pqModSpecialCaseofCC} Let $(S,M)=(S^2\smallsetminus 4 D^2,\emptyset)$ and let $A$ be the arc system consisting of four arcs which divide the surface into exactly two faces, each of which meets the boundary of $S$ in four components, as shown in Figure~\ref{fig:nutshelll}. Then $\mathcal{A}=\Ad$ and $\CC(S,M,A)=\pqMod$. \end{example} \begin{remark} Usually, we fix a $\delta$-grading on $\mathcal{A}$ in the definition of the category $\CC(S,M,A)$ of curved complexes. This plays the role of the $\mathbb{Z}$-grading in section~\ref{sec:AlgStructFromGDCats}, except that the differential here \textit{increases} $\delta$-grading by 1. So actually, with respect to the $\delta$-grading, one should call $\CC(S,M,A)$ the category of curved \textit{co}complexes. We have chosen this convention because, as seen in the previous example, the category $\pqMod$ of peculiar modules is a special case of $\CC(S,M,A)$, and for $\pqMod$, there is also a homological grading $h$, which decreases along differentials by 1. 
The grading $h$ is defined as a combination of the $\delta$-grading and the Alexander grading, see Definition~\ref{def:RecallGradingsFromHDsForTangles}. The latter is preserved by the differentials, so for the present section it is irrelevant. \end{remark} \begin{remark}\label{rem:comparisonToKontsevich} In~\cite{Kontsevich}, Haiden, Katzarkov and Kontsevich associate with the triple $(S,M,A)$ the path algebra \[\Atw:=\mathbb{F}_2 Q(S,M,A)/\{p_1p_2=0=q_2q_1\mid\text{arcs }a\in A\}.\] Note that the relations they impose on the free quiver algebra $\mathbb{F}_2 Q(S,M,A)$ are in some sense ``dual'' to ours. Haiden, Katzarkov and Kontsevich define an $A_\infty$-structure on $\Atw$, which then gives rise to the notion of twisted complexes using an $A_\infty$-version of $\Cx^{0}(\Mat(\cdot))$. In \cite[Theorem~4.3]{Kontsevich}, they show that twisted complexes over $\Atw$ are classified by immersed curves on $S$ with local systems. Curved complexes over $\mathcal{A}$ and twisted complexes over $\Atw$ are closely related. In~\cite[Theorem~5.10, Proposition~5.15]{MyThesis}, I define two functors between the category of peculiar modules and the category of twisted complexes for the triple $(S,M,A)$ from Example~\ref{exa:pqModSpecialCaseofCC} which I expect to set up an equivalence of categories. However, to define these functors, one first needs to pass to infinitely generated twisted complexes and peculiar modules, which complicates matters. The classification results below for peculiar modules and more generally curved complexes (which we assume to be finitely generated throughout this paper) suggest an alternative approach. We may argue that twisted complexes and peculiar modules encode the same geometric objects, namely immersed curves on the 4-punctured sphere with local systems. The only difference is that finitely generated peculiar modules correspond to \textit{compact} curves, while finitely generated twisted complexes can also represent non-compact ones. This explains why we cannot expect to obtain an equivalence for peculiar modules and twisted complexes in the finitely generated setting. \end{remark} \begin{remark}\label{rem:comparisonToHRW} It is worthwhile to compare our definition of curved complexes associated with marked surfaces to extendable type D structures studied by Hanselman, Rasmussen and Watson in~\cite{HRW}. In fact, for the proof of Theorem~\ref{thm:EverythingIsLoopTypeUpToLocalSystems} below, we will recycle a simplification algorithm which is a central ingredient in their classification result~\cite[Theorem~32]{HRW}. The main technical difference is that in~\cite{HRW}, the algebra $\mathcal{A}$ is truncated in the sense that any path that traverses certain elementary algebra elements more than once is set equal to zero. We work instead with the full algebra $\mathcal{A}$. \textit{A posteriori}, we will see that these two perspectives coincide and that it is actually sufficient to work with marked surfaces and arc systems in which each face has a basepoint, see Corollary~\ref{cor:AddingBasepointFunctorIsFaithfulUpToHom} and Corollary~\ref{cor:PeculiarModulesFromNiceDiagrams}. Apart from this rather technical difference (which could easily have been avoided), Theorem~\ref{thm:EverythingIsLoopTypeUpToLocalSystems} below follows directly from~\cite[Theorem~32]{HRW}. However, in~\cite{HRW}, the classification of morphisms relies on some ad-hoc arguments that seem to be particular to the torus algebra. 
We propose alternative arguments which work for arbitrary marked surfaces with arc systems. The notion of precurves, which we define in the next subsection, seems to provide a good framework for these kinds of arguments. \end{remark}
\subsection{An algebraic reinterpretation of curved complexes}\label{subsec:precurves}
\begin{definition}\label{def:AlgebraFromGlueingFaces} Let $A$ be an arc system on a marked surface $(S,M)$. We will now give an alternative description of the algebra $\mathcal{A}$ in terms of a subalgebra of a larger algebra built out of simpler pieces. For each face $f\in F(S,M,A)$, let $\mathcal{A}_f$ be the free path algebra of the cyclic or linear quiver with $n_f$ vertices, depending on whether $f$ is closed or open. If $f$ is open, we may think of the linear quiver as a deformation retract of $f$ to its boundary minus the basepoint. If $f$ is closed, let $f^{\ast}$ be the face $f$ punctured at a fixed point; then we may think of the cyclic quiver as a deformation retract of $f^{\ast}$ to the boundary of $f$. Thus, each side $s$ of the face $f$ corresponds to a vertex and hence to the constant path $\iota_s$ at that vertex. We denote the vector space generated by the idempotents of $\mathcal{A}_f$ by $\mathcal{I}_f$ and the (non-unital) subalgebra generated by paths of length $\geq1$ by $\mathcal{A}^+_f$. These fit into a short exact sequence
$$ \begin{tikzcd}[row sep=10pt] 0 \arrow{r}{} & \mathcal{A}^+_f \arrow{r}{} & \mathcal{A}_f \arrow{r}{} & \mathcal{I}_f \arrow{r}{} & 0. \end{tikzcd} $$
By taking the direct sum over all faces $f\in F(S,M,A)$, we obtain the short exact sequence
$$ \begin{tikzcd}[row sep=10pt] 0 \arrow{r}{} & \overline{\mathcal{A}}^+:=\bigoplus_f\mathcal{A}^+_f \arrow{r}{} & \overline{\mathcal{A}}:=\bigoplus_f\mathcal{A}_f \arrow{r}{} & \overline{\mathcal{I}}:=\bigoplus_f\mathcal{I}_f \arrow{r}{} & 0. \end{tikzcd} $$
Then $\mathcal{I}$ agrees with the subring of $\overline{\mathcal{I}}$ generated by $$\{\iota_{s_1(a)}+\iota_{s_2(a)}\mid a\in A\}$$ and $\mathcal{A}$ agrees with the subalgebra of $\overline{\mathcal{A}}$ whose underlying vector space is given by $\mathcal{I}\oplus\overline{\mathcal{A}}^+$. For each face $f\in F(S,M,A)$, let $p_f$ be the sum of all length 1 paths in $\mathcal{A}_f$, $U_f=p_f^{n_f}$ and $U$ the sum of $U_f$ over all faces $f$. (Note that $U_f=0$ for open faces.) Our \textbf{standard basis} of $\mathcal{A}^+_f.\iota_s$ is given by $\{p_f^n.\iota_s\}_{n\geq1}$. We call $n$ the \textbf{length of a basis element} $p_f^n.\iota_s$. We denote the shortest element of the standard basis that starts at a side $s$ and ends at a side $t$ of the same face by $p^s_t$.
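For instance, if $f$ is a closed face with $n_f=4$ whose sides are $s$, $t$, $u$ and $v$, in this cyclic order along the quiver of $f$, then $p^s_t=p_f.\iota_s$, $p^s_u=p_f^2.\iota_s$, $p^s_v=p_f^3.\iota_s$ and $p_f^4.\iota_s=U_f.\iota_s$.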
\end{definition}
\begin{remark}\label{rem:precurveFunctors} If we view the algebras $\mathcal{A}$ and $\overline{\mathcal{A}}$ as categories whose objects correspond to constant paths, we can view the inclusions $\mathcal{I}\hookrightarrow\overline{\mathcal{I}}$ and $\mathcal{A}\hookrightarrow\overline{\mathcal{A}}$ in terms of a functor between the additive enlargements of $\mathcal{A}$ and $\overline{\mathcal{A}}$ $$\mathcal{F}\co\Mat\mathcal{A}\rightarrow\Mat\overline{\mathcal{A}}$$ which sends an object $\iota_a$, $a\in A$, to $\iota_{s_1(a)}\oplus \iota_{s_2(a)}$ and a morphism $\varphi\in\Mor_{\mathcal{A}}(\iota_a,\iota_b)$ to the map
$$ \begin{tikzcd}[column sep=5cm,ampersand replacement=\&] \iota_{s_1(a)}\oplus \iota_{s_2(a)} \arrow{r}{\left(\begin{matrix} \iota_{s_1(b)}.\varphi.\iota_{s_1(a)} & \iota_{s_1(b)}.\varphi.\iota_{s_2(a)} \\ \iota_{s_2(b)}.\varphi.\iota_{s_1(a)} & \iota_{s_2(b)}.\varphi.\iota_{s_2(a)} \end{matrix}\right)} \& \iota_{s_1(b)}\oplus \iota_{s_2(b)}. \end{tikzcd} $$
In order to recover a given object $C$ in $\Mat\mathcal{A}$ from $\mathcal{F}(C)$, one needs to remember how the idempotents are matched up. This motivates the following definition. \end{remark}
\begin{definition}\label{def:SplittingCatsForEquivalenceFunctors} Consider the category $\Mat_i\overline{\mathcal{A}}$ enriched over $\delta$-graded vector spaces \begin{itemize} \item whose objects are pairs $(C, \{P_a\}_{a\in A})$, where $C$ is a $\delta$-graded right module over $\overline{\mathcal{I}}$ and $$P_a\co C.\iota_{s_1(a)}\rightarrow C.\iota_{s_2(a)}$$ is a $\delta$-grading preserving vector space isomorphism for every arc $a$, and \item whose morphism spaces are defined as follows: for any two right $\overline{\mathcal{I}}$-modules $C$ and $C'$, the short exact sequences from Definition~\ref{def:AlgebraFromGlueingFaces} induce a split exact sequence
$$ \begin{tikzcd}[row sep=10pt] 0 \arrow{r}{} & \Mor_{\Mat\overline{\mathcal{A}}^+}(C,C') \arrow{r}{} & \Mor_{\Mat\overline{\mathcal{A}}}(C,C') \arrow{r}{} & \Mor_{\Mat\overline{\mathcal{I}}}(C,C') \arrow{r}{} & 0. \end{tikzcd} $$
In other words, we may write each morphism $\varphi$ in the middle uniquely as $\varphi^++\varphi^\times$, where $\varphi^+$ is an element of the morphism space on the left and $\varphi^\times$ an element of the morphism space on the right. We then define the morphism spaces $$ \Mor((C, \{P_a\}_{a\in A}),(C', \{P'_a\}_{a\in A}))$$ of $\Mat_i\overline{\mathcal{A}}$ by $$\{\varphi\in\Mor_{\Mat\overline{\mathcal{A}}}(C,C')\mid \forall a\in A\co (\iota_{s_2(a)}.\varphi^\times.\iota_{s_2(a)})\circ P_a=P'_a\circ(\iota_{s_1(a)}.\varphi^\times.\iota_{s_1(a)})\}. $$ \item whose composition is inherited from $\Mat\overline{\mathcal{A}}$; one checks that this is indeed well-defined. \end{itemize}
We may extend the functor $\mathcal{F}$ from Remark~\ref{rem:precurveFunctors} to a functor $$\mathcal{F}\co \Mat\mathcal{A}\longrightarrow\Mat_i\overline{\mathcal{A}}$$ by setting, for a generator $x.\iota_a\in C.\iota_a$, $P_a(x.\iota_{s_1(a)})=x.\iota_{s_2(a)}$. (Note that this makes the image of any morphism under $\mathcal{F}$ well-defined.) Conversely, we may also define a functor $$\mathcal{G}\co \Mat_i\overline{\mathcal{A}}\longrightarrow\Mat\mathcal{A}$$ as follows: given an object $X=(C,\{P_a\}_{a\in A})$ of $\Mat_i\overline{\mathcal{A}}$, we define an object $\mathcal{G}(X)$ in $\Mat\mathcal{A}$ by setting $\mathcal{G}(X).\iota_a=C.\iota_{s_1(a)}$ as vector spaces.
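In other words, $\mathcal{G}(X)$ keeps only the summands $C.\iota_{s_1(a)}$, and the isomorphisms $P_a$ are used to transport everything defined on the summands $C.\iota_{s_2(a)}$ onto them.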
Furthermore, if $\varphi\in\Mor_{\Mat_i\overline{\mathcal{A}}}((C,\{P_a\}),(C',\{P'_a\}))$, we define a morphism $\mathcal{G}(\varphi)$ in $\Mat\mathcal{A}$ by setting for each $a, a'\in A$ \begin{align*} \iota_{a'}.\mathcal{G}(\varphi).\iota_a:=(\iota_{s_1(a')}.\varphi^\times.\iota_{s_1(a)})&+(\iota_{s_1(a')}.\varphi^+.\iota_{s_1(a)})+(P'_{a'})^{-1}\circ(\iota_{s_2(a')}.\varphi^+.\iota_{s_1(a)})\\ &+(\iota_{s_1(a')}.\varphi^+.\iota_{s_2(a)})\circ P_a+(P'_{a'})^{-1}\circ(\iota_{s_2(a')}.\varphi^+.\iota_{s_2(a)})\circ P_a. \end{align*} \end{definition} \begin{lemma}\label{lem:SplittingCatsForEquivalenceAdditive} \(\mathcal{G}\circ\mathcal{F}=\id_{\Mat\mathcal{A}}\) and \(\mathcal{F}\circ\mathcal{G}\cong\id_{\Mat_i\overline{\mathcal{A}}}\). In particular, \(\Mat\mathcal{A}\) and \(\Mat_i\overline{\mathcal{A}}\) are equivalent. \end{lemma} \begin{proof} The first identity is obvious from the definitions. For the second, consider the two mutually inverse natural transformations $$ \begin{tikzcd}[column sep=2cm] (C, \{\id_{C.\iota_{s_1(a)}}\}_{a\in A})=\mathcal{F}(\mathcal{G}(C, \{P_a\}_{a\in A})) \arrow[shift left]{r}{\eta_{(C, \{P_a\})}} & (C, \{P_a\}_{a\in A}) \arrow[shift left]{l}{\varepsilon_{(C, \{P_a\})}} \end{tikzcd} $$ given by $$ \iota_a.\eta_{(C, \{P_a\})}.\iota_a= \left(\begin{matrix} \id_{C.\iota_{s_1(a)}} & 0 \\ 0 & P_a \end{matrix}\right) \quad\text{ and }\quad \iota_a.\varepsilon_{(C, \{P_a\})}.\iota_a= \left(\begin{matrix} \id_{C.\iota_{s_1(a)}} & 0 \\ 0 & P^{-1}_a \end{matrix}\right) $$ for all $a\in A$ and 0 in all other components. \end{proof} \begin{definition}\label{def:precurvesAlgebraic} Let $\CC_i(S,M,A)=\Cx^{U}(\Mat_i(\overline{\mathcal{A}}))$. More explicitly, \begin{itemize} \item objects are triples $(C, \{P_a\}_{a\in A},\partial)$, where $C$ is an object in $\Mat\overline{\mathcal{A}}$, $$P_a\co C.\iota_{s_1(a)}\rightarrow C.\iota_{s_2(a)}$$ is a $\delta$-grading preserving isomorphism for every arc $a$ and $$\partial\co C\longrightarrow C$$ defines an endomorphism of $(C, \{P_a\}_{a\in A})\in\ob\Mat_i\overline{\mathcal{A}}$ which increases $\delta$-grading by 1 and satisfies $\partial^2=U\cdot\id$. \item Morphism spaces are morphism spaces of the underlying objects in $\Mat_i\overline{\mathcal{A}}$. The differentials on these morphism spaces are given as usual by pre- and post-composition with the endomorphisms $\partial$. \item $\CC_i(S,M,A)$ inherits its composition from $\Mat_i\overline{\mathcal{A}}$. \end{itemize} We call an object in $\CC_i(S,M,A)$ a \textbf{precurve}. To simplify notation, we often denote a precurve $(C, \{P_a\}_{a\in A},\partial)$ by $C$. We call a precurve \textbf{reduced} if its differential does not contain any identity component, ie $\partial^+=\partial$. \end{definition} \begin{corollary}\label{cor:SplittingCatsForEquivalenceComplexes} \(\CC(S,M,A)\) and \(\CC_i(S,M,A)\) are equivalent as dg categories. \end{corollary} \begin{proof} This follows directly from the previous lemma. \end{proof} \begin{lemma}\label{lem:CCreduced} Every curved complex is chain homotopic to a reduced one. The same holds for precurves. \end{lemma} \begin{proof} Any arrow on a $\delta$-graded curved complex which is labelled by an idempotent does not have any other labels because of the $\delta$-grading. Therefore we can always reduce the number of generators of an unreduced curved complex by cancellation until the complex is reduced. By the previous corollary, any given precurve is isomorphic to one in the image of $\mathcal{F}$. 
Since $\mathcal{F}$ preserves chain homotopy classes, this implies that any precurve is chain homotopic to $\mathcal{F}(C,\partial)$ for some reduced curved complex $(C,\partial)$ such that $\mathcal{F}(C,\partial)$ is a reduced precurve. Alternatively, we may also apply Lemma~\ref{lem:AbstractCancellation} directly. \end{proof} \subsection{A geometric interpretation of precurves}\label{subsec:precurves_geometry} \begin{figure}[t] \centering \begin{subfigure}[b]{\textwidth} \centering \psset{unit=0.15} \begin{pspicture}(-41,-21)(41,21) \psecurve(0,-2)(4.5,-13)(4;-52.5)(-13,4.5) \psecurve(-10,-36.75)(-18,-20)(-10,-3.25)(-20,13.5) \pnode(-20,0){A} \pnode(20,0){C} \pnode([nodesep=4,angle=30]A){A30} \pnode([nodesep=4,angle=150]A){A150} \pnode([nodesep=4,angle=-30]A){Am30} \pnode([nodesep=4,angle=-150]A){Am150} \pnode([nodesep=4,angle=30]C){C30} \pnode([nodesep=4,angle=150]C){C150} \pnode([nodesep=4,angle=-30]C){Cm30} \pnode([nodesep=4,angle=-150]C){Cm150} \pscustom*[linecolor=lightred]{ \pscurve(-40,5)(-33,4.5)(-28,3.6)(A150) \psline(A150)(Am150) \pscurve(Am150)(-28,-3.6)(-33,-4.5)(-40,-5) } \psline[linecolor=red]{->}(-40,0)(-24,0) \pscurve{->}(-40,5)(-33,4.5)(-28,3.6)(A150) \pscurve{->}(-40,-5)(-33,-4.5)(-28,-3.6)(Am150) \pscircle*[linecolor=lightred](-29,0){1} \rput[c]{-90}(-29,0){\textcolor{red}{$a_1$}} \pscustom*[linecolor=lightred]{ \pscurve(4;150)(-7,3)(-13,3)(A30) \psline(A30)(Am30) \pscurve(Am30)(-13,-3)(-7,-3)(4;-150) } \psline[linecolor=red]{<-}(-4,0)(-16,0) \pscurve{<-}(4;150)(-7,3)(-13,3)(A30) \pscurve{<-}(4;-150)(-7,-3)(-13,-3)(Am30) \pscircle*[linecolor=lightred](-13,0){1} \rput[c]{-90}(-13,0){\textcolor{red}{$a_2$}} \pscustom*[linecolor=lightred]{ \pscurve(4;30)(7,3)(13,3)(C150) \psline(C150)(Cm150) \pscurve(Cm150)(13,-3)(7,-3)(4;-30) } \psline[linecolor=red]{->}(4,0)(16,0) \pscurve{->}(4;30)(7,3)(13,3)(C150) \pscurve{->}(4;-30)(7,-3)(13,-3)(Cm150) \pscircle*[linecolor=lightred](6,0){1} \rput[c]{-90}(6,0){\textcolor{red}{$a_3$}} \pscustom*[linecolor=lightred]{ \pscurve(5,-20)(4.5,-13)(3.6,-8)(4;-60) \psline(4;-60)(4;-120) \pscurve[liftpen=1](4;-120)(-3.6,-8)(-4.5,-13)(-5,-20) } \psline[linecolor=red]{<-}(0,-4)(0,-20) \pscurve{->}(5,-20)(4.5,-13)(3.6,-8)(4;-60) \pscurve{->}(-5,-20)(-4.5,-13)(-3.6,-8)(4;-120) \pscircle*[linecolor=lightred](0,-9){1} \rput[c](0,-9){\textcolor{red}{$a_4$}} \pscustom*[linecolor=lightred]{ \pscurve(5,20)(4.5,13)(3.6,8)(4;60) \psline(4;60)(4;120) \pscurve[liftpen=1](4;120)(-3.6,8)(-4.5,13)(-5,20) } \psline[linecolor=red]{->}(0,4)(0,20) \pscurve{<-}(5,20)(4.5,13)(3.6,8)(4;60) \pscurve{<-}(-5,20)(-4.5,13)(-3.6,8)(4;120) \pscircle*[linecolor=lightred](0,16){1} \rput[c](0,16){\textcolor{red}{$a_5$}} \psarc[fillcolor=white,fillstyle=solid,linecolor=darkgreen](0,0){4}{0}{360} \psarc[fillcolor=white,fillstyle=solid,linecolor=darkgreen](20,0){4}{0}{360} \psarc[fillcolor=white,fillstyle=solid,linecolor=darkgreen](-20,0){4}{0}{360} \psline[linecolor=darkgreen,linearc=8,cornersize=absolute](0,-20)(-40,-20)(-40,20)(40,20)(40,-20)(0,-20) \psline[linecolor=darkgreen](-20,-21)(-20,-19) \rput(-20,-18){\textcolor{darkgreen}{$m_1$}} \psline[linecolor=darkgreen](3;-45)(5;-45) \rput(7;-40){\textcolor{darkgreen}{$m_2$}} \psarc[linecolor=darkgreen]{->}(-20,0){4}{-150}{-30} \psarc[linecolor=darkgreen]{->}(-20,0){4}{30}{150} \psarc[linecolor=darkgreen]{->}(0,0){4}{30}{60} \psarc[linecolor=darkgreen]{->}(0,0){4}{120}{150} \psarc[linecolor=darkgreen]{->}(0,0){4}{-150}{-120} \psarc[linecolor=darkgreen]{->}(20,0){4}{-150}{150} 
\psline[linecolor=darkgreen,linearc=8,cornersize=absolute]{->}(-40,5)(-40,20)(-5,20) \uput{4}[-45](-40,20){$\textcolor{darkgreen}{p_1}$} \uput{5}[90](-20,0){$\textcolor{darkgreen}{p_2}$} \uput{5}[-90](-20,0){$\textcolor{darkgreen}{p_3}$} \uput{5}[45](0,0){$\textcolor{darkgreen}{p_4}$} \uput{5}[135](0,0){$\textcolor{darkgreen}{p_5}$} \uput{5}[-135](0,0){$\textcolor{darkgreen}{p_6}$} \uput{5}[0](20,0){$\textcolor{darkgreen}{p_7}$} \rput(38,0){\textcolor{darkgreen}{$p_8$}} \psline[linecolor=darkgreen]{->}(10,-20)(5,-20) \rput[l](6,-16.5){$P_{a_3}=\left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) $} \rput[r](-8,16.5){$P_{a_5}=\left( \begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}\right) $} \psdots(4.5,13)(3.6,8)(-4.5,13)(-3.6,8) \psdots(7,3)(13,3)(7,-3)(13,-3) \psdots(-33,4.5)(-33,-4.5)(-10,3.25)(-10,-3.25)(-4.5,-13)(4.5,-13) \psecurve(-3.6,-4)(-33,4.5)(-3.6,8)(33,4.5) \psecurve(-3.6,11)(-33,-4.5)(-4.5,-13)(33,-4.5) \psecurve(1,3)(-4.5,13)(-10,3.1)(-4.5,-7) \psecurve(26,0)(13,3)(26,6)(26,-6)(13,-3)(26,0) \psecurve(0,3)(3.6,8)(7,3)(3,-2) \psecurve(-32,12)(4.5,13)(29,10)(29,-10)(7,-3)(29,4) \psecurve(-12.6,8)(-4.5,13)(3.6,8)(11.7,13) \psecurve(12.6,8)(4.5,13)(-3.6,8)(-11.7,13) \psline{->}(1,9.1)(1.5,8.65) \psline{->}(-1,9.1)(-1.5,8.65) \psecurve(-32,10)(-33,4.5)(-33,-4.5)(-32,-10) \psecurve(-10,10)(-10,3.1)(-10,-3.1)(-10,-10) \psecurve(10,12)(4.5,13)(-4.5,13)(-10,12) \psecurve(10,7)(3.6,8)(-3.6,8)(-10,8) \psecurve(13,9)(7,3)(13,-3)(7,-9) \psecurve(13,-9)(7,-3)(13,3)(7,9) \psecurve(-14.5,-12.5)(-4.5,-13)(4.5,-13)(8,-12) \end{pspicture}
\caption{A simply-faced precurve for the triple $(S,M,A)$ from Figure~\ref{fig:ExampleMarkedSurface}.}\label{fig:ExamplePreloopPic} \end{subfigure}
\begin{subfigure}[b]{\textwidth}\centering {$ \begin{tikzcd}[row sep=1.3cm, column sep=1.4cm] && e^{a_5}_2 \arrow[swap,out=180,in=90]{lldd}{p_{2}p_{5}} \\ && e^{a_5}_1 \arrow[leftarrow,bend right=10]{ld}{p_{1}p_{2}} \\ e^{a_1}_1 \arrow[bend right,swap,in=200]{rrd}{p_{6}p_{3}} \arrow[bend right,out=45,in=150,pos=0.4]{rru}{p_{1}} \arrow[leftarrow,bend left,out=25,in=170,pos=0.2,swap]{rru}{p_{2}p_{5}} & e^{a_2}_1 \arrow[leftarrow,bend right,swap,out=30,in=160,pos=0.8]{uur}{p_{5}} \arrow[bend left,out=50,in=140,pos=0.6]{uur}{p_{1}p_{2}} & & e^{a_3}_1 \arrow[bend right=30,swap]{uul}{p_{4}p_{7}} & e^{a_3}_2 \arrow[bend right=15,swap,pos=0.3]{ull}{p_{4}} \arrow{l}{p_{7}} \\ && e^{a_4}_1 \end{tikzcd} $}
\caption{The image of the precurve shown above under the functor $\mathcal{G}$.}\label{fig:ExamplePreloopGraph} \end{subfigure}
\caption{The geometric representation of a precurve and its corresponding curved complex.}\label{fig:ExamplePrecurve} \end{figure}
\begin{definition} Given three positive integers $i,j,n$ satisfying $i,j\leq n$ and $i\neq j$, we define the elementary matrix $E^{j}_{i}\in \GL_n(\mathbb{F}_2)$ by \renewcommand{\kbldelim}{(} \renewcommand{\kbrdelim}{)} $$E^{j}_{i}:=(\delta_{i'j'}+\delta_{ii'}\delta_{jj'})_{i'j'}= \kbordermatrix{ & & i & j & \\ & 1 & & & \\ i & & \ddots & 1 & \\ j & & & \ddots & \\ & & & & 1 }. $$ Furthermore, let $P_{ij}\in \GL_n(\mathbb{F}_2)$ be the permutation matrix corresponding to the permutation $(i,j)$. \end{definition}
\begin{definition}\label{def:precurvesGeometric} We interpret a reduced precurve $C$ geometrically on the marked surface $(S,M)$ with arc system $A$ as follows: for each side $s$, we represent the fixed basis $\{e^s_{i}\}_{i=1,\dots,n_s}$ of $C.\iota_s$ by pairwise distinct dots $\bullet$ on $s$, which we order according to the orientation of $s$.
We label the $i^\text{th}$ dot on $s$ by $(s,i)$. Next, since we are assuming that the precurve is reduced, every component of the differential lies in $\mathcal{A}_f^+$ for some face $f$. So let us consider such a component $$e^s_i\xrightarrow{p_f^n.\iota_s}e^{s'}_{i'}.$$ We represent this component by an oriented or unoriented immersed interval connecting the dots $\bullet(s,i)$ and $\bullet(s',i')$, depending on the following three cases: \begin{enumerate} \item If $f$ is open, we draw an unoriented curve between $\bullet(s,i)$ and $\bullet(s',i')$ on $f$. \item If $f$ is closed and there is a component $e^{s'}_{i'}\rightarrow e^s_i$ of $\partial$, we do the same thing as in the first case. However, here, the unoriented curve represents both $e^s_i\rightarrow e^{s'}_{i'}$ and $e^{s'}_{i'}\rightarrow e^s_i$. Also note that because of the $\delta$-grading, $p_f^n.\iota_s=p^s_{s'}$ and the component $e^{s'}_{i'}\rightarrow e^s_i$ is labelled by $p^{s'}_s$, so that the compositions of the two components are equal to $U_f.\iota_s$ and $U_f.\iota_{s'}$, respectively. \item If $f$ is closed and there is no component $e^{s'}_{i'}\rightarrow e^s_i$ in $\partial$, we draw an oriented immersed interval from $\bullet(s,i)$ to $\bullet(s',i')$ on the punctured face $f^{\ast}$ which is homotopic to the path $p_f^n.\iota_s$ in the quiver of $f$. \end{enumerate} Up to homotopy in $f$, respectively $f^\ast$, this representation of $\partial$ is unique and uniquely determines the differential $\partial$. We call the oriented and unoriented immersed intervals \textbf{two-sided $f$-joins}. The $\partial^2$-relation for precurves says that each dot $\bullet(s,i)$ on a closed face $f$ is connected to some dot $\bullet(s',i')$ on the same face via an unoriented two-sided $f$-join. The same need not be true for open faces. So we connect those dots that do not lie on any two-sided $f$-joins to the boundary segment containing the basepoint of $f$ by unoriented immersed curves on $f$. We call those intervals \textbf{one-sided $f$-joins}. We define the \textbf{length of an $f$-join} to be the length of the corresponding algebra element if the $f$-join is two-sided and 0 if it is one-sided. Finally, we label each arc $a\in A$ by the matrix $P_a$. However, we can also represent the matrix $P_a$ itself geometrically if we have a decomposition of $P_a$ into elementary matrices $$P_a=P_a^{l_a}\cdots P_a^1$$ for some $l_a\geq 0$, where for each $k=1,\dots,l_a$, $P_a^k$ is equal to some $E^{j}_{i}$ or $P_{ij}$, for some $i=i(k)$ and $j=j(k)$ with $i\neq j$. We divide the neighbourhood $N(a)$ of $a$ along some leaves of $\mathcal{F}_a$ into $l_a$~segments, ordered from $s_1(a)$ (right) to $s_2(a)$ (left), and label the $k^\text{th}$ segment by the matrix $P_a^k$. Then we represent each matrix $P_a^k$ graphically, as shown on the left of Figure~\ref{fig:traintrackmoves}. We call the crossing arcs for $P_a^k=P_{ij}$ \textbf{crossings} and the pair of arrows for $P_a^k=E^{j}_{i}$ \textbf{crossover arrows}. This notation is borrowed from~\cite[section~3.3]{HRW}, except that we restrict crossings and crossover arrows to the neighbourhoods of the arcs. Of course, the decomposition of $P_a$ is not unique. However, linear algebra tells us that it is unique up to the following moves, some of which are illustrated on the right of Figure~\ref{fig:traintrackmoves}: \begin{description} \item[(T1)] A crossing can be expressed in terms of crossover arrows, since $P_{ij}=E^{j}_{i}E^{i}_{j}E^{j}_{i}$.
\item[(T2)] Two adjacent identical crossover arrows can be cancelled, since $E^{j}_{i}E^{j}_{i}$ is the identity.
\item[(T3)] Any two crossover arrows can be moved past one another unless one strand is the start of one and the end of the other; in such a case, a crossover arrow connecting the other two strands needs to be added, as shown in Figure~\ref{fig:traintrackmoves}. This corresponds to the identities $$ E^{j}_{i}E^{j'}_{i'}=\begin{cases} E^{j'}_{i'}E^{j}_{i} & \text{if $i\neq j'$ and $j\neq i'$} \\ E^{j'}_{i'}E^{j}_{i}E^{j}_{i'} & \text{if $i=j'$ and $j\neq i'$}\\ E^{j'}_{i'}E^{j}_{i}E^{j'}_{i} & \text{if $i\neq j'$ and $j=i'$.} \end{cases} $$ \end{description} \end{definition}
\begin{remark}\label{rem:PathsForTrainTracks} The point of the graphical notation for the matrices $P_a$ is that we can recover the entry in the $j^\text{th}$ column and $i^\text{th}$ row of each $P_a$ by counting (up to homotopy and modulo 2) all paths in the restriction of the precurve to $N(a)$ \begin{enumerate} \item which start at $\bullet(s_1(a),j)$ and end at $\bullet(s_2(a),i)$, \item which are transverse to the foliation $\mathcal{F}_a$ and \item whose orientation agrees with that of each crossover arrow they traverse. \end{enumerate} Since each factor $P_a^k$ is its own inverse over $\mathbb{F}_2$, we have $$P_a^{-1}=(P_a^{l_a}\cdots P_a^1)^{-1}=(P_a^1)^{-1}\cdots(P_a^{l_a})^{-1}=P_a^1\cdots P_a^{l_a},$$ so we can similarly read off the entry in the $j^\text{th}$ column and $i^\text{th}$ row of each $P_a^{-1}$ by following paths from $\bullet(s_2(a),j)$ to $\bullet(s_1(a),i)$ which also satisfy the other two conditions above. \end{remark}
\begin{remark} Up to the moves (T1)--(T3) and isotopies of the immersed curves, this geometric interpretation of a precurve is unique, and it in turn uniquely determines the precurve. So we will no longer carefully distinguish between precurves and their geometric representations. The following two definitions are examples of this principle.
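For concreteness, the smallest instance of the matrix identity underlying (T1), with $n=2$, $i=1$ and $j=2$, reads $$E^{2}_{1}E^{1}_{2}E^{2}_{1}=\left(\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}\right)\left(\begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix}\right)\left(\begin{matrix} 1 & 1 \\ 0 & 1 \end{matrix}\right)=\left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right)=P_{12}$$ in $\GL_2(\mathbb{F}_2)$.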
\end{remark} \begin{figure}[t] \centering \psset{unit=0.2} \begin{subfigure}{0.35\textwidth}\centering \begin{pspicture}(-13,-10.1)(13,10.1) \pscustom*[linecolor=lightred]{ \psline(-9,-10)(-9,10) \psline(9,10)(9,-10) } \psline[linecolor=red,linestyle=dotted](-2,-10)(-2,10) \psline[linecolor=red,linestyle=dotted](2,-10)(2,10) \psline[linecolor=red,linestyle=dotted](-6,-10)(-6,10) \psline[linecolor=red,linestyle=dotted](6,-10)(6,10) \rput[l](9.5,7.5){$s_1(a)$} \rput[r](-9.5,7.5){$s_2(a)$} \rput[c](10.5,1.5){$i$} \rput[c]{-90}(10.5,-0.25){$>$} \rput[c](10.5,-2){$j$} \rput[c](4,7.5){\textcolor{red}{$E^{j}_{i}$}} \rput[c](-4,7.5){\textcolor{red}{$P_{ij}$}} \rput[c](7.5,7.5){\textcolor{red}{$\cdots$}} \rput[c](-7.5,7.5){\textcolor{red}{$\cdots$}} \rput[c](0,7.5){\textcolor{red}{$\cdots$}} \pscustom{ \psline(9,5)(-9,5) } \pscustom{ \psline(-9,1.5)(-6,1.5) \psecurve(-10,-2)(-6,1.5)(-2,-2)(2,1.5) \psline(-2,-2)(9,-2) } \pscustom{ \psline(-9,-2)(-6,-2) \psecurve(-10,1.5)(-6,-2)(-2,1.5)(2,-2) \psline(-2,1.5)(9,1.5) } \pscustom{ \psline(9,-5.5)(-9,-5.5) } \psecurve(10,1.5)(6,-2)(2,1.5)(-2,-2) \psecurve(10,-2)(6,1.5)(2,-2)(-2,1.5) \psline{->}(4.85,0.87)(5.15,1.2) \psline{->}(3.15,0.87)(2.85,1.2) \psline{->}(-9,-10)(-9,10) \psline{->}(9,-10)(9,10) \psline[linecolor=darkgreen](-10,10)(10,10) \psline[linecolor=darkgreen](-10,-10)(10,-10) \psdots(9,5)(9,1.5)(9,-2)(9,-5.5) \psdots(-9,5)(-9,1.5)(-9,-2)(-9,-5.5) \end{pspicture} \end{subfigure} \begin{minipage}{0.6\textwidth}\centering \begin{subfigure}{\textwidth}\centering \begin{pspicture}(-21.5,-5.1)(13.5,5.1) \rput(10,0){ \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(3,-5)(3,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline[linecolor=red,linestyle=dotted](0,-5)(0,5) \pscustom{ \psline(1,-2)(0,-2) \psecurve(4,2)(0,-2)(-4,2)(-8,-2) \psline(-4,2)(-5,2) } \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \pscustom{ \psline(1,2)(0,2) \psecurve(4,-2)(0,2)(-4,-2)(-8,2) \psline(-4,-2)(-5,-2) } \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \psline[linestyle=dotted,dotsep=1pt](2,2)(1,2) \psline[linestyle=dotted,dotsep=1pt](2,-2)(1,-2) } \rput[b](0,1){(T1)} \rput(0,0){$\longleftrightarrow$} \rput(-14,0){ \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(11,-5)(11,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline[linecolor=red,linestyle=dotted](0,-5)(0,5) \psline[linecolor=red,linestyle=dotted](4,-5)(4,5) \psline[linecolor=red,linestyle=dotted](8,-5)(8,5) \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \psline[linestyle=dotted,dotsep=1pt](9,2)(10,2) \psline[linestyle=dotted,dotsep=1pt](9,-2)(10,-2) \psline(-5,2)(9,2) \psline(-5,-2)(9,-2) \rput(-2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,1.12)(1,1.55) \psline{->}(-0.7,1.12)(-1,1.55) } \rput(2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \rput(6,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,1.12)(1,1.55) \psline{->}(-0.7,1.12)(-1,1.55) } } \end{pspicture} \end{subfigure} \begin{subfigure}{\textwidth}\centering \begin{pspicture}(-17.5,-5.1)(21.5,5.1) \rput(-10,0){ \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(7,-5)(7,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline[linecolor=red,linestyle=dotted](0,-5)(0,5) 
\psline[linecolor=red,linestyle=dotted](4,-5)(4,5) \psline[linestyle=dotted,dotsep=1pt](-5,4)(-6,4) \psline[linestyle=dotted,dotsep=1pt](-5,0)(-6,0) \psline[linestyle=dotted,dotsep=1pt](-5,-4)(-6,-4) \psline[linestyle=dotted,dotsep=1pt](5,4)(6,4) \psline[linestyle=dotted,dotsep=1pt](5,0)(6,0) \psline[linestyle=dotted,dotsep=1pt](5,-4)(6,-4) \psline(-5,4)(5,4) \psline(-5,0)(5,0) \psline(-5,-4)(5,-4) \rput(-2,2){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,1.12)(1,1.55) \psline{->}(-0.7,1.12)(-1,1.55) } \rput(2,-2){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,1.12)(1,1.55) \psline{->}(-0.7,1.12)(-1,1.55) } } \rput[b](0,1){(T3)} \rput(0,0){$\longleftrightarrow$} \rput(10,0){ \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(11,-5)(11,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline[linecolor=red,linestyle=dotted](0,-5)(0,5) \psline[linecolor=red,linestyle=dotted](4,-5)(4,5) \psline[linecolor=red,linestyle=dotted](8,-5)(8,5) \psline[linestyle=dotted,dotsep=1pt](-5,4)(-6,4) \psline[linestyle=dotted,dotsep=1pt](-5,0)(-6,0) \psline[linestyle=dotted,dotsep=1pt](-5,-4)(-6,-4) \psline[linestyle=dotted,dotsep=1pt](9,4)(10,4) \psline[linestyle=dotted,dotsep=1pt](9,0)(10,0) \psline[linestyle=dotted,dotsep=1pt](9,-4)(10,-4) \psline(-5,4)(9,4) \psline(-5,0)(9,0) \psline(-5,-4)(9,-4) \rput(-2,-2){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,1.12)(1,1.55) \psline{->}(-0.7,1.12)(-1,1.55) } \rput(2,2){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,1.12)(1,1.55) \psline{->}(-0.7,1.12)(-1,1.55) } \rput(6,0){ \psecurve(-6,4)(-2,-4)(2,4)(6,-4) \psecurve(-6,-4)(-2,4)(2,-4)(6,4) \psline{->}(0.78,3.12)(1,3.55) \psline{->}(-0.78,3.12)(-1,3.55) } } \end{pspicture} \end{subfigure} \end{minipage}
\caption{A precurve in a neighbourhood of an arc (left) and some moves corresponding to a change of matrix decomposition in Definition~\ref{def:precurvesGeometric} (right).}\label{fig:traintrackmoves} \end{figure}
\begin{definition} We say a precurve is \textbf{simply-faced} if it is reduced and every dot lies on exactly one $f$-join. In particular, there are no oriented $f$-joins. For an illustration of a simply-faced precurve and its associated curved complex in $\CC(S,M,A)$, see Figure~\ref{fig:ExamplePrecurve}. \end{definition}
\begin{definition} We say a precurve is \textbf{compact} if it is simply-faced and does not have any one-sided $f$-joins. \end{definition}
\subsection{From precurves to curves}\label{subsec:simplify_precurves} One might call reduced precurves that lie in the image of the functor $\mathcal{F}$ ``simply-arced'', because in the arc neighbourhoods, they just look like parallel strands without any crossings or crossover arrows. Using Corollary~\ref{cor:SplittingCatsForEquivalenceComplexes}, one can easily see that every reduced precurve is chain isomorphic to a ``simply-arced'' one. The same is true for simply-faced precurves:
\begin{proposition}\label{prop:PreloopToCC} Every reduced precurve is chain isomorphic to a simply-faced precurve.
\end{proposition} \begin{figure}[t] \centering \begin{subfigure}{0.48\textwidth}\centering { \begin{pspicture}(-2.2,-1.2)(2.2,3.2) \psrotate(0,1){-45}{ \psline(-2,0)(-2,2) \psline(2,0)(2,2) \psline(1,3)(-1,3) \psdots(-2,1)(2,1)(0,3) \psecurve{->}(-12,2)(-2,1)(0,3)(-1,13) \psecurve[linestyle=dashed]{<-}(12,2)(2,1)(0,3)(1,13) \pscurve{->}(-2,1)(-1,0.8)(1,0.8)(2,1) \rput{45}(0.8,1.8){\psframebox*[framesep=1pt]{$(m-n)$}} \rput{45}(-0.8,1.8){\pscirclebox*[framesep=1pt]{$n$}} \rput{45}(0,0.7){\pscirclebox*[framesep=1pt]{$m$}} \uput{0.1}[180]{45}(-2,1){$e^s_i$} \uput{0.1}[90]{45}(0,3){$e^t_j$} \uput{0.1}[0]{45}(2,1){$e^r_k$} } \end{pspicture} } \caption{}\label{fig:BasicIsosI} \end{subfigure} \begin{subfigure}{0.48\textwidth}\centering { \begin{pspicture}(-2.2,-1.2)(2.2,3.2) \psrotate(0,1){45}{ \psline(-2,0)(-2,2) \psline(2,0)(2,2) \psline(1,3)(-1,3) \psdots(-2,1)(2,1)(0,3) \psecurve[linestyle=dashed]{->}(-12,2)(-2,1)(0,3)(-1,13) \psecurve{<-}(12,2)(2,1)(0,3)(1,13) \pscurve{->}(-2,1)(-1,0.8)(1,0.8)(2,1) \rput{-45}(0.8,1.8){\pscirclebox*[framesep=1pt]{$n$}} \rput{-45}(-0.8,1.8){\psframebox*[framesep=1pt]{$(m-n)$}} \rput{-45}(0,0.7){\pscirclebox*[framesep=1pt]{$m$}} \uput{0.1}[180]{-45}(-2,1){$e^r_k$} \uput{0.1}[90]{-45}(0,3){$e^s_i$} \uput{0.1}[0]{-45}(2,1){$e^t_j$} } \end{pspicture} } \caption{}\label{fig:BasicIsosII} \end{subfigure} \caption{The morphisms $h_1$ and $h_2$ (given by the dashed arrows of length $(m-n)$) for applications of the Clean-Up Lemma in the proof of Proposition~\ref{prop:PreloopToCC}.}\label{fig:BasicIsos} \end{figure} \begin{proof} We simplify the precurve for each face separately. For this, we carefully choose two basis elements $e^s_i$ and $e^t_j$ such that there is an (oriented or unoriented) $f$-join going from one to the other whose length $n$ is smallest among all arrows leaving $e^s_i$ and arrows arriving at $e^t_j$. In particular, this implies that $e^s_i$ and $e^t_j$ do not lie on the same side. For open faces, this follows from the assumption that the precurve is reduced; for closed faces~$f$, we use the $\partial^2$-relation to see that the length of such a shortest arrow is strictly less than $n_f$. Suppose there is an $f$-join of length $m$ leaving $e_i^s$ and going to a generator $e_k^r$. Consider an arrow $h_1$ of length $(m-n)\geq0$ going from $e_j^t$ to $e_k^r$, see Figure~\ref{fig:BasicIsosI}. We want to apply the Clean-Up Lemma (\ref{lem:AbstractCleanUp}) to our precurve with $h=h_1$, so let us verify the hypotheses of this lemma. The first hypothesis is $h_1^2=0$. Clearly, the composition of $h_1$ with itself can only be non-zero if $e^t_j=e_k^r$. Also, $h_1$ is a morphism of $\delta$-grading 0, so $e^t_j=e_k^r$ implies $m=n$. But this means that the two $f$-joins actually agree, which is a contradiction. Given $h_1^2=0$, the second and third hypotheses of the Clean-Up Lemma are equivalent to $h_1\partial h_1=0$. Suppose there were an arrow from $e^r_k$ to $e^t_j$ in the differential $\partial$. Then its composition with $h_1$ would be a power of $U_f$ and thus have an even $\delta$-grading. However, the composition $h_1\partial$ has $\delta$-grading $1$, so we also have $h_1\partial h_1=0$. Hence, we may apply the Clean-Up Lemma to the precurve with $h=h_1$. If $m>n$, this strictly reduces the number of $f$-joins leaving $e^s_i$. Also, it does not affect the precurve in other faces of $(S,M,A)$. This is not necessarily true if $m=n$, ie if $h_1$ contains an identity component. 
So instead, we remove the $f$-join from $e^s_i$ to $e_k^r$ in this case by pre- or postcomposing the matrix decorating the arc containing the side $t$ with the elementary matrix $E^{k}_{j}$ or $E^{j}_{k}$. We now repeat this procedure until only one $f$-join leaving $e^s_i$ is left. Similarly, we can achieve that there are no other $f$-joins going into $e^t_j$. Indeed, by assumption, the length of any such $f$-join has to be at least $n$, so by the same argument as above, we can apply the Clean-Up Lemma with $h=h_2$ from Figure~\ref{fig:BasicIsosII}. After this, the $\partial^2$-relation ensures that there are no other $f$-joins leaving $e^t_j$ or ending at $e^s_i$. We can now ignore the two generators $e_i^s$ and $e_j^t$ and the one or two arrows between them and apply induction on the number of generators on the face $f$ until there are no two-sided $f$-joins left. \end{proof}
\begin{remark} In some sense, to be made precise in the proof of Theorem~\ref{thm:EverythingIsLoopTypeUpToLocalSystems} below, our simply-faced precurves correspond to train-tracks from~\cite[Definition~17]{HRW} of a special form. Proposition~\ref{prop:PreloopToCC} above corresponds roughly to~\cite[Propositions~22 and~23]{HRW}. While their proofs rely on the fact that extendable type D structures are defined over the truncated algebra $\mathcal{A}$ (see Remark~\ref{rem:comparisonToHRW}), our proof relies in essential points on the $\delta$-grading of our curved complexes over the full algebra $\mathcal{A}$: indeed, in the main step of the proof, we repeatedly apply the Clean-Up Lemma. To verify that all hypotheses of the lemma are satisfied, we use the $\delta$-grading. Also note that the $\delta$-grading played a key role in the proof of Lemma~\ref{lem:CCreduced}. If we dropped the $\delta$-grading, we might also see arrows labelled by $1+U_f$, which do not have an inverse in $\mathcal{A}$!
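(Indeed, any inverse of $1+U_f$ would have to be the geometric series $\sum_{k\geq0}U_f^k$, since $(1+U_f)\cdot\sum_{k\geq0}U_f^k=1$ over $\mathbb{F}_2$; this series exists only in a completion of $\mathcal{A}$, not in $\mathcal{A}$ itself.)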
\end{remark} \begin{figure}[t] \centering \psset{unit=0.2} \begin{subfigure}{0.49\textwidth}\centering \begin{pspicture}(-8,-7)(23,7) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline(0,5)(0,-5) \pscustom{ \psecurve(4,2)(0,-2)(-4,2)(-8,-2) \psline(-4,2)(-5,2) } \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \pscustom{ \psecurve(4,-2)(0,2)(-4,-2)(-8,2) \psline(-4,-2)(-5,-2) } \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \psecurve(-4,4)(0,2)(4,4)(5,6) \psecurve(-4,-4)(0,-2)(4,-4)(5,-6) \psecurve[linestyle=dotted,dotsep=1pt](0,2)(4,4)(5,6)(6,20) \psecurve[linestyle=dotted,dotsep=1pt](0,2)(4,-4)(5,-6)(6,-20) \psdots(0,2)(0,-2) \rput(7,0){$\longleftrightarrow$} \rput(13,0){ \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-3,5)(-3,-5) } \psline(0,5)(0,-5) \psline(0,2)(-1,2) \psline[linestyle=dotted,dotsep=1pt](-1,2)(-2,2) \psline(0,-2)(-1,-2) \psline[linestyle=dotted,dotsep=1pt](-1,-2)(-2,-2) \psecurve(-8,4)(0,-2)(8,4)(9,6) \psecurve(-8,-4)(0,2)(8,-4)(9,-6) \psecurve[linestyle=dotted,dotsep=1pt](0,2)(8,4)(9,6)(10,20) \psecurve[linestyle=dotted,dotsep=1pt](0,2)(8,-4)(9,-6)(10,-20) \rput(1,3.3){$j$} \rput(1,-3.5){$i$} \psdots(0,2)(0,-2) } \rput(-2,3){$\textcolor{red}{P_{ij}}$} \rput(1,3.3){$j$} \rput(1,-3.5){$i$} \end{pspicture}\\ (M1) \end{subfigure} \begin{subfigure}{0.49\textwidth}\centering \begin{pspicture}(-10.5,-7)(20.5,7) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline(0,5)(0,-5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(10,-5)(10,5) \psline(17,5)(17,-5) } \psline[linecolor=red,linestyle=dotted](14,-5)(14,5) \psline(10,5)(10,-5) \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \psline[linestyle=dotted,dotsep=1pt](15,2)(16,2) \psline[linestyle=dotted,dotsep=1pt](15,-2)(16,-2) \psline(-5,2)(15,2) \psline(-5,-2)(15,-2) \psdots(0,2)(0,-2) \psdots(10,2)(10,-2) \rput(1,3.3){$j$} \rput(1,-3.5){$i$} \rput[r](9.7,3.3){$j'$} \rput[r](9.7,-3.3){$i'$} \rput(-2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \rput(12,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \end{pspicture}\\ (M2) \end{subfigure} \\ \begin{subfigure}{0.24\textwidth}\centering \begin{pspicture}(-9,-11)(9,11) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline(0,5)(0,-5) \psline(0,2)(-5,2) \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \psline(0,-2)(-5,-2) \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \rput(-2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \psecurve(-4.5,7)(0,2)(4.5,7)(0,12) \psecurve(-4.5,-7)(0,-2)(4.5,-7)(0,-12) \psdots(0,2)(0,-2) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(2,7)(7,7) \psline(7,10)(2,10) } \psline(2,7)(7,7) \psline(4.5,7)(4.5,8) \psline[linestyle=dotted,dotsep=1pt](4.5,8)(4.5,9) \psdot(4.5,7) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(2,-7)(7,-7) \psline(7,-10)(2,-10) } \psline(2,-7)(7,-7) \psline(4.5,-7)(4.5,-8) \psline[linestyle=dotted,dotsep=1pt](4.5,-8)(4.5,-9) \psdot(4.5,-7) \rput(1,3.3){$j$} 
\rput(1,-3.5){$i$} \end{pspicture} (M3a) \end{subfigure} \begin{subfigure}{0.24\textwidth}\centering \begin{pspicture}(-9,-11)(9,11) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline(0,5)(0,-5) \psline(0,2)(-5,2) \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \psline(0,-2)(-5,-2) \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \rput(-2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \psecurve(-4.5,7)(0,2)(4.5,7)(0,12) \psecurve(-4.5,-7)(0,-2)(4.5,-7)(0,-12) \psdots(0,2)(0,-2) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(2,7)(7,7) \psline(7,10)(2,10) } \psline(2,7)(7,7) \psline(4.5,7)(4.5,8) \psline[linestyle=dotted,dotsep=1pt](4.5,8)(4.5,9) \psdot(4.5,7) \psline[linecolor=darkgreen](2,-7)(7,-7) \psline[linecolor=darkgreen](4.5,-8)(4.5,-6) \rput[l](3.5,-8.5){\textcolor{darkgreen}{$m\!\in\!M$}} \rput(1,3.3){$j$} \rput(1,-3.5){$i$} \end{pspicture} (M3b) \end{subfigure} \begin{subfigure}{0.24\textwidth}\centering \begin{pspicture}(-9,-11)(9,11) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline(0,5)(0,-5) \psline(0,2)(-5,2) \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \psline(0,-2)(-5,-2) \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \rput(-2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \psecurve(-4.5,7)(0,2)(4.5,7)(0,12) \psecurve(-4.5,-7)(0,-2)(4.5,-7)(0,-12) \psdots(0,2)(0,-2) \psline[linecolor=darkgreen](2,7)(7,7) \psline[linecolor=darkgreen](4.5,8)(4.5,6) \rput[l](3.5,8.8){\textcolor{darkgreen}{$m\!\in\!M$}} \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(2,-7)(7,-7) \psline(7,-10)(2,-10) } \psline(2,-7)(7,-7) \psline(4.5,-7)(4.5,-8) \psline[linestyle=dotted,dotsep=1pt](4.5,-8)(4.5,-9) \psdot(4.5,-7) \rput(1,3.3){$j$} \rput(1,-3.5){$i$} \end{pspicture} (M3c) \end{subfigure} \begin{subfigure}{0.24\textwidth}\centering \begin{pspicture}(-9,-11)(9,11) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-7,5)(-7,-5) } \psline[linecolor=red,linestyle=dotted](-4,-5)(-4,5) \psline(0,5)(0,-5) \psline(0,2)(-5,2) \psline[linestyle=dotted,dotsep=1pt](-5,2)(-6,2) \psline(0,-2)(-5,-2) \psline[linestyle=dotted,dotsep=1pt](-5,-2)(-6,-2) \rput(-2,0){ \psecurve(-6,2)(-2,-2)(2,2)(6,-2) \psecurve(-6,-2)(-2,2)(2,-2)(6,2) \psline{->}(0.7,-1.12)(1,-1.55) \psline{->}(-0.7,-1.12)(-1,-1.55) } \psecurve(-5,0)(0,2)(5,0)(10,2) \psecurve(-5,0)(0,-2)(5,0)(10,-2) \psdots(0,2)(0,-2) \psline[linecolor=darkgreen](5,2)(5,-2) \psline[linecolor=darkgreen](4,0)(6,0) \rput{-90}(7,-2){\textcolor{darkgreen}{$m\!\in\!M$}} \rput(1,3.3){$j$} \rput(1,-3.5){$i$} \end{pspicture} (M3d) \end{subfigure} \caption{An illustration of the moves (M1)--(M3) from Lemma~\ref{lem:CalculusForPreloops}. For (M3), there are four different cases to consider, depending on whether the $f$-joins are two- or one-sided.}\label{fig:GraphicCalculus} \end{figure} For the proof of the previous proposition, it was fairly irrelevant how the matrices $P_a$ change under the various base changes. 
For the main classification result, however, we need more control over these equivalences:
\begin{lemma}\label{lem:CalculusForPreloops} Let \((C, \{P_a\}_{a\in A},\partial)\) be a simply-faced precurve on a marked surface \((S,M)\) with arc system~\(A\). For some \(a\in A\), let \(s\) be a side of \(a\) and \(f\) the face adjacent to \(s\). Let \(\bullet(s,i)\) and \(\bullet(s,j)\) be two distinct dots on \(s\). Then \((C, \{P_a\}_{a\in A},\partial)\) is chain isomorphic to the precurve obtained by applying any one of the following three moves, which are illustrated in Figure~\ref{fig:GraphicCalculus}.
\begin{description}
\item[(M1)] Multiply \(P_a\) on the right/left by \(P_{ij}\), depending on whether \(s=s_1(a)\) or \(s=s_2(a)\), and switch the endpoints of the two \(f\)-joins ending in \(\bullet(s,i)\) and \(\bullet(s,j)\).
\item[(M2)] Assume that \(\bullet(s,i)\) and \(\bullet(s,j)\) have the same \(\delta\)-grading. Suppose also that \(\bullet(s,i)\) and \(\bullet(s,j)\) lie on two two-sided \(f\)-joins which connect the same two sides of \(f\), and let \(\bullet(s',i')\) and \(\bullet(s',j')\) be their other endpoints, respectively. Then multiply \(P_a\) on the right/left by \(E^{j}_{i}\), depending on whether \(s=s_1(a)\) or \(s=s_2(a)\), and multiply \(P_{a'}\) on the right/left by \(E^{j'}_{i'}\), depending on whether \(s'=s_1(a')\) or \(s'=s_2(a')\), where \(a'\) denotes the arc of which \(s'\) is a side.
\item[(M3)] Assume that \(\bullet(s,i)\) and \(\bullet(s,j)\) have the same \(\delta\)-grading. If the \(f\)-joins starting at \(\bullet(s,i)\) and \(\bullet(s,j)\) are both two-sided, let us assume that they end on different sides. Unless the \(f\)-joins are both one-sided, assume also that if we follow the oriented boundary of \(f\), starting at \(s\), we meet the other end of the second \(f\)-join before the other end of the first. Then multiply \(P_a\) on the right/left by \(E^{j}_{i}\), depending on whether \(s=s_1(a)\) or \(s=s_2(a)\).
\end{description} \end{lemma}
\begin{wrapfigure}{r}{0.25\textwidth} \centering \psset{unit=0.2}
\begin{pspicture}(-5,-10.5)(13,10.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(0,-5)(0,5) \psline(-3,5)(-3,-5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(2,7)(7,7) \psline(7,10)(2,10) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(2,-7)(7,-7) \psline(7,-10)(2,-10) } \psline[linestyle=dotted,dotsep=1pt](-2,2)(-1,2) \psline(-1,2)(0,2) \psline[linestyle=dotted,dotsep=1pt](-2,-2)(-1,-2) \psline(-1,-2)(0,-2) \psline[linestyle=dotted,dotsep=1pt](4.5,8)(4.5,9) \psline(4.5,7)(4.5,8) \psline[linestyle=dotted,dotsep=1pt](4.5,-8)(4.5,-9) \psline(4.5,-7)(4.5,-8) \psline(0,5)(0,-5) \psecurve(-4.5,7)(0,2)(4.5,7)(3,12) \psecurve(-4.5,-7)(0,-2)(4.5,-7)(3,-12) \psdots(0,2)(0,-2) \psline(2,7)(7,7) \psdot(4.5,7) \psline(2,-7)(7,-7) \psdot(4.5,-7) \psecurve{->}(0,10)(4.5,7)(0,-2)(-4.5,-2) \psecurve{<-}(0,-10)(4.5,-7)(0,2)(-4.5,2) \psecurve[linestyle=dashed]{<-}(0,-8)(4.5,-7)(4.5,7)(0,8) \rput(-1,0){$s$} \rput(5.5,8){$t$} \rput(5.5,-8){$t'$} \rput(10,0){$h=p^{t}_{t'}$} \rput(5,2){$p^t_s$} \rput(5,-1.2){$p^s_{t'}$} \end{pspicture}
\caption{The morphism~$h$ for the proof of invariance of (M3).}\label{fig:CalculusForPreloopsProofHomotopy} \vspace*{-20pt} \end{wrapfigure}
\textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax}
\begin{remark} Using (T2), we can reformulate (M2) as follows: if there is only one crossover arrow in the picture for (M2) in Figure~\ref{fig:GraphicCalculus}, then we may remove that crossover arrow and replace it by the other crossover arrow, thereby pushing the crossover arrow along parallel joins. \end{remark}
\begin{proof}[Proof of Lemma~\ref{lem:CalculusForPreloops}] All three parts of the lemma can be shown in the same way, namely by doing a base change, which in our graphical notation shifts the outermost segment of $N(a)$ adjacent to $s$ into the face $f$.
\\\indent For (M1), the result after such a base change is obviously chain isomorphic to the original precurve. For (M2), we do two such base changes, one for $s$ and one for $s'$. For (M3), we do one such base change; the resulting precurve then differs from the original one in at most two $f$-joins, see Figure~\ref{fig:CalculusForPreloopsProofHomotopy}. However, we can remove these new $f$-joins using the Clean-Up Lemma and the morphism $h$ from the same figure. \end{proof}
\begin{definition}\label{def:curves} Let $(S,M)$ be a marked surface with an arc system $A$. A \textbf{curve} on $(S,M,A)$ is a pair $(\gamma, X)$, where either \begin{enumerate} \item $\gamma$ is an immersion of an oriented circle into $S$, representing a non-trivial primitive element of $\pi_1(S)$ and $X\in \GL_n(\mathbb{F}_2)$ for some $n$; or \label{def:curves:loops} \item $\gamma$ is an immersion of an interval $(I,\partial I)$ into $(S,M)$, defining a non-trivial element of $\pi_1(S,M)$ and $X=\id\in\GL_n(\mathbb{F}_2)$ for some positive integer $n$ \label{def:curves:paths} \end{enumerate} satisfying the following properties: \begin{itemize} \item $\gamma$ restricted to each component of the preimage of each face is an embedding, and \item $\gamma$ restricted to each component of the preimage of the neighbourhood $N(a)$ of each arc $a$ is an embedding, intersecting each leaf of $\mathcal{F}_a$ exactly once. \end{itemize} In case \ref{def:curves:loops}, we call $(\gamma,X)$ a \textbf{compact curve} or a \textbf{loop}; in case \ref{def:curves:paths}, we call it a \textbf{non-compact curve} or a \textbf{path}.
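For instance, for the triple $(S,M,A)$ from Example~\ref{exa:pqModSpecialCaseofCC}, $M=\emptyset$, so every curve is a loop; this matches the fact that finitely generated peculiar modules correspond to compact curves, see Remark~\ref{rem:comparisonToKontsevich}.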
We say that a curve $(\gamma,X)$ is \textbf{supported} on the immersed curve~$\gamma$ and call $\gamma$ the \textbf{underlying curve} and $X$ its \textbf{local system}. Note that the local system of a path only records some positive integer $n$. We consider curves up to homotopy of the underlying immersed curves through curves. Furthermore, we consider the local systems of loops up to matrix similarity. A \textbf{$\delta$-grading} on a curve $(\gamma,X)$ is an $\mathbb{R}$-grading on the set of intersection points of the underlying curve with arcs in $A$ satisfying the following property: let $x$ and $y$ be two intersection points of $\delta$-grading $\delta(x)$ and $\delta(y)$, respectively. Suppose $x$ and $y$ are joined by a component of $\gamma\smallsetminus A$. Such a component is mapped to a path in $Q(S,M,A)$ corresponding to an algebra element $p^s_t$, going from $x$ to $y$, say. Then we ask that $\delta(y)-\delta(x)+\delta(p^s_t)=1$. A \textbf{collection of ($\delta$-graded) curves} is a finite set of ($\delta$-graded) curves such that all underlying curves are pairwise non-homotopic as unoriented ($\delta$-graded) curves. We denote the set of all collections of $\delta$-graded curves up to equivalence by $\loops(S,M,A)$. \end{definition}
\begin{remark} Since any immersed curve $\gamma$ in a pair $(\gamma,X)$ of the form \ref{def:curves:loops} or \ref{def:curves:paths} can be homotoped to one that satisfies the two conditions of the previous definition, the arc system $A$ is only needed to define the $\delta$-grading on elements of $\loops(S,M,A)$. Apart from that, $\loops(S,M,A)$ is independent of $A$. \end{remark}
\begin{definition} Given an arc system $A$ on $(S,M)$, we define a map \[\Pi_i\co \loops(S,M,A)\rightarrow \ob(\CC_i(S,M,A))/\text{(chain homotopy)}\] as follows: given a single curve $(\gamma,X)$, choose a small immersed tubular neighbourhood of $\gamma$ and replace $\gamma$ by $\dim X$ parallel copies thereof in this neighbourhood. Then, for each face $f\in F(S,M,A)$, the $f$-joins of~$\Pi_i(\gamma,X)$ are given by the intersection of these curves with~$f$. Next, pick an intersection point $x$ of an arc $a$ with $\gamma$. Let the matrix $P_a$ be the block diagonal matrix with blocks of dimension $\dim X$ such that all blocks are equal to the identity matrix except the one corresponding to the intersection point $x$. We define this block to be equal to $X$ if $\gamma$ goes through $x$ from the right of $a$ to its left (ie from $s_1(a)$ to $s_2(a)$), and set it equal to $X^t$ otherwise. On all other arcs, we choose the identity matrix. Finally, we extend $\Pi_i$ to collections of curves by taking unions/direct sums. Note that the definition of $\Pi_i$ is indeed independent of the choice of $a$ and $x$ up to homotopy in $\CC_i(S,M,A)$, which can be seen by repeatedly applying (M2). Similarly, we see that conjugation of the local system $X$ of a loop $(\gamma,X)$ does not change the homotopy type of the image under $\Pi_i$. We define the map \[\Pi\co \loops(S,M,A)\rightarrow\ob(\CC(S,M,A))/\text{(chain homotopy)}\] as the composition of $\Pi_i$ and the functor $\mathcal{G}$ from Definition~\ref{def:SplittingCatsForEquivalenceFunctors}. \end{definition}
\begin{theorem}\label{thm:EverythingIsLoopTypeUpToLocalSystems} Any reduced precurve is chain isomorphic to one in the image of \(\Pi_i\). Thus any reduced curved complex is chain isomorphic to one in the image of \(\Pi\).
\end{theorem}
\begin{proof} Simply-faced precurves \((C, \{P_a\}_{a\in A},\partial)\) in our setting for which there are no one-sided $f$-joins correspond to certain types of train-tracks in the language of~\cite{HRW}, namely those in which all arrows come in pairs and only sit in the neighbourhoods of the arcs. The train-track moves from \cite[Proposition~25]{HRW} correspond exactly to our moves (T1) to (T3) and (M1) to (M3). So we can apply the algorithm explained in~\cite[section~3.7]{HRW} for simplifying train-tracks, since the geometric objects agree, even though they represent algebraic objects defined over slightly different algebras. The output of the algorithm is a train-track whose crossover arrows only connect parallel immersed curves, ie loops. The same algorithm also works without any changes for simply-faced precurves which contain one-sided $f$-joins, since the additional moves for such $f$-joins from Lemma~\ref{lem:CalculusForPreloops}, namely (M3b), (M3c) and (M3d) from Figure~\ref{fig:GraphicCalculus}, can be regarded as generalisations of (M3a). Once all arrows only connect parallel immersed curves, we can remove all arrows on paths by applying moves (M2) followed by (M3d). \end{proof}
\begin{remark} The key ingredient of the algorithm from~\cite{HRW} is a certain complexity that Hanselman, Rasmussen and Watson assign to each crossover arrow. This complexity, which they call weight, takes values in $(\mathbb{Z}\smallsetminus\{0\})^2\cup \{\infty\}$ and is defined as follows: the weight of a crossover arrow between two curves that always stay parallel is $\infty$. If the two segments diverge, the complexity is a pair of non-zero integers. Their absolute values record the maximal number of faces (plus 1) through which the curve segments stay parallel to each other in each direction. Their signs are determined by whether the crossover arrow (if it were pushed into the face where the curves diverge) could be eliminated using the train-track move corresponding to our move (M3) or not. The depth of a crossover arrow is the minimum of the absolute values of these two integers (or $\infty$ if the weight is $\infty$). The depth of a curve configuration (ie precurve in our language) is defined as the minimum of the depths of all crossover arrows. Hanselman, Rasmussen and Watson then show that one can apply a sequence of train-track moves which strictly increases the depth of the curve configuration \cite[Proposition~29]{HRW}. Since the depth of any curve configuration is bounded (because there are only finitely many $f$-joins), this suffices to show that eventually one obtains a curve configuration of depth $\infty$, which means that all crossover arrows go between parallel curves. \end{remark}
\begin{observation}\label{obs:CompactPreCurves} Compactness of precurves is preserved under the moves (M1) to (M3), so if the original precurve in Theorem~\ref{thm:EverythingIsLoopTypeUpToLocalSystems} is compact, we can choose a compact precurve in the image of $\Pi_i$. Note that $\Pi_i$ maps compact curves to compact precurves and non-compact curves to non-compact precurves. \end{observation}
\subsection{Classification of morphisms between simply-faced precurves}\label{subsec:ClassificationMor} Throughout this section, let $C=(C, \{P_a\}_{a\in A},\partial)$ and $C'=(C', \{P'_a\}_{a\in A},\partial')$ be a pair of simply-faced precurves on a fixed marked surface \((S,M)\) with arc system \(A\).
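(Note that by Lemma~\ref{lem:CCreduced} and Proposition~\ref{prop:PreloopToCC}, every precurve is chain homotopic to a simply-faced one, so this is no essential restriction.)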
\begin{wrapfigure}{r}{0.3333\textwidth} \centering \medskip \psset{unit=0.18} \begin{pspicture}(-13.5,-10.1)(13.5,10.1) \pscustom*[linecolor=lightred]{ \psline(-9,-10)(-9,10) \psline(9,10)(9,-10) } \rput[l](9.5,7.5){$s_1(a)$} \rput[r](-9.5,7.5){$s_2(a)$} \psline[linecolor=blue](-9,5.5)(-8,5.5) \psline[linecolor=blue](-9,3.75)(-8,3.75) \psline[linecolor=blue](-9,2)(-8,2) \psline[linecolor=red](-9,-5.5)(-8,-5.5) \psline[linecolor=red](-9,-3.75)(-8,-3.75) \psline[linecolor=red](-9,-2)(-8,-2) \psecurve[linecolor=blue](22,2)(9,-5.5)(-4,2)(-17,-5.5) \psecurve[linecolor=blue](22,3.75)(9,-3.75)(-4,3.75)(-17,-3.75) \psecurve[linecolor=blue](22,5.5)(9,-2)(-4,5.5)(-17,-2) \psecurve[linecolor=red](22,-2)(9,5.5)(-4,-2)(-17,5.5) \psecurve[linecolor=red](22,-3.75)(9,3.75)(-4,-3.75)(-17,3.75) \psecurve[linecolor=red](22,-5.5)(9,2)(-4,-5.5)(-17,2) \psframe[linecolor=blue](-4,1)(-8,6.5) \rput[c](-6,3.75){\textcolor{blue}{$P'_a$}} \psframe[linecolor=red](-4,-1)(-8,-6.5) \rput[c](-6,-3.75){\textcolor{red}{$P_a$}} \rput[b](0,5){\textcolor{blue}{$C'$}} \rput[t](0,-5){\textcolor{red}{$C$}} \psline{->}(-9,-10)(-9,10) \psline{->}(9,-10)(9,10) \psline[linecolor=darkgreen](-10,10)(10,10) \psline[linecolor=darkgreen](-10,-10)(10,-10) \psdots[linecolor=red](9,5.5)(9,3.75)(9,2)(-9,-5.5)(-9,-3.75)(-9,-2) \psdots[linecolor=blue](9,-5.5)(9,-3.75)(9,-2)(-9,5.5)(-9,3.75)(-9,2) \end{pspicture} \caption{A pair of precurves in pairing position in the neighbourhood of an arc $a$.}\label{fig:StandardPairingPosition} \end{wrapfigure} \textcolor{white}{~}\vspace{-\mystretch\baselineskip\relax} \begin{definition}\label{def:StandardPairingPosition} We say $C$ and $C'$ are in \textbf{pairing position} if the following holds: \begin{enumerate}[i] \item For each side $s$, all dots of $C$ on $s$ lie to the right of all dots of $C'$ on $s$, viewed from the face adjacent to $s$. \label{enu:pairingposSides} \item The crossings and crossover arrows lie in a small neighbourhood of the second side $s_2(a)$ for each arc $a\in A$; see Figure~\ref{fig:StandardPairingPosition} for an illustration. \label{enu:pairingposLocalSystems} \item For each open face $f$, the one-sided $f$-joins of $C$ end on the left of the basepoint and the ones of $C'$ end on the right of the basepoint, viewed from the face $f$. \label{enu:pairingposBasepoints} \item Any two $f$-joins intersect minimally with respect to the first three conditions. Similarly, the precurves intersect minimally in the neighbourhood of each arc. \label{enu:pairingposMinimality} \end{enumerate} \end{definition} \begin{definition} With a pair of simply-faced precurves $C$ and $C'$ in pairing position, we associate a chain complex $\LagrangianFC(C,C')$ as follows: as a vector space over $\mathbb{F}_2$, $\LagrangianFC(C,C')$ decomposes into two summands $\LagrangianFC^\times(C,C')$ and $\LagrangianFC^+(C,C')$. The former is generated by the intersection points between the curves in the neighbourhoods of the arcs, and the latter by those on the faces. We call the former \textbf{upper} and the latter \textbf{lower} intersection points/generators of $\LagrangianFC(C,C')$. The differential on $\LagrangianFC(C,C')$ is a map $$d\co\LagrangianFC^\times(C,C')\rightarrow\LagrangianFC^+(C,C')$$ defined by counting bigons connecting upper intersection points to lower ones.
More precisely, a bigon from an upper intersection point $x$ to a lower one $y$ is an orientation-preserving embedding $$\iota\co D^2\hookrightarrow S$$ satisfying the following properties: \begin{itemize} \item the restriction of $\iota$ to the non-negative real part of $\partial D^2$ is a path from $x=\iota(-i)$ to $y=\iota(+i)$ on the first precurve $C$ such that the orientation is opposite to the orientation of any crossover arrows in $C$; \item the restriction of $\iota$ to the non-positive real part of $\partial D^2$ is a path from $y$ to $x$ on the second precurve $C'$ such that the orientation is opposite to the orientation of any crossover arrows in $C'$; \item $x$ and $y$ are convex corners of the image of $\iota$. \end{itemize} See Figure~\ref{fig:LagrangianHFConventions} for an illustration of the above conventions. If $\mathcal{M}(x,y)$ denotes the set of such bigons up to reparametrization, the differential \(d\) is defined by $$d(x)=\sum_y \#\mathcal{M}(x,y)\cdot y.$$ By construction, it only connects upper generators to lower ones. We denote the homology of $\LagrangianFC(C,C')$ by $\LagrangianFH(C,C')$ and call it the \textbf{Lagrangian intersection Floer homology} of $C$ and~$C'$. \end{definition} \begin{figure}[t]\centering { \psset{unit=0.5} \begin{pspicture}(-9,-3.5)(10,3.5) \rput(-5,0){ \psarc*[linecolor=lightgray,linewidth=0pt](0,0){2.5}{0}{360} \psarc[linecolor=red](0,0){2.5}{-90}{90} \psarc[linecolor=blue](0,0){2.5}{90}{-90} \psarc[linecolor=darkgray]{->}(0,0){2.5}{-5}{0} \psarc[linecolor=darkgray]{->}(0,0){2.5}{175}{180} \psdot(2.5;90) \psdot(2.5;-90) \rput(3;-90){$-i$} \rput(3;90){$i$} \rput(3;0){$\textcolor{red}{C}$} \rput(3;180){$\textcolor{blue}{C'}$} } \rput(0,0){$\xrightarrow{~~\iota~~}$} \rput(6,0){ \pscustom*[linecolor=lightgray]{ \pscurve(-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[liftpen=2](3,0)(1,-1.5)(-1,-1.5)(-3,0) } \pscurve[linecolor=red](-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[linecolor=blue](-3,0)(-1,-1.5)(1,-1.5)(3,0) \psline[linecolor=darkgray]{->}(0.1,-1.66)(-0.1,-1.66) \psline[linecolor=darkgray]{<-}(0.1,1.66)(-0.1,1.66) \psline[linecolor=red](3,0)(4,-1) \psline[linecolor=red](-3,0)(-4,-1) \psline[linecolor=blue](3,0)(4,1) \psline[linecolor=blue](-3,0)(-4,1) \psdots(-3,0)(3,0) \rput(3.6,0){$y$} \rput(-3.6,0){$x$} \rput(2.5,1.3){$\textcolor{red}{C}$} \rput(-2.5,-1.3){$\textcolor{blue}{C'}$} \psline[linecolor=gray]{->}(-2.5,0)(2.5,0) } \end{pspicture} } \caption{Our orientation conventions for bigons: $\iota$ maps the unit disc in $\mathbb{C}$ to $S$. Thus, the normal vector, determined by the right-hand rule, points out of the plane on the left, but into the plane on the right. The arrows on the boundary of the disc and bigon indicate the induced boundary orientations. The generator $x$ is an upper generator (so it lies in the neighbourhood of an arc) and $y$ is a lower generator (so it lies in some face). }\label{fig:LagrangianHFConventions} \end{figure} \begin{remark} The definition above is an adaptation of the Lagrangian intersection Floer homology from Abouzaid's paper~\cite{AbouzaidSurfaces} to our more combinatorial setting, using the language of~\cite{HRW}.
However, note that Hanselman, Rasmussen and Watson use slightly different orientation conventions in~\cite{HRW}: because of the way they express their glueing formula, they find it more convenient to interpret the two collections of (pre-)curves $C$ and $C'$ differently, namely one in terms of a type D structure and the other in terms of a type A structure; we treat both curves the same, which follows more standard conventions in Lagrangian intersection Floer theory. For example, in their setting~\cite[Definition~34]{HRW}, $\LagrangianFH(C,C)$ vanishes for some objects $C$; so in particular, there would be no identity morphism for such $C$. This does not happen with our conventions. \end{remark} \begin{definition}\label{def:resolution} With an intersection point $x$ between two simply-faced precurves $C$ and $C'$ in pairing position, we may associate a morphism of precurves $\varphi(x)$ from $C$ to $C'$ as follows. Consider the set of all paths $\gamma\co [0,1]\rightarrow C\cup C'$ (considered up to homotopy) satisfying the following conditions: \begin{enumerate} \item the restriction of $\gamma$ to $[0,\frac{1}{2}]$ is a path on $C$ from a dot $\bullet(s,i)$ to $x$ which does not meet any other dot on $C$ and which follows the orientation of any crossover arrows, \item the restriction of $\gamma$ to $[\frac{1}{2},1]$ is a path on $C'$ from $x$ to a dot $\bullet(s',i')$ which does not meet any other dot on $C'$ and which follows the orientation of any crossover arrows, \item $\gamma$ turns left at the intersection point $x$: $\raisebox{-3pt}{\psset{unit=0.2}%
\begin{pspicture}(-1.1,-1)(1.1,1) \psline[linecolor=blue](1,-1)(-1,1) \psline[linecolor=red](-1,-1)(1,1) \psdot(0,0) \rput{-45}(0,0){ \psline[linearc=0.25,arrowsize= 1pt 2]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3) } \rput{135}(0,0){ \psline[linearc=0.25,arrowsize= 1pt 2]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3) } \end{pspicture}}$. \end{enumerate} By labelling each of these paths by $p^{s}_{s'}$ and counting them modulo 2, we obtain a morphism $\varphi(x)$ from $C$ to $C'$, which we call the \textbf{resolution of $x$}. \end{definition} \begin{lemma}\label{lem:ResolutionWellDefined} The resolution \(\varphi(x)\) is a well-defined morphism of precurves from \(C\) to \(C'\). \end{lemma} \begin{proof} If the intersection point is lower, then $\varphi(x)=\varphi^+(x)$, so there is nothing to check. If the intersection point is an upper point on an arc $a$, then $\varphi(x)=\varphi^\times(x)$. More explicitly, $\varphi_1:=(\iota_{s_1(a)}.\varphi^\times(x).\iota_{s_1(a)})$ contains exactly one non-zero component, say an arrow from a dot $\bullet(s_1(a),i)$ on the first precurve to a dot $\bullet(s_1(a),i')$ on the second precurve; see Figure~\ref{fig:BigonCounts} for an illustration. We need to show that $\varphi_2:=(\iota_{s_2(a)}.\varphi^\times(x).\iota_{s_2(a)})$ is equal to $P'_a\circ\varphi_1\circ P_a^{-1}$. For this, recall that the component of $P_a^{-1}$ in the $j^\text{th}$ column and $i^\text{th}$ row is given by the number of paths (satisfying the conditions in Remark~\ref{rem:PathsForTrainTracks}) from $\bullet(s_2(a),j)$ to $\bullet(s_1(a),i)$ on the first precurve; similarly, the component of $P'_a$ in the $(i')^\text{th}$ column and $(j')^\text{th}$ row is given by the number of paths from $\bullet(s_1(a),i')$ to $\bullet(s_2(a),j')$ on the second precurve.
Thus, $P'_a\circ\varphi_1\circ P_a^{-1}$ has a non-zero component from $\bullet(s_2(a),j)$ to $\bullet(s_2(a),j')$ iff the number of paths from $\bullet(s_2(a),j)$ to $\bullet(s_2(a),j')$ via $x$ is odd. This agrees with the definition of $\varphi_2$. \end{proof} \begin{lemma} The resolution \(\varphi(x)\) of any generator \(x\) of \(\LagrangianFC(C,C')\) is homogeneous with respect to the \(\delta\)-grading. \end{lemma} \begin{proof} If $x$ is upper, this is true because dots connected by curve segments on an arc neighbourhood have the same $\delta$-grading by the definition of $\delta$-gradings on precurves. For lower intersection points, $\varphi(x)$ has at most two components. If it has exactly two components, both $f$-joins are two-sided; for an illustration, see Figures~\ref{fig:GensOfMorNoM2}, \ref{fig:GensOfMorNoM4} and~\ref{fig:GensOfMorNoM5}. Suppose the two components correspond to paths from $\bullet(s,i)$ to $\bullet(s',i')$ and from $\bullet(t,j)$ to $\bullet(t',j')$. Assume without loss of generality that if $f$ is open, the basepoint of $f$ lies between the sides $s'$ and $t$. Then the $\delta$-gradings of the two components are $$\delta(\bullet(s',i'))-\delta(\bullet(s,i))+\delta(p^{s}_{s'}) \quad\text{ and }\quad \delta(\bullet(t',j'))-\delta(\bullet(t,j))+\delta(p^{t}_{t'}),$$ respectively. Since $$ \delta(\bullet(s,i))-\delta(\bullet(t,j))+\delta(p^{t}_{s}) =1= \delta(\bullet(s',i'))-\delta(\bullet(t',j'))+\delta(p^{t'}_{s'}), $$ $p^{t}_{s}=p^{t'}_{s}p^{t}_{t'}$ and $p^{t'}_{s'}=p^{s}_{s'}p^{t'}_{s}$, these two gradings agree. \end{proof} \begin{definition}\label{def:gradingVIAresolution} We endow $\LagrangianFC(C,C')$ with a \textbf{$\delta$-grading} by defining the $\delta$-grading of any intersection point to be the $\delta$-grading of its resolution. \end{definition} \begin{lemma} The differential \(d\) on \(\LagrangianFC(C,C')\) increases \(\delta\)-grading by 1. \end{lemma} \begin{proof} This follows from an argument similar to the one in the proof of the previous lemma. \end{proof} \begin{definition} Given two reduced precurves $C$ and $C'$, let us write $(\Mor^+(C,C'),D^+)$ for the subcomplex of $(\Mor(C,C'),D)$ generated by those morphisms which do not contain an identity component. Let $\Mor^\times(C,C')$ be the vector space of all morphisms that consist of identity components only. By endowing it with the 0 differential, we obtain the following short exact sequence of chain complexes $$ \begin{tikzcd} 0 \arrow{r}{} & (\Mor^+(C,C'),D^+) \arrow{r}{} & (\Mor(C,C'),D) \arrow{r}{} & (\Mor^\times(C,C'),0) \arrow{r}{} & 0 \end{tikzcd} $$ as in Definitions~\ref{def:AlgebraFromGlueingFaces} and~\ref{def:SplittingCatsForEquivalenceFunctors}. Via the induced long exact sequence, it gives rise to a graded chain homotopy equivalence $$ H_*(\Mor(C,C'),D)\cong \left( \begin{tikzcd} H_*(\Mor^\times(C,C'),0) \arrow{r}{\beta} & H_*(\Mor^+(C,C'),D^+) \end{tikzcd} \right), $$ where $\beta$ is the boundary map from the long exact sequence. \end{definition} The central result of this subsection is the following: \begin{theorem}\label{thm:PairingMorLagrangianFH} Given a pair of simply-faced precurves \(C\) and \(C'\) in pairing position on a marked surface \((S,M)\) with arc system \(A\), the chain complex \(\LagrangianFC(C,C')\) and the mapping cone of the map $$ \beta\co H_*(\Mor^\times(C,C'),0) \longrightarrow H_*(\Mor^+(C,C'),D^+) $$ from the previous definition are graded chain isomorphic via the correspondence between generators of \(\LagrangianFC(C,C')\) and their resolutions.
In particular, $$\LagrangianFH(C,C')\cong H_*(\Mor(C,C'),D).$$ \end{theorem} \begin{figure}[p] \centering \psset{unit=0.2} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(7.5,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(5,-3)(5,3) \psline(7.5,3)(7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline(5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=blue](10,-5)(5,0)(0,-5)(5,-10) \psecurve[linecolor=red](-10,5)(-5,0)(0,5)(-5,10) \psdots[linecolor=red](-5,0) \uput{0.75}[180](-5,0){$s$} \psdots[linecolor=blue](5,0) \uput{0.75}[0](5,0){$s'$} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$t$} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$t'$} \uput{2.5}[-45](0,0){$\textcolor{blue}{C'}$} \uput{2.5}[135](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorNoM1} \end{subfigure} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(7.5,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(5,-3)(5,3) \psline(7.5,3)(7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline(5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psline[linecolor=blue](0,5)(0,-5) \psline[linecolor=red](5,0)(-5,0) \psecurve{->}(10,1)(5,0)(0,-5)(-1,-10) \psecurve{->}(-10,-1)(-5,0)(0,5)(1,10) \psdots[linecolor=red](-5,0) \uput{0.75}[180](-5,0){$s$} \psdots[linecolor=red](5,0) \uput{0.75}[0](5,0){$t$} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$t'$} \uput{0.5}[-90](-3.5,0){$\textcolor{red}{C}$} \uput{0.5}[0](0,3.5){$\textcolor{blue}{C'}$} \psdot(0,0) \end{pspicture} \subcaption{}\label{fig:GensOfMorNoM2} \end{subfigure} \begin{subfigure}{0.16\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(4,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=blue](-10,-5)(-5,-2)(0,-5)(-5,-8) \psecurve[linecolor=red](-10,5)(-5,2)(0,5)(-5,8) \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$t'$} \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$s$} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$t$} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$s'$} \uput{3.4}[-120](0,0){$\textcolor{blue}{C'}$} \uput{3.4}[120](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorNoM3} \end{subfigure} \begin{subfigure}{0.16\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(4,7.5) \pscustom*[linecolor=lightgray]{ \psecurve(-7.5,0)(-5,2)(-2.5,0)(0,-5) \psecurve(0,5)(-2.5,0)(-5,-2)(-7.5,0) \psline(-5,-2)(-5,2) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } 
\pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=red](-7.5,0)(-5,2)(-2.5,0)(0,-5)(-2.5,-10) \psecurve[linecolor=blue](-7.5,0)(-5,-2)(-2.5,0)(0,5)(-2.5,10) \psecurve{<-}(-10,-5)(-5,-2)(0,-5)(-5,-8) \psecurve{->}(-10,5)(-5,2)(0,5)(3,10) \psdot(-2.5,0) \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$t'$} \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$s$} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \psdots[linecolor=red](0,-5) \uput{0.75}[-90](0,-5){$t$} \uput{1.4}[-80](0,0){$\textcolor{red}{C}$} \uput{1.4}[80](0,0){$\textcolor{blue}{C'}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorNoM4} \end{subfigure} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(7.5,7.5) \pscustom*[linecolor=lightgray]{ \psecurve(-10,2)(-5,-2)(5,2)(10,-2) \psline(5,2)(5,-2) \psecurve(10,2)(5,-2)(-5,2)(-10,-2) \psline(-5,2)(-5,-2) } \pscustom*[linecolor=white,linewidth=0pt]{ \psecurve(-10,-1.5)(-5,-2)(0,-1.5)(5,-2)(10,-1.5) \psline(5,-2)(5,-3)(-5,-3)(-5,-2) } \pscustom*[linecolor=white,linewidth=0pt]{ \psecurve(-10,1.5)(-5,2)(0,1.5)(5,2)(10,1.5) \psline(5,2)(5,3)(-5,3)(-5,2) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(5,-3)(5,3) \psline(7.5,3)(7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \psline(-5,-3)(-5,3) \psline(5,-3)(5,3) \psecurve[linecolor=blue](-15,2)(-5,-2)(5,2)(15,-2) \psecurve[linecolor=red](15,2)(5,-2)(-5,2)(-15,-2) \psecurve{->}(-10,2.5)(-5,2)(0,2.5)(5,2)(10,2.5) \psecurve{<-}(-10,-2.5)(-5,-2)(0,-2.5)(5,-2)(10,-2.5) \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$s$} \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$t'$} \psdot(0,0) \psdots[linecolor=blue](5,2) \uput{0.75}[0](5,2){$s'$} \psdots[linecolor=red](5,-2) \uput{0.75}[0](5,-2){$t$} \uput{1}[60](0,0){$\textcolor{blue}{C'}$} \uput{1}[-70](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorNoM5} \end{subfigure} \medskip\\ \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=red](10,-5)(5,1)(0,-5)(5,-11) \psecurve[linecolor=blue](-10,5)(-5,0)(0,5)(-5,10) \psdots[linecolor=blue](-5,0) \uput{0.75}[180](-5,0){$t'$} \psline[linecolor=darkgreen](4.5,0)(5.5,0) \uput{0.5}[0](5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \psdots[linecolor=red](0,-5) \uput{0.75}[-90](0,-5){$s$} \uput{2.5}[-45](0,0){$\textcolor{red}{C}$} \uput{2.5}[135](0,0){$\textcolor{blue}{C'}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorM1} \end{subfigure} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psline[linecolor=blue](0,5)(0,-5) 
\psecurve[linecolor=red](-15,1)(-5,0)(5,1)(15,0) \psecurve{->}(-10,-1)(-5,0)(0,5)(1,10) \psdot(0,0.5) \psdots[linecolor=red](-5,0) \uput{0.75}[180](-5,0){$s$} \psline[linecolor=darkgreen](4.5,0)(5.5,0) \uput{0.5}[0](5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$t'$} \uput{0.5}[-90](-3.5,0){$\textcolor{red}{C}$} \uput{0.5}[0](0,3.5){$\textcolor{blue}{C'}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorM2} \end{subfigure} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=red](10,5)(5,1)(0,5)(5,8) \psecurve[linecolor=blue](-10,-5)(-5,0)(0,-5)(-5,-10) \psdots[linecolor=blue](-5,0) \uput{0.75}[180](-5,0){$t'$} \psline[linecolor=darkgreen](4.5,0)(5.5,0) \uput{0.5}[0](5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$s'$} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$s$} \uput{3.5}[45](0,0){$\textcolor{red}{C}$} \uput{2.5}[-135](0,0){$\textcolor{blue}{C'}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorM3} \end{subfigure} \begin{subfigure}{0.16\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(4,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=blue](-10,-5)(-5,-2)(0,-5)(-5,-8) \psecurve[linecolor=red](-10,5)(-5,2)(-1,5)(-5,8) \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$t'$} \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$s$} \psline[linecolor=darkgreen](0,4.5)(0,5.5) \uput{0.5}[90](0,5.2){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$s'$} \uput{2}[-90](0,0){$\textcolor{blue}{C'}$} \uput{2.4}[100](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorM4} \end{subfigure} \begin{subfigure}{0.16\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(4,7.5) \pscustom*[linecolor=lightgray]{ \psecurve(-8,0)(-5,2)(-2,0)(0,-5) \psecurve(0,5)(-2,0)(-5,-2)(-8,0) \psline(-5,-2)(-5,2) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \psline(-5,-3)(-5,3) \psline(-3,5)(3,5) \psline[linecolor=darkgreen](-3,-5)(3,-5) \psecurve[linecolor=red](-8,0)(-5,2)(-2,0)(1,-5)(-2,-10) \psecurve[linecolor=blue](-8,0)(-5,-2)(-2,0)(0,5)(-2,10) \psecurve{->}(-10,5)(-5,2)(0,5)(3,10) \psdot(-2,0) \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$t'$} \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$s$} \psline[linecolor=darkgreen](0,-4.5)(0,-5.5) \uput{0.5}[-90](0,-5.2){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \uput{1.4}[-60](0,0){$\textcolor{red}{C}$} \uput{1.4}[80](0,0){$\textcolor{blue}{C'}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorM5} \end{subfigure} \medskip\\ \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) 
\pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=blue](10,-5)(5,-1)(0,-5)(5,-8) \psecurve[linecolor=red](-10,5)(-5,0)(0,5)(-5,10) \psdots[linecolor=red](-5,0) \uput{0.75}[180](-5,0){$t$} \psline[linecolor=darkgreen](4.5,0)(5.5,0) \uput{0.5}[0](5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$s$} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$s'$} \uput{3.5}[-45](0,0){$\textcolor{blue}{C'}$} \uput{2.5}[135](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorMp1} \end{subfigure} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psline[linecolor=red](0,5)(0,-5) \psecurve[linecolor=blue](-15,-1)(-5,0)(5,-1)(15,0) \psecurve{<-}(-10,1)(-5,0)(0,-5)(1,-10) \psdot(0,-0.5) \psdots[linecolor=blue](-5,0) \uput{0.75}[180](-5,0){$s'$} \psline[linecolor=darkgreen](4.5,0)(5.5,0) \uput{0.5}[0](5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$t$} \psdots[linecolor=red](0,-5) \uput{0.75}[-90](0,-5){$s$} \uput{0.5}[90](-3.5,0){$\textcolor{blue}{C'}$} \uput{0.5}[0](0,-3.5){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorMp2} \end{subfigure} \begin{subfigure}{0.215\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](5,-3)(5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=blue](10,5)(5,-1)(0,5)(5,11) \psecurve[linecolor=red](-10,-5)(-5,0)(0,-5)(-5,-10) \psdots[linecolor=red](-5,0) \uput{0.75}[180](-5,0){$t$} \psline[linecolor=darkgreen](4.5,0)(5.5,0) \uput{0.5}[0](5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=red](0,-5) \uput{0.75}[-90](0,-5){$s$} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \uput{2.5}[45](0,0){$\textcolor{blue}{C'}$} \uput{2.5}[-135](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorMp3} \end{subfigure} \begin{subfigure}{0.16\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(4,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \psline(-5,-3)(-5,3) \psline(-3,5)(3,5) \psline[linecolor=darkgreen](-3,-5)(3,-5) \psecurve[linecolor=blue](-10,-5)(-5,-2)(-1,-5)(-5,-8) \psecurve[linecolor=red](-10,5)(-5,2)(0,5)(-5,8) \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$s'$} \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$t$} \psline[linecolor=darkgreen](0,-4.5)(0,-5.5) 
\uput{0.5}[-90](0,-5.2){\textcolor{darkgreen}{$M$}} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$s$} \uput{2}[-100](0,0){$\textcolor{blue}{C'}$} \uput{2.4}[90](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorMp4} \end{subfigure} \begin{subfigure}{0.16\textwidth}\centering \begin{pspicture}(-7.5,-7.5)(4,7.5) \pscustom*[linecolor=lightgray]{ \psecurve(-8,0)(-5,-2)(-2,0)(0,5) \psecurve(0,-5)(-2,0)(-5,2)(-8,0) \psline(-5,2)(-5,-2) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-5,-3)(-5,3) \psline(-7.5,3)(-7.5,-3) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline(-5,-3)(-5,3) \psline[linecolor=darkgreen](-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=blue](-8,0)(-5,-2)(-2,0)(1,5)(-2,10) \psecurve[linecolor=red](-8,0)(-5,2)(-2,0)(0,-5)(-2,-10) \psecurve{<-}(-10,-5)(-5,-2)(0,-5)(-5,-8) \psdot(-2,0) \psdots[linecolor=blue](-5,-2) \uput{0.75}[180](-5,-2){$s'$} \psdots[linecolor=red](-5,2) \uput{0.75}[180](-5,2){$t$} \psline[linecolor=darkgreen](0,4.5)(0,5.5) \uput{0.5}[90](0,5.2){\textcolor{darkgreen}{$M$}} \psdots[linecolor=red](0,-5) \uput{0.75}[-90](0,-5){$s$} \uput{1.4}[60](0,0){$\textcolor{blue}{C'}$} \uput{1.4}[-80](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorMp5} \end{subfigure} \medskip\\ \begin{subfigure}{0.25\textwidth}\centering \begin{pspicture}(-8,-7.5)(4,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline[linecolor=darkgreen](-5,-3)(-5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve{->}(-3,7)(0,5)(0,-5)(-3,-7) \psecurve[linecolor=blue](-10,-5)(-5,2)(0,-5)(-5,-8) \psecurve[linecolor=red](-10,5)(-5,-2)(0,5)(-5,8) \psdot(-1.55,0) \psline[linecolor=darkgreen](-4.5,0)(-5.5,0) \uput{0.5}[180](-5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=red](0,5) \uput{0.75}[90](0,5){$s$} \psdots[linecolor=blue](0,-5) \uput{0.75}[-90](0,-5){$s'$} \uput{0.5}[90](-3.5,2){$\textcolor{blue}{C'}$} \uput{0.5}[-90](-3.5,-2){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorBoth1} \end{subfigure} \begin{subfigure}{0.25\textwidth}\centering \begin{pspicture}(-8,-7.5)(4,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,5)(3,5) \psline(3,7.5)(-3,7.5) } \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-3,-5)(3,-5) \psline(3,-7.5)(-3,-7.5) } \psline[linecolor=darkgreen](-5,-3)(-5,3) \psline(-3,5)(3,5) \psline(-3,-5)(3,-5) \psecurve[linecolor=red](-10,-5)(-5,-2)(0,-5)(-5,-8) \psecurve[linecolor=blue](-10,5)(-5,2)(0,5)(-5,8) \psline[linecolor=darkgreen](-4.5,0)(-5.5,0) \uput{0.5}[180](-5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](0,5) \uput{0.75}[90](0,5){$s'$} \psdots[linecolor=red](0,-5) \uput{0.75}[-90](0,-5){$s$} \uput{2}[90](0,0){$\textcolor{blue}{C'}$} \uput{2}[-90](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorBoth2} \end{subfigure} \begin{subfigure}{0.25\textwidth}\centering \begin{pspicture}(-8,-7.5)(8,7.5) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(5,-3)(5,3) \psline(7.5,3)(7.5,-3) } \psline[linecolor=darkgreen](-5,-3)(-5,3) \psline(5,-3)(5,3) \psline[linecolor=blue](5,2)(-5,2) \psline[linecolor=red](5,-2)(-5,-2) \psline[linecolor=darkgreen](-4.5,0)(-5.5,0) \uput{0.5}[180](-5,0){\textcolor{darkgreen}{$M$}} \psdots[linecolor=blue](5,2) \uput{0.75}[0](5,2){$s'$} \psdots[linecolor=red](5,-2) \uput{0.75}[0](5,-2){$s$} 
\uput{2.5}[90](0,0){$\textcolor{blue}{C'}$} \uput{2.5}[-90](0,0){$\textcolor{red}{C}$} \end{pspicture} \subcaption{}\label{fig:GensOfMorBoth3} \end{subfigure} \caption{The computation of morphism spaces between $f$-joins of two precurves illustrating the proof of Theorem~\ref{thm:PairingMorLagrangianFH}. The first row shows all configurations with two-sided $f$-joins only. However, there might be a point of $M$ in the face, in which case some of the black arrows are zero. The second and third rows show those configurations with both two- and one-sided $f$-joins. The last row shows those configurations with one-sided $f$-joins only.}\label{fig:GensOfMor} \end{figure} \begin{proof} Let us consider lower intersection points first. According to the theorem, these should correspond to generators of the homology of $(\Mor^+(C,C'),D^+)$. This chain complex is equal to the direct sum of the morphism spaces between individual $f$-joins of $C$ and $C'$ for all faces $f$, so we may compute each summand separately. Let us fix a face $f$. Consider a single $f$-join of $C$ between $s$ and $t$ and a single $f$-join of $C'$ between $s'$ and $t'$ where $s$ and $s'$ are sides of $f$, and $t$ and $t'$ are either sides of $f$ or points near the basepoint of $f$. We consider all cyclic orders of $s$, $t$, $s'$ and $t'$ on the boundary of $f$ separately, modulo interchanging $s$ and $t$ and interchanging $s'$ and $t'$. All possibilities are illustrated in Figure~\ref{fig:GensOfMor}: \begin{itemize} \item The first row shows the various cases in which both $f$-joins are two-sided. If $s$, $t$, $s'$ and $t'$ are all distinct, we are in case (a) or (b). If two sides coincide (without loss of generality $s$ and $t'$), we are in case (c) or (d). If both sides coincide, there is only one case, namely (e). In all five cases, we allow the face to be open. \item The second row shows all cases for which the $f$-join of $C$ is one-sided and the one for $C'$ is two-sided. In cases (f), (g) and (h), all sides are distinct. If two sides coincide, we are in case (i) or (j). \item The third row shows all cases for which the $f$-join of $C$ is two-sided and the one for $C'$ is one-sided. Again, in the first three cases, all sides are distinct and in the last two cases, two sides agree. \item The last row shows the cases in which both $f$-joins are one-sided. (p) and (q) are the two cases in which the two sides are distinct; in case (r), the two sides coincide. \end{itemize} In each of these cases, we may now study the morphism space separately and compare it to the number of intersection points. As we can see, in each case there is either one or no intersection point between the two $f$-joins. We claim that this agrees with the dimension of the homology of the morphism space between the $f$-joins. Moreover, we claim that the generator of the homology of each morphism space is given by the resolution of the corresponding intersection point. Let us only verify these claims in the examples of the first row, assuming the face $f$ is closed; all other cases follow similarly. \begin{enumerate} \item[(a)] Over $\mathbb{F}_2[U_f]$, the kernel of $D^+$ is generated by $$ (s\xrightarrow{p^{s}_{s'}} s') \oplus (t\xrightarrow{p^{t}_{t'}} t') \quad\text{ and }\quad (s\xrightarrow{p^{s}_{t'}} t') \oplus (t\xrightarrow{U_fp^{t}_{s'}} s'). $$ The former is equal to $D^+(t\xrightarrow{p^{t}_{s'}} s')$ and the latter is equal to $D^+(s\xrightarrow{p^{s}_{s'}} s')=D^+(t\xrightarrow{p^{t}_{t'}} t')$.
\item[(b)] Over $\mathbb{F}_2[U_f]$, the kernel of $D^+$ is generated by $$ (s\xrightarrow{p^{s}_{s'}} s') \oplus (t\xrightarrow{p^{t}_{t'}} t') \quad\text{ and }\quad (s\xrightarrow{p^{s}_{t'}} t') \oplus ( t\xrightarrow{p^{t}_{s'}} s'). $$ The latter is equal to $D^+(s\xrightarrow{p^{s}_{s'}} s')=D^+(t\xrightarrow{p^{t}_{t'}} t')$. However, the former does not lie in the image of $D^+$; only its products with positive powers of $U_f$ do. \item[(c)] This case is the same as case (a), except that two sides coincide. This does not change the morphism space of algebra elements in $\mathcal{A}_f^+$. \item[(d)] This case is the same as case (b), except that two sides coincide. Again, this does not change the morphism space of algebra elements in $\mathcal{A}_f^+$; however, note that while the surviving generator is not in the image of $D^+$, it is a component of $D(s\xrightarrow{\iota_{s}=\iota_{t'}} t')$. \item[(e)] This case is similar to the previous one; here, the surviving generator is a component of $D(s\xrightarrow{\iota_{s}=\iota_{t'}} t')$ and $D(t\xrightarrow{\iota_{t}=\iota_{s'}} s')$. \end{enumerate} Let us now turn to upper intersection points. These should correspond to basis elements of $\Mor^\times(C,C')$, which can be written as the direct sum of $\Mor^\times(C.\iota_a,C'.\iota_a)$ over all arcs $a\in A$. Each summand $\Mor^\times(C.\iota_a,C'.\iota_a)$ can in turn be written as $$ \left\{\left.\left(C.\iota_{s_1(a)}\xrightarrow{\varphi_1} C'.\iota_{s_1(a)}, C.\iota_{s_2(a)}\xrightarrow{\varphi_2} C'.\iota_{s_2(a)}\right)\right\vert \varphi_2\circ P_a=P'_a \circ \varphi_1\right\}. $$ Since $P_a$ and $P'_a$ are invertible, the projection onto the first component $\varphi_1$ is an isomorphism. In other words, $\Mor^\times(C.\iota_a,C'.\iota_a)$ is isomorphic to the vector space of linear maps $$\varphi_1\co C.\iota_{s_1(a)}\rightarrow C'.\iota_{s_1(a)}.$$ By Lemma~\ref{lem:ResolutionWellDefined}, a resolution $\varphi$ of an intersection point $x$ between the precurves $C$ and $C'$ on an arc $a$ is an element of $\Mor^\times(C.\iota_a,C'.\iota_a)$. Moreover, it corresponds to a standard basis element of the vector space of linear maps $\varphi_1\co C.\iota_{s_1(a)}\rightarrow C'.\iota_{s_1(a)}$. So for each $a\in A$, the resolutions of upper intersection points in $N(a)$ form a basis of $\Mor^\times(C.\iota_a,C'.\iota_a)$.
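To illustrate this parametrisation in a toy case (the matrices below are hypothetical and not attached to any particular pair of precurves), suppose that $C.\iota_{s_1(a)}$ and $C'.\iota_{s_1(a)}$ are both 2-dimensional and that
\[
P_a=\left(\begin{matrix}1&1\\0&1\end{matrix}\right)=P_a^{-1},\qquad
P'_a=\id,\qquad
\varphi_1=\left(\begin{matrix}1&0\\0&0\end{matrix}\right),
\]
where $\varphi_1$ is the standard basis element corresponding to an intersection point $x$. Then
\[
\varphi_2=P'_a\circ\varphi_1\circ P_a^{-1}=\left(\begin{matrix}1&1\\0&0\end{matrix}\right),
\]
and indeed $\varphi_2\circ P_a=P'_a\circ\varphi_1$ over $\mathbb{F}_2$. The additional non-zero entry of $\varphi_2$ records the extra path through the crossover arrow encoded by $P_a$, matching the path count in the proof of Lemma~\ref{lem:ResolutionWellDefined}.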
\begin{figure}[t] \centering \psset{unit=0.6} \begin{pspicture}(-13,-5.1)(9,5.1) \pscustom*[linecolor=lightred,linewidth=0pt]{ \psline(-7,-5)(-7,5) \psline(3,5)(3,-5) } \pscustom*[linecolor=lightgray]{% \psecurve(-9,3)(-3,-3)(3,3)(9,-3)(15,3 \psline(9,-3)(9,3 \psecurve(15,-3)(9,3)(3,-3)(-3,3)(-9,-3 \psline(-3,3)(-4.5,3)(-4.5,-3)(-3,-3)% } \pscustom*[linecolor=lightgray]{% \psecurve(-3.5,2)(-4.5,3)(-5.5,2)(-6.5,3 \psline(-5.5,2)(-7,2)(-7,-2)(-5.5,-2)% \psecurve(-6.5,-3)(-5.5,-2)(-4.5,-3)(-3.5,-2 \psline(-4.5,-3)(-3,-3)(-3,3)(-4.5,3)% } \pscustom*[linecolor=lightgray]{% \psecurve(-15,-2)(-11,2)(-7,-2)(-3,2 \psline(-7,-2)(-5.5,-2)(-5.5,2)(-7,2 \psecurve(-3,-2)(-7,2)(-11,-2)(-15,2 \psline(-11,-2)(-11,2)% } \psline*[linecolor=white,linewidth=0pt](10,4)(6,4)(6,-4)(10,-4)(10,4) \psline*[linecolor=white,linewidth=0pt](-9,3)(-12,3)(-12,-3)(-9,-3)(-9,3) \pscustom*[linecolor=white,linewidth=0pt]{ \psecurve(-13,1.5)(-11,2)(-9,1.5)(-7,2)(-5,1.5) \psline(-7,2)(-7,3)(-11,3)(-11,2) } \pscustom*[linecolor=white,linewidth=0pt]{ \psecurve(-13,-1.5)(-11,-2)(-9,-1.5)(-7,-2)(-5,-1.5) \psline(-7,-2)(-7,-3)(-11,-3)(-11,-2) } \pscustom*[linecolor=white,linewidth=0pt]{ \psecurve(0,2.5)(3,3)(6,2.5)(9,3)(12,2.5) \psline(3,3)(3,4)(9,4)(9,3) } \pscustom*[linecolor=white,linewidth=0pt]{ \psecurve(0,-2.5)(3,-3)(6,-2.5)(9,-3)(12,-2.5) \psline(3,-3)(3,-4)(9,-4)(9,-3) } \pscustom[linecolor=blue]{ \psline(-6,3)(-3,3) \psecurve(-9,-3)(-3,3)(3,-3)(9,3)(15,-3) } \pscustom[linecolor=blue]{ \psline(-4,2)(-7,2) \psecurve(-3,-2)(-7,2)(-11,-2)(-15,2) } \pscustom[linecolor=red]{ \psline(-6,-3)(-3,-3) \psecurve(-9,3)(-3,-3)(3,3)(9,-3)(15,3) } \pscustom[linecolor=red]{ \psline(-4,-2)(-7,-2) \psecurve(-3,2)(-7,-2)(-11,2)(-15,-2) } \psline[linecolor=blue,linestyle=dotted, dotsep=0.1](-6.1,3)(-6.4,3) \psline[linecolor=blue,linestyle=dotted, dotsep=0.1](-3.9,2)(-3.6,2) \psline[linecolor=red,linestyle=dotted, dotsep=0.1](-6.1,-3)(-6.4,-3) \psline[linecolor=red,linestyle=dotted, dotsep=0.1](-3.9,-2)(-3.6,-2) \psdot(-9,0) \psdot(0,0) \psdot(6,0) \uput{0.2}[0](6,0){$y$} \uput{0.4}[90](0,0){$x$} \uput{0.2}[180](-9,0){$z$} \pscircle*[linecolor=white](-17,8.5){10} \pscircle*[linecolor=white](-17,-8.5){10} \pscircle*[linecolor=white](12,-10){10} \pscircle*[linecolor=white](12,10){10} \rput(-5,2.5){\psset{linecolor=blue} \psecurve(-1.5,0.5)(-0.5,-0.5)(0.5,0.5)(1.5,-0.5) \psecurve(-1.5,-0.5)(-0.5,0.5)(0.5,-0.5)(1.5,0.5) \psline{->}(0.175,-0.28)(0.25,-0.4) \psline{->}(-0.175,-0.28)(-0.25,-0.4) } \rput(-5,-2.5){\psset{linecolor=red} \psecurve(-1.5,0.5)(-0.5,-0.5)(0.5,0.5)(1.5,-0.5) \psecurve(-1.5,-0.5)(-0.5,0.5)(0.5,-0.5)(1.5,0.5) \psline{->}(0.175,-0.28)(0.25,-0.4) \psline{->}(-0.175,-0.28)(-0.25,-0.4) } \psline[linecolor=blue](-3.5,1.5)(-6.5,1.5)(-6.5,4.5)(-3.5,4.5)(-3.5,1.5) \rput(-5,3.75){\textcolor{blue}{$P'_a$}} \psline[linecolor=red](-3.5,-1.5)(-6.5,-1.5)(-6.5,-4.5)(-3.5,-4.5)(-3.5,-1.5) \rput(-5,-3.75){\textcolor{red}{$P_a$}} \psline{->}(-7,-5)(-7,5) \rput[r](-7.25,4.25){$s_2(a)$} \psline{->}(3,-5)(3,5) \rput[l](3.25,4.25){$s_1(a)$} \psdot[linecolor=red](3,3) \psdot[linecolor=blue](3,-3) \psdot[linecolor=blue](-7,2) \psdot[linecolor=red](-7,-2) \uput{0.2}[45](3,3){$i$} \uput{0.2}[-45](3,-3){$i'$} \uput{0.2}[135](-7,2){$j'$} \uput{0.2}[-135](-7,-2){$j$} \rput{0}(0.3,0){\psline[linearc=0.4]{->}(1;61)(0,0)(1;-61)} \rput{180}(-0.3,0){\psline[linearc=0.4]{->}(1;61)(0,0)(1;-61)} \rput{0}(6,0.4){\psline[linearc=0.2]{<-}(1;62)(0,0)(1;118)} \rput{180}(6,-0.4){\psline[linearc=0.2]{<-}(1;62)(0,0)(1;118)} 
\rput{0}(-9,0.45){\psline[linearc=0.2]{<-}(1;61)(0,0)(1;119)} \rput{180}(-9,-0.45){\psline[linearc=0.2]{<-}(1;61)(0,0)(1;119)} \psline[linecolor=darkgray]{->}(1.45,2.25)(1.5,2.3) \psline[linecolor=darkgray]{->}(-1.45,-2.25)(-1.5,-2.3) \psline[linecolor=darkgray]{<-}(4.55,2.25)(4.5,2.3) \psline[linecolor=darkgray]{->}(-7.95,-1.57)(-8,-1.52) \psline[linecolor=darkgray]{<-}(-1.45,2.25)(-1.5,2.3) \psline[linecolor=darkgray]{<-}(1.45,-2.25)(1.5,-2.3) \psline[linecolor=darkgray]{->}(4.55,-2.25)(4.5,-2.3) \psline[linecolor=darkgray]{->}(-8.05,1.47)(-8,1.52) \psecurve{->}(3,4)(2.8,2.8)(2.8,-2.8)(3,-4) \psecurve{<-}(-7,3)(-6.8,1.8)(-6.8,-1.8)(-7,-3) \uput{0.4}[180](3,0){$\varphi_1$} \uput{0.4}[0](-7,0){$\varphi_2=P'_a\circ \varphi_1\circ P_a^{-1}$} \psline[linecolor=darkgreen](-7.5,5)(3.5,5) \psline[linecolor=darkgreen](-7.5,-5)(3.5,-5) \rput(-2,-3.25){$\textcolor{red}{C}$} \rput(-2,3.25){$\textcolor{blue}{C'}$} \end{pspicture} \caption{Illustration for the identification of bigons with the map $\beta$ in the proof of Theorem~\ref{thm:PairingMorLagrangianFH}.}\label{fig:BigonCounts} \end{figure} Finally, we need to identify the map $\beta$ with the differential on $\LagrangianFC(C,C')$. Given an upper intersection point $x$ and its resolution $(\varphi_1,\varphi_2)$, $\beta(\varphi_1,\varphi_2)$ is simply given by the differential of this morphism. Let us compute the components of this differential on $\varphi_1$ and $\varphi_2$ separately; for an illustration of the following argument, see Figure~\ref{fig:BigonCounts}. Suppose the morphism $\varphi_1$ goes from $\bullet(s_1(a),i)$ of the first precurve to $\bullet(s_1(a),i')$ of the second precurve. We claim that if the two $f$-joins starting at these two dots are disjoint, $D(\varphi_1)$ is null-homotopic; moreover, if they intersect at some point $y$, $D(\varphi_1)$ is equal to the resolution of $y$ and there is a single bigon from $x$ to $y$. These two claims can be easily verified for each case in which two sides coincide (ie cases (c), (d), (e), (i), (j), (n), (o) and (r)) separately. For $D(\varphi_2)$, we can argue similarly. However, we need to take the matrices $P_a$ and $P'_a$ into account. As we have seen in the proof of Lemma~\ref{lem:ResolutionWellDefined}, $\varphi_2$ has a non-zero component from $\bullet(s_2(a),j)$ of the first precurve to $\bullet(s_2(a),j')$ of the second precurve iff there is an odd number of paths from $\bullet(s_2(a),j)$ to $\bullet(s_2(a),j')$ via $x$. If the two $f$-joins starting at $\bullet(s_2(a),j)$ and $\bullet(s_2(a),j')$, respectively, intersect in a point $z$, then the number of bigons from $x$ to $z$ agrees with the number of such paths, noting that the boundary orientation of each bigon is opposite to the orientation of the crossover arrows. We may now argue as for $D(\varphi_1)$. \end{proof} \subsection{A formula for computing the homology of morphism spaces between curves}\label{subsec:formulaMor} If $C$ and $C'$ are two precurves, then by Theorem~\ref{thm:EverythingIsLoopTypeUpToLocalSystems}, there are two collections of curves $L$ and $L'$ such that $C$ and $\Pi_i(L)$ as well as $C'$ and $\Pi_i(L')$ are homotopic, respectively. Thus, $\Mor(C,C')$ and $\Mor(L,L'):=\Mor(\Pi_i(L),\Pi_i(L'))$ are chain homotopy equivalent. If we put $\Pi_i(L)$ and $\Pi_i(L')$ into pairing position, Theorem~\ref{thm:PairingMorLagrangianFH} says that the homology of $\Mor(L,L')$ is graded isomorphic to $\LagrangianFH(L,L'):=\LagrangianFH(\Pi_i(L),\Pi_i(L'))$.
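In the pairing formula below (Theorem~\ref{thm:PairingFormula}), the contribution of a pair of parallel underlying curves is measured by the quantity $\dim\ker\left((X^{-1})^t\otimes X'-\id\right)$. As a toy illustration (with hypothetical local systems, chosen only for this computation), take
\[
X=X'=\left(\begin{matrix}1&1\\0&1\end{matrix}\right)\in\GL_2(\mathbb{F}_2).
\]
Then $X^2=\id$, so $X^{-1}=X$, and over $\mathbb{F}_2$
\[
(X^{-1})^t\otimes X'-\id=
\left(\begin{matrix}0&1&0&0\\0&0&0&0\\1&1&0&1\\0&1&0&0\end{matrix}\right),
\]
which has rank 2, so the kernel is 2-dimensional. This is consistent with Lemma~\ref{lem:reformulationofparalleldimensioncount} below: the minimal polynomial of $X$ is $f=x^2+1$, the matrix $X'$ is similar to the companion matrix $X_f$, and $f(X)=X^2+\id=0$, so $\dim\ker(f(X))=2$.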
The main result of this section says that we can actually compute $\LagrangianFH(L,L')$ without putting the curves into pairing position. \begin{definition} Let $(L,L')$ be a pair of $\delta$-graded curves $L=(\gamma,X)$ and $L'=(\gamma',X')$ on a marked surface with arc system $(S,M,A)$ with $\dim X=:n$ and $\dim X'=:n'$. Assume that $\gamma$ and $\gamma'$ intersect minimally. Let $\mathbb{F}_2\langle\gamma\cap\gamma'\rangle$ denote the vector space over $\mathbb{F}_2$ spanned by intersection points between $\gamma$ and $\gamma'$. Each intersection point can be $\delta$-graded in exactly the same way as intersection points in $\LagrangianFC(C,C')$. If $\gamma$ and $\gamma'$ are parallel, let $\delta(\gamma,\gamma')$ be the unique real number one needs to add to the $\delta$-grading of each intersection point of $\gamma$ with arcs in $A$ such that $\gamma$ and $\gamma'$ agree as $\delta$-graded curves. For any non-negative integer $m$, let $V_\delta(m)$ be an $m$-dimensional vector space in $\delta$-grading $\delta\in\mathbb{R}$. \end{definition} \begin{theorem}\label{thm:PairingFormula} With the notation from above, \(\LagrangianFH(L,L')\) is graded isomorphic to \begin{equation}\label{eqn:MorSpacesNonparallel} V_0(n\cdot n')\otimes \mathbb{F}_2\langle\gamma\cap\gamma'\rangle, \end{equation} unless \(\gamma\) and \(\gamma'\) are parallel. If they are parallel, let us assume without loss of generality that their orientations agree. Then, \(\LagrangianFH(L,L')\) is graded isomorphic to \begin{equation}\label{eqn:MorSpacesParallel} \Big(V_0(n\cdot n')\otimes\mathbb{F}_2\langle\gamma\cap\gamma'\rangle\Big) \oplus \Big(\!\big(V_0(1)\oplus V_{1}(1)\big)\otimes V_{\delta(\gamma,\gamma')}\left(\dim\left(\ker\left((X^{-1})^t\otimes X'-\id\right)\right)\right)\!\!\Big). 
\end{equation} \end{theorem} \begin{figure}[t]\centering { \psset{unit=0.3} \begin{pspicture}(-18,-3.5)(18,3.5) \rput(-12,0){ \pscustom*[linecolor=lightgray]{ \pscurve(-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[liftpen=2](3,0)(1,-1.5)(-1,-1.5)(-3,0) } \pscurve[linecolor=red](-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[linecolor=blue](-3,0)(-1,-1.5)(1,-1.5)(3,0) \psline[linecolor=darkgray]{->}(-0.3,-1.66)(-0.4,-1.66) \psline[linecolor=darkgray]{<-}(0.4,1.66)(0.3,1.66) \psline[linecolor=gray]{->}(-2.5,0)(2.5,0) } \rput(-6,0){ \pscustom*[linecolor=lightgray]{ \pscurve(-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[liftpen=2](3,0)(1,-1.5)(-1,-1.5)(-3,0) } \pscurve[linecolor=blue](-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[linecolor=red](-3,0)(-1,-1.5)(1,-1.5)(3,0) \psline[linecolor=darkgray]{->}(-0.3,1.66)(-0.4,1.66) \psline[linecolor=darkgray]{<-}(0.4,-1.66)(0.3,-1.66) \psline[linecolor=gray]{<-}(-2.5,0)(2.5,0) } \rput(0,0){ \pscustom*[linecolor=lightgray]{ \pscurve(-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[liftpen=2](3,0)(1,-1.5)(-1,-1.5)(-3,0) } \pscurve[linecolor=red](-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[linecolor=blue](-3,0)(-1,-1.5)(1,-1.5)(3,0) \psline[linecolor=darkgray]{->}(-0.3,-1.66)(-0.4,-1.66) \psline[linecolor=darkgray]{<-}(0.4,1.66)(0.3,1.66) \psline[linecolor=gray]{->}(-2.5,0)(2.5,0) } \rput(6,0){ \pscustom*[linecolor=lightgray]{ \pscurve(-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[liftpen=2](3,0)(1,-1.5)(-1,-1.5)(-3,0) } \pscurve[linecolor=blue](-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[linecolor=red](-3,0)(-1,-1.5)(1,-1.5)(3,0) \psline[linecolor=darkgray]{->}(-0.3,1.66)(-0.4,1.66) \psline[linecolor=darkgray]{<-}(0.4,-1.66)(0.3,-1.66) \psline[linecolor=gray]{<-}(-2.5,0)(2.5,0) } \rput(12,0){ \pscustom*[linecolor=lightgray]{ \pscurve(-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[liftpen=2](3,0)(1,-1.5)(-1,-1.5)(-3,0) } \pscurve[linecolor=red](-3,0)(-1,1.5)(1,1.5)(3,0) \pscurve[linecolor=blue](-3,0)(-1,-1.5)(1,-1.5)(3,0) \psline[linecolor=darkgray]{->}(-0.3,-1.66)(-0.4,-1.66) \psline[linecolor=darkgray]{<-}(0.4,1.66)(0.3,1.66) \psline[linecolor=gray]{->}(-2.5,0)(2.5,0) } \rput(12,0){ \psline*[linecolor=lightgray](3,0)(4,-1)(4,1)(3,0) \psline[linecolor=red](3,0)(4,-1) \psline[linecolor=blue](3,0)(4,1) } \rput(-12,0){ \psline*[linecolor=lightgray](-3,0)(-4,-1)(-4,1)(-3,0) \psline[linecolor=blue](-3,0)(-4,1) \psline[linecolor=red](-3,0)(-4,-1) } \psdots(-15,0)(-9,0)(-3,0)(3,0)(9,0)(15,0) \rput(0,3){$\textcolor{red}{L}$} \rput(0,-3){$\textcolor{blue}{L'}$} \rput(-16,0){$\dots$} \rput(16,0){$\dots$} \end{pspicture} } \caption{A ``zig-zag'' chain of bigons.}\label{fig:BigonChain} \end{figure} \begin{proof} Let us start by considering the case when the local systems of both curves are trivial, ie $n=n'=1$, so $X=X'=\id$. Then, any upper intersection point is the source of at most two bigons and each lower intersection point is the target of at most two bigons. Thus, if we start at any intersection point and follow the bigons in either direction, we obtain a ``zig-zag'' chain of intersection points which are connected by bigons, see Figure~\ref{fig:BigonChain} for an illustration. 
If $L$ and $L'$ are not parallel, $\LagrangianFC(L,L')$ decomposes into a direct sum of finitely many linear chains, which look like $$ \begin{tikzcd} \bullet \arrow[leftarrow]{r} & \bullet & \cdots \arrow[leftarrow]{l} \arrow[leftarrow]{r} & \bullet & \bullet \arrow[leftarrow]{l} \end{tikzcd}, $$ $$ \begin{tikzcd} \bullet \arrow{r} & \bullet & \cdots \arrow{l} \arrow{r} & \bullet & \bullet \arrow{l} \end{tikzcd} $$ or $$ \begin{tikzcd} \bullet \arrow{r} & \bullet & \cdots \arrow{l} & \bullet \arrow{l} \arrow{r} & \bullet \end{tikzcd}. $$ In the first two cases, the number of intersection points is odd with an even number of upper, respectively lower, intersection points. In the third case, the number of intersection points is even. The homology in the first two cases is 1-dimensional and in the third case it vanishes, which can be seen by cancelling one arrow at a time. This corresponds to sliding one curve over the other to remove one bigon at a time, until there is only one intersection point left or none. If $L$ and $L'$ are parallel, there is also a cyclic ``zig-zag'' chain: $$ \begin{tikzcd} \bullet \arrow{r} & \bullet & \bullet \arrow{l} \arrow{r} & \cdots \arrow{r} & \bullet & \bullet \arrow{r} \arrow{l} & \bullet \arrow[leftarrow,rounded corners,%
 to path={ -- ([xshift=2ex]\tikztostart.east) |- ([xshift=-2ex,yshift=-4ex]\tikztostart.east) -| ([xshift=-2ex,yshift=-4ex]\tikztotarget.west) -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}]{llllll} \end{tikzcd}\bigskip $$ In this case, we can apply the same procedure to obtain a cyclic chain with just two intersection points: $$ \begin{tikzcd} \bullet \arrow[bend left]{r} \arrow[bend right]{r} & \bullet \end{tikzcd} =\left(\bullet\oplus\bullet\right). $$ So in this case, the homology is 2-dimensional. Geometrically, this corresponds to removing all bigons except the last two. In order to compute the minimal intersection number of $\gamma$ and $\gamma'$, however, we need to remove these two bigons as well. So these additional two generators make up the extra term in formula~\eqref{eqn:MorSpacesParallel}. The general case with potentially non-trivial local systems is shown similarly, by assuming that the underlying curves $\gamma$ and $\gamma'$, considered as precurves, are in pairing position. Each intersection point $x\in\LagrangianFC(\gamma,\gamma')$ corresponds to $n\cdot n'$ intersection points of $\Pi_i(L)$ and $\Pi_i(L')$. Let us number the copies of $\gamma$ and $\gamma'$ in $\Pi_i(L)$ and $\Pi_i(L')$ such that we can index the intersection points corresponding to $x$ by pairs $(i,i')$. Let us write $(i,i')\otimes x$ for the generator indexed by $(i,i')$. Thus, we can identify $\LagrangianFC(L,L')$ with $V_0(n\cdot n')\otimes \LagrangianFC(\gamma, \gamma')$. Suppose the precurves $\Pi_i(L)$ and $\Pi_i(L')$ do not have any generators on a common arc. In this case, they are certainly not parallel and there are also no bigons; so the theorem follows immediately. If $\Pi_i(L)$ and $\Pi_i(L')$ have generators on a common arc, let us put the matrices $X$ and $X'$ on such an arc $a$. Let us also assume for the moment that $\gamma$ and $\gamma'$ pass through this arc from its right to its left, such that the non-identity block of $P_a$ is $X$ and the one of $P'_a$ is $X'$. For an illustration, see Figure~\ref{fig:ParallelLoopsExtraMorphisms}. Then, the local systems only impact the bigons which cross the side $s_2(a)$. More precisely, suppose $x,z\in \LagrangianFC(\gamma, \gamma')$ and there is a bigon from $x$ to $z$.
If this bigon does not cross $s_2(a)$, the bigon count from $V_0(n\cdot n')\otimes x$ to $V_0(n\cdot n')\otimes z$ is given by the identity matrix. Otherwise, the bigon count is given by $(X^{-1})^t\otimes X'$, because the bigon count from $(i,i')\otimes x$ to $(j,j')\otimes z$ is given by $(X^{-1})_{ij}\cdot X'_{j'i'}$ as in the proof of Theorem~\ref{thm:PairingMorLagrangianFH}. Since $X$ and $X'$ have full rank, $(X^{-1})^t\otimes X'$ has full rank, too. So for linear chains, we can now argue as before by doing cancellations which remove all bigons corresponding to a single bigon between $\gamma$ and $\gamma'$ at a time. By applying this procedure to cyclic chains, we now arrive at $$ \begin{tikzcd} \bullet \arrow[bend left]{r}{\id} \arrow[bend right,swap]{r}{(X^{-1})^t\otimes X'} & \bullet \end{tikzcd}. $$ The homology of this complex is given by the direct sum of a copy of the kernel of $(X^{-1})^t\otimes X'-\id$ and another such copy shifted up by 1 in $\delta$-grading. Finally, by definition, changing the orientation of $\gamma$ or $\gamma'$ only inverts and transposes the local systems. So for non-parallel curves, the formula does not change. If $\gamma$ and $\gamma'$ are parallel, we are assuming that they are oriented in the same direction. So if they pass through the arc $a$ from left to right, the non-identity block of $P_a$ is $X^t$ and the one of $P'_a$ is $X^{\prime t}$. Now observe that $X^{-1}\otimes X^{\prime t}-\id$ has the same rank as $(X^{-1})^t\otimes X'-\id$. \end{proof} \begin{figure}[t]\centering \psset{unit=0.43} \begin{pspicture}(-15,-5.5)(15,5.5) \pscustom*[linecolor=lightgray]{ \psecurve(-12,3)(0,-2)(12,3)(24,-2) \psline(12,3)(12,-2) \psecurve(24,3)(12,-2)(0,3)(-12,-2) \psline(0,3)(0,-2) } \psframe*[linecolor=white](6,-10)(20,10) \pscustom*[linecolor=lightgray]{ \psecurve(12,2)(0,-3)(-12,2)(-24,-3) \psline(-12,2)(-12,-3) \psecurve(-24,2)(-12,-3)(0,2)(12,-3) \psline(0,2)(0,-3) } \psframe*[linecolor=white](-6,-10)(-20,10) \psframe*[linecolor=lightgray](-1,1.5)(1,-1.5) \rput(-11.2,2.5){$\left\{\textcolor{white}{\rule[-0.8cm]{1.8cm}{1.8cm}}\right.$} \rput{0}(-14,2.5){$\textcolor{red}{L}$} \rput(-11.2,-2.5){$\left\{\textcolor{white}{\rule[-0.8cm]{1.8cm}{1.8cm}}\right.$} \rput{0}(-14,-2.5){$\textcolor{blue}{L'}$} \psecurve[linecolor=red]{<-}(-24,-1)(-12,4)(0,-1)(12,4)(24,-1) \psecurve[linecolor=red]{<-}(-24,-2)(-12,3)(0,-2)(12,3)(24,-2) \psecurve[linecolor=red]{<-}(-24,-3)(-12,2)(0,-3)(12,2)(24,-3) \psecurve[linecolor=red]{<-}(-24,-4)(-12,1)(0,-4)(12,1)(24,-4) \psecurve[linecolor=blue]{<-}(-24,1)(-12,-4)(0,1)(12,-4)(24,1) \psecurve[linecolor=blue]{<-}(-24,2)(-12,-3)(0,2)(12,-3)(24,2) \psecurve[linecolor=blue]{<-}(-24,3)(-12,-2)(0,3)(12,-2)(24,3) \psecurve[linecolor=blue]{<-}(-24,4)(-12,-1)(0,4)(12,-1)(24,4) \psframe[fillcolor=white,fillstyle=solid](-1,0.5)(1,4.5) \psframe[fillcolor=white,fillstyle=solid](-1,-0.5)(1,-4.5) \rput(0,2.5){$X'$} \rput(0,-2.5){$X$} \rput[r](-12,3.1){$i\,$} \rput[r](-12,2.1){$j\,$} \rput[l](12,3.1){$\,i$} \rput[l](12,2.1){$\,j$} \rput[r](-12,-2.9){$j'\,$} \rput[r](-12,-1.9){$i'\,$} \rput[l](12,-2.9){$\,j'$} \rput[l](12,-1.9){$\,i'$} \psdots(6,0.5)(-6,-0.5) \end{pspicture} \caption{An illustration of the identification of bigons between parallel curves and the matrix $(X^{-1})^t\otimes X'$. 
Up to conventions, this is the same as~\protect\cite[Figure~43]{HRW}.}\label{fig:ParallelLoopsExtraMorphisms} \end{figure} \subsection{Classification of curved complexes}\label{subsec:complete_classification} \begin{theorem}\label{thm:CompleteClassification} Let \(L=\{(\gamma_i,X_i)\}_{i\in I}\) and \(L'=\{(\gamma'_{i'},X'_{i'})\}_{i'\in I'}\) be two collections of loops. Then \(\Pi_i(L)\) is homotopic to~\(\Pi_i(L')\) iff there is a bijection \(\iota\co I\rightarrow I'\) such that \(\gamma_i\)~is homotopic to~\(\gamma'_{\iota(i)}\) and \(X_i\)~is similar to~\(X'_{\iota(i)}\). The same holds if we replace \(\Pi_i\) by \(\Pi\). \end{theorem} \begin{remark} We expect the same theorem to hold for general curves, ie compact \emph{and non-compact} ones. The arguments that we use in the proof of Theorem~\ref{thm:CompleteClassification} rely on different growth properties of the dimensions of the two summands in \eqref{eqn:MorSpacesParallel} from Theorem~\ref{thm:PairingFormula} under pairing with particular test curves. However, the second term simply vanishes for non-compact curves, and thus a separate argument would be needed in this case. \end{remark} \begin{corollary}\label{cor:AddingBasepointFunctorIsFaithfulUpToHom} Consider a marked surface \((S,\emptyset)\) with an arc system \(A\). Let \(M\) be a set of points on \(\partial S\), such that every face \(f\in F(S,\emptyset,A)\) contains at most one point in \(M\). Then, two objects in \(\CC(S,\emptyset,A)\) are homotopic iff their images under the induced functor \[\CC(S,\emptyset,A)\rightarrow\CC(S,M,A)\] are homotopic. \end{corollary} \begin{proof} The induced functor is a functor of dg categories, as it is induced by the quotient map \[\mathcal{A}(S,\emptyset,A)\rightarrow\mathcal{A}(S,\emptyset,A)/\{p_{m}=0\mid m\in M\}=\mathcal{A}(S,M,A),\] where $p_{m}$ is the algebra element corresponding to the boundary component in which $m\in M$ lies. Thus, the images of two homotopic objects are homotopic. So we may assume that the two objects are direct sums of loops with local systems. The images of such objects are represented by the same loops with local systems, because adding a single basepoint to a face $f$ without any basepoints has the effect of removing exactly one of the two arrows that correspond to an $f$-join under $\Pi_i$. By Theorem~\ref{thm:CompleteClassification}, these loops with local systems represent homotopic objects in both $\CC(S,\emptyset,A)$ and $\CC(S,M,A)$ iff the curves are the same and the local systems are similar. \end{proof} We now turn to the proof of Theorem~\ref{thm:CompleteClassification}. \begin{definition}\label{def:companionmatrix} Given a polynomial \[f=x^n+\sum_{i=0}^{n-1} a_i x^i\in\mathbb{F}_2[x],\] define the \textbf{companion matrix $X_f$ of $f$} to be the matrix \[ X_f:=\left( \begin{matrix} 0 & \phantom{a_1} & \phantom{a_1} & a_0\\ 1 & \ddots & \phantom{a_1} & a_1\\ \phantom{a_1}& \ddots & 0 & \vdots\\ \phantom{a_1} & \phantom{a_1} & 1 & a_{n-1} \end{matrix}\right)\in\GL_n(\mathbb{F}_2). \] Note that $X_f$ is invertible iff $a_0\neq 0$. Also, the minimal polynomial of $X_f$ is $f$, so that for any polynomial $g\in\mathbb{F}_2[x]$, $g(X_f)=0$ iff $f\vert g$. For example, the companion matrix of $f=x^2+x+1$ is $X_f=\left(\begin{smallmatrix}0&1\\1&1\end{smallmatrix}\right)$.
A diagonal block matrix of the form \[ \left( \begin{matrix} X_{f_1} & \phantom{X_{f_1}}& \phantom{X_{f_1}}\\ \phantom{X_{f_1}}& \ddots &\phantom{X_{f_1}}\\ \phantom{X_{f_1}}& \phantom{X_{f_1}} & X_{f_r} \end{matrix}\right), \text{ where $f_1,\dots,f_r\in\mathbb{F}_2[x],$} \] is in Frobenius normal form if $f_{i+1}\vert f_{i}$ for all $i=1,\dots,r-1$. \end{definition} \begin{theorem} Every matrix is similar to a matrix in Frobenius normal form. Two matrices are similar iff they have the same Frobenius normal form. \end{theorem} \begin{proof} This is standard linear algebra. \end{proof} \begin{lemma}\label{lem:reformulationofparalleldimensioncount} Given a polynomial \(f\in \mathbb{F}_2[x]\) and \(X\in\GL_m(\mathbb{F}_2)\) for some integer \(m\), \[\dim(\ker((X^{-1})^t\otimes X_f-\id))=\dim(\ker(f(X))),\] where \(X_f\) is the companion matrix of~\(f\) from Definition~\ref{def:companionmatrix}. \end{lemma} \begin{proof} This follows from the same arguments as~\cite[Proposition~36]{HRW}. Let $n=\deg f$. By performing row and column operations, we can bring $((X^{-1})^t\otimes X_f-\id)$ into block diagonal form with the first block of dimension $(n-1)m$ equal to the identity matrix and the second block of dimension $m$ equal to the expression \begin{equation}\label{eqn:reformulationofparalleldimensioncount} \id+a_{n-1}(X^{-1})^t+\cdots +a_0((X^{-1})^t)^{n}, \end{equation} where the $a_i$ are the coefficients of $f$ as in Definition~\ref{def:companionmatrix}. The kernel of the matrix \eqref{eqn:reformulationofparalleldimensioncount} has the same dimension as the kernel of $((X^{-1})^t\otimes X_f-\id)$. Now multiply \eqref{eqn:reformulationofparalleldimensioncount} by $(X^t)^n$ to obtain $f(X^t)$. Transposing a square matrix does not change the dimension of its kernel, so we are done. \end{proof} \begin{proof}[of Theorem~\ref{thm:CompleteClassification}] By Corollary~\ref{cor:SplittingCatsForEquivalenceComplexes}, it suffices to show the first part of the theorem. The if-direction is clear, so let us assume that $\Pi_i(L)$ and $\Pi_i(L')$ are homotopic. For every $i'\in I'$ such that there is no $i\in I$ with $\gamma'_{i'}=\gamma_i$, add a ``formal'' curve to $L$ which is supported on $\gamma'_{i'}$ and which has a 0-dimensional local system. Do the same for~$L'$. Note that this changes neither $\LagrangianFH(L,L'')$ nor $\LagrangianFH(L',L'')$ for any curve $L''$ with local system. So by allowing 0-dimensional local systems, we may assume without loss of generality that there exists a bijection $\iota\co I\rightarrow I'$ such that $\gamma_i=\gamma'_{\iota(i)}$. Let us assume for simplicity that $I=I'$ and $\iota$ is the identity. Let us fix some $j\in I$ and let $p$ and $p'$ be the minimal polynomials of the matrices $X_j$ and $X'_{j}$, respectively. Then for $N>\deg p+\deg p'$, let \[ f_N(x):=(x^{N-\deg p-\deg p'}+1) \cdot p(x)\cdot p'(x). \] Note that since $f_N$ has a non-zero constant term, its companion matrix is invertible. So $L''=(\gamma_j, X_{f_N})$ is a well-defined curve with local system, which we can use as a ``test curve''. By Theorem~\ref{thm:PairingFormula} and Lemma~\ref{lem:reformulationofparalleldimensioncount}, the dimensions of the morphism spaces from $L$ and $L'$ to $L''$ are equal to \[ \left(\sum_{i\in I}\#\gamma_i\cap\gamma_j\cdot \dim X_{i}\right)\cdot N+2\dim X_j \] and \[ \left(\sum_{i\in I}\#\gamma_i\cap\gamma_j\cdot \dim X'_{i}\right) \cdot N+2\dim X'_{j}, \] respectively. Here, the second summands arise as follows: both $p$ and $p'$ divide $f_N$, so $f_N(X_j)=0=f_N(X'_{j})$, and hence the kernels from Lemma~\ref{lem:reformulationofparalleldimensioncount} have the maximal dimensions $\dim X_j$ and $\dim X'_{j}$.
By considering these two terms as linear functions in $N$, we see that they coincide iff their coefficients coincide. Hence, in particular $\dim X_j=\dim X'_{j}$. So it only remains to show that $X_j$ and $X'_{j}$ are similar for all $j\in I$. For this, let us fix some $j\in I$ and assume that both $X_j$ and $X'_{j}$ are in Frobenius normal form defined by polynomials $f_1,\dots, f_r$ and $f'_1,\dots, f'_{r'}$ such that $f_{l+1}\vert f_{l}$ and $f'_{l'+1}\vert f'_{l'}$ for all $l=1,\dots,r-1$ and $l'=1,\dots,r'-1$. Then $X_j$ and $X'_{j}$ are similar iff $r=r'$ and $f_l=f'_l$ for all $l=1,\dots,r$. Suppose this is not the case. Then there exists some minimal $m\leq\min(r,r')$ such that $f_m\neq f'_m$. Assume without loss of generality that $f'_m\not\vert f_m$ and consider the ``test curve'' given by $(\gamma_j,X_{f_m})$. Let $N=\deg f_m$. Then the spaces of morphisms from $L$ and $L'$ to $(\gamma_j,X_{f_m})$ have dimension \begin{equation}\label{eqn:pairingdim3} \left(\sum_{i\in I}\#\gamma_i\cap\gamma_{j}\cdot \dim X_{i}\right)\cdot N+ 2\left( \sum_{l=1}^{m-1}\dim\ker(f_m(X_{f_l}))+ \sum_{l=m}^{r}\dim \ker(f_m(X_{f_l})) \right) \end{equation} and \begin{equation}\label{eqn:pairingdim4} \left(\sum_{i\in I}\#\gamma_{i}\cap\gamma_{j}\cdot \dim X'_{i}\right)\cdot N+ 2\left( \sum_{l=1}^{m-1}\dim\ker(f_m(X_{f'_l}))+ \sum_{l=m}^{r'}\dim \ker(f_m(X_{f'_l})) \right), \end{equation} respectively. The first sums coincide by the results that we have already established. The second sums agree by minimality of $m$. The summands in the third sum of \eqref{eqn:pairingdim3} are equal to $\dim X_{f_l}=\deg f_l$, since $f_l\vert f_m$ for $l\geq m$. Now, \[\sum_{l=1}^{r}\dim X_{f_l}=\dim X_j=\dim X'_{j}=\sum_{l=1}^{r'}\dim X_{f'_l}.\] By minimality of $m$, we obtain \[\sum_{l=m}^{r}\dim X_{f_l}=\sum_{l=m}^{r'}\dim X_{f'_l}.\] Hence, the third sum in~\eqref{eqn:pairingdim4} is at most as large as the third sum in~\eqref{eqn:pairingdim3}. However, since $f'_m\not\vert f_m$, we have $f_m(X_{f'_m})\neq 0$, so \[\dim \ker(f_m(X_{f'_m}))<\dim X_{f'_m}.\] Thus the inequality between the third sums is in fact strict, which contradicts the equality of the dimensions of the two morphism spaces. \end{proof} \section{Pairing 4-ended tangles}\label{sec:Pairing} In this section, we prove a glueing formula for $\CFTd$: given the peculiar modules of two 4-ended tangles $T_1$ and $T_2$, we compute the Heegaard Floer homology of the link $L$ obtained by glueing $T_1$ to $T_2$, up to at most one stabilisation. The proof is essentially a calculation of a bordered sutured type~AA bimodule. So let us start by recalling Zarev's Heegaard Floer theory for bordered sutured manifolds. \subsection{Review of bordered sutured Heegaard Floer theory} We will assume some familiarity with \cite{Zarev,ZarevJoining,ZarevThesis} and only give a short review of the basic geometric objects involved. \begin{definition} An \textbf{arc diagram} $\mathcal{Z}$ is a triple $(Z, \mathbf{a},M)$, where $Z$ is a (possibly empty) set of oriented line segments, $\mathbf{a}$ a set consisting of an even number of points on $Z$, and $M$ a matching of the points in $\mathbf{a}$. The \textbf{graph $G(\mathcal{Z})$ of an arc diagram} $\mathcal{Z}$ is the graph obtained from the line segments $Z$ by adding an edge between matched points in $\mathbf{a}$. Given an arc diagram $\mathcal{Z}$, $-\mathcal{Z}$ is defined to be the same arc diagram, except that the orientation of the line segments is reversed.
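For instance, one can take $Z$ to be a single oriented line segment, $\mathbf{a}=\{a_1,a_2,a_3,a_4\}$ four points on it, labelled in order, and $M$ the matching $\{a_1\sim a_3,\ a_2\sim a_4\}$. The graph $G(\mathcal{Z})$ is then the segment together with two interleaved edges, and the surface parametrized by this arc diagram is a once-punctured torus; cf.~the genus-1 pointed matched circle in bordered Heegaard Floer theory.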
\end{definition} \begin{definition} A \textbf{bordered sutured manifold with $\alpha$- and $\beta$-arcs} is a tuple \[(Y,\Gamma, \mathcal{Z}_\alpha,\phi_\alpha, \mathcal{Z}_\beta,\phi_\beta),\] where \begin{itemize} \item $Y$ is a sutured manifold with sutures $\Gamma$; more precisely, $Y$ is an oriented manifold and $\Gamma\subset\partial Y$ are embedded oriented simple closed curves, dividing $\partial Y$ into two oriented open surfaces-with-boundary $R_-$ and $R_+$ such that $\partial R_-$ is equal to $\Gamma$ as embedded oriented 1-manifolds; \item $\mathcal{Z}_\alpha=(Z_\alpha, \mathbf{a}_\alpha,M_\alpha)$ and $\mathcal{Z}_\beta=(Z_\beta, \mathbf{a}_\beta,M_\beta)$ are arc diagrams; \item $\phi_\alpha$ is an embedding of $G(\mathcal{Z}_\alpha)$ into the closure of $R_-$ such that $\phi_\alpha(Z_\alpha)\subset\Gamma$; \item $\phi_\beta$ is an embedding of $G(\mathcal{Z}_\beta)$ into the closure of $R_+$ such that $\phi_\beta(Z_\beta)\subset\Gamma$ and $\phi_\alpha(\mathcal{Z}_\alpha)\cap\phi_\beta(\mathcal{Z}_\beta)=\emptyset$; \end{itemize} such that the map \begin{equation}\label{eqn:homlinind} \pi_0(\Gamma\smallsetminus (\phi_\alpha(Z_\alpha)\cup \phi_\beta(Z_\beta)))\rightarrow \pi_0(\partial Y\smallsetminus (\im(\phi_\alpha)\cup \im(\phi_\beta))) \end{equation} is surjective. \end{definition} \begin{remark} Condition (\ref{eqn:homlinind}) is called \textbf{homological linear independence}. If we drop this condition, Zarev's invariants fail to be well-defined in general. Note that, unlike Zarev, we allow the sutured surfaces of bordered sutured manifolds to be degenerate in the sense that the surface obtained by surgery along all edges between matched points may contain closed components. This allows us to consider more general bordered sutured manifolds. If we restrict to non-degenerate sutured surfaces, homological linear independence is automatically satisfied, see \cite[Proposition~3.6]{Zarev}. \end{remark} \begin{definition}\label{def:HDforBorderedSuturedMfdls} A \textbf{Heegaard diagram of a bordered sutured manifold} is obtained from a Heegaard diagram of the underlying sutured manifold by adding the graphs of the arc diagrams to it. To be more precise, consider a Heegaard diagram of the underlying sutured manifold. Then we can embed the graph $G(\mathcal{Z}_\alpha)$ into $R_-$ in such a way that it misses the 2-handles $D^1\times D^2$ corresponding to the $\alpha$-curves, simply by sliding it off $S^0\times D^2\subset R_-$. This gives us an embedding of $G(\mathcal{Z}_\alpha)$ into the Heegaard surface such that its image does not intersect the $\alpha$-curves. We view the images of the edges connecting points in $\mathbf{a}$ as $\alpha$-arcs. We proceed similarly for $G(\mathcal{Z}_\beta)$ in $R_+$ and the $\beta$-curves. The image of $Z$ lies on the boundary of the Heegaard diagram, ie the sutures, which we usually draw in \textcolor{darkgreen}{green}. We put a marked point, a \textbf{basepoint}, in every open component of the boundary minus the image of $Z$. \end{definition} Given an arc diagram $\mathcal{Z}$, Zarev defines a moving strands algebra $\mathcal{A}(\mathcal{Z})$, and given a bordered sutured Heegaard diagram, Zarev defines various bimodules over the strands algebras corresponding to its arc diagrams. Each arc diagram can either play the role of a type~D or a type~A side of the bimodule; in fact, this is true for each connected component of an arc diagram, in which case one obtains multimodules.
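Schematically -- and suppressing the boundedness conditions needed for the sums to be finite -- if $M$ is a type~A structure with actions $(m_{k+1})_{k\geq 0}$ and $N$ is a type~D structure with structure map $\delta^1$ over the same algebra, the differential on the box tensor product $M\boxtimes N$ takes the form \[ \partial^{\boxtimes}(x\otimes y)=\sum_{k\geq 0}(m_{k+1}\otimes\id)\big(x\otimes\delta^{k}(y)\big), \] where $\delta^k$ denotes the $k$-fold iterate of $\delta^1$.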
For a discussion of type~A and~D structures as algebraic objects, we refer the reader to section~\ref{sec:AlgStructFromGDCats}, in particular Examples~\ref{exa:HighBrowDefTypeDoverI} and~\ref{exa:HighBrowDefTypeAoverI}. A central result of Zarev's work is a glueing theorem, which is phrased in terms of the $\boxtimes$-tensor product (see Definition~\ref{def:PairingTypeDandA}). The following theorem summarizes his theory for bimodules; however, it holds for general multimodules as well. \begin{theorem}[{\cite[Theorem~3.10]{Zarev} or \cite[Theorem~12.3.2]{ZarevThesis}}]\label{thm:GlueingZarev} Let \(Y\) be a bordered sutured manifold, bordered by two arc diagrams \(-\mathcal{Z}_1\) and \(\mathcal{Z}_2\). Then there are bimodules, well defined up to homotopy equivalence: \begin{align*} _{\mathcal{A}(\mathcal{Z}_1)} \BSAA(Y) _{\mathcal{A}(\mathcal{Z}_2)} &&& ^{\mathcal{A}(\mathcal{Z}_1)} \BSDA(Y) _{\mathcal{A}(\mathcal{Z}_2)}\\ _{\mathcal{A}(\mathcal{Z}_1)} \BSAD(Y) ^{\mathcal{A}(\mathcal{Z}_2)} &&& ^{\mathcal{A}(\mathcal{Z}_1)} \BSDD(Y) ^{\mathcal{A}(\mathcal{Z}_2)} \end{align*} Let \(Y_1\) and \(Y_2\) be two such manifolds, bordered by \(-\mathcal{Z}_1\) and \(\mathcal{Z}_2\), and \(-\mathcal{Z}_2\) and \(\mathcal{Z}_3\), respectively, and let \(Y_1\cup_{\mathcal{Z}_2} Y_2\) be the 3-manifold obtained by glueing \(Y_1\) and \(Y_2\) together along tubular neighbourhoods of the images of \(\mathcal{Z}_2\) on \(\partial Y_1\), respectively \(-\mathcal{Z}_2\) on \(\partial Y_2\). Then there are homotopy equivalences \begin{align*} _{\mathcal{A}(\mathcal{Z}_1)} \BSAA(Y_1\cup_{\mathcal{Z}_2} Y_2) _{\mathcal{A}(\mathcal{Z}_3)} & \cong _{\mathcal{A}(\mathcal{Z}_1)} \BSAA(Y_1) _{\mathcal{A}(\mathcal{Z}_2)} \boxtimes ^{\mathcal{A}(\mathcal{Z}_2)} \BSDA(Y_2) _{\mathcal{A}(\mathcal{Z}_3)} ,\\ ~^{\mathcal{A}(\mathcal{Z}_1)} \BSDA(Y_1\cup_{\mathcal{Z}_2} Y_2) _{\mathcal{A}(\mathcal{Z}_3)} & \cong ~^{\mathcal{A}(\mathcal{Z}_1)} \BSDD(Y_1) ^{\mathcal{A}(\mathcal{Z}_2)} \boxtimes _{\mathcal{A}(\mathcal{Z}_2)} \BSAA(Y_2) _{\mathcal{A}(\mathcal{Z}_3)}, \end{align*} etc. Any combination of bimodules for \(Y_1\) and \(Y_2\) can be used, where one is type~A for \(\mathcal{A}(\mathcal{Z}_2)\) and the other is type~D for \(\mathcal{A}(\mathcal{Z}_2)\). \end{theorem} \subsection{A first glueing formula} \begin{definition}\label{def:tanglepairing} Given two 4-ended tangles $T_1$ and $T_2$, let $L(T_1,T_2)$ be the link obtained by glueing $T_1$ to $T_2$ as shown in Figure~\ref{fig:glueing2tangles1}. Equivalently, $L(T_1,T_2)$ is obtained by glueing \reflectbox{$T_1$} to $T_2$ as shown in Figure~\ref{fig:glueing2tangles2}, where \reflectbox{$T_1$} is the tangle obtained from $T_1$ by rotating it about a vertical axis (along with the parametrization of the boundary). Note that $L(T_1,T_2)=L(T_2,T_1)$, which can be seen by rotating the link in Figure~\ref{fig:glueing2tangles2} about the vertical axis by $\pi$.
\end{definition} \begin{figure}[ht] \centering \psset{unit=0.2} \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}[showgrid=false](-11,-7)(11,7) \rput{-45}(0,0){ \pscircle[linecolor=lightgray](4,4){2.5} \pscircle[linecolor=lightgray](-4,-4){2.5} \psline[linecolor=white,linewidth=\stringwhite](1,-4)(-2,-4) \psline[linewidth=\stringwidth,linecolor=gray](1,-4)(-2,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,1)(-4,-2) \psline[linewidth=\stringwidth,linecolor=gray](-4,1)(-4,-2) \psline[linecolor=white,linewidth=\stringwhite](-6,-4)(-9,-4) \psline[linewidth=\stringwidth,linecolor=gray](-6,-4)(-9,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,-6)(-4,-9) \psline[linewidth=\stringwidth,linecolor=gray](-4,-6)(-4,-9) \psline[linecolor=white,linewidth=\stringwhite](-1,4)(2,4) \psline[linewidth=\stringwidth,linecolor=gray](-1,4)(2,4) \psline[linecolor=white,linewidth=\stringwhite](4,-1)(4,2) \psline[linewidth=\stringwidth,linecolor=gray](4,-1)(4,2) \psline[linecolor=white,linewidth=\stringwhite](6,4)(9,4) \psline[linewidth=\stringwidth,linecolor=gray](6,4)(9,4) \psline[linecolor=white,linewidth=\stringwhite](4,6)(4,9) \psline[linewidth=\stringwidth,linecolor=gray](4,6)(4,9) \pscircle[linestyle=dotted,linewidth=\stringwidth](4,4){5} \pscircle[linestyle=dotted,linewidth=\stringwidth](-4,-4){5} \psecurve[linewidth=\stringwidth]{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve[linewidth=\stringwidth]{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve[linewidth=\stringwidth]{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwidth]{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) \rput{45}(-4,-4){$T_1$} \rput{45}(4,4){$T_2$} \rput{45}(1.5,1.5){$a$} \rput{45}(6.5,1.5){$b$} \rput{45}(6.5,6.5){$c$} \rput{45}(1.5,6.5){$d$} \rput{45}(-6.5,-6.5){$a$} \rput{45}(-1.5,-6.5){$b$} \rput{45}(-1.5,-1.5){$c$} \rput{45}(-6.5,-1.5){$d$} } \end{pspicture} \caption{}\label{fig:glueing2tangles1} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}[showgrid=false](-11,-7)(11,7) \rput{-45}(0,0){ \pscircle[linecolor=lightgray](4,4){2.5} \pscircle[linecolor=lightgray](-4,-4){2.5} \psline[linecolor=white,linewidth=\stringwhite](1,-4)(-2,-4) \psline[linewidth=\stringwidth,linecolor=gray](1,-4)(-2,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,1)(-4,-2) \psline[linewidth=\stringwidth,linecolor=gray](-4,1)(-4,-2) \psline[linecolor=white,linewidth=\stringwhite](-6,-4)(-9,-4) \psline[linewidth=\stringwidth,linecolor=gray](-6,-4)(-9,-4) \psline[linecolor=white,linewidth=\stringwhite](-4,-6)(-4,-9) \psline[linewidth=\stringwidth,linecolor=gray](-4,-6)(-4,-9) \psline[linecolor=white,linewidth=\stringwhite](-1,4)(2,4) \psline[linewidth=\stringwidth,linecolor=gray](-1,4)(2,4) \psline[linecolor=white,linewidth=\stringwhite](4,-1)(4,2) \psline[linewidth=\stringwidth,linecolor=gray](4,-1)(4,2) \psline[linecolor=white,linewidth=\stringwhite](6,4)(9,4) \psline[linewidth=\stringwidth,linecolor=gray](6,4)(9,4) \psline[linecolor=white,linewidth=\stringwhite](4,6)(4,9) \psline[linewidth=\stringwidth,linecolor=gray](4,6)(4,9) \pscircle[linestyle=dotted,linewidth=\stringwidth](4,4){5} \pscircle[linestyle=dotted,linewidth=\stringwidth](-4,-4){5} \psecurve[linewidth=\stringwidth]{C-C}(-2,-1)(1,-4)(4,-1)(1,2) \psecurve[linewidth=\stringwidth]{C-C}(2,1)(-1,4)(-4,1)(-1,-2) 
\psecurve[linewidth=\stringwidth]{C-C}(3,8)(4,9)(3,10)(-4.3,4.3)(-10,-3)(-9,-4)(-8,-3) \psecurve[linewidth=\stringwidth]{C-C}(-3,-8)(-4,-9)(-3,-10)(4.3,-4.3)(10,3)(9,4)(8,3) \rput{45}(4,4){$T_2$} \rput{45}(-4,-4){\reflectbox{$T_1$}} \rput{45}(-1.5,-1.5){\reflectbox{$a$}} \rput{45}(-6.5,-1.5){\reflectbox{$d$}} \rput{45}(-6.5,-6.5){\reflectbox{$c$}} \rput{45}(-1.5,-6.5){\reflectbox{$b$}} \rput{45}(6.5,6.5){$c$} \rput{45}(1.5,6.5){$d$} \rput{45}(1.5,1.5){$a$} \rput{45}(6.5,1.5){$b$} } \end{pspicture} \caption{}\label{fig:glueing2tangles2} \end{subfigure} \caption{The link obtained by glueing two 4-ended tangles together along matching sites.}\label{fig:glueing2tangles} \end{figure} \begin{theorem}[(Glueing Theorem, version 1)]\label{thm:CFTdGeneralGlueing} Let \(T_1\) and \(T_2\) be two 4-ended tangles and \(L=L(T_1,T_2)\). Let \(\mathcal{P}\) be the strictly unital left-left type~AA structure over \((\Ad,\Ad)\) defined in Figure~\ref{fig:GlueingTypeAAstructure}, where the Alexander grading of \(\Ad\) is given by the ordered matching for \(T_2\). Then \[\CFL(L)\otimes V^{i}\cong\CFTd(\rr(T_1))\boxtimes\,\mathcal{P}\boxtimes\,\CFTd(T_2),\] where \(V\) is the 2-dimensional vector space whose two generators are supported in Alexander gradings \(t\) and~\(t^{-1}\), respectively, and in identical \(\delta\)-gradings, and where \(i=\vert T_1\vert+\vert T_2\vert-\vert L\vert-2\in\{0,1\}\). \end{theorem} \begin{remark} Let us discuss the type~AA structure $\mathcal{P}$ in Figure~\ref{fig:GlueingTypeAAstructure} in more detail. First of all, the identity action is implicit. Secondly, the idempotent of a generator ${\red x}{\blue y}$ is $\iota_x$ in the first component and $\iota_y$ in the second. Likewise, the algebra elements in the first and second components are coloured {\red red} and {\blue blue}, respectively. This is the same convention that we use in the proof of Theorem~\ref{thm:CFTdGeneralGlueing} below, where we identify $\mathcal{P}$ with the bordered sutured type~AA structure $\mathcal{P}'$ illustrated in Figure~\ref{fig:GlueingDomains}. This colour convention foreshadows Theorem~\ref{thm:CFTdGlueingAsMorphism}, where we will interpret the peculiar module of $\rr(T_1)$ (or more precisely $\mr(T_1)$) as an ${\red\alpha}$-curve, the one for $T_2$ as a ${\blue\beta}$-curve and the link Floer homology of $L(T_1,T_2)$ in terms of the Lagrangian intersection Floer homology of those two curves. The $\delta$- and Alexander gradings of each generator of $\mathcal{P}$ are specified by the exponents of~$\delta$ and $t$, where we write $$t^{(a_1,a_2)}= \begin{cases*} t_1^{a_1}t_2^{a_2} & \text{if $\vert T_1\vert+\vert T_2\vert-\vert L\vert-2=0$}\\ t^{a_1+a_2} & \text{if $\vert T_1\vert+\vert T_2\vert-\vert L\vert-2=1$} \end{cases*} $$ just as in the definition of the functor $\Omega$ for Proposition~\ref{prop:LazyClosing}. The gradings on $\mathcal{P}$ are defined in such a way that pairing with any two peculiar modules gives a chain complex whose differential preserves the Alexander grading and increases the $\delta$-grading by 1. \end{remark} \begin{example} The functor $\Omega$, interpreted as a type~A structure, can be seen as a special case of this glueing formula by taking $T_1$ to be the trivial tangle from Figure~\ref{fig:CFTdForSomeRatTangles}.
\end{example} \begin{figure}[b] \centering \[ \begin{tikzcd}[row sep=1.9cm, column sep=2.5cm] & t^{A(q_2)}\delta^{\frac{1}{2}}\textcolor{red}{a}\blue b \arrow[leftarrow,bend right=20]{ld}[description]{\qn{2}} & t^{-A(q_4)}\delta^{\frac{1}{2}}\textcolor{red}{d}\blue c \arrow[leftarrow,bend right=20]{l}[description]{\qp{3}{1}} \arrow[leftarrow,bend right=10]{lld}[description]{\qp{32}{1}} \arrow[leftarrow,pos=0.35,bend right=6]{lldd}[description]{\qp{3}{12}} \\ t^{(0,0)}\delta^0\textcolor{red}{a}\blue a &&& t^{(0,0)}\delta^1\textcolor{red}{d}\blue d \arrow[leftarrow,bend right=20]{lu}[description]{\qn{4}} \arrow[leftarrow,bend right=10]{llu}[description]{\qp{43}{1}} \arrow[leftarrow,bend right=3]{lll}[description]{\pq{1}{432}+\qp{432}{1}} \arrow[leftarrow,bend right=7]{llld}[description]{\pq{12}{43}+\qp{43}{12}} \arrow[leftarrow,pos=0.65,bend left=6]{lldd}[description]{\pq{1}{43}} \\ t^{(0,0)}\delta^0\textcolor{red}{b}\blue b \arrow[pos=0.55,bend left=11.5]{ruu}[description]{\np{2}} \arrow[bend right=10]{rrd}[description]{\pq{12}{3}} &&& t^{(0,0)}\delta^{1}\textcolor{red}{c}\blue c \arrow[leftarrow,pos=0.55,bend right=11.5]{luu}[description]{\np{4}} \arrow[leftarrow,pos=0.65,bend right=6]{lluu}[description]{\qp{3}{41}} \arrow[leftarrow,bend left=7]{lllu}[description]{\pq{41}{32}+\qp{32}{41}} \arrow[leftarrow,bend left=3]{lll}[description]{\pq{412}{3}+\qp{3}{412}} \arrow[leftarrow,bend left=10]{lld}[description]{\pq{41}{3}} \arrow[leftarrow,bend left=20]{ld}[description]{\pn{4}} \\ & t^{A(p_2)}\delta^{\frac{1}{2}}\textcolor{red}{b}\blue a \arrow[leftarrow,pos=0.41,bend left=11.5]{luu}[description]{\nq{2}} \arrow[leftarrow,bend left=20]{lu}[description]{\pn{2}} & t^{-A(p_4)}\delta^{\frac{1}{2}}\textcolor{red}{c}\blue d \arrow[leftarrow,pos=0.35,bend left=6]{lluu}[description]{\pq{1}{32}} \arrow[leftarrow,bend left=20]{l}[description]{\pq{1}{3}} \arrow[pos=0.41,bend right=11.5]{ruu}[description]{\nq{4}} \end{tikzcd}\] \caption{The type~AA structure $\mathcal{P}$ for Theorem~\ref{thm:CFTdGeneralGlueing}. }\label{fig:GlueingTypeAAstructure} \end{figure} \begin{figure}[hp!] 
\centering \psset{unit=0.3} \bigskip \bigskip \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}(-10,-10)(10,10) \pscircle(0,0){10} \pscircle[linecolor=lightgray](0,0){4} \psline[linecolor=white,linewidth=\stringwhite](3;45)(7;45) \psline[linecolor=white,linewidth=\stringwhite](3;135)(7;135) \psline[linecolor=white,linewidth=\stringwhite](3;-135)(7;-135) \psline[linecolor=white,linewidth=\stringwhite](3;-45)(7;-45) \psline[linecolor=gray,linewidth=\stringwidth](3;45)(4.8;45) \psline[linecolor=gray,linewidth=\stringwidth](5.2;45)(5.9;45) \psline[linecolor=gray,linewidth=\stringwidth](3;135)(6.3;135) \psline[linecolor=gray,linewidth=\stringwidth](3;-135)(6.3;-135) \psline[linecolor=gray,linewidth=\stringwidth](3;-45)(5.9;-45) \pscircle[linecolor=blue](0,0){7} \pscurve[linecolor=blue](7;-135)(8;-130)(9;-135)(8;-140)(7;-135) \pscurve[linecolor=blue](7;45)(6;36)(5;45)(6;54)(7;45) {\psset{fillstyle=solid,fillcolor=white} \rput{45}(7;45){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} \rput{135}(7;135){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} \rput{-135}(7;-135){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} \rput{-45}(7;-45){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} } \psline[linecolor=darkgreen](6.9;45)(6.1;45) \psline[linecolor=darkgreen](7.1;-135)(7.9;-135) \psline[linecolor=darkgreen](7.1;135)(7.9;135) \psline[linecolor=darkgreen](6.9;-45)(6.1;-45) \rput(-5.5,0){\blue $A$} \rput(0,-5.5){\blue $B$} \rput(5.5,0){\blue $C$} \rput(0,5.5){\blue $D$} \rput(0,0){\textcolor{darkgray}{$T_1$}} \end{pspicture} \caption{The parametrization on $\partial X_{T_1}$.}\label{fig:GlueingTangleT1} \end{subfigure} \quad \begin{subfigure}[b]{0.45\textwidth}\centering \begin{pspicture}(-10,-10)(10,10) \pscircle(0,0){10} \pscircle[linecolor=lightgray](0,0){4} \psline[linecolor=white,linewidth=\stringwhite](3;45)(7;45) \psline[linecolor=white,linewidth=\stringwhite](3;135)(7;135) \psline[linecolor=white,linewidth=\stringwhite](3;-135)(7;-135) \psline[linecolor=white,linewidth=\stringwhite](3;-45)(7;-45) \psline[linecolor=gray,linewidth=\stringwidth](3;45)(4.8;45) \psline[linecolor=gray,linewidth=\stringwidth](5.2;45)(5.9;45) \psline[linecolor=gray,linewidth=\stringwidth](3;135)(6.3;135) \psline[linecolor=gray,linewidth=\stringwidth](3;-135)(6.3;-135) \psline[linecolor=gray,linewidth=\stringwidth](3;-45)(5.9;-45) \pscircle[linecolor=red](0,0){7} \pscurve[linecolor=red](7;-135)(8;-130)(9;-135)(8;-140)(7;-135) \pscurve[linecolor=red](7;45)(6;36)(5;45)(6;54)(7;45) {\psset{fillstyle=solid,fillcolor=white} \rput{45}(7;45){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} \rput{135}(7;135){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} \rput{-135}(7;-135){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} \rput{-45}(7;-45){\psellipse[linecolor=darkgreen](0,0)(0.5,1)} } \psline[linecolor=darkgreen](6.9;45)(6.1;45) \psline[linecolor=darkgreen](7.1;-135)(7.9;-135) \psline[linecolor=darkgreen](7.1;135)(7.9;135) \psline[linecolor=darkgreen](6.9;-45)(6.1;-45) \rput(-5.5,0){\red $a$} \rput(0,-5.5){\red $b$} \rput(5.5,0){\red $c$} \rput(0,5.5){\red $d$} \rput(0,0){\textcolor{darkgray}{$T_2$}} \end{pspicture} \caption{The parametrization on $\partial X_{T_2}$.}\label{fig:GlueingTangleT2} \end{subfigure} \\ \bigskip \bigskip \begin{subfigure}[b]{0.9\textwidth}\centering \psset{unit=1.8} \begin{pspicture}(-10.2,-10.2)(10.2,10.4) \pscircle(0,0){10} \pscurve[linecolor=red](3.5;-135)(4.5;-155)(4.5;-115)(3.5;-135) \pscustom*[linecolor=white,linewidth=0pt]{ \psline(4.05;-125.6)(8.1;-125.2) \psline(8.1;-144.8)(4.05;-144.4) } 
\pscurve[linecolor=red,linestyle=dotted,dotsep=1pt](3.5;-135)(4.5;-155)(4.5;-115)(3.5;-135) \pscurve[linecolor=red](4.3;33)(3.5;30)(3.5;60)(4.3;57) \pscurve[linecolor=red](4;140)(3.4;150)(3.4;-150)(4;-140) \pscurve[linecolor=red](4;-130)(3;-110)(4.5;-70)(4.5;-45) \psarcn[linecolor=red](0,0){4.5}{45}{-45} \pscurve[linecolor=red](4;130)(3;110)(4.5;70)(4.5;45) \pscurve[linecolor=blue](7.5;45)(8.5;65)(8.5;25)(7.5;45) \psarc[linecolor=blue](0,0){8.5}{135}{-135} \pscurve[linecolor=blue](8.5;-130)(8.3;-110)(7;-60)(7.5;-45) \pscurve[linecolor=blue](7.5;45)(7;30)(7;-30)(7.5;-45) \pscurve[linecolor=blue](8.5;130)(8.3;110)(7;60)(7.5;45) \pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](8;45)(8.1;54.8)(7.7;60)(7.7;30)(8.1;35.2)(8;45) \psrotate(0,0){90}{\pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](8;45)(8.1;54.8)(8.5;60)(8.5;30)(8.1;35.2)(8;45)} \psrotate(0,0){180}{\pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](8;45)(8.1;54.8)(8.5;60)(8.5;30)(8.1;35.2)(8;45)} \psrotate(0,0){-90}{\pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](8;45)(8.1;54.8)(7.7;60)(7.7;30)(8.1;35.2)(8;45)} \pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](4;45)(4.05;54.4)(4.5;60)(4.5;30)(4.05;35.6)(4;45) \pscustom*[linecolor=white,linewidth=0pt]{ \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;54.4)(8.1;54.8) \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](8.1;35.2)(4.05;35.6) } \pscurve[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4;45)(4.05;54.4)(4.5;60)(4.5;30)(4.05;35.6)(4;45) \psrotate(0,0){90}{ \pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](4;45)(4.05;54.4)(3.7;60)(3.7;30)(4.05;35.6)(4;45) } \psrotate(0,0){180}{ \pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](4;45)(4.05;54.4)(3.7;60)(3.7;30)(4.05;35.6)(4;45) } \psrotate(0,0){-90}{ \pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](4;45)(4.05;54.4)(4.5;60)(4.5;30)(4.05;35.6)(4;45) \pscustom*[linecolor=white,linewidth=0pt]{ \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;54.4)(8.1;54.8) \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](8.1;35.2)(4.05;35.6) } \pscurve[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4;45)(4.05;54.4)(4.5;60)(4.5;30)(4.05;35.6)(4;45) } \pscircle[linecolor=lightgray](0,0){6} \rput{45}(4;45){\psellipse[linecolor=lightgray,fillstyle=solid,fillcolor=white](0,0)(0.25,0.7)} \rput{-45}(4;135){\psellipse[linecolor=lightgray,fillstyle=solid,fillcolor=white](0,0)(0.25,0.7)} \rput{45}(4;-135){\psellipse[linecolor=lightgray,fillstyle=solid,fillcolor=white](0,0)(0.25,0.7)} \rput{-45}(4;-45){\psellipse[linecolor=lightgray,fillstyle=solid,fillcolor=white](0,0)(0.25,0.7)} \psline[linecolor=white,linewidth=4pt](4.5;54.6)(5;54.6) \psline[linecolor=white,linewidth=4pt](4.5;35.4)(5;35.4) \psrotate(0,0){-90}{ \psline[linecolor=white,linewidth=4pt](4.5;54.6)(5;54.6) \psline[linecolor=white,linewidth=4pt](4.5;35.4)(5;35.4) } \psrotate(0,0){180}{ \psline[linecolor=white,linewidth=4pt](4.5;54.6)(5;54.6) \psline[linecolor=white,linewidth=4pt](4.5;35.4)(5;35.4) } \psline[linecolor=white,linewidth=4pt](5.8;54.6)(6.2;54.6) \psline[linecolor=white,linewidth=4pt](5.8;35.4)(6.2;35.4) \psrotate(0,0){90}{ \psline[linecolor=white,linewidth=4pt](5.8;54.6)(6.2;54.6) \psline[linecolor=white,linewidth=4pt](5.8;35.4)(6.2;35.4) } \psrotate(0,0){180}{ \psline[linecolor=white,linewidth=4pt](5.8;54.6)(6.2;54.6) \psline[linecolor=white,linewidth=4pt](5.8;35.4)(6.2;35.4) } \psrotate(0,0){-90}{ 
\psline[linecolor=white,linewidth=4pt](5.8;54.6)(6.2;54.6) \psline[linecolor=white,linewidth=4pt](5.8;35.4)(6.2;35.4) } \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;54.4)(8.1;54.8) \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;35.6)(8.1;35.2) \psrotate(0,0){90}{ \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;54.4)(8.1;54.8) \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;35.6)(8.1;35.2) } \psrotate(0,0){180}{ \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;54.4)(8.1;54.8) \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;35.6)(8.1;35.2) } \psrotate(0,0){-90}{ \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;54.4)(8.1;54.8) \psline[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](4.05;35.6)(8.1;35.2) } \psline[linecolor=white,linewidth=4pt](6.7;54.6)(7;54.6) \psline[linecolor=white,linewidth=4pt](6.7;35.4)(7;35.4) \psrotate(0,0){-90}{ \psline[linecolor=white,linewidth=4pt](6.7;54.6)(7;54.6) \psline[linecolor=white,linewidth=4pt](6.7;35.4)(7;35.4) } \psrotate(0,0){180}{ \psline[linecolor=white,linewidth=4pt](6.65;54.6)(6.95;54.6) \psline[linecolor=white,linewidth=4pt](6.65;35.4)(6.95;35.4) } \pscurve[linecolor=blue](8.3;-147)(7.5;-150)(7.5;-120)(8.3;-123) \psrotate(0,0){180}{\pscurve[linecolor=darkgreen,fillstyle=solid,fillcolor=white](8;45)(8.1;54.8)(8.5;60)(8.5;30)(8.1;35.2)(8;45)} \pscurve[linecolor=darkgreen](8;45)(8.1;54.8)(7.7;60)(7.7;30)(8.1;35.2)(8;45) \psrotate(0,0){90}{\pscurve[linecolor=darkgreen](8;45)(8.1;54.8)(8.5;60)(8.5;30)(8.1;35.2)(8;45)} \psrotate(0,0){180}{\pscurve[linecolor=darkgreen](8;45)(8.1;54.8)(8.5;60)(8.5;30)(8.1;35.2)(8;45)} \psrotate(0,0){-90}{\pscurve[linecolor=darkgreen](8;45)(8.1;54.8)(7.7;60)(7.7;30)(8.1;35.2)(8;45)} \rput{45}(8;45){\psellipse[fillstyle=solid,fillcolor=white](0,0)(0.5,1.4)} \rput{-45}(8;135){\psellipse[fillstyle=solid,fillcolor=white](0,0)(0.5,1.4)} \rput{45}(8;-135){\psellipse[fillstyle=solid,fillcolor=white](0,0)(0.5,1.4)} \rput{-45}(8;-45){\psellipse[fillstyle=solid,fillcolor=white](0,0)(0.5,1.4)} \rput(-3.4,0){$\red a$} \rput(0,-4){$\red b$} \rput(4.9,0){$\red c$} \rput(0,4){$\red d$} \rput(2.5;45){$\red f$} \rput(4.7;-110){$\red e$} \rput(-9,0){$\blue A$} \rput(0,-8){$\blue B$} \rput(7,0){$\blue C$} \rput(0,8){$\blue D$} \rput(9;66){$\blue F$} \rput(7;-153){$\blue E$} \end{pspicture} \caption{The parametrization on the boundary of the thickened 4-punctured sphere $X$.}\label{fig:GlueingAAP} \end{subfigure} \caption{A decomposition of the complement of the link from Figure~\ref{fig:glueing2tangles}.}\label{fig:GlueingMflds} \end{figure} \begin{figure}[p] \centering \begin{subfigure}[b]{\textwidth} \centering \psset{unit=0.13} \begin{pspicture}(-27.5,-27.5)(27.5,27.5) \psrotate(0,0){-45}{ \pscustom*[linecolor=lightgreen]{% \pscurve[linecolor=blue](1,20)(1,18)(18,1)(20,1) } \psline[linecolor=lightgreen,linewidth=3pt](1,20)(20,1) \pscustom*[linecolor=lightgreen]{% \pscurve[linecolor=red](1,20)(5,27)(24,4)(20,1) } \pscustom*[linecolor=lightgreen]{% \pscurve[linecolor=blue](-1,-20)(-1,-18)(-18,-1)(-20,-1) } \psline[linecolor=lightgreen,linewidth=3pt](-1,-20)(-20,-1) \pscustom*[linecolor=lightgreen]{% \pscurve[linecolor=red](-1,-20)(-5,-27)(-24,-4)(-20,-1) } \pscustom*[linecolor=lightgreen]{% \pscurve[linecolor=blue](-1,20)(-1,18)(-24,4)(-20,1) \pscurve[linecolor=red](-20,1)(-18,1)(-13,25)(-5,27)(-1,20) } \psline*[linecolor=white](-15.15,10.9)(0,10)(0,30)(-20,30) \psrotate(0,0){180}{ \pscustom*[linecolor=lightgreen]{% 
\pscurve[linecolor=blue](-1,20)(-1,18)(-24,4)(-20,1) \pscurve[linecolor=red](-20,1)(-18,1)(-13,25)(-5,27)(-1,20) } \psline*[linecolor=white](-15.15,10.9)(0,10)(0,30)(-20,30) } \rput(0,20){$~$% \pscustom*[linecolor=lightgreen]{% \pscurve(-2,0)(-4,-2)(-8,3)(8,3)(4,-2)(2,0) \pscurve(2,0)(4,2)(8,-3)(-8,-3)(-4,2)(-2,0) } \pscustom*[linecolor=white]{ \pscurve(-2,0)(-4,-2)(-8,3)(8,3)(4,-2)(2,0) \pscurve(-2,0)(-4,2)(-8,-3)(8,-3)(4,2)(2,0) } } \psrotate(0,0){180}{ \rput(0,20){$~$% \pscustom*[linecolor=lightgreen]{% \pscurve(-2,0)(-4,-2)(-8,3)(8,3)(4,-2)(2,0) \pscurve(2,0)(4,2)(8,-3)(-8,-3)(-4,2)(-2,0) } \pscustom*[linecolor=white]{ \pscurve(-2,0)(-4,-2)(-8,3)(8,3)(4,-2)(2,0) \pscurve(-2,0)(-4,2)(-8,-3)(8,-3)(4,2)(2,0) } } } \pscurve[linecolor=blue](1,20)(1,18)(18,1)(20,1) \pscurve[linecolor=red](1,20)(5,27)(24,4)(20,1) \pscurve[linecolor=blue](-1,20)(-1,18)(-24,4)(-20,1) \pscurve[linecolor=red](-1,20)(-5,27)(-13,25)(-18,1)(-20,1) \psrotate(0,0){180}{ \pscurve[linecolor=red](1,20)(1,18)(18,1)(20,1) \pscurve[linecolor=blue](1,20)(5,27)(24,4)(20,1) \pscurve[linecolor=red](-1,20)(-1,18)(-24,4)(-20,1) \pscurve[linecolor=blue](-1,20)(-5,27)(-13,25)(-18,1)(-20,1) } \rput(-20,0){ \psellipse[linecolor=darkgreen,fillstyle=solid,fillcolor=white](0,0)(3,4) \psline[linecolor=darkgreen](0,-3)(0,-5) \psline[linecolor=darkgreen](0,3)(0,5) } \rput(0,-20){ \pscurve[linecolor=red](-2,0)(-4,2)(-8,-3)(8,-3)(4,2)(2,0) \pscurve[linecolor=blue](-2,0)(-4,-2)(-8,3)(8,3)(4,-2)(2,0) \psellipse[linecolor=darkgreen,fillstyle=solid,fillcolor=white](0,0)(4,3) \psline[linecolor=darkgreen](-3,0)(-5,0) \psline[linecolor=darkgreen](3,0)(5,0) } \rput(20,0){ \psellipse[linecolor=darkgreen,fillstyle=solid,fillcolor=white](0,0)(3,4) \psline[linecolor=darkgreen](0,-3)(0,-5) \psline[linecolor=darkgreen](0,3)(0,5) } \rput(0,20){ \pscurve[linecolor=red](-2,0)(-4,2)(-8,-3)(8,-3)(4,2)(2,0) \pscurve[linecolor=blue](-2,0)(-4,-2)(-8,3)(8,3)(4,-2)(2,0) \psellipse[linecolor=darkgreen,fillstyle=solid,fillcolor=white](0,0)(4,3) \psline[linecolor=darkgreen](-3,0)(-5,0) \psline[linecolor=darkgreen](3,0)(5,0) } \rput(3.1,13.85){\psdot\rput{45}{\uput{1}[-155](0,0){$C$}}} \rput(-4.85,14.4){\psdot\rput{45}{\uput{1}[-135](0,0){$D$}}} \rput(3.5,26){\psdot\rput{45}{\uput{1}[65](0,0){$c$}}} \rput(-3.67,25.95){\psdot\rput{45}{\uput{1}[45](0,0){$d$}}} \rput(-15.15,10.9){\psdot\rput{45}{\uput{1.5}[-168](0,0){$\delta$}}} \rput(-7.23,20){\psdot\rput{45}{\uput{1}[135](0,0){$\zeta_1$}}} \rput(7.23,20){\psdot\rput{45}{\uput{1}[-45](0,0){$\zeta_2$}}} \psrotate(0,0){180}{ \rput(3.1,13.85){\psdot\rput{-135}{\uput{1}[25](0,0){$a$}}} \rput(-4.85,14.4){\psdot\rput{-135}{\uput{1}[45](0,0){$b$}}} \rput(3.5,26){\psdot\rput{-135}{\uput{1}[-115](0,0){$A$}}} \rput(-3.67,25.95){\psdot\rput{-135}{\uput{1}[-135](0,0){$B$}}} \rput(-15.15,10.9){\psdot\rput{-135}{\uput{1.5}[12](0,0){$\beta$}}} \rput(-7.23,20){\psdot\rput{-135}{\uput{1}[-45](0,0){$\varepsilon_1$}}} \rput(7.23,20){\psdot\rput{-135}{\uput{1}[135](0,0){$\varepsilon_2$}}} } \rput(-8.7,-8.7){\rput{45}{\red $a$}} \rput(8.7,8.7){\rput{45}{\blue $C$}} \rput(15,21){\rput{45}{\red $c$}} \rput(-15,-21){\rput{45}{\blue $A$}} \rput(-17,5){\rput{45}{\red $d$}} \rput(17,-5){\rput{45}{\blue $B$}} \rput(22,-6){\rput{45}{\red $b$}} \rput(-22,6){\rput{45}{\blue $D$}} \rput(-0.6,12){\rput{45}{\red $f$}} \rput(-0.6,28){\rput{45}{\blue $F$}} \rput(0.6,-28){\rput{45}{\red $e$}} \rput(0.6,-12){\rput{45}{\blue $E$}} \rput(-5,16.5){\rput{45}{\red $P_4$}} \rput(0,15.5){\rput{45}{\red $Q_4$}} \rput(5,16.5){\rput{45}{\red $P_4$}} 
\rput(-5,23.5){\rput{45}{\blue $p_4$}} \rput(0,24.5){\rput{45}{\blue $q_4$}} \rput(5,23.5){\rput{45}{\blue $p_4$}} \rput(-5,-16.5){\rput{45}{\blue $q_2$}} \rput(0,-15){\rput{45}{\blue $p_2$}} \rput(5,-16.5){\rput{45}{\blue $q_2$}} \rput(-5,-23.5){\rput{45}{\red $Q_2$}} \rput(0,-24.5){\rput{45}{\red $P_2$}} \rput(5,-23.5){\rput{45}{\red $Q_2$}} \rput(-25,0){\rput{45}{\red $P_1$}} \rput(-15,0){\rput{45}{\blue $p_1$}} \rput(25,0){\rput{45}{\blue $q_3$}} \rput(15,0){\rput{45}{\red $Q_3$}} \rput(-12,16){\rput{45}{$\textcolor{white}{_\delta}\square_\delta$}} \rput(12,-16){\rput{45}{$\textcolor{white}{_\beta}\square_\beta$}} \end{pspicture} \caption{A Heegaard diagram for $X$. The orientation of the surface is such that the normal vector (determined by the right-hand rule) points out of the projection plane. }\label{fig:GlueingHD} \end{subfigure} \begin{subfigure}[b]{\textwidth} \begin{align*} {\red a}{\blue a}\co &Cc\beta\delta\varepsilon_2 & {\red b}{\blue a}\co &ACbc\delta & {\red c}{\blue c}\co &Aa\beta\delta\zeta_2 & {\red d}{\blue c}\co &ACad\beta \\ \overline{{\red a}{\blue a}}\co &BCbc\delta & {\red a}{\blue b}\co &BCac\delta & \overline{{\red c}{\blue c}}\co &ADad\beta & {\red c}{\blue d}\co &ADac\beta \\ \underline{{\red a}{\blue a}}\co &Cc\beta\delta\varepsilon_1 & {\red b}{\blue b}\co &ACac\delta & \underline{{\red c}{\blue c}}\co &Aa\beta\delta\zeta_1 & {\red d}{\blue d}\co &ACac\beta \end{align*} \caption{Generators in idempotents $\mathcal{I}'_\beta$ and $\mathcal{I}'_\alpha$ for the Heegaard diagram above. The five-letter word to the right of a colon specifies the intersection points of that generator; the two-letter word to the left of that colon denotes the name of the generator which indicates the corresponding idempotents. Namely, the idempotents of a generator ${\red x}{\blue y}$ are $\iota_x\in\mathcal{I}'_\beta$ in the first component and $\iota_y\in\mathcal{I}'_\alpha$ in the second. 
}\label{fig:GlueingGeneratorList} \end{subfigure} \begin{subfigure}[b]{\textwidth}\centering \[ \begin{tikzcd}[row sep=1.5cm, column sep=3.5cm] & \textcolor{red}{a}\blue b \arrow[leftarrow,bend right=20]{ld}[description]{\{{\blue q_{2}},\square_\beta\}} \arrow[leftarrow,pos=0.45,bend right=11.5]{ldd}[description]{\{{\red P_{2}}\}} \arrow[dashed,pos=0.35,bend left=6]{rrdd}[description]{\{{\red P_4},{\red P_{1}}/{\blue q_{3}},\square_\delta\}} & \textcolor{red}{d}\blue c \arrow[leftarrow,bend right=20]{l}[description]{\{{\red P_{1}}/{\blue q_{3}}\}} \arrow[dashed,leftarrow,bend right=10]{lld}[description]{\{{\red P_{1}}/{\blue q_3},{\blue q_{2}},\square_\beta\}} \arrow[dashed,leftarrow,pos=0.35,bend right=6]{lldd}[description]{\{{\red P_2},{\red P_{1}}/{\blue q_{3}}\}} \\ \textcolor{red}{a}\blue a \arrow[dashed,pos=0.65,bend right=6]{rrdd}[description]{\{{\red Q_2},{\red Q_{3}}/{\blue p_{1}},\square_\beta\}} &&& \textcolor{red}{d}\blue d \arrow[leftarrow,bend right=20]{lu}[description]{\{{\blue q_{4}}\}} \arrow[dashed,leftarrow,pos=0.53,bend right=10]{llu}[description]{\{{\red P_{1}}/{\blue q_{3}},{\blue q_4}\}} \arrow[dashed,leftarrow,bend right=1.5]{lll}[description]{\{{\red Q_2},{\red Q_4},{\red Q_{3}}/{\blue p_{1}},\square_\beta\}+\{{\red P_{1}}/{\blue q_{3}},{\blue q_2},{\blue q_4},\square_\beta\}} \arrow[dashed,leftarrow,bend right=7]{llld}[description]{\{{\red Q_4},{\red Q_{3}}/{\blue p_{1}},{\blue p_2}\}+\{{\red P_2},{\red P_{1}}/{\blue q_{3}},{\blue q_4}\}} \arrow[dashed,leftarrow,pos=0.65,bend left=6]{lldd}[description]{\{{\red Q_4},{\red Q_{3}}/{\blue p_{1}}\}} \arrow[leftarrow,pos=0.55,bend left=11.5]{ldd}[description]{\{{\red Q_4}\}} \\ \textcolor{red}{b}\blue b \arrow[dashed,bend right=10,pos=0.53]{rrd}[description]{\{{\red Q_{3}}/{\blue p_{1}},{\blue p_2}\}} &&& \textcolor{red}{c}\blue c \arrow[leftarrow,pos=0.53,bend right=11.5]{luu}[description]{\{{\red P_{4}},\square_\delta\}} \arrow[dashed,leftarrow,bend left=7]{lllu}[description]{\{{\red Q_2},{\red Q_{3}}/{\blue p_{1}},{\blue p_4},\square_\beta,\square_\delta\}+\{{\red P_4},{\red P_{1}}/{\blue q_{3}},{\blue q_2},\square_\beta,\square_\delta\}} \arrow[dashed,leftarrow,bend left=1.5]{lll}[description]{\{{\red Q_{3}}/{\blue p_{1}},{\blue p_4},{\blue p_2},\square_\delta\}+\{{\red P_2},{\red P_4},{\red P_{1}}/{\blue q_{3}},\square_\delta\}} \arrow[dashed,leftarrow,bend left=10]{lld}[description]{\{{\red Q_{3}}/{\blue p_{1}},{\blue p_4},\square_\delta\}} \arrow[leftarrow,bend left=20]{ld}[description]{\{{\blue p_{4}},\square_\delta\}} \\ & \textcolor{red}{b}\blue a \arrow[leftarrow,pos=0.47,bend left=11.5]{luu}[description]{\{{\red Q_{2}},\square_\beta\}} \arrow[leftarrow,bend left=20]{lu}[description]{\{{\blue p_{2}}\}} & \textcolor{red}{c}\blue d \arrow[leftarrow,bend left=20]{l}[description]{\{{\red Q_{3}}/{\blue p_{1}}\}} \end{tikzcd}\] \caption{The domains that contribute to the type~AA structure $\mathcal{P}'$.}\label{fig:GlueingDomains} \end{subfigure} \caption{A Heegaard diagram for the bordered sutured manifold~$X$ from Figure~\ref{fig:GlueingAAP} and some computations of generators and domains.}\label{fig:GlueingCAT} \end{figure} \begin{proof}[of Theorem~\ref{thm:CFTdGeneralGlueing}] The strategy of the proof is to glue the tangle complements $X_{T_1}$ and $X_{T_2}$ to opposite sides of a thickened 4-punctured sphere $X=I\times (S^2\smallsetminus4 D^2)$, ie along $\{0\}\times (S^2\smallsetminus4 D^2)$ and $\{1\}\times (S^2\smallsetminus4 D^2)$, respectively. 
For this, we equip $X_{T_1}$, $X_{T_2}$ and $X$ with the structures of bordered sutured manifolds specified by the arc diagrams in Figure~\ref{fig:GlueingMflds}. Note that each closed component of $T_1$ and $T_2$ carries two oppositely oriented meridional sutures. By glueing $X_{T_1}$ to the outside and $X_{T_2}$ to the inside of the thickened 4-punctured sphere $X$ as shown in Figure~\ref{fig:GlueingAAP}, we obtain the complement $X_L$ of the link $L$. In addition to those sutures on closed components of $T_1$ and $T_2$, the sutured manifold $X_L$ carries one meridional suture at each of the four places where the tangles have been glued together. Let $\mathcal{A}$ be the bordered sutured algebra corresponding to the arc diagram on $X$ consisting of $\alpha$-arcs, and let $\mathcal{I}_\alpha$ be the corresponding ring of idempotents. Let $\mathcal{I}'_\alpha$ be the subring of idempotents occupying the $\alpha$-arcs ${\red e}$ and ${\red f}$, and let $\iota_{\alpha}\co\mathcal{A}'\hookrightarrow\mathcal{A}$ be the inclusion of the subalgebra $\mathcal{A}'=\mathcal{I}'_\alpha.\mathcal{A}.\mathcal{I}'_\alpha$ of~$\mathcal{A}$. Define $\mathcal{I}_\beta$, $\mathcal{I}'_\beta$ and $\iota_{\beta}\co\mathcal{B}'\hookrightarrow\mathcal{B}$ similarly, using the arc diagram on $X$ consisting of $\beta$-arcs. Zarev's Glueing Theorem (Theorem~\ref{thm:GlueingZarev}) tells us that \begin{equation}\label{eqn:FirstGlueingProofStep1} \SFC(X_L)\cong \BSD(X_{T_1})^\mathcal{B}\boxtimes\typeA{\mathcal{B},\mathcal{A}}{\BSAA(X)}\boxtimes\BSD(X_{T_2})^\mathcal{A}. \end{equation} The left-hand side agrees with the left-hand term of the identity from Theorem~\ref{thm:CFTdGeneralGlueing}. So the goal is to identify the three tensor factors on the right-hand side of (\ref{eqn:FirstGlueingProofStep1}) with the three tensor factors in the formula of the theorem. First of all, observe that we can choose a Heegaard diagram for $X_{T_1}$ where the two $\beta$-arcs that have ends on the same suture do not intersect any $\alpha$-curve. Thus, generators of its type~D structure belong to the idempotents that occupy one of the four arcs $\blue A$, $\blue B$, $\blue C$ or $\blue D$. The same is true for $X_{T_2}$ and its $\alpha$-arcs. Moreover, the labels of the type~D structures for $X_{T_1}$ and $X_{T_2}$ are contained in $\mathcal{B}'$ and $\mathcal{A}'$, respectively. In other words, $\BSD(X_{T_1})$ and $\BSD(X_{T_2})$ lie in the images of the functors $\mathcal{F}^D_{\iota_\beta}$ and $\mathcal{F}^D_{\iota_\alpha}$ induced by the inclusions $\iota_\beta$ and $\iota_\alpha$, respectively (see Definition~\ref{def:InducedFunctors}). Thus, by the Pairing Adjunction (Theorem~\ref{thm:PairingAdjunction}), the right-hand side of (\ref{eqn:FirstGlueingProofStep1}) is equal to \begin{equation}\label{eqn:FirstGlueingProofStep2} \BSD(X_{T_1})^{\mathcal{B}'}\boxtimes\typeA{\mathcal{B}',\mathcal{A}'}{\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}(\BSAA(X))}\boxtimes\BSD(X_{T_2})^{\mathcal{A}'}, \end{equation} where $\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}$ is the functor induced by the inclusions $\iota_\beta$ and $\iota_\alpha$. So let us compute $\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}(\BSAA(X))$. A Heegaard diagram for $X$ is shown in Figure~\ref{fig:GlueingHD}, which is obtained by ``flattening'' the thickened 4-punctured sphere from Figure~\ref{fig:GlueingAAP}. The regions adjacent to a basepoint are shaded green (\textcolor{lightgreen}{$\blacksquare$}). $\alpha$- and $\beta$-arcs are labelled by ${\red a}$, ${\red b}$, ${\red c}$, ${\red d}$, ${\red e}$, ${\red f}$ and ${\blue A}$, ${\blue B}$, ${\blue C}$, ${\blue D}$, ${\blue E}$, ${\blue F}$, respectively, as in Figure~\ref{fig:GlueingAAP}.
Intersection points of these are suggestively labelled by black Greek and Roman letters which indicate which $\alpha$- and $\beta$-arcs the intersection points lie on. For example, the intersection point $\beta$ lies on both ${\red b}$ and ${\blue B}$; the same principle applies to all Greek letter labels. The intersection points labelled by Roman letters lie on exactly one of $\{{\red e, f},{\blue E, F}\}$. This makes it easy to determine the generators of $\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}(\BSAA(X))$ and their corresponding idempotents, which are shown in Figure~\ref{fig:GlueingGeneratorList}. Next, let us consider the domains of this Heegaard diagram. The two square regions with vertices $\{d,\delta,D,\zeta_1\}$ and $\{\beta,b,\varepsilon_1,B\}$ are labelled by $\square_\delta$ and $\square_\beta$, respectively. All other regions in the Heegaard diagram have at least one boundary component and are labelled by ${\red P_i}$, ${\red Q_i}$, ${\blue p_i}$ and ${\blue q_i}$. There are two domains that have two labels, namely ${\red P_1}/{\blue q_3}$ and ${\red Q_3}/{\blue p_1}$. There are four pairs of regions that have the same label. Note however, that the coefficients of any paired regions agree in any domain connecting generators in $\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}(\BSAA(X))$, ie those generators that occupy $\red e$, $\red f$, $\blue E$ and~$\blue F$. Thus, we may use these labels to describe all domains that contribute to $\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}(\BSAA(X))$. In the following, let us write domains $D$ as formal differences $D_+-D_-$ of unordered sets of regions $D_+$ and $D_-$ with $D_+\cap D_-=\emptyset$ such that \[D=\sum_{r\in D_+}r-\sum_{r\in D_-}r.\] Let us calculate some connecting domains between the generators. First of all, here are some bigons with a single boundary puncture: \begin{align*} \{{\blue p_2}\}\co &{\red b}{\blue b}\rightarrow{\red b}{\blue a}, & \{{\red P_2}\}\co &{\red b}{\blue b}\rightarrow{\red a}{\blue b},& \{{\blue q_4}\}\co &{\red d}{\blue c}\rightarrow{\red d}{\blue d}, & \{{\red Q_4}\}\co &{\red c}{\blue d}\rightarrow{\red d}{\blue d}. \end{align*} The following domains consist of two bigons, each with a single boundary puncture: \begin{align*} \{{\blue q_2},\square_\beta\}\co &{\red a}{\blue a} \rightarrow{\red a}{\blue b}, & \{{\red Q_2},\square_\beta\}\co &{\red a}{\blue a}\rightarrow {\red b}{\blue a},& \{{\blue p_4},\square_\delta\}\co &{\red c}{\blue d}\rightarrow {\red c}{\blue c}, & \{{\red P_4},\square_\delta\}\co &{\red d}{\blue c}\rightarrow {\red c}{\blue c}. \end{align*} The polygonal regions ${\red P_1}/{\blue q_3}$ and ${\red Q_3}/{\blue p_1}$ connect the following generators: \begin{align*} \{{\red P_1}/{\blue q_3}\}\co &{\red a}{\blue b}\rightarrow {\red d}{\blue c}, & \{{\red Q_3}/{\blue p_1}\}\co &{\red b}{\blue a}\rightarrow {\red c}{\blue d}. \end{align*} All those domains above contribute to the type~AA structure. They correspond to the solid arrows in Figure~\ref{fig:GlueingDomains}. From these, we can compute the connecting domains labelling the dashed arrows in the same figure. We claim that these are in fact all domains that connect these eight generators and that have non-negative multiplicities. To show this, we can argue as follows: Pick any two generators $x$ and $y$ and calculate a connecting domain between them, eg by following along the arrows in Figure~\ref{fig:GlueingDomains}. 
We observe that if $x$ and $y$ are among those eight generators from Figure~\ref{fig:GlueingDomains}, we can always choose a domain \[X=(X_+-X_-)\co x\rightarrow y\] with multiplicities in $\{-1,0,+1\}$. We can obtain any other connecting domain between $x$ and $y$ by adding a periodic domain $P=P_+-P_-$ to $X$. Suppose this new connecting domain $(X_++P_+)-(X_-+P_-)$ has non-negative (respectively non-positive) coefficients only. Then \begin{align*} X_++P_+\supseteq X_-+P_-, &\text{ so } P_+\supseteq X_-, X_+\supseteq P_-,\text{ or respectively}\\ X_++P_+\subseteq X_-+P_-, &\text{ so } P_+\subseteq X_-, X_+\subseteq P_-. \end{align*} In particular, since neither $X_-$ nor $X_+$ contains any region more than once, we only need to consider periodic domains which have multiplicities $\geq -1$ or $\leq +1$. Let us compute the group of periodic domains of our Heegaard diagram. It is easy to see that it is freely generated by the following three domains: \begin{align*} \mathcal{D}_1&=\{{\red Q_4},{\red Q_3}/{\blue p_1},{\blue p_2}\}-\{{\red P_2},{\red P_1}/{\blue q_3},{\blue q_4}\},\\ \mathcal{D}_2&=\{{\blue p_2},{\blue q_2}\}-\{{\red P_2},{\red Q_2}\},\\ \mathcal{D}_3&=\{{\blue p_4},{\blue q_4}\}-\{{\red P_4},{\red Q_4}\}. \end{align*} The periodic domains which have multiplicities $\geq -1$ or $\leq +1$ are given by $\mathcal{D}_1$, $\mathcal{D}_2$, $\mathcal{D}_3$, \begin{align*} \mathcal{D}_1-\mathcal{D}_2&=\{{\red Q_4},{\red Q_2},{\red Q_3}/{\blue p_1}\}-\{{\red P_1}/{\blue q_3},{\blue q_4},{\blue q_2}\},\\ \mathcal{D}_1+\mathcal{D}_3&=\{{\red Q_3}/{\blue p_1},{\blue p_4},{\blue p_2}\}-\{{\red P_4},{\red P_2},{\red P_1}/{\blue q_3}\},\\ \mathcal{D}_1-\mathcal{D}_2+\mathcal{D}_3&=\{{\red Q_2},{\red Q_3}/{\blue p_1},{\blue p_4}\}-\{{\red P_4},{\red P_1}/{\blue q_3},{\blue q_2}\},\\ \mathcal{D}_2+\mathcal{D}_3&=\{{\blue p_2},{\blue q_2},{\blue p_4},{\blue q_4}\}-\{{\red P_2},{\red Q_2},{\red P_4},{\red Q_4}\},\\ \mathcal{D}_2-\mathcal{D}_3&=\{{\red P_4},{\red Q_4},{\blue p_2},{\blue q_2}\}-\{{\red P_2},{\red Q_2},{\blue p_4},{\blue q_4}\} \end{align*} and their negatives. It is now elementary to check that Figure~\ref{fig:GlueingDomains} does indeed show all connecting domains with non-negative multiplicities. There are four generators that we have not considered yet, namely $\overline{{\red a}{\blue a}}$, $\underline{{\red a}{\blue a}}$, $\overline{{\red c}{\blue c}}$ and $\underline{{\red c}{\blue c}}$. They are connected by the following two contributing domains: \begin{align*} \{\square_\beta\}\co &\underline{{\red a}{\blue a}}\rightarrow\overline{{\red a}{\blue a}}, & \{\square_\delta\}\co &\overline{{\red c}{\blue c}}\rightarrow\underline{{\red c}{\blue c}}. \end{align*} Thus the two generator pairs can be cancelled. Let us connect these two generators to those from Figure~\ref{fig:GlueingDomains}. For example, there are the two domains \begin{align*} \{{\red P_2},{\red Q_2}\}\co &{\red a}{\blue a}\rightarrow\underline{{\red a}{\blue a}}, & \{{\red P_4},{\red Q_4}\}\co &\underline{{\red c}{\blue c}}\rightarrow{\red c}{\blue c}. \end{align*} We can use the same arguments as above to verify that there are no domains with non-negative multiplicities leaving $\underline{{\red a}{\blue a}}$ or terminating at $\underline{{\red c}{\blue c}}$ other than $\{\square_\beta\}$ and $\{\square_\delta\}$. Hence, after cancellation, the remaining complex is the same as before.
We can now use the $d^2$-relation for type~AA structures to deduce that all domains from Figure~\ref{fig:GlueingDomains} -- not only those on the solid arrows -- contribute. For example, the contribution to the $d^2$-relation of the composition ${\red a}{\blue a}\rightarrow{\red a}{\blue b}\rightarrow{\red d}{\blue c}$ can only be cancelled by a differential from ${\red a}{\blue a}$ to ${\red d}{\blue c}$. We can argue similarly for all other dashed arrows. Thus, we obtain a type~AA structure $\mathcal{P}'$ which, by definition, is homotopic to $\mathcal{F}^{AA}_{\iota_\beta,\iota_\alpha}(\BSAA(X))$. Hence, the expression (\ref{eqn:FirstGlueingProofStep2}) is chain homotopic to \begin{equation}\label{eqn:FirstGlueingProofStep3} \BSD(X_{T_1})^{\mathcal{B}'}\boxtimes\typeA{\mathcal{B}',\mathcal{A}'}{\mathcal{P}'}\boxtimes \BSD(X_{T_2})^{\mathcal{A}'}. \end{equation} Let $\Anull$ be the quotient algebra obtained from $\Ad$ by setting $p_3=0=q_1$. Now identify $\Id$ with $\mathcal{I}'_\alpha$ and $\mathcal{I}'_\beta$ such that an idempotent for site $s$ corresponds to an idempotent which does not occupy the $\alpha$-arc, respectively $\beta$-arc, labelled $s$ in Figure~\ref{fig:GlueingAAP}. Under this identification, there are unique $\Id$-algebra epimorphisms \[\pi_\alpha\co \mathcal{A}'\rightarrow\Anull\quad\text{and}\quad\pi_\beta\co \mathcal{B}'\rightarrow\Anull.\] Note that, as the notation for our domains suggests, each domain from Figure~\ref{fig:GlueingDomains} is recorded in $\mathcal{P}'$ by the algebra elements in $\mathcal{A}'$ and $\mathcal{B}'$ corresponding to the algebra elements in $\Anull$ obtained as the product of all blue and red labels of the domain, respectively. In fact, $\mathcal{P}'$ is equal to the image of the type~AA structure $\mathcal{P}$ from Figure~\ref{fig:GlueingTypeAAstructure} (viewed as a bimodule over $\Anull$) under the induced functor $\mathcal{F}^{AA}_{\pi_\beta,\pi_\alpha}$. Thus, by the Pairing Adjunction (Theorem~\ref{thm:PairingAdjunction}), the expression (\ref{eqn:FirstGlueingProofStep3}) is equal to \begin{equation}\label{eqn:FirstGlueingProofStep4} \mathcal{F}_{\pi_\beta}(\BSD(X_{T_1}))^{\Anull}\boxtimes\typeA{\Anull,\Anull}{\mathcal{P}}\boxtimes \mathcal{F}_{\pi_\alpha}(\BSD(X_{T_2}))^{\Anull}. \end{equation} Let $\pi_{31}\co\Ad\rightarrow\Anull$ be the quotient map defining $\Anull$. Then by a final application of the Pairing Adjunction, it is sufficient to identify $\mathcal{F}_{\pi_\beta}(\BSD(X_{T_1}))$ and $\mathcal{F}_{\pi_\alpha}(\BSD(X_{T_2}))$ with $\mathcal{F}_{\pi_{31}}(\CFTd(\rr(T_1)))$ and $\mathcal{F}_{\pi_{31}}(\CFTd(T_2))$, respectively. The second identification is immediate from Lemma~\ref{lem:IdentificationCFTdBSD} below and the observation that the bordered sutured Heegaard diagram for $X_{T_2}$ can be regarded as a tangle Heegaard diagram for $T_2$, up to a minor modification at the tangle ends. The first identification follows likewise, observing that switching the roles of $\alpha$- and $\beta$-curves while at the same time switching the orientation of the Heegaard surface leaves the resulting complex unchanged, up to a reversal of Alexander gradings. Finally, the $\delta$-grading on $\mathcal{P}$ is calculated as usual, see~\cite[Definition~5.13]{HDsForTangles}. By the additivity of the $\delta$-grading under glueing, the $\delta$-gradings on both sides of our glueing formula agree.
Similarly, if $A$ is the Alexander grading induced by an ordered matching for $T_2$, then the three generating periodic domains have vanishing Alexander gradings. This shows that the Alexander grading on $\mathcal{P}$ is well-defined (as a relative grading). Again, by additivity under glueing, the Alexander gradings on both sides of the glueing formula agree. \end{proof} \begin{figure}[t] \centering $ {\raisebox{-0.9cm}{ \psset{unit=0.5} \begin{pspicture}(-4,-0.1)(4,4.6) \pscustom[fillstyle=solid,fillcolor=lightgray,linewidth=0pt,linecolor=white]{ \psline(-3,4)(-3,0) \psline(-3,0)(3,0) \psline(3,0)(3,4) \psecurve(10,4.5)(3,4)(0,4.5)(-3,4)(-10,4.5) } \pscustom[fillstyle=solid,fillcolor=white,linewidth=0pt,linecolor=white]{ \psecurve(0,-1)(-1,0)(0,1.5)(1,0)(0,-1) \psline(1,0)(1,-1)(-1,-1)(-1,0) } \rput(0,2.75){$D$} \psline[linecolor=red](3,0)(3,4) \psline[linecolor=red](-3,0)(-3,4) \psecurve[linecolor=red](0,-1)(-1,0)(0,1.5)(1,0)(0,-1) \psline[linecolor=darkgreen](-3.5,0)(-0.5,0) \psline[linecolor=darkgreen](0.5,0)(3.5,0) \end{pspicture} } \longleftrightarrow \raisebox{-0.9cm}{ \psset{unit=0.5} \begin{pspicture}(-4,-0.1)(4,4.6) \pscustom[fillstyle=solid,fillcolor=lightgray,linewidth=0pt,linecolor=white]{ \psline(-3,4)(-3,0) \psline(-3,0)(3,0) \psline(3,0)(3,4) \psecurve(10,4.5)(3,4)(0,4.5)(-3,4)(-10,4.5) } \rput(0,2){$D'$} \psline[linecolor=red](3,0)(3,4) \psline[linecolor=red](-3,0)(-3,4) \psline[linecolor=darkgreen](-3.5,0)(3.5,0) \end{pspicture} } } $ \caption{A domain with multiplicity 1 near a tangle end.}\label{fig:DomainNearSillyArc} \end{figure} \begin{lemma}\label{lem:IdentificationCFTdBSD} Let \(D\) and \(D'\) be two domains which differ only in a small region of multiplicity 1 as shown in Figure~\ref{fig:DomainNearSillyArc}. Then, for a suitable choice of complex structure, \(D\) contributes iff \(D'\) does. \end{lemma} \begin{proof} This follows from the same arguments as \cite[Proposition~2.7]{Hanselman}. \end{proof} \section{Peculiar modules as collections of immersed curves}\label{sec:glueingrevisited} \begin{definition}\label{def:ImmersedCurveInvariantspqMod} By Example~\ref{exa:pqModSpecialCaseofCC}, the category of peculiar modules is a special case of a category of curved complexes over a marked surface with arc system. Therefore, if $T$ is a 4-ended tangle in a homology 3-ball $M$ with spherical boundary, Theorem~\ref{thm:EverythingIsLoopTypeUpToLocalSystems} implies that there is a collection of immersed curves corresponding to the peculiar module $\CFTd(T)$. We denote such a collection of immersed curves by $L_T$. If the tangle $T$ is oriented, the Alexander gradings on $\CFTd(T)$ give rise to an Alexander grading on $L_T$, which is defined similarly to the $\delta$-grading. More precisely, the \textbf{Alexander grading of a curve} is an assignment $A$ of Alexander gradings to the intersection points of the curve with the four arcs parametrizing the 4-punctured sphere, such that if $x$ and $y$ are joined by a curve segment corresponding to an algebra element $p^s_t\in\Ad$ from $x$ to $y$, then $A(y)-A(x)+A(p^s_t)=0$. The Alexander grading on the Lagrangian intersection Floer homology of two bigraded curves is defined in exactly the same way as the $\delta$-grading, namely via the Alexander grading of resolutions of intersection points, see Definitions~\ref{def:resolution} and~\ref{def:gradingVIAresolution}.
\end{definition} As a special case of Theorem~\ref{thm:CompleteClassification}, we obtain: \begin{theorem}\label{thm:ImmersedCurveInvariants} \(L_T\) is a well-defined tangle invariant up to homotopy of the underlying immersed curves and similarity of the local systems.\hfill\qedsymbol \end{theorem} Just as for peculiar modules, we can study how immersed curves behave under mirroring and reversing Alexander gradings. \begin{definition} Given a collection $L$ of bigraded curves, let $\rr(L)$ denote the collection of curves obtained from $L$ by reversing all Alexander gradings. Note that for this to be a well-defined Alexander grading, we also need to reverse the Alexander grading of $\Ad$. Moreover, let $\m(L)$ denote the collection of curves defined as follows: the underlying oriented curves of $\m(L)$ are obtained as the image of the underlying oriented curves of $L$ under the orientation reversing automorphism of the 4-punctured sphere which preserves the four parametrizing arcs pointwise and exchanges the front and back. The bigrading of $\m(L)$ is equal to the reversed bigrading of $L$. Note that this is well-defined over the same bigraded algebra $\Ad$. Finally, the local systems of $L$ and $\m(L)$ are the same. As for the corresponding operations on peculiar modules and tangles from Definition~\ref{def:reversedmirror}, we write $\mr(L)$ for $\m(\rr(L))=\rr(\m(L))$. \end{definition} \begin{proposition} For any collection \(L\) of bigraded curves, \(\Pi(\rr(L))=\rr(\Pi(L))\) and \(\Pi(\m(L))=\m(\Pi(L))\). Moreover, if \(T\) is an oriented 4-ended tangle, then \(\rr(L_T)=L_{\rr(T)}\) and \(\m(L_T)=L_{\m(T)}\). \end{proposition} \begin{proof} The second part follows from the first in conjunction with Proposition~\ref{prop:reversedmirror}. The first part follows from the definitions of the operations. To identify the local systems of $\Pi(\m(L))$ and $\m(\Pi(L))$, note that the underlying curves of $\m(L)$ pass through arcs in the opposite direction to those of $L$. This corresponds to transposing the local systems, ie reversing the orientation of any crossover arrows and reversing the order of crossings and crossover arrows on arc neighbourhoods. \end{proof} \subsection{Peculiar modules from nice diagrams} \begin{definition} Given a tangle $T$, pick two basepoints $p_i$ and $q_j$ for some $i,j\in\{1,2,3,4\}$ as well as one of $z_j$ and $w_j$ for each closed component of $T$. A peculiar Heegaard diagram for $T$ is \textbf{nice} with respect to this choice of special basepoints, if all regions except those containing these basepoints are bigons or squares. \end{definition} \begin{theorem} Every 4-ended tangle \(T\) has a nice peculiar Heegaard diagram with respect to any choice of basepoints. \end{theorem} \begin{proof} This is an application of Sarkar and Wang's main result in~\cite{SarkarWang}, where they describe an algorithm for niceifying any pointed Heegaard diagram with one basepoint in each component of the Heegaard surface minus the $\alpha$-circles. \end{proof} \begin{corollary}\label{cor:PeculiarModulesFromNiceDiagrams} Peculiar modules for 4-ended tangles can be computed combinatorially. \end{corollary} \begin{proof} We can use a nice peculiar Heegaard diagram for a tangle to compute the peculiar module $\CFTd(T)$. Since all regions away from the special basepoints are bigons or squares, the calculation of all domains that miss those basepoints is purely combinatorial. 
The complex $\overline{\CFTd(T)}$ corresponding to those domains is exactly the image of $\CFTd(T)$ under the functor induced by the quotient map $\Ad\rightarrow\Ad/(p_i=0=q_j)$. So by Corollary~\ref{cor:AddingBasepointFunctorIsFaithfulUpToHom}, we can recover the homotopy type of $\CFTd(T)$ from $\overline{\CFTd(T)}$. This can be done algorithmically, by finding a curve with local system representing $\overline{\CFTd(T)}$ as described in the previous section. \end{proof} \begin{remark} The algorithm described in the proof above is implemented in the Mathematica package~\cite{PQM.m}. \end{remark} \begin{figure}[t]\centering \psset{unit=0.4} \begin{pspicture}(-12.5,-12.5)(12.5,12.5) \pscustom*[linecolor=lightgray]{ \psline[liftpen=1,linearc=1,cornersize=absolute](12,0)(12,-12)(0,-12) \psline(0,-12)(0,0)(12,0) } \psline*[linecolor=white,linewidth=0pt](0,0)(0,-8)(8,-8)(8,0)(0,0) \pscustom*[linecolor=lightgray]{ \psline[liftpen=1,linearc=1,cornersize=absolute](-12,0)(-12,-12)(0,-12) \psline(0,-12)(0,0)(-12,0) } \pscustom*[linecolor=white,linewidth=0pt]{ \psline[liftpen=1,linearc=0.8,cornersize=absolute](0,-10)(-9,-10)(-9,0) \psline(-9,0)(0,0)(0,-10) } \pscustom*[linecolor=lightgray]{ \psline[liftpen=1,linearc=1,cornersize=absolute](12,0)(12,12)(0,12) \psline(0,12)(0,0)(12,0) } \pscustom*[linecolor=white,linewidth=0pt]{ \psline[liftpen=1,linearc=0.8,cornersize=absolute](0,9)(10,9)(10,0) \psline(10,0)(0,0)(0,9) } \pscustom*[linecolor=lightgray]{ \psline[liftpen=1,linearc=0.8,cornersize=absolute](-9,0)(-9,3)(8,3)(8,0) \psline(8,0)(-9,0) } \pscustom*[linecolor=lightgray]{ \psline[liftpen=1,linearc=0.8,cornersize=absolute](0,-8)(-3,-8)(-3,9)(0,9) \psline(0,-8)(0,9) } \psframe[linecolor=blue,linearc=1,cornersize=absolute](-12,-12)(12,12) \psline[linecolor=blue](0,12)(0,-12) \psline[linecolor=blue](12,0)(-12,0) \psframe[linecolor=darkgreen,linestyle=dotted,dotsep=1pt](-6,-6)(6,6) \pscircle[fillstyle=solid,linecolor=darkgreen](-6,-6){1} \pscircle[fillstyle=solid,linecolor=darkgreen](6,-6){1} \pscircle[fillstyle=solid,linecolor=darkgreen](-6,6){1} \pscircle[fillstyle=solid,linecolor=darkgreen](6,6){1} \rput(-6,6){$\textcolor{darkgreen}{1}$} \rput(-6,-6){$\textcolor{darkgreen}{2}$} \rput(6,-6){$\textcolor{darkgreen}{3}$} \rput(6,6){$\textcolor{darkgreen}{4}$} \psline[linecolor=red,linearc=0.8,cornersize=absolute] (0,-8)(-3,-8)(-3,9)(10,9)(10,-8)(0,-8) \psline[linecolor=red,linearc=0.8,cornersize=absolute] (8,0)(8,3)(-9,3)(-9,-10)(8,-10)(8,0) \pscircle*[linecolor=blue](0,0){0.15} \rput(0.8;135){\blue $p_1$} \rput(0.8;-135){\blue $p_2$} \rput(0.8;-45){\blue $p_3$} \rput(0.8;45){\blue $p_4$} \rput(11.2,11.2){\blue $q_4$} \rput(11.2,-11.2){\blue $q_3$} \rput(-11.2,-11.2){\blue $q_2$} \rput(-11.2,11.2){\blue $q_1$} \pscircle*[linecolor=red](-3,3){0.15} \rput(-3,3){ \rput(0.8;135){\textcolor{red}{$q_1$}} \rput(0.8;-135){\textcolor{red}{$q_2$}} \rput(0.8;-45){\textcolor{red}{$q_3$}} \rput(0.8;45){\textcolor{red}{$q_4$}}} \pscircle*[linecolor=red](8,-8){0.15} \rput(8,-8){ \rput(0.8;135){\textcolor{red}{$p_3$}} \rput(0.8;-135){\textcolor{red}{$p_2$}} \rput(0.8;-45){\textcolor{red}{$p_1$}} \rput(0.8;45){\textcolor{red}{$p_4$}}} \pscircle*(-9,0){0.15} \uput{0.4}[135](-9,0){\red $a$\textcolor{blue}{$a$}} \pscircle*(-3,0){0.15} \uput{0.4}[135](-3,0){\red $b$\textcolor{blue}{$a$}} \rput(-9,0){ \rput{0}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{180}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \rput(-3,0){ \rput{0}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} 
\rput{180}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \pscircle*(0,-8){0.15} \uput{0.4}[45](0,-8){\red $b$\textcolor{blue}{$b$}} \pscircle*(0,-10){0.15} \uput{0.4}[-135](0,-10){\red $a$\textcolor{blue}{$b$}} \rput(0,-8){ \rput{90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{-90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \rput(0,-10){ \rput{90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{-90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \pscircle*(8,0){0.15} \uput{0.4}[135](8,0){\red $c$\textcolor{blue}{$c$}} \pscircle*(10,0){0.15} \uput{0.4}[-45](10,0){\red $d$\textcolor{blue}{$c$}} \rput(8,0){ \rput{0}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{180}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \rput(10,0){ \rput{0}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{180}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \pscircle*(0,3){0.15} \uput{0.4}[45](0,3){\red $c$\textcolor{blue}{$d$}} \pscircle*(0,9){0.15} \uput{0.4}[-135](0,9){\red $d$\textcolor{blue}{$d$}} \rput(0,3){ \rput{90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{-90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \rput(0,9){ \rput{90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} \rput{-90}(0,0){\psline[linearc=0.25]{->}(0.3,1.3)(0.3,0.3)(1.3,0.3)} } \uput{0.4}[-90](6,9){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.4}[0](-12,6){\textcolor{blue}{$L_{T_2}$}} \uput{0.5}[-135](-6,0){\blue $a$} \uput{0.5}[45](0,-6){\blue $b$} \uput{0.5}[-135](6,0){\blue $c$} \uput{0.5}[45](0,6){\blue $d$} \uput{0.5}[135](-6,3){\textcolor{red}{$a$}} \uput{0.5}[135](-3,-6){\textcolor{red}{$b$}} \uput{0.5}[135](6,3){\textcolor{red}{$c$}} \uput{0.5}[135](-3,6){\textcolor{red}{$d$}} \pscircle*[linecolor=red](-6,3){0.15} \pscircle*[linecolor=red](6,3){0.15} \pscircle*[linecolor=red](-3,6){0.15} \pscircle*[linecolor=red](-3,-6){0.15} \pscircle*[linecolor=blue](0,6){0.15} \pscircle*[linecolor=blue](0,-6){0.15} \pscircle*[linecolor=blue](-6,0){0.15} \pscircle*[linecolor=blue](6,0){0.15} \end{pspicture} \caption{A geometric interpretation of the type~AA structure $\mathcal{P}$ for pairing in the wrapped Fukaya category of the 4-punctured sphere. The boundary of the picture is identified to a point. The {\blue blue} curves denote a 1-skeleton and the \textcolor{red}{red} ones a Hamiltonian translate thereof with reversed roles of $p_i$ and $q_i$. The orientation is chosen such that the normal vector (determined by the right-hand rule) points into the projection plane. The arrow pairs at the intersection points of the two skeletons serve as a reminder of our conventions for resolutions of two curves lying in a tubular neighbourhood of the skeletons, see Definition~\ref{def:resolution}. }\label{fig:GlueingInterpretationFUK} \end{figure} \subsection{The Glueing Theorem revisited} \begin{theorem}[(Glueing Theorem, version 2)]\label{thm:CFTdGlueingAsMorphism} With the same notation as in Theorem~\ref{thm:CFTdGeneralGlueing}, \[\HFL(L)\otimes V^{i}\cong\LagrangianFH(L_{\mr(T_1)},L_{T_2})\cong H_\ast(\Mor(\CFTd(\mr(T_1)),\CFTd(T_2))),\] where \(\mr(T_1)\) denotes the reversed mirror of \(T_1\) (see Definition~\ref{def:reversedmirror}). \end{theorem} \begin{proof} The second equality is an application of Theorem~\ref{thm:PairingMorLagrangianFH}. 
As we saw in the proof of Theorem~\ref{thm:PairingFormula}, we can use any representatives of the underlying curves of $L_{\mr(T_1)}$ and $L_{T_2}$ to compute $\LagrangianFH(L_{\mr(T_1)},L_{T_2})$, as long as each pair of parallel curves bounds a cyclic chain of bigons (ie does not bound an immersed annulus). For example, we may homotope the curves in $L_{\mr(T_1)}$ and $L_{T_2}$ into neighbourhoods of the red and blue curves in Figure~\ref{fig:GlueingInterpretationFUK}, respectively. The first equality is then seen by identifying the intersection points between those curves and the connecting bigons with the generators and differentials of \[ \CFTd(\rr(T_1))\boxtimes\,\mathcal{P}\boxtimes\,\CFTd(T_2) \cong \Pi(L_{\rr(T_1)})\boxtimes\,\mathcal{P}\boxtimes \Pi(L_{T_2}) \] from Theorem~\ref{thm:CFTdGeneralGlueing}. Let us first consider the generators and their gradings. Obviously, there is a one-to-one correspondence between generators and intersection points. Let $\bullet(\textcolor{red}{s},i)$ be a generator of $\Pi(L_{\rr(T_1)})$ and $\bullet(\textcolor{blue}{s'},i')$ a generator of $\Pi(L_{T_2})$, where $\textcolor{red}{s}\in\{\textcolor{red}{a},\textcolor{red}{b},\textcolor{red}{c},\textcolor{red}{d}\}$ and $\textcolor{blue}{s'}\in\{\textcolor{blue}{a},\textcolor{blue}{b},\textcolor{blue}{c},\textcolor{blue}{d}\}$, such that there exists a generator $\textcolor{red}{s}\textcolor{blue}{s'}\in\mathcal{P}$. Then the $\delta$-grading of the corresponding intersection point agrees with the $\delta$-grading of $\bullet(\textcolor{red}{s},i)\boxtimes\mathcal{P}\boxtimes\bullet(\textcolor{blue}{s'},i')$; namely, it is equal to $$ \delta(\bullet(\textcolor{blue}{s'},i'))+\delta(\bullet(\textcolor{red}{s},i))+\delta(\textcolor{red}{s}\textcolor{blue}{s'})= \delta(\bullet(\textcolor{blue}{s'},i'))-\delta(\m(\bullet(\textcolor{red}{s},i)))+\delta(\textcolor{red}{s}\textcolor{blue}{s'}), $$ which can be checked for each generator $\textcolor{red}{s}\textcolor{blue}{s'}\in\mathcal{P}$ separately. A similar argument also shows that the Alexander gradings agree on both sides. Next, let us consider differentials. For this, let $\bullet(\textcolor{red}{s},i)\xrightarrow{\red p} \bullet(\textcolor{red}{t},j)$ be a differential in $\Pi(L_{\rr(T_1)})$ and $\bullet(\textcolor{blue}{s'},i')\xrightarrow{\blue p'} \bullet(\textcolor{blue}{t'},j')$ a differential in $\Pi(L_{T_2})$, where $\textcolor{red}{s},\textcolor{red}{t}\in\{\textcolor{red}{a},\textcolor{red}{b},\textcolor{red}{c},\textcolor{red}{d}\}$ and $\textcolor{blue}{s'},\textcolor{blue}{t'}\in\{\textcolor{blue}{a},\textcolor{blue}{b},\textcolor{blue}{c},\textcolor{blue}{d}\}$, such that there exist generators $\textcolor{red}{s}\textcolor{blue}{s'},\textcolor{red}{t}\textcolor{blue}{t'}\in\mathcal{P}$. Now observe that there is a component $({\red p}\vert {\blue p'} )$ in $\mathcal{P}$ from $\textcolor{red}{s}\textcolor{blue}{s'}$ to $\textcolor{red}{t}\textcolor{blue}{t'}$, iff there is a bigon in Figure~\ref{fig:GlueingInterpretationFUK} between the corresponding intersection points which covers exactly those ${\red p_i}$ and ${\red q_i}$ in ${\red p}$ and those ${\blue p_i}$ and ${\blue q_i}$ in ${\blue p'}$. Note that the boundary orientation of the bigons is indeed opposite to the orientation of any crossover arrows in $\Pi(L_{\mr(T_1)})$ and $\Pi(L_{T_2})$. \end{proof} We end this section with a number of computations illustrating the above Glueing Theorem. 
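Before turning to the examples, let us record the general shape these computations take. Since the generators of the Lagrangian Floer chain complex are intersection points and every bigon connects intersection points of one fixed pair of components, the pairing decomposes as a direct sum over pairs of components; in the notation of Theorem~\ref{thm:CFTdGlueingAsMorphism}, \[\HFL(L)\otimes V^{i}\;\cong\;\bigoplus_{\gamma_1,\,\gamma_2}\LagrangianFH(\gamma_1,\gamma_2),\] where $\gamma_1$ and $\gamma_2$ range over the components (with their local systems) of $L_{\mr(T_1)}$ and $L_{T_2}$, respectively. This is the organising principle behind the component-by-component computation in Example~\ref{exa:KinoshitaTerasakaLink} below.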
\begin{figure}[t] \centering { \psset{unit=0.2,linewidth=\stringwidth} \begin{pspicture}[showgrid=false](-11,-15)(11,15) \rput{-45}(0,0){ \psline[linecolor=red](1,-4)(-9,-4) \psline[linewidth=\stringwhite,linecolor=white](-4,1)(-4,-9) \psline[linecolor=red](-4,1)(-4,-9) \psline[linecolor=red]{->}(-4,-7)(-4,-8) \psline[linecolor=red]{->}(-1,-4)(0,-4) \psline[linecolor=blue](-1,4)(9,4) \psline[linewidth=\stringwhite,linecolor=white](4,-1)(4,9) \psline[linecolor=blue](4,-1)(4,9) \psline[linecolor=blue]{->}(4,7)(4,8) \psline[linecolor=blue]{<-}(0,4)(1,4) \pscircle[linestyle=dotted](4,4){5} \pscircle[linestyle=dotted](-4,-4){5} \rput{45}(-10,0){$t_2$} \rput{45}(0,10){$t_1$} \rput{45}(1.5,1.5){$a$} \rput{45}(6.5,1.5){$b$} \rput{45}(6.5,6.5){$c$} \rput{45}(1.5,6.5){$d$} \rput{45}(-6.5,-6.5){$a$} \rput{45}(-1.5,-6.5){$b$} \rput{45}(-1.5,-1.5){$c$} \rput{45}(-6.5,-1.5){$d$} \psecurve{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) } \end{pspicture} }\qquad{\psset{unit=2} \begin{pspicture}(-2,-1.5)(1.7,1.5) \psecurve[linecolor=blue](1.3,1.3)(-0.25,0.25)(-1.3,-1.3)(0.25,-0.25)(1.3,1.3)(-0.25,0.25)(-1.3,-1.3) \rput{90}(0,0){ \psecurve[linecolor=red](1.3,1.3)(-0.25,0.25)(-1.3,-1.3)(0.25,-0.25)(1.3,1.3)(-0.25,0.25)(-1.3,-1.3) } \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) % \psset{dotsize=5pt} \psdot[linecolor=red](-1,0.45) \psdot[linecolor=blue](-1,-0.45) \psdot[linecolor=blue](0.45,1) \psdot[linecolor=red](-0.45,1) \psdot[linecolor=blue](1,0.45) \psdot[linecolor=red](1,-0.45) \psdot[linecolor=red](0.45,-1) \psdot[linecolor=blue](-0.45,-1) \uput{0.1}[45]{0}(1,1){$t_1$} \uput{0.1}[135]{0}(-1,1){$t_2$} \uput{0.1}[-135]{0}(-1,-1){$t_1$} \uput{0.1}[-45]{0}(1,-1){$t_2$} \uput{0.1}[90]{0}(0.45,1){$\delta^{\frac{1}{2}}t_1^\frac{1}{2}t_2^\frac{1}{2}$} \uput{0.1}[-90]{0}(-0.45,-1){$\delta^{\frac{1}{2}}t_1^{-\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[180]{0}(-1,-0.45){$\delta^0t_1^{\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[0]{0}(1,0.45){$\delta^0t_1^{-\frac{1}{2}}t_2^{\frac{1}{2}}$} \uput{0.1}[90]{0}(-0.45,1){$\delta^{-\frac{1}{2}}t_1^{-\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[-90]{0}(0.45,-1){$\delta^{-\frac{1}{2}}t_1^{\frac{1}{2}}t_2^{\frac{1}{2}}$} \uput{0.1}[180]{0}(-1,0.45){$\delta^0t_1^{-\frac{1}{2}}t_2^{\frac{1}{2}}$} \uput{0.1}[0]{0}(1,-0.45){$\delta^0t_1^{\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[180]{0}(-0.52,0){$\delta^1t_1^{1}t_2^{-1}$} \uput{0.1}[0]{0}(0.52,0){$\delta^1t_1^{-1}t_2^{1}$} \uput{0.1}[0]{0}(0,-0.52){$\delta^1t_1^{-1}t_2^{-1}$} \uput{0.1}[0]{0}(0,0.52){$\delta^1t_1^{1}t_2^{1}$} \uput{0.45}[180]{0}(-1,1){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.45}[0]{0}(1,1){\textcolor{blue}{$L_{T_2}$}} \pscircle*[linecolor=white](-1,0.7){3pt} \rput(-1,0.7){$a$} \pscircle*[linecolor=white](0,-1){3pt} \rput(0,-1){$b$} \pscircle*[linecolor=white](1,0.7){3pt} \rput(1,0.7){$c$} \pscircle*[linecolor=white](0,1){3pt} \rput(0,1){$d$} \psdot(-0.52,0) \rput{0}(-0.52,0.1){% \psline[linearc=0.1]{->}(0.25;137)(0,0)(0.25;43)} \rput{180}(-0.52,-0.1){% \psline[linearc=0.1]{->}(0.25;137)(0,0)(0.25;43)} \psdot(0.52,0) \rput{0}(0.52,0.1){% \psline[linearc=0.1]{->}(0.25;137)(0,0)(0.25;43)} \rput{180}(0.52,-0.1){% 
\psline[linearc=0.1]{->}(0.25;137)(0,0)(0.25;43)} \psdot(0,-0.52) \rput{180}(0,-0.62){% \psline[linearc=0.1]{->}(0.25;133)(0,0)(0.25;47)} \rput{0}(0,-0.42){% \psline[linearc=0.1]{->}(0.25;133)(0,0)(0.25;47)} \psdot(0,0.52) \rput{0}(0,0.62){% \psline[linearc=0.1]{->}(0.25;133)(0,0)(0.25;47)} \rput{180}(0,0.42){% \psline[linearc=0.1]{->}(0.25;133)(0,0)(0.25;47)} \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } \caption{A tangle decomposition of the Hopf link (left) and the intersection theory of the corresponding tangle invariants (right).}\label{fig:HopfLink} \end{figure} \begin{example}[(Hopf link)] Consider the tangle decomposition of the Hopf link into two tangles $\textcolor{red}{T_1}$ and $\textcolor{blue}{T_2}$ from Figure~\ref{fig:HopfLink}, following the same conventions as in Definition~\ref{def:tanglepairing}. The diagram on the right of that figure shows a 4-punctured sphere which we consider as the boundary of the 3-ball of the tangle $\textcolor{blue}{T_2}$. It is parametrized by four arcs which are drawn as dotted lines labelled $a$, $b$, $c$ and $d$ as usual. The intersection points of the curve $\textcolor{blue}{L_{T_2}}$ with those arcs are labelled by their gradings. The tangle $\textcolor{red}{T_1}$ agrees with $\textcolor{blue}{T_2}$ except for the orientations of the tangle strands. Thus, $\textcolor{red}{L_{\mr(T_1)}}=L_{\m(T_2)}$ is obtained from $\textcolor{blue}{L_{T_2}}$ by taking the mirror of the underlying curve and reversing all gradings. The curves $\textcolor{red}{L_{\mr(T_1)}}$ and $\textcolor{blue}{L_{T_2}}$ intersect in four points. They are labelled by their respective gradings, which we can compute from their resolutions. As a reminder of our conventions, we have indicated these resolutions by pairs of arrows around the intersection points in this example. (We omit those arrows in the examples below.) In summary, we conclude that $\HFL$ of the Hopf link is a 4-dimensional vector space supported in a single $\delta$-grading and Alexander gradings $$ t_1^{-1}t_2^{-1}+ t_1^{1}t_2^{-1}+ t_1^{-1}t_2^{1}+ t_1^{1}t_2^{1}. 
$$ \end{example} \begin{figure}[t] { \psset{unit=0.2,linewidth=\stringwidth} \begin{pspicture}[showgrid=false](-11,-15)(11,15) \rput{-45}(0,0){ \psline[linecolor=red](-4,1)(-4,-9) \psline[linewidth=\stringwhite,linecolor=white](1,-4)(-9,-4) \psline[linecolor=red](1,-4)(-9,-4) \psline[linecolor=red]{->}(-4,-7)(-4,-8) \psline[linecolor=red]{->}(-1,-4)(0,-4) \psline[linecolor=blue](-1,4)(9,4) \psline[linewidth=\stringwhite,linecolor=white](4,-1)(4,9) \psline[linecolor=blue](4,-1)(4,9) \psline[linecolor=blue]{->}(4,7)(4,8) \psline[linecolor=blue]{<-}(0,4)(1,4) \pscircle[linestyle=dotted](4,4){5} \pscircle[linestyle=dotted](-4,-4){5} \rput{45}(-10,0){$t_2$} \rput{45}(0,10){$t_1$} \rput{45}(1.5,1.5){$a$} \rput{45}(6.5,1.5){$b$} \rput{45}(6.5,6.5){$c$} \rput{45}(1.5,6.5){$d$} \rput{45}(-6.5,-6.5){$a$} \rput{45}(-1.5,-6.5){$b$} \rput{45}(-1.5,-1.5){$c$} \rput{45}(-6.5,-1.5){$d$} \psecurve{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) } \end{pspicture} }\qquad{\psset{unit=2} \begin{pspicture}(-1.7,-1.5)(1.7,1.5) \psecurve[linecolor=blue](1.3,1.3)(-0.25,0.25)(-1.3,-1.3)(0.25,-0.25)(1.3,1.3)(-0.25,0.25)(-1.3,-1.3) \psecurve[linecolor=red]% (-0.3,-1)% (-1.15,-1.45)(-1.45,-1.15)% (-1,-0.3)% (0.3,1)% (1.15,1.45)(1.45,1.15)% (1,0.3)% (0.15,-0.15)% (-0.3,-1)% (-1.15,-1.45)(-1.45,-1.15)% \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) % \psset{dotsize=5pt} \psdot[linecolor=red](-1,-0.3) \psdot[linecolor=blue](-1,-0.45) \psdot[linecolor=blue](0.45,1) \psdot[linecolor=red](0.3,1) \psdot[linecolor=blue](1,0.45) \psdot[linecolor=red](1,0.3) \psdot[linecolor=red](-0.3,-1) \psdot[linecolor=blue](-0.45,-1) \uput{0.1}[45]{0}(1,1){$t_1$} \uput{0.1}[135]{0}(-1,1){$t_2$} \uput{0.1}[-135]{0}(-1,-1){$t_1$} \uput{0.1}[-45]{0}(1,-1){$t_2$} \uput{0.1}[-60]{0}(0.45,1){$\delta^{\frac{1}{2}}t_1^\frac{1}{2}t_2^\frac{1}{2}$} \uput{0.1}[120]{0}(-0.45,-1){$\delta^{\frac{1}{2}}t_1^{-\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[0]{0}(-1,-0.45){$\delta^0t_1^{\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[180]{0}(1,0.45){$\delta^0t_1^{-\frac{1}{2}}t_2^{\frac{1}{2}}$} \uput{0.1}[120]{0}(0.3,1){$\delta^{\frac{1}{2}}t_1^\frac{1}{2}t_2^\frac{1}{2}$} \uput{0.1}[-60]{0}(-0.3,-1){$\delta^{\frac{1}{2}}t_1^{-\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[180]{0}(-1,-0.3){$\delta^0t_1^{\frac{1}{2}}t_2^{-\frac{1}{2}}$} \uput{0.1}[0]{0}(1,0.3){$\delta^0t_1^{-\frac{1}{2}}t_2^{\frac{1}{2}}$} \uput{0.1}[-30]{0}(-0.09,-0.61){$\delta^0t_1^0t_2^0$} \uput{0.1}[-60]{0}(0.61,0.09){$\delta^1t_1^0t_2^0$} \uput{0.05}[135]{0}(0,0){\textcolor{blue}{$L_{T_2}$}} \uput{0.6}[135]{0}(0,0){\textcolor{red}{$L_{\mr(T_1)}$}} \psdot(-0.09,-0.61) \psdot(0.61,0.09) \pscircle*[linecolor=white](-1,0.5){3pt} \rput(-1,0.5){$a$} \pscircle*[linecolor=white](0.5,-1){3pt} \rput(0.5,-1){$b$} \pscircle*[linecolor=white](1,-0.5){3pt} \rput(1,-0.5){$c$} \pscircle*[linecolor=white](-0.5,1){3pt} \rput(-0.5,1){$d$} \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } \caption{A tangle decomposition of the unlink (left) and 
the intersection theory of the corresponding tangle invariants (right).}\label{fig:Unlink} \end{figure} \begin{example}[(Unlink)] Figure~\ref{fig:Unlink} shows a tangle decomposition of the two-component unlink and the pairing of the two corresponding curves. This example is very similar to the previous one, so we let the pictures speak for themselves. It illustrates that admissibility of Heegaard diagrams corresponds to the fact that we need to remove immersed annuli before counting intersection points between parallel immersed curves. \end{example} \begin{figure}[t] \centering { \psset{unit=0.2,linewidth=\stringwidth} \begin{pspicture}[showgrid=false](-11,-16.5)(11,16.5) \rput{-45}(0,0){ \psarc[linecolor=red](1,1){5}{180}{270} \psarc[linecolor=red](-9,-9){5}{0}{90} \psarc[linecolor=red]{->}(1,1){5}{250}{255} \psarcn[linecolor=red]{->}(-9,-9){5}{20}{15} \psecurve[linecolor=blue](14,2)(9,4)(4,2)(4,6)(-1,4)(-6,6) \psecurve[linewidth=\stringwhite,linecolor=white](6,-6)(4,-1)(6,4)(2,4)(4,9)(2,14) \psecurve[linecolor=blue](6,-6)(4,-1)(6,4)(2,4)(4,9)(2,14) \psecurve[linewidth=\stringwhite,linecolor=white](9,4)(4,2)(4,6)(-1,4) \psecurve[linecolor=blue](9,4)(4,2)(4,6)(-1,4) \psline[linecolor=blue]{<-}(-1,4)(-0.9,4.03) \psline[linecolor=blue]{<-}(4,9)(3.97,8.9) \pscircle[linestyle=dotted](4,4){5} \pscircle[linestyle=dotted](-4,-4){5} \rput{45}(-10,0){$t$} \rput{45}(0,10){$t$} \rput{45}(1.5,1.5){$a$} \rput{45}(6.75,1.25){$b$} \rput{45}(6.5,6.5){$c$} \rput{45}(1.25,6.75){$d$} \rput{45}(-6.5,-6.5){$a$} \rput{45}(-1.5,-6.5){$b$} \rput{45}(-1.5,-1.5){$c$} \rput{45}(-6.5,-1.5){$d$} \psecurve{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) } \end{pspicture} }\qquad{\psset{unit=2} \begin{pspicture}(-1.7,-1.65)(1.7,1.65) \psecurve[linecolor=blue](1.2,-0.95)(0,-1.5)(-1.5,-1)(1.2,1.05)(-1.2,0.95)(0,1.5)(1.5,1)(-1.2,-1.05)(1.2,-0.95)(0,-1.5)(-1.5,-1) \rput{0}(-1.1,0){ \psecurve[linecolor=red]% (0.3,1.3)(-0.3,1.3)% (-0.4,0)% (-0.3,-1.3)(0.3,-1.3)% (0.45,0)% (0.3,1.3)(-0.3,1.3)% (-0.4,0)% } \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) % \psset{dotsize=5pt} \psdot[linecolor=blue](-1,0.78) \uput{0.1}[-135]{0}(-1,0.78){$\delta^1t^{2}$} \psdot[linecolor=blue](-1,-0.31) \uput{0.1}[145]{0}(-1,-0.31){$\delta^1t^{0}$} \psdot[linecolor=blue](-1,-0.54) \uput{0.1}[-35]{0}(-1,-0.54){$\delta^1t^{-2}$} \psdot[linecolor=blue](-0.04,1) \uput{0.1}[-90]{0}(-0.04,1){$\delta^{\frac{3}{2}}t^{3}$} \psdot[linecolor=blue](0.04,-1) \uput{0.1}[90]{0}(0.04,-1){$\delta^{\frac{3}{2}}t^{-3}$} \psdot[linecolor=blue](1,0.54) \uput{0.1}[145]{0}(1,0.54){$\delta^1t^{2}$} \psdot[linecolor=blue](1,0.31) \uput{0.1}[-45]{0}(1,0.31){$\delta^1t^{0}$} \psdot[linecolor=blue](1,-0.78) \uput{0.1}[45]{0}(1,-0.78){$\delta^1t^{-2}$} \psdot[linecolor=red](-0.63,1) \uput{0.1}[45]{0}(-0.63,1){$\delta^0t^{0}$} \psdot[linecolor=red](-0.63,-1) \uput{0.1}[45]{0}(-0.63,-1){$\delta^0t^{0}$} \uput{0.05}[0]{0}(-0.6,0.3){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.3}[-20]{0}(0,0){\textcolor{blue}{$L_{T_2}$}} \psdot(-0.855,1.33) \uput{0.1}[90]{0}(-0.855,1.33){$\delta^{\frac{3}{2}}t^{1}$} \psdot(-0.6,0.81) \uput{0.1}[-40]{0}(-0.6,0.81){$\delta^{\frac{3}{2}}t^{3}$} \psdot(-0.645,-0.145) 
\uput{0.1}[135]{0}(-0.645,-0.145){$\delta^{\frac{3}{2}}t^{1}$} \psdot(-0.63,-0.33) \uput{0.1}[-45]{0}(-0.63,-0.33){$\delta^{\frac{3}{2}}t^{-1}$} \psdot(-0.737,-1.235) \uput{0.1}[-60]{0}(-0.737,-1.235){$\delta^{\frac{3}{2}}t^{-3}$} \psdot(-1.09,-1.4) \uput{0.1}[-90]{0}(-1.09,-1.4){$\delta^{\frac{3}{2}}t^{-1}$} \pscircle*[linecolor=white](-1,0.3){3pt} \rput(-1,0.3){$a$} \pscircle*[linecolor=white](0.5,-1){3pt} \rput(0.5,-1){$b$} \pscircle*[linecolor=white](1,-0.3){3pt} \rput(1,-0.3){$c$} \pscircle*[linecolor=white](0.5,1){3pt} \rput(0.5,1){$d$} \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } \caption{A tangle decomposition of the trefoil (left) and the intersection theory of the corresponding tangle invariants (right).}\label{fig:Trefoil} \end{figure} \begin{example}[(Trefoil knot)] Figure~\ref{fig:Trefoil} shows a tangle decomposition of the trefoil knot and the pairing of the two corresponding curves. Again, this example is very similar to the previous two, except that the two open components of the two tangles are identified in the trefoil knot, so the same happens to the Alexander gradings: $t_1=t=t_2$. Intersection points lie in a single $\delta$-grading and Alexander gradings $$(t+t^{-1})(t^2+t^0+t^{-2}).$$ \end{example} \begin{remark} The attentive reader will have noticed that in each of the previous three examples, the 4-punctured sphere together with the curves $\textcolor{red}{L_{\mr(T_1)}}$ and $\textcolor{blue}{L_{T_2}}$ constitutes a well-defined multi-pointed Heegaard diagram for the link $L(\textcolor{red}{T_1},\textcolor{blue}{T_2})$, where we interpret $\textcolor{red}{L_{\mr(T_1)}}$ as an $\textcolor{red}{\alpha}$-curve and $\textcolor{blue}{L_{T_2}}$ as a $\textcolor{blue}{\beta}$-curve. More generally, however, this is true only if both $\textcolor{red}{T_1}$ and $\textcolor{blue}{T_2}$ are rational. In general, we cannot find a Heegaard diagram for the link $L(\textcolor{red}{T_1},\textcolor{blue}{T_2})$ whose Heegaard surface is a 4-punctured sphere inducing the given tangle decomposition. Theorem~\ref{thm:CFTdGlueingAsMorphism} says that, nonetheless, we can regard the 4-punctured sphere together with $\textcolor{red}{L_{\mr(T_1)}}$ and $\textcolor{blue}{L_{T_2}}$ as a generalized Heegaard diagram for $L(\textcolor{red}{T_1},\textcolor{blue}{T_2})$ and use it to compute $\HFL(L(\textcolor{red}{T_1},\textcolor{blue}{T_2}))$, even though $\textcolor{red}{L_{\mr(T_1)}}$ and $\textcolor{blue}{L_{T_2}}$ might have multiple and possibly immersed components. Below, we compute two more examples to illustrate this point of view. 
\end{remark} \begin{figure}[t] \centering { \psset{unit=0.2,linewidth=\stringwidth} \begin{pspicture}[showgrid=false](-11,-16.5)(11,16.5) \rput{-45}(0,0){ \psarc[linecolor=red](-9,1){5}{-90}{0} \psarc[linecolor=red](1,-9){5}{90}{180} \psarc[linecolor=red]{->}(-9,1){5}{-90}{-15} \psarc[linecolor=red]{->}(1,-9){5}{160}{165} \psecurve[linecolor=blue] (-5,3)% (-1,4)(3,3)(4,-1)% (3,-5) \psecurve[linewidth=\stringwhite,linecolor=white](5,7)% (2,6)% (2,2)% (6,2)% (7,5) \psecurve[linecolor=blue](5,7)% (2,6)% (2,2)% (6,2)% (7,5) \psecurve[linewidth=\stringwhite,linecolor=white] (-1,4)(3,3)(4,-1)% (3,-5) \psecurve[linecolor=blue] (-1,4)(3,3)(4,-1)% (3,-5) \psecurve[linecolor=blue]% (2,2)% (6,2)% (7,5)(3,5)(4,9)(3,13) \psecurve[linewidth=\stringwhite,linecolor=white]% (13,3)(9,4)(5,3)(5,7)% (2,6)% (2,2)% \psecurve[linecolor=blue]% (13,3)(9,4)(5,3)(5,7)% (2,6)% (2,2)% \psecurve[linewidth=\stringwhite,linecolor=white]% (6,2)% (7,5)(3,5)(4,9) \psecurve[linecolor=blue]% (6,2)% (7,5)(3,5)(4,9) \psline[linecolor=blue]{<-}(-1,4)(-0.9,4) \psline[linecolor=blue]{<-}(9,4)(8.9,3.97) \pscircle[linestyle=dotted](4,4){5} \pscircle[linestyle=dotted](-4,-4){5} \rput{45}(-10,0){$t$} \rput{45}(0,10){$t$} \rput{45}(1.25,1.25){$a$} \rput{45}(6.75,1.25){$b$} \rput{45}(6.5,6.5){$c$} \rput{45}(1.25,6.75){$d$} \rput{45}(-6.75,-6.75){$a$} \rput{45}(-1.25,-6.75){$b$} \rput{45}(-1.5,-1.5){$c$} \rput{45}(-6.75,-1.25){$d$} \psecurve{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) } \end{pspicture} }\qquad{\psset{unit=2} \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \uput{0.05}[180]{0}(-1.15,0.25){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.1}[-90]{0}(0,0){\textcolor{blue}{$L_{T_2}$}} \psecurve[linecolor=blue]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) \rput{180}(0,0){ \psecurve[linecolor=blue]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) } \psecurve[linecolor=blue](0,-1.65)(-1.55,-1.4)(-1.55,-0.35)(-0.2,0.4)(1.25,1.25)(1.55,0.7)(-1.55,-0.7)(-1.25,-1.3)(1.3,-0.7)(1.45,-1.45)(0,-1.65)(-1.55,-1.4)(-1.55,-0.35) \rput{0}(0,1.1){ \psecurve[linecolor=red]% (1.4,0.35)(1.55,-0.55)% (0,-0.75)% (-1.4,-0.65)(-1.4,0.35)% (0,0.5)% (1.4,0.35)(1.55,-0.55)% (0,-0.8)% } \psset{dotsize=5pt} \psdot(1.325,0.37) \uput{0.05}[-40]{0}(1.325,0.37){$\delta^{\tfrac{1}{2}}t^{1}$} \psdot(-0.37,0.31) \uput{0.1}[-70]{0}(-0.37,0.31){$\delta^{\tfrac{1}{2}}t^{-1}$} \psdot[linecolor=red](-1,0.265) \uput{0.1}[25]{0}(-1,0.265){$\delta^{0}t^{0}$} \psdot[linecolor=red](1,0.3) \uput{0.1}[-150]{0}(1,0.3){$\delta^{0}t^{0}$} \psdot[linecolor=blue](0.53,1) \uput{0.1}[90]{0}(0.53,1){$\delta^{0}t^{0}$} \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } \caption{A tangle decomposition of the unknot as a closure 
of the $(2,-3)$-pretzel tangle (left) and the intersection theory of the corresponding tangle invariants (right).}\label{fig:Unknot} \end{figure} \begin{example}[(unknot closure of the $(2,-3)$-pretzel tangle)] Figure~\ref{fig:Unknot} shows the unknot closure of the $(2,-3)$-pretzel tangle $T_{2,-3}$. Only the embedded component of $L_{T_{2,-3}}$ contributes to the pairing, since the immersed components can be homotoped such that they do not intersect the embedded curve of the trivial tangle. Note that the gradings are different from Example~\ref{exa:HFTdpretzeltangle}. This is because the orientation of one of the tangle strands is reversed, which, for relative gradings, corresponds simply to reversing the Alexander grading of $t_2$ and leaving the $\delta$-grading unchanged. However, since I expect that the relative $\delta$-gradings of generators of peculiar modules can be lifted to absolute ones agreeing with those of the corresponding Kauffman states, I am using the $\delta$-grading from~\cite[Example~6.9]{HDsForTangles} here. The relevant generator of the peculiar module of $T_{2,-3}$ is $dy_3$, which corresponds to the blue intersection point labelled by its $\delta$- and Alexander grading on the right of Figure~\ref{fig:Unknot}. From this, we can compute the gradings of the two intersection points and obtain the once stabilized knot Floer homology of the unknot. \end{example} \begin{figure}[p] \centering {\psset{unit=3} \begin{pspicture}(-2,-1.7)(2,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \uput{0.05}[180]{0}(-1.1,0.3){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.05}[0]{0}(1.1,-0.3){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.05}[180]{0}(-1.1,-0.3){\textcolor{blue}{$L_{T_2}$}} \uput{0.05}[0]{0}(1.1,0.3){\textcolor{blue}{$L_{T_2}$}} \uput{0.1}[0]{0}(1,1){$t_2$} \uput{0.1}[180]{0}(-1,1){$t_1$} \uput{0.1}[180]{0}(-1,-1){$t_1$} \uput{0.1}[0]{0}(1,-1){$t_2$} \psecurve[linecolor=red]% (0.3,1.5)% (-1.4,1.4)(-1.45,1)(-1,0.65)(0.5,1.25)(1.3,1)(1.3,0.6)% (0.3,0.55)% (-1.4,0.5)(-1.45,1)(-0.5,1.35)(1,0.8)(1.3,1)(1.3,1.3)% (0.3,1.5)% (-1.4,1.4)(-1.45,1) \psecurve[linecolor=blue]% (-0.3,1.5)% (1.4,1.4)(1.45,1)(1,0.65)(-0.5,1.25)(-1.3,1)(-1.3,0.6)% (-0.3,0.55)% (1.4,0.5)(1.45,1)(0.5,1.35)(-1,0.8)(-1.3,1)(-1.3,1.3)% (-0.3,1.5)% (1.4,1.4)(1.45,1) \psecurve[linecolor=blue]% (0.3,-1.5)% (-1.4,-1.4)(-1.45,-1)(-1,-0.65)(0.5,-1.25)(1.3,-1)(1.3,-0.6)% (0.3,-0.55)% (-1.4,-0.5)(-1.45,-1)(-0.5,-1.35)(1,-0.8)(1.3,-1)(1.3,-1.3)% (0.3,-1.5)% (-1.4,-1.4)(-1.45,-1) \psecurve[linecolor=red]% (-0.3,-1.5)% (1.4,-1.4)(1.45,-1)(1,-0.65)(-0.5,-1.25)(-1.3,-1)(-1.3,-0.6)% (-0.3,-0.55)% (1.4,-0.5)(1.45,-1)(0.5,-1.35)(-1,-0.8)(-1.3,-1)(-1.3,-1.3)% (-0.3,-1.5)% (1.4,-1.4)(1.45,-1) \psset{dotsize=5pt} \psdot(0,0.505) \uput{0.05}[-90]{0}(0,0.505){$\delta^{0}t_1^{0}t_2^{0}$} \psdot(0,1.074) \psline{->}(0,0.8)(0,1.074) \uput{0.3}[-90]{0}(0,1.074){$\delta^{0}t_1^{-2}t_2^{-2}$} \psdot(0,1.2) \uput{0.05}[90]{0}(0,1.2){$\delta^{1}t_1^{2}t_2^{2}$} \psdot(0,1.53) \uput{0.05}[90]{0}(0,1.53){$\delta^{1}t_1^{0}t_2^{0}$} \psdot(-1.356,0.85) \uput{0.05}[180]{0}(-1.356,0.85){$\delta^{0}t_1^{0}t_2^{-2}$} \psdot(-1.34,1.13) \uput{0.05}[150]{0}(-1.34,1.13){$\delta^{1}t_1^{0}t_2^{2}$} \psdot(1.356,0.85) \uput{0.05}[0]{0}(1.356,0.85){$\delta^{0}t_1^{-2}t_2^{0}$} \psdot(1.34,1.13) \uput{0.05}[30]{0}(1.34,1.13){$\delta^{1}t_1^{2}t_2^{0}$} \uput{0.05}[0]{0}(-1,0.72){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{-1}$} 
\uput{0.05}[0]{0}(-1,0.41){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{-3}$} \uput{0.05}[180]{0}(1,0.72){$\delta^{-\tfrac{3}{2}}t_1^{-1}t_2^{-2}$} \uput{0.05}[180]{0}(1,0.41){$\delta^{-\tfrac{3}{2}}t_1^{1}t_2^{-2}$} \uput{0.05}[-90]{0}(-0.33,1){$\delta^{-1}t_1^{1}t_2^{-1}$} \uput{0.05}[-90]{0}(0.33,1){$\delta^{-2}t_1^{-1}t_2^{-3}$} \psdot[linecolor=blue](-1,0.8) \psdot[linecolor=blue](1,0.645) \psdot[linecolor=blue](-1,0.48) \psdot[linecolor=blue](1,0.34) \psdot[linecolor=blue](-0.33,1) \psdot[linecolor=blue](0.12,1) \psdot[linecolor=red](1,0.8) \psdot[linecolor=red](-1,0.645) \psdot[linecolor=red](1,0.48) \psdot[linecolor=red](-1,0.34) \psdot[linecolor=red](0.33,1) \psdot[linecolor=red](-0.12,1) \psdot(0,-0.505) \uput{0.05}[90]{0}(0,-0.505){$\delta^{0}t_1^{0}t_2^{0}$} \psdot(0,-1.074) \psline{->}(0,-0.8)(0,-1.074) \uput{0.3}[90]{0}(0,-1.074){$\delta^{0}t_1^{2}t_2^{2}$} \psdot(0,-1.2) \uput{0.05}[-90]{0}(0,-1.2){$\delta^{1}t_1^{-2}t_2^{-2}$} \psdot(0,-1.53) \uput{0.05}[-90]{0}(0,-1.53){$\delta^{1}t_1^{0}t_2^{0}$} \psdot(-1.356,-0.85) \uput{0.05}[180]{0}(-1.356,-0.85){$\delta^{0}t_1^{0}t_2^{2}$} \psdot(-1.34,-1.13) \uput{0.05}[210]{0}(-1.34,-1.13){$\delta^{1}t_1^{0}t_2^{-2}$} \psdot(1.356,-0.85) \uput{0.05}[0]{0}(1.356,-0.85){$\delta^{0}t_1^{2}t_2^{0}$} \psdot(1.34,-1.13) \uput{0.05}[-30]{0}(1.34,-1.13){$\delta^{1}t_1^{-2}t_2^{0}$} \uput{0.05}[0]{0}(-1,-0.41){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{1}$} \uput{0.05}[0]{0}(-1,-0.72){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{3}$} \uput{0.05}[180]{0}(1,-0.41){$\delta^{-\tfrac{3}{2}}t_1^{1}t_2^{2}$} \uput{0.05}[180]{0}(1,-0.72){$\delta^{-\tfrac{3}{2}}t_1^{-1}t_2^{2}$} \uput{0.05}[90]{0}(0.33,-1){$\delta^{-1}t_1^{-1}t_2^{1}$} \uput{0.05}[90]{0}(-0.33,-1){$\delta^{-2}t_1^{1}t_2^{3}$} \psdot[linecolor=red](-1,-0.8) \psdot[linecolor=red](1,-0.645) \psdot[linecolor=red](-1,-0.48) \psdot[linecolor=red](1,-0.34) \psdot[linecolor=red](-0.33,-1) \psdot[linecolor=red](0.12,-1) \psdot[linecolor=blue](1,-0.8) \psdot[linecolor=blue](-1,-0.645) \psdot[linecolor=blue](1,-0.48) \psdot[linecolor=blue](-1,-0.34) \psdot[linecolor=blue](0.33,-1) \psdot[linecolor=blue](-0.12,-1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } {\psset{unit=3} \begin{pspicture}(-2,-1.7)(2,1.64) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \uput{0.45}[45]{0}(0,0){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.4}[-90]{0}(0,0){\textcolor{red}{$L_{\mr(T_1)}$}} \uput{0.05}[90]{0}(0,0){\textcolor{blue}{$L_{T_2}$}} \psecurve[linecolor=red]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) \rput{180}(0,0){ \psecurve[linecolor=red]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) } \psecurve[linecolor=blue](0,-1.65)(-1.55,-1.4)(-1.55,-0.35)(-0.2,0.4)(1.25,1.25)(1.55,0.7)(-1.55,-0.7)(-1.25,-1.3)(1.3,-0.7)(1.45,-1.45)(0,-1.65)(-1.55,-1.4)(-1.55,-0.35) \psset{dotsize=5pt} \psdot(0.015,0.55) \uput{0.05}[150]{0}(0.015,0.55){$\delta^{1}t_1^{0}t_2^{4}$} \psdot(0.4,0.89) \uput{0.05}[-90]{0}(0.4,0.89){$\delta^{1}t_1^{2}t_2^{4}$} \psdot(0.97,1.25) 
\uput{0.05}[90]{0}(0.97,1.25){$\delta^{1}t_1^{0}t_2^{2}$} \psdot(1.41,1.15) \uput{0.05}[30]{0}(1.41,1.15){$\delta^{1}t_1^{2}t_2^{2}$} \psdot(-1.41,-1.17) \uput{0.05}[-150]{0}(-1.41,-1.17){$\delta^{1}t_1^{0}t_2^{-4}$} \psdot(-0.63,-1.3) \uput{0.05}[-80]{0}(-0.63,-1.3){$\delta^{1}t_1^{0}t_2^{-2}$} \psdot(-0.12,-1.065) \uput{0.05}[-80]{0}(-0.12,-1.065){$\delta^{1}t_1^{-2}t_2^{-4}$} \psdot(1.43,-0.84) \uput{0.05}[20]{0}(1.43,-0.84){$\delta^{1}t_1^{-2}t_2^{-2}$} \psdot[linecolor=blue](0.53,1) \uput{0.05}[-30]{0}(0.53,1){$\delta^{-1}t_1^{1}t_2^{1}$} \psdot[linecolor=blue](1,0.19) \uput{0.05}[-30]{0}(1,0.19){$\delta^{-\tfrac{3}{2}}t_1^{1}t_2^{0}$} \psdot[linecolor=blue](1,-0.61) \uput{0.05}[150]{0}(1,-0.61){$\delta^{-\tfrac{3}{2}}t_1^{-1}t_2^{0}$} \psdot[linecolor=blue](-0.01,-1) \uput{0.05}[90]{0}(-0.01,-1){$\delta^{-1}t_1^{-1}t_2^{-1}$} \psdot[linecolor=blue](-1,-0.2) \uput{0.05}[150]{0}(-1,-0.2){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{-1}$} \psdot[linecolor=blue](-1,0.07) \uput{0.05}[150]{0}(-1,0.07){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{1}$} \psdot[linecolor=red](-1,0.413) \uput{0.05}[-150]{0}(-1,0.413){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{-3}$} \psdot[linecolor=red](-1,0.726) \uput{0.05}[-150]{0}(-1,0.726){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{-1}$} \psdot[linecolor=red](-0.22,1) \uput{0.05}[120]{0}(-0.22,1){$\delta^{-1}t_1^{1}t_2^{-1}$} \psdot[linecolor=red](1,0.413) \uput{0.05}[-30]{0}(1,0.413){$\delta^{-\tfrac{3}{2}}t_1^{1}t_2^{-2}$} \psdot[linecolor=red](1,0.726) \uput{0.05}[-30]{0}(1,0.726){$\delta^{-\tfrac{3}{2}}t_1^{-1}t_2^{-2}$} \psdot[linecolor=red](0.22,1) \uput{0.05}[90]{0}(0.22,1){$\delta^{-2}t_1^{-1}t_2^{-3}$} \psdot[linecolor=red](-1,-0.413) \uput{0.05}[30]{0}(-1,-0.413){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{1}$} \psdot[linecolor=red](-1,-0.726) \uput{0.05}[30]{0}(-1,-0.726){$\delta^{-\tfrac{3}{2}}t_1^{0}t_2^{3}$} \psdot[linecolor=red](-0.22,-1) \uput{0.05}[210]{0}(-0.22,-1){$\delta^{-2}t_1^{1}t_2^{3}$} \psdot[linecolor=red](1,-0.413) \uput{0.05}[30]{0}(1,-0.413){$\delta^{-\tfrac{3}{2}}t_1^{1}t_2^{2}$} \psdot[linecolor=red](1,-0.726) \uput{0.05}[-135]{0}(1,-0.726){$\delta^{-\tfrac{3}{2}}t_1^{-1}t_2^{2}$} \psdot[linecolor=red](0.22,-1) \uput{0.05}[-40]{0}(0.22,-1){$\delta^{-1}t_1^{-1}t_2^{1}$} \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} } \caption{The main calculation for Example~\ref{exa:KinoshitaTerasakaLink}.}\label{fig:KTLinkMixedPairing} \end{figure} \begin{figure}[p] \centering \begin{subfigure}[b]{0.49\textwidth}\centering { \psset{unit=0.2,linewidth=\stringwidth \begin{pspicture}[showgrid=false](-11,-9)(11,9) \rput{-45}(0,0){ \psecurve[linecolor=red] (-13,-5)% (-9,-4)(-5,-5)(-4,-9)% (-5,-13) \psecurve[linewidth=\stringwhite,linecolor=white](-3,-1)% (-6,-2)% (-6,-6)% (-2,-6)% (-1,-3) \psecurve[linecolor=red](-3,-1)% (-6,-2)% (-6,-6)% (-2,-6)% (-1,-3) \psecurve[linewidth=\stringwhite,linecolor=white] (-13,-5)% (-9,-4)(-5,-5)(-4,-9)% \psecurve[linecolor=red] (-13,-5)% (-9,-4)(-5,-5)(-4,-9)% \psecurve[linecolor=red]% (5,-5)(1,-4)(-3,-5)(-3,-1)% (-6,-2)% (-6,-6)% \psecurve[linewidth=\stringwhite,linecolor=white]% (-6,-6)% (-2,-6)% (-1,-3)(-5,-3)(-4,1)(-5,5) \psecurve[linecolor=red]% (-6,-6)% (-2,-6)% (-1,-3)(-5,-3)(-4,1)(-5,5) \psecurve[linewidth=\stringwhite,linecolor=white]% (1,-4)(-3,-5)(-3,-1)% (-6,-2)% \psecurve[linecolor=red]% (1,-4)(-3,-5)(-3,-1)% (-6,-2)% 
\psline[linecolor=red]{<-}(1,-4)(0.9,-4.025) \psline[linecolor=red]{<-}(-4,-9)(-4,-8.9) \psecurve[linecolor=blue] (-5,3)% (-1,4)(3,3)(4,-1)% (3,-5) \psecurve[linewidth=\stringwhite,linecolor=white](5,7)% (2,6)% (2,2)% (6,2)% (7,5) \psecurve[linecolor=blue](5,7)% (2,6)% (2,2)% (6,2)% (7,5) \psecurve[linewidth=\stringwhite,linecolor=white] (-1,4)(3,3)(4,-1)% (3,-5) \psecurve[linecolor=blue] (-1,4)(3,3)(4,-1)% (3,-5) \psecurve[linecolor=blue]% (2,2)% (6,2)% (7,5)(3,5)(4,9)(3,13) \psecurve[linewidth=\stringwhite,linecolor=white]% (13,3)(9,4)(5,3)(5,7)% (2,6)% (2,2)% \psecurve[linecolor=blue]% (13,3)(9,4)(5,3)(5,7)% (2,6)% (2,2)% \psecurve[linewidth=\stringwhite,linecolor=white]% (6,2)% (7,5)(3,5)(4,9) \psecurve[linecolor=blue]% (6,2)% (7,5)(3,5)(4,9) \psline[linecolor=blue]{<-}(-1,4)(-0.9,4) \psline[linecolor=blue]{<-}(4,9)(3.975,8.9) \pscircle[linestyle=dotted](4,4){5} \pscircle[linestyle=dotted](-4,-4){5} \rput{45}(-10,0){$t_1$} \rput{45}(0,10){$t_2$} \rput{45}(1.25,1.25){$a$} \rput{45}(6.75,1.25){$b$} \rput{45}(6.5,6.5){$c$} \rput{45}(1.25,6.75){$d$} \rput{45}(-6.75,-6.75){$a$} \rput{45}(-1.25,-6.75){$b$} \rput{45}(-1.5,-1.5){$c$} \rput{45}(-6.75,-1.25){$d$} \psecurve{C-C}(8,-3)(-1,4)(-10,-3)(-9,-4)(-8,-3) \psecurve{C-C}(-3,8)(4,-1)(-3,-10)(-4,-9)(-3,-8) \psecurve[linewidth=\stringwhite,linecolor=white](-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve[linewidth=\stringwhite,linecolor=white](3,-8)(-4,1)(3,10)(4,9)(3,8) \psecurve{C-C}(-8,3)(1,-4)(10,3)(9,4)(8,3) \psecurve{C-C}(3,-8)(-4,1)(3,10)(4,9)(3,8) } \end{pspicture} } \caption{}\label{fig:KTLinkDiagram} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth}\centering {\psset{unit=1.5} \begin{pspicture}(-1.7,-1.2)(1.7,1.2) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.1}[-90]{0}(1.6,0){$t_2$} \uput{0.1}[0]{0}(0,-1){$t_1$} \rput(-1.2,0.6){ \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(-1.2,0){ \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(-0.6,0.6){ \uput{0.1}[135]{0}(-0.025,0.025){$2$} \uput{0.1}[-45]{0}(0.025,-0.025){$2$} \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(-0.6,0){ \uput{0.1}[135]{0}(-0.025,0.025){$2$} \uput{0.1}[-45]{0}(0.025,-0.025){$2$} \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(0,0.6){ \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(0,0){ \uput{0.1}[135]{0}(-0.025,0.025){$3$} \uput{0.1}[-45]{0}(0.025,-0.025){$3$} \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(0,-0.6){ \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(0.6,0){ \uput{0.1}[135]{0}(-0.025,0.025){$2$} \uput{0.1}[-45]{0}(0.025,-0.025){$2$} \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(0.6,-0.6){ \uput{0.1}[135]{0}(-0.025,0.025){$2$} \uput{0.1}[-45]{0}(0.025,-0.025){$2$} \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(1.2,0){ \pscircle*(-0.025,0.025){0.05} 
\pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \rput(1.2,-0.6){ \pscircle*(-0.025,0.025){0.05} \pscircle[fillcolor=white,fillstyle=solid](0.025,-0.025){0.05} } \end{pspicture} } \caption{}\label{fig:KTLinkFinalResult} \end{subfigure} \\ \begin{subfigure}[b]{\textwidth}\centering {\psset{unit=0.75} \begin{tabular}{cccc} \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \uput{0}[90]{0}(0,0.5){\textcolor{blue}{$L_{T_2}$}} \psecurve{->}(1.7,1)(0,0.5)(1.7,0)(3.4,0.5) \uput{0}[0]{0}(-1.7,0){\textcolor{red}{$L_{\mr(T_1)}$}} \psecurve{->}(-0.6,-1.7)(-0.3,0)(0,-1.7)(-0.3,-3.4) \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \psecurve[linecolor=blue]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \psecurve[linecolor=blue]% (1.2,-0.95)(0,-1.5)(-1.5,-1)(1.25,1.15)(-1.2,-1.05)(1.2,-0.95)(0,-1.5)(-1.5,-1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \rput{180}(0,0){ \psecurve[linecolor=blue]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) } \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} \\ \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \psecurve[linecolor=red]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle*(0.6,-0.6){0.1} 
\pscircle*(0,-0.6){0.1} \pscircle*(0.6,0){0.1} \pscircle*(0.05,-0.05){0.1} \pscircle[fillcolor=white,fillstyle=solid](-0.6,0.6){0.1} \pscircle[fillcolor=white,fillstyle=solid](0,0.6){0.1} \pscircle[fillcolor=white,fillstyle=solid](-0.6,0){0.1} \pscircle[fillcolor=white,fillstyle=solid](-0.05,0.05){0.1} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle*(1.2,-0.6){0.1} \pscircle*(0.6,-0.6){0.1} \pscircle*(1.2,0){0.1} \pscircle*(0.6,0){0.1} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \end{pspicture} \\ \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \psecurve[linecolor=red]% (1.2,-0.95)(0,-1.5)(-1.5,-1)(1.25,1.15)(-1.2,-1.05)(1.2,-0.95)(0,-1.5)(-1.5,-1) \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle[fillstyle=solid, fillcolor=white](-1.2,0.6){0.1} \pscircle[fillstyle=solid, fillcolor=white](-0.6,0.6){0.1} \pscircle[fillstyle=solid, fillcolor=white](-1.2,0){0.1} \pscircle[fillstyle=solid, fillcolor=white](-0.6,0){0.1} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle*(0.05,-0.05){0.1} \pscircle[fillcolor=white,fillstyle=solid](-0.05,0.05){0.1} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) 
\uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle[fillstyle=solid, fillcolor=white](1.2,-0.6){0.1} \pscircle[fillstyle=solid, fillcolor=white](0.6,-0.6){0.1} \pscircle[fillstyle=solid, fillcolor=white](1.2,0){0.1} \pscircle[fillstyle=solid, fillcolor=white](0.6,0){0.1} \end{pspicture} \\ \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linestyle=dotted](1,1)(1,-1) \psline[linestyle=dotted](1,-1)(-1,-1) \psline[linestyle=dotted](-1,-1)(-1,1) \psline[linestyle=dotted](-1,1)(1,1) \rput{180}(0,0){ \psecurve[linecolor=red]% (0,1.5)% (-1.35,1.35)(-1.375,1)(-1,0.725)(0.5,1.3)(1.375,1)(1.35,0.55)% (0,0.55)% (-1.35,0.55)(-1.375,1)(-0.5,1.3)(1,0.725)(1.375,1)(1.35,1.35)% (0,1.5)% (-1.35,1.35)(-1.375,1) } \pscircle[fillstyle=solid, fillcolor=white](1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,1){0.08} \pscircle[fillstyle=solid, fillcolor=white](1,-1){0.08} \pscircle[fillstyle=solid, fillcolor=white](-1,-1){0.08} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle*(-1.2,0.6){0.1} \pscircle*(-0.6,0.6){0.1} \pscircle*(-1.2,0){0.1} \pscircle*(-0.6,0){0.1} \end{pspicture} & \begin{pspicture}(-1.7,-1.7)(1.7,1.7) \psline[linecolor=lightgray](-1.5,0.6)(1.5,0.6) \psline[linecolor=lightgray](-1.5,-0.6)(1.5,-0.6) \psline[linecolor=lightgray](-1.2,0.9)(-1.2,-0.9) \psline[linecolor=lightgray](-0.6,0.9)(-0.6,-0.9) \psline[linecolor=lightgray](0.6,0.9)(0.6,-0.9) \psline[linecolor=lightgray](1.2,0.9)(1.2,-0.9) \psline{->}(0,1.1)(0,-1.1) \psline{->}(-1.7,0)(1.7,0) \uput{0.2}[-90]{0}(1.6,0){$t_2$} \uput{0.2}[0]{0}(0,-1){$t_1$} \pscircle*(-0.6,0.6){0.1} \pscircle*(0,0.6){0.1} \pscircle*(-0.6,0){0.1} \pscircle*(-0.05,0.05){0.1} \pscircle[fillcolor=white,fillstyle=solid](0.6,-0.6){0.1} \pscircle[fillcolor=white,fillstyle=solid](0,-0.6){0.1} \pscircle[fillcolor=white,fillstyle=solid](0.6,0){0.1} \pscircle[fillcolor=white,fillstyle=solid](0.05,-0.05){0.1} \end{pspicture} \end{tabular} } \caption{}\label{fig:KTLinkTable} \end{subfigure} \caption{A tangle decomposition of the Kinoshita-Terasaka link L10n36 \protect\cite[Figure~17]{OSHFLThurston} (a) and the intersection theory of the corresponding tangle invariants (b). The table (c) shows the intersection theory for each pair of components separately. The main calculation is done in Figure~\ref{fig:KTLinkMixedPairing}. In (b) and (c), generators of the homology groups are denoted by \protect\begin{pspicture}(-0.1,-0.1)(0.1,0.1) \protect\pscircle[fillcolor=white,fillstyle=solid]{0.075} \protect\end{pspicture} or \protect\begin{pspicture}(-0.1,-0.1)(0.1,0.1) \protect\pscircle*{0.075} \protect\end{pspicture}, depending on whether their $\delta$-grading is 0 or 1. 
They are arranged in the coordinate systems according to their Alexander gradings. In (b), the labels $2$ and $3$ of some generators indicate their multiplicity. }\label{fig:KTLink} \end{figure} \begin{example}[(Kinoshita-Terasaka link)]\label{exa:KinoshitaTerasakaLink} The Kinoshita-Terasaka link $L_{KT}$ can be decomposed into the $(2,-3)$-pretzel tangle $\textcolor{blue}{T_2}=T_{2,-3}$ and $\textcolor{red}{T_1}=\mr(T_{2,-3})$, as shown in Figure~\ref{fig:KTLinkDiagram}. Thus, its link Floer homology can be computed as the self-pairing of $L_{T_{2,-3}}$, ie $$\HFL(L_{KT})\cong\LagrangianFH(L_{T_{2,-3}},L_{T_{2,-3}}).$$ The Lagrangian Floer homology can be computed for each pair of components separately. The result is shown in the table of Figure~\ref{fig:KTLinkTable}. The final result is shown in Figure~\ref{fig:KTLinkFinalResult}. Ozsváth and Szabó computed the link Floer homology of $L_{KT}$ in~\cite[Figure~24]{OSHFLThurston}. Note that our orientation of the strand $t_1$ is opposite to the one in~\cite[Figure~17]{OSHFLThurston}, so our result agrees with theirs after reversing the Alexander grading corresponding to $t_1$. Let us discuss the computation of the table in Figure~\ref{fig:KTLinkTable} in more detail. The two opposite immersed components are easiest to pair, since they can be homotoped such that they do not intersect each other. The intersection of each immersed component with itself, however, gives eight generators each. This computation is done in the upper half of Figure~\ref{fig:KTLinkMixedPairing}. The pairing of the embedded component with itself is equal to the link Floer homology of the unlink, ie two generators in gradings $\delta^0t_1^0t_2^0$ and $\delta^1t_1^0t_2^0$, respectively. The lower half of Figure~\ref{fig:KTLinkMixedPairing} shows the pairing of the two immersed components with the embedded component. The opposite pairing is done similarly by switching red and blue curves, which reverses the Alexander gradings of intersection points between the curves and acts like $\delta\mapsto 1-\delta$ on their $\delta$-gradings. \end{example}
{ "timestamp": "2019-07-10T02:01:45", "yymm": "1712", "arxiv_id": "1712.05050", "language": "en", "url": "https://arxiv.org/abs/1712.05050" }
\section{Introduction} This paper deals with the generalized Kadomtsev-Petviashvili (GKP) equation in $\mathbb{R}^N$ \begin{equation} \label{eq1*} u_t + u_{xxx}+(h(u))_x = D^{-1}_{x}\Delta_y u, \end{equation} where $(t,x,y)\in \R^{+}\times\R \times\R^{N-1},$ $ N \geq 2,$ $\displaystyle D^{-1}_{x} f(x,y)=\int_{-\infty}^{x}f(s,y)ds$ and $\displaystyle \Delta_y=\sum_{i=1}^{N-1}\frac{\partial^2}{\partial y^{2}_{i}}.$ We are interested, in particular, in the existence of a solitary wave for (\ref{eq1*}), that is, a solution $u$ of the form $u(t,x,y)=u(x-\tau t, y),$ with $\tau>0$. Hence, such a function $u$ must satisfy the problem \begin{equation*}\label{eq2*} -\tau u_x + u_{xxx}+(h(u))_x = D^{-1}_{x}\Delta_y u, \qquad\hbox{in }\R^N, \end{equation*} or, equivalently, \begin{equation}\label{eq4} (- u_{xx}+ \tau u + D^{-2}_{x}\Delta_y u -h(u))_x=0, \qquad\hbox{in }\R^N. \end{equation} We would like to point out that equation (\ref{eq1*}) is a two-dimensional Korteweg-de Vries type equation when $h(t)=t^{2}$, which is a model for long dispersive waves, essentially unidimensional, but having small transverse effects, see \cite{GKP}. For the Cauchy problem associated with equation (\ref{eq1*}), we would like to cite, e.g., \cite{Bourgain, Fam, IsazaM} and the survey \cite{Saut}. The generalized Kadomtsev-Petviashvili $(GKP)$ equation has been studied by many authors, see for instance G\"ung\"or and Winternitz \cite{gungor}, Tian and Gao \cite{tian}, Zhang, Xu and Ma \cite{zhang} and references therein, where they focused on the study of solitary or soliton solutions, complete integrability, etc. The pioneering work is due to De Bouard and Saut \cite{SautB2,SautB1}, who treated the nonlinearity $h(s)=|s|^{p}s$ assuming that $p=\frac{m}{n}$, with $m$ and $n$ relatively prime and $n$ odd, and $1 \leq p < 4$ if $N=2,$ or $1 \leq p< 4/3$ if $N=3$. In the mentioned papers, De Bouard and Saut obtained existence results for equation (\ref{eq1*}) by combining minimization with the concentration-compactness principle \cite{Lions}. In \cite{SautB3}, instead, the same authors proved that the solutions obtained in the former papers are cylindrically symmetric. For the regularity of the solutions they assumed $p=2,3,4$ if $N=2,$ and $p=2$ if $N=3.$ In an interesting paper \cite{Klein}, Klein and Saut used numerical simulations to analyse several quantitative properties of the De Bouard and Saut existence results, such as blow-up and the stability or instability of solitary waves. In that paper, they also study the zero-mass case and survey various mathematical results on this subject. In Willem \cite{Willem} and Wang and Willem \cite{WangWillem} the existence of solitary waves for a class of $(GKP)$ problems of the type (\ref{eq1*}) was considered for an autonomous continuous nonlinearity $h$ with $N=2,$ and existence and multiplicity results were proved, respectively. Their results were obtained by applying the mountain pass theorem \cite{AR} and Lusternik-Schnirelman theory, respectively. In \cite{Liang}, Liang and Su proved the existence of a solution for a class of $(GKP)$ problems of the type (\ref{eq1*}) involving a nonautonomous continuous function with $N \geq 2,$ while Xuan \cite{Xuan} treated the autonomous case in higher dimensions.
Applying the linking theorem stated in \cite{Szulkin}, He and Zou in \cite{HeZou} obtained a nontrivial solution for (\ref{eq1*}) in higher dimensions, without the Ambrosetti-Rabinowitz growth condition given in \cite{AR} (see also \cite{Zou} and \cite{chineses} for multiplicity results). We recall that in all the above papers the regularity of the solitary waves has not been treated. Recently, Alves and Miyagaki \cite{AlvesGKP} treated the nonautonomous case, obtaining results similar to those in, for instance, \cite{SautB2,SautB1}, as well as regularity properties of the solutions. Paumond in \cite{Paumond} obtained nonsymmetric solutions for (\ref{eq1*}) with $N=5,$ extending the results in \cite{SautB2,SautB1}. Motivated by the above articles dealing with the Kadomtsev-Petviashvili equation, as well as by the fact that variational methods can be employed to find solitary waves for (\ref{eq1*}), this paper concerns the existence of solitary waves for problem (\ref{eq4}) with $\tau=1,$ in higher dimensions. Throughout this paper, the function $h$ belongs to $C^1(\R)$ and $h(0)=0$. Here, we are going to work with two classes of problems: the Positive Mass case and the Zero Mass case. \\ \noindent {\bf Problem 1:} {\it Positive Mass}. In this case, we assume that $h$ satisfies the following conditions: \begin{enumerate}[label=($h_\arabic{*})$, ref=$h_\arabic{*}$] \item \label{hp} there exists $p\in (1, \bar{N}-1),$ where $\bar{N}=\frac{2(2N-1)}{2N-3},$ such that \[ \lim_{|t| \to +\infty}\frac{h(t)}{|t|^p}=0; \] \item \label{h'} $h'(0)=0$; \item \label{h>0} there exists $\xi > 0$ such that $2H(\xi)-\xi^{2}>0$, where $H(t)=\int_{0}^{t}h(r)dr$. \end{enumerate} Our main result for this class of problems is the following \begin{thm}\label{T1} Suppose \eqref{hp}-\eqref{h>0}. Then problem (\ref{eq4}) possesses at least one nontrivial solution. \end{thm} Our second class of problems is the following\\ \noindent {\bf Problem 2:}\, {\it Zero Mass.} \\ \noindent We suppose \eqref{h>0} and the conditions below \begin{enumerate}[label=($h_\arabic{*})$, ref=$h_\arabic{*}$]\setcounter{enumi}{3} \item \label{hf} $h(t)=f(t)+t$; \item\label{hf5} $\displaystyle \lim_{t \to 0}\frac{f(t)}{|t|^{\bar{N}-1}}=\lim_{|t| \to +\infty}\frac{f(t)}{|t|^{\bar{N}-1}}=0$. \end{enumerate} Observe that, by \eqref{h>0} and \eqref{hf}, defining $F(t)=\int_{0}^{t}f(r)dr$, we infer that \begin{equation}\label{f>0} \hbox{there exists }\xi > 0\hbox{ such that }F(\xi)>0. \end{equation} For the Zero Mass case, problem (\ref{eq4}) will be written in the form \begin{equation}\label{eq4'} (-u_{xx} - f(u) + D^{-2}_{x} \Delta_y u)_x=0, \quad \mbox{in} \ \mathbb{R}^N. \end{equation} \\ Related to this class of problems we have the following result \begin{thm}\label{T2} Suppose \eqref{h>0}-\eqref{hf5}. Then problem (\ref{eq4'}) possesses at least one nontrivial solution. \end{thm} One of the main motivations to study Problems 1 and 2 comes from the seminal papers due to Berestycki and Lions \cite{BL}, for $N\geq 3,$ and to Berestycki, Gallou\"et and Kavian \cite{kavian}, for $N=2,$ where the authors proved the existence of a ground state, namely a solution which minimizes the action among all the nontrivial solutions, for the problem $$ \left\{ \begin{array}{l} -\Delta u = g(u), \quad \mbox{in} \ \R^N,\\ u \in H^1(\R^N), \end{array}\right.
$$ under the following assumptions on the nonlinearity $g$: \begin{description} \item[$ (g_1)$] $g$ is an odd and continuous function; \item[$ (g_2)$] $-\infty < \liminf_{s \rightarrow 0^+}\frac{ g(s)}{s} \leq \limsup_{s \rightarrow 0^+} \frac{ g(s)}{s} = -m \leq 0;$ \item[$ (g_3)$] $-\infty < \limsup_{s \rightarrow +\infty}\frac{ g(s)}{s^{2^*-1}}\leq 0, \ (N \geq 3) , \limsup_{s\rightarrow +\infty}\frac{ g(s)}{e^{\alpha s^2}}\leq 0, \forall \alpha>0, \ (N=2);$ \item[$ (g_4)$] there exists $\tau > 0 $ such that $G(\tau) := \int_{0}^{\tau} g(s) ds > 0;$ \end{description} where $2^* = 2N/(N-2).$ They considered the two cases $m>0$ and $m=0$, the so called \lq\lq positive mass\rq\rq\ and \lq\lq zero mass\rq\rq\ cases, respectively. The last case is related to the Yang-Mills equation, see, e.g. \cite{Gidas}. After these two pioneering papers, many researchers have worked on this subject, extending or improving the results in several ways, see, for instance, \cite{Monteo1, Monteo2, AP, Benci, Micheletti, HIT,JT,AW} and references therein: clearly the list is not complete. It is very important to point out that in most of the above mentioned papers, which involve the Laplacian operator, the space of radial functions plays an important role because of its compact embedding properties. On the contrary, for problems involving the Kadomtsev-Petviashvili equation, we no longer have a similar result for radial functions, and therefore we need to use different arguments to prove our main results. Here, we will adapt to our problem the variational approach explored in Jeanjean \cite{jeanjean} and Hirata, Ikoma and Tanaka \cite{HIT} (see also \cite{ADP,CPDS}) by considering an auxiliary functional that allows us to construct a suitable Palais-Smale sequence, which {\em almost} satisfies a Pohozaev type identity, see Sections 3 and 4 for more details. The paper is organized as follows. In Section 2 we present our functional setting, describing its embedding properties. In the last two sections, instead, we treat the positive mass case and the zero mass case, proving our main existence results. \vspace{0.5 cm} \noindent {\bf Notations:} \, Throughout the paper, unless explicitly stated, the symbol $C$ will always denote a generic positive constant, which may vary from line to line. The symbols \lq\lq $\rightarrow$ \rq\rq and \lq\lq $\rightharpoonup$ \rq\rq denote, respectively, strong and weak convergence, and all the convergences involving sequences indexed by $n\in \N$ are meant as $n\rightarrow \infty.$ Moreover we denote by $|\cdot |_q$ the usual norm of the Lebesgue space $L^q(\R^N)$, for $q\in [1,+\infty]$. Finally, for all $(x,y)\in \R \times \R^{N-1}$ and $r>0$, we denote by $B_r((x,y))$ the ball centred at $(x,y)$ with radius $r$. \section{Functional setting } We intend to study our problem using variational methods and, as a first step, we introduce our functional setting. \begin{deff} On $Y=\{ g_x: g \in C_{0}^{\infty}(\R^N)\}$ define the inner product \begin{equation*} (u,v)=\int_{\R^N} \left( u_x v_x +D^{-1}_{x}\nabla_y u \cdot D^{-1}_{x} \nabla_y v + uv\right) dV, \end{equation*} with the corresponding norm \begin{equation*} \|u\|=\left(\int_{\R^N} \left( |u_x|^2 +|D^{-1}_{x}\nabla_y u|^2 + |u|^2\right) dV \right)^{\frac{1}{2}}, \end{equation*} where $\nabla_y=(\frac{\partial}{\partial y_1}, \ldots,\frac{\partial}{\partial y_{N-1}} ) $ and $dV= dx \ dy.$ We say that $u:\R^N \rightarrow \R$ belongs to $X$ if there exists a sequence $\{u_n\}\subset Y$ such that \begin{description} \item[a)] $ u_n \rightarrow u$ a.e.
on $\R^N$, \item[b)] $\|u_j - u_k\|\rightarrow 0$, as $ j, k \rightarrow \infty.$ \end{description} \end{deff} \begin{deff} On $Y=\{ g_x: g \in C_{0}^{\infty}(\R^N)\}$ define the inner product \begin{equation*} (u,v)_0=\int_{\R^N} \left( u_x v_x +D^{-1}_{x}\nabla_y u \cdot D^{-1}_{x} \nabla_y v \right) dV \end{equation*} with the corresponding norm \begin{equation*} \|u\|_0=\left(\int_{\R^N} \left( |u_x|^2 +|D^{-1}_{x}\nabla_yu|^2 \right) dV \right)^{\frac{1}{2}}. \end{equation*} We say that $u:\R^N \rightarrow \R$ belongs to $X_0$ if there exists a sequence $\{u_n\}\subset Y$ such that \begin{description} \item[a)] $ u_n \rightarrow u$ a.e. on $\R^N$, \item[b)] $\|u_j - u_k\|_0\rightarrow 0$, as $j, k \rightarrow \infty.$ \end{description} \end{deff} From the definitions of $X$ and $X_0$, the embedding $(X,\|\cdot\|) \hookrightarrow (X_0,\|\cdot\|_0)$ is continuous. The spaces $X$ and $X_0$, endowed with the inner products and norms given above, are Hilbert spaces. Moreover, we have the following continuous embeddings, whose proof can be found in \cite[Theorem 15.7, p. 323]{Besov}, and also in \cite[Lemma 2.1]{Liang} and \cite[Lemma 2.3]{Xuan}, \begin{equation} \label{continuous} X \hookrightarrow L^{q}(\R^N), \quad\mbox{for} \ 1\leq q \leq \bar{N}. \end{equation} Regarding compact embeddings, De Bouard and Saut \cite[Remark 1.1]{SautB1}, for $N=2,3,$ and Xuan \cite[Lemma 2.4]{Xuan}, for higher dimensions, proved that the embeddings \begin{equation}\label{compact} X \hookrightarrow L^{q}_{loc}(\R^N), \quad \mbox{for} \ 1\leq q < \bar{N}, \end{equation} are compact. The following lemma is a Lions type result, see \cite{Lions}, for the space $X$, whose proof can be found in \cite[Lemma 2.5]{Xuan}. \begin{lem} \label{Lions} (\cite[Lemma 2.5]{Xuan})\,If $\{u_n\}$ is a sequence bounded in $X$ and if $$ \sup_{(x,y)\in \mathbb{R}^N}\int_{B_r((x,y))}|u_n|^{2}\,dV \to 0,\ \mbox{as}\ n \rightarrow \infty, $$ then $u_n \to 0$ in $L^{q}(\mathbb{R}^N)$ for all $q \in (2,\bar{N})$. \end{lem} With respect to the continuous embedding $X_0 \hookrightarrow L^{\bar{N}}(\mathbb{R}^N)$, we have the following result \begin{lem}\label{le:l6} (\cite[Lemma 2.3]{Xuan}) There exists a constant $S>0$ such that \begin{equation*} |u|_{\bar{N}} \leq S \left(\int_{\mathbb{R}^N}(|u_x|^2 +|D^{-1}_{x}\nabla_y u|^2)\,dV \right)^{\frac{1}{2}}, \quad \forall u \in X_0. \end{equation*} \end{lem} Finally, before concluding this section, we would like to point out that the same approach explored in \cite[Lemma 2.4]{Xuan}, or \cite[Theorem 7.3]{Willem}, gives that the embeddings \begin{equation}\label{compact2} X_0 \hookrightarrow L^{q}_{loc}(\R^N), \quad \mbox{for} \quad 1\leq q < \bar{N}, \end{equation} are compact. Moreover, it is very important to say that if $\Omega \subset \R^N$ is a smooth bounded domain and $q \in [1,\bar{N}]$, there exists $C>0$ such that \begin{equation}\label{Estimativa 2} |u|_{L^{q}(\Omega)} \leq C \left(\int_{\Omega} \left( |u_x|^2 +|D^{-1}_{x}\nabla_y u|^2 + |u|^2\right) dV \right)^{\frac{1}{2}}, \quad \forall u \in X_0. \end{equation} The above information follows from properties of anisotropic Sobolev spaces; for more details see Besov, Il'in, and Nikol'skii \cite[Chapter 3]{Besov}. \section{The existence of a solution for the positive mass case} Throughout this section we will assume \eqref{hp}-\eqref{h>0}. We will find solutions of equation (\ref{eq4}) as critical points of the energy functional $I:X \longrightarrow \R$ given by $$ I(u)=\frac{1}{2}\|u\|^2-\int_{\R^N}H(u)\, dV.
$$ \begin{lem} \label{PM1} The functional $I:X \to \mathbb{R}$ verifies the mountain pass geometry, that is, \begin{itemize} \item[(i)] there are $\alpha, \rho>0$ such that $I(u) \geq \alpha$, for $\|u\|=\rho$; \item[(ii)] there is $e \in X \setminus\{0\}$ such that $I(e)<0$, with $\|e \|>\rho$. \end{itemize} \end{lem} \begin{proof} The proof of $(i)$ follows by standard arguments involving the growth condition on $h$, so it will be omitted. In order to prove $(ii)$, from \eqref{h>0} there is $\phi \in C^{\infty}_0(\mathbb{R}^{N})$ such that $$ \int_{\mathbb{R}^N}\left(H(\phi)-\frac{\phi^2}{2}\right)dV>0. $$ For $t>0$, setting $$ w_t(x,y)=\phi({x}/{t},{y}/{t^2}), \qquad \hbox{for }(x,y)\in \R\times\R^{N-1}, $$ by simple calculations, we derive $$ I(w_t)=\frac{t^{2N-3}}{2}\|\phi\|_{0}^2-t^{2N-1}\int_{\mathbb{R}^N}\left(H(\phi)-\frac{\phi^2}{2}\right)dV \to -\infty \quad \mbox{as} \ t \to +\infty. $$ Therefore, $(ii)$ follows by choosing $e=w_t$ with $t$ large enough. \end{proof} We set $$ \Gamma=\{ \gamma \in C([0,1],X)\,:\,\gamma(0) =0,\ \gamma(1)=e \} $$ and $$ \sigma=\inf_{\gamma \in \Gamma}\max_{t \in [0,1]}I(\gamma(t)). $$ Clearly, by Lemma \ref{PM1}, $\sigma\ge \alpha>0$. Following \cite{HIT,jj}, we introduce an auxiliary functional $\tilde I\in C^1(\R\times X,\R)$ given by \[ \tilde I(\t,u)=\frac{e^{(2N-3)\t}}{2}\|u\|^2_0+\frac{e^{(2N-1)\t}}{2}|u|_2^2-e^{(2N-1)\t}\ird H(u) \, dV. \] The following properties hold, for all $(\t,u)\in \R\times X$, \begin{align*} \tilde I(0,u)&=I(u), \\ \tilde I (\t,u)&=I(u(e^{-\t}x,e^{-2\t}y)). \end{align*} Indeed, the change of variables $(x,y)\mapsto (e^{\t}x,e^{2\t}y)$ has Jacobian $e^{(2N-1)\t}$ and scales $\|\cdot\|_0^2$ by the factor $e^{(2N-3)\t}$. We equip $\R\times X$ with the standard product norm $\|(\t,u)\|_{\R\times X}=(|\t|^2+\|u\|^2)^{1/2}$. By Lemma \ref{PM1}, it is easy to see that the functional $\tilde I$ also satisfies the mountain pass geometry. More precisely, the following holds \begin{lem} \label{PM1-tilde} The functional $\tilde I:\R\times X \to \mathbb{R}$ verifies the mountain pass geometry, that is, \begin{itemize} \item[(i)] there are $\alpha, \rho>0$ such that $\tilde I(\t,u) \geq \alpha$, for $\|(\t,u)\|_{\R\times X}=\rho$; \item[(ii)] there is $\tilde e \in \R\times X \setminus\{0\}$ such that $\tilde I(\tilde e)<0$, with $\|\tilde e \|_{\R\times X}>\rho$. \end{itemize} \end{lem} \begin{proof} For $(ii)$ it is sufficient to take $\tilde e=(0,e)$, while $(i)$ follows directly from Lemma \ref{PM1}. \end{proof} In what follows, we define the mountain pass level $\tilde \sigma$ for $\tilde I$ by $$ \tilde \sigma=\inf_{\tilde\gamma \in \tilde \Gamma}\max_{t \in [0,1]}\tilde I(\tilde \gamma(t)), $$ where $$ \tilde \Gamma=\{ \tilde \gamma \in C([0,1],\mathbb{R}\times X)\,:\,\tilde \gamma(0) =(0,0),\ \tilde\gamma(1)=(0,e) \}. $$ Hence, $\tilde \sigma\ge \alpha>0$. Arguing as in \cite[Lemma 4.1]{HIT}, we derive \begin{lem} The mountain pass levels of $I$ and $\tilde I$ coincide, namely $\sigma=\tilde \sigma$. \end{lem} Now, as an immediate consequence of Ekeland's variational principle, we have the result below, whose proof follows as in \cite[Lemma 2.3]{jeanjean}. \begin{lem}\label{le:ekeland} Let $\eps>0$. Suppose that $\tilde \gamma \in \tilde{\Gamma}$ satisfies \[ \max_{t \in [0,1]}\tilde I(\tilde \gamma(t))\le \sigma+\eps, \] then there exists $(\t, u)\in \R\times X$ such that \begin{enumerate} \item ${\rm dist}_{\R \times X}\big((\t,u),\tilde{\gamma}([0,1])\big)\le 2 \sqrt{\eps}$; \item $\tilde{I}(\t,u)\in [\sigma-\eps,\sigma+\eps]$; \item $\|D\tilde{I}(\t,u)\|_{\R \times X^*}\le 2 \sqrt{\eps}$.
\end{enumerate} \end{lem} Arguing as in \cite{HIT}, by Lemma \ref{le:ekeland}, the following proposition holds \begin{prop}\label{pr:sequence} There exists a sequence $\{(\t_n,u_n)\} \subset \R \times X$ such that, as $n \to +\infty$, we get \begin{enumerate} \item $\t_n \to 0$; \item $\tilde{I}(\t_n,u_n)\to \sigma$; \item $\de_\t\tilde{I}(\t_n,u_n)\to 0$; \item $\de_u\tilde{I}(\t_n,u_n)\to 0$ strongly in $X^*$. \end{enumerate} \end{prop} After the above study, we are ready to prove Theorem \ref{T1}. \begin{proof}[Proof of Theorem \ref{T1}] By Proposition \ref{pr:sequence}, there exists a sequence $\{(\t_n,u_n)\} \subset \R \times X$ such that \begin{equation}\label{sistema} \begin{cases} \dis\frac{e^{(2N-3)\t_n}}{2}\|u_n\|^2_0+\frac{e^{(2N-1)\t_n}}{2}|u_n|_2^2-e^{(2N-1)\t_n}\ird H(u_n) \, dV=\sigma+o_n(1), \\[4mm] \dis\frac{(2N-3)e^{(2N-3)\t_n}}{2}\|u_n\|^2_0+\frac{(2N-1)e^{(2N-1)\t_n}}{2}|u_n|_2^2-(2N-1)e^{(2N-1)\t_n}\ird H(u_n) \, dV=o_n(1), \\[4mm] \dis e^{(2N-3)\t_n}\|u_n\|^2_0+e^{(2N-1)\t_n}|u_n|_2^2-e^{(2N-1)\t_n}\ird h(u_n)u_n \, dV=o_n(1)\|u_n\|. \end{cases} \end{equation} From the first and second equations of the previous system we get \[ e^{(2N-3)\t_n}\|u_n\|^2_0=(2N-1)\sigma+o_n(1), \] and so, since $\t_n \to 0$, we infer that $\{u_n\}$ is bounded in $X_0$ and hence also in $L^{\bar{N}}(\RD)$, by Lemma \ref{le:l6}. Observe that, by \eqref{hp} and \eqref{h'}, we deduce that for any $\delta>0$, there exists $C_\delta>0$ such that \begin{equation*} |h(t)|\le \delta |t|+C_\delta |t|^{\bar{N}-1}, \qquad\hbox{for all }t\in \R. \end{equation*} Hence, by the third equation of \eqref{sistema}, using again the fact that $\t_n \to 0$, we find \[ \|u_n\|^2\le C e^{(2N-1)\t_n}\ird h(u_n)u_n \, dV+o_n(1)\|u_n\| \le C\left( \delta|u_n|^2_2 +C_\delta |u_n|^{\bar{N}}_{\bar{N}}\right)+o_n(1)\|u_n\|. \] Then, for $\delta$ small enough, and using the fact that $\{|u_n|_{\bar{N}}\}$ is bounded, it follows that $$ \|u_n\|^2\le C, \quad \forall n \in \mathbb{N}, $$ for some $C>0$, showing that $\{u_n\}$ is actually bounded in $X$. Moreover, by the continuous embedding \eqref{continuous}, we also have, for all $p<\bar{N}-1,$ \[ |u_n|_{p+1}^{2}\le C \|u_n\|^{2}\le C|u_n|^{p+1}_{p+1}, \quad \forall n \in \mathbb{N}, \] and so there exists $c>0$ such that $|u_n|_{p+1}\ge c>0$ for all $n \in \mathbb{N}$. Hence, by Lemma \ref{Lions}, there exist a sequence of points $\{(x_n,y_n)\}\subset\RD$ and $r,\beta>0$ such that $$ \int_{B_r((x_n,y_n))}|u_n|^{2}\,dV \ge \beta> 0. $$ Hence, setting $v_n=u_n(\cdot-x_n,\cdot-y_n)$, since $\{v_n\}$ is a bounded sequence in $X$, up to a subsequence, we must have \[ v_n \rightharpoonup v\neq 0 \qquad \hbox{weakly in }X. \] By the invariance by translations of $\tilde{I}$, we have that $\de_u\tilde{I}(\t_n,v_n)\to 0$ strongly in $X^*$, and so, since $\t_n \to 0$ and by the local compact embedding \eqref{compact}, we conclude that $I'(v)=0$, thus $v$ is a nontrivial solution of \eqref{eq4}. \end{proof} \section{The existence of a solution for the zero mass case} Throughout this section we will assume \eqref{h>0}-\eqref{hf5}. We start with a technical lemma, which will be used later on. \begin{lem}\label{le:supK} Let $\{w_n\} \subset X_0$ be a bounded sequence in $X_0$ with $$ \lim_{n \to +\infty}\sup_{(x,y)\in \mathbb{R}^N} \int_{K(x,y)}|f(w_n)w_n|\, dV=0, $$ where $K(x,y)=(x-1,x+1) \times B_1(y)$. Then $\displaystyle \lim_{n \to +\infty}\int_{\mathbb{R}^N}f(w_n)w_n\, dV=0$.
\end{lem} \begin{proof} By \eqref{hf5}, there is $C>0$ such that $$ |f(t)t| \leq C|t|^{\bar{N}}, \quad \forall t \in \mathbb{R}. $$ The above inequality combines with (\ref{Estimativa 2}) to give $$ \int_{K(x,y)}\!\!|f(w_n)w_n|\, dV \leq C\int_{K(x,y)}\!\!|w_n|^{\bar{N}}\, dV \leq C_1\left[\int_{K(x,y)}\!\!\left(|(w_n)_x|^2+|D_x^{-1} \nabla_y w_n|^{2}+|w_n|^2 \right)\, dV\right]^{\bar{N}/2}. $$ Thus, for all $\lambda \in (0,1)$, $$ \int_{K(x,y)}\!\!\!|f(w_n)w_n| dV \leq C_1^{\lambda}\left[\int_{K(x,y)}\!\!\!\left(|(w_n)_x|^2\!+|D_x^{-1}\nabla_y w_n|^{2}\!+|w_n|^2\right)dV\right]^{\lambda\bar{N}/2} \!\left(\int_{K(x,y)}\!\!\!|f(w_n)w_n|dV\right)^{1-\lambda}. $$ Setting $\lambda=2/\bar{N}$, we get $$ \int_{K(x,y)}\!\!\!|f(w_n)w_n| dV \leq C_1^{2/\bar{N}}\left[\int_{K(x,y)}\!\!\!\left(|(w_n)_x|^2\!+|D_x^{-1}\nabla_y w_n|^{2}\!+|w_n|^2\right) dV\right] \!\left(\int_{K(x,y)}\!\!\!|f(w_n)w_n| dV\right)^{1- 2/\bar{N}}. $$ By using the fact that $$ |w_n|_{L^{2}(K(x,y))} \leq C_*|w_n|_{L^{\bar{N}}(K(x,y))}, \quad \forall n \in \mathbb{N}, $$ for some constant $C_*>0$ independent of $(x,y) \in \R^N$, we get $$ \int_{\mathbb{R}^N}|f(w_n)w_n|\, dV \leq C_1^{2/\bar{N}}\|w_n\|_0^2 \left(\sup_{(x,y)\in \mathbb{R}^N}\int_{K(x,y)}|f(w_n)w_n|\, dV\right)^{1-2/\bar{N}} $$ and so, $$ \int_{\mathbb{R}^N}|f(w_n)w_n|\, dV \leq C_2\left(\sup_{(x,y)\in \mathbb{R}^N}\int_{K(x,y)}|f(w_n)w_n|\, dV \right)^{1- 2/\bar{N}}, $$ for all $n \in \mathbb{N}$ and for some $C_2>0$. From this it follows that $$ \lim_{n\to +\infty}\int_{\mathbb{R}^N}|f(w_n)w_n|\, dV =0, $$ and the claim is proved. \end{proof} Associated with equation (\ref{eq4'}), by \eqref{hf}, we have the energy functional $I_0:X_0 \longrightarrow \R$ given by $$ I_0(u)=\frac{1}{2}\|u\|_0^2-\int_{\R^N}F(u)\, dV. $$ \begin{lem} \label{PM10} The functional $I_0:X_0 \to \mathbb{R}$ verifies the mountain pass geometry, that is, \begin{itemize} \item[(i)] there are $\alpha, \rho>0$ such that $I_0(u) \geq \alpha$, for $\|u\|_0=\rho$; \item[(ii)] there is $e \in X_0 \setminus\{0\}$ such that $I_0(e)<0$, with $\|e \|_0>\rho$. \end{itemize} \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{PM1}, using \eqref{hf5} and \eqref{f>0}. \end{proof} We set $$ \Gamma_0=\{ \gamma \in C([0,1],X_0)\,:\,\gamma(0) =0,\ \gamma(1)=e \} $$ and $$ \sigma_0=\inf_{\gamma \in \Gamma_0}\max_{t \in [0,1]}I_0(\gamma(t)). $$ Clearly, by Lemma \ref{PM10}, $\sigma_0\ge \alpha>0$. As in the previous section, we introduce the auxiliary functional $\tilde I_0\in C^1(\R\times X_0,\R)$ given by \[ \tilde I_0(\t,u)=\frac{e^{(2N-3)\t}}{2}\|u\|^2_0-e^{(2N-1)\t}\ird F(u) \, dV. \] The following properties hold, for all $(\t,u)\in \R\times X_0$, \begin{align*} \tilde I_0(0,u)&=I_0(u), \\ \tilde I_0 (\t,u)&=I_0(u(e^{-\t}x,e^{-2\t}y)). \end{align*} We equip $\R\times X_0$ with the standard product norm $\|(\t,u)\|_{\R\times X_0}=(|\t|^2+\|u\|_0^2)^{1/2}$. Arguing as in Lemma \ref{PM1-tilde}, we can see that $\tilde I_0$ satisfies the mountain pass geometry. More precisely, we have \begin{lem} \label{PM10-tilde} The functional $\tilde I_0:\R\times X_0 \to \mathbb{R}$ verifies the mountain pass geometry, that is, \begin{itemize} \item[(i)] there are $\alpha, \rho>0$ such that $\tilde I_0(\t,u) \geq \alpha$, for $\|(\t,u)\|_{\R\times X_0}=\rho$; \item[(ii)] there is $\tilde e \in \R\times X_0 \setminus\{0\}$ such that $\tilde I_0(\tilde e)<0$ with $\|\tilde e \|_{\R\times X_0}>\rho$.
\end{itemize} \end{lem} Hence we define the mountain pass level $\tilde \sigma_0$ for $\tilde I_0$ by $$ \tilde \sigma_0=\inf_{\tilde\gamma \in \tilde \Gamma_0}\max_{t \in [0,1]}\tilde I_0(\tilde \gamma(t)), $$ where $$ \tilde \Gamma_0=\{ \tilde \gamma \in C([0,1],\mathbb{R}\times X_0)\,:\,\tilde \gamma(0) =(0,0),\ \tilde\gamma(1)=(0,e) \}. $$ Arguing as in the previous section, we have $\tilde \sigma_0\ge \alpha>0$ and $\sigma_0=\tilde \sigma_0$, and the following proposition holds \begin{prop}\label{pr:sequence0} There exists a sequence $\{(\t_n,u_n)\} \subset \R \times X_0$ such that, as $n \to +\infty$, we get \begin{enumerate} \item $\t_n \to 0$; \item $\tilde{I_0}(\t_n,u_n)\to \sigma_0$; \item $\de_\t\tilde{I_0}(\t_n,u_n)\to 0$; \item $\de_u\tilde{I_0}(\t_n,u_n)\to 0$ strongly in $X_0^*$. \end{enumerate} \end{prop} Now, we are going to prove Theorem \ref{T2}. \begin{proof}[Proof of Theorem \ref{T2}] By Proposition \ref{pr:sequence0}, there exists a sequence $\{(\t_n,u_n)\} \subset \R \times X_0$ such that, as $n \to +\infty$, we have \begin{equation}\label{sistema0} \begin{cases} \dis\frac{e^{(2N-3)\t_n}}{2}\|u_n\|^2_0-e^{(2N-1)\t_n}\ird F(u_n) \, dV=\sigma_0+o_n(1), \\[4mm] \dis\frac{(2N-3)e^{(2N-3)\t_n}}{2}\|u_n\|^2_0-(2N-1)e^{(2N-1)\t_n}\ird F(u_n) \, dV=o_n(1), \\[4mm] \dis e^{(2N-3)\t_n}\|u_n\|^2_0-e^{(2N-1)\t_n}\ird f(u_n)u_n \, dV=o_n(1)\|u_n\|_0. \end{cases} \end{equation} From the first and second equations of the previous system we get \begin{equation}\label{3sigma0} e^{(2N-3)\t_n}\|u_n\|^2_0=(2N-1)\sigma_0+o_n(1), \end{equation} and so, since $\t_n \to 0$, we infer that $\{u_n\}$ is bounded in $X_0$ and hence also in $L^{\bar{N}}(\RD)$, by Lemma \ref{le:l6}. Hence, by the third equation of \eqref{sistema0} and \eqref{3sigma0}, using again the fact that $\t_n \to 0$, there exists $c>0$ such that \begin{equation*} \ird f(u_n)u_n \, dV\ge c>0, \quad \forall n \in \mathbb{N}. \end{equation*} Hence, by Lemma \ref{le:supK}, there exist a sequence of points $\{(x_n,y_n)\}\subset\RD$ and $\bar c>0$ such that $$ \int_{K(x_n,y_n)}|f(u_n)u_n|\,dV \ge \bar c> 0, \quad \forall n \in \mathbb{N}. $$ Hence, setting $v_n=u_n(\cdot-x_n,\cdot-y_n)$, since $\{v_n\}$ is a bounded sequence in $X_0$, up to a subsequence, we have \[ v_n \rightharpoonup v \qquad \hbox{weakly in }X_0. \] By \eqref{hf5}, there is $C>0$ such that $$ |f(t)t| \leq \frac{\bar c}{2M}|t|^{\bar{N}}+C|t|^{2}, \quad \forall t \in \mathbb{R}, $$ where $M=\sup_{n \in \mathbb{N}} \int_{\mathbb{R}^N}|v_n|^{\bar{N}}\,dV$. From this, $$ C\int_{K(0,0)}|v_n|^{2}\,dV \geq \frac{\bar c}{2}, \quad \forall n \in \mathbb{N}, $$ and so, by (\ref{compact2}), $$ \int_{K(0,0)}|v|^{2}\,dV \geq \frac{\bar c}{2C}, $$ implying that $v \neq 0$. \\ Now, we will prove that $v$ is a solution of (\ref{eq4'}). To this end, without loss of generality, we can assume that $$ v_n(x,y) \to v(x,y) \quad \mbox{a.e. in} \ \mathbb{R}^N, $$ and so, by continuity, $$ f(v_n(x,y)) \to f(v(x,y)) \quad \mbox{a.e. in} \ \mathbb{R}^N. $$ By \eqref{hf5}, $\{f(v_n)\}$ is a bounded sequence in $L^{\frac{\bar{N}}{\bar{N}-1}}(\mathbb{R}^N)$, and so there exists $g\in L^{\frac{\bar{N}}{\bar{N}-1}}(\mathbb{R}^N)$ such that $f(v_n) \rightharpoonup g$ in $L^{\frac{\bar{N}}{\bar{N}-1}}(\mathbb{R}^N)$. It is standard to prove that $g=f(v)$. These facts yield, in particular, that $$ \int_{\mathbb{R}^N}f(v_n) \phi \, dV \to \int_{\mathbb{R}^N}f(v) \phi \, dV,\quad \forall \phi \in X_0.
$$ This limit combines with $\de_u\tilde{I}_0(\t_n,v_n) [\phi] \to 0$ to give $I'_0(v)\phi=0$, for any $\phi \in X_0$, showing that $v$ is a nontrivial solution of (\ref{eq4'}). \end{proof}
{ "timestamp": "2019-02-07T02:17:37", "yymm": "1712", "arxiv_id": "1712.05221", "language": "en", "url": "https://arxiv.org/abs/1712.05221" }
\section{Introduction} \subsection{Stationary problems} The most popular implicit finite difference schemes, which approximate classic PDEs of mathematical physics, use three-point stencils (for 1D problems) and have the second order of approximation. To improve the order, we can extend the stencil to five points; however, in this case there are two significant obstacles: \begin{itemize} \item some additional boundary conditions are needed in comparison with the corresponding differential boundary problem; \item the linear algebraic system that we solve at every time step has a five-diagonal matrix instead of a three-diagonal one, and therefore the number of arithmetic operations is approximately doubled compared to the computational implementation of a three-point scheme. \end{itemize} There is an alternative approach to improve the order: to use compact finite difference schemes, where we optimally average the right-hand side of the original differential equation. For instance, we can approximate the ordinary differential equation \begin{equation} \label{diffusion0} d^2_xu=f, \;x \in [a,b],\; u(a) = A,\: u(b) = B, \end{equation} by the compact finite difference equation on the equidistant grid ${\{ x_j \}}^N_{j=0},\; x_j=a+jh,\; h=(b-a)/N$: \begin{equation}\label{cscheme} u_{j-1} - 2u_j + u_{j+1} = h^2[f_{j-1} + 10f_j + f_{j+1}]/12,\quad j=1,\ldots, N-1 \end{equation} instead of the classic finite difference equation \[ u_{j-1} - 2u_j + u_{j+1} = h^2f_{j}. \] Here $f_j = f(x_j)$ is a given function; $u_j$ is an approximation of the unknown solution $u(x)$, with $u_0=A, u_N=B$. The double-sweep method can be used to invert the same three-diagonal matrix and to obtain the solution ${\{ u_j \}}^{N-1}_{j=1}$ with a better approximation \cite{gord-10}; see the sketch below. Similarly, we can use the scheme \begin{equation*} \begin{aligned} u_{i, j} - 0.2(u_{i, j - 1} + &u_{i - 1, j} + u_{i, j + 1} + u_{i + 1, j}) - 0.05(u_{i - 1, j - 1} + u_{i - 1, j + 1} + u_{i + 1, j - 1} + u_{i + 1, j + 1}) = \\ &-0.2h^2f_{i, j} - 0.025h^2(f_{i, j - 1} + f_{i - 1, j} + f_{i, j + 1} + f_{i + 1, j}), \end{aligned} \end{equation*} to approximate with the $4$-th order the Poisson equation \begin{equation*} \Delta u = f(x,y), \end{equation*} on a rectangular equidistant grid, instead of the classic second order scheme: \begin{equation*} u_{i, j - 1} + u_{i - 1, j} + u_{i, j + 1} + u_{i + 1, j} - 4u_{i, j} = h^2f_{i, j} . \end{equation*} Similar compact high order schemes were used, after the fast Fourier transform with respect to longitude, in \cite{gord-10} for the solution of an elliptic system of PDEs on $\mathbb{S}^2$. The equations have singularities at the poles and require special boundary conditions at the ends of the segment $[-\pi/2,\, \pi/2]$ (according to \cite{abramov71}) with respect to latitude. There is a separate asymptotic at the poles for every Fourier mode. The computational effectiveness is essential here because this elliptic system is an important ingredient of weather forecasting models. It is applied at every vertical level on every time step in the forecasting model, see e.g. \cite{tolstykh02}. There are a few ways to determine the coefficients of a compact difference scheme for a given PDE. One of the main ideas is to use a truncated Taylor series expansion (\cite{gupta1984single, zhang1998explicit, gelu2017sixth}); in \cite{sutmann2007compact} it is used together with the Pad\'e approximation.
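To make this concrete, the following minimal sketch (our illustration with sample data, not code from the works cited) implements scheme (\ref{cscheme}) with a generic dense solver in place of the double-sweep method and confirms the $4$-th order of convergence on a simple exact solution.

\begin{verbatim}
# Compact 4-th order scheme for u'' = f, u(a) = A, u(b) = B:
# the same tridiagonal matrix as in the classic scheme, only the
# right-hand side is averaged with weights (1, 10, 1)/12.
import numpy as np

def solve_compact(f, a, b, A, B, N):
    h = (b - a) / N
    x = a + h * np.arange(N + 1)
    fx = f(x)
    rhs = h**2 * (fx[:-2] + 10.0 * fx[1:-1] + fx[2:]) / 12.0
    rhs[0] -= A                        # known boundary values go to
    rhs[-1] -= B                       # the right-hand side
    M = (np.diag(-2.0 * np.ones(N - 1))
         + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1))
    u = np.empty(N + 1)
    u[0], u[-1] = A, B
    u[1:-1] = np.linalg.solve(M, rhs)  # double-sweep in production code
    return x, u

# Exact solution u = sin(x) on [0, pi/2], so f = -sin(x); the maximal
# error drops ~16 times when h is halved, i.e. the order is 4.
for N in (20, 40, 80):
    x, u = solve_compact(lambda s: -np.sin(s), 0.0, np.pi/2, 0.0, 1.0, N)
    print(N, np.abs(u - np.sin(x)).max())
\end{verbatim}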
Another approach is to utilize the central difference, expanding the leading truncation error term until the desired order is reached (\cite{spotz1996high, ge2002symbolic}). For both approaches symbolic computations are used extensively to avoid tedious algebraic manipulations. In our works (\cite{gord-14, gordin-05}), we develop a much simpler approach based on undetermined coefficients, which also uses computational algebra packages to derive the coefficients of the compact scheme. However, the majority of the formulae have been constructed for linear differential equations with constant coefficients only. There was an exception: differential equations with a variable coefficient in the low-order term. For instance, the ordinary differential equation \begin{equation*} d^2_xu + \rho(x)u = f(x) \end{equation*} may be approximated in the following way: \begin{equation*} \begin{aligned} &u_{j-1} -2u_j + u_{j+1} = h^2{[f_{j-1} - \rho(x_{j-1})u_{j-1} ] + 10[f_j - \rho(x_j)u_j] + [f_{j+1} - \rho(x_{j+1})u_{j+1}]}/12 \implies \\ &[1 + h^2\rho(x_{j-1})/12]u_{j-1} + 2[5h^2\rho(x_j)/12-1]u_j + [1 + h^2\rho(x_{j+1})/12]u_{j+1} = \\ &h^2[f_{j-1} + 10f_j + f_{j+1}]/12, \end{aligned} \end{equation*} --- this is a corollary of relation (\ref{cscheme}). If the coefficient $\rho(x)$ is a non-positive function, then the corresponding three-diagonal matrix is (according to the Gershgorin theorem, see e.g. \cite{gord-10}) negative definite. \subsection{Evolutionary PDEs with a single spatial variable} The computational approach based on compact high order schemes can be developed for evolutionary PDEs, e.g. for the diffusion equation and for the Schr\"odinger one, see e.g. \cite{gord-14}. Compact difference schemes can also be developed for linear PDEs with a variable coefficient in the low-order term, e.g. for the diffusion equation or the Schr\"odinger equation with a potential. In this work, we focus on an important type of PDEs: 1D parabolic equations. Namely, we approximate the diffusion equation with a variable smooth positive coefficient: \begin{equation} \label{diffusion_eq} \frac{\partial u}{\partial t}=Pu+f,\quad Pu= \frac{\partial}{\partial x}\theta(x) \frac{\partial u}{\partial x}, \end{equation} where $\theta(x): \mathbb{R} \rightarrow \mathbb{R}_+$ is a variable time-independent diffusion coefficient, $t \in [0, T],\ x \in [0, 2\pi]$, and $f=f(t, x)$ is a forcing. Then we consider the Leontovich -- Levin (Schr\"odinger-type) equation $i \frac{\partial u}{\partial t}=Pu+f $, and construct a similar compact scheme for it. Earlier we constructed the $4$-th order compact finite difference scheme for the first boundary problem for the linear ordinary self-adjoint equation: \begin{equation}\label{sturm} - d_x[\theta(x)d_xu] = f(x), ~ x \in [a, b], \end{equation} where the coefficient $\theta(x)$ is strongly positive and smooth \cite{gt16b}. If the coefficient $\theta(x)$ is discontinuous at a point $x^* \in (a, b)$ in equation (\ref{sturm}), special confinement boundary conditions are necessary to provide the fulfillment of the mass (or energy) conservation law as well as the high convergence rate, see \cite{gt16a}. \textbf{Note.} The linear operator $P$ is self-adjoint in the space of smooth functions under homogeneous Dirichlet conditions in the sense of the Hilbert metric $L^2 [0, 2\pi]$. The spectrum of the self-adjoint operator $P$ is real and negative. Therefore, the resolving operators $R(t) = \exp(Pt), t > 0,$ of the mixed initial-boundary problem in this space are self-adjoint and contractive.
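The discrete counterpart of this observation is easy to check numerically. The sketch below (ours, with a sample coefficient $\theta$ that is not taken from the paper) assembles the standard three-point discretization of $Pu=(\theta u_x)_x$ under homogeneous Dirichlet conditions and verifies that its spectrum is real and negative.

\begin{verbatim}
# Spectrum of the 3-point discretization of Pu = (theta u_x)_x on
# [0, 2*pi], homogeneous Dirichlet conditions: the matrix is symmetric
# negative definite, so all eigenvalues are real and negative.
import numpy as np

N = 200
h = 2.0 * np.pi / N
x = h * np.arange(N + 1)
theta_half = np.cos(x[:-1] + h/2)**2 + 1.0  # theta at midpoints x_{j+1/2}

main = -(theta_half[:-1] + theta_half[1:]) / h**2   # inner nodes 1..N-1
off = theta_half[1:-1] / h**2
P = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eig = np.linalg.eigvalsh(P)    # symmetric matrix => real spectrum
print(eig.min(), eig.max())    # both values are negative
\end{verbatim}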
\textbf{Note.} The linear operator $P$ under periodic or homogeneous Neumann boundary conditions is $L^2$-self-adjoint, too. However, it is only non-positive (constants are in its kernel), and the resolving operators $R(t)$ are contractive on the orthogonal complement to the one-dimensional subspace of constants in $L^2$. The case of the Neumann boundary conditions is more difficult than the Dirichlet one for the compact finite-difference approximation. To preserve the high order of the scheme, we use wider stencils at the endpoints on two time levels. If the simplest approximation $u_1 - u_0 = 0$ of the Neumann conditions is used, we obtain the first order of error decrease instead of the fourth one. \subsection{Multidimensional problems} The multidimensional equation \begin{equation} \label{diff_multi} \rho\partial_t u= {\bf div}\,\vartheta \, {\bf grad}\, u +f, \end{equation} where $\vec x =\langle x_1,\,\ldots,\,x_n\rangle \in G\subseteq \mathbb{R}^n,\;\rho=\rho(\vec x),\:\vartheta =\vartheta(\vec x),\:f=f(t,\,\vec x)$, is a natural generalization of equation~(\ref{diffusion_eq}). If the coefficients $\rho$ and $\vartheta$ are constant, then the coefficients of the corresponding compact scheme are obtained without much difficulty. Certainly, the result depends on the choice of the difference scheme grid: rectangular, triangular, or hexagonal. The compact scheme for the Poisson equation may be constructed for such grids, as well as for the rectangular grid in polar coordinates, see \cite{gordin-05, lai2007formally}. The compact schemes may be developed (\cite{gord-14, tangman2008numerical, karaa2009two}) for the diffusion equation with a constant coefficient, too. The case of equation (\ref{diff_multi}) with an arbitrary smooth variable coefficient $\vartheta$ is more difficult. However, the case of a coefficient which depends on the single variable $x_1$ only may be reduced to the case considered here if the domain $G$ is a direct product of segments and/or circumferences, i.e. if we can use the fast Fourier transform with respect to the variables $x_2,\,\ldots,\,x_n$. Such examples for the Helmholtz equation (or a similar system) were considered in \cite{gord-10, tolstykh02} for $G=\mathbb{S}^2$, where $x_1$ is the latitude and $x_2$ is the longitude, see also \cite{chen2011optimal}. A special kind of boundary conditions (individual for each longitudinal Fourier mode) should be used here at the polar points. Otherwise, we do not obtain the desired order of approximation. Compact schemes with the 4-th order of approximation for second order linear elliptic PDEs (Helmholtz-type) with a variable coefficient were constructed in \cite{ge2002symbolic, britt2011numerical}. A fourth order elliptic PDE with a variable coefficient was considered in \cite{wang2008fourth}. The rest of the paper is organized as follows. In Sect. 2 we describe the \lq\lq compact approach\rq\rq\ to finite-difference approximation: for the positive smooth coefficient $\theta$ (Sect. 2.1), for inner grid points (Sect. 2.2), and for the Neumann boundary conditions (Sect. 2.3). We also introduce the classic second order implicit scheme (Sect. 2.4) and the Leontovich -- Levin equation (Sect. 2.5). In Sect. 3, which is devoted to numerical experiments, we introduce sample solutions for numerical experiments (Sect. 3.1), examine the approximation order numerically in Sect. 3.2, and utilize the Richardson extrapolation approach to the improvement of finite-difference schemes (Sect. 3.3). The possible simplification of the scheme's coefficients is tested numerically in Sect.
3.4; the spectra of transition operators are analyzed in Sect. 3.5. Similar constructions for the Leontovich -- Levin equation are examined in Sect. 3.6. Section 4 concludes the paper. \section{Compact difference scheme} \subsection{Diffusion coefficient approximation for the $4$-th order finite difference model} In all physical problems, the coefficient $\theta(x)$ in equation (\ref{diffusion_eq}) is non-negative; otherwise, the Cauchy problem for equation (\ref{diffusion_eq}) is ill-posed in the Hadamard sense. The special case when $\theta(x)$ is non-negative and has zeros is not considered here. For the compact scheme construction we need to explicitly determine the derivatives of $\theta(x)$ at the grid points. Since the coefficient $\theta(x)$ is smooth and strongly positive, we approximate it locally (in the vicinity of an arbitrary internal grid point $x_j$) by the exponential function $\theta(x) = \exp(\rho(x))$, where $\rho$ is a real function. To ensure that the resulting compact scheme has a high order, we approximate $\rho(x)$ by a $4$-th order polynomial: \begin{equation} \label{exp_approx} \theta(x) \approx \theta(x_j) \exp(c_1 y + c_2 y^2 + c_3 y^3 + c_4 y^4), \end{equation} where $y = x - x_j$, and then we determine the coefficients $c_1, c_2, c_3, c_4$ by using the interpolation conditions. We assume that relation (\ref{exp_approx}) is exact at the points $y = -h, -h/2, h/2, h$, see Fig.~\ref{fig:stencil}a. Thus, for every $j$ we obtain four linear algebraic equations, where $\theta_k=\theta(x_k)$, for these four undetermined coefficients: $c_4 h^4 - c_3 h^3 + c_2 h^2 - c_1 h = \ln\left(\theta_{j-1}/\theta_j\right)$, $c_4 h^4/16 - c_3 h^3/8 + c_2 h^2/4 - c_1 h/2 = \ln\left(\theta_{j-\frac{1}{2}}/\theta_j\right)$, $c_4 h^4/16 + c_3 h^3/8 + c_2 h^2/4 + c_1 h/2 = \ln\left(\theta_{j+\frac{1}{2}}/\theta_j\right)$, $c_4 h^4 + c_3 h^3 + c_2 h^2 + c_1 h = \ln\left(\theta_{j+1}/\theta_j\right)$. \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{./images/stencil.png} \caption{ \textbf{a, b}: stencils for the compact finite difference scheme. \textbf{c}: diagram for the test functions $u^*_{k_1, k_2} = y^{k_1} t^{k_2}$, which are used in order to obtain the coefficients of scheme (\ref{comp_scheme}). The monomials denoted by white circles are not needed to obtain the coefficients, yet equation (\ref{comp_scheme}) holds on them as well. } \label{fig:stencil} \end{center} \end{figure} We solve the system and obtain the following solution $c_1, c_2, c_3, c_4$: $ c_1 = -[8\, \ln\left(\theta_{j-\frac{1}{2}}/\theta_j\right) - 8\, \ln\left(\theta_{j+\frac{1}{2}}/\theta_j\right) - \ln\left(\theta_{j-1}/\theta_j\right) + \ln\left(\theta_{j+1}/\theta_j\right)]/6h$, $ c_2 = [16\, \ln\left(\theta_{j-\frac{1}{2}}/\theta_j\right) + 16\, \ln\left(\theta_{j+\frac{1}{2}}/\theta_j\right) - \ln\left(\theta_{j-1}/\theta_j\right) - \ln\left(\theta_{j+1}/\theta_j\right)]/6h^2$, $ c_3 = 2\,[ 2\, \ln\left(\theta_{j-\frac{1}{2}}/\theta_j\right) - 2\, \ln\left(\theta_{j+\frac{1}{2}}/\theta_j\right) - \ln\left(\theta_{j-1}/\theta_j\right) + \ln\left(\theta_{j+1}/\theta_j\right)]/3h^3$, $ c_4 = -2\,[4\, \ln\left(\theta_{j-\frac{1}{2}}/\theta_j\right) + 4\, \ln\left(\theta_{j+\frac{1}{2}}/\theta_j\right) - \ln\left(\theta_{j-1}/\theta_j\right) - \ln\left(\theta_{j+1}/\theta_j\right)]/3h^4$. In the simplest case of $\theta(x)=$~const we obtain the trivial solution: $c_1 = c_2 = c_3 = c_4 = 0$.
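The following sketch (our illustration, with an arbitrary sample coefficient) solves this $4\times 4$ interpolation system numerically and checks the result against the closed-form expressions above.

\begin{verbatim}
# Coefficients c1..c4 of the local exponential fit of theta(x):
# interpolation at y = -h, -h/2, h/2, h versus the closed forms.
import numpy as np

theta = lambda x: np.cos(x)**2 + 1.0   # sample coefficient (our choice)
xj, h = 1.0, 0.1

ys = np.array([-h, -h/2, h/2, h])
L = np.log(theta(xj + ys) / theta(xj)) # log-ratios ln(theta_k / theta_j)

# c1*y + c2*y^2 + c3*y^3 + c4*y^4 = L(y) at the four nodes
V = np.column_stack([ys, ys**2, ys**3, ys**4])
c_num = np.linalg.solve(V, L)

Lm1, Lmh, Lph, Lp1 = L                 # L_{j-1}, L_{j-1/2}, L_{j+1/2}, L_{j+1}
c_closed = np.array([
    -(8*Lmh - 8*Lph - Lm1 + Lp1) / (6*h),
     (16*Lmh + 16*Lph - Lm1 - Lp1) / (6*h**2),
    2*(2*Lmh - 2*Lph - Lm1 + Lp1) / (3*h**3),
   -2*(4*Lmh + 4*Lph - Lm1 - Lp1) / (3*h**4),
])
print(np.allclose(c_num, c_closed))    # True
\end{verbatim}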
\subsection{Test functions and the corresponding coefficients for the implicit compact scheme} We construct the scheme on six-point two-layer stencils (see Fig. \ref{fig:stencil}b): \begin{equation} \label{comp_scheme} \begin{aligned} &b^L_{0,j}u^n_{j-1} + a_{0,j}u^n_j + b^R_{0,j}u^n_{j+1} + b^L_{1,j}u^{n+1}_{j-1} + a_{1,j}u^{n+1}_j + b^R_{1,j}u^{n+1}_{j+1} = \\ &q^L_{0,j}f^n_{j-1} + p_{0,j}f^n_j + q^R_{0,j}f^n_{j+1} + q^L_{1,j}f^{n+1}_{j-1} + p_{1,j}f^{n+1}_j + q^R_{1,j}f^{n+1}_{j+1}, \end{aligned} \end{equation} $ j = 1,\ldots, N-1, \: n = 0,\,1,\ldots, T/\tau-1.$ We assume that relation (\ref{comp_scheme}) holds exactly for several test solutions $\langle u^\star,\,f^\star \rangle$ of equation (\ref{diffusion_eq}). We use here the basis of test functions \begin{equation} \label{testest} u^*_{k_1,\,k_2} = y^{k_1}t^{k_2}, \quad f^*_{k_1,\,k_2} = \frac{\partial u^*_{k_1,\,k_2}}{\partial t} - P u^*_{k_1,\,k_2}, \end{equation} shown by black circles on the diagram $\langle k_1,\,k_2\rangle$ (Fig.~\ref{fig:stencil}c). We substitute all of them into relation (\ref{comp_scheme}) to obtain a system of 11 homogeneous linear algebraic equations for the coefficients of the compact scheme (\ref{comp_scheme}) at an arbitrary inner grid point $x_j$, $j \neq 0, N$. We add to the system one normalizing linear non-homogeneous equation $a_{0,j} = C^* = const > 0$ (see \cite{gord-10, gord-14, gt16b, gt16a}) and solve the resulting $12\times 12$ linear algebraic system; the obtained $12$ coefficients are listed in Tables~\ref{tab:coeffsu} and \ref{tab:coeffsf}. \begin{table} \caption{Coefficients for the left-hand side of compact scheme (\ref{comp_scheme}), expanded with respect to degrees of $h$. Here, $r_- = \theta(x_j)/\theta(x_j-h),\; r_+ = \theta(x_j)/\theta(x_j+h)$, and $\nu_j = \theta(x_j)\tau h^{-2}$. We choose the normalizing constant $C^*$ so as to obtain the coefficients in the algebraically simplest form.} \label{tab:coeffsu} \centering \begin{tabular}{|c|m{2cm}|m{2cm}|m{2cm}|m{2cm}|m{2cm}|m{2cm}|} \hline - & $a_{0,j}$ & $b^L_{0,j}$ & $b^R_{0,j}$ & $a_{1,j}$ & $b^L_{1,j}$ & $b^R_{1,j}$ \\ \hline $1$ & $144 \nu_j - 120$ & $- 72 \nu_j - 12 r_-$ & $- 72 \nu_j - 12 r_+$ & $144 \nu_j + 120$ & $12 r_- - 72 \nu_j$ & $- 72 \nu_j + 12 r_+$ \\ \hline $h$ & $0$ & $36 c_1 \nu_j + 6 c_1 r_-$ & $- 36 c_1 \nu_j - 6 c_1 r_+$ & $0$ & $36 c_1 \nu_j - 6 c_1 r_-$ & $6 c_1 r_+ - 36 c_1 \nu_j$ \\ \hline $h^2$ & $8 c_1^2 - 128 c_2 + 192 c_2 \nu_j$ & $2 r_- c_1^2 - 96 c_2 \nu_j - 8 c_2 r_-$ & $2 r_+ c_1^2 - 96 c_2 \nu_j - 8 c_2 r_+$ & $- 8 c_1^2 + 128 c_2 + 192 c_2 \nu_j$ & $- 2 r_- c_1^2 - 96 c_2 \nu_j + 8 c_2 r_-$ & $- 2 r_+ c_1^2 - 96 c_2 \nu_j + 8 c_2 r_+$ \\ \hline $h^3$ & $0$ & $18 c_3 \nu_j - 12 c_3 r_- - 3 c_1^3 \nu_j + 42 c_1 c_2 \nu_j + 4 c_1 c_2 r_-$ & $12 c_3 r_+ - 18 c_3 \nu_j + 3 c_1^3 \nu_j - 42 c_1 c_2 \nu_j - 4 c_1 c_2 r_+$ & $0$ & $18 c_3 \nu_j + 12 c_3 r_- - 3 c_1^3 \nu_j + 42 c_1 c_2 \nu_j - 4 c_1 c_2 r_-$ & $3 c_1^3 \nu_j - 12 c_3 r_+ - 18 c_3 \nu_j - 42 c_1 c_2 \nu_j + 4 c_1 c_2 r_+$ \\ \hline $h^4$ & $48 c_1 c_3 - 256 c_4 + 384 c_4 \nu_j + 64 c_2^2 \nu_j - 32 c_2^2 - 48 c_1 c_3 \nu_j$ & $- 32 \nu_j c_2^2 - 192 c_4 \nu_j - 16 c_4 r_- + 24 c_1 c_3 \nu_j + 6 c_1 c_3 r_-$ & $- 32 \nu_j c_2^2 - 192 c_4 \nu_j - 16 c_4 r_+ + 24 c_1 c_3 \nu_j + 6 c_1 c_3 r_+$ & $256 c_4 - 48 c_1 c_3 + 384 c_4 \nu_j + 64 c_2^2 \nu_j + 32 c_2^2 - 48 c_1 c_3 \nu_j$ & $- 32 \nu_j c_2^2 - 192 c_4 \nu_j + 16 c_4 r_- + 24 c_1 c_3 \nu_j - 6 c_1 c_3 r_-$ & $- 32 \nu_j c_2^2 - 192 c_4 \nu_j + 16 c_4 r_+ + 24 c_1 c_3 \nu_j - 6 c_1 c_3 r_+$ \\ \hline $h^5$ & $0$ & $84 c_1 c_4 \nu_j + 8 c_1 c_4 r_- + 12 c_1 c_2^2 \nu_j - 18 c_1^2 c_3 \nu_j$ & $18 c_1^2 c_3 \nu_j - 8 c_1 c_4 r_+ - 12 c_1 c_2^2 \nu_j - 84 c_1 c_4 \nu_j$ & $0$ & $84 c_1 c_4 \nu_j - 8 c_1 c_4 r_- + 12 c_1 c_2^2 \nu_j - 18 c_1^2 c_3 \nu_j$ & $8 c_1 c_4 r_+ - 84 c_1 c_4 \nu_j - 12 c_1 c_2^2 \nu_j + 18 c_1^2 c_3 \nu_j$ \\ \hline $h^6$ & $72 c_3^2 - 144 c_3^2 \nu_j - 128 c_2 c_4 + 256 c_2 c_4 \nu_j$ & $72 \nu_j c_3^2 - 128 c_2 c_4 \nu_j$ & $72 c_3^2 \nu_j - 128 c_2 c_4 \nu_j$ & $128 c_2 c_4 - 144 c_3^2 \nu_j - 72 c_3^2 + 256 c_2 c_4 \nu_j$ & $72 \nu_j c_3^2 - 128 c_2 c_4 \nu_j$ & $72 c_3^2 \nu_j - 128 c_2 c_4 \nu_j$ \\ \hline $h^7$ & $0$ & $- 27 c_1 \nu_j c_3^2 + 48 c_1 c_2 c_4 \nu_j$ & $27 c_1 c_3^2 \nu_j - 48 c_1 c_2 c_4 \nu_j$ & $0$ & $- 27 c_1 \nu_j c_3^2 + 48 c_1 c_2 c_4 \nu_j$ & $27 c_1 c_3^2 \nu_j - 48 c_1 c_2 c_4 \nu_j$ \\ \hline $h^8$ & $256 c_4^2 \nu_j - 128 c_4^2$ & $-128 c_4^2 \nu_j$ & $- 128 c_4^2 \nu_j$ & $256 c_4^2 \nu_j + 128 c_4^2$ & $-128 c_4^2 \nu_j$ & $- 128 c_4^2 \nu_j$ \\ \hline $h^9$ & $0$ & $48 c_1 c_4^2 \nu_j$ & $- 48 c_1 c_4^2 \nu_j$ & $0$ & $48 c_1 c_4^2 \nu_j$ & $- 48 c_1 c_4^2 \nu_j$ \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Coefficients for the right-hand side of compact scheme (\ref{comp_scheme}), expanded with respect to degrees of $h$.} \label{tab:coeffsf} \begin{tabular}{|c|c|c|c|} \hline - & $p_{0,j} = p_{1,j}$ & $q^L_{0,j} = q^L_{1,j}$ & $q^R_{0,j} = q^R_{1,j}$ \\ \hline $1$ & $60$ & $6 r_-$ & $6 r_+$ \\ \hline $h$ & $0$ & $- 3 c_1 r_-$ & $3 c_1 r_+$ \\ \hline $h^2$ & $4 (- c_1^2 + 16 c_2)$ & $r_- (4 c_2 - c_1^2)$ & $r_+ (4 c_2 - c_1^2)$ \\ \hline $h^3$ & $0$ & $r_- (6 c_3 - 2 c_1 c_2)$ & $- r_+ (6 c_3 - 2 c_1 c_2)$ \\ \hline $h^4$ & $4 (4 c_2^2 + 32 c_4 - 6 c_1 c_3)$ & $r_- (8 c_4 - 3 c_1 c_3)$ & $r_+ (8 c_4 - 3 c_1 c_3)$ \\ \hline $h^5$ & $0$ & $- 4 c_1 c_4 r_-$ & $4 c_1 c_4 r_+$ \\ \hline $h^6$ & $4 (- 9 c_3^2 + 16 c_2 c_4)$ & $0$ & $0$ \\ \hline $h^7$ & $0$ & $0$ & $0$ \\ \hline $h^8$ & $64 c_4^2$ & $0$ & $0$ \\ \hline \end{tabular} \end{table} {\bf Note}. If these 11 linear algebraic equations hold, then the linear homogeneous algebraic connections which correspond to the white circles in Fig.~\ref{fig:stencil}c hold without any additional conditions. In other words, the rank of the $15 \times 12$ matrix of the homogeneous linear algebraic system, which corresponds to all test functions (\ref{testest}) at $k_1=0,\ldots,4,\;k_2=0,\,1,\,2$, is equal to $11$ only. {\bf Note}. The finite difference compact scheme for the diffusion equation closely corresponds to the similar compact scheme for the ordinary differential equation $-d_x[\theta (x)d_xu]=f$. Let us replace both $u^n_j$ and $u^{n+1}_j$ by $u^*_j$ in equation (\ref{comp_scheme}). In other words, we calculate the coefficients $a_j=a_{0,j}+a_{1,j}$, $b^L_j=b^L_{0,j}+b^L_{1,j}$, and $b^R_j=b^R_{0,j}+b^R_{1,j}$. We obtain the ordinary finite difference equation \[ b^L_j u_{j-1}^* + a_j u_{j}^*+b^R_j u_{j+1}^* = 2\left[ q^L_{j}f_{j-1} + p_{j}f_j + q^R_{j}f_{j+1}\right],\quad j=1,\,\ldots,\,N-1. \] If we divide the equation by the function $\nu_j = \theta(x_j)\tau h^{-2}$, we obtain exactly the $4$-th order compact scheme which approximates a linear $2$-nd order ordinary differential equation, see \cite{gt16a}. This is not a trivial statement, because the coefficients of the compact scheme (\ref{comp_scheme}) may change if we use another set of test functions instead of (\ref{testest}).
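In the simplest case $\theta=$~const (so that $c_1 = \ldots = c_4 = 0$) this derivation is easy to reproduce numerically. The sketch below (our illustration, with sample $h$ and $\tau$) imposes relation (\ref{comp_scheme}) on all $15$ monomials $u^*=y^{k_1}t^{k_2}$, $k_1=0,\ldots,4$, $k_2=0,1,2$, appends the normalization, and solves the system by least squares; the (numerically) zero residual confirms that the $15$ conditions are consistent, in agreement with the rank-$11$ statement above.

\begin{verbatim}
# Undetermined coefficients for the compact scheme with theta = const:
# require exactness on u* = y^k1 t^k2, f* = u*_t - theta*u*_xx.
import numpy as np

theta, h, tau = 1.0, 0.1, 0.005            # sample values (our choice)
ys, ts = (-h, 0.0, h), (0.0, tau)          # six-point two-layer stencil

rows, rhs = [], []
for k1 in range(5):
    for k2 in range(3):
        u = lambda t, y: y**k1 * t**k2
        ut = lambda t, y: k2 * y**k1 * t**(k2 - 1) if k2 else 0.0
        uxx = lambda t, y: (k1*(k1-1) * y**(k1-2) * t**k2
                            if k1 >= 2 else 0.0)
        f = lambda t, y: ut(t, y) - theta * uxx(t, y)
        # unknowns: [b^L_0, a_0, b^R_0, b^L_1, a_1, b^R_1,
        #            q^L_0, p_0, q^R_0, q^L_1, p_1, q^R_1]
        rows.append([u(t, y) for t in ts for y in ys]
                    + [-f(t, y) for t in ts for y in ys])
        rhs.append(0.0)

norm = [0.0]*12; norm[1] = 1.0             # normalization a_{0,j} = C* = 1
rows.append(norm); rhs.append(1.0)

A, b = np.array(rows), np.array(rhs)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print("residual:", np.abs(A @ coef - b).max())   # ~1e-16: consistent
\end{verbatim}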
\subsection{Compact approximation of the Neumann boundary conditions} Let us consider the equation~(\ref{diffusion_eq}) under the Neumann boundary conditions \begin{equation*} \frac{\partial u}{\partial x}\Bigr|_{x=0,\, 2\pi} = 0. \end{equation*} We consider here the compact approximation for the left boundary in the form: \begin{equation} \label{neumann_coef} \sum\limits_{j=0}^2 \alpha^1_j u^{n+1}_{j h} + \sum\limits_{j=0}^2 \alpha^0_j u^{n}_{j h} = \sum\limits_{j=0}^2 \beta^1_j f^{n+1}_{j h} + \sum\limits_{j=0}^2 \beta^0_j f^{n}_{j h}, \end{equation} where $\alpha^k_j, \beta^k_j$, $j = 0,1,2;\ k = 0, 1$, are the coefficients determined by the basis of test functions $\langle u^{**}_{k_1, k_2}, f^{**}_{k_1, k_2}\rangle$: $u^{**}_{k_1, k_2} \in \{ 1, t, t^2, x^2, x^2 t, x^2 t^2, x^3, x^3 t, x^3 t^2, x^4\}$; $f^{**}_{k_1, k_2} = \frac{\partial u^{**}_{k_1, k_2}}{\partial t} - P u^{**}_{k_1, k_2}$. This set of basis functions is a subset of the one on the diagram $\langle k_1, k_2 \rangle$ in Fig.~\ref{fig:stencil}c. We do not use here the test functions $u^{**}_{k_1, k_2} \in \{x, x t, x t^2\}$, as they contradict the boundary conditions: e.g. if $u^{**} = x $, then $\frac{\partial u^{**}}{\partial x}\Bigr|_{x=0} = 1 \neq 0$. We construct the linear algebraic system for the coefficients of equation~(\ref{neumann_coef}) and obtain the following solution (here $a$, $b$, $c$, $d$ play the role of the coefficients $c_1$, $c_2$, $c_3$, $c_4$ of expansion (\ref{exp_approx}) at the boundary node): \begin{itemize} \item $\alpha^1_0 = 6 \nu_0 + 4 a h + 8 b h^2 + 12 c h^3 + 16 d h^4 + 17 a h \nu_0 + 34 b h^2 \nu_0 + 51 c h^3 \nu_0 + 68 d h^4 \nu_0 + 8;$ \item $\alpha^1_1 = 16 \exp(-a h -b h^2 -c h^3 -d h^4) - 32 b h^2 \nu_0 - 48 c h^3 \nu_0 - 64 d h^4 \nu_0 - 16 a h \nu_0;$ \item $\alpha^1_2 = - \nu_0 (4 d h^4 + 3 c h^3 + 2 b h^2 + a h + 6);$ \item $\alpha^0_0 = 6 \nu_0 - 4 a h - 8 b h^2 - 12 c h^3 - 16 d h^4 + 17 a h \nu_0 + 34 b h^2 \nu_0 + 51 c h^3 \nu_0 + 68 d h^4 \nu_0 - 8;$ \item $\alpha^0_1 = - 16 a h \nu_0 - 32 b h^2 \nu_0 - 48 c h^3 \nu_0 - 64 d h^4 \nu_0 - 16 \exp(-a h -b h^2 -c h^3 -d h^4);$ \item $\alpha^0_2 = - \nu_0 (4 d h^4 + 3 c h^3 + 2 b h^2 + a h + 6);$ \item $\beta^0_0 = \beta^1_0 = 2 \tau \nu_0 (4 d h^4 + 3 c h^3 + 2 b h^2 + a h + 2);$ \item $\beta^0_1 = \beta^1_1 = 8 \tau \nu_0 \exp(- d h^4 - c h^3 - b h^2 - a h);$ \item $\beta^0_2 = \beta^1_2 = 0.$ \end{itemize} We obtain similar coefficients for the Neumann boundary conditions at the right boundary. We note that the $4$-th approximation order cannot be obtained on a two-point stencil for the boundary conditions, as the linear algebraic system would be over-determined: we would obtain too many equations for the selected number of variables (coefficients). Numerical experiments confirm the $4$-th order of approximation for the joint usage of the compact difference scheme (\ref{comp_scheme}) and the compact boundary conditions approximation (\ref{neumann_coef}), see Fig.~\ref{fig:neumann}, \ref{fig:neumann2}. We have also constructed similar coefficients on the two-point stencil with a reduced approximation order, i.e. with $\alpha^0_2 = \alpha^1_2 = \beta^0_2 = \beta^1_2 = 0$ and the reduced basis of test functions $\langle u^{**}_{k_1, k_2}, f^{**}_{k_1, k_2}\rangle$: $u^{**}_{k_1, k_2} \in \{ 1, t, t^2, x^2, x^2 t, x^2 t^2, x^3\}$; $f^{**}_{k_1, k_2} = \frac{\partial u^{**}_{k_1, k_2}}{\partial t} - P u^{**}_{k_1, k_2}$. Our numerical experiments show that this reduction of the basis set degrades the approximation order, see Fig.~\ref{fig:neumann3}. This result differs from the one in \cite{britt2011numerical}, where the compact difference scheme was used to approximate the Helmholtz equation.
The classic approximation of the Neumann boundary conditions is the following: \begin{equation}\label{classic_neumann_bc} \epsilon (u^{n+1}_1 - u^{n+1}_0) + (1 - \epsilon) (u^{n}_1 - u^{n}_0) = 0, ~~0 < \epsilon \leq 1. \end{equation} Approximation (\ref{classic_neumann_bc}) provides the second order for the classic Crank -- Nicolson scheme (see Sect. \ref{sect:implicit-classic}); however, for the compact scheme (\ref{comp_scheme}) it yields the first order only, see Fig. \ref{fig:neumann3}. \subsection{Classic implicit scheme}\label{sect:implicit-classic} In our numerical experiments, we compare the compact scheme (\ref{comp_scheme}) with classic schemes (see e.g. \cite{samarskii-01}). The implicit second order finite difference scheme for (\ref{diffusion_eq}) can be written in the following form: \begin{equation} \label{divergent_1} \frac{u^{n+1}_j - u^n_j}{\tau} = \frac{1}{2h^2}[u^n_{j+1}\theta_{j+\frac{1}{2}} + u^n_{j-1}\theta_{j-\frac{1}{2}} - u^n_j(\theta_{j+\frac{1}{2}} + \theta_{j-\frac{1}{2}}) + \end{equation} \[ + u^{n+1}_{j+1}\theta_{j+\frac{1}{2}} + u^{n+1}_{j-1}\theta_{j-\frac{1}{2}} - u^{n+1}_j(\theta_{j+\frac{1}{2}} + \theta_{j-\frac{1}{2}})] + (f^{n+1}_j + f^n_j)/2 . \] The alternative versions of the right-hand side approximation are: \[ (F^{n+1}_j + F^n_j)/2, \] where \begin{equation} \label{divergent_2} F_j=(f_{j-1}+2f_j+f_{j+1})/4, \end{equation} or \begin{equation} \label{divergent_3} F_j=(f_{j-1}+2f_{j-1/2}+2f_j+2f_{j+1/2}+f_{j+1})/8. \end{equation} Our numerical experiments demonstrate very similar errors for these versions of the implicit scheme, see Table~\ref{tab:exper_accuracy}. Implicit scheme (\ref{divergent_1}) is, in fact, a version of the well-known Crank -- Nicolson scheme. The scheme \begin{equation*} \frac{u^{n+1} - u^n}{\tau} = A \frac{u^{n+1} + u^n}{2}, \end{equation*} where $A$ is a negative definite self-adjoint operator, is unconditionally stable in both the finite- and infinite-dimensional cases. Let us consider the transition operator for the implicit scheme in the case of $f(x)\equiv 0$. The matrices $A_{new}$ and $A_{old}$ (see Section \ref{matr_section}) are self-adjoint. They commute because their difference is proportional to the identity matrix. Therefore, like the resolving operators $R(t)$ above, the finite-dimensional operator (matrix) $A_{new}^{-1}A_{old}$ is self-adjoint and contractive in the Euclidean metric $l^2$ (see the sketch below). Therefore, the unique limit $u^*_j=\lim\limits_{n\to +\infty} u^n_j$ exists for any stationary forcing $f=f(x)$. The implicit scheme is unconditionally stable. \subsection{The Leontovich -- Levin (Schr\"odinger-type) equation} Compact scheme (\ref{comp_scheme}) can be modified to approximate the PDE \begin{equation} \label{Schro_eq} i \frac{\partial \Psi}{\partial t}=P\Psi+f,\quad P\Psi= \frac{\partial}{\partial x}\theta(x) \frac{\partial \Psi}{\partial x}, \end{equation} which is known as the Leontovich -- Levin equation, see e.g. the review \cite{leont-10}. Here $i$ is the imaginary unit, the solution $\Psi=\Psi(t,\,x)$ is an unknown complex-valued function, the positive function (coefficient) $\theta=\theta (x)$ is known, as well as the complex-valued function $f=f(x)$. This equation describes, e.g., the electromagnetic field of linear vibrators. {\bf Note}. If the coefficient $\theta$ is constant, equation (\ref{Schro_eq}) is the famous Schr\"odinger equation, see e.g. \cite{gord-10, gord-14}.
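For reference, one step of the classic scheme (\ref{divergent_1}) can be written down in a few lines. The sketch below is our illustration (homogeneous Dirichlet conditions assumed, a dense solver instead of the double-sweep method); run with complex arrays and with $\tau$ replaced by $-i\tau$, the same function performs the classic Crank -- Nicolson step for equation (\ref{Schro_eq}).

\begin{verbatim}
# One Crank-Nicolson step of the classic implicit scheme with a
# variable coefficient; theta_half[j] stores theta_{j+1/2}, and the
# vector u holds all the nodes 0..N (boundary values kept fixed at 0).
import numpy as np

def cn_step(u, theta_half, f_old, f_new, tau, h):
    N = u.size - 1
    r = tau / (2.0 * h**2)        # tau -> -1j*tau for the Leontovich-Levin case
    main = -(theta_half[:-1] + theta_half[1:])
    S = (np.diag(main) + np.diag(theta_half[1:-1], 1)
         + np.diag(theta_half[1:-1], -1))   # inner nodes j = 1..N-1
    A_new = np.eye(N - 1) - r * S           # A_new u^{n+1} = A_old u^n + F
    A_old = np.eye(N - 1) + r * S
    rhs = A_old @ u[1:-1] + 0.5 * tau * (f_old[1:-1] + f_new[1:-1])
    out = u.copy()
    out[1:-1] = np.linalg.solve(A_new, rhs) # tridiagonal in practice
    return out
\end{verbatim}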
The operator $iP$ is skew self-adjoint in the space $L^2 [0,\,2\pi]$, and therefore the resolving operator $\exp(iPt)$ of the mixed initial-boundary problem in this space is unitary. In the case of $f \equiv 0$ the first integral of equation (\ref{Schro_eq}) under the Dirichlet (or Neumann, or periodic) boundary conditions exists: \[ \int\limits_0^{2\pi} |\Psi (t,\,x)|^2\,dx=const. \] If we multiply the coefficients $\nu_j$ in Table~\ref{tab:coeffsu} by the imaginary unit $i$, we obtain the compact finite difference scheme which approximates equation (\ref{Schro_eq}) to the $4$-th order; see its error in Table~\ref{tab:exper_accuracy_schrod}, where we compare it with the error of the classic second order implicit scheme with the same temporal and spatial steps $\tau$ and $h$. \section{Numerical experiments for parabolic equations} \subsection{Sample solutions} We use several sample solutions with various properties, both for the diffusion and for the Leontovich -- Levin equations, under various boundary conditions. These solutions were chosen to differ from the test functions used for the scheme construction. \begin{equation} \label{test_sol1} \begin{split} u^*(t, x) & = \sin^3(x)\sin(t) + \sin(2x)\cos(t); \\ \theta^*(x) & = \cos^2(x) + 1. \\ \end{split} \end{equation} Hereafter we use the right-hand side $f^*(t, x) = \frac{\partial u^*}{\partial t}-\frac{\partial}{\partial x}\theta^*(x) \frac{\partial u^*}{\partial x}$, i.e. our analytic sample solutions are exact. \begin{figure} \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.45]{./images/sol1.png} \caption{ Sample solution $u^*(T, x)$, right-hand side $f^*(T, x)$ and coefficient $\theta^*(x)$ (\ref{test_sol1}); $T = 1, N = 100$. }\label{fig:sol1} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.45]{./images/sol2_k2.png} \caption{ Sample solution $u^*(T, x)$, right-hand side $f^*(T, x)$ and coefficient $\theta^*(x)$ (\ref{test_sol2}) at $k = 2$; $T = 1, N = 100$. }\label{fig:sol2_k2} \end{minipage} \end{figure} \begin{figure} \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.45]{./images/sol2_k3.png} \caption{ Sample solution $u^*(T, x)$, right-hand side $f^*(T, x)$ and coefficient $\theta^*(x)$ (\ref{test_sol2}) at $k = 3$; $T = 1, N = 100$. } \label{fig:sol2_k3} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.45]{./images/sol2_k4.png} \caption{ Sample solution $u^*(T, x)$, right-hand side $f^*(T, x)$ and coefficient $\theta^*(x)$ (\ref{test_sol2}) at $k = 4$; $T = 1, N = 100$. } \label{fig:sol2_k4} \end{minipage} \end{figure} Then we consider the family of solutions which are very asymmetric with respect to $x$: \begin{equation} \label{test_sol2} \begin{split} u^*(t, x) & = \sin(t)\sin^k(x)\exp(x); \\ \theta^*(x) & = \cos^2(x) + 1. \\ \end{split} \end{equation} Here the parameter $k$ in the sample-solution family controls the behaviour near the endpoints $0$ and $2\pi$. We also consider sample solutions with a very asymmetric coefficient $\theta(x)$: \begin{equation} \label{test_sol3} \begin{split} u^*(t, x) & = \sin(x/2)(e^{b (2 \pi - x)}\cos(\omega t) + e^{b x}\sin(\omega t)); \\ \theta^*(x) & = e^{a x}. \end{split} \end{equation} \begin{figure}[h!] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.45]{./images/error_smooth.png} \caption{ Errors of the compact and classic implicit schemes for sample solution (\ref{test_sol1}), $\nu^{\star} = 1,\: T = 1$.
The compact scheme (solid line) outperforms the classic one (dashed line) both in accuracy and convergence rate ($4$-th vs $2$-nd). Bilogarithmic scale. } \label{fig:error_smooth} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.4]{./images/errC.png} \caption{ Errors of the compact and classic implicit schemes for sample solution (\ref{test_sol1}), $\nu^{\star} = 1,\: T = 1$, as a function of the number of operations (multiplications) per time step. The compact scheme (solid line) outperforms the classic one (dashed line) both in accuracy and convergence rate ($4$-th vs $2$-nd). Bilogarithmic scale. } \label{fig:error_eff} \end{minipage} \end{figure} We also consider the solution for the Neumann boundary problem for the diffusion equation (\ref{diffusion_eq}): \begin{equation} \label{test_neumann} \begin{split} u^*(t, x) & = \cos^2(x) \sin (t); \\ \theta^*(x) & = \cos^2(x) + 1. \end{split} \end{equation} and for the Leontovich -- Levin equation (\ref{Schro_eq}): \begin{equation} \label{test_neumann_ll} \begin{split} u^*(t, x) & = \cos^2(x) \sin (t); \\ \theta^*(x) & = i(\cos^2(x) + 1). \end{split} \end{equation} \subsection{Convergence rate of the scheme} We find approximate solutions by the compact scheme (\ref{comp_scheme}) and by the classic implicit scheme (\ref{divergent_1}) on sample solutions (\ref{test_sol1})--(\ref{test_neumann_ll}), see Fig.~\ref{fig:sol1}--\ref{fig:sol2_k4}. To measure the convergence rate, we fix a Courant parameter $\nu^{\star}$, thus determining $\tau =h^2|\nu^{\star}|/\max\limits_{j} \theta_j$ for every $N$. We can see that the compact scheme (\ref{comp_scheme}) gives a much smaller error than the classic implicit one, see Fig.~\ref{fig:error_smooth}, \ref{fig:error_ks} and Table~\ref{tab:exper_accuracy}: the compact scheme demonstrates the $4$-th order of the error vs. the second order for the classic scheme. \begin{table} \centering \caption{Error in {\bf C}-norm for various sample solutions of equation~(\ref{diffusion_eq}). Here $\nu^{\star} = 1,\: T = 1$. The compact scheme outperforms all the variants of the implicit scheme in both accuracy and order.
Among the right-hand-side averaging variants of the implicit scheme, (\ref{divergent_1}) shows the best accuracy.} \label{tab:exper_accuracy} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Sample solution & Scheme & N = 10 & N = 20 & N = 50 & N = 100 & order \\ \hline (\ref{test_sol1}) & (\ref{comp_scheme}) & 1.58-2 & 1.36-3 & 3.73-5 & 2.36-6 & 3.83 \\ \hline (\ref{test_sol1}) & (\ref{divergent_1}) & 1.59-1 & 3.38-2 & 5.14-3 & 1.29-3 & 2.09 \\ \hline (\ref{test_sol1}) & (\ref{divergent_2}) & 2.18-1 & 4.94-2 & 8.67-3 & 2.16-3 & 2.00 \\ \hline (\ref{test_sol1}) & (\ref{divergent_3}) & 2.18-1 & 7.38-2 & 1.40-2 & 3.67-3 & 1.93 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{comp_scheme}) & 1.09+0 & 7.53-2 & 2.03-3 & 1.28-4 & 3.93 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{divergent_1}) & 2.26+1 & 6.48+0 & 1.07+0 & 2.71-1 & 1.92 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{divergent_2}) & 4.74+1 & 1.47+1 & 2.49+0 & 6.28-1 & 1.99 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{divergent_3}) & 3.17+1 & 1.10+0 & 2.03+0 & 5.20-1 & 1.96 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{comp_scheme}) & 6.84+0 & 4.28-1 & 1.13-2 & 7.12-4 & 3.98 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{divergent_1}) & 8.83+0 & 2.63+0 & 4.50-1 & 1.13-1 & 1.89 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{divergent_2}) & 1.68+1 & 6.28+0 & 1.12+0 & 2.83-1 & 1.98 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{divergent_3}) & 1.06+1 & 3.74+0 & 7.76-1 & 2.11-1 & 1.88 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{comp_scheme}) & 5.50+0 & 5.12-1 & 1.32-2 & 8.22-4 & 3.83 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{divergent_1}) & 1.19+1 & 2.99+0 & 5.06-1 & 1.28-1 & 1.97 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{divergent_2}) & 2.68+1 & 7.27+0 & 1.27+0 & 3.19-1 & 1.99 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{divergent_3}) & 1.16+1 & 3.47+0 & 7.42-1 & 1.94-1 & 1.94 \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Error in {\bf C}-norm for sample solution (\ref{test_sol3}) with various parameters. Here $\nu^{\star} = 100,\: T = 1$.
The compact scheme outperforms the classic implicit scheme (\ref{divergent_1}) in both accuracy and order.} \label{tab:exper_accuracy_3} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Parameters & Scheme & N = 10 & N = 20 & N = 50 & N = 100 & order \\ \hline $a = 1, b = 1, \omega = 1$ & (\ref{divergent_1}) & 1.99+1 & 5.69+0 & 9.38-1 & 2.35-1 & 1.95 \\ \hline $a = 1, b = 1, \omega = 1$ & (\ref{comp_scheme}) & 6.59-1 & 4.60-2 & 1.20-3 & 7.55-5 & 3.96 \\ \hline $a = 1, b = 2, \omega = 2$ & (\ref{divergent_1}) & 1.93+4 & 5.45+3 & 9.02+2 & 2.27+2 & 1.95 \\ \hline $a = 1, b = 2, \omega = 2$ & (\ref{comp_scheme}) & 3.73+3 & 2.47+2 & 6.41+0 & 4.02-1 & 3.98 \\ \hline $a = 2, b = 1, \omega = 1$ & (\ref{divergent_1}) & 3.02+1 & 8.56+0 & 1.42+0 & 3.57-1 & 1.95 \\ \hline $a = 2, b = 1, \omega = 1$ & (\ref{comp_scheme}) & 9.74-1 & 6.18-2 & 1.59-3 & 9.92-5 & 3.99 \\ \hline $a = 1, b = 0.1, \omega = 1$ & (\ref{divergent_1}) & 5.22-2 & 1.38-2 & 2.22-3 & 5.56-4 & 1.98 \\ \hline $a = 1, b = 0.1, \omega = 1$ & (\ref{comp_scheme}) & 7.47-5 & 3.99-6 & 9.90-8 & 6.17-9 & 4.06 \\ \hline $a = 1, b = 2, \omega = 10$ & (\ref{divergent_1}) & 1.25+4 & 1.86+3 & 4.25+2 & 1.12+2 & 2.02 \\ \hline $a = 1, b = 2, \omega = 10$ & (\ref{comp_scheme}) & 2.60+3 & 1.10+2 & 2.71+0 & 1.78-1 & 4.09 \\ \hline \end{tabular} \end{table} \begin{figure}[h] \begin{center} \includegraphics[scale=.5]{./images/error_ks.png} \caption{ Errors of the compact and classic implicit schemes on sample solutions (\ref{test_sol2}) of the diffusion equation (\ref{diffusion_eq}) at $\nu^{\star} = 1,\: T = 1$. The compact scheme outperforms the classic one in both accuracy and convergence rate for sample solutions (\ref{test_sol2}) with various values of $k$. Bilogarithmic scale. } \label{fig:error_ks} \end{center} \end{figure} \begin{figure}[h!] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.4]{./images/neumann.png} \caption{ Errors of the compact scheme on sample solutions (\ref{test_neumann}) of the diffusion equation (\ref{diffusion_eq}) at $\nu^{\star} = 5,\: T = 1$. Bilogarithmic scale. The joint usage of the compact difference scheme (\ref{ll_comp_scheme}) and the boundary conditions approximation (\ref{neumann_coef}) shows the $4$-th order of the error decrease, while the classic approximation (with $\epsilon = 0.5$) reduces the convergence rate to the first order. } \label{fig:neumann} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[scale=.32]{./images/neu3.png} \caption{ Errors of the compact scheme on sample solutions (\ref{test_neumann}) of the diffusion equation (\ref{diffusion_eq}) at $\nu^{\star} = 1,\: T = 1$. Bilogarithmic scale. The joint usage of the compact difference scheme (\ref{ll_comp_scheme}) and the boundary conditions approximation (\ref{neumann_coef}) shows the $4$-th order of the error decrease, while the coefficients for the reduced two-point stencil result in the third order. The classic approximation (with $\epsilon = 0.5$) reduces the convergence rate to the first order. } \label{fig:neumann3} \end{minipage} \end{figure} \subsection{Efficiency} In the 1D case, the double-sweep method for a tridiagonal matrix requires $5N$ multiplications and divisions. On every time step, the classic scheme (\ref{divergent_1}) requires the double-sweep only, while the compact scheme involves an additional $3N$ multiplications for the right-hand side. We thus compare both schemes in terms of efficiency in a numerical experiment, see Fig.~\ref{fig:error_eff}.
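To make the operation count above concrete, here is a minimal sketch (our illustration, not the code used for the experiments) of one time step of the classic implicit scheme (\ref{divergent_1}) with homogeneous Dirichlet boundary conditions, solved by the double-sweep method: the forward sweep costs about four multiplications/divisions per row and the backward sweep one more, i.e. roughly $5N$ per time step, as stated above.
\begin{verbatim}
import numpy as np

def double_sweep(sub, diag, sup, rhs):
    # Tridiagonal solve (the double-sweep, or Thomas, method);
    # sub/sup have length n-1, diag/rhs length n; ~5n mult/div in total.
    n = diag.size
    c, d = np.empty(n - 1), np.empty(n)
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                  # forward sweep: ~4 ops per row
        m = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    u = np.empty(n)
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):         # backward sweep: 1 op per row
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def implicit_step(u, f_old, f_new, theta_half, tau, h):
    # One step of scheme (divergent_1); nodes 0..N, u_0 = u_N = 0,
    # theta_half[j] = theta((j + 1/2) h) for j = 0..N-1.
    N = u.size - 1
    r = tau / (2.0 * h * h)
    tp, tm = theta_half[1:N], theta_half[0:N - 1]   # theta_{j +/- 1/2}
    rhs = (u[1:N]
           + r * (tp * u[2:N + 1] + tm * u[0:N - 1] - (tp + tm) * u[1:N])
           + 0.5 * tau * (f_old[1:N] + f_new[1:N]))
    u_new = np.zeros_like(u)
    u_new[1:N] = double_sweep(-r * tm[1:], 1.0 + r * (tp + tm),
                              -r * tp[:-1], rhs)
    return u_new
\end{verbatim}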
In our experimental setup the compact difference scheme outperforms the classic one even in the efficiency-based comparison, i.e. with the number of calculations per time step fixed. In the multidimensional case, iterative methods are widely used and are most effective. The matrix for the left-hand side is block-tridiagonal for both the compact and classic schemes; the difference is the modified right-hand side, which is multiplied by another block-tridiagonal matrix. Since the inversion of the left-hand side matrix is much more time-consuming, we hope that our approach will also be effective in the multidimensional case; this, however, requires further investigation and numerical experiments. \subsection{Richardson extrapolation} The Richardson extrapolation method may be used to improve the order of the schemes. If we obtain a family of approximate solutions $u=u_{h}(t,\,x)$ at $t=T$, assume $\tau =h^2|\nu^{\star}|/\max\limits_{j} \theta_j$, and use the representation \begin{equation}\label{Ri} u_h (T,\,x)= u(T,\,x)+h^4 u_*(T,\,x) , \end{equation} then we can calculate $u_h$ at $h=h_*$ and at $h=h_*/2$. Afterwards, we substitute the computed solutions into (\ref{Ri}) and obtain a simple linear algebraic system: \begin{equation*} u_{h_*}=u+h^4_* u_*,\; u_{h_*/2}=u+h^4_* u_*/16 \rightarrow u=u(T,x)\approx \left[ 16 u_{h_*/2}- u_{h_*}\right]/15. \end{equation*} \begin{table}[h] \centering \caption{Error in {\bf C}-norm for various sample solutions using Richardson extrapolation for scheme (\ref{comp_scheme}). The compact scheme outperforms the classic implicit scheme in both accuracy and convergence rate.} \label{tab:exper_accuracy_richardson} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Test solution & Scheme & N = 10 & N = 20 & N = 50 & N = 100 & order \\ \hline (\ref{test_sol1}) & (\ref{divergent_1}) & 5.76-3 & 3.13-4 & 8.60-6 & 5.36-7 & 4.00 \\ \hline (\ref{test_sol1}) & (\ref{comp_scheme}) & 1.31-4 & 2.35-6 & 9.30-9 & 1.44-10 & 6.01 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{divergent_1}) & 4.77-1 & 3.72-2 & 9.91-4 & 6.29-5 & 3.98 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{comp_scheme}) & 8.10-3 & 2.26-4 & 9.27-7 & 1.46-8 & 5.99 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{divergent_1}) & 2.10+0 & 9.40-2 & 2.47-3 & 1.54-4 & 4.00 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{comp_scheme}) & 1.34-1 & 1.60-3 & 6.15-6 & 9.55-8 & 6.01 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{divergent_1}) & 1.94+0 & 1.52-1 & 3.80-3 & 2.37-4 & 4.00 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{comp_scheme}) & 3.74-2 & 2.68-3 & 1.02-5 & 1.59-7 & 6.00 \\ \hline \end{tabular} \end{table} By using representation (\ref{Ri}), we can improve the order of both schemes: up to the $4$-th order for the classic implicit scheme~(\ref{cscheme}) and up to the $6$-th order for the compact scheme (\ref{comp_scheme}), see Fig.~\ref{fig:richardson} and Table~\ref{tab:exper_accuracy_richardson}. \begin{figure}[h!] \begin{center} \includegraphics[scale=.7]{./images/richardson.png} \caption{ Errors of the compact and classic implicit schemes in sample solution (\ref{test_sol1}). The Richardson extrapolation technique is used here to improve the convergence rate. The compact scheme (solid line) outperforms the classic one (dashed line) both in accuracy and convergence rate ($6$-th vs $4$-th). Bilogarithmic scale. } \label{fig:richardson} \end{center} \end{figure} \subsection{Cut coefficients} If we eliminate, in the coefficients of the compact scheme (\ref{comp_scheme}), the terms with powers of $h$ higher than $4$ (e.g.
$h^5$, $h^6$, \ldots , see Tables~\ref{tab:coeffsu}, \ref{tab:coeffsf}), the $4$-th order is preserved. However, the absolute error may increase a bit, see Table \ref{tab:cut_accuracy}. \begin{table}[h!] \centering \caption{Error in {\bf C}-norm for various cases of coefficients of the compact scheme (\ref{comp_scheme}) with terms cut, for sample solution (\ref{test_sol1}). $\nu^{\star} =1,\: T = 1$.} \label{tab:cut_accuracy} \begin{tabular}{|c|c|c|c|c|c|} \hline Cutting terms with & N = 10 & N = 20 & N = 50 & N = 100 & order \\ \hline $h^5$ and greater & 1.5659-2 & 1.7958-3 & 5.5527-5 & 3.5936-6 & 3.95 \\ \hline $h^6$ and greater & 1.6422-2 & 1.1773-3 & 3.6297-5 & 2.3422-6 & 3.95 \\ \hline $h^7$ and greater & 1.5917-2 & 1.3402-3 & 3.7068-5 & 2.3563-6 & 3.98 \\ \hline $h^8$ and greater & 1.6026-2 & 1.3750-3 & 3.7280-5 & 2.3594-6 & 3.98 \\ \hline $h^9$ and greater & 1.5965-2 & 1.3651-3 & 3.7272-5 & 2.3593-6 & 3.98 \\ \hline Exact scheme & 1.5812-2 & 1.3642-3 & 3.7271-5 & 2.3593-6 & 3.98 \\ \hline \end{tabular} \end{table} \subsection{Almost self-adjoint matrices} \label{matr_section} Our finite difference scheme may be rewritten in the matrix (operator) form: \begin{equation*} A_{new}u^{n+1} + A_{old}u^n = B_{old}f^n + B_{new}f^{n+1}, \end{equation*} where $A_{new} = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ b^L_{1, 2} & a_{1, 2} & b^R_{1, 2} & 0 & \cdots & 0 & 0 \\ 0 & b^L_{1, 3} & a_{1, 3} & b^R_{1, 3} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & a_{1, N-1} & b^R_{1, N-1} \\ 0 & 0 & 0 & 0 & \cdots & 0 & 1 \end{pmatrix}$, \\ $A_{old} = \begin{pmatrix} 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ b^L_{0, 2} & a_{0, 2} & b^R_{0, 2} & 0 & \cdots & 0 & 0 \\ 0 & b^L_{0, 3} & a_{0, 3} & b^R_{0, 3} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & a_{0, N-1} & b^R_{0, N-1} \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}$, \\ $B_{new} = \begin{pmatrix} 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ q^L_{1, 2} & p_{1, 2} & q^R_{1, 2} & 0 & \cdots & 0 & 0 \\ 0 & q^L_{1, 3} & p_{1, 3} & q^R_{1, 3} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & p_{1, N-1} & q^R_{1, N-1} \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}$, \\ $B_{old} = \begin{pmatrix} 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ q^L_{0, 2} & p_{0, 2} & q^R_{0, 2} & 0 & \cdots & 0 & 0 \\ 0 & q^L_{0, 3} & p_{0, 3} & q^R_{0, 3} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & p_{0, N-1} & q^R_{0, N-1} \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 \end{pmatrix}$. We cannot prove our hypothesis that the compact scheme (\ref{comp_scheme}) is unconditionally stable and convergent for any smooth positive coefficient $\theta(x)$. However, we have checked the following properties of the scheme in multiple numerical experiments for various cases. The matrix $M=-A^{-1}_{new}A_{old}$ is not self-adjoint. However, all the eigenvalues $\left\{\lambda_j\right\}_{j=1}^N$ of the matrix $M$ are real. The stability condition $|\lambda_j|<1$ holds, and $M$ does not have non-trivial Jordan blocks for any values of $\nu$ and for various boundary conditions. Therefore, there is a Euclidean norm in which the linear operator with the matrix $M$ is self-adjoint.
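These spectral diagnostics are easy to reproduce numerically. The sketch below (our illustration) uses the matrices of the classic scheme (\ref{divergent_1}) as a convenient stand-in for $A_{new}$ and $A_{old}$; for the compact scheme one would instead assemble the tridiagonal coefficients from Tables~\ref{tab:coeffsu}, \ref{tab:coeffsf}. The diagnostic itself is the same: the eigenvalues of $M=-A^{-1}_{new}A_{old}$ are expected to be real with $|\lambda_j|<1$.
\begin{verbatim}
import numpy as np

N = 50
h = 2 * np.pi / N
tau = h * h
th = np.cos((np.arange(N) + 0.5) * h)**2 + 1    # theta_{j+1/2}, j = 0..N-1
tp, tm = th[1:N], th[0:N - 1]                   # theta_{j +/- 1/2}, j = 1..N-1
T = np.diag(tp[:-1], 1) + np.diag(tm[1:], -1) - np.diag(tp + tm)
r = tau / (2 * h * h)
A_new = np.eye(N - 1) - r * T                   # multiplies u^{n+1}
A_old = -np.eye(N - 1) - r * T                  # multiplies u^{n}
M = -np.linalg.solve(A_new, A_old)              # transition operator
lam = np.linalg.eigvals(M)
print(np.abs(lam.imag).max())                   # ~ 0: the spectrum is real
print(np.abs(lam).max())                        # < 1: stability condition
\end{verbatim}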
We evaluate the distance between the matrix $M$ and the subspace of self-adjoint (symmetric) matrices. Let us define for an arbitrary square matrix $C \in \mathbb{R}^{(N-1) \times (N-1)}$ its measure of asymmetry: \begin{equation} \label{asymm_measure} S(C) = \frac{|| C - C^*||_F}{N-1}. \end{equation} Here $||\cdot||_F$ is the Frobenius norm in the $(N-1)^2$-dimensional space of matrices. The measure $S(M)$ decreases as $N\to\infty$, see Table~\ref{tab:exper_symm} and Fig.~\ref{fig:symm_error}. For the classic Euclidean norm $l^2$, the operator is asymptotically almost self-adjoint: $S(M)\approx N^{-4}$ as $N \rightarrow \infty$, see Table~\ref{tab:exper_symm}. \textbf{Note.} The spectrum $Spec~R(t)$ at $t > 0$ of the resolving operators for differential problem (\ref{sturm}) is strongly negative under the Dirichlet boundary conditions. On the subspace of grid functions $u$ such that $u_0 = u_N = 0$, the spectrum of the finite-difference transition operator $M_{Dirichlet}$ is strongly negative at $\nu < \nu_\#$ only, where $\nu_\# \approx 1/3$. This estimate is a result of our numerical experiments, see Fig. \ref{fig:eigens_diff}. In the case of the Neumann conditions $Spec~R(t)$ is non-positive. The non-positiveness for the operator $M_{Neumann}$ is fulfilled at $\nu < \nu_\# \approx 1/4$ only. In all the cases $Spec~M_{Neumann}$ is wider than $Spec~M_{Dirichlet}$. \begin{figure}[h!] \includegraphics[scale=.45]{./images/e3.png} \caption{ Eigenvalues $\lambda_j$ of the transition operator $M=-A^{-1}_{new}A_{old}$ for the diffusion equation (\ref{diffusion_eq}) for the Dirichlet and Neumann boundary conditions. The choice of the compact Neumann boundary conditions approximation (\ref{neumann_coef}) extends the spectrum. Here $N = 12$, $\theta(x) = \cos^2(x)+1$, $\nu^{\star} = 5$. } \label{fig:eigens_diff} \end{figure} \begin{table}[h!] \centering \caption{Measure of asymmetry (\ref{asymm_measure}) for finite difference operators of the compact scheme (\ref{comp_scheme})} \label{tab:exper_symm} \begin{tabular}{|c|c|c|c|c|c|} \hline - & $N = 10$ & $N = 20$ & $N = 50$ & $N = 100$ & order \\ \hline $S(A^{-1}_{new}A_{old})$ & 3.32-3 & 2.44-4 & 9.05-6 & 7.93-7 & 3.62 \\ \hline $S(A^{-1}_{new}B_{old})$ & 1.64-4 & 3.01-6 & 1.79-8 & 3.91-10 & 5.62 \\ \hline \end{tabular} \end{table} \begin{figure}[h!] \centering \begin{minipage}{0.35\textwidth} \centering \includegraphics[scale=.4]{./images/symm_error.png} \caption{ $S(A^{-1}_{new}A_{old})$ (solid line) and $S(A^{-1}_{new}B_{old})$ (dashed line) as a function of $N$ on sample solution (\ref{test_sol2}). Bilogarithmic scale. } \label{fig:symm_error} \end{minipage} \hfill \centering \begin{minipage}{0.55\textwidth} \includegraphics[scale=.55]{./images/e2.png} \caption{ Eigenvalues $\lambda_j$ of the transition operator $M=-A^{-1}_{new}A_{old}$ for the Leontovich -- Levin equation (\ref{Schro_eq}) on the complex plane for the Dirichlet boundary conditions. The dashed line shows the unit circle, i.e. all these eigenvalues are unimodular. If $\nu$ is fixed and $N \to \infty$, then the angle $\alpha \to const$. If $\nu \to 0$, then $ \alpha \to 0$; if $\nu \to \infty$, then $\alpha \to \pi$. In the case of the compact Neumann boundary conditions approximation (\ref{neumann_coef}), two coinciding eigenvalues appear, depicted by a star. Here $N = 50$, $\theta(x) = i [\cos^2(x)+1]$, $\nu^{\star} = 1$.
} \label{fig:eigens} \end{minipage} \end{figure} \subsection{Numerical experiments for the Leontovich -- Levin \\ (Schr\"odinger-type) equation} For the Leontovich -- Levin equation (\ref{Schro_eq}) we use the following scheme on six-point two-layer stencils (see Fig.~\ref{fig:stencil}): \begin{equation} \label{ll_comp_scheme} \begin{aligned} &B^L_{0,j}u^n_{j-1} + A_{0,j}u^n_j + B^R_{0,j}u^n_{j+1} + B^L_{1,j}u^{n+1}_{j-1} + A_{1,j}u^{n+1}_j + B^R_{1,j}u^{n+1}_{j+1} = \\ &Q^L_{0,j}f^n_{j-1} + P_{0,j}f^n_j + Q^R_{0,j}f^n_{j+1} + Q^L_{1,j}f^{n+1}_{j-1} + P_{1,j}f^{n+1}_j + Q^R_{1,j}f^{n+1}_{j+1}, \end{aligned} \end{equation} $j = 1, \ldots, N-1, \; n = 0, \ldots, T/\tau$. Here the coefficients $A_{0,j}, A_{1,j}$, $B^L_{0,j}, B^L_{1,j}$, $B^R_{0,j}, B^R_{1,j}$, $P_{0,j}, P_{1,j}$, $Q^L_{0,j}, Q^L_{1,j}$, $Q^R_{0,j}, Q^R_{1,j}$ are the same as the lowercase coefficients in Tables~\ref{tab:coeffsu}, \ref{tab:coeffsf}, but all the $\nu_j$ entries should be multiplied by the imaginary unit $i$. \begin{table}[h] \centering \caption{Error in {\bf C}-norm for various sample solutions for the Leontovich -- Levin equation (\ref{Schro_eq}). Here $\nu^{\star} = i,\: T = 1$. The compact scheme outperforms the implicit scheme (\ref{divergent_1}) in both accuracy and order.} \label{tab:exper_accuracy_schrod} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Test solution & Scheme & N = 10 & N = 20 & N = 50 & N = 100 & order \\ \hline (\ref{test_sol1}) & (\ref{divergent_1}) & 2.18-1 & 4.28-2 & 5.79-3 & 1.29-3 & 2.17 \\ \hline (\ref{test_sol1}) & (\ref{comp_scheme}) & 2.58-2 & 1.86-3 & 5.12-5 & 3.22-6 & 3.99 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{divergent_1}) & 2.84+1 & 8.00+0 & 1.34+0 & 3.37-1 & 1.99 \\ \hline (\ref{test_sol2}), k = 2 & (\ref{comp_scheme}) & 1.05+0 & 7.51-2 & 2.07-3 & 1.30-4 & 3.99 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{divergent_1}) & 1.27+1 & 3.25+0 & 5.23-1 & 1.33-1 & 1.98 \\ \hline (\ref{test_sol2}), k = 3 & (\ref{comp_scheme}) & 8.50+0 & 5.67-1 & 1.47-2 & 9.23-4 & 4.00 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{divergent_1}) & 1.28+1 & 3.54+0 & 6.09-1 & 1.55-1 & 1.97 \\ \hline (\ref{test_sol2}), k = 4 & (\ref{comp_scheme}) & 5.69+0 & 5.33-1 & 1.40-2 & 8.71-4 & 4.00 \\ \hline \end{tabular} \end{table} \begin{figure} \begin{center} \includegraphics[scale=.6]{./images/schrod.png} \caption{ Errors of the compact and classic implicit schemes in sample solution (\ref{test_sol1}) of the Leontovich -- Levin equation (\ref{Schro_eq}), $\nu^{\star} =i,\: T = 1$. The compact scheme (solid line) outperforms the classic one (dashed line) both in accuracy and order ($4$-th vs $2$-nd). Bilogarithmic scale. } \label{fig:schrod} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=.5]{./images/neu_ll.png} \caption{ Errors of the compact scheme on sample solutions (\ref{test_neumann_ll}) of the Leontovich -- Levin equation (\ref{Schro_eq}) at $\nu^{\star} = 5i,\: T = 1$. Bilogarithmic scale. The joint usage of the compact difference scheme (\ref{ll_comp_scheme}) and the boundary conditions approximation (\ref{neumann_coef}) shows the $4$-th order of the error decrease, while the classic approximation (with $\epsilon = 0.5$) reduces the convergence rate to the first order. However, if one uses only the main terms (the limits of the coefficients as $h \rightarrow 0$) of the compact approximation (\ref{neumann_coef}), thus cutting its coefficients, the convergence rate equals $3$.
} \label{fig:neumann2} \end{center} \end{figure} \begin{table}[h] \centering \caption{Error in {\bf C}-norm for sample solution (\ref{test_sol3}) of the Leontovich -- Levin equation (\ref{Schro_eq}). $\nu^{\star} = 100i,\: T = 1$. The compact scheme outperforms the implicit scheme (\ref{divergent_1}) in both accuracy and order.} \label{tab:exper_accuracy_schrod_3} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Parameters & Scheme & N = 10 & N = 20 & N = 50 & N = 100 & order \\ \hline $a = 1, b = 1, \omega = 1$ & (\ref{divergent_1}) & 2.18+1 & 6.30+0 & 1.05+0 & 2.62-1 & 1.94 \\ \hline $a = 1, b = 1, \omega = 1$ & (\ref{comp_scheme}) & 6.79-1 & 4.88-2 & 1.27-3 & 8.05-5 & 3.94 \\ \hline $a = 1, b = 2, \omega = 2$ & (\ref{divergent_1}) & 2.60+4 & 8.50+3 & 1.48+3 & 3.72+2 & 1.89 \\ \hline $a = 1, b = 2, \omega = 2$ & (\ref{comp_scheme}) & 5.02+3 & 3.77+2 & 1.01+1 & 6.38-1 & 3.93 \\ \hline $a = 1, b = 0.1, \omega = 1$ & (\ref{divergent_1}) & 9.79-2 & 2.31-2 & 3.91-3 & 9.61-4 & 2.03 \\ \hline $a = 1, b = 0.1, \omega = 1$ & (\ref{comp_scheme}) & 4.75-4 & 3.39-5 & 8.48-7 & 5.52-8 & 3.96 \\ \hline $a = 1, b = 2, \omega = 10$ & (\ref{divergent_1}) & 5.67+4 & 1.71+4 & 2.88+3 & 7.39+2 & 1.90 \\ \hline $a = 1, b = 2, \omega = 10$ & (\ref{comp_scheme}) & 1.22+4 & 7.94+2 & 2.28+1 & 1.39+0 & 3.96 \\ \hline \end{tabular} \end{table} All eigenvalues of the transition operator $M=-A^{-1}_{new}A_{old}$ are unimodular: \[ |\lambda_k|=1,\quad k=1,\ldots,N-1 \] (see Fig.~\ref{fig:eigens}), and the matrix $M$ has no non-trivial Jordan blocks. Therefore, in the standard Euclidean space $\mathbb{C}^{N-1}$ there is a positive definite quadratic form which is conserved according to the finite difference equation (\ref{ll_comp_scheme}). However, it is neither the trapezoidal nor the Simpson quadrature of $|\Psi(n\tau,\,x)|^2$ on the segment $x\in [0,\,2\pi]$. The coefficients of the quadrature are not constant with respect to the index $n$. There are oscillations of these quadratures (see Fig.~\ref{fig:first_int}). However, the amplitude of the oscillations decreases as $O(N^{-3})$ as $N\to \infty$. \begin{figure}[h!] \begin{center} \includegraphics[scale=.65]{./images/1st_int.png} \caption{ First integral of the Leontovich -- Levin equation (\ref{Schro_eq}) $I(t) = \int\limits_0^{2\pi} |\Psi (t,\,x)|^2\,dx$ with $\Psi(0, x) = \sin(x); \theta(x) = i [\cos^2(x)+1]; f \equiv 0$, computed numerically using the trapezoidal and parabolic (Simpson) quadratures. $N = 50, |\nu| = 1$. We checked numerically that the amplitude of the oscillations at small step $h$ is proportional to $h^3$ for both quadratures. } \label{fig:first_int} \end{center} \end{figure} Thus, there is a positive definite quadratic form which is conserved according to the finite difference equation (\ref{ll_comp_scheme}) and tends to the standard Euclidean metric as $N \to \infty$. The length of the arc with the eigenvalues of $M$ (see Fig. \ref{fig:eigens}) depends on the Courant parameter $|\nu|$. If the spatial step $h$ is fixed and the temporal step $\tau$ tends to zero, the arc contracts to $\lambda=1$. In the opposite limit $\tau \to \infty$, the arc expands to the whole arc of angles $[\pi,\,2\pi]$. We consider the spectrum $Spec~M_{Dirichlet}$ of the operator $M_{Dirichlet}$ for the Dirichlet problem for (\ref{Schro_eq}), i.e. on the subspace of grid functions $u$ such that $u_0 = u_N = 0$. According to our numerical experiments, $Spec~M_{Dirichlet}$ is situated on an arc of the unit circle on the complex plane, see Fig.~\ref{fig:eigens}.
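{\bf Note}. The existence of the conserved positive definite quadratic form is a pure linear-algebra consequence of the two numerically observed facts (the unimodular spectrum and the absence of non-trivial Jordan blocks): if $M = V \Lambda V^{-1}$ with $|\lambda_k| = 1$, then $q(u) = \|V^{-1}u\|^2$ is conserved by the map $u \mapsto Mu$. The following small sketch (our illustration) demonstrates this on a random diagonalizable matrix with a prescribed unimodular spectrum, not on the scheme's actual $M$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 30
lam = np.exp(1j * rng.uniform(np.pi, 2 * np.pi, n))   # |lambda_k| = 1
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = V @ np.diag(lam) @ np.linalg.inv(V)
W = np.linalg.inv(V)                                  # q(u) = ||W u||^2
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
q0 = np.linalg.norm(W @ u)**2
for _ in range(100):
    u = M @ u
q1 = np.linalg.norm(W @ u)**2
print(abs(q1 - q0) / q0)                              # ~ machine precision
\end{verbatim}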
The left boundary of the arc depends on $\nu$ and $N$: if $\nu$ is fixed and $N \to \infty$, then the angle $\alpha \to const$. If $\nu \to 0$, then $ \alpha \to 0$; if $\nu \to \infty$, then $\alpha \to \pi$. The spectrum $Spec~M_{Neumann}$ for the Neumann problem is wider, see the additional ``star'' in Fig.~\ref{fig:eigens}. \pagebreak \section{Summary and discussion} We have presented the $4$-th order compact implicit scheme which approximates mixed problems for the 1D parabolic equation with a variable coefficient and for the Leontovich -- Levin equation. We have confirmed the stability and convergence of the scheme by various numerical experiments; this is the main result of the paper. We have studied the spectral structure of the transition operators of the scheme to explain the results. The scheme conserves the first integral for the homogeneous Leontovich -- Levin equation. We have compared the scheme with the classic implicit scheme, and the advantages of the new scheme are clear. The number of arithmetic operations for both considered schemes is similar. This approach may be used for the approximation of various linear PDEs with variable coefficients. Moreover, it can be developed for the approximation of weakly non-linear PDEs like the non-linear Schr\"odinger equation or the Fisher -- Kolmogorov -- Petrovsky -- Piskunov equation. We are going to describe this extension in another article. We have considered here the Dirichlet and Neumann boundary conditions. The compact scheme is sensitive to the quality of the approximation of the Neumann condition. A special compact approximation of the Neumann condition has been constructed. The function $f$ and its derivatives must be included in the difference boundary conditions to avoid the loss of order. The compact approach to the approximation of boundary conditions may be developed for other types of boundary conditions. The compact schemes are preferable for multidimensional problems, too. Iterative approaches are effective for the implementation of the implicit schemes. \section{Acknowledgements} The article was prepared within the framework of the Academic Fund Program at the National Research University Higher School of Economics (HSE) in 2016--2017 and 2018--2019 (grants \# 16-05-0069 and 18-05-0011) and supported within the framework of a subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Program. \\ The authors gratefully acknowledge the anonymous referee whose comments were helpful to improve the initial version of the paper. \bibliographystyle{ieeetr} \clearpage \input{diff_dec.bbl} \end{document}
{ "timestamp": "2018-05-31T02:10:06", "yymm": "1712", "arxiv_id": "1712.05214", "language": "en", "url": "https://arxiv.org/abs/1712.05214" }
\section{Introduction} \label{intro} Active galactic nuclei (AGN) have been found to be variable (in flux and/or spectrum) in all wavebands, from radio to X-rays, and even $\gamma$-rays, with the variability amplitude being as large as a factor of $\sim 100$ \citep{Ulrich1997, Peterson2001, Netzer2008}. Moreover, AGN are also known to be variable on different time-scales, i.e. from hours to years or even decades/centuries. Considering the lack of sufficient spatial resolution in most existing observations, variability studies have been essential to diagnose the physics, structure, and kinematics of the central regions of AGN. The time-scales, the spectral changes, and the correlations and delays between variations in the different continuum or line components reveal the nature and location of different physical components and their interdependencies. For example, in the reverberation mapping method, the time lag between the optical/ultraviolet continuum and the H$\beta$ (Mg{\sc ii}, C{\sc iv} also) emission lines is crucial to constrain the size of the broad line regions in AGN (for a recent review, see \citealt{Bentz2016}). Theoretically speaking, the variability of AGN on different time-scales is believed to be triggered by different mechanisms. On short time-scales of hours to months, the leading variability mechanism is fluctuation within the accretion disc (e.g., \citealt{McHardy2004, Papadakis2004, Uttley2005, Done2005, Arevalo2006, McHardy2006, Netzer2008, Kelly2009, Kelly2011, McHardy2010}). Additionally, the variability in the infrared, optical and soft X-rays on this time-scale possibly relates either to changes in the illuminating X-ray emission \citep[e.g., ][]{Shappee2014, Denney2014}, or to the dynamics of the absorbing gas \citep{Grupe2013,LaMassa2015,Runnoe2016}. Besides, in blazars the variability may be caused by shocks and/or turbulence within the jet. Dramatic (large-amplitude) flux variations on time-scales of years have been clearly observed in numerous AGN (e.g. \citealt{Strotjohann2016}), and several mechanisms have been proposed (e.g., \citealt{Valtonen2008, Elitzur2014, Merloni2015}). Among them, one interesting scenario is the tidal disruption event (TDE) model, in which a star is captured and tidally disrupted by the super-massive black hole (SMBH) of a galaxy (\citealt{Rees1988}, see \autoref{sec:tde}). A TDE usually produces an outburst (e.g., \citealt{Drake2011, Blanchard2017a}). On longer time-scales of $\sim 10^{3-6}$ yr, the variability can be triggered by instabilities in the accretion discs (e.g., the radiation-pressure instability, see \citealt{Janiuk2004, Janiuk2011, Czerny2009}). Over cosmic time, consensus has been reached that during the non-AGN phase gas accumulates in a quiescent disc around the SMBH located at the centre of a galaxy, and then, due to some unidentified instability mechanism (see below for more discussion), it is transferred inward rapidly on to the black hole (BH) in a brief but luminous outburst \citep{Bailey1980, Shields1978}, leading to the so-called AGN phase of the duty cycle. In variability analyses, the shape of the long-term light curve can serve to diagnose the physical mechanism. For example, the light curve of a typical TDE follows a $t^{-5/3}$ power-law form \citep[][]{Rees1988, Phinney1989, Gezari2009}.
Another example comes from gamma-ray bursts, which exhibit broken power laws, with the power-law index differing from one phase to another, superimposed with various flares \citep{Zhang2006}. Interestingly, we notice that among X-ray binaries (XRBs), the dwarf novae and soft X-ray transients exhibit numerous outbursts, where each outburst shows a fast-rise-exponential-decay (FRED) light-curve profile in X-rays (e.g., \citealt{Chen1997,Powell2007,Yan2015c}). Physically, the FRED profiles in XRBs can be nicely interpreted under the thermal-viscous disc instability model (DIM; for reviews see \citealt{Lasota2001, Done2007}). To our knowledge, additional systems with FRED profiles include some GRBs \citep{Peng2010}, the thermonuclear burst of one neutron star XRB \citep{Galloway2008}, the outburst of the intermediate-mass BH candidate ESO 243-49 HLX-1 \citep{Lasota2011}, and two decade-long sustained TDE candidates \citep{Lin2017a,Lin2017}. In this work, we gather from the literature the X-ray observations of the low-luminosity AGN (LLAGN) NGC 7213 over the past four decades. We report that this source also exhibits a FRED light curve, together with a possible weak reflare as caught by {\it RXTE}. Remarkably, the rising phase is also observed. NGC 7213 is, to our knowledge, the {\it first} normal AGN with an observed FRED evolution pattern (cf. \citealt{Strotjohann2016} for X-ray light curves of AGN with large amplitude). This paper is organized as follows. Section 2 presents the basic properties of the LLAGN NGC 7213, together with its FRED flare profile. Section 3 provides the TDE interpretation, which we argue is the most likely. Other mechanisms, e.g., radiation-pressure instability and the DIM, are discussed in Sections 4 and 5, where we argue that they are unlikely. The final section is devoted to a brief summary. \section{Long-term X-ray Flare of LLAGN NGC 7213} In this section we first provide the basic properties of NGC 7213, and then investigate the long-term light curve of NGC 7213 in various wavebands. We find that the unusual FRED pattern is most evident in X-rays, while other wavebands provide no useful information/constraints (except for the decline in radio). \subsection{Basic properties of NGC 7213} \label{sec:basic} NGC 7213 is a nearby face-on LINER (low-ionization nuclear emission-line region; \citealt{Phillips1979,Filippenko1984}) located at a luminosity distance $d_{\rm L}$=22.8 Mpc, a distance derived from a flat cosmology ($H_0=67.8$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm M}=0.308$ from \citealt{Planck-Collaboration2016}) with a corrected redshift $z_{\rm corr.3K}=0.005145$ relative to the reference frame defined by the 3K cosmic microwave background\footnote{\url{http://ned.ipac.caltech.edu}}. The central BH mass is $M_{\rm BH} = 8^{+16}_{-6}\times 10^7\ {\rm M}_{\sun}$ \citep{Schnorr-Muller2014}, which is derived from the $M_{\rm BH}-\sigma$ relation in \citet{Gultekin2009}. Based on observations at epochs after 2000, the bolometric luminosity estimated from the broad-band spectrum is $L_{\rm bol} \simeq 0.9-1.8 \times 10^{43}\ {\rm erg\,s^{-1}}\sim 1 \times 10^{-3}\ M_8^{-1}\ L_{\rm Edd}$ (for details see \citealt{Emmanoulopoulos2012,Schnorr-Muller2014}), where $M_8 \equiv M_{\rm BH}/10^8\ {\rm M}_{\sun}$ and the Eddington luminosity (for a hydrogen fraction of 0.7) is defined as $L_{\rm Edd}=1.47\times 10^{46}\ M_8\ {\rm erg\,s^{-1}}$.
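As a quick consistency check of these numbers (our own arithmetic, taking a representative $L_{\rm bol}\approx1.4\times10^{43}\ {\rm erg\,s^{-1}}$ and the central value $M_{\rm BH}=8\times10^{7}\ {\rm M}_{\sun}$, i.e. $M_8 = 0.8$): \begin{equation*} \frac{L_{\rm bol}}{L_{\rm Edd}} \approx \frac{1.4\times 10^{43}\ {\rm erg\,s^{-1}}}{1.47\times 10^{46}\times 0.8\ {\rm erg\,s^{-1}}} \approx 1.2\times10^{-3}, \end{equation*} which matches the quoted $\sim 1\times10^{-3}\ M_8^{-1}$ Eddington ratio.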
NGC 7213 is intermediate in radio between radio-loud and radio-quiet, possibly because of a relatively large beaming effect, as the viewing angle in this system is likely small. The viewing angle of the jet can be constrained from that of the clumpy dusty torus, if we assume they are perpendicular to each other. Based on high-spatial-resolution infrared (IR) observations by Gemini, \citet{Ruschel-Dutra2014} modelled the IR spectrum and constrained the viewing angle to be $i\simeq 21\degr^{+9}_{-12}$. This source reveals a number of interesting properties. Observations after 2000 find that the ultraviolet bump is either absent or very weak in this source \citep{Starling2005,Lobban2010}. There is also no evidence for a Compton reflection continuum in hard X-rays, and the observed narrow Fe K$\alpha$ line is probably produced in the broad-line region \citep{Bianchi2003,Bianchi2008,Starling2005,Lobban2010, Ursini2015}. All these results suggest that a cold disc, if it exists, should be truncated at a large radius after 2000 \citep{Starling2005,Lobban2010,Emmanoulopoulos2012}. We also find a complex (hybrid) correlation between the monochromatic radio luminosity $L_{\rm R}$ and the 2--10 keV X-ray luminosity $L_{\rm X}$, i.e. the correlation is unusually weak with $p\sim 0$ (in the form $L_{\rm R}\propto L_{\rm X}^p$) when $L_{\rm X}$ is below a critical luminosity, and steep with $p>1$ when $L_{\rm X}$ is above that luminosity \citep{Bell2011,Xie2016}. On the other hand, from its long-term X-ray spectral evolution, we also find a likely V-shaped index--$L_{\rm X}$ relation \citep{Xie2016}, i.e., its X-ray spectrum shows a `harder-when-brighter' behaviour when $L_{\rm X}$ is low (first reported by {\it RXTE} observations; see \citealt{Emmanoulopoulos2012}), and the opposite behaviour when $L_{\rm X}$ is high. The critical luminosity for the turnover in the $L_{\rm R}$--$L_{\rm X}$ correlation, estimated to be $L_{\rm X, crit} \approx 1.5\times 10^{42}\ {\rm \,erg\,s^{-1}}$, is consistent with that constrained by the turnover in the index--$L_{\rm X}$ correlation. Under the accretion--jet model \citep{Yuan2014}, which has been applied successfully to LLAGN and BH XRBs in their hard (and intermediate) states, \citet{Xie2016} then speculate that the accretion mode changes below and above this critical luminosity, i.e. it is a luminous hot accretion flow below $L_{\rm X, crit}$, and a two-phase (hot gas embedded with abundant cold clumps) accretion flow above it. Moreover, they also successfully modelled the broad-band (radio up to $\gamma$-rays) spectrum \citep{Xie2016}. \subsection{Long-term FRED light curve in X-rays}\label{sec:lc} \begin{table} \centering \caption{X-ray observations of NGC 7213} \label{fx} \begin{tabular}{ccccc} \hline Time & $F_\mathrm{X}$ (2--10 keV)&$\Gamma$ &Satellite & Refs.
\\ (MJD)&{\tiny $10^{-11}$ erg/s/cm$^{2}$} & & &\\ \hline 43985 & 2.20 & 1.85$\pm$0.11 & $Einstein$ & [1] \\ 44168 & 1.67 & 2.1$\pm$0.6 & $Einstein$ & [1]\\ 44374 & 6.80 & 1.72$\pm$0.12 & $Einstein$ & [1]\\ 45644 & 5.00 & 1.78$\pm$0.05 & $EXOSAT$ & [2]\\ 48065 & 3.53$\pm$0.19 & 1.70$\pm$0.04 & $GINGA$ & [3] \\ 48193 & 4.24$\pm$0.24 & 1.83$\pm$0.04 & $GINGA$ & [3] \\ 49479 & 3.01 & 1.73 & $ASCA$ & [4] \\ 52058 & 2.20 & 1.69$\pm$0.04 & $XMM$-$Newton$ & [5] \\ & & & \& $BeppoSAX$ & \\ 53846 & 2.71$\pm$0.02 & 1.80$\pm$0.02 &$RXTE$ & [6] \\ 53942 & 2.73$\pm$0.02 & 1.82$\pm$0.02 &$RXTE$ & [6] \\ 54030 & 2.44 & 1.75$\pm$0.02 & $Suzaku$ & [7] \\ 54037 & 2.50$\pm$0.02 & 1.84$\pm$0.02 &$RXTE$ & [6] \\ 54135 & 2.35$\pm$0.02 & 1.85$\pm$0.03 &$RXTE$ & [6] \\ 54234 & 2.31$\pm$0.02 & 1.84$\pm$0.03 &$RXTE$ & [6] \\ 54318 & 2.32$\pm$0.04 & 1.69$\pm$0.01 & $Chandra$ & [8] \\ 54336 & 2.44$\pm$0.02 & 1.84$\pm$0.03 &$RXTE$ & [6] \\ 54430 & 2.16$\pm$0.02 & 1.87$\pm$0.03 &$RXTE$ & [6] \\ 54530 & 1.75$\pm$0.02 & 1.85$\pm$0.03 &$RXTE$ & [6] \\ 54635 & 1.18$\pm$0.02 & 1.90$\pm$0.05 &$RXTE$ & [6] \\ 54729 & 1.65$\pm$0.02 & 1.88$\pm$0.04 &$RXTE$ & [6] \\ 54821 & 1.86$\pm$0.02 & 1.87$\pm$0.03 &$RXTE$ & [6] \\ 54915 & 1.77$\pm$0.02 & 1.90$\pm$0.03 &$RXTE$ & [6] \\ 55039 & 1.33$\pm$0.02 & 1.91$\pm$0.04 &$RXTE$ & [6] \\ 55045 & 1.59$\pm$0.02 & 1.89$\pm$0.04 &$RXTE$ & [6] \\ 55102 & 1.54$\pm$0.02 & 1.90$\pm$0.04 &$RXTE$ & [6] \\ 55147 & 1.23$\pm$0.01 & 1.86$\pm$0.02 & $XMM$-$Newton$ & [9] \\ 56935 & 1.60$\pm$0.10 & 1.84$\pm$0.03 & $NuSTAR$ &[10]\\ 55493 & 1.05$\pm$0.14 & 1.67$\pm$0.11 & $Swift$ &[11]\\ 55751 & 1.32$\pm$0.28 & 1.68$\pm$0.17 & $Swift$ &[11]\\ 55756 & 0.99$\pm$0.19 & 1.87$\pm$0.15 & $Swift$ &[11]\\ 56937 & 1.42$\pm$0.18 & 1.66$\pm$0.10 & $Swift$ &[11]\\ 57752 & 1.18$\pm$0.19 & 1.62$\pm$0.13 & $Swift$ &[11]\\ \hline \end{tabular} References: [1]\citet{Halpern1984}; [2]\citet{Turner1989}; [3]\citet{Nandra1994}; [4]\citet{Turner2001}; [5]\citet{Bianchi2003}; [6]\citet{Emmanoulopoulos2012}; [7]\citet{Lobban2010}; [8]\citet{Bianchi2008}; [9]\citet{Emmanoulopoulos2013}; [10]\citet{Ursini2015}; [11]this work \end{table} \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{fig1.pdf} \caption{The long-term X-ray light curve of LLAGN NGC 7213. This source brightens by a factor of $\sim 4$ within $\sim$200 d (from MJD 44168 to MJD 44374) and then declines exponentially, with a possible weak reflare, as detected by {\it RXTE}. For clarity, uncertainties in the observational data are not shown here. The dotted line shows the FRED profile fit to the observational data, where the $e$-folding decay time-scale is constrained to be $\tau_{\rm decay}\approx8116$ d ($\approx22.2$ yr). As labelled in the figure, different symbols mark observations from different X-ray satellites.} \label{lc} \end{figure*} \autoref{lc} shows the long-term light curve of NGC 7213 in hard X-rays over the past four decades, from modified Julian date (MJD) 43985 to MJD 57752. We list in \autoref{fx} the observation time (MJD), the 2--10 keV X-ray flux $F_{\rm X}$, the X-ray photon index $\Gamma$ (defined as $F_{\rm E}\propto E^{1-\Gamma}$, where $E$ is the photon energy), as well as the corresponding satellites of the observations. Most of the data are collected from \citet{Bell2011,Ursini2015} and references therein. In X-rays, NGC 7213 was first identified by $HEAO$-$1$ in 1977 \citep{Marshall1979}. However, due to the lack of spectral information \citep{Piccinotti1982}, we exclude these data from our sample (cf. \autoref{fx}).
Since then, NGC 7213 has been observed by almost every X-ray satellite. In this work, we are mainly interested in its long-term evolution, thus we focus on the 2--10 keV band, which is covered by most X-ray missions. X-ray observations that do not cover this energy band, e.g. those by $ROSAT$ and $CGRO$, are thus not included in \autoref{fx}. The 2--10 keV continuum spectra are all well fitted by a power-law component, with the photon index varying in the range $\sim$ 1.6--2.1 (see \autoref{fx}). We take the rebinned data set for the $RXTE$ observations, with each bin $\sim$100 d long (see details in \citealt{Emmanoulopoulos2012}). In this case, the MJD reported in \autoref{fx} is the average value of each bin for the $RXTE$ data. NGC 7213 has also been observed frequently by $Swift$/XRT \citep{Burrows2005}; these observations have never been reported before. We briefly describe the $Swift$/XRT data analysis here. The $Swift$/XRT event data were first processed with XRTPIPELINE (v0.13.2) to generate the cleaned event data. Then we extracted the spectra using XSELECT, from a circular region for observations in the photon counting (PC) mode and from a box region for observations in the windowed timing (WT) mode. The $Swift$/XRT spectra were all well fitted with an absorbed power-law model using XSPEC 12.9.0. To be consistent with data from the literature, the hydrogen column density $N_{\rm H}$ was fixed to 2.04$\times 10^{20}$ cm$^{-2}$ \citep{Ursini2015}. The 2--10 keV X-ray flux can then be derived from the spectral fitting results, as reported in \autoref{fx}. From \autoref{lc}, we find that the X-ray flux increased by a factor of $\sim 4$ in less than one year ($\sim$200 d, between MJD 44168 and MJD 44374, both observed by the {\it Einstein} satellite, cf. \autoref{fx}), and then decreased gradually over the following 30 years. The onset/trigger of the flare may have been missed, as the flux before the peak is higher than the lowest fluxes during the decay phase (as of 2016 December 30), when the source obviously has not yet reached quiescence. The peak X-ray luminosity of this flare is $L_{\rm X, peak}\approx 4.2\times 10^{42}\ {\rm \,erg\,s^{-1}}$ (equivalently, $L_{\rm X, peak}\approx 2.9\times 10^{-4}\ M_8^{-1}\ L_{{\rm Edd}}$). Correspondingly, the peak bolometric luminosity can be estimated as $L_{\rm bol, peak} \approx 6.8\times10^{43} (f_{\rm X}/16)\ {\rm \,erg\,s^{-1}}$, or equivalently $L_{\rm bol, peak}\approx 4.6\times 10^{-3} (f_{\rm X}/16)\ M_8^{-1}\ L_{{\rm Edd}}$, which is about a factor of $5$ brighter than the estimates based on later observations \citep[e.g.][]{Emmanoulopoulos2012,Schnorr-Muller2014}. Here, the X-ray bolometric correction $f_{\rm X}$ is set to $f_{\rm X}\equiv L_{\rm bol}/L_{\rm X}\approx 16$, an `average' value suggested by observations of LLAGNs \citep{Ho2008}. As shown in Fig. \ref{lc}, the profile of the X-ray light curve is quite similar to a FRED form. To examine this quantitatively, we adopt the following FRED function (\autoref{fred}) to model the X-ray light curve, \begin{equation} \label{fred} F_{\rm X}(t) = \left\{ \begin{array}{ll} \frac{t-t_\mathrm{s}}{t_\mathrm{p}-t_\mathrm{s}}\, F_{\rm X, peak} & (t\leq t_\mathrm{p}) \\ e^{-\frac{t-t_\mathrm{p}}{\tau_{\rm decay}}}\, F_{\rm X, peak} & (t\geq t_\mathrm{p})\\ \end{array} \right. \end{equation} where $t_\mathrm{s}$ and $t_\mathrm{p}$ are the start and peak times of the flare, and $F_{\rm X, peak}$ and $\tau_{\rm decay}$ are the peak flux and the decay time-scale.
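For reproducibility, below is a minimal sketch (our illustration, not the original fitting code) of a least-squares fit of \autoref{fred}. It is demonstrated on synthetic data generated from the parameters quoted in this section ($t_\mathrm{s}\simeq$ MJD 44168, $t_\mathrm{p}\simeq$ MJD 44374, $F_{\rm X, peak}\simeq6.8$ in units of $10^{-11}\ {\rm erg\,s^{-1}\,cm^{-2}}$, $\tau_{\rm decay}\approx8116$ d), rather than on the full data set of \autoref{fx}; the flux is taken to be zero before $t_\mathrm{s}$, which is an assumption of the sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fred(t, t_s, t_p, f_peak, tau_decay):
    # linear rise on [t_s, t_p], exponential decay afterwards;
    # the flux is assumed to be zero before t_s
    rise = f_peak * np.clip((t - t_s) / (t_p - t_s), 0.0, None)
    decay = f_peak * np.exp(-(t - t_p) / tau_decay)
    return np.where(t <= t_p, rise, decay)

rng = np.random.default_rng(0)
t = np.concatenate([np.linspace(44000.0, 44600.0, 30),  # dense near the flare
                    np.linspace(44800.0, 58000.0, 50)])
f_obs = (fred(t, 44168.0, 44374.0, 6.8, 8116.0)
         + 0.05 * rng.standard_normal(t.size))
popt, pcov = curve_fit(fred, t, f_obs, p0=[44100.0, 44400.0, 6.0, 7000.0])
print(popt)  # recovers (t_s, t_p, F_peak, tau_decay) for reasonable guesses
\end{verbatim}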
As shown by the dotted line in \autoref{lc}, the light curve agrees with the FRED form fairly well. The $e$-folding decay time-scale is constrained to be $\tau_{\rm decay} \approx8116$ d ($\approx 22.2$ yr). We note that {\it RXTE} may have detected a weak reflare lasting $\sim$3 yr (between 2006 July and 2009 September; \citealt{Emmanoulopoulos2012}). \subsection{Long-term evolution information from other wavebands} \label{sec:oir} NGC 7213 is a bright galaxy in the optical/infrared band, with numerous photometric observations in the literature from as early as the 1950s \citep[e.g.][]{Evans1952}. However, the aperture size in those early works is too large to isolate the nuclear contribution. For example, the bulge component dominates the emission in the $J$ band even within 1$\arcsec$ \citep{Prieto2002}. Broad lines in the optical band are indeed observed (e.g., \citealt{Phillips1979,Filippenko1984}), but they mainly provide constraints on the BH mass. Practically, there is no useful information on the long-term evolution of NGC 7213 in the optical/infrared band. NGC 7213 was first identified in the radio band by the low-resolution single-dish {\it Parkes} 2700 MHz Survey \citep{Wright1974,Wright1977}. These detections predate the rise of the X-ray flux. The 2.7 GHz radio flux observed by {\it Parkes} is roughly stable \citep{Wright1977,Halpern1984,Sadler1984}. We caution that there exists a circumnuclear ring of H {\sc ii} regions at $\sim 2$ kpc from the nucleus \citep{Storchi-Bergmann1996}, and the H {\sc ii} regions are known to be radio emitters. We did not find any radio observations that cover the period of the fast-rise and the early decay phases of the X-ray emission ($\sim$ MJD 44000--47000). Higher frequency radio observations at 8.4 GHz since 1988, well after the peak in X-rays, do find an obvious declining trend \citep{Blank2005,Bell2011}. There is also a weak reflare detected in the radio band, which is simultaneous with the X-ray reflare detected by $RXTE$ \citep[2006--2009;][]{Bell2011}. The long-term radio flux at 8.4 GHz positively correlates with the X-rays in a complex way (cf. \autoref{sec:basic}; \citealt{Bell2011, Xie2016}). \section{A Delayed TDE of A Main-Sequence Star} \label{sec:tde} In this section we investigate the TDE scenario for the nuclear activity in NGC 7213. When a star approaches the SMBH at a distance comparable to the tidal radius ($r_t = R_{*} (M_{\rm BH}/M_{*})^{1/3}$; cf. \citealt{Ulmer1999}), immense tidal forces from the SMBH will (partially) disrupt it, resulting in a stream of debris that falls back on to the SMBH and powers a luminous flare. The maximal mass fall-back rate decreases with the BH mass \citep{Evans1989,Ulmer1999,Guillochon2013}: \begin{equation} \dot{M}_\mathrm{fb,peak} \propto M_\mathrm{BH}^{-1/2}M_{*}^{2}R_{*}^{-3/2}, \end{equation} where $M_{*}$ and $R_{*}$ are, respectively, the mass and radius of the star. Obviously, with $\dot{M}_\mathrm{fb,peak} c^2/L_{\rm Edd}\propto M_\mathrm{BH}^{-3/2}$, the fall-back can be sub-Eddington when $M_{\rm BH}\sim10^8\ {\rm M}_{\sun}$, the case in NGC 7213. This makes the TDE scenario attractive. For example, the lack of prominent thermal emission, a component observed in most TDEs, is reasonable in this source, as it may stay in the hot accretion flow mode due to the low accretion rate \citep{Yuan2014}. Another advantage of the TDE scenario is that the resulting accretion disc of a TDE is very compact, i.e.
$R \la 500\, R_{\rm s}$ (here $R_{\rm s}=2GM_{\rm BH}/c^{2} = 3\times10^{13}\, (M_{\rm BH}/10^{8}\ {\rm M}_{\sun})$ cm is the Schwarzschild radius of the BH), compared to that of normal accretion discs around SMBHs. Consequently, in X-rays we should not observe any reflection emission from cold accretion gas, which is in excellent agreement with the X-ray observations (\citealt{Bianchi2003,Bianchi2008,Starling2005,Lobban2010, Ursini2015}, cf. \autoref{sec:basic}). Besides, the ionizing gas inferred from the narrow Fe K$\alpha$ line in X-rays may just be the unbound debris of the TDE. We additionally note that the observed broad Balmer lines (e.g., the H$\alpha$ and H$\beta$ series) shortly after the flare in this system \citep{Phillips1979,Filippenko1984,Schnorr-Muller2014} are consistent with the TDE interpretation. This is mainly because the broad line region is close to the BH, at a distance within $\sim 1-10$ light days (or typically hundreds to thousands of $R_{\rm s}$; \citealt{Peterson2006}). Despite the uncertainty in the formation of broad line clouds, there is sufficient time either for the wind/outflow launched from the central accretion disc to dynamically move to that distance, or to illuminate and accelerate the pre-existing clouds there. However, there are still two challenges for the TDE interpretation: one is the duration of the flare and the other is the FRED profile of the light curve in X-rays. The duration of most observed TDEs is usually short, of the order of months to one year (for a review on TDE observations, see \citealt{Komossa2015a}), which is much shorter than the 40-yr duration observed in NGC 7213. However, the long-duration TDE scenario has been applied to understand the long-term X-ray flares in several sources, i.e. IC 3599 \citep{Campana2015}, NGC 3599 \citep{Saxton2015}, 3XMM J150052.0+015452 \citep{Lin2017a} and 2XMM J123103.2+110648 \citep{Lin2017}, where the whole duration of the latter two is over one decade. Interestingly, unlike those normal TDEs whose decay profiles are power laws (e.g., $t^{-5/3}$; \citealt{Rees1988,Gezari2009}), the decay light-curve profile of the two decade-long TDEs \citep{Lin2017a,Lin2017} is also exponential, with a decay time-scale $\tau_{\rm decay}$ of more than 1000 d. To have a TDE that can last for years to decades, an additional mechanism such as slow circularization should operate, i.e. instead of an efficient orbital circularization, the fall-back stream of disrupted gas gradually circularizes to form an accretion disc around the central BH under viscous influence, after which the radiation is emitted (e.g., \citealt{Guillochon2014,Guillochon2015, Shiokawa2015,Hayasaki2016}). The accretion will then be determined by the viscous time-scale $\tau_\mathrm{visc}$, which is much longer than the free fall-back time-scale $\tau_\mathrm{fb}$. Interestingly, with the delay between the stream falling back and the final accretion on to the BH, both the long decay time-scale and the FRED light curve can be realized simultaneously (see the supplementary note 7 in \citealt{Lin2017a}). Due to the pile-up of accreting gas, the delayed mass accretion rate $\dot{M}_\mathrm{acc}$ naturally shows an exponential decay profile.
Following \citet{Lin2017a}, the delayed mass accretion rate $\dot{M}_\mathrm{acc}$ can be expressed as \begin{equation} \label{M_acc} \dot{M}_\mathrm{acc}(t) = \frac{1}{\tau_\mathrm{visc}}\Big( e^{-t/\tau_\mathrm{visc}}\int^{t}_{0}e^{t'/\tau_\mathrm{visc}}\dot{M}_\mathrm{fb}(t')dt'\Big). \end{equation} The evolution of the mass fall-back rate $\dot{M}_\mathrm{fb}$ in \autoref{M_acc} can be determined through hydrodynamical simulations of the tidal disruption process \citep[e.g.][]{MacLeod2012,Guillochon2013,Guillochon2014}. The viscous time-scale $\tau_\mathrm{visc}$ can be expressed as \citep{Guillochon2013} \begin{equation} \label{t_visc} \tau_\mathrm{visc} = 3.2\times10^{-3} \beta^{-3}\Big(\frac{\alpha}{0.1}\Big)^{-1}\Big(\frac{M_{*}}{M_{\odot}}\Big)^{-1/2}\Big(\frac{R_{*}}{R_{\odot}}\Big)^{3/2}~{\rm yr}, \end{equation} where $\alpha$ is the viscosity parameter and $\beta$ is the ratio between the tidal radius $r_\mathrm{t}$ and the pericentre distance $r_\mathrm{p}$, $\beta=r_\mathrm{t}/r_\mathrm{p}$. \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{fig2.pdf} \caption{Theoretical modelling of the light curve based on the delayed TDE scenario. We assume the X-ray bolometric correction to be $f_{\rm X}=16$ (cf. \autoref{sec:lc}) and the radiative efficiency to be $\epsilon=0.1$. The red line represents the modelling of a delayed TDE, with parameters $M_\mathrm{BH}=10^{8}\ M_{\odot}$ (fixed), $\beta=1.0$ (fixed), $M_{*}=4.27\ M_{\odot}$ (the corresponding $R_{*}=2.41R_{\odot}$ for a main-sequence star) and $\alpha=2.85\times10^{-5}$. See text for details. } \label{fit} \end{figure*} Below we investigate whether a delayed TDE can reproduce the FRED light curve observed in NGC 7213. We assume the X-ray bolometric correction to be $f_{\rm X}=16$ and the radiative efficiency to be $\epsilon=0.1$ (cf. \autoref{sec:lc}), in order to compare the modelled light curves with the observations. We simplify the modelling by using the analytical approximations in \citet{Guillochon2013,Guillochon2014}, i.e. we take a power-law dependence of the mass fall-back rate $\dot{M}_\mathrm{fb}$ on $M_\mathrm{BH}$, the stellar mass $M_{*}$ and the stellar radius $R_{*}$ as \citep{Guillochon2014} \begin{equation} \dot{M}_\mathrm{fb} = \Big(\frac{M_\mathrm{BH}}{10^{6}\ M_{\odot}}\Big)^{-1/2}\Big(\frac{M_{*}}{M_{\odot}}\Big)^{2}\Big(\frac{R_{*}}{R_{\odot}}\Big)^{-3/2}\dot{M}_\mathrm{init}(\beta). \end{equation} Here $\dot{M}_\mathrm{init}(\beta)$ is the mass fall-back rate of a solar-type star disrupted by a $10^{6}\ M_{\odot}$ SMBH, which comes from the detailed numerical simulations in \citet{Guillochon2013}. In order to derive $\dot{M}_\mathrm{fb}$, we also interpolate over the impact parameter $\beta$ \citep{Guillochon2014}. We assume the disrupted star is a main-sequence star, with the mass--radius relation given in \citet{Tout1996}, and take the polytropic index of the disrupted star to be $\gamma=5/3$. With the fitting formulae given above, the impact of the different parameters can be understood easily. For example, with other parameters fixed, a smaller $\beta$ ($\lesssim1$) means the pericentre of the star moves outwards. Consequently, $\tau_\mathrm{visc}$ will be larger, i.e. we will have a longer duration flare. Meanwhile, the tidal force will be weaker, thus the star could eventually survive the encounter, and a smaller fraction of its mass will be captured by the BH \citep{Guillochon2013}. The dependences on $\beta$ of the evolution of many quantities have been investigated in \citet{Guillochon2013}.
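As an aside, the quasi-exponential decay produced by \autoref{M_acc} when $\tau_\mathrm{visc}$ greatly exceeds the fall-back time-scale is easy to illustrate numerically. The sketch below (our simplified re-implementation, not the modelling code used in this work) evaluates \autoref{M_acc} for a canonical $t^{-5/3}$ fall-back rate with a hypothetical short rise time-scale $t_0$, instead of the simulation-based $\dot{M}_\mathrm{init}(\beta)$ of \citet{Guillochon2013}, and adopts the best-fitting viscous time-scale obtained below ($\approx 20.34$ yr); the late-time $e$-folding time of $\dot{M}_\mathrm{acc}$ approaches $\tau_\mathrm{visc}$.
\begin{verbatim}
import numpy as np

tau_visc = 20.34 * 365.25   # best-fitting viscous time-scale, in days
t = np.linspace(0.0, 40.0 * 365.25, 4000)
dt = t[1] - t[0]
t0 = 30.0                   # hypothetical short fall-back time-scale, days
mdot_fb = ((t + t0) / t0)**(-5.0 / 3.0)      # normalized fall-back rate

# eq. (M_acc): exponentially weighted running integral of the fall-back
integral = np.cumsum(np.exp(t / tau_visc) * mdot_fb) * dt
mdot_acc = np.exp(-t / tau_visc) * integral / tau_visc

i, j = 2000, 3000           # late-time e-folding time-scale, in days:
print((t[j] - t[i]) / np.log(mdot_acc[i] / mdot_acc[j]))   # ~ tau_visc
\end{verbatim}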
Here we simply choose a typical value $\beta = 1.0$. The BH mass is fixed at $M_\mathrm{BH} = 10^{8} M_{\odot}$ \citep{Blank2005,Schnorr-Muller2014}. We adopt the `Emcee' implementation \citep{Foreman-Mackey2013} of the Markov Chain Monte Carlo (MCMC) sampler to constrain the other two parameters, $M_{*}$ and $\alpha$. From the posterior samples, we obtain $M_{*}=4.27\pm1.55\,M_{\odot}$ (the corresponding $R_{*}=2.41\pm0.49\,R_{\odot}$) and an extremely low viscosity parameter $\alpha=(2.85\pm1.44)\times10^{-5}$. The corresponding viscous time-scale is about 20.3 yr (\autoref{t_visc}), which is similar to the observed $e$-folding decay time-scale. The light curve of a delayed TDE with the best-fitting parameters ($M_{*}=4.27M_{\odot}$ and $\alpha=2.85\times10^{-5}$) is shown as the red solid curve in \autoref{fit}, and is consistent with the observed light curve. We note that the weak re-flare during the decay is not taken into account in the modelling. To conclude, we find that a delayed TDE of a normal star can reproduce the FRED X-ray light curve observed in NGC 7213. Finally, we discuss the AGN activity of NGC 7213. We cannot rule out the possibility of pre-TDE nuclear activity, especially considering the detections in the X-ray and radio bands before the onset of the flare \citep{Marshall1979,Piccinotti1982,Wright1974,Wright1977}. The expected TDE rate in AGNs is much higher than in quiescent galaxies \citep{Karas2007,Kennedy2016}. However, only a few flares/outbursts in AGNs have been attributed to TDEs. For example, the long-duration flares in the two AGNs NGC 3599 and IC 3599 have been argued to be TDEs \citep{Saxton2015,Campana2015}. The host galaxy of the typical TDE ASASSN-14li had also been detected in the radio band long before the outburst, indicating possible AGN activity \citep{Alexander2016}. The coincidence of the optical transient CSS100217:102913+404220 with the nucleus of its host galaxy makes it a candidate TDE in a narrow-line Seyfert 1 (NLS1) galaxy. Recently, the transient PS16dtm was interpreted as a TDE in an NLS1 galaxy \citep{Blanchard2017a}. The accretion disc newly formed during the TDE may change the geometry of the pre-existing accretion disc, causing an unusual evolution of the X-ray luminosity \citep{Blanchard2017a}. Several studies have investigated the effects brought about by the presence of a pre-existing accretion flow. \citet{Bonnerot2016} found that the interaction of the ambient gas with the bound debris stream of a TDE can, in some cases, dissolve a significant part of the stream. Interestingly, the interactions between the debris stream and the pre-existing accretion flow can potentially stall the stream far from the SMBH, and lead to a dim and long flare \citep{Kathirgamaraju2017}. For example, the accretion time-scale will be $\sim$10 yr and the corresponding luminosity $\sim0.01\,L_\mathrm{Edd}$ for a $10^{6}\,M_{\odot}$ SMBH \citep{Kathirgamaraju2017}. \section{A newly triggered AGN} Based on its radio spectral properties, it has been argued that NGC 7213 is a gigahertz-peaked spectrum (GPS) source \citep{Blank2005, Hancock2009}. The compact radio emission site ($<3$ mas, or equivalently $< 0.33$ pc; cf. \citealt{Blank2005}) implies that NGC 7213 could possibly be a newly triggered young radio galaxy. One scenario for the trigger of the short-lived GPS sources is the radiation-pressure-driven instability proposed by \citet{Czerny2009}.
This model has been applied to numerous systems, e.g., the repetitive flares with time-scales of tens of seconds observed in two BH XRBs, GRS 1915$+$105 and IGR J17091$-$3624 \citep{Belloni2000,Altamirano2011}, and one NS XRB, MXB 1730$-$335 \citep{Bagnoli2015}; the short-lived compact young radio sources \citep{Czerny2009, Wu2009}; the flares in the AGNs IC 3599 and NGC 3599 \citep{Grupe2015,Saxton2015}; and even the outbursts in the ultraluminous X-ray source ESO 243$-$49 HLX-1 \citep{Sun2016,Wu2016}. We caution that the flares in the two AGNs (IC 3599 and NGC 3599) have also been interpreted as TDEs \citep{Montesinos-Armijo2011,Saxton2015,Campana2015}. However, from a theoretical point of view, the existence of this instability is under active debate, e.g., opposite results can be found in different numerical simulations \citep{Hirose2009,Jiang2013,Mishra2016,Sc-adowski2016}. Apart from this debate, one prediction of the radiation-pressure-driven instability model is that there exists a universal correlation among the duration of the outburst, the peak bolometric luminosity, and the viscosity parameter of the accretion disc $\alpha$ \citep{Czerny2009,Wu2016}, i.e., $\log (T_{\rm burst}/{\rm yr}) = 1.25\log (L_{\rm bol, peak}/{\rm \,erg\,s^{-1}}) + 0.38\log(\alpha/0.02)-53.6$. Interestingly, the flare observed in NGC 7213, with a duration of $\sim$20 yr, a peak bolometric luminosity of $\sim 7\times 10^{43}{\rm \,erg\,s^{-1}}$ (see \autoref{sec:basic}), and $\alpha\approx 0.01$ \citep{Xie2016}, agrees with this correlation. However, there are several pieces of evidence against the radiation-pressure instability model. First, the radiation-pressure instability should occur in the high-luminosity regime \citep{Shakura1973,Lightman1974,Janiuk2002}. For example, the average luminosity of the sample in \citet{Wu2016} is about 0.3 $L_\mathrm{Edd}$, and \citet{Czerny2009} gave a threshold accretion rate for the radiation-pressure instability of $\sim0.025\, \dot{M}_\mathrm{Edd}$. The peak luminosity of the flare of NGC 7213, however, is still lower than the luminosity required by the radiation-pressure instability. Secondly, a slow-rise-fast-decay light curve is the characteristic property of the radiation-pressure instability model \citep{Janiuk2002, Janiuk2004, Janiuk2011, Czerny2009, Grzedzielski2017}. For example, the duration over which the flux changes by one order of magnitude during the rise phase is two or three times longer than that during the decay phase (see the simulated light curves for a $10^8~M_{\odot}$ SMBH in fig. 3 of \citealt{Czerny2009}). This contradicts the behaviour observed in NGC 7213, where (decay time-scale)/(rise time-scale) $>10$ (cf. \autoref{lc}). Thirdly, the typical amplitude of an SMBH flare driven by the radiation-pressure instability is about 2--3 orders of magnitude \citep[e.g.][]{Czerny2009}; for comparison, the amplitude in NGC 7213 is only $\sim$5--6 (cf. \autoref{sec:lc}). A smaller amplitude flare can be produced on the condition that the viscous stress of angular momentum transport depends more strongly on the gas pressure than on the total (gas$+$radiation) pressure \citep{Grzedzielski2017}. However, such a dependence seems unlikely, as magneto-hydrodynamical simulations indicate that the viscous stress should be proportional to the total pressure, not other variants \citep{Hirose2009,Blaes2014}. Considering the above reasons, we disfavour the operation of the radiation-pressure instability in NGC 7213.
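For completeness, the agreement with the duration correlation quoted above can be checked with a few lines of Python (an illustrative sketch using the values adopted in the text):

\begin{verbatim}
import math

L_bol_peak = 7e43   # peak bolometric luminosity [erg/s] (this work)
alpha      = 0.01   # viscosity parameter (Xie et al. 2016)

# log(T_burst/yr) = 1.25 log(L_peak) + 0.38 log(alpha/0.02) - 53.6
log_T = (1.25 * math.log10(L_bol_peak)
         + 0.38 * math.log10(alpha / 0.02) - 53.6)
print(f"Predicted burst duration: {10 ** log_T:.1f} yr")  # ~12 yr
\end{verbatim}

The predicted duration of $\approx$12 yr is within a factor of $\sim$2 of the observed $\sim$20 yr, i.e. consistent with the correlation given its scatter; the arguments against the model are therefore based on the luminosity threshold, the light-curve shape and the amplitude, rather than on the duration.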
\section{Thermal-viscous disc instability model}\label{sec:dim} The thermal-viscous disc instability model (DIM; see the review in \citealt{Lasota2001}) was first proposed to explain the outbursts of dwarf novae and was later applied to low-mass X-ray transients (LMXBTs). Although the idea of applying this disc instability to AGNs was proposed decades ago (e.g., \citealt{Lin1986, Siemiginowska1996, Burderi1998, Hameury2009}), there is still no evidence of this mechanism operating in any AGN. The DIM can naturally produce a FRED light curve \citep{Cannizzo1994,Cannizzo1995,King1998}. An extended exponential decay is achieved when the disc stays in the hot state and maintains a quasi-steady surface density profile during the outburst \citep{King1998,Lasota2016}. The decay time-scale ($\tau_\mathrm{decay}$) is determined by the viscous time-scale at the critical radius for instability, i.e. $\tau_\mathrm{decay} \approx \frac{1}{3}\tau_\mathrm{visc}$ \citep{King1998}. The critical radius is given as \citep[e.g. Eq. 2 in][]{Lasota2012}: \begin{equation} R_\mathrm{crit}\approx 42R_\mathrm{s}\Big(\frac{\dot{M}}{10^{-2}\dot{M}_\mathrm{Edd}}\Big)^{1/3}\Big(\frac{M}{10^{8}M_{\odot}}\Big)^{-1/3}, \end{equation} where $\dot{M}_\mathrm{Edd} = 10L_\mathrm{Edd}/c^{2}$ is the Eddington accretion rate. Numerical calculations also show that the critical radius for instability is of the order of $100\,R_\mathrm{s}$ for a $10^{8}M_{\odot}$ BH at an accretion rate of 0.01 $\dot{M}_\mathrm{Edd}$ \citep{Janiuk2004,Janiuk2011}. The viscous time-scale at the critical radius is \citep[e.g. Eq. 6 in][]{Lasota2012} \begin{equation} \tau_\mathrm{visc} \sim 10^{6}\Big(\frac{\alpha}{0.1}\Big)^{-1}\Big(\frac{T}{10^{4}\mathrm{K}}\Big)^{-1}\Big(\frac{R}{10^{15}\mathrm{cm}}\Big)^{1/2}\Big(\frac{M}{10^{8}\mathrm{M}_{\odot}}\Big)^{1/2}~\mathrm{yr}. \end{equation} For NGC 7213, taking $M=10^{8}\,M_{\odot}$ and $\dot{M}\approx10^{-2}\,\dot{M}_\mathrm{Edd}$ gives $R_\mathrm{crit}\approx 42\,R_\mathrm{s}\approx1.3\times10^{15}$ cm, and hence (for $\alpha=0.1$ and $T=10^{4}$ K) $\tau_\mathrm{visc}\sim10^{6}$ yr and $\tau_\mathrm{decay}\sim3\times10^{5}$ yr. A serious problem with applying the DIM to NGC 7213 is therefore that the predicted decay time-scale is several orders of magnitude longer than that observed. This is actually a well-known challenge for the application of the DIM to AGNs \citep{Lin1986}. We thus disfavour the DIM as the trigger of the flare observed in NGC 7213. \section{Summary} In variability analyses, the shape of the long-term light curve can serve as a diagnostic of the underlying physical mechanism. We summarize the main observational characteristics of the X-ray flare of NGC 7213 as follows: \begin{enumerate} \item The flare lasts nearly 40 years. \item The peak bolometric luminosity of the flare is $\sim$ 0.01 $L_\mathrm{Edd}$. \item The X-ray light curve shows a FRED profile (see \autoref{lc}), with an $e$-folding decay time-scale of approximately $8116$ d ($\approx 22.2$ yr). \item The amplitude of the flare is constrained to be larger than $\sim$ 5--6, as the onset of the flare may have been missed and the source had not yet recovered to quiescence (2016 December). \end{enumerate} We then examined the possible variability models proposed in the literature, and found that both the newly triggered AGN scenario (driven by the radiation-pressure instability) and the thermal-viscous DIM fail to explain some key properties observed in NGC 7213; we thus argue that they are unlikely. For example, the radiation-pressure instability is incapable of producing the FRED profile, and the decay time-scale from the thermal-viscous DIM is several orders of magnitude longer than that observed in NGC 7213. A delayed TDE of a main-sequence star is favoured.
The disruption of a star by a massive SMBH ($\sim10^{8}M_{\odot}$) can produce a long-duration flare with a low peak luminosity, and only a small fraction of the stellar mass is accreted by the SMBH in the case of a small impact parameter \citep{Guillochon2013}. Additionally, if the disrupted bound stream undergoes slow circularization, an exponential decay profile can be produced \citep{Lin2017a}. The X-ray light curve of NGC 7213 thus fits the delayed TDE scenario quite well, as confirmed by our detailed light-curve modelling (\autoref{fit}). Under the TDE interpretation, the flare in NGC 7213 has several unique properties compared to other TDEs. First, TDEs normally happen around less massive SMBHs, while in NGC 7213 it is a partial disruption of a star around a $M_{\rm BH}\sim 10^8\ {\rm M}_{\sun}$ BH. Secondly, the peak bolometric luminosity of TDEs is normally around or above the Eddington luminosity, but this one is highly sub-Eddington, $L_{\rm bol, peak} \sim$ 0.01 $L_{\rm Edd}$. Thirdly, the flare in NGC 7213 lasts nearly four decades, much longer than the others. \section*{Acknowledgments} We thank Paul Wiita, James Guillochon, Tinggui Wang, Wenfei Yu, Bin Liu, Morgan MacLeod and Dacheng Lin for helpful discussions, the referee for suggestions, and Dimitris Emmanoulopoulos for providing us with the {\it RXTE} data of NGC 7213. This work was supported in part by the National Key Research and Development Program of China (2016YFA0400804), the Youth Innovation Promotion Association of CAS (id. 2016243), the Natural Science Foundation of Shanghai (No. 17ZR1435800), and the National Natural Science Foundation of China (grant Nos. 11403074, 11773055, 11333005 and 11350110498). This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. \input{NGC7213.bbl} \end{document}
{ "timestamp": "2018-01-03T02:12:23", "yymm": "1712", "arxiv_id": "1712.05272", "language": "en", "url": "https://arxiv.org/abs/1712.05272" }
\subsection{Related Work} \label{sec:related} Resource allocation, utility maximization and opportunistic scheduling for downlink wireless systems have been intensely studied in the last two decades, and have had a major impact on cellular standards. We refer to \cite{sriyin14,GeoNeeTas06} for a survey of the key results. In this paper, we focus on the joint scheduling of URLLC and eMBB traffic. From an application point of view, there have been several studies arguing for the need to support URLLC services (e.g. for industrial automation) \cite{hwwtahaa16,ywjbas15,gla17}. With demand for both broadband and low-latency services growing, there have been rapid developments in the 5G standardization efforts in 3GPP. Of key relevance to this paper, the 3GPP RAN WG1 has focused on standardizing the slot structure for eMBB and URLLC, and has been evaluating signaling and control channels to support superposition and puncturing in recent meetings \cite{3gpp_ran1_87,3gpp_ran1_88}. We specifically refer the reader to Sections~8.1.1.3.4 -- 8.1.1.3.6 in \cite{3gpp_ran1_88} for current proposals. Beyond standards, recent work has focused on system-level design for such systems (overheads, packet sizes, control channel structure, etc.) \cite{pbfms16,ljcjs17,dkp16}. Of particular note, \cite{ljcjs17} argues (based on system-level simulation and queuing models) that statically partitioning bandwidth between eMBB and URLLC is very inefficient. There have also been several studies focusing on physical-layer aspects of URLLC (coding and modulation, fading, link budget) \cite{dkopy16,sltum16}. Efficient sharing of radio resources between eMBB and URLLC traffic has been discussed in the literature; see~\cite{ylpy18, ptsd18, ksp18}. In~\cite{ylpy18}, the authors consider the joint optimization of resource allocation for eMBB and URLLC traffic; however, they do not use puncturing/superposition mechanisms to share resources. Other works \cite{ptsd18, ksp18} use information-theoretic results to obtain expressions for the average eMBB rates under various decoding schemes for uplink eMBB traffic punctured/superposed by URLLC users; however, they do not consider the design of joint scheduling algorithms for eMBB and URLLC traffic. To the best of our knowledge, our paper is the first to explore the resource allocation issues for joint scheduling of URLLC and eMBB traffic using puncturing/superposition-based mechanisms. \section{Introduction} \label{sec:intro} An important requirement for 5G wireless systems is the ability to efficiently support both broadband and ultra-reliable low-latency communications. On the one hand, enhanced Mobile Broadband (eMBB) might require gigabit-per-second data rates { (based on a bandwidth of several 100 MHz)} and a moderate latency (a few milliseconds). On the other hand, Ultra Reliable Low Latency Communication (URLLC) traffic requires extremely low delays (0.25-0.3 msec/packet) with very high reliability (99.999\%)~\cite{3gpp_ran1_87}. To satisfy these heterogeneous requirements, the 3GPP standards body has proposed an innovative \emph{superposition/puncturing} framework for multiplexing URLLC and eMBB traffic in 5G cellular systems\footnote{An earlier version of this work appears in the Proceedings of IEEE Infocom 2018, Honolulu, HI,~\cite{AndeVSh18}.}. The proposed scheduling framework has the following structure \cite{3gpp_ran1_87}. As with current cellular systems, time is divided into slots, with a proposed one millisecond (msec) slot duration.
Within each slot, eMBB traffic can share the bandwidth over the time-frequency plane (see Figure~\ref{fig:time_frequency_plane}). The sharing mechanism can be opportunistic (based on the channel states of the various users); however, the eMBB shares are decided at the beginning of a slot and fixed for its duration\footnote{The sharing granularity among various eMBB users is at the level of Resource Blocks (RB), which are small time-frequency rectangles within a slot. In LTE today, these are (1 msec $\times$ 180 KHz), and could be smaller for 5G systems.}. Further, the new framework also allows aggregation of eMBB slots, where transmissions to an eMBB user over consecutive slots are coded together to achieve the better coding gains resulting from long codewords while reducing overheads due to control signals. This results in better spectral efficiency compared to the OFDMA frame structure of LTE~\cite{pbfms16}. \begin{figure} \centering \includegraphics[width=2.5in]{slot} \caption{Illustration of superposition/puncturing approach for multiplexing eMBB and URLLC: Time is divided into slots, and further subdivided into minislots. eMBB traffic is scheduled at the beginning of slots (sharing frequency across two eMBB users), whereas URLLC traffic can be dynamically overlapped (superpose/puncture) at any minislot.} \label{fig:time_frequency_plane} \end{figure} URLLC downlink packets may arrive during an ongoing eMBB transmission; if tight latency constraints are to be satisfied, they cannot be queued until the next slot. Instead, each eMBB slot is divided into minislots, each of which has a 0.125 msec duration\footnote{In 3GPP, the formal term for a `slot' is eMBB TTI, and a `minislot' is a URLLC TTI, where TTI expands to Transmit Time Interval.}. Thus, upon arrival, URLLC packets can be immediately scheduled in the next minislot {\em on top of the ongoing eMBB transmissions.} If the Base Station (BS) chooses non-zero transmission powers for both eMBB and overlapping URLLC traffic, this is referred to as {\em superposition}. If eMBB transmissions are allocated zero power when URLLC traffic is overlapped, it is referred to as {\em puncturing} of eMBB transmissions. To achieve high reliability, URLLC transmissions are by design protected through coding and, if necessary, HARQ. At the end of an eMBB slot, the BS can signal to eMBB users the locations, if any, of URLLC superposition/puncturing. eMBB users can then use this information to decode transmissions, with some possible loss of rate depending on the amount of URLLC overlap. See~\cite{3gpp_ran1_87, 3gpp_ran1_88} for additional details. A key problem in this setting is the {\em joint scheduling of eMBB and URLLC traffic over two time-scales.} At the slot boundary, resources are allocated to eMBB users (with possible aggregation of slots) based on their channel states and utilities, in effect allocating long-term rates to optimize high-level goals (e.g. utility optimization). Meanwhile, at each minislot boundary, the (stochastic) URLLC demands are placed onto previously scheduled and ongoing eMBB transmissions. Decisions on the placement of such overlaps across the scheduled eMBB user(s) impact the rates those users see in that slot. Thus we have a coupled problem of jointly optimizing the scheduling of eMBB users on slots with the placement of URLLC demands across minislots.
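To make the two-time-scale structure concrete, the following schematic Python loop (our own illustration; the function and parameter names are not part of the 3GPP proposal) shows how eMBB shares are fixed at a slot boundary while URLLC arrivals are overlapped at minislot boundaries and signalled at the end of the slot.

\begin{verbatim}
import random

MINISLOTS = 8   # a 1 msec slot split into 0.125 msec minislots

def run_slot(embb_shares, urllc_mean):
    """embb_shares: user -> bandwidth fraction, fixed for the slot."""
    overlaps = []   # (minislot, user, punctured load) records
    for m in range(MINISLOTS):
        # URLLC demand arriving now must be served in this minislot,
        # on top of the ongoing eMBB transmissions
        d = min(random.expovariate(1.0 / urllc_mean), 1.0 / MINISLOTS)
        for user, share in embb_shares.items():
            overlaps.append((m, user, d * share))  # e.g. proportional split
    return overlaps  # signalled to eMBB receivers at the end of the slot

print(run_slot({"u1": 0.5, "u2": 0.5}, urllc_mean=0.02)[:4])
\end{verbatim}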
\subsection{Main Contributions} \label{sec:main-contribs} This paper is, to our knowledge, the first to formalize and solve the joint eMBB/URLLC scheduling problem described above. We consider various models for the eMBB rate loss associated with URLLC superposition/puncturing, for which we characterize the associated feasible throughput regions and propose online joint scheduling algorithms, as detailed below. \noindent \textbf{Linear Model:} When the rate loss to eMBB is directly proportional to the fraction of superposed/punctured minislots, we show that the jointly optimal scheduler has a nice decomposition. Despite having non-linear utility functions and time-varying channel states, the stochastic URLLC traffic can be {\em uniform-randomly placed} in each minislot, while eMBB users can be scheduled via a greedy iterative gradient algorithm that only accounts for the {\em expected} rate loss due to the URLLC traffic. \noindent \textbf{Convex Model:} For more general settings where the rate loss can be modeled by a convex function, the solution does not have the decomposition property of the linear model, and hence finding the optimal solution is challenging. Therefore, we restrict attention to a simpler class of joint scheduling policies, called \emph{minislot-homogeneous} joint scheduling policies, where the URLLC placement policy does not change across the minislots in an eMBB slot. In this setting, we characterize the capacity region and derive concavity conditions under which we can derive the effective rate seen by eMBB users (post-puncturing by URLLC traffic). We then develop a stochastic approximation algorithm which jointly schedules eMBB and URLLC traffic, and show that it asymptotically maximizes the utility for eMBB users while satisfying URLLC demands. We also show that for convex loss functions which are \emph{homogeneous}, minislot-homogeneous joint scheduling policies are optimal within the larger class of \emph{causal} and \emph{non-anticipative} joint scheduling policies. Further, for the convex loss model, we show that it is better to schedule eMBB users to share bandwidth (i.e. slice across frequency, see also Fig.~\ref{fig:config_2}), and let each user occupy the entire slot duration, to mitigate the rate loss due to URLLC puncturing. \noindent \textbf{Threshold Model:} Finally, we consider a loss model where eMBB traffic is unaffected by puncturing until a threshold is reached; beyond this threshold it suffers complete throughput loss (a 0-1 rate loss model). We consider two broad classes of minislot-homogeneous policies, where the URLLC traffic is placed in minislots in proportion to either the eMBB resource allocations (Resource Proportional (RP)) or the eMBB loss thresholds (Threshold Proportional (TP)). We motivate these policies (e.g. TP minimizes the probability of any eMBB loss in an eMBB slot) and derive the associated throughput regions. Finally, we utilize the additional structure underlying the RP and TP Placement policies, along with the shape of the threshold loss function, to derive fast gradient algorithms that converge and provably maximize utility. \input{1b-related} \section{System Model} \label{sec:sys-model} {\em \bf Traffic model:} We consider a wireless system supporting a fixed set $ \mathcal{U} $ of backlogged eMBB users and a stationary process of URLLC demands. eMBB scheduling decisions are made across slots, while URLLC demands arrive and are immediately scheduled in the next minislot.
In this section we consider the case where all eMBB users receive resources on a per-slot basis without slot aggregation, even though more flexible resource allocations, possibly including slot aggregation and splitting, are proposed in the 5G standards~\cite{pbfms16}. We justify this choice in Sec.~\ref{sec:eMBB_slot_aggregation}. Each eMBB slot has an associated set of minislots, where the set ${\cal M} = \{ 1,\dots |{\cal M}| \}$ denotes their indices. URLLC demands across minislots are modeled as an independent and identically distributed (i.i.d.) random process. We let the random variables $(D(m), m \in {\cal M})$ denote the URLLC demands per minislot for a typical eMBB slot, and let $D$ be a random variable whose distribution is that of the aggregate URLLC demand per eMBB slot, i.e., $D \sim \sum_{m \in {\cal M}} D(m)$, with cumulative distribution function $F_D(\cdot)$ and mean $E[D] = \rho.$ We assume demands have been normalized so that the maximum URLLC demand per minislot is $f$ and the maximum aggregate demand per eMBB slot is $f \times |{\cal M}| = 1$, i.e., all the frequency-time resources are occupied. URLLC demands per minislot exceeding the system capacity are blocked by the URLLC scheduler; thus $D \leq 1$ almost surely. The system is engineered so that blocking of URLLC traffic on a minislot is a rare event, i.e., it satisfies the desired reliability on such traffic. {\em \bf Wireless channel variations:} The wireless system experiences channel variations in each eMBB slot, which are modeled as an i.i.d. random process over a set of channel states ${\cal S} = \{1,\ldots, |{\cal S}|\}.$ Let $S$ be a random variable modeling the distribution over the states in a typical eMBB slot, with probability mass function $p_S(s) = P(S=s)$ for $s \in {\cal S}.$ For each channel state $s$, eMBB user $u$ has a known peak rate $\hat{r}_{u}^s.$ The wireless system can choose what proportions of the frequency-time resources to allocate to each eMBB user on each minislot for each channel state. This is modeled by a matrix ${\bm \phi} \in \Sigma $ where \begin{multline} \Sigma := \left \{ \bvecgreek{\phi} \in \Reals_+^{|{\cal U}|\times|{\cal M}|\times|{\cal S}|} ~|~ \right. \\ \left. \sum_{u \in {\cal U}} \phi_{u,m}^{s} =f, \forall m \in {\cal M}, s \in {\cal S} \right\} \end{multline} and where the element $\phi_{u,m}^s$ represents the fraction of resources allocated to user $u$ in minislot $m$ in channel state $s$. We also let $\phi_{u}^{s} = \sum_{m\in {\cal M}} \phi_{u,m}^{s}$, i.e., the total resources allocated to user $u$ in an eMBB slot in channel state $s$. Now, assuming no superposition/puncturing, if the system is in channel state $s$ and the eMBB scheduler chooses an allocation ${\bm \phi}$, the rate $r_u$ allocated to user $u$ is given by $ r_u = \phi_{u}^s \hat{r}_{u}^{s}. $ The scheduler is assumed to know the channel state and can thus opportunistically exploit such variations in allocating resources to eMBB users. Note that for simplicity we adopt a flat-fading model, namely, the rate achieved by a user is directly proportional to the fraction of bandwidth allocated to it (the scaling factor is the peak rate of the user for the current channel state). {\em \bf Class of joint eMBB/URLLC schedulers:} We consider a class of stationary joint eMBB/URLLC schedulers, denoted by $\Pi$, satisfying the following properties.
A scheduling policy combines a possibly state-dependent eMBB {\em resource allocation} matrix $\bvec{ \phi}$ per slot with a URLLC {\em demand placement} strategy across minislots. The placement strategy may impact the eMBB users' rates, since it affects the URLLC superposition/puncturing loads they experience. As mentioned earlier in discussing the traffic model, in order to meet low-latency requirements, URLLC traffic demands are scheduled immediately upon arrival or blocked. The scheduler is assumed to be {\em causal}, so it only knows the current (and past) channel states and peak rates $ \hat{r}_{u}^{s}$ for all $u \in {\cal U}$ and $s \in {\cal S}$, but does not know the realization of future channels or URLLC traffic demands. In making superposition/puncturing decisions across minislots, the scheduler can use knowledge of the previous placement decisions that were made. In addition, the scheduler is assumed to know (or be able to measure over time) the channel state distribution across eMBB slots and the URLLC demand distributions per minislot, i.e., that of $D(m)$, and per eMBB slot, i.e., $D$; thus in particular it knows $\rho = E[D]$. In summary, a joint scheduling policy $\pi \in \Pi$ is characterized by the following: \begin{itemize} \item an eMBB resource allocation ${\bm \phi}^{\pi} \in \Sigma$, where ${\phi}^{\pi,s}_{u,m}$ denotes the fraction of frequency-time slot resources allocated to eMBB user $u$ on minislot $m$ when the system is in state $s$; \item the distributions of URLLC loads across eMBB resources induced by its URLLC placement strategy, denoted by random variables ${\bf L}^{\pi} = (L^{\pi,s}_{u,m} | u \in {\cal U}, m \in {\cal M}, s \in {\cal S})$, where $L^{\pi,s}_{u,m}$ denotes the URLLC load superposed on/puncturing the resource allocation of user $u$ on minislot $m$ when the channel is in state $s$. \end{itemize} The distributions of $L^{\pi,s}_{u,m}$ and their associated means $\overline{l}_{u,m}^{\pi,s}$ depend on the joint scheduling policy $\pi$, but for all states, users and minislots satisfy $$ L^{\pi,s}_{u,m} \leq {\phi}^{\pi,s}_{u,m}~~~ \mbox{almost surely}. $$ In the sequel we let $L^{\pi,s}_{u} = \sum_{m \in {\cal M}} L^{\pi,s}_{u,m}$, i.e., the aggregate URLLC traffic superposed on/puncturing user $u$ in channel state $s$, and denote its mean by $\overline{l}_{u}^{\pi,s}$, noting that $$ L^{\pi,s}_{u} \leq {\phi}^{\pi,s}_{u} \quad \mbox{almost surely}. $$ We also let $L^{\pi,s} := \sum_{u \in {\cal U}} L^{\pi,s}_{u}$ denote the aggregate induced load and note that for any policy $\pi$ and any state $s$ we have $$ \rho = \E[D] = \E[L^{\pi,s }] = \E[\sum_{u \in {\cal U}} L_{u}^{\pi,s}] = \sum_{u \in {\cal U}} \overline{l}_{u}^{\pi,s}. $$ {\em \bf Modeling superposition/puncturing and eMBB capacity regions:} Under a joint scheduling policy $\pi$, we model the rate achieved by an eMBB user $u$ in channel state $s$ by a random variable \begin{eqnarray} R^{\pi,s}_{u} &=& f_{u}^{s} ( \phi^{\pi,s}_{u}, L_{u}^{\pi,s}), \label{eqn:tput-fn} \end{eqnarray} where the {\em rate allocation function} $f_{u}^{s}(\cdot ,\cdot )$ models the impact of URLLC superposition/puncturing -- one would expect it to be increasing in the first argument (the allocated resources) and decreasing in the second argument (the amount of superposition/puncturing by URLLC traffic).
Under our system model we have that $$ R^{\pi,s}_{u} \leq f_{u}^{s} ( \phi^{\pi,s}_{u},0) = \phi^{\pi,s}_{u} \hat{r}_{u}^{s} ~\mbox{almost surely}, $$ with equality if there is no superposition/puncturing, i.e., when $L_{u}^{\pi,s}=0.$ Let $\overline{r}^{\pi,s}_{u} = E[ R^{\pi,s}_{u}]$ denote the mean rate achieved by user $u$ in state $s$ under the URLLC superposition/puncturing distribution induced by scheduling policy $\pi$. \noindent {\bf Models for Throughput Loss:} In the sequel we shall consider specific forms of superposition/puncturing loss models: {\em (i)} linear, {\em (ii)} convex, and {\em (iii)} threshold models. We rewrite the rate allocation function in (\ref{eqn:tput-fn}) as the difference between the peak throughput and the loss due to URLLC traffic, and consider functions that can be decomposed as $$ f_{u}^{s}(\phi_{u}^{s},l_{u}^{s}) = \hat{r}_{u}^{s} \phi_{u}^{s} \left(1- h^s_u \left( \frac{l_{u}^{s}}{\phi_{u}^{s}}\right)\right), $$ where $h^s_u :[0,1] \rightarrow [0,1]$ is the {\em rate loss function} and captures the relative rate loss due to URLLC overlap on eMBB allocations. The puncturing models we study map directly to structural assumptions on the rate loss function $h^s_u(\cdot)$; namely, it is a non-decreasing function, and is one of {\em linear, convex, or threshold}, as shown in Figure~\ref{fig:lossfunctions}. \begin{figure} \centering \includegraphics[height=1.5in,width=3.5in]{lossfunctions} \caption{The illustration exhibits the rate loss function for the various models considered in this paper, linear, convex and threshold.} \label{fig:lossfunctions} \end{figure} \noindent {\bf Linear Model:} Under the linear model, the expected rate for user $u$ in channel state $s$ for policy $\pi$ is given by $$ r^{\pi,s}_{u} = \E [f_{u}^{s} ( \phi^{\pi,s}_{u}, L_{u}^{\pi,s})]= \hat{r}_{u}^{s}( \phi^{\pi,s}_{u} - \overline{l}_{u}^{\pi,s}), $$ i.e., $h^s_u(x) = x,$ and the resulting rate to eMBB users is a linear function of both the allocated resources and the mean induced URLLC loads. This model is motivated by basic results for the capacity of an AWGN channel with erasures; see~\cite{Julian_2002} for more details. Our system in a given network state can be approximated as an AWGN channel with erasures when the slot sizes are long enough that the physical-layer error control coding of eMBB users uses long codewords. Further, there is a dedicated control channel through which the scheduler can signal the positions of URLLC overlap to the eMBB receiver. Indeed, such a control channel has been proposed in the 3GPP standards~\cite{3gpp_ran1_87}. Note that under this model the rate achieved by a given user depends only on the aggregate superposition/puncturing it experiences, i.e., not on the specific minislots and frequency bands in which it occurs. We discuss scheduling policies for linear loss models in Section~\ref{sec:erasure_channel}. \noindent {\bf Convex Model:} In the convex model, the rate loss function $h^s_u(\cdot)$ is convex (see Figure~\ref{fig:lossfunctions}), and the resulting rate for eMBB user $u$ in channel state $s$ under policy $\pi$ is given by $$ r^{\pi,s}_{u} = \E [f_{u}^{s} ( \phi^{\pi,s}_{u}, L_{u}^{\pi,s})]= \hat{r}_{u}^{s} \phi^{\pi,s}_{u} \left(1- E\left[ h^s_u \left( \frac{L^{\pi,s}_{u}}{\phi_{u}^{\pi,s}}\right)\right]\right). $$ This covers a broad class of models, and is discussed in Section~\ref{sec:convex_model}.
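As a concrete illustration of the decomposition above, the sketch below (our own; the quadratic loss is an arbitrary convex example) evaluates the expected eMBB rate $\hat{r}_{u}^{s} \phi \left(1- \E[h(L/\phi)]\right)$ for sampled URLLC loads, and verifies that under the linear model the rate depends only on the mean load.

\begin{verbatim}
import numpy as np

def expected_rate(r_peak, phi, loads, h):
    """Mean rate r_peak * phi * (1 - E[h(L/phi)]) for sampled loads L."""
    x = np.clip(loads / phi, 0.0, 1.0)
    return r_peak * phi * (1.0 - h(x).mean())

rng = np.random.default_rng(0)
r_peak, phi = 10.0, 0.5
loads = rng.uniform(0.0, phi, size=100_000)   # assumed load law, L <= phi

lin = expected_rate(r_peak, phi, loads, lambda x: x)       # linear h(x)=x
cvx = expected_rate(r_peak, phi, loads, lambda x: x ** 2)  # convex example

# Linear model: r = r_peak * (phi - E[L]), i.e. only the mean load matters
print(lin, r_peak * (phi - loads.mean()), cvx)
\end{verbatim}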
\noindent {\bf Threshold Model:} Finally, the threshold model is designed to capture a simplified packet transmission and decoding process at an eMBB receiver: the data is either received perfectly or lost, depending on the amount of superposition/puncturing. With a slight abuse of notation, we let $h^s_u$ depend on both the relative URLLC load and the eMBB user allocation, i.e., $h^s_u (x) = {\bf 1} (x \geq t_u^s(\phi^s_u))$, where the threshold in turn is an increasing function $t_{u}^{s}(\cdot)$ satisfying $x \geq t_{u}^{s}(x) \geq 0.$ Such thresholds might reflect various engineering choices where codes are adapted when users are allocated more resources, so as to be more robust to interference/URLLC superposition/puncturing. The resulting rate for eMBB user $u$ in channel state $s$ under policy $\pi$ is then given by $$ r^{\pi,s}_{u} = \hat{r}_{u}^{s} \phi^{\pi,s}_u P(L_{u}^{\pi,s} \leq \phi_u^{\pi,s} t_{u}^{s}(\phi_{u}^{\pi,s})). $$ While such a sharp falloff is somewhat extreme, it is nevertheless useful for modeling short codes that are designed to tolerate a limited amount of interference. In practice one might expect a smoother falloff, perhaps more akin to the convex model, e.g., when hybrid ARQ (HARQ) is used. We discuss policies under the threshold-based model in Section~\ref{sec:threshold_model}. \noindent {\bf Capacity set for eMBB traffic:} We define the capacity set ${\cal C} \subset \Reals_+^{|{\cal U}|}$ for eMBB traffic as the set of long-term rates achievable under policies in $\Pi.$ Let ${\bf c}^\pi = ( c_u^\pi | u \in {\cal U} )$ where $$ c_u^\pi = \sum_{s \in {\cal S}} r^{\pi,s}_{u} p_S(s). $$ Then the capacity set is given by $$ {\cal C} = \{ \mathbf{c} \in \Reals_+^{|{\cal U}|} ~|~ \exists~ \pi \in \Pi ~\mbox{such that}~ \mathbf{c} \leq \mathbf{c}^{\pi} \}. $$ Note that this capacity region depends on the scheduling policies under consideration as well as the distributions of the channel states and URLLC demands. \noindent {\bf Scheduling objective: URLLC priority and eMBB utility maximization:} As mentioned earlier, URLLC traffic is immediately scheduled upon arrival in the next minislot, i.e., no queuing is allowed. Thus, if demands exceed the system capacity in a given minislot, traffic is lost. However, we assume that the system has been engineered so that such URLLC overloads are extremely rare, and thus URLLC traffic can meet extremely low latency requirements with high reliability\footnote{Note that since we allow URLLC traffic in the entire system bandwidth, such overload events are very rare. }. For eMBB traffic we adopt a utility maximization framework wherein each eMBB user $u$ has an associated utility function $U_u(\cdot)$, which is a strictly concave, continuous and differentiable function of the average rate $c_u^\pi$ experienced by the user. Our aim is to characterize the optimal rate allocations associated with the utility maximization problem \begin{equation} \max_{\mathbf{c}} \{ \sum_{u \in \mathcal{U}} U_u\brac{c_u}~|~ \mathbf{c} \in {\cal C} \}, \end{equation} and to determine a scheduling policy $\pi$ that realizes such allocations. \section{Linear Model for Superposition/Puncturing} \label{sec:erasure_channel} In any state $ s $, the optimal joint eMBB/URLLC scheduler may either 1) protect the user with the lower channel rate by placing less URLLC traffic into its frequency resources to ensure fairness, or 2) opportunistically place URLLC traffic so that the user with a better channel gets a higher rate, improving the overall system throughput.
The solution for any state is a complex function of the network states and their distribution and of the user utility functions; in general, eMBB scheduling and URLLC puncturing may be dependent. In this section, we show a surprising result: despite having non-linear utility functions, if the loss functions are linear and the eMBB scheduler is intelligent (i.e., takes into account the degradation of rates due to puncturing), then the URLLC scheduler can be {\em oblivious to the channel states, utility functions and the actual rate allocations of the eMBB scheduler.} \subsection{Characterization of capacity region} Let us consider the capacity region for a wireless system based on the linear superposition/puncturing model under a restricted class of policies $\Pi^{LR}$ that combine feasible eMBB allocations ${\bm \phi} \in \Sigma$ with random placement of URLLC demands uniformly over the bandwidth across minislots. Note that the notation $ LR $ stands for linear loss model (L) with random (R) placement of URLLC traffic. For any $\pi \in \Pi^{LR}$ with eMBB allocation $\bm{\phi}^\pi$, the mean induced loads under such randomization for each state $s \in {\cal S}$ and minislot $m \in {\cal M}$ satisfy $\overline{l}^{\pi,s}_{u,m} = \rho \phi^{\pi,s}_{u,m}.$ Indeed, randomization clearly leads to induced loads that are proportional to the eMBB allocations on a per-minislot basis, but also per eMBB slot, i.e., $\overline{l}^{\pi,s}_{u} = \rho \phi^{\pi,s}_{u}.$ Thus for our linear loss model we have that $$ r^{\pi,s}_u= \hat{r}_{u}^{s} (\phi_{u}^{\pi,s}-\overline{l}_{u}^{\pi,s}) = \hat{r}_{u}^{s} \phi_{u}^{\pi,s} (1 - \rho) . $$ Hence the overall user rates achieved under such a policy are given by ${\bf c}^{\pi} = ( c_u^{\pi} | u \in {\cal U} )$ where $$ c_u^{\pi} = \sum_{s\in {\cal S}} \hat{r}_{u}^{s} \phi_{u}^{\pi,s}(1- \rho) p_S(s). $$ The capacity region associated with policies that use uniformly randomized URLLC placement is thus given by \begin{eqnarray*} {\cal C}^{LR} & = & \{ \mathbf{c} \in \Reals_+^{|{\cal U}|} ~|~ \exists \pi \in \Pi^{LR} ~\mbox{s.t.}~ \mathbf{c} \leq {\bf c}^{\pi} \} \\ & = & \{ \mathbf{c} \in \Reals_+^{|{\cal U}|} ~|~ \exists {\bm{\phi}} \in \Sigma ~\mbox{s.t.}~ \mathbf{c} \leq {\bf c}^{\bm{\phi}} \}, \end{eqnarray*} where we have abused notation by using ${\bf c}^{\bm{\phi}}$ to represent the throughput achieved under a policy $\pi$ that uses eMBB resource allocation ${\bm{\phi}}$ and uniformly randomized URLLC demand placement. Finally, note that for any fixed $\rho \in (0, 1),$ $ {\cal C}^{LR}$ is a closed and bounded convex region. This is because an affine map of a convex region remains convex; hence multiplying the constraints on the capacity region defined by ${\bm \phi}$ by the constant $(1-\rho)$ preserves convexity of the rate region. \begin{theorem} \label{thm:theorem_on_erasure_channel_based_model} For a wireless system under the linear superposition/puncturing loss model we have that ${\cal C} = {\cal C}^{LR}.$ \end{theorem} The proof is deferred to Appendix~A. In other words, the throughput ${\bf c}^\pi \in {\cal C}$ achieved by any feasible policy $\pi \in \Pi$ can also be achieved by a policy $\pi'$ with a possibly different eMBB resource allocation than $\pi$ but utilizing uniform random placement of URLLC demands across minislots.
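The mechanism behind Theorem~\ref{thm:theorem_on_erasure_channel_based_model} is that uniformly random placement induces mean loads proportional to the eMBB allocations, $\overline{l}^{\pi,s}_{u} = \rho \phi^{\pi,s}_{u}$. A small Monte Carlo sketch of this fact (our own illustration; the demand law is an arbitrary choice with mean $\rho$):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
phi = np.array([0.2, 0.3, 0.5])   # eMBB bandwidth shares (sum to 1)
rho = 0.3                          # mean aggregate URLLC demand per slot
GRAINS = 64                        # demand granularity (illustrative)

n_slots, loads = 50_000, np.zeros(3)
for _ in range(n_slots):
    d = rng.uniform(0.0, 2.0 * rho)       # assumed demand law, E[D] = rho
    # each grain of demand punctures a uniformly random point of the
    # band, so it lands on user u with probability phi_u
    hits = rng.multinomial(GRAINS, phi)
    loads += hits * (d / GRAINS)

print(loads / n_slots)   # ~ rho * phi = [0.06, 0.09, 0.15]
\end{verbatim}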
\subsection{Utility maximizing joint scheduling} \label{sec:util-linear} Given the result in Theorem \ref{thm:theorem_on_erasure_channel_based_model}, we now restate the utility maximization problem as optimizing solely over joint scheduling policies that use random URLLC placement, as follows: \begin{eqnarray*} \label{eq:optimization_prob_linear} \max_{\bvec{\phi} \in \Sigma} & & \sum_{u \in \mathcal{U}} U_u ( c_u^{\bm \phi}), \\ \mbox{s.t.} & & c_u^{\bm \phi}= \sum_{s\in {\cal S}} \hat{r}_{u}^{s} \phi^{s}_{u} (1- \rho) p_S(s),~~ \forall u \in {\cal U} . \end{eqnarray*} The above optimization problem has a strictly concave cost function and convex constraints. Thus, at face value, it appears that we can apply the gradient scheduler introduced in \cite{Stolyar05}, an online algorithm designed to converge to the solution of a similar optimization problem. This observation is approximately correct, but subject to two modifications. First, the setting in \cite{Stolyar05} has deterministic rates in each channel state. In our case, the rates in each channel state are stochastic due to puncturing by URLLC traffic (this results in the $(1 - \rho)$ correction). This can be easily addressed by modifying the setting in \cite{Stolyar05}; the finite-state and i.i.d. nature of the puncturing implies that the proofs in \cite{Stolyar05} hold with minor modifications; we skip the details. The second issue is somewhat more nuanced. In current wireless systems (e.g. LTE) and proposals for 5G systems, a slot is partitioned into a collection of Resource Blocks (RB), where each RB is a time-frequency rectangle (1 msec $\times$ 180 KHz in LTE). Importantly, these RBs can be individually allocated to different eMBB users. If we now apply the gradient scheduler in \cite{Stolyar05} to our setting, the result will be that all RBs in a slot are allocated to the same user. While this is no doubt asymptotically optimal, it seems intuitive that sharing RBs across users even within a slot will lead to better short-term performance. Indeed, this intuition has been explored in the context of iterative MaxWeight algorithms to provide formal guarantees; see \cite{BodShaYinSri_10,BodShaYinSri_11}. The high-level idea is that even within a slot, RB allocations are done iteratively, where future RB allocations need to account for prior rate allocations within the same slot. This is formalized below, where we describe our proposed joint eMBB-URLLC scheduler. {\bf The URLLC scheduler:} As explained in the previous section, the URLLC scheduler places the URLLC traffic uniformly at random in each minislot. {\bf The eMBB scheduler:} Let there be $B$ resource blocks available for allocation in every eMBB slot, indexed by $1,2, \ldots, B$. Let $\overline{R}_u(t-1)$ be the random variable denoting the average rate received by eMBB user $u$ up to eMBB slot $t-1$, and let $ \overline{r}_u(t-1) $ be a realization of $\overline{R}_u(t-1)$.
In any eMBB slot $t$ we schedule a user $u(b)$ in RB $b$ such that \begin{equation} u(b) \in \text{argmax} \cbrac{\hat{r}_{u}^{s} U_u^{'}\brac{\rbartime{u}{b-1}{t}}, \, u \in \mathcal{U}}, \label{eqn:opt-search} \end{equation} where $\rbartime{u}{b-1}{t}$ is an \emph{estimate} of the average rate received by eMBB user $u$ up to slot $t$, which is iteratively updated as follows: \begin{multline} \rbartime{u}{b}{t} = \begin{cases} \overline{r}_u(t-1), & b=0, \\ \brac{1-\epsilon}\rbartime{u}{b-1}{t} \quad &\\ + \epsilon \brac{\hat{r}_{u}^{s}\frac{1}{B}(1-\rho) \mathbbm{1}\brac{u=u(b)}}, & b \neq 0. \label{eqn:lin-rate-update} \end{cases} \end{multline} In the above equation, $\epsilon$ is a small positive value. At the end of eMBB slot $t$, the eMBB scheduler receives feedback from the eMBB receivers indicating the actual rates received by the eMBB users due to the allocations. We denote the rate received by eMBB user $u$ in slot $t$ by the random variable $R_u(t)$ and its realization by $ r_u(t) $. We finally update $\overline{r}_u(t)$ as follows: \begin{equation} \label{eq:update_for_rbar_lin} \overline{r}_u(t)= \brac{1-\epsilon}\overline{r}_u(t-1) + \epsilon r_u(t). \end{equation} This scheduler and its update equations are analogous to the gradient algorithm of \cite{Stolyar05} (see also the iterative algorithms in \cite{BodShaYinSri_10,BodShaYinSri_11}). The optimality proof of this algorithm follows (with minor modifications) from the analysis in \cite{Stolyar05}; we skip the details. \noindent {\bf Remarks:} \textbf{(i)} A natural decomposition of the joint eMBB+URLLC scheduling is now apparent. On the one hand, the eMBB scheduler maximizes utilities based on the {\em expected} channel rates stemming from uniform random puncturing of minislots (accounted for through the $(1-\rho)$ multiplicative factor), and does so using the iterative gradient scheduler. The URLLC scheduler, on the other hand, is completely agnostic to either the channel state or the actual eMBB allocations, and simply punctures minislots based on the current instantaneous demand. \textbf{(ii)} The fact that the URLLC traffic placement is completely agnostic to the channel state and eMBB utilities/allocations is surprising. Intuitively, it seems plausible that one could puncture an eMBB user with a lower marginal utility with more URLLC traffic, while protecting an eMBB user with a higher marginal utility, and achieve a better sum utility. Further, it seems reasonable that eMBB users with a worse channel state (and thus a lower rate) could be loaded with additional URLLC traffic. However, Theorem~\ref{thm:theorem_on_erasure_channel_based_model} implies that there exists an optimal solution that is achieved by channel- and utility-oblivious, uniformly random URLLC placement, thus providing a very simple algorithm for URLLC scheduling. \textbf{(iii)} We remark that the optimality of random puncturing for linear loss models depends critically on the use of an opportunistic scheduler for eMBB traffic. To see this, consider a simple system with two symmetric eMBB users, each with two possible channel states. The associated channel rates are either $2$ or $4$ packets/slot with equal probability, independent across users and time slots. Suppose that we use a static (non-opportunistic) scheduler, which equally splits channel access between the users. It is easy to calculate that the rate to each user is then 1.5 packets/slot. Next suppose that the URLLC load is 50\%, and that this traffic {\em randomly punctures} eMBB users.
Then, by symmetry, it follows that the rate per eMBB user is $0.75$ packets/slot. In contrast, suppose that puncturing is opportunistic, where the user with the currently lower rate is punctured whenever possible (opportunistic puncturing of the currently worse eMBB user); a straightforward calculation shows that the rate to each eMBB user is then $0.875$ packets/slot, a {\em strict improvement over random puncturing.} At a high level, this is because opportunistic eMBB scheduling operates on the Pareto frontier of the two-user capacity region, and consequently there is no residual opportunistic gain to be obtained by puncturing. However, with non-opportunistic scheduling, the system is not pushed to the boundary; thus, opportunistic puncturing can extract additional throughput for eMBB users. \section{Threshold Model and Placement Policies} \label{sec:threshold_model} In the previous section, we developed a stochastic approximation based algorithm for minislot-homogeneous policies. This algorithm iteratively solves the optimization problem given in (\ref{eqn:opt-stoch-approx}), which jointly optimizes over a pair of row vectors $(\bvecgreek{\phi}^s, \bvecgreek{\gamma}^s).$ While this convex optimization problem can be solved using standard methods, it can become computationally challenging as the number of users increases. In this section, we restrict our attention to a threshold model for superposition/puncturing, and look at policies that impose structural conditions on the puncturing matrix $\bvecgreek{\gamma}.$ We will show that the resulting class of policies has nice theoretical properties that lead to simpler online algorithms (solving (\ref{eqn:opt-search}), which is a one-dimensional search). We consider two types of structural conditions on $\bvec{\gamma}$: \noindent \textbf{(i) Resource Proportional (RP) Placement:} The first is based on allocating URLLC demands in proportion to the eMBB users' slot allocations, i.e., $\gamma^s_{u} = \phi^s_{u}.$ We refer to this as Resource Proportional (RP) Placement and denote such policies by $$ \Pi^{RP,\delta} := \{ ({\bm \phi},{\bm \gamma}) \in \Pi^{H, \delta} ~|~ {\bm \gamma} ={\bm \phi} \}, $$ and define the associated achievable throughput region \begin{eqnarray*} {\cal C}^{RP,\delta} &=& \{ \mathbf{c} \in \Reals_+^{|{\cal U}|} ~|~ \exists {\bm \pi} \in \Pi^{RP,\delta} ~\mbox{s.t.}~ \mathbf{c} \leq {\bf c}^{{\bm \pi}} \}. \end{eqnarray*} The motivation for RP Placement comes from the optimality of random placement for the linear model in Section~\ref{sec:erasure_channel}. Observe that if puncturing occurs uniformly at random, then the expected number of punctures is directly proportional to the fraction of bandwidth allocated to an eMBB user. Thus, RP Placement can be viewed as a {\em determinized version} of the random placement strategy, which ensures that the proportions of puncturing satisfy resource-proportional ratios. \noindent \textbf{(ii) Threshold Proportional (TP) Placement:} The second policy allocates URLLC demands in proportion to the eMBB users' associated loss thresholds so as to avoid losses, $$ \gamma^s_{u} = \frac{\phi^s_{u} t^s_{u}(\phi^s_{u})}{\sum_{u' \in {\cal U}} \phi^s_{u'} t^s_{u'}(\phi^s_{u'})}.
$$ We refer to this as Threshold Proportional (TP) Placement and denote such policies by \begin{eqnarray*} \Pi^{TP,\delta} := && \\ &&\hspace*{-50pt} \{ ({\bm \phi},{\bm \gamma}) \in \Pi^{H, \delta} ~|~ \gamma^s_{u} = \frac{\phi^s_{u} t^s_{u}(\phi^s_{u})}{\sum_{u' \in {\cal U}} \phi^s_{u'} t^s_{u'}(\phi^s_{u'})} ~\forall s \in {\cal S}, u \in {\cal U} \}. \end{eqnarray*} The associated achievable throughput region is denoted \begin{eqnarray*} {\cal C}^{TP,\delta} & = & \{ \mathbf{c} \in \Reals_+^{|{\cal U}|} ~|~ \exists {\bm \pi} \in \Pi^{TP,\delta} ~\mbox{s.t.}~ \mathbf{c} \leq {\bf c}^{{\bm \pi}} \}. \end{eqnarray*} First we state a corollary to Theorem~\ref{thm:main-theorem} which characterizes the rates under different URLLC placement policies for systems with a threshold loss model for superposition/puncturing. \begin{corollary} \label{cor:main-theorem-th} Under a $(1-\delta)$ sharing factor and a minislot-homogeneous scheduler ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{H,\delta}$, the probability of induced eMBB loss for user $u\in {\cal U}$ in channel state $s \in {\cal S}$ is given by $$ \epsilon^{{\bm \pi},s}_u = 1-F_ D( \frac{\phi^{{\bm \pi},s}_u t^{s}_u(\phi^{{\bm \pi},s}_u)}{\gamma^{{\bm \pi},s}_u} ), $$ where $F_D$ denotes the cumulative distribution function of the URLLC demand in a typical eMBB slot. The associated user throughput is given by $$ r^{{\bm \pi},s}_{u} = \hat{r}^{s}_u \phi^{{\bm \pi},s}_{u} F_ D(\frac{\phi^{{\bm \pi},s}_u t^{s}_u(\phi^{{\bm \pi},s}_u)}{\gamma^{{\bm \pi},s}_u} ), $$ and the overall user throughputs are given by ${\bf c}^{{\bm \pi}}= ( c^{{\bm \pi}}_{u} : u \in {\cal U})$ where $$ c^{{\bm \pi}}_{u} = \sum_{s \in {\cal S}} \hat{r}^{s}_u \phi^{{\bm \pi},s}_{u} F_ D(\frac{\phi^{{\bm \pi},s}_u t^{s}_u(\phi^{{\bm \pi},s}_u)}{\gamma^{{\bm \pi},s}_u} ) p_S(s). $$ \end{corollary} The following two corollaries are direct consequences of Corollary \ref{cor:main-theorem-th} and Theorem~\ref{thm:capacity-theorem}, restricted to the RP and TP Placement strategies; they characterize the capacity regions under the two policies. \begin{corollary} \label{cor:rp} Consider a wireless system with a $(1-\delta)$ sharing factor and a minislot-homogeneous scheduler based on the RP URLLC Placement policy ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{RP,\delta}.$ Then any eMBB resource allocation $\bm{\phi}$ combined with the RP URLLC demand placement policy $\bm{\gamma}= \bm{\phi}$ is feasible. The probability of loss for user $u\in {\cal U}$ in channel state $s \in {\cal S}$ is given by $$ \epsilon^{{\bm \pi},s}_u = 1-F_ D( {t^{s}_u(\phi^{{\bm \pi},s}_u)} ), $$ with associated user throughput \begin{eqnarray} r^{{\bm \pi},s}_{u} &=& \hat{r}^{s}_u \phi^{{\bm \pi},s}_{u} F_ D({t^{s}_u(\phi^{{\bm \pi},s}_u)} ).
\label{eqn:RP-cor-rate} \end{eqnarray} Further, if for all $s\in {\cal S}$ and $u \in {\cal U}$ the functions $g^s_u (\cdot) $ given by \begin{eqnarray} g^s_u ( \phi^s_u ) &=& \phi^s_u F_ D({t^{s}_u(\phi^{s}_u)} ), \label{eqn:RP-cor-g} \end{eqnarray} are concave, then ${\cal C}^{RP,\delta}= \hat{\cal C}^{RP,\delta}.$ \end{corollary} \begin{corollary} \label{cor:tp} Under a $(1-\delta)$ sharing factor and a minislot-homogeneous scheduler based on the TP URLLC Placement policy ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{TP,\delta},$ the probability of induced eMBB loss for user $u\in {\cal U}$ in channel state $s \in {\cal S}$ is given by \begin{eqnarray} \epsilon^{{\bm \pi},s}_u &=& 1-F_ D( \sum_{u' \in {\cal U}} \phi^{{\bm \pi},s}_{u'} t^{s}_{u'}(\phi^{{\bm \pi},s}_{u'}) ), \label{eqn:tp-bound} \end{eqnarray} with associated user throughput \begin{eqnarray} r^{{\bm \pi},s}_{u} = \hat{r}^{s}_u \phi^{{\bm \pi},s}_{u} F_ D( \sum_{u' \in {\cal U}} \phi^{{\bm \pi},s}_{u'} t^{s}_{u'}(\phi^{{\bm \pi},s}_{u'}) ). \label{eqn:TP-cor-rate} \end{eqnarray} Further, if for all $s\in {\cal S}$ and $u \in {\cal U}$ the functions $g^s_u (\cdot,\cdot) $ given by \begin{eqnarray} g^s_u ( \phi^s_u, \gamma^s_u ) &=& \phi^s_u F_ D( \sum_{u' \in {\cal U}} \phi^{s}_{u'} t^{s}_{u'}(\phi^{s}_{u'}) ), \label{eqn:TP-cor} \end{eqnarray} are jointly concave, then ${\cal C}^{TP,\delta}= \hat{\cal C}^{TP,\delta}.$ \end{corollary} The following theorem provides a formal motivation for TP Placement. The main takeaway is that the {\em probability of any loss in an eMBB slot under the TP Placement policy is a lower bound for all other strategies.} Note that minimizing the probability of any eMBB loss is not the same as minimizing the eMBB rate loss. \begin{theorem} \label{thm:tp-optimality-theorem} Consider a system with a $(1-\delta)$ sharing factor, and a joint scheduling policy based on TP URLLC placement, i.e., ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{TP,\delta}.$ Then ${\bm \pi}$ achieves the minimum probability of any eMBB loss amongst all joint scheduling policies using the same eMBB resource allocation ${\bm \phi}^{\bm \pi}.$ \end{theorem} The proof is included in Appendix~H. Next we consider online algorithms that implement the RP and TP Placement policies. While the stochastic approximation algorithm developed in Section~\ref{sec:th-stoch-approx} can clearly be used, the additional structure imposed by the RP and TP Placement policies, and the shape of the threshold loss function (discussed below), can result in much simpler algorithms (with optimality guarantees). We consider the case where $t^s_u(\phi)$ is a (state-dependent but $\phi$-independent) constant, i.e., $t^s_u(\phi) = \alpha^s,$ where $\alpha^s \in (0,1).$ Intuitively, this means that eMBB traffic which has a higher share of the bandwidth is more resilient to losses (e.g. through coding over a larger fraction of resources). Then, by substituting this loss function in (\ref{eqn:RP-cor-rate}) and (\ref{eqn:TP-cor-rate}) (where we also use the fact that $\sum_{u \in {\cal U}} \phi^{s}_u = 1$), we have that $$ r^{{\bm \pi},s}_{u} = \hat{r}^{s}_u \phi^{s}_{u} F_D(\alpha^s).
$$ Comparing with the development in Section~\ref{sec:util-linear}, we observe that the cost and constraints are identical if $F_D(\alpha^s)$ replaces $(1 - \rho).$ A small difference is that $F_D(\alpha^s)$ is state-dependent, whereas $(1 - \rho)$ does not depend on the state; however, it is easy to see that the development in Section~\ref{sec:util-linear} immediately generalizes to this setting. Hence, we can interpret $F_D(\alpha^s)$ as the state-dependent average rate loss due to puncturing via the RP or TP Placement policies. We can now employ the rate-based iterative gradient scheduler developed in Section~\ref{sec:util-linear} (by replacing $(1 - \rho)$ in (\ref{eqn:lin-rate-update}) by the state-dependent $F_D(\alpha^s)$), and the theoretical guarantees directly carry over. As this algorithm only maximizes over users at each slot in (\ref{eqn:opt-search}), it is easier to implement than the stochastic approximation algorithm developed in Section~\ref{sec:th-stoch-approx}. \subsection{Stochastic approximation based online algorithm} \label{sec:th-stoch-approx} We first restate the utility maximization problem for minislot-homogeneous URLLC/eMBB scheduling policies: \begin{align} \underset{\bvec{\phi}, \bvec{\gamma} \in \Pi^{H,\delta}}{\max} \quad \sum_{u \in \mathcal{U}}& U_u\brac{\sum_{s \in \mathcal{S}} p_{\mathcal{S}} (s)g^s_u \brac{ \phi^s_u, \gamma^s_u } }. \end{align} Observe that the objective function is concave, because it consists of a sum of compositions of non-decreasing concave functions ($U_u(\cdot)$) with functions ($g^s_u \brac{ \cdot, \cdot }$) that are concave in $\bvec{\phi}$ and $\bvec{\gamma}$ (if Assumption~\ref{condn:conc-g} holds). Further, the constraint set is convex. Therefore, the above problem fits the framework of standard convex optimization problems. However, solving the above problem requires knowledge of all possible network states and their probability distribution, resulting in an {\em offline} optimization problem. In this section, we develop a stochastic approximation based online algorithm to solve the above problem. {\bf Online algorithm: } Let $\bvec{\overline{R}}(t-1):=\brac{\overline{R}_1(t-1), \overline{R}_2(t-1), \ldots, \overline{R}_u(t-1), \ldots, \overline{R}_{\abs{\mathcal{U}}}(t-1)}$ be the random vector denoting the average rates received by the eMBB users up to eMBB slot $t-1$ under our online algorithm, and let $ \overline{\bvec{r}}(t-1) $ denote a realization of $ \bvec{\overline{R}}(t-1)$. Let $s$ be the network state in slot $t$. Define vectors $\bvecgreek{\phi}^s:=\brac{\phi^s_u \mid u \in \mathcal{U}}$ and $\bvecgreek{\gamma}^s:=\brac{\gamma^s_u \mid u \in \mathcal{U} }$. At the beginning of eMBB slot $t$, we compute vectors $\brac{\bvecgreek{{\tilde{\phi}}} (t), \bvecgreek{{\tilde{\gamma}}} (t) }$ as the solution to the following optimization problem: \begin{align} \underset{{\phi^s}, {\gamma^s}}{\max} \quad \sum_{u \in \mathcal{U}}& U_u^{'}\brac{\overline{r}_u(t-1)} g^s_u ( \phi^s_u, \gamma^s_u ) , \label{eqn:opt-stoch-approx}\\ \text{s.t.} \quad \bvecgreek{\phi}^s & \geq \brac{1-\delta}\bvecgreek{\gamma}^s, \\ \sum_{u \in \mathcal{U}} \phi^s_u&=1 \mbox{ and } \sum_{u \in \mathcal{U}} \gamma^s_u=1, \\ \bvecgreek{\phi}^s &\in \sbrac{0,1}^{\abs{\mathcal{U}}} \mbox{ and } \bvecgreek{\gamma}^s \in \sbrac{0,1}^{\abs{\mathcal{U}}}. \end{align} This is a convex optimization problem and can be solved numerically using standard convex optimization techniques.
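A schematic implementation of the per-slot subproblem (\ref{eqn:opt-stoch-approx}) is sketched below using a generic solver; the log utilities and the linear-loss form of $g^s_u$ are our own assumptions for illustration, not choices made in the paper.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

U_N = 3                                 # number of eMBB users
delta, rho = 0.2, 0.3                   # sharing factor margin, E[D]
r_hat = np.array([4.0, 2.0, 1.0])       # peak rates in current state s
r_bar = np.array([1.0, 0.8, 0.5])       # average rates from feedback

U_prime = lambda r: 1.0 / np.maximum(r, 1e-6)    # assumed U(r) = log r
g = lambda phi, gam: r_hat * (phi - rho * gam)   # toy concave g^s_u

def neg_obj(x):
    phi, gam = x[:U_N], x[U_N:]
    return -np.sum(U_prime(r_bar) * g(phi, gam))

cons = [
    {"type": "eq",   "fun": lambda x: np.sum(x[:U_N]) - 1.0},
    {"type": "eq",   "fun": lambda x: np.sum(x[U_N:]) - 1.0},
    {"type": "ineq", "fun": lambda x: x[:U_N] - (1 - delta) * x[U_N:]},
]
x0  = np.full(2 * U_N, 1.0 / U_N)       # feasible starting point
sol = minimize(neg_obj, x0, method="SLSQP",
               bounds=[(0.0, 1.0)] * (2 * U_N), constraints=cons)
phi_t, gamma_t = sol.x[:U_N], sol.x[U_N:]
print(phi_t.round(3), gamma_t.round(3))
\end{verbatim}

The solution plays the role of $(\bvecgreek{\tilde{\phi}}(t), \bvecgreek{\tilde{\gamma}}(t))$ for the current slot.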
Using $\brac{\bvecgreek{{\tilde{\phi}}} (t), \bvecgreek{{\tilde{\gamma}}} (t) }$, we schedule URLLC and eMBB traffic as follows: {\bf The eMBB scheduler:} For notational ease, we fluidize the bandwidth. Specifically, we assume that the bandwidth of a resource block is very small when compared to the total bandwidth available. Hence, the bandwidth can be split into arbitrary fractions and we allocate a fraction $\tilde{\phi}_u (t)$ of the total bandwidth to eMBB user $u$. {\bf The URLLC scheduler:} We load the different eMBB users with URLLC traffic according to the vector ${\bvecgreek{\tilde{\gamma}}}(t)$. At the end of eMBB slot $t$, the eMBB scheduler receives feedback from the eMBB receivers indicating the rates received by the eMBB users. Let us denote the rate received by eMBB user $u$ in the slot by the random variable $R_u(t)$. We update $\overline{R}_u(t)$ as follows: \begin{equation} \label{eq:update_for_rbar} \overline{R}_u(t)= \brac{1-\epsilon_t}\overline{R}_u(t-1) + \epsilon_t R_u(t), \end{equation} where $\cbrac{\epsilon_t \mid t=1,2,3, \ldots}$ is a sequence of positive numbers which satisfies the following (standard) assumption: \begin{assumption} \label{eq:condition_on_epsilon} The averaging sequence $\{\epsilon_t \}$ satisfies: $$\sum_{t=1}^{\infty} \epsilon_t = \infty \quad \mbox{and} \quad \sum_{t=1}^{\infty} \epsilon_t^2 < \infty.$$ \end{assumption} Finally, we state the main result of this section, which is the optimality of the stochastic approximation based online algorithm. \begin{theorem} \label{th:optimality_of_stoch_approx} Let $\bvec{r}^*$ be the optimal average rate vector received by the eMBB users under the solution to the offline optimization problem. Suppose that Assumptions~\ref{eq:condition_on_epsilon} and~\ref{condn:conc-g} hold; then we have that: \begin{equation} \underset{t \rightarrow \infty}{\lim} \bvec{\overline{R}}(t) = \bvec{r}^* \quad \text{almost surely}. \end{equation} \end{theorem} The proof is available in Appendix~E. \section{Convex Model -- Minislot-Homogeneous Policies} \label{sec:convex_model} In this section we consider joint scheduling for wireless systems with convex superposition/puncturing loss models. This is a somewhat complex problem, so we will focus our attention on a restricted, but still rich, class of scheduling policies which we refer to as minislot-homogeneous eMBB/URLLC schedulers. We identify a key concavity requirement in Assumption~\ref{condn:conc-g} (which is satisfied by convex loss functions) that enables a stochastic approximation approach to utility maximizing scheduling. \subsection{Minislot-homogeneous eMBB/URLLC scheduling policies} \label{subsec:minislot_homogeneous} We define minislot-homogeneous eMBB/URLLC schedulers as follows. First, feasible eMBB allocations ${\bm \phi} \in \Sigma$ are restricted such that, for any eMBB slot in channel state $s \in {\cal S}$, the allocations are {\em minislot-homogeneous} across the minislots of the slot, i.e., $\phi_{u,m}^{s}=\phi_{u,1}^{s}$ for all $m \in {\cal M}$, and the user's overall allocation for the slot is given by $\phi_{u}^{s} = |{\cal M}| \phi_{u,1}^{s}.$ The set of minislot-homogeneous eMBB allocations is thus given by \begin{multline} \nonumber \Sigma^H := \left \{ \bvecgreek{\phi} \in \Sigma ~|~ \phi_{u,m}^{s} =\phi_{u,1}^s ~~ \forall u\in {\cal U}, \forall m \in {\cal M}, \forall s \in {\cal S}\right\}.
\end{multline} Second, URLLC demand placements per minislot are done proportionally based on pre-specified weights, and these weights are assumed to be homogeneous across minislots. In particular, such policies are parametrized by a weight matrix $\bvecgreek{\gamma} \in \Sigma^H$, where the induced load on user $u$ under channel state $s$ in minislot $m$ is given by $$ L_{u,m}^{s} = \frac{\gamma_{u,m}^s}{\sum_{u' \in {\cal U}} \gamma_{u',m}^s} D(m) = \frac{\gamma_{u,1}^s}{f} D(m). $$ We shall call $ \gamma_{u,1}^s$ the \emph{URLLC placement factor} for eMBB user $ u $ in state $ s $. The eMBB and URLLC allocations are coupled together since it must be the case that $L_{u,m}^{s} \leq \phi_{u,m}^{s} = \phi_{u,1}^s$ almost surely for all $u\in {\cal U}$, i.e., one cannot induce more superposition/puncturing load on a user than the resources it has been allocated on that slot. So the following condition must be satisfied: for all $m \in {\cal M}$ we have that $$ D(m) \leq \min_{u \in {\cal U}} \frac{\phi_{u,1}^{s}}{\gamma_{u,1}^s} f, \, \, \, \mbox{almost surely}. $$ Recall that $ f $ denotes the maximum URLLC load per minislot, so $D(m) \leq f$ almost surely; thus if $\frac{\phi_{u,1}^{s}}{\gamma_{u,1}^s} \geq 1$ the above condition will always hold. Yet if $ \phi_{u,1}^{s}\geq \gamma_{u,1}^s $ for all $ u $, then, since both allocations have the same per-minislot sum, we must have $ \phi_{u,1}^{s}= \gamma_{u,1}^s $, i.e., there is no flexibility to exploit careful placement of URLLC demands. Hence, we introduce the following assumption: \begin{assumption} \label{sharingfactor-assumption} We say the system has a $(1-\delta)$ URLLC sharing factor per minislot if $D(m) \leq f (1-\delta)$ almost surely for all $m \in {\cal M}$, where $ \delta \in \brac{0,1} $. \end{assumption} For any $\delta$, the above assumption implies that the {\em peak URLLC demand} in a minislot can be at most a fraction $ 1-\delta$ of the minislot capacity, which is lower than the maximum possible value of one. Such an assumption is reasonable, as we consider shared resources which are engineered to meet the peak URLLC loads while also serving eMBB traffic. Under a $(1-\delta)$ URLLC sharing factor, a minislot-homogeneous eMBB resource allocation ${\bm \phi}$ and URLLC allocation ${\bm \gamma}$ will be feasible if for all $s \in {\cal S}$ we have $$ (1-\delta) \leq \min_{u \in {\cal U}} \frac{\phi_{u,1}^{s}}{\gamma_{u,1}^s}, $$ which is satisfied as long as $(1-\delta) \gamma_{u,1}^s \leq \phi_{u,1}^{s}$ for all $u \in {\cal U}$. This motivates the following definition: \begin{definition} {\em For a system with a $(1-\delta)$ sharing factor, the feasible minislot-homogeneous eMBB/URLLC scheduling policies are parameterized by ${\bm \phi},{\bm \gamma} \in \Sigma^H$ such that $(1-\delta)\bm{\gamma} \leq {\bm \phi}$. We shall denote the set of such policies as follows: $$ \Pi^{H,\delta} := \{ ({\bm \phi},{\bm \gamma}) ~\mid~ {\bm \phi},{\bm \gamma} \in \Sigma^H ~\mbox{and}~ (1-\delta)\bm{\gamma} \leq {\bm \phi} \} , $$ where $\Pi^{H,\delta}$ is a convex set.} \end{definition} \subsection{Characterization of the throughput region} In this section we characterize the throughput region achievable under minislot-homogeneous scheduling.
\begin{theorem} \label{thm:main-theorem} For a system with a $(1-\delta)$ sharing factor and a minislot-homogeneous scheduler ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{H,\delta}$, the average induced throughput for user $u\in {\cal U}$ in channel state $s \in {\cal S}$ is given by $$ r^{\bm{\pi},s}_{u} = \E[ {f}^{s}_u ( \phi^{{\bm \pi},s}_{u}, \gamma^{\bm{\pi},s}_u D )], $$ and the overall average user throughputs are given by ${\bf c}^{{\bm \pi}}= ( c^{{\bm \pi}}_{u} \mid u \in {\cal U})$ where $ c^{{\bm \pi}}_{u} = \sum_{s \in {\cal S}} r^{{\bm \pi},s}_{u} p_S(s).$ \end{theorem} The proof is included in Appendix~B. Based on the above we can define the feasible throughput region constrained to the minislot-homogeneous policies in $\Pi^{H,\delta}.$ First let us define \begin{eqnarray*} {\cal C}^{H,\delta} & = & \{ \mathbf{c} \in \Reals_+^{|{\cal U}|} ~|~ \exists {\bm \pi} \in \Pi^{H,\delta} ~\mbox{s.t.}~ \mathbf{c} \leq {\bf c}^{{\bm \pi}} \} \end{eqnarray*} and let $\hat{\cal C}^{H,\delta}$ denote the convex hull of ${\cal C}^{H,\delta}.$ Note that rates in the convex hull are achievable through policies that time share/randomize amongst minislot-homogeneous scheduling policies in $\Pi^{H,\delta}.$ \begin{assumption} \label{condn:conc-g} For all $s\in {\cal S}$ and $u \in {\cal U}$ the functions $g^s_u ( \cdot,\cdot ) $ given by \begin{equation} \label{eqn:conc-condn} g^s_u ( \phi^s_u, \gamma^s_u ) = \E[ {f}^{s}_u ( \phi^{s}_{u}, \gamma^{s}_u D )], \end{equation} are jointly concave on $\Pi^{H,\delta}.$ \end{assumption} \begin{lemma} \label{lm:condn:examplesg} Assumption \ref{condn:conc-g} is satisfied for systems where the superposition/puncturing of each user is modelled via either a \begin{enumerate} \item convex loss function, or \item threshold loss function with fixed relative thresholds, i.e., $t^s_u(\phi^s_u) = \alpha^s_u$ for $\phi^s_u \in [0,1]$, provided the URLLC demand distribution $F_D(\cdot)$ is such that $F_D(\frac{1}{x})$ is concave in $x$ (satisfied by the truncated Pareto distribution). \end{enumerate} \end{lemma} The proof is included in Appendix~C. With this condition in place, we now describe the throughput region. \begin{theorem} \label{thm:capacity-theorem} Under Assumption~\ref{condn:conc-g} we have that ${\cal C}^{H,\delta}= \hat{\cal{C}}^{H,\delta}$. \end{theorem} The proof is available in Appendix~D. The above theorem implies that we do not have to consider time-sharing/randomization amongst minislot-homogeneous joint scheduling policies. Thus, with minislot-homogeneous policies and under the concavity of $ g_u^{s}\brac{\cdot,\cdot} $ from Assumption~\ref{condn:conc-g}, the above result sets up a convex optimization problem in $(\bvecgreek{\phi}, \bvecgreek{\gamma})$, i.e., we have a concave objective function with convex constraints. Thus, by iteratively updating $(\bvecgreek{\phi}, \bvecgreek{\gamma})$, we can develop an online scheduling algorithm that asymptotically maximizes the eMBB users' utility; this is the stochastic approximation algorithm developed in Section~\ref{sec:th-stoch-approx}. \section{Simulations} We consider a system with a total of 100 RBs available per eMBB slot, and with 8 minislots per eMBB slot. In an eMBB slot, $\hat{r}_u^s$ for an eMBB user is drawn from the finite set $\cbrac{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}$ Mbps according to a probability distribution, i.i.d. across users and slots. Our system consists of 20 users and 100 channel states (all equally likely).
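A minimal sketch of synthesizing this setup follows (assuming \texttt{numpy}; the per-class rate distributions are illustrative choices that match the $7$ and $3$ Mbps class averages described next, not necessarily the distributions used for the plots).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
num_states = 100
# Illustrative two-class construction: uniform on {4,...,10} has mean 7 Mbps
# ('robust' users below), uniform on {1,...,5} has mean 3 Mbps ('sensitive').
robust = rng.integers(4, 11, size=(10, num_states))
sensitive = rng.integers(1, 6, size=(10, num_states))
rates = np.vstack([robust, sensitive]).astype(float)  # 20 x 100 Mbps matrix
p_state = np.full(num_states, 1.0 / num_states)       # equally likely states
\end{verbatim}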
The ($20$ users $\times$ $100$ states) rate matrix is synthesized once by independently sampling a rate from the finite rate set for each matrix element. For $10$ of the eMBB users, we have chosen the probability distribution such that the average rate is $ 7 $ Mbps. For the rest, the probability distribution is such that the average rate is $ 3 $ Mbps. This models two classes of users: one class with higher link rates which can tolerate a higher amount of puncturing, and the other with lower link rates which can tolerate a smaller amount of puncturing. This is reasonable, as a user with a higher channel rate can code more robustly and protect its transmissions from URLLC puncturing better than a user with a lower channel rate. In this spirit we shall call users with $ 7 $ Mbps average rate `robust' users and users with $ 3 $ Mbps average rate `sensitive' users. We use the utility function $ U_u(r)=\log \brac{r} $ for all users. We first show that joint scheduling is necessary to preserve eMBB throughputs. To that end we benchmark our optimal online algorithm (the stochastic approximation algorithm, see Section~\ref{sec:th-stoch-approx}) for convex loss functions against a scheme which performs standard gradient based scheduling for eMBB users and Resource Proportional (RP) URLLC placement. Note that for convex loss functions, the RP placement strategy does not take into account the eMBB users' sensitivity to puncturing. For users with average rate $7$ Mbps, we use the loss function $ h_u^s(x)=x^2 $. For users with average rate $ 3 $ Mbps, we use the following loss function: \begin{equation} h_u^s(x)= \begin{cases} \brac{\frac{x}{0.7}}^2, & \mbox{if } x \leq 0.7, \\ 1, & \mbox{if } 0.7< x \leq 1. \end{cases} \end{equation} The URLLC demand in a minislot is drawn from a two-point distribution taking the value $ 0 $ with probability $ p $ and $ \frac{1-\delta}{8} $ with probability $ 1-p $ (so the aggregate demand in an eMBB slot is a scaled binomial random variable). Note that this ensures that the peak URLLC load in an eMBB slot is less than or equal to $ 1-\delta $. In Fig.~\ref{fig:convex_loss_utility}, we compare the average sum utility under our optimal joint scheduler and the RP based policy as a function of the URLLC load. As the load increases, RP performs poorly. To understand this phenomenon in detail, we have plotted the average rates of robust and sensitive users under the two policies in Fig.~\ref{fig:convex_loss_avg_rate}. As we increase the URLLC load, the average eMBB rates of both sensitive and robust users under RP decrease rapidly. For example, when $ \rho=0.4 $, RP has $15$\% lower throughput for robust users and nearly the same performance for sensitive users as compared to the optimal algorithm. Further, as we increase $ \rho $ to $ 0.6 $, the throughputs of robust and sensitive users under RP decrease by $ 35 $\% and $ 26 $\%, respectively. Sensitive users are the most affected by URLLC puncturing. When the RP URLLC placement policy is combined with the standard gradient based algorithm for eMBB users, it allocates more resources to sensitive users because they have higher marginal utility. Since sensitive users receive more bandwidth, under the RP URLLC placement strategy they also receive more puncturing. This leads to even more allocation of resources to sensitive users, and the process continues until robust users have marginal utilities similar (due to reduced rates) to those of the sensitive users. Hence, the robust users are resource starved.
As we increase the URLLC load further, sensitive users receive even more URLLC puncturing, and neither the robust nor the sensitive users get good average rates when compared to the optimal joint scheduler. This shows that we require joint scheduling of eMBB and URLLC to exploit the heterogeneity in sensitivities to URLLC puncturing when maximizing eMBB utilities. \begin{figure} \centering \includegraphics[height=2in,width=3in]{urrlc_rates_vs_load_quadratic_binomial_utility} \caption{Sum utility as a function of URLLC load $\rho$ for the optimal and RP policies under the convex model ($ \delta=0.3 $).} \label{fig:convex_loss_utility} \end{figure} \begin{figure} \centering \includegraphics[height=2in,width=3in]{urrlc_rates_vs_load_quadratic_binomial} \caption{Average rates as a function of URLLC load $\rho$ for the optimal and RP policies under the convex model ($ \delta=0.3 $).} \label{fig:convex_loss_avg_rate} \end{figure} \begin{figure} \centering \includegraphics[height=2in,width=3in]{threshold_july31} \caption{Sum utility as a function of URLLC load $\rho$ for the optimal and TP Placement policies under the threshold model ($\delta = 0.1$).} \label{fig:threshold_model} \end{figure} Next we consider a threshold based loss model with $\alpha^s =0.3$ for 50\% of the eMBB states and $\alpha^s =0.7$ for the rest. We use the utility function $U_u(r)=\log(r) + 6.5$ for all eMBB users, where $r$ is measured in Mbps (the constant is added to ensure non-negativity of the sum utility). The URLLC load in an eMBB slot ($D$) is generated from a truncated Pareto distribution with tail exponent $\eta=2$. We compare the optimal policy (the stochastic approximation algorithm, see Section~\ref{sec:th-stoch-approx}) with the TP Placement policy (the simpler gradient algorithm in Section~\ref{sec:threshold_model}). In this case, since the threshold functions are (state-dependent) constants, the RP and TP Placement policies coincide. As we can see in Figure~\ref{fig:threshold_model}, unlike in the convex loss model, the RP/TP Placement policy tracks the optimal policy very well. \begin{figure} \centering \includegraphics[height=2in,width=3in]{delay_utility_trade_off} \caption{Sum utility and mean URLLC delay as a function of $\delta$.} \label{fig:URLLC_cap_vs_emBB_util} \end{figure} \begin{figure} \centering \includegraphics[height=1.65in,width=3in]{deadline_violation_probability} \caption{Log-scale plot of the probability that URLLC traffic is delayed by more than two minislots (0.25 msec) for various values of $\delta.$ } \label{fig:URLLC_deadline_violation} \end{figure} In Figure~\ref{fig:URLLC_cap_vs_emBB_util}, we study the trade-off between achieving a higher eMBB utility and lowering the mean delay of URLLC traffic for different values of the sharing factor $1-\delta$. Figure~\ref{fig:URLLC_deadline_violation} plots the corresponding probability that the URLLC traffic delay exceeds two minislots ($0.125 \times 2 = 0.25$ msec). To study this trade-off we generate URLLC arrivals in each minislot from a uniform distribution on $\sbrac{0, 1/8}$ (recall there are 8 minislots). In each minislot, we can serve at most $\frac{1-\delta}{8}$ units of URLLC traffic. If the URLLC load in a given minislot is more than $\frac{1-\delta}{8}$, the remaining URLLC traffic is queued and served in the next minislot on a FCFS basis. For the eMBB users we use a convex model with $h_u^s(x) = e^{{\kappa_u\brac{x-1}}}$, where $\kappa_u$ determines the sensitivity of an eMBB user to the URLLC load.
We have chosen $\kappa_u=0.2$ for 50\% of the users and $\kappa_u=0.7$ for the rest. We also set $U_u(x)=\log(x) + 4.2$ for all $u$ (the constant is added to ensure a positive sum utility). In summary, a larger value of $\delta$ limits the amount of URLLC traffic that can be served in a minislot. However, a larger $\delta$ enlarges the constraint set $\Pi^{H,\delta}$ in the eMBB utility maximization problem, and hence we get a higher eMBB utility. \section{Conclusion} \label{sec:concl} In this paper, we have developed a framework and algorithms for the joint scheduling of URLLC (low latency) and eMBB (broadband) traffic in emerging 5G systems. Our setting considers recent proposals where URLLC traffic is dynamically multiplexed through puncturing/superposition of eMBB traffic. Our results show that this joint problem has structural properties that enable clean decompositions, and corresponding algorithms with theoretical guarantees. \subsection{Proof of Theorem~\ref{thm:theorem_on_erasure_channel_based_model}} \label{pf:theorem_on_erasure_channel_based_model} Clearly, since $\Pi^{LR} \subset \Pi$, we have that ${\cal C}^{LR} \subset {\cal C}$. Now consider any policy $\pi \in \Pi$ with eMBB user allocations ${\bm \phi}^{\pi}$ and URLLC loads $\overline{{\bf l}}^{\pi}$, whose associated long term throughput $\mathbf{c}^{\pi}$ is given by $$ c_u^\pi = \sum_{s\in {\cal S}} \hat{r}_{u}^{s}( \phi^{\pi,s}_{u} - \overline{l}_{u}^{\pi,s}) p_S(s). $$ Let us define a policy $\pi'$ based on $\pi$ with per minislot eMBB user allocations given by $$ \phi_{u,m}^{\pi',s} = \frac{\phi_{u}^{\pi,s}- \overline{l}_{u}^{\pi,s}} {\sum_{u' \in {\cal U}} \brac{\phi_{u'}^{\pi,s}- \overline{l}_{u'}^{\pi,s}}} f = \frac{\phi_{u}^{\pi,s}- \overline{l}_{u}^{\pi,s}}{ 1- \rho} f, $$ for $s \in {\cal S},$ $u \in {\cal U}$ and $m \in {\cal M}.$ Since the induced mean loads on an eMBB user cannot exceed its allocation, we have that ${\bm \phi}^\pi \geq \overline{{\bf l}}^\pi$, so the above allocations are nonnegative. Note also that this allocation is not minislot dependent, and is normalized so that per minislot the allocations sum to $f$ and over the whole eMBB slot they sum to $1$, i.e., ${\bm \phi}^{\pi'} \in \Sigma$. Thus for such an allocation we have that $$ \phi_{u}^{\pi',s} = \frac{\phi_{u}^{\pi,s}-\overline{l}_{u}^{\pi,s}}{ 1- \rho}. $$ Also suppose that $\pi'$ uses randomized URLLC placement across minislots which induces mean URLLC loads proportional to the allocations, i.e., $\overline{l}^{\pi',s}_{u} = \rho \phi^{\pi',s}_{u}$. It follows that \begin{eqnarray*} \phi^{\pi',s}_{u}- \overline{l}^{\pi',s}_{u} & = & \phi^{\pi',s}_{u} - \rho \phi^{\pi',s}_{u} \\ & = & (1-\rho) \phi^{\pi',s}_{u} \\ & =& \phi_{u}^{\pi,s}-\overline{l}^{\pi,s}_{u}, \end{eqnarray*} and so $c_u^{\pi,s}= c_u^{\pi',s}$ for all $s \in {\cal S}$ and $u \in {\cal U}.$ Thus for any policy $\pi$ there is a policy $\pi'$ which uses randomized URLLC placement and achieves the same long term throughputs. It follows that ${\cal C} \subset {\cal C}^{LR}$ and so $ {\cal C} = {\cal C}^{LR}.$ \subsection{Proof of Theorem~\ref{thm:main-theorem}} \label{pf:main-theorem} Under a policy ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{H,\delta}$ the induced loads are given by $$ L_{u,m}^{{\bm \pi},s} = \frac{\gamma_{u,1}^{{\bm \pi},s}}{f} D(m), $$ so we have that $$ L_{u}^{{\bm \pi},s} = \sum_{m \in {\cal M}} L_{u,m}^{{\bm \pi},s} = \frac{\gamma_{u,1}^{{\bm \pi},s}}{f} \sum_{m \in {\cal M}} D(m) = \frac{\gamma_{u,1}^{{\bm \pi},s}}{f} D = \gamma_{u}^{{\bm \pi},s} D,
$$ where the last equality follows from the uniformity of the URLLC splits and the normalization ($f = 1/|{\cal M}|$). It follows that $$ r^{{\bm \pi},s}_{u} = \E[ {f}^{s}_u ( \phi^{{\bm \pi},s}_{u}, L^{{\bm \pi},s}_u)] = \E[ {f}^{s}_u ( \phi^{{\bm \pi},s}_{u}, \gamma^{{\bm \pi},s}_u D )]. $$ \subsection{Proof of Lemma~\ref{lm:condn:examplesg}} \label{pf:condn:examplesg} Recall that convex loss functions are specified as follows: $$ f_{u}^{s}(\phi_{u}^{s},l_{u}^{s}) = \hat{r}_{u}^{s} \phi_{u}^{s} \brac{1- h^s_u \brac{ \frac{l_{u}^{s}}{\phi_{u}^{s}}}}, $$ with $h^s_u: [0,1] \rightarrow [0,1]$ a convex increasing function. For minislot-homogeneous policies we have defined \begin{eqnarray*} g^s_u ( \phi^s_u, \gamma^s_u ) & = & \E[ {f}^{s}_u ( \phi^{s}_{u}, \gamma^{s}_u D )] \\ & = & \hat{r}_{u}^{s} \E[ \phi_{u}^{s} - \phi_{u}^{s} h^s_u ( \frac{\gamma^{s}_u}{\phi^{s}_u} D )]. \end{eqnarray*} Recall that for a convex function $h(\cdot)$ one can define the function $ l(\phi, \gamma) = \phi h (\frac{\gamma}{\phi})$, known as the perspective of $h(\cdot)$, which is known to be jointly convex in its arguments. It follows that $ \phi-\phi h (\frac{\gamma}{\phi})$ is jointly concave, and so is $g^s_u(\cdot,\cdot)$, since it is a weighted aggregation of jointly concave functions. For threshold-based loss functions where $t^s_u(\phi^s_u) = \alpha^s_u$ we have that \begin{eqnarray*} g^s_u ( \phi^s_u, \gamma^s_u ) & = & \E[ {f}^{s}_u ( \phi^{s}_{u}, \gamma^{s}_u D )] \\ & = & \hat{r}_{u}^{s} \phi^{s}_u P(\gamma^s_u D \leq \phi_u^{s} \alpha^s_u )\\ & = & \hat{r}_{u}^{s} \phi^{s}_u F_D \brac{ \frac{\phi_u^{s} \alpha^s_u}{\gamma^s_u}}. \end{eqnarray*} Now, using the concavity of $F_D(\frac{1}{x})$ together with the same result on perspective functions, the result follows. The truncated Pareto case can be easily verified by taking derivatives. \subsection{Proof of Theorem~\ref{thm:capacity-theorem}} \label{pf:capacity-theorem} Clearly ${\cal C}^{H,\delta} \subset {\cal \hat{C}}^{H,\delta}.$ We will show that if ${\bf c} \in {\cal \hat{C}}^{H,\delta}$ then there exists ${\bm \pi} = ({\bm \phi}^{\bm \pi},{\bm \gamma}^{\bm \pi}) \in \Pi^{H,\delta}$ such that ${\bf c} \leq {\bf c}^{{\bm \pi}}$, from which it follows that ${\cal \hat{C}}^{H,\delta} \subset {\cal C}^{H,\delta}.$ Suppose ${\bf c} \in {\cal \hat{C}}^{H,\delta}$; then it can be represented as a convex combination of policies in $\Pi^{H,\delta}$, in each channel state. For example, suppose for simplicity that in channel state $s \in {\cal S}$ one time shares, with weight $\lambda \in [0,1]$, between two policies ${\bm \pi}_1$ and ${\bm \pi}_2$ to achieve throughputs for $u \in {\cal U}$ given by $$ r^s_u = \lambda {r}^{{\bm \pi}_1,s}_u + (1- \lambda) {r}^{{\bm \pi}_2,s}_u. $$ For each user $u$ we have \begin{eqnarray*} \lefteqn{ r^{s}_{u} = \lambda r^{{\bm \pi}_1,s}_u + (1- \lambda) r^{{\bm \pi}_2,s}_u } \\ & =& \lambda g^s_u (\phi^{{\bm \pi}_1,s}_{u},{\gamma^{{\bm \pi}_1,s}_u}) + (1- \lambda) g^s_u (\phi^{{\bm \pi}_2,s}_{u},{\gamma^{{\bm \pi}_2,s}_u}) \\ & \leq & g^s_u (\lambda \phi^{{\bm \pi}_1,s}_{u} + (1-\lambda) \phi^{{\bm \pi}_2,s}_{u},~ \lambda \gamma^{{\bm \pi}_1,s}_{u} + (1-\lambda) \gamma^{{\bm \pi}_2,s}_{u} ) \\ & = & g^s_u (\phi^{{\bm \pi},s}_{u}, \gamma^{{\bm \pi},s}_{u} ), \end{eqnarray*} where the inequality follows from the joint concavity of $g^s_u(\cdot,\cdot)$ (Assumption~\ref{condn:conc-g}), and where $\phi^{{\bm \pi},s}_{u}= \lambda \phi^{{\bm \pi}_1,s}_{u} + (1- \lambda) \phi^{{\bm \pi}_2,s}_{u}$ and $\gamma^{{\bm \pi},s}_{u} =\lambda \gamma^{{\bm \pi}_1,s}_{u} + (1- \lambda) \gamma^{{\bm \pi}_2,s}_{u}$.
Clearly ${\bm \phi}^{\bm \pi}, {\bm \gamma}^{\bm \pi}$ as given above correspond to a policy $\bm{\pi} \in \Pi^{H,\delta}$, since the set is convex. It also follows that $ r^{s}_{u} \leq r^{{\bm \pi},s}_u $ for every state; summing over states, $c_u \leq {c}^{{\bm \pi}}_u$ and so ${\bf c} \leq {\bf c}^{{\bm \pi}}.$ \subsection{Proof of Theorem~\ref{th:optimality_of_stoch_approx}} \label{pf:optimality_of_stoch_approx} The proof requires intermediate lemmas, detailed below. For ease of exposition, let us define $U(\bvec{r}):=\sum_{u \in \mathcal{U}} U_u(r_u)$ and $\nabla U\brac{\bvec{r}}:= \brac{U_1^{'}(r_1), U_2^{'}(r_2), \ldots, U_{\abs{\mathcal{U}}}^{'}(r_{\abs{\mathcal{U}}})}^T$. First we have the following important lemma regarding the stochastic approximation algorithm. \begin{lemma} \label{lm:unbiasedness} $\bvec{R}(t)=\brac{R_1(t), R_2(t), \ldots, R_{\abs{\mathcal{U}}}(t)}^T$ is an unbiased estimator of $\underset{\bvec{c} \in \mathcal{C}^{H, \delta}}{\text{argmax: }} \nabla U\brac{\bvec{\overline{R}(t)}}^T \bvec{c}$, i.e., \begin{equation} \label{eq:unbiasedness} \expect{\bvec{R}(t)}= \underset{\bvec{c} \in \mathcal{C}^{H, \delta}}{\text{argmax: }} \nabla U\brac{\bvec{\overline{R}(t)}}^T \bvec{c}. \end{equation} \end{lemma} \begin{proof} Based on the definition of $\mathcal{C}^{H, \delta}$ we can re-write $\underset{\bvec{c} \in \mathcal{C}^{H, \delta}}{\text{max: }} \nabla U\brac{\bvec{\overline{R}(t)}}^T \bvec{c}$ as follows: \begin{align} \underset{\bvec{\phi}, \bvec{\gamma}}{\max} \quad \sum_{u \in \mathcal{U}}& U^{'}_u\brac{\overline{R}_u(t)}\brac{\sum_{s \in \mathcal{S}} p_{\mathcal{S}} (s)g^s_u \brac{ \phi^s_u, \gamma^s_u } }, \\ \text{s.t.} \quad \brac{\bvec{\phi}, \bvec{\gamma}} & \in \Pi^{H,\delta}. \end{align} Observe that the above optimization problem can be solved separately for each network state $s \in \mathcal{S}$. The de-coupled problem for any state $s$ is the same as the optimization problem~\eqref{eqn:opt-stoch-approx} in our online algorithm. With a slight abuse of notation, let $\brac{\bvecgreek{\tilde{\phi}}(s), \bvecgreek{\tilde{\gamma}}(s)}$ be the optimal solution to the online problem when $S(t)=s$. Conditioned on $S(t)=s$, we have that: \begin{multline} \expect{R_u(t) \mid S(t)=s}=\expect{f_u^s\brac{\tilde{\phi}_u^s, \tilde{\gamma}_u^s D} \mid S(t)=s}\\ =g^s_u \brac{ \tilde{\phi}^s_u, \tilde{\gamma}^s_u } \quad \forall u \in \mathcal{U}. \end{multline} Computing $\expect{\expect{R_u(t) \mid S(t)}}$ gives the desired result~\eqref{eq:unbiasedness}. \end{proof} The main intuition behind the proof of optimality is that for large $t$, the trajectories of $\overline{\bvec{R}}(t)$ can be approximated by the solution to the following differential equation in $\bvec{x}(t)$ with continuous time $t$: \begin{equation} \label{eq:differential_equation} \frac{d \bvec{x}(t)}{dt}= \underset{\bvec{c} \in \mathcal{C}^{{H}, \delta}}{\text{argmax: }} \nabla U\brac{\bvec{x}(t)}^T \bvec{c} - \bvec{x}(t). \end{equation} Let us define $q(\bvec{x}):= \underset{\bvec{c} \in \mathcal{C}^{{H}, \delta}}{\text{argmax: }} \nabla U\brac{\bvec{x}}^T \bvec{c}$. To show the optimality of our online algorithm, we shall also require the following result on the above differential equation.
\begin{lemma} \label{lm:lemma_on_differetial_equation} The differential equation~\eqref{eq:differential_equation} is globally asymptotically stable. Furthermore, for any initial condition $\bvec{x}(0) \in \mathcal{C}^{{H}, \delta}$, we have that $\lim_{t \rightarrow \infty} \bvec{x}(t)= \bvec{r}^*$. \end{lemma} \begin{proof} To prove this lemma it is enough to show that there exists a Lyapunov function $L(\bvec{x}(t))$ which has a negative drift when $\bvec{x}(t)\neq \bvec{r}^*$ and zero drift when $\bvec{x}(t)=\bvec{r}^*$. Define $L(\bvec{x})= U(\bvec{r}^*)- U(\bvec{x})$. Observe that under our assumption of strictly concave $U_u(\cdot)$, the offline optimization problem is guaranteed to have a unique optimal solution, which is $\bvec{r}^*$. Therefore, $L(\bvec{x}) > 0$ for all $\bvec{x} \in \mathcal{C}^{{H}, \delta}$ with $\bvec{x} \neq \bvec{r}^*$. Next we compute the drift of $L(\bvec{x}(t))$ with respect to time: \begin{align} \frac{d L(\bvec{x}(t)) }{dt} &= - \nabla U\brac{\bvec{x}(t)}^T \frac{d \bvec{x}(t)}{dt}, \\ &= - \nabla U\brac{\bvec{x}(t)}^T q\brac{\bvec{x}(t)} + \nabla U\brac{\bvec{x}(t)}^T\bvec{x}(t), \label{eq:inequality_of_lyapunov1}\\ & < 0 \quad \quad \forall \bvec{x}(t) \neq \bvec{r}^*. \label{eq:inequality_of_lyapunov} \end{align} To get inequality~\eqref{eq:inequality_of_lyapunov}, first observe that from the definition of $q(\bvec{x}(t))$ and~\eqref{eq:inequality_of_lyapunov1}, we get that $\frac{d L(\bvec{x}(t)) }{dt} \leq 0$. However, we have to show that this inequality is strict for $\bvec{x}(t) \neq \bvec{r}^*$. Observe that $q(\bvec{x})=\bvec{x}$ is a necessary and sufficient condition for optimality in the offline optimization problem; see~\cite{boyd_book} for more details. From the strict concavity of the utility functions, we have a unique optimal point $\bvec{r}^*$. Therefore, $\frac{d L(\bvec{x}(t)) }{dt} < 0$ for $\bvec{x}(t) \neq \bvec{r}^*$ and $\frac{d L(\bvec{x}(t)) }{dt}=0$ at $\bvec{x}(t)=\bvec{r}^*$. \end{proof} To conclude the proof, Lemmas~\ref{lm:unbiasedness} and~\ref{lm:lemma_on_differetial_equation}, along with Assumption~\ref{eq:condition_on_epsilon}, satisfy all the conditions necessary to apply Theorem~2.1 in Chapter~5 of~\cite{Kushner_book}, which states that $\bvec{\overline{R}}(t)$ converges to $\bvec{r}^*$ almost surely. \subsection{Proof of Theorem~\ref{thm:optimality_of_minislot_homogeneous_policy}} \label{pf:optimality_of_minislot_homogeneous_policy} The proof has the following two steps. \begin{enumerate} \item We first consider a hypothetical \emph{non-causal} scenario and show that there exists an optimal joint scheduling policy with a minislot-homogeneous URLLC placement policy which, in general, is a function of the aggregate URLLC load in an eMBB slot. We then upper bound the optimal value of $\mathcal{OP}_1$ by the solution to this hypothetical non-causal scenario, described in the sequel. \item Secondly, under Assumption~\ref{asm:assumption_on_convex_cost_functions} on the loss functions, we show that there exists a URLLC placement policy for the hypothetical non-causal scenario which is minislot-homogeneous and independent of the aggregate URLLC load. We then conclude that there exists an optimal minislot-homogeneous joint scheduling policy for $\mathcal{OP}_1$, as an upper bound on its value is attained by a minislot-homogeneous joint scheduling policy. \end{enumerate} The two steps are elaborated next. \subsubsection{Hypothetical non-causal scenario} First let us describe the \emph{non-causal} scenario.
At the beginning of each eMBB slot, the scheduler first chooses $\phi^{\pi, s}$. Next, the total URLLC demand in each minislot is revealed, i.e., the realizations of $D(1), D(2), \ldots, D({\abs{\mathcal{M}}})$ are revealed. Therefore, this setting is not causal, as it assumes knowledge of future URLLC demand realizations. In general, the URLLC placement under the non-causal setting is dependent on the minislot index $m$ and $\bvec{D}^{\brac{1:\abs{\mathcal{M}}}}$. With slight abuse of notation, we shall denote it by $\gamma_{u,m}^s\brac{\bvec{D}^{\brac{1:\abs{\mathcal{M}}}}}$. The joint scheduling policy has to satisfy the constraints~\eqref{eq:constraint_on_phi},~\eqref{eq:constraint_on_gamma_1}, and~\eqref{eq:constraint_on_gamma_2}. We have the following lemma on the \emph{non-causal} setting. \begin{lemma} \label{lm:lemma_on_non_causal} There exists an optimal minislot-homogeneous policy for the non-causal setting such that the URLLC placement depends only on the total URLLC demand in an eMBB slot, i.e., on $\sum_{m=1}^{\abs{\mathcal{M}}}D(m)$. \end{lemma} \begin{proof} Let $\brac{\tilde{\phi}^{\pi}, \tilde{\gamma}^{\pi, s}\brac{ \cdot}}$ be the decision variables under an optimal joint scheduling policy $\pi$ in the non-causal setting. Let $d(1)$, $d(2), \ldots, d({\abs{\mathcal{M}}})$ be realizations of $D(1), D(2), \ldots, D({\abs{\mathcal{M}}})$ such that $\sum_{m=1}^{\abs{\mathcal{M}}} d(m) =d$. Define the following: \begin{equation} \nu_u^s:=\frac{\sum_{m=1}^{\abs{\mathcal{M}}} \tilde{\gamma}_{u,m}^{\pi, s}\brac{ \bvec{d}^{\brac{1:\abs{\mathcal{M}}}}} d(m)}{d}. \end{equation} Note that with this definition of $\nu_u^s$, the total puncturing experienced by eMBB user $u$ in the eMBB slot is $\nu_u^s d$. From this one can construct an equivalent minislot-homogeneous URLLC placement policy: for all minislots, use $\bvecgreek{\nu}^s$ as the URLLC placement factors. This satisfies the constraints~\eqref{eq:constraint_on_phi},~\eqref{eq:constraint_on_gamma_1}, and~\eqref{eq:constraint_on_gamma_2}. In general $\bvecgreek{\nu}^s$ could depend on $d(1),\, d(2), \, \ldots, d({\abs{\mathcal{M}}})$. However, we will show that the optimal solution depends only on the sum $\sum_{m=1}^{\abs{\mathcal{M}}} d(m)$. Let $d'(1), \, d'(2), \ldots,\, d'({\abs{\mathcal{M}}})$ be such that $\sum_{m=1}^{\abs{\mathcal{M}}}d'(m)=d$ and there exists an $m$ such that $d'(m)\neq d (m)$. Define the following: \begin{equation} \nu_u'^s:=\frac{\sum_{m=1}^{\abs{\mathcal{M}}} \tilde{\gamma}_{u,m}^{\pi, s}\brac{\bvec{d}'^{\brac{1:\abs{\mathcal{M}}}}} d'(m)}{d}. \end{equation} The corresponding total puncturing experienced by eMBB user $u$ is $\nu_u'^s d$. Observe that $\bvecgreek{\nu}'^s$ is also a feasible URLLC placement policy for the case when the URLLC demand realizations are $d(1)$, $d(2), \, \ldots, \, d({\abs{\mathcal{M}}})$. Similarly, $\bvecgreek{\nu}^s$ is also a feasible URLLC placement policy for the case with $d'(1)$, $d'(2), \, \ldots, \, d'({\abs{\mathcal{M}}})$. Therefore, without loss of optimality, the URLLC placement can be taken to be independent of the individual realizations of $D(1), \, D(2), \ldots, D({\abs{\mathcal{M}}})$ and to depend only on the sum $\sum_{m=1}^{\abs{\mathcal{M}}}D(m)$. \end{proof} Therefore, we shall restrict ourselves to minislot-homogeneous policies in the non-causal setting with the URLLC placement a function of the total URLLC demand for that eMBB slot. With slight abuse of notation, we shall denote a URLLC placement policy in this setting by $\gamma_u^s \brac{\cdot}$, with the only argument being the total URLLC demand in that eMBB slot. This procedure is formally described next.
\begin{enumerate} \item At the beginning of an eMBB slot, the joint scheduler chooses $\phi_u^{\pi, s}, u \in \mathcal{U}$ such that \begin{equation} \label{eq:constraint_on_phi_hyp} \sum_{u \in \mathcal{U}} \phi^{\pi, s}_u =1 \mbox{ and } \phi^{\pi, s}_u \in \sbrac{0,1} \quad \forall u. \end{equation} \item The total URLLC demand $D=\sum_{m=1}^{\abs{\mathcal{M}}} D(m)$ in that eMBB slot is revealed. \item For an URLLC demand of $D$, $\gamma_u^{\pi, s}(D)$ is chosen such that \begin{equation} \sum_{u \in \mathcal{U}}\gamma_u^{\pi, s}(D)=1, \quad \mbox{and} \quad \gamma_u^{\pi, s}(D) \in \sbrac{0,1}. \end{equation} \end{enumerate} Let us denote the set of feasible policies for this hypothetical non-causal scenario by $\Pi^{\dagger}$. $\brac{\bvecgreek{\phi^{\pi, s}}, \bvecgreek{\gamma^{\pi, s}} }$ is chosen as the solution to the following optimization problem: \begin{equation} \mathcal{OP}_2 : \quad \underset{\pi \in \Pi^{\dagger}}{\max}: \sum_{u \in \mathcal{U}} w_u g_u^{\pi, s}\brac{\phi_u^{\pi, s}, \gamma_u^{\pi, s}\brac{\cdot}}, \end{equation} where $g_u^{\pi, s}\brac{\phi_u^{\pi, s}, \gamma_u^{\pi, s}\brac{\cdot}} = r_u^s \phi_u^{\pi, s} \expect{{1-h_u^s\brac{\frac{\gamma_u^{\pi, s}(D)D}{\phi_u^{\pi, s}}} }}$. We have the following important lemma, which states that the optimal value under the non-causal scenario is an upper bound on the optimal value over causal and minislot-dependent policies. \begin{lemma} \label{lm:lemma_on_upper_bound} \begin{multline} \underset{\pi \in \Pi^{\dagger}}{\max}: \sum_{u \in \mathcal{U}} w_u g_u^{\pi, s}\brac{\phi_u^{\pi, s}, \gamma_u^{\pi, s}\brac{\cdot}} \\ \geq \underset{\pi \in \tilde{\Pi}}{\max}: \sum_{u \in \mathcal{U}} w_u g_u^{\pi, s}\brac{\phi_u^{\pi, s}, \bvecgreek{\gamma}_{u}^{\pi, s}}. \end{multline} \end{lemma} \begin{proof} This directly follows from the proof of Lemma~\ref{lm:lemma_on_non_causal}, where we have shown that any URLLC placement policy $\bvecgreek{\gamma}_{u}^{\pi, s}$ can be transformed into a minislot-homogeneous policy which depends only on the total URLLC demand in an eMBB slot and induces the same total puncturing; hence, any feasible solution for $\mathcal{OP}_1$ induces a feasible solution for $\mathcal{OP}_2$ with the same value. \end{proof} \subsubsection{Existence of an optimal solution independent of the value of $ D $} In general, the optimal URLLC placement policy under $\mathcal{OP}_2$ may depend on the total URLLC demand in an eMBB slot. However, under Assumption~\ref{asm:assumption_on_convex_cost_functions} it is independent of the total URLLC demand. This is stated formally in the following lemma. \begin{lemma} \label{lm:optimality_on_lemma_on_upper_bound} Under Assumption~\ref{asm:assumption_on_convex_cost_functions}, there exists an optimal solution $\brac{\bvec{\phi}^{*, s}, \bvec{\gamma}^{*, s}\brac{\cdot}}$ for $\mathcal{OP}_2$ with a URLLC placement policy ($\bvec{\gamma}^{*, s}\brac{\cdot}$) independent of $D$. \end{lemma} \begin{proof} If $\brac{\bvec{\phi}^{*, s}, \bvec{\gamma}^{*, s}\brac{\cdot}}$ is an optimal solution to $\mathcal{OP}_2$, then $\bvec{\gamma}^{*, s}\brac{\cdot}$ must also be an optimal solution to the following optimization problem in $\bvecgreek{\gamma}^s:=\brac{\gamma_1^s(\cdot), \gamma_2^s(\cdot), \ldots, \gamma_{\abs{\mathcal{U}}}^s(\cdot) }$.
\begin{align} \underset{\bvecgreek{\gamma}^s}{\max} \quad \sum_{u \in \mathcal{U}}& w_u g^s_u ( \phi^{*, s}_u, \gamma^s_u\brac{\cdot} ) , \\ \text{s.t.} \quad \phi_u^{*, s} & \geq \brac{1-\delta}\gamma_u^{s}(d) \quad \forall u,\, d, \\ \sum_{u \in \mathcal{U}} \gamma^{s}_u(d) &=1 \mbox{ and } \gamma_u^s(d) \in \sbrac{0,1} \quad \forall u,\, d. \end{align} For any $d$ and $u$, from the K.K.T. conditions for the above optimization problem, we have that \begin{equation} \label{eq:KKT_1} -w_u r_u^s d^p h_u^{s'}\brac{\frac{\gamma_u^{*, s}(d) }{\phi_u^{*, s}}} + \beta(d) + \eta_u(d) -\nu_u(d) -\lambda_u(d)=0, \end{equation} where $h_u^{s'}(x) = \frac{d h_u^{s}(y)}{dy}\Bigr|_{y=x}$, $\beta(d)$ is an arbitrary constant (a function of $d$), and $\eta_u(d)$, $\nu_u(d)$ and $\lambda_u(d)$ are constants such that \begin{align} \lambda_u(d) \brac{{\phi}_u^{*, s} - \brac{1-\delta}{\gamma}_u^{*, s}(d)}= 0 \quad & \text{and } \quad \lambda_u(d) \geq 0 \quad \forall u, \\ \eta_u(d) {\gamma}_u^{*, s} (d) =0 \quad & \text{and} \quad \eta_u(d) \geq 0 \quad \forall u, \\ \nu_u(d) \brac{1-{\gamma}_u^{*, s}(d)} =0 \quad & \text{and} \quad \nu_u(d) \geq 0 \quad \forall u. \end{align} Note that we have used the chain rule together with the fact that, for a homogeneous loss function of degree $p$, the derivative satisfies $ h_u^{s'} (dx)=d^{p-1} h_u^{s'}(x) $, which yields the overall factor $d^p$ above. For any $\tilde{d} \neq d$, if we choose $\beta(\tilde{d})=\beta(d)\frac{\tilde{d}^p}{d^p}$, $\eta_u(\tilde{d})= \eta_u(d)\frac{\tilde{d}^p}{d^p} $, $\nu_u(\tilde{d})=\nu_u(d) \frac{\tilde{d}^p}{d^p} $, and $\lambda_u(\tilde{d})= \lambda_u(d) \frac{\tilde{d}^p}{d^p}$, then from~\eqref{eq:KKT_1} $\gamma_u^{*, s} (d)$ and $\phi_u^{*, s}$ satisfy the K.K.T. condition for $ \tilde{d} $: \begin{equation} -w_u r_u^s \tilde{d}^p h_u^{s'}\brac{\frac{\gamma_u^{*, s}(d)}{\phi_u^{*, s}}} + \beta(\tilde{d}) + \eta_u(\tilde{d}) -\nu_u(\tilde{d}) -\lambda_u(\tilde{d})=0. \end{equation} Hence, $\gamma_u^{*, s} (d)$ and $\phi_u^{*, s}$ are optimal for $\tilde{d}$ too, and we have constructed an optimal solution with a URLLC placement policy independent of $D$. \end{proof} We have shown in Lemma~\ref{lm:optimality_on_lemma_on_upper_bound} that there exists an optimal policy $\brac{\bvec{\phi^{*, s}}, \bvec{\gamma^{*, s}}}$ for $\mathcal{OP}_2$ which is minislot-homogeneous and independent of the realization of $D$. In Lemma~\ref{lm:lemma_on_upper_bound}, we have also shown that the optimal value of $\mathcal{OP}_2$ is an upper bound for that of $\mathcal{OP}_1$. Hence, there exists a minislot-homogeneous policy which achieves an upper bound for $\mathcal{OP}_1$. Therefore, there exists a minislot-homogeneous policy which is optimal for $\mathcal{OP}_1$. \subsection{Proof of Theorem~\ref{thm:horizontal_vs_vertical}} \label{pf:horizontal_vs_vertical} Let $\mathcal{S}_k$ be the set of all subsets with $k$ elements chosen from the set $\cbrac{1,2, \ldots, m_1+m_2}$. For example, if $m_1+m_2=3$ and $k=2$, then $\mathcal{S}_k =\cbrac{\cbrac{1,2}, \cbrac{2,3}, \cbrac{1,3}}$. Note that $\abs{\mathcal{S}_k}= \binom{m_1+m_2}{k}$. Using the above definitions, we can re-write the R.H.S. of~\eqref{eq:result_on_vertical_vs_horizontal} as follows: \begin{multline} \expect{h_1^s \brac{\sum_{m=1}^{m_1 + m_2 } \phi_1 D(m)}} \\ = \expect{h_1^s\brac{ \frac{1}{ \binom{m_1 + m_2}{m_1}} \sum_{q \in \mathcal{S}_{m_1}}\brac{\sum_{m \in q} D(m)}}}, \end{multline} since each index $m$ appears in a fraction $\phi_1 = \frac{m_1}{m_1+m_2}$ of the $m_1$-element subsets. Applying Jensen's inequality to the R.H.S.
of~\eqref{eq:result_on_vertical_vs_horizontal}, we have that \begin{multline} \expect{h_1^s \brac{\sum_{m=1}^{m_1 + m_2} \phi_1 D(m)}} \\ \leq \frac{1}{ \binom{m_1 + m_2}{m_1}} \sum_{q \in \mathcal{S}_{m_1}} \expect{h_1^s\brac{ {\sum_{m \in q} D(m)}}}. \end{multline} Since the $D(m)$'s are i.i.d., each term on the R.H.S. of the above expression is the same as the L.H.S. of~\eqref{eq:result_on_vertical_vs_horizontal}. Hence, proved. \subsection{Proof of Theorem~\ref{thm:tp-optimality-theorem}} \label{pf:tp-optimality-theorem} Clearly the probability of loss depends on the minislot demands and the users' thresholds. If one relaxes the sequential constraint on URLLC allocations, one can consider aggregating the minislot demands and pooling together the users' superposition/puncturing thresholds. The probability of loss for this relaxed system is simply the probability that the demand exceeds the size of the superposition/puncturing pool, i.e., the probability of loss under the pooled resources is given by $$ P\Big(D \geq \sum_{u \in {\cal U}} \phi_{u}^{s} t_{u}^{s}(\phi_{u}^{s})\Big). $$ This is clearly a lower bound for any placement policy. Note, however, that the threshold proportional strategy meets this bound by Corollary~\ref{cor:tp} (see Equation~\eqref{eqn:tp-bound}), so it indeed minimizes the probability of loss on a given eMBB slot.
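The pooling argument is easy to check numerically. The following hedged sketch (all numbers illustrative) verifies that under TP placement, i.e., $\gamma^s_u \propto \phi^s_u t^s_u(\phi^s_u)$, every user's loss event coincides with the pooled event $\{D > \sum_{u} \phi^s_u t^s_u(\phi^s_u)\}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
phi = np.array([0.5, 0.3, 0.2])      # eMBB allocations (sum to 1)
t = np.array([0.3, 0.7, 0.5])        # relative thresholds t_u^s(phi_u^s)
pool = np.sum(phi * t)               # pooled puncturing tolerance
gamma = phi * t / pool               # TP placement factors (sum to 1)
D = rng.uniform(0.0, 1.0, size=200000)  # aggregate URLLC demand per slot
# user u suffers loss iff gamma_u * D > phi_u * t_u, i.e., iff D > pool
loss_any = (gamma[:, None] * D[None, :] > (phi * t)[:, None]).any(axis=0)
print(loss_any.mean(), (D > pool).mean())  # identical frequencies
\end{verbatim}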
\subsection{Optimal eMBB Slot Slicing} \label{sec:eMBB_slot_aggregation} In Section~\ref{sec:sys-model} we have used uniform slot sizes for eMBB users, i.e., the allocated minislots for all users span the entire width of the slot (see Figure~\ref{fig:config_2}; henceforth referred to as frequency slices). However, new proposals allow greater flexibility in slot allocation, e.g., the capability to choose different slices over both time and frequency for different eMBB users~\cite{3gpp_ran1_87}. In this section we will show that, while it is possible to slice eMBB users' resources flexibly, for convex loss functions it is preferable to slice frequency (see Figure~\ref{fig:config_2}) rather than time from the point of view of puncturing losses.
\begin{figure} \centering \includegraphics[scale=0.33]{configuration_1} \caption{Time slices: in this configuration, eMBB users share resources over time in an eMBB slot.} \label{fig:config_1} \end{figure} \begin{figure} \centering \includegraphics[scale=0.35]{configuration_2} \caption{Frequency slices: in this configuration, eMBB users homogeneously share frequency in an eMBB slot.} \label{fig:config_2} \end{figure} The essence of the discussion can be captured by comparing the two resource allocation configurations shown in Figures~\ref{fig:config_1} and~\ref{fig:config_2}. In Configuration~1 (time slices), eMBB user 1 is allocated the entire frequency band for a subset of $ m_1 $ minislots. Similarly, eMBB user 2 is allocated the entire frequency band for its subset of $ m_2 $ minislots. The network state $s$ is assumed to be the same for the entire $ m_1 + m_2$ minislots. This implies that the loss functions of the eMBB users $ (h_u^s\brac{\cdot}) $ do not change throughout the $ m_1 + m_2 $ minislots. In Configuration~2 (frequency slices) we allocate eMBB user $1$ a fraction $\phi_1$ of the bandwidth for a duration of $ m_1 + m_2 $ minislots, where $ \phi_1:=\frac{m_1}{m_1 +m_2}$, and similarly for eMBB user $ 2 $. Note that the total amount of resources allocated to each eMBB user, i.e., the area allocated in the time-frequency plane, is the same in both configurations. In Configuration~1, the total puncturing observed by eMBB user $1$ is given by $\sum_{m=1}^{m_1} D(m) $, and similarly for eMBB user $ 2 $. In Configuration~2, under uniform URLLC placement, the total puncturing observed by eMBB user $1$ is given by $\sum_{m=1}^{m_1+m_2} \phi_1 D(m)$. Note that the mean total puncturing is the same in both configurations. The main result of this section is given below: \begin{theorem} \label{thm:horizontal_vs_vertical} Under the assumption of i.i.d. URLLC demands\footnote{This result can be extended to exchangeable URLLC demands. We use the i.i.d. assumption to maintain consistency with the other sections.} ($D(m), \, m=1,2, \ldots, m_1+m_2$) and convex loss functions $(h_u^s \brac{\cdot})$, for any eMBB user, e.g., eMBB user $ 1 $, we have that \begin{equation} \label{eq:result_on_vertical_vs_horizontal} \expect{h_1^s \brac{\sum_{m=1}^{m_1} D(m)}} \geq \expect{h_1^s \brac{\sum_{m=1}^{m_1+m_2 } \phi_1 D(m)}}. \end{equation} \end{theorem} The proof of this result is given in Appendix~G. {\bf Remarks:} The above theorem shows that the expected loss suffered by an eMBB user due to URLLC puncturing in Configuration~1 (time slicing) is higher than in Configuration~2 (frequency slicing). This implies that it is preferable for eMBB users to spread their resource allocation over time from the perspective of reducing their loss due to puncturing. The underlying reason is that Configuration~2 results in smaller variability in the total puncturing, even though both configurations have the same mean total puncturing. Since the loss functions are convex, lower variability leads to a lower expected loss. Finally, for more complex (rectangular) slices, we can apply Theorem~\ref{thm:horizontal_vs_vertical} iteratively and show that using frequency slices with appropriate scaling of the bandwidth allocation results in a higher average rate for eMBB users.
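The variability intuition can be illustrated by a quick Monte Carlo check of~\eqref{eq:result_on_vertical_vs_horizontal}; the convex loss $h(x)=x^2$ and the uniform demand model below are illustrative assumptions, not the paper's choices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m1, m2, n = 3, 5, 200000
phi1 = m1 / (m1 + m2)
D = rng.uniform(0.0, 1.0 / 8.0, size=(n, m1 + m2))  # i.i.d. demands D(m)
h = lambda x: x ** 2                             # convex loss (illustrative)
time_slice = h(D[:, :m1].sum(axis=1)).mean()     # E[h(sum_{m<=m1} D(m))]
freq_slice = h(phi1 * D.sum(axis=1)).mean()      # E[h(phi1 * sum_m D(m))]
print(time_slice >= freq_slice)                  # True, matching the theorem
\end{verbatim}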
\section*{Acknowledgements} The work of Arjun Anand was partially supported by FutureWei Technologies and NSF grant CNS-1731658, Gustavo de Veciana was partially supported by NSF grants CNS-1343383 and CNS-1731658, and Sanjay Shakkottai was partially supported by NSF grants CNS-1343383 and CNS-1731658, and the US DoT D-STOP Tier 1 University Transportation Center. \bibliographystyle{IEEEtran} \subsection{Optimality of Minislot-Homogeneous Policies} In the previous section we restricted ourselves to minislot-homogeneous policies. In this section we will justify this choice. Let us consider a generalization of minislot-homogeneous policies where the URLLC placement in each minislot can depend on the history of URLLC arrivals prior to that minislot. Such a policy can obviously perform at least as well as minislot-homogeneous URLLC placement policies, since in a minislot-homogeneous policy we decide the URLLC placement at the beginning of an eMBB slot based on the expected loss due to puncturing/superposition and do not adapt it based on the realization of URLLC demands per minislot. However, finding an optimal scheduling policy under this generalization can be computationally expensive, whereas minislot-homogeneous policies are attractive due to their simplicity. In this section we identify conditions under which minislot-homogeneous URLLC placement policies perform as well as the general class of \emph{causal} and \emph{minislot-dependent} policies. These terms are defined below. \begin{definition} A scheduler is said to be \emph{causal} if at the beginning of a minislot $m$ the scheduler knows the realizations of $D(1)$, $D(2), \ldots, D(m-1)$ and is unaware of the realizations of $ D(m), D(m+1), \ldots, D(\abs{\mathcal{M}}) $. \end{definition} \begin{definition} A scheduling policy is said to be \emph{minislot-dependent} if the URLLC placement policy can vary with the minislot index $m$ and the previous URLLC demands in the eMBB slot. \end{definition} The decision variables in a causal and minislot-dependent joint scheduling policy $\pi$ can be described as follows: \begin{enumerate} \item At the beginning of an eMBB slot, the scheduler chooses $\phi_u^{\pi, s}, u \in \mathcal{U}$ such that \begin{equation} \label{eq:constraint_on_phi} \sum_{u \in \mathcal{U}} \phi^{\pi, s}_u =1 \mbox{ and } \phi^{\pi, s}_u \in \sbrac{0,1} \quad \forall u \in \mathcal{U}. \end{equation} \item In each minislot $m$, the total puncturing placed on eMBB user $u$ is given by $\gamma_{u,m}^{\pi, s}\brac{ \bvec{d}^{\brac{1:m-1}}} D(m)$, where $\gamma_{u,m}^{\pi, s}\brac{ \cdot}$ characterizes the URLLC placement in minislot $ m $ as a function of the previously seen URLLC demands $\bvec{D}^{\brac{1:m-1}}:= \brac{D(1), D(2), \ldots, D(m-1)}$. Let $ \bvec{d}^{\brac{1:m-1}} $ be a realization of $ \bvec{D}^{\brac{1:m-1}}$. For any $m$ and $\bvec{d}^{\brac{1:m-1}}$, $\gamma_{u,m}^{\pi, s}\brac{ \bvec{d}^{\brac{1:m-1}}}$ has to satisfy the following constraints: \begin{align} \sum_{u \in \mathcal{U}} \gamma^{s,\pi }_{u,m} ( \bvec{d}^{\brac{1:m-1}})&=1, \label{eq:constraint_on_gamma_1}\\ \gamma^{s,\pi }_{u,m} ( \bvec{d}^{\brac{1:m-1}}) &\leq \frac{ \phi_u^{\pi, s}}{\abs{\mathcal{M}}\brac{1-\delta}} \quad \forall u \in \mathcal{U}, \label{eq:constraint_on_gamma_2}\\ \gamma^{s,\pi }_{u,m} ( \bvec{d}^{\brac{1:m-1}}) &\in \sbrac{0,1} \quad \forall u \in \mathcal{U}.
\label{eq:constraint_on_gamma_3} \end{align} \end{enumerate} Observe that the URLLC placement factor for a causal and minislot-dependent scheduling policy is not just dependent on the user and the network state, but also on the minislot index and the past URLLC demands. Let $\tilde{\Pi}$ be the set of all causal and minislot-dependent scheduling policies. In our online algorithm~\eqref{eqn:opt-stoch-approx}, for any eMBB slot $t$, we find the policy which solves the following optimization problem with non-negative weights $ w_u $: \begin{equation} \mathcal{OP}_1 : \quad \underset{\pi \in \tilde{\Pi}}{\max}: \sum_{u \in \mathcal{U}} w_u g_u^{\pi, s}\brac{\phi_u^{\pi, s}, \bvecgreek{\gamma}_{u}^{\pi, s}}, \end{equation} where $s$ is the current network state, $ \bvecgreek{\gamma}_{u}^{\pi, s}:=\brac{{\gamma}_{u,1}^{\pi, s}\brac{ \cdot}, {\gamma}_{u,2}^{\pi, s}\brac{ \cdot}, \ldots, {\gamma}_{u,\abs{\mathcal{M}}}^{\pi, s}\brac{ \cdot}} $ is the vector of URLLC placement factors over all minislots (with slight abuse of notation), and $g_u^{\pi, s}\brac{\cdot, \cdot}$, the average rate experienced by eMBB user $u$ under policy $\pi$, is given by the following expression: \begin{multline} g_u^{\pi, s} \brac{\phi_{u}^{\pi, s}, \bvecgreek{\gamma}_{u}^{\pi, s}} := \\ r_u^s \phi_u^{\pi, s}\expect{{1- h_u^s \brac{\frac{\sum_{m=1}^{\abs{\mathcal{M}}}{\gamma_{u,m}^{\pi, s}\brac{{ \bvec{D}^{\brac{1:m-1}}} }D(m)}}{{ \phi_u^{\pi, s}}}}}}, \end{multline} where the expectation is computed with respect to the joint distribution of $D(1)$, $D(2), \ldots$, $D(\abs{\mathcal{M}})$. One can formulate the above optimization problem as a Markov Decision Problem (MDP); however, the state space of such an MDP is prohibitively large. Furthermore, we note that minislot-homogeneous policies are attractive in terms of their computational complexity. In general, one cannot expect optimal minislot-homogeneous policies to perform as well as optimal minislot-dependent policies; however, if we restrict ourselves to convex \emph{homogeneous} loss functions, then we can show that minislot-homogeneous policies are in fact optimal over $ \tilde{\Pi} $. \begin{definition} \label{asm:assumption_on_convex_cost_functions} A loss function $ h_u^s(\cdot) $ is said to be homogeneous if there exists a real number $p$ such that for all $ x \in \sbrac{0,1} $ and $ \kappa \geq 0$ with $\kappa x \in \sbrac{0,1}$ we have that \begin{equation} h_u^s(\kappa x) = \kappa^p h_u^s( x). \end{equation} \end{definition} Even with this restriction we can model useful loss functions which may be user and network state dependent. Some examples are given below. \begin{enumerate} \item {\bf Linear:} $h_u^s(x) = k_u^s x$, where $k_u^s\geq 0$. \item {\bf Monomial:} $h_u^s(x)= k_u^s x^q$, where $k_u^s\geq 0$ and $q \geq 1$. \end{enumerate} Our main result on the optimality of minislot-homogeneous policies is proved in Appendix~F and stated next. \begin{theorem} \label{thm:optimality_of_minislot_homogeneous_policy} If the support of the URLLC demand $ D $ is a finite discrete set and the eMBB loss functions are homogeneous and convex, then there exists an optimal solution $\brac{\bvec{\phi}^{s, *}, \bvec{\gamma}^{s, *}\brac{ \cdot}}$ for $\mathcal{OP}_1$ with a minislot-homogeneous URLLC placement policy $ \bvec{\gamma}^{s, *} $. \end{theorem}
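The degree-$p$ scaling that drives the proof in Appendix~F is straightforward to sanity-check numerically; the monomial loss below and all constants are illustrative assumptions:
\begin{verbatim}
import numpy as np

k, q = 0.8, 2.0                      # monomial loss h(x) = k * x^q
h = lambda x: k * x ** q
dh = lambda x: k * q * x ** (q - 1)  # its derivative h'(x)
x, kappa, d = 0.4, 0.6, 0.5
print(np.isclose(h(kappa * x), kappa ** q * h(x)))  # h homogeneous, degree q
print(np.isclose(dh(d * x), d ** (q - 1) * dh(x)))  # h' homogeneous, degree q-1
\end{verbatim}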
{ "timestamp": "2018-08-28T02:08:50", "yymm": "1712", "arxiv_id": "1712.05344", "language": "en", "url": "https://arxiv.org/abs/1712.05344" }
\section{Introduction} Let $\hat M^m(c)$ be the compact complex Grassmannian $SU_{m+2}/S(U_2U_m)$ of rank two (resp. noncompact complex Grassmannian $SU_{2,m}/S(U_2U_m)$ of rank two) for $c>0$ (resp. $c<0$), where $c=\max||K||/8$ is a scaling factor for the Riemannian metric $g$ and $K$ is the sectional curvature of $\hat M^m(c)$. It is an irreducible Riemannian symmetric space equipped with a K\"ahler structure $J$ and a quaternionic K\"ahler structure $\mathfrak{J}$ not containing $J$. Let $M$ be a connected, oriented real hypersurface isometrically immersed in $\hat M^m(c)$, $m\geq 2$, and $N$ be a unit normal vector field on $M$. Denote by the same $g$ the Riemannian metric on $M$. The Reeb vector field $\xi$ is defined by $\xi = -JN$, and we define $\xi_a = -J_a N$, $a\in\{1,2,3\}$, where $\{J_1,J_2,J_3\}$ is a canonical local basis of $\mathfrak J$. Denote by $D^\perp$ (resp. $\mathfrak D^\perp$) the distribution on $M$ spanned by $\xi$ (resp. $\{\xi_1,\xi_2,\xi_3\}$). A real hypersurface $M$ in a K\"ahler manifold is said to be Hopf if the Reeb vector field is principal, that is, $A\xi=\alpha\xi$. The study of real hypersurfaces in $\hat M^m(c)$ was initiated by Berndt and Suh in \cite{berndt-suh, berndt-suh2}. They considered the invariance of $\mathfrak D^\perp$ under the shape operator $A$ of Hopf hypersurfaces $M$ in $\hat M^m(c)$ and proved a classification of such Hopf hypersurfaces in $\hat M^m(c)$. The structures $J$ and $\mathfrak{J}$ of the ambient space impose several restrictions on the geometry of its real hypersurfaces; for example, there does not exist any semi-parallel real hypersurface in $SU_{m+2}/S(U_2U_m)$ \cite{looth}, while the non-existence problem for Hopf hypersurfaces in $\hat M^m(c)$ with parallel Ricci tensor was studied in \cite{suh2, suh3}. Besides the shape operator and the Ricci tensor, there are two operators on a real hypersurface $M$ which draw particular attention, namely the normal Jacobi operator $R_N$ and the structure Jacobi operator $R_\xi$. Denote by $\hat R$ and $R$ the curvature tensor on $\hat M^m(c)$ and that induced on $M$, respectively. We define $R_N(X)=\hat R(X,N)N$ and $R_\xi(X)=R(X,\xi)\xi$ for any vector field $X$ tangent to $M$. A $(1,s)$-tensor field $P$ is said to be \emph{semi-parallel} if $R\cdot P=0$, where the curvature tensor $R$ acts on $P$ as a derivation. More precisely, \begin{align*} (R(X,Y) &\cdot P)(X_1,\cdots,X_s)\\ &=R(X,Y)P(X_1,\cdots,X_s)-\sum^s_{j=1}P(X_1,\cdots,R(X,Y)X_j,\cdots,X_s). \end{align*} The tensor field $P$ is said to be \emph{recurrent} if there exists a $1$-form $\omega$ on $M$ such that \[ (\nabla_XP)(X_1,\cdots,X_s)=\omega(X)P(X_1,\cdots,X_s). \] Clearly, a vanishing $\omega$ leads to parallelism of $P$. Recently, we proved the non-existence of real hypersurfaces in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with pseudo-parallel normal Jacobi operator \cite{loothfilo}. On the other hand, related to the structure Jacobi operator $R_\xi$, Jeong et al. proved that there does not exist any Hopf hypersurface in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with parallel structure Jacobi operator \cite{jps}. Also, the non-existence of Hopf hypersurfaces with $D^\perp$-parallel structure Jacobi operator is obtained under certain conditions \cite{jmps}. Jeong et al. considered the Reeb-parallel structure Jacobi operator and proved the following: \begin{theorem}[\cite{jks}] Let $M$ be a Hopf hypersurface in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with Reeb-parallel structure Jacobi operator.
If the principal curvature of the Reeb vector field $\xi$ on $M$ is non-vanishing and constant along the direction of the Reeb vector field $\xi$, then $M$ is an open part of a tube around a totally geodesic $SU_{m+1}/S(U_2U_{m-1})$ in $SU_{m+2}/S(U_2U_m)$ with radius $r\in (0,\frac{\pi}{4\sqrt{2}}) \cup (\frac{\pi}{4\sqrt{2}}, \frac{\pi}{\sqrt{8}})$. \end{theorem} We say that a real hypersurface $M$ has commuting structure Jacobi operator if it commutes with any other Jacobi operator defined on $M$, that is, $R_\xi \cdot R_X = R_X \cdot R_\xi$ for any $X$ tangent to $M$. Machado et al. proved the non-existence of Hopf real hypersurfaces in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with commuting structure Jacobi operator under certain conditions \cite{mps}. They also classified real hypersurfaces in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with $\hat R_N\cdot R_\xi=R_\xi\cdot \hat R_N$. In \cite{recurrent}, Jeong et al. proved the following: \begin{theorem}[\cite{recurrent}] There does not exist any Hopf hypersurface in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with recurrent structure Jacobi operator if the $\mathfrak D$- or $\mathfrak D^\perp$-component of the Reeb vector field is invariant under the shape operator. \end{theorem} On the other hand, under certain restrictions, Hwang et al. obtained the following non-existence result. \begin{theorem}[\cite{hwang-lee-woo}] There does not exist any Hopf hypersurface in $SU_{m+2}/S(U_2U_m)$, $m \ge 3$, with semi-parallel structure Jacobi operator if the smooth function $\alpha=g(A\xi, \xi)$ is constant along the direction of $\xi$. \end{theorem} Motivated by the above studies, a natural question arises: \begin{problem} Does there exist a real hypersurface in $\hat M^m(c)$ with parallel, recurrent or semi-parallel structure Jacobi operator? \end{problem} In the present paper we first prove the following: \begin{theorem}\label{T2} There does not exist any connected real hypersurface in $\hat M^m(c)$, $m\geq3$, with semi-parallel structure Jacobi operator. \end{theorem} The non-existence of real hypersurfaces with semi-parallel structure Jacobi operator in a non-flat complex space form has been proved in \cite{cho-kimura, ivey-ryan}. We also remark that Theorem~\ref{T2} holds for non-Hopf real hypersurfaces as well, and no further conditions are imposed. By a result in \cite{de-loo}, we know that if a tensor field is recurrent then it is semi-parallel. Hence, as a corollary we obtain the following: \begin{corollary} There does not exist any real hypersurface in $\hat M^m(c)$, $m\geq3$, with parallel or recurrent structure Jacobi operator. \end{corollary} \section{Preliminaries} In this section, we recall some fundamental identities for real hypersurfaces in complex Grassmannians of rank two, which have been proved in \cite{berndt-suh, berndt-suh2, lee-loo, looth}. Let $M$ be a connected, oriented real hypersurface isometrically immersed in $\hat M^m(c)$, $m\geq 3$. The almost contact metric 3-structure $(\phi_a,\xi_a,\eta_a,g)$ on $M$ is given by $$ J_a X=\phi_a X+\eta_a (X) N,\quad\quad J_a N=-\xi_a,\quad\quad \eta_a(X)=g(X,\xi_a), $$ for any $X\in TM$, where $\{J_1,J_2,J_3\}$ is a canonical local basis of $\mathfrak J$ on $\hat M^m(c)$. It follows that \begin{align*} &\phi_a\phi_{a+1}-\xi_a\otimes\eta_{a+1}=\phi_{a+2}, \\ &\phi_a\xi_{a+1}=\xi_{a+2}=-\phi_{a+1}\xi_a \end{align*} for $a\in\{1,2,3\}$. The indices in the preceding equations are taken modulo three.
The K\"ahler structure $J$ induces on $M$ an almost contact metric structure $(\phi,\xi,\eta,g)$, namely, \begin{align*} JX=\phi X+\eta(X)N, \quad JN=-\xi, \quad \eta(X)=g(X,\xi). \end{align*} Let $\mathfrak D^\perp=\mathfrak JTM^\perp$, and $\mathfrak D$ its orthogonal complement in $TM$. We define a local $(1,1)$-tensor field $\theta_a$ on $M$ by \[ \theta_a :=\phi_a\phi -\xi_a\otimes\eta. \] Denote by $\nabla$ the Levi-Civita connection on $M$. Then there exist local $1$-forms $q_a$, $a\in\{1,2,3\}$ such that \begin{eqnarray}\label{eqn:contact} \left.\begin{aligned} & \nabla_X \xi = \phi AX \\%\label{eqn:delxi}\\ &\nabla_X \xi_a = \phi_a AX+q_{a+2}(X)\xi_{a+1}-q_{a+1}(X)\xi_{a+2} \\ &\nabla_X\phi\xi_a =\theta_aAX+\eta_a(\xi)AX+q_{a+2}(X)\phi\xi_{a+1} - q_{a+1}(X)\phi\xi_{a+2}. \end{aligned}\right\} \end{eqnarray} The following identities are known. \begin{lemma}[\cite{lee-loo}] \label{lem:theta} \begin{enumerate} \item[(a)] $\theta_a$ is symmetric, \item[(b)] $\phi\xi_a=\phi_a\xi$ \item[(c)] $\theta_a\xi=-\xi_a; \quad \theta_a\xi_a=-\xi; \quad \theta_a\phi\xi_a=\eta(\xi_a)\phi\xi_a$, \item[(d)] $\theta_a\xi_{a+1}= \phi\xi_{a+2}=-\theta_{a+1}\xi_a$, \item[(e)] $-\theta_a\phi\xi_{a+1}+\eta(\xi_{a+1})\phi\xi_a =\xi_{a+2}=\theta_{a+1}\phi\xi_a-\eta(\xi_a)\phi\xi_{a+1}$. \end{enumerate} \end{lemma} \begin{lemma}[\cite{lee-loo}]\label{lem:Aphixi_a} If $\xi\in\mathfrak D$ everywhere, then $A\phi\xi_a=0$ for $a\in\{1,2,3\}$. \end{lemma} For each $x\in M$, we define a subspace $\mathcal H^\perp$ of $T_xM$ by $$\mathcal H^\perp: =\mathrm{span}\{\xi,\xi_1,\xi_2,\xi_3,\phi\xi_1,\phi\xi_2,\phi\xi_3\}.$$ Let $\mathcal{H}$ be the orthogonal complement of $\mathcal {H}^\perp$ in $T_xM$. Then $\dim\mathcal H=4m-4$ (resp. $\dim\mathcal H=4m-8$) when $\xi\in\mathfrak D^\perp$ (reps. $\xi\notin\mathfrak D^\perp$). Moreover, $\theta_{a|\mathcal{H}}$ has two eigenvalues: $1$ and $-1$. Denote by $\mathcal H_a(\varepsilon)$ the eigenspace corresponding to the eigenvalue $\varepsilon$ of ${\theta_a}_{|\mathcal H}$. Then $\dim \mathcal H_a(1)=\dim \mathcal H_a(-1)$ is even, and \begin{align*} \begin{aligned &\phi\mathcal H_a(\varepsilon)=\phi_a\mathcal H_a(\varepsilon)=\theta_a\mathcal H_a(\varepsilon)=\mathcal H_a(\varepsilon) \\ &\phi_b\mathcal H_a(\varepsilon)=\theta_b\mathcal H_a(\varepsilon)=\mathcal H_a(-\varepsilon), \quad (a\neq b). \end{aligned \end{align*} We define the tensor fields $\theta$, $\phi^\perp$, $\xi^\perp$ and $\eta^\perp$ on $M$ as follows \begin{align*} \theta:=&\sum^3_{a=1}\eta_a(\xi)\theta_a, \quad \phi^\perp:=\sum^3_{a=1}\eta_a(\xi)\phi_a, \quad \xi^\perp:=\sum^3_{a=1}\eta_a(\xi)\xi_a, \quad \eta^\perp:=\sum^3_{a=1}\eta_a(\xi)\eta_a. \end{align*} Then for each $x\in M$ with $\xi^\perp\neq0$, $\theta_{|\mathcal H}$ has two eigenvalues $\varepsilon||\xi^\perp||$, $\varepsilon\in\{1,-1\}$. Let $\mathcal H(\varepsilon)$ be the eigenspace of $\theta_{|\mathcal H}$ corresponding to $\varepsilon||\xi^\perp||$. Then \begin{enumerate} \item[(a)] $\phi \mathcal H(\varepsilon)=\phi^\perp \mathcal H(\varepsilon)= \mathcal H(\varepsilon)$, \item[(b)] $\dim \mathcal H(1)=\dim \mathcal H(-1)$ is even. 
\end{enumerate} Moreover, we can take a canonical local basis of $\mathfrak J$ on a neighborhood $G\subset M$ of such a point $x$ such that \begin{align*} &\xi_1=\frac{\xi^\perp}{||\xi^\perp||}, \quad 0<\eta_1(\xi)=||\xi^\perp||\leq 1, \quad \eta_2(\xi)=\eta_3(\xi)=0,\\ &\mathcal H(\varepsilon)=\mathcal H_1(\varepsilon), \quad \theta=\eta_1(\xi)\theta_1, \quad \phi^\perp=\eta_1(\xi)\phi_1, \quad \eta^\perp=\eta_1(\xi)\eta_1. \end{align*} In particular, if $||\xi^\perp||=1$ at $x$, then \begin{align*} &\xi_1=\xi=\xi^\perp, \quad \xi_2=\theta\xi_2=\phi\xi_3, \quad \xi_3=\theta\xi_3=-\phi\xi_2. \end{align*} Throughout this paper, we always consider such a local orthonormal frame $\{\xi_1,\xi_2,\xi_3\}$ on $\mathfrak D^\perp$ under these situations. A straightforward calculation gives \begin{align}\label{eqn:global} (\nabla_X\theta)Y=\eta^\perp(\phi Y)AX-g(AX,Y)\phi\xi^\perp +2\sum^3_{a=1}\eta_a(\phi AX)\theta_aY. \end{align} The equations of Gauss and Codazzi are respectively given by $$\begin{aligned} R(X,Y)Z=&g( AY,Z) AX-g( AX,Z) AY+c\{g( Y,Z) X-g(X,Z) Y\\ &+g(\phi Y,Z)\phi X-g(\phi X,Z)\phi Y -2g(\phi X,Y)\phi Z\}\\ &+c\sum_{a=1}^3\{g(\phi_aY,Z)\phi_aX-g(\phi_aX,Z) \phi_aY-2g(\phi_aX,Y)\phi_aZ\\ &+g(\theta_aY,Z)\theta_aX-g(\theta_aX,Z)\theta_aY\} \end{aligned}$$ and \begin{align*} (\nabla_X A)Y-(\nabla_Y A)X=&c\{\eta(X)\phi Y-\eta(Y)\phi X-2g(\phi X,Y)\xi\}\\ &+c\sum_{a=1}^3 \{\eta_a(X)\phi_a Y-\eta_a(Y)\phi_a X -2g(\phi_a X,Y)\xi_a\\& +\eta_a(\phi X)\theta_a Y-\eta_a(\phi Y)\theta_a X\}. \end{align*} As $M$ is a real hypersurface in $\hat M^m(c)$, by the Gauss equation, we have \begin{align}\label{eqn:10} R_\xi X=&\alpha AX-\eta(AX)A\xi+c\{X-\eta(X)\xi-\theta X\} \notag\\ &-c\sum^3_{a=1}\{\eta_a(X)\xi_a+3\eta_a(\phi X)\phi\xi_a\}. \end{align} We end this section with the following general result. \begin{theorem}\label{T1} Let $M$ be an almost contact metric manifold. The structure Jacobi operator $R_\xi$ is semi-parallel if and only if $R_\xi=0$. \end{theorem} \begin{proof} Suppose the structure Jacobi operator is semi-parallel. Then \[ R(X,Y)R_\xi Z-R_\xi R(X,Y)Z=(R(X,Y)\cdot R_\xi)Z=0 \] for any $X,Y,Z\in TM$. In particular, for $Y=Z=\xi$, we obtain $R_\xi^2X=0$. Since $R_\xi$ is self-adjoint, this gives $0=g(R_\xi^2X,X)=g(R_\xi X,R_\xi X)$ for all $X\in TM$, and hence $R_\xi=0$. The converse is trivial. \end{proof} \section{Proof of Theorem~\ref{T2}} By virtue of Theorem~\ref{T1}, it suffices to show that the structure Jacobi operator cannot be identically zero. Suppose to the contrary that $R_\xi=0$. Then by (\ref{eqn:10}), we have \begin{align}\label{eqn:B-10} \alpha AY-\eta(AY)A\xi+c\{Y-\eta(Y)\xi-\theta Y\}-c\sum^3_{a=1}\{\eta_a(Y)\xi_a+3\eta_a(\phi Y)\phi\xi_a\}=0. \end{align} \begin{claim} $\xi\notin\mathfrak D$ on an open dense subset of $M$. \end{claim} \begin{proof} Suppose $\xi\in\mathfrak D$ on an open subset $G$ of $M$. For each $x\in G$, we have $\theta=0$. Putting $Y=\phi\xi_1$ in (\ref{eqn:B-10}) and using Lemma~\ref{lem:Aphixi_a}, it follows that $\phi\xi_1=0$; a contradiction. Hence we obtain the claim. \end{proof} Consider a point $x\in M$ such that $\xi\notin\mathfrak D$ on a neighborhood $G$ of $x$ in $M$. We define subspaces $\mathcal F$, $\mathcal F(1)$ and $\mathcal F(-1)$ of $T_xM$ by \begin{align*} \mathcal F=\{X\in\mathcal H: \eta(AX)=0\}, \quad \mathcal F(1)=\mathcal F\cap\mathcal H(1), \quad \mathcal F(-1)=\mathcal F\cap\mathcal H(-1).
\end{align*} It is clear that \begin{align*} AY=\lambda_\varepsilon Y, \quad \alpha\lambda_\varepsilon+c(1-\varepsilon ||\xi^\perp||)=0 \end{align*} for any $Y\in\mathcal F(\varepsilon)$ and $\varepsilon\in\{1,-1\}$. By (\ref{eqn:B-10}), we have \begin{align*} &(X\alpha)AY+\alpha(\nabla_XA)Y-\eta(AY)\big\{(\nabla_XA)\xi+A\nabla_X\xi\big\} \notag\\ &-\big\{g((\nabla_XA)Y,\xi)+g(AY,\nabla_X\xi)\big\}A\xi+c\{-g(\nabla_X\xi,Y)\xi-\eta(Y)\nabla_X\xi-(\nabla_X\theta)Y\} \notag\\ &+c\sum^3_{a=1}\{-g(\nabla_X\xi_a,Y)\xi_a-\eta_a(Y)\nabla_X\xi_a+3g(\nabla_X\phi\xi_a,Y)\phi\xi_a -3\eta_a(\phi Y)\nabla_X\phi\xi_a\}=0 \end{align*} for any $X,Y\in T_xM$. By using (\ref{eqn:contact}) and (\ref{eqn:global}), the preceding equation becomes \begin{align*} &(X\alpha)AY+\alpha(\nabla_XA)Y-\eta(AY)\big\{(\nabla_XA)\xi+A\phi AX\big\} \\ &-\big\{g((\nabla_XA)Y,\xi)+g(A\phi AX,Y)\big\}A\xi \\ &+c\{-g(\phi AX,Y)\xi-\eta(Y)\phi AX +4g(AX,Y)\phi\xi^\perp-4\eta^\perp(\phi Y)AX\} \notag\\ &+c\sum^3_{a=1}\{-g(\phi_aAX,Y)\xi_a-\eta_a(Y)\phi_aAX \\ &+3g(\theta_aAX,Y)\phi\xi_a-3\eta_a(\phi Y)\theta_aAX-2\eta_a(\phi AX)\theta_aY\}=0. \end{align*} By the preceding equation and the Codazzi equation, we have \begin{align}\label{eqn:B-100} & (X\alpha)AY- (Y\alpha)AX-\eta(AY)\big\{(\nabla_XA)\xi+A\phi AX\big\} +\eta(AX)\big\{(\nabla_YA)\xi+A\phi AY\big\} \notag\\ &+c\big\{\eta(X)(\phi AY+\alpha\phi Y)-\eta(Y)(\phi AX+\alpha\phi X) -4\eta^\perp(\phi Y)AX+4\eta^\perp(\phi X)AY\big\} \notag\\ &+2g(c(\phi+\phi^\perp)X-A\phi AX,Y)A\xi-cg(2\alpha\phi X+(\phi A+A\phi )X,Y)\xi \notag\\ &+c\sum^3_{a=1}\{ \eta_a(X)(\phi_a AY+\alpha\phi_a Y)-\eta_a(Y)(\phi_a AX+\alpha\phi_a X) \notag\\ & -\alpha\eta_a(\phi Y)\theta_a X+\alpha\eta_a(\phi X)\theta_a Y-3\eta_a(\phi Y)\theta_aAX+3\eta_a(\phi X)\theta_aAY \notag\\ &+2\eta_a(\phi AY)\theta_aX-2\eta_a(\phi AX)\theta_aY-2\eta_a(X)\eta_a(\phi Y)A\xi+2\eta_a(Y)\eta_a(\phi X)A\xi \notag\\ &-g(2\alpha\phi_a X+(\phi_a A+A\phi_a )X,Y)\xi_a+3g((\theta_a A-A\theta_a)X,Y)\phi\xi_a \} =0 \end{align} for any $X,Y\in T_xM$. By taking $X,Y\in\mathcal F$ in (\ref{eqn:B-100}), we have \begin{align*} & (X\alpha)AY- (Y\alpha)AX \notag\\ &+2g(c(\phi+\phi^\perp)X-A\phi AX,Y)A\xi-cg(2\alpha\phi X+(\phi A+A\phi )X,Y)\xi \notag\\ &+c\sum^3_{a=1}\{ -g(2\alpha\phi_a X+(\phi_a A+A\phi_a )X,Y)\xi_a+3g((\theta_a A-A\theta_a)X,Y)\phi\xi_a\} =0. \end{align*} Since the first two terms lie in $\mathcal H$ and the remaining ones lie in $\mathcal H^\perp$, we have \begin{align} \label{eqn:B-120} &2g(c(\phi+\phi^\perp)X-A\phi AX,Y)A\xi-cg(2\alpha\phi X+(\phi A+A\phi )X,Y)\xi \notag\\ &+c\sum^3_{a=1}\{ -g(2\alpha\phi_a X+(\phi_a A+A\phi_a )X,Y)\xi_a+3g((\theta_a A-A\theta_a)X,Y)\phi\xi_a\} =0 \end{align} for any $X,Y\in\mathcal F$. For any $\varepsilon\in\{1,-1\}$, we can further deduce from (\ref{eqn:B-120}) that \begin{align}\label{eqn:B-130} 0=&2g(\phi X,Y)\{c(1-\varepsilon||\xi^\perp||)-\lambda_\varepsilon^2\}A\xi +g(\phi X,Y)c(2\alpha+2\lambda_\varepsilon)\left\{-\xi+\frac\varepsilon {||\xi^\perp||}\xi^\perp\right\} \notag\\ =&2g(\phi X,Y)(\alpha+\lambda_\varepsilon)\left\{-\lambda_\varepsilon A\xi -c\left(\xi-\frac\varepsilon {||\xi^\perp||}\xi^\perp\right)\right\} \end{align} for any $X,Y\in\mathcal F(\varepsilon)$; and \begin{align}\label{eqn:B-140} 0=\sum^3_{a=1}\{-(2\alpha+\lambda_\varepsilon+\lambda_{-\varepsilon})g(\phi_aX,Y)\xi_a +3(\lambda_\varepsilon-\lambda_{-\varepsilon})g(\theta_aX,Y)\phi\xi_a\} \end{align} for any $X\in\mathcal F(\varepsilon)$ and $Y\in\mathcal F(-\varepsilon)$.
\begin{claim}\label{claim:A} $\dim \mathcal H\geq 8$. \end{claim} \begin{proof} Suppose $\dim \mathcal H=4$. Since $\dim M=4m-1\geq 11$, we have $\xi\notin\mathfrak D^\perp$, and hence $0<||\xi^\perp||<1$. Take a unit vector $V\in\mathcal F(1)$ such that $\mathcal H(1)=\mathbb RV\oplus\mathbb R\phi_1V$ and $\mathcal H(-1)=\mathbb R\phi_2V\oplus\mathbb R\phi_3V$. Substituting $X=V$ and $Y=\phi_2 V$ in (\ref{eqn:B-140}), we obtain \begin{align*} 0 =&-(2\alpha+\lambda_1+\lambda_{-1})\xi_2 -3(\lambda_1-\lambda_{-1})\phi\xi_3. \end{align*} Since $\{\xi_a,\phi\xi_a\}_{a\in\{1,2,3\}}$ is linearly independent, $-2c\alpha^{-1}||\xi^\perp||=\lambda_1-\lambda_{-1}=0$; a contradiction. Hence, the claim is obtained. \end{proof} According to Claim~\ref{claim:A}, there exists $X\in\mathcal F(\varepsilon)$ such that $X\perp\phi A\xi$. Taking such a vector $X$ and $Y=\phi X$ in (\ref{eqn:B-130}), we obtain \begin{align}\label{eqn:B-150} (\alpha+\lambda_\varepsilon)\left\{\lambda_\varepsilon A\xi+c\left(\xi-\frac{\varepsilon}{||\xi^\perp||}\xi^\perp\right)\right\}=0 \end{align} for any $\varepsilon\in\{1,-1\}$. \begin{claim}\label{claim:B-I} $||\xi^\perp||=1$ on $M$. \end{claim} \begin{proof} Suppose $0<||\xi^\perp||<1$ on the open subset $G$ of $M$. It is clear that $\alpha+\lambda_1$ and $\alpha+\lambda_{-1}$ cannot both be zero, as $\lambda_1-\lambda_{-1}=-2c\alpha^{-1}||\xi^\perp||\neq0$. Fix $\varepsilon\in\{1,-1\}$ such that $\alpha+\lambda_\varepsilon\neq0$. It follows from (\ref{eqn:B-150}) that \begin{align*} \lambda_\varepsilon A\xi+c\left(\xi-\frac{\varepsilon}{||\xi^\perp||}\xi^\perp\right)=0. \end{align*} This implies that $A\xi\perp\mathcal H$, so $\mathcal F(1)=\mathcal H(1)$ and $\mathcal F(-1)=\mathcal H(-1)$. Taking $X\in\mathcal H(1)$ and $Y=\phi_2X$ in (\ref{eqn:B-140}), we can obtain a contradiction by using a similar method as in the proof of Claim~\ref{claim:A}. Hence, $||\xi^\perp||=1$ at the point $x$. By the connectedness of $M$ and the continuity of $||\xi^\perp||$, we conclude that $||\xi^\perp||=1$ on $M$. \end{proof} Since $||\xi^\perp||=1$ on $M$, that is, $\xi\in\mathfrak D^\perp$ everywhere, we have $\lambda_1=0$ and $\lambda_{-1}=-2c/\alpha$ ($=\lambda$, for simplicity). Moreover, we have \begin{align}\label{eqn:B-160} -\sum^3_{a=1}\eta_a(\phi Y)\phi\xi_1=\sum^3_{a=1}\eta_a(Y)\xi_a-\eta(Y)\xi. \end{align} It follows from (\ref{eqn:B-10}) and (\ref{eqn:B-160}) that \begin{align}\label{eqn:B-180} \alpha AY-\eta(AY)A\xi+c\{Y-4\eta(Y)\xi-\theta Y\}+2c\sum^3_{a=1}\eta_a(Y)\xi_a=0. \end{align} On the other hand, we have \begin{align*} \sum^3_{a=1}&\{g(Y,\nabla_X\phi\xi_a)\phi\xi_a-\eta_a(\phi Y)\nabla_X\phi\xi_a\} \\ =&\sum^3_{a=1}\{g(Y,\nabla_X\xi_a)\xi_a+\eta_a(Y)\nabla_X\xi_a\}-g(Y,\nabla_X\xi)\xi-\eta(Y)\nabla_X\xi. \end{align*} By using (\ref{eqn:contact}), we have \begin{align}\label{eqn:B-200} \sum^3_{a=1}&\{g(Y,\theta_aAX)\phi\xi_a-\eta_a(\phi Y)\theta_aAX\} \notag\\ =&\sum^3_{a=1}\{g(Y,\phi_aAX)\xi_a+\eta_a(Y)\phi_aAX\}-g(Y,\phi AX)\xi-\eta(Y)\phi AX. \end{align} \begin{claim}\label{claim:B-II-a} $\lambda A\xi+2c\xi\neq0$. \end{claim} \begin{proof} Suppose $\lambda A\xi+2c\xi=0$. Then $A\xi=\alpha\xi$ and $\alpha\lambda+2c=0$. It follows from (\ref{eqn:B-180}) that \begin{align*} AY=-\frac c\alpha(Y-\theta Y)+\frac{\alpha^2+4c}\alpha\eta(Y)\xi-\frac{2c}\alpha\sum^3_{a=1}\eta_a(Y)\xi_a. \end{align*} By \cite[Theorem 6.1]{lee-loo}, we obtain \begin{align}\label{eqn:B-202} \alpha^2+4c=0 \end{align} and so $c<0$.
Furthermore, we have either \[ \frac{- c}\alpha=\frac{\sqrt{-2c}\tanh(\sqrt{-2c} r)}2, \quad \frac{-2c}\alpha=\sqrt{-2c}\coth(\sqrt{-2c} r), \quad r>0 \] or \[ \frac {-c}\alpha=\frac{\sqrt{-2c}}2, \quad \frac{-2c}\alpha=\sqrt{-2c}. \] However, both cases contradict (\ref{eqn:B-202}). Accordingly, we obtain the claim. \end{proof} By using Claim~\ref{claim:B-II-a}, (\ref{eqn:B-150}) and (\ref{eqn:B-180}), there exist a unit vector field $U$ tangent to $\mathcal H(-1)\oplus(\mathfrak D^\perp\ominus\mathbb R\xi)$ and functions $\tau$, $\beta$ ($\beta\neq0$) on $M$ such that \begin{align}\label{eqn:B-220} \left.\begin{array}{ll} A\xi=\alpha\xi+\beta U &\\ AU=\beta \xi+\tau U&\\ AY=\lambda Y, \quad &(Y\in\mathcal F(-1)) \\ AX=0, \quad & (X\in\mathcal H(1))\\ \alpha\tau+\alpha^2=\beta^2 \\ \lambda+\alpha=0, \quad \alpha^2=2c . \end{array}\right\} \end{align} By using (\ref{eqn:B-200}) and (\ref{eqn:B-220}), (\ref{eqn:B-100}) reduces to \begin{align}\label{eqn:B-240} & -\eta(AY)\big\{\alpha\phi AX+(X\beta)U+\beta\nabla_XU\big\} +\eta(AX)\big\{\alpha\phi AY+(Y\beta)U+\beta\nabla_YU\big\} \notag\\ &+c\big\{\eta(X)(4\phi AY+\alpha\phi Y) -\eta(Y)(4\phi AX+\alpha\phi X) \big\} \notag\\ &+2g(c(\phi+\phi^\perp)X-A\phi AX,Y)A\xi-cg(2\alpha\phi X+4(\phi A+A\phi )X,Y)\xi \notag\\ &+c\sum^3_{a=1}\{\eta_a(X)(-2\phi_a AY+\alpha\phi_a Y) -\eta_a(Y)(-2\phi_a AX+\alpha\phi_a X) \notag\\ & -\alpha\eta_a(\phi Y)\theta_a X +\alpha\eta_a(\phi X)\theta_a Y +2\eta_a(\phi AY)\theta_aX -2\eta_a(\phi AX)\theta_aY \notag\\ & -2\eta_a(X)\eta_a(\phi Y)A\xi+2\eta_a(Y)\eta_a(\phi X)A\xi \notag\\ & +g(-2\alpha\phi_a X+(\phi_a A+A\phi_a )X,Y)\xi_a \} =0. \end{align} On the other hand, by the Codazzi equation, we obtain \begin{align}\label{eqn:B-250} cg((\phi+\phi^\perp)Y,Z)-2c\sum^3_{a=1}\eta_a(Y)\eta_a(\phi Z)=g((\nabla_\xi A)Y-(\nabla_YA)\xi,Z). \end{align} Substituting $Z=\xi$ in (\ref{eqn:B-250}) gives \[ (\xi\beta)g(U,Y)+\beta g(\nabla_\xi U,Y)+4\beta\alpha g(\phi U,Y)=0. \] Letting $Y=U$ in the preceding equation gives the first of the following identities, and the equation then reduces to the second: \begin{align} \xi\beta=&0, \label{eqn:B-260}\\ \nabla_\xi U+4\alpha\phi U=&0. \label{eqn:B-270} \end{align} Next, with the help of (\ref{eqn:B-220}), putting $Y\in\mathcal H(1)$ and $Z=U$ in (\ref{eqn:B-250}) gives \begin{align}\label{eqn:B-280} Y\beta=0 \end{align} for any $Y\in\mathcal H(1)$. By putting $X=\xi$ and $Y\perp\xi$ in (\ref{eqn:B-240}), we have \begin{align}\label{eqn:B-290} &Y\beta+(\alpha^2+2\beta^2)g(Y,\phi U)+3cg(Y,\phi U-\phi^\perp U) +\sum^3_{a=1}\{-2\alpha\eta_a(\phi AY)\eta_a(U) \notag\\ &-\alpha\beta g(\phi_a Y,U)\eta_a(U)-\alpha\beta g(\theta_aY,U)\eta_a(\phi U)-2c\alpha\eta_a(U)\eta_a(\phi U)\}=0 \end{align} for any $Y\perp\xi$. In particular, for $Y\in\mathcal H(1)$, with the help of (\ref{eqn:B-280}), we obtain \begin{align*} 0=-\sum^3_{a=1}\{g(\phi_a Y,U)\eta_a(U)-g(\theta_aY,U)\eta_a(\phi U)\}=2\sum^3_{a=1}\eta_a(U)g(\phi_aU,Y) \end{align*} for any $Y\in\mathcal H(1)$. Denote by $U^-$ the $\mathcal H(-1)$-component of $U$. Suppose $U$ is tangent to $\mathfrak D^\perp$ on an open subset $G$ of $M$. Then for each $x\in G$, $\mathcal F(-1)=\mathcal H(-1)$ and so $A\mathfrak D^\perp\subset\mathfrak D^\perp$. By virtue of \cite[Theorem 3.6]{lee-loo}, $\xi$ is principal on $G$. This contradicts Claim~\ref{claim:B-II-a}. Hence, we may assume that $U^-\neq0$. By putting $Y=\phi_bU^-$, $b\in\{2,3\}$, in the preceding equation, we obtain $\eta_2(U)=\eta_3(U)=0$. Consequently, $U=U^-\in\mathcal H(-1)$ and $\phi U=\phi^\perp U$.
These, together with (\ref{eqn:B-260}) and (\ref{eqn:B-290}), give \begin{align*} Y\beta=-(\alpha^2+2\beta^2)g(Y,\phi U) \end{align*} for any vector field $Y$ tangent to $M$. It follows that \begin{align*} (XY-\nabla_XY)\beta=(\alpha^2+2\beta^2)\{4\beta g(X,\phi U)g(Y,\phi U)-g(Y,\nabla_X\phi U)\}. \end{align*} Hence, by the symmetry of the Hessian of $\beta$, \[ g(Y,\nabla_X\phi U)-g(X,\nabla_Y\phi U)=0. \] By virtue of (\ref{eqn:B-220}) and (\ref{eqn:B-270}), substituting $X=\xi$ and $Y=U$ in the preceding equation gives $4\alpha+\tau=0$. But then $\beta^2=\alpha\tau+\alpha^2=-3\alpha^2<0$; a contradiction. This completes the proof.
{ "timestamp": "2017-12-15T02:04:15", "yymm": "1712", "arxiv_id": "1712.05108", "language": "en", "url": "https://arxiv.org/abs/1712.05108" }
\section{Introduction\label{sec:1}} Knowing how to find the inverse of a multivector (MV) in Clifford algebra in a symbolic and coordinate-free form is very important from both a practical computational and a purely theoretical point of view. A universal formula for the inverse MV would allow one to write down a fast and general algorithm for all occasions rather than resorting to specific symbolic or numerical cases. The inverse of a MV can be used to find explicit solutions of algebraic GA equations, with all the ensuing consequences and applications. A closely related problem is the normalization of spinors~\cite{Vaz2016,Lundholm09}. Invertibility also serves as an important criterion for deciding whether homogeneous versors are blades~\cite{Bouma2001}, which are essential in numerous geometric constructions. The first attempts at inversion of some specific forms of MVs can be traced back to the papers~\cite{Semenov1991, Semenov1993A,Semenov1993B}. However, it was not until 2002 that J.P.~Fletcher \cite{Fletcher2002} suggested some general MV inverse formulas for low-dimensional algebras. His algorithm was based on the decomposition of a MV into matrix basis elements and, as a consequence, resulted in large, inconvenient and signature-dependent formulas whose size grows exponentially with the algebra dimension. The breakthrough occurred after D.~Lundholm~\cite{Lundholm06} presented explicit expressions for $n\le5$ (determinant) norms and, later, P.~Dadbeh~\cite{Dadbeh2011} introduced the grade-negation operation, which allowed one to write down compact and explicit formulas for the MV inverse in a coordinate-free form. The heart of the algorithm~\cite{Dadbeh2011} is the geometric product of the initial MV and its carefully chosen grade-negated counterpart(s), which after a few iterations eventually yields a scalar. As a matter of fact, the product may be related to the determinant of a matrix representation of a general MV, from which the inverse multivector can be easily extracted by simply removing the initial MV, which is always positioned at either the left-most or the right-most side of the product. Using the described method P.~Dadbeh was able to find explicit inverses for a general MV up to dimension $n\le 5$. It is important to stress that the obtained formulas are determined by the vector space dimension $n=p+q$ only and are independent of a particular GA signature $(p,q)$. The same formulas were also obtained by other authors using different methods (see, for example,~\cite{Shirokov2012,Hitzer2016,Suzuki2016}). For $n\le 5$, detailed mathematical proofs are given in \cite{Hitzer2016}. If a general multivector $\m{A}$ is given in expanded form in some orthogonal basis with symbolic coefficients, then the verification of the algorithm can easily be done by direct substitution of the symbolic MV into the formula, explicitly computing the inverse $\m{A}^{-1}$, and finally checking that the property $\m{A}\m{A}^{-1}=\m{A}^{-1}\m{A}=1$ is satisfied. For $n>4$, however, the calculation of the explicit inverse in symbolic form is time-consuming and results in lengthy expressions for the coefficients at the basis elements of $\m{A}^{-1}$. Nonetheless, such calculations, in fact, have the status of a ``computer-assisted proof''. Similar formulas for the (determinant) norms of MVs when $n\le 5$ were also given in~\cite{Lundholm09}. An analysis of the general structure of such formulas was presented in~\cite{Suzuki2016}.
However, until now attempts to step across the threshold $p+q=5$ have been unsuccessful, although there is a need for such formulas in practice. In this report we show that the grade-negation method can be extended beyond the $n=5$ threshold if, in addition, properly constructed linear combinations of grade-negated MVs are introduced. In particular, we write down explicit MV inverse formulas for algebras with vector space dimension $n=6$ having all possible $(p,q)$ signatures. We also provide alternative formulas for even MVs and for the $n=5$ case. In Sec.~\ref{sec:2} the grade-negation method is briefly reviewed and the required notation and terminology are introduced. In Sec.~\ref{sec:3} the inverses of even MVs that follow from higher-dimensional inverse MV formulas are considered. Finally, in Sec.~\ref{sec:4} a general algorithm for the construction of the inverse MV at $n=6$ is briefly discussed and the obtained coordinate-free formulas are presented in a form of tables. \section{Grade-negated self-product and inverse of a general MV in $n\le5$ case\label{sec:2}} Following~\cite{Dadbeh2011} we first introduce a grade-negated self-product that is defined via the \textit{grade-$r$ negation operation}. Applied to the multivector $\m{A}$ this operation does what it says, i.~e., it changes the sign of the grade-$r$ part of $\m{A}$. Such a grade-$r$ negated MV will be denoted as $\m{A}_{\bar r}$, with the bar over the index designating which of the grades have opposite signs. Formally the grade-$r$ negated MV can be expressed as $\m{A}_{\bar r}=\m{A}-2\langle \m{A}\rangle_r$, or $\m{A}_{\bar r,\bar s}=\m{A}-2\langle \m{A}\rangle_r-2\langle \m{A}\rangle_s$ for a double negation, where $\langle \m{A}\rangle_r$ denotes the grade-$r$ projection of the multivector $\m{A}$. In particular, we have $(\langle\m{A}\rangle_r)_{\bar r}=-\langle\m{A}\rangle_r$. The following properties are evident from the definition of the grade-negation operation: $(\m{A}_{\bar p})_{\bar r}=\m{A}_{{\bar p},{\bar r}}=\m{A}_{{\bar r},{\bar p}}$, $\m{A}_{{\bar r},{\bar r}}=\m{A}$, $(\m{A}+\m{B})_{\bar r}=\m{A}_{\bar r}+\m{B}_{\bar r}$. However, $(\m{AB})_{\bar r}\ne \m{A}_{\bar r}\m{B}_{\bar r}$. If $\m{A}$ does not contain grade-$r$ elements then negation returns the same MV, $\m{A}_{\bar r}=\m{A}$. The commutator with a grade-negated MV can then be expressed as $\m{A} \m{A}_{\bar r}-\m{A}_{\bar r}\m{A}=2(\langle \m{A}\rangle_r\m{A}-\m{A}\langle \m{A}\rangle_r)$. The multivector $\m{A}$ commutes with the scalar-negated $\m{A}$, i.~e., $[\m{A},\m{A}_{\bar 0}]=0$ and, as a consequence, with $\m{A}_{\bar i,\cdots,\bar j}$, where all grades $i\neq0,j\neq 0,\cdots$ of $\m{A}$ (except the scalar) are negated. The standard involutions such as MV reversion $\reverse{\m{A}}$, grade inversion $\gradeinverse{\m{A}}$, and Clifford conjugation $\cliffordconjugate{\m{A}}=\reverse{\gradeinverse{\m{A}}}$ in terms of negation can be written as $\reverse{\m{A}}=\m{A}_{{\bar 2},{\bar 3},{\bar 6},{\bar 7},{\bar {10}},{\bar {11}}\dots}$, $\gradeinverse{\m{A}}=\m{A}_{{\bar 1},{\bar 3},{\bar 5},{\bar 7},{\bar {9}},{\bar {11}}\dots}$ and $\cliffordconjugate{\m{A}}=\m{A}_{{\bar 1},{\bar 2},{\bar 5},{\bar 6},{\bar {9}},{\bar {10}}\dots}$, respectively. For finite algebras the index series should be chopped off when the negation index becomes larger than that of the pseudoscalar. It is worth mentioning that the grades $0,4,8,12,\dots$ are absent in the above list of standard GA involutions. From this it follows that the inverse of a general MV cannot be expressed using just the standard involutions or their combinations.
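These operations are straightforward to prototype. The following Python sketch (our illustration only; it relies on nothing beyond the standard library) implements the geometric product on an orthonormal basis and the grade negation, storing a MV as a dictionary from sorted index tuples to coefficients, and checks that reversion coincides with $\m{A}_{\bar 2,\bar 3}$ when $n=3$. Later sketches assume these helpers are saved as \texttt{ga.py} (our naming).
\begin{verbatim}
# ga.py -- a minimal Clifford algebra sketch (ours, for illustration).
# A multivector is a dict mapping sorted index tuples to coefficients,
# e.g. 1 + 2*e1 + 3*e23  ->  {(): 1, (1,): 2, (2, 3): 3}.

def gp_blades(a, b, sig):
    """Product of two basis blades; returns (sign, blade)."""
    s = list(a) + list(b)
    sign = 1
    for i in range(len(s)):                 # bubble sort, counting swaps
        for j in range(len(s) - 1 - i):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                sign = -sign
    out, k = [], 0
    while k < len(s):                       # e_i e_i -> sig[i]
        if k + 1 < len(s) and s[k] == s[k + 1]:
            sign *= sig[s[k]]
            k += 2
        else:
            out.append(s[k])
            k += 1
    return sign, tuple(out)

def gp(A, B, sig):
    """Geometric product of two multivectors."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            sgn, blade = gp_blades(ba, bb, sig)
            C[blade] = C.get(blade, 0) + sgn * ca * cb
    return {b: c for b, c in C.items() if c != 0}

def neg(A, grades):
    """Grade negation: flip the sign of the listed grades."""
    return {b: (-c if len(b) in grades else c) for b, c in A.items()}

if __name__ == "__main__":
    sig = {1: 1, 2: 1, 3: -1}                       # Cl(2,1), for example
    assert gp({(1,): 1}, {(2,): 1}, sig) == {(1, 2): 1}
    assert gp({(2,): 1}, {(1,): 1}, sig) == {(1, 2): -1}
    A = {(): 1, (1,): 2, (2, 3): 3, (1, 2, 3): 4}   # arbitrary test MV
    rev = {b: (-1) ** (len(b) * (len(b) - 1) // 2) * c for b, c in A.items()}
    assert rev == neg(A, {2, 3})                    # reversion = A_{bar 2, bar 3}
    print("all checks passed")
\end{verbatim}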
As we shall see, this observation is also directly related to the MV inversion problem for $n\ge 6$. By a grade-negated self-product we mean the geometric product of a general MV $\m{A}$ with any number of its grade-negated counterparts; for example, the geometric product $\m{A} \m{A}_{\bar r} \m{A}_{\bar k,\bar l,\bar t} \m{A}_{\bar s, \bar t}\cdots$ is a left grade-negated self-product and $\m{A}_{\bar r} \m{A}_{\bar k,\bar l,\bar t} \m{A}_{\bar s, \bar t}\cdots\m{A}$ is a right grade-negated self-product. These self-products, where the initial MV stands in the left-most or right-most position, will be of special importance in finding the inverse MV $\m{A}^{-1}$. The explicit inverse formulas which we will construct rely on our ability to get a real scalar using just grade-negated self-products (or, as we shall see, some linear combinations of them for $n>5$), where the initial MV $\m{A}$ is mandatory and is located either in the left-most or the right-most position. We shall call the real scalar \begin{equation}\label{discriminant} s_m=\overbrace{\m{A}\m{A}_{\bar r,\bar t,\ldots}\m{A}_{\bar s,\ldots}\cdots}^m = \m{A}\, f\bigl(\m{A}\bigr)= f\bigl(\m{A}\bigr)\,\m{A} \end{equation} obtained in this way the determinant norm\footnote{In the literature, the GA terminology on MV norms is quite confusing. The book~\cite{Hestenes1987} and most introductory GA lecture courses, for example~\cite{Chisolm2012}, typically define the MV norm $|\m{A}|^2$ as the scalar part of the product, $\left\langle\m{A}\reverse{\m{A}}\right\rangle_0$, which can be negative. (A better name would be the pseudonorm.) If the sign is positive this kind of norm is also called the magnitude. The same term is sometimes used for the positive square root of the absolute value, $\sqrt{\mathrm{abs}(|\m{A}|^2)}$. On the other hand, in~\cite{Lundholm06} a different scalar (named the MV norm) is defined which coincides with our determinant norm $s_m$ for $n\le 5$, whereas in~\cite{Suzuki2016} a similar construction is called ``the discriminant''. In~\cite{Dadbeh2011,Shirokov2012} this construction has been named ``the determinant'', because the determinant of a matrix representation of $\m{A}$ always coincides with this scalar. Other authors~\cite{Hitzer2016} bypass the naming confusion by calling it just ``a real scalar''. In order to maintain the analogy with the MV norm defined in~\cite{Lundholm06} and to distinguish it from the norm definition of~\cite{Hestenes1987}, we will use the prefixed form ``determinant (pseudo)norm'', the shorter form ``determinant norm'', or just ``determinant'' if it is clear from the context what is meant. In general, the last term should not be confused with the determinant of a matrix that represents the MV~$\m{A}$. It should be noted, however, that the term ``determinant'' in GA is usually addressed in the context of linear transformations~(see, for example, \cite{DoranLasenby2003}, p.~108, the subsection ``The determinant''), the definition and meaning of which is related to the exterior product of basis vectors and has nothing to do with either the above determinant norm or the determinant of a matrix that represents $\m{A}$.}.
If we succeed in constructing such a scalar $s_m$, then it is easy to see that the inverse MV formula for $\m{A}$ (correspondingly, for $\reverse{\m{A}}$) can be obtained simply by removing either the extreme left initial MV $\m{A}$ or the extreme right initial MV $\m{A}$ from the scalar $s_m$ in~\eqref{discriminant} (correspondingly, from $s_m^{\prime}=\cdots \reverse{\m{A}}_{\bar s,\ldots}\reverse{\m{A}}_{\bar r,\bar t \ldots}\reverse{\m{A}}$), and then dividing the result by $s_m$ or $s_m^{\prime}=\reverse{\m{A}}\, \reverse{f}\bigl(\reverse{\m{A}}\bigr)= \reverse{f}\bigl(\reverse{\m{A}}\bigr)\,\reverse{\m{A}}$. The inverse formulas for a MV $\m{A}$ thus become \begin{equation}\label{inverse} \m{A}^{-1}=\frac{\m{A}_{\bar r,\bar t,\ldots}\m{A}_{\bar s,\ldots}\cdots}{s_m}=\frac{f\bigl(\m{A}\bigr)}{s_m},\quad\reverse{\m{A}^{-1}}=\frac{\cdots \reverse{\m{A}}_{\bar s,\ldots}\reverse{\m{A}}_{\bar r,\bar t \ldots}}{s_m^{\prime}}=\frac{\reverse{f}\bigl(\reverse{\m{A}}\bigr)}{s_m^{\prime}}. \end{equation} All known cases for $n\le5$ (for detailed proofs see~\cite{Hitzer2016}) show that the inverse formulas in~\eqref{inverse} can be applied to an algebra of arbitrary signature at a fixed vector space dimension $n$. Furthermore, it is easy to check explicitly, for example, that the inverse formula for $n=5$ also yields inverses for $n=4,3,2,1$, i.~e., each formula for larger $n$ {\it automatically contains inverses of all lower algebras $m\le n$ with all possible signatures}. Therefore, we {\it conjecture} that the explicit formulas for the determinant norm~\eqref{discriminant} are signature independent\footnote{The informal explanation is very simple. The geometric product of two vectors splits into antisymmetric and symmetric parts: $a b=a\wedge b + a\cdot b$. Because the commutators vanish, $[e_i,e_i]=0$ for $i=1,2,...$, only the diagonal part (i.~e. the signature) is not fixed in the totally antisymmetric expression when the geometric product of vectors is extended to the whole algebra. Since the result (the determinant norm) is a real scalar obtained by the same fixed antisymmetrization construction (i.~e. the formula), it can only be a function of the signature.}, and that, when restricted to lower-dimensional vector spaces, they yield the determinant norms of the lower-dimensional vector spaces, generally raised to some power, which can be easily determined from the dimension of the matrix representation (see \textit{Example~5} below). The known cases also suggest that formulas~\eqref{inverse} always ensure that the inverse commutes with all three main involutions, reversion, grade inversion and Clifford conjugation, i.~e. $\reverse{\m{A}^{-1}}=(\reverse{\m{A}})^{-1}$, $\gradeinverse{\m{A}^{-1}}=(\gradeinverse{\m{A}})^{-1}$ and $\cliffordconjugate{\m{A}^{-1}}=(\cliffordconjugate{\m{A}})^{-1}$. This is not generally true for an arbitrary involution, for example, for an arbitrary combination of grade negations: $(\m{A}^{-1})_{\bar i,\ldots}\neq(\m{A}_{\bar i,\ldots})^{-1}$. In all known cases we also have $s_m=s_m^{\prime}$. It is well known that Clifford algebras are isomorphic to algebras of square matrices, the left and right inverses of which coincide. A Clifford MV, therefore, has only one inverse which, depending on our needs, can be written using either of the two determinant norm forms~\eqref{discriminant}. As a result we have that $\m{A}\m{A}^{-1}=\m{A}^{-1}\m{A}=1$, which can also serve as a correctness test of a GA inverse algorithm.
{\bf Two-dimensional quadratic space.} Since the one-dimensional case is trivial we start with the algebras $Cl_{2,0}$, $Cl_{1,1}$ and $Cl_{0,2}$. For $n=2$, let it be $Cl_{2,0}$. We write $\m{A} = \sum_{J=0}^{2^n-1} a_J\e{J}$, where the multi-index $J$ covers all orthonormal base elements arranged in increasing order of the grades, i.~e., the lowest-grade elements appear first\footnote{As is known, the summation is orderless with respect to base elements. Nevertheless, it is convenient to settle on some order in advance, which is required if we want to enumerate the coefficients $a_i$ in front of base elements in a unique way.}, while elements of the same grade are ordered lexicographically. In the sum the multi-index takes the following explicit values, $J=[\{\},\{1\},\{2\},\{1,2\}]$, which illustrate the inverse degree lexicographic ordering~\cite{Cox2007}. We can check that the self-product $\m{A} \m{A}_{\bar 1,\bar 2}$ immediately gives the required scalar $s_2=a_{\{\}}^2 - a_{1}^2 - a_{2}^2 + a_{1,2}^2$. In the standard notation it is more convenient to rewrite the coefficient indices as in $s_2=a_{0}^2 - a_{1}^2 - a_{2}^2 + a_{3}^2$, where the scalar coefficient is indexed by zero. The indices of the coefficients for vector components run from $1$ to $n$, while the numerical indices for higher grades increase monotonically up to $2^n-1$. In general, when programming, for a coefficient in front of a grade-$r$ base element it is convenient to start the element index enumeration at $\sum_{k=0}^{r-1} (\begin{smallmatrix}n\\ k\end{smallmatrix})$ and to end at $\Bigl(\sum_{k=0}^{r-1} (\begin{smallmatrix}n\\ k\end{smallmatrix})\Bigr)+(\begin{smallmatrix}n\\ r\end{smallmatrix})-1$. Here $(\begin{smallmatrix}n\\ k\end{smallmatrix})$ denotes the binomial coefficient. For example, for \cl{3}{0}, which has three vectors and three bivectors, these expressions give $1$ and $3$ for the first/last vector indices, and $4$ and $6$ for the first/last bivector indices. Taking into account that a binomial with a negative index vanishes, this convention enumerates all coefficients of a MV and allows an easy transition to the more standard notation in a consistent way; for example, for the initial MV we write $\m{A}=a_0 + a_1\e{1} + a_2\e{2} + a_3\e{12}$ and for the grade-negated MV $\m{A}_{\bar 1,\bar 2}=a_0 - a_1\e{1}- a_2\e{2}- a_3\e{12}$. This notation will be used throughout the paper. Note that the indices of base elements will never be renamed and will always be referred to as multi-indices. Before going to higher-dimensional algebras a few comments are in order here. The comments are of a general character and can be easily generalized to higher-dimensional algebras. (1) The explicit form of the determinant $s_m$ comprises all coefficients $a_i$ of a multivector~$\m{A}$. (2) The condition for a MV to have an inverse is that the determinant be nonzero, $s_m\ne 0$. If some intermediate result in $s_m$ reduces to zero after negation, the entire determinant automatically turns to zero as well. This explicit statement is given in order to resolve indeterminate cases like division of zero by zero. (3) Because the reversion operation leaves a scalar (the determinant norm) invariant, the above formula for $n=2$ can be rewritten as $s_2=\m{A} \m{A}_{\bar 1,\bar 2}=\m{A}_{\bar 1,\bar 2}\m{A}=\reverse{\m{A}}_{\bar 1,\bar 2}\reverse{\m{A}}=\reverse{\m{A}}\reverse{\m{A}}_{\bar 1,\bar 2}$ for an arbitrary multivector $\m{A}$.
The right-hand side of the equality, therefore, can be understood as the determinant norm of the multivector $\reverse{\m{A}}$ written in the right-hand-side form (where $\reverse{\m{A}}$ now stands in the right-most position). Due to this arbitrariness, the inverses can always be written in two forms: $\m{A}^{-1}=\m{A}_{\bar 1,\bar 2}/s_2= \m{A}_{\bar 1,\bar 2}/(\m{A}\m{A}_{\bar 1,\bar 2})= \m{A}_{\bar 1,\bar 2}/(\m{A}_{\bar 1,\bar 2}\m{A})$ and, correspondingly, $\reverse{\m{A}^{-1}}=\reverse{\m{A}}_{\bar 1,\bar 2}/(\reverse{\m{A}}_{\bar 1,\bar 2}\reverse{\m{A}})=\reverse{\m{A}}_{\bar 1,\bar 2}/(\reverse{\m{A}}\reverse{\m{A}}_{\bar 1,\bar 2})=\m{A}_{\bar 1,\bar 3}/(\m{A}_{\bar 2,\bar 3}\m{A}_{\bar 1,\bar 3})=\m{A}_{\bar 1,\bar 3}/(\m{A}_{\bar 1,\bar 3}\m{A}_{\bar 2,\bar 3})=\reverse{\m{A}}^{-1}$, where we have explicitly used $\reverse{\m{A}}=\m{A}_{\bar 2,\bar 3}$. The determinant norms of $\m{A}$ and $\reverse{\m{A}}$ are equal~\cite{Shirokov2012}, because the matrix representation of the reversed MV yields the same determinant. The construction of the matrix operation itself, which corresponds to the MV reversion $\reverse{\m{A}}$ for any signature, is described in~\cite{Ablamowicz2011}. Since $\m{A}_{\bar 1,\bar 2}=\cliffordconjugate{\m{A}}$, the negated MV for $n=2$ algebras can be expressed through the standard involutions. (4) The determinant expression for $n=2$ contains only two MVs. From the 8-periodicity table~\cite{Lounesto97} it follows that the algebras \cl{2}{0}, \cl{1}{1} and \cl{0}{2} are isomorphic to the algebra $\mathbb{R}(2)$ of real $2\times2$ matrices or to the algebra $\mathbb{H}$ of quaternions. The determinant of these matrices is a quadratic polynomial in the coefficients $a_i$ of the MVs; therefore, this polynomial can be constructed by multiplying just two MVs. Below we shall see that in all cases the total polynomial degree of the matrix determinant (where the coefficients $a_i$ play the role of variables) always matches the number of MVs in the determinant product. The determinant of a matrix representation with quaternionic elements can be calculated using the isomorphism $\mathbb{H}\cong \mathbb{C}(2)$, i.~e. we first replace the quaternions by $2\times2$ block matrices and then calculate the determinant (the real scalar) of the resulting matrix. This practical procedure allows us to avoid considering the numerous definitions of the determinant of $\mathbb{H}(n)$ due to element non-commutativity. {\bf Three-dimensional quadratic space.} Our goal is to eliminate as many grades as possible by forming suitable self-products until finally only the grade-0 element (the determinant norm) is left. As another example, let us calculate the inverse in the $Cl_{2,1}$ algebra. It is easy to check that the geometric product of $\m{A}=a_{0}+a_{1} \e{1}+a_{2} \e{2}+a_{3} \e{3}+a_{4} \e{12}+a_{5} \e{13}+a_{6} \e{23}+a_{7} \e{123}$ with $\m{A}_{\bar 1,\bar 2}$ lacks grades $1$ and $2$. The result is a new multivector $\m{B}=\m{A} \m{A}_{\bar 1,\bar 2}= b_{0}+b_{7} \e{123}$ which consists of a scalar and a grade-$3$ element with coefficients $b_0=a_{0}^2 - a_{1}^2 - a_{2}^2 + a_{3}^2 + a_{4}^2 - a_{5}^2 - a_{6}^2 + a_{7}^2$ and $b_7=-2 a_{3} a_{4} + 2 a_{2} a_{5} - 2 a_{1} a_{6} + 2 a_{0} a_{7}$. Repeating the grade-negation procedure one finds that the grade-$3$ part can be removed too, and we obtain the determinant norm $s_4=\m{B} \m{B}_{\bar 3}=\m{A} \m{A}_{\bar 1,\bar 2}(\m{A} \m{A}_{\bar 1,\bar 2})_{\bar 3}=b_0^2 - b_7^2$.
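As a quick numerical sanity check of this $n=3$ construction (a sketch under the conventions above; the coefficients are arbitrary), the following Python fragment confirms that $\m{A}\m{A}_{\bar 1,\bar 2}$ contains only grades $0$ and $3$, that $s_4$ is a pure scalar, and that the extracted inverse satisfies $\m{A}\m{A}^{-1}=1$.
\begin{verbatim}
# Check of the n = 3 construction in Cl(2,1); gp and neg are the
# helpers from the ga.py sketch above.
from fractions import Fraction
from ga import gp, neg

sig = {1: 1, 2: 1, 3: -1}                      # Cl(2,1)
A = {(): 1, (1,): 2, (2,): -1, (3,): 3,        # arbitrary coefficients
     (1, 2): 1, (1, 3): -2, (2, 3): 1, (1, 2, 3): 2}

B = gp(A, neg(A, {1, 2}), sig)                 # grades 1 and 2 drop out
assert set(map(len, B)) <= {0, 3}

S = gp(B, neg(B, {3}), sig)                    # s4 = B B_{bar 3}
s4 = S.get((), 0)                              # b0^2 - b7^2 = 21 here
assert set(map(len, S)) <= {0} and s4 != 0

# Inverse: strip the left-most A from the self-product, divide by s4.
F = gp(neg(A, {1, 2}), neg(B, {3}), sig)
Ainv = {b: Fraction(c, s4) for b, c in F.items()}
assert gp(A, Ainv, sig) == {(): 1}
\end{verbatim}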
Reversion of $s_4$ then immediately yields an alternative form of the norm, $s_4^\prime=(\reverse{\m{A}}_{\bar 1,\bar 2}\reverse{\m{A}})_{\bar 3} \reverse{\m{A}}_{\bar 1,\bar 2}\reverse{\m{A}}$. Of course, for different 3D algebras the same formula will give distinct real expressions, which differ in the signs at individual coefficients~$a_i$. The important thing is that, despite the fact that the scalar $s_4^\prime$ was calculated in the $Cl_{2,1}$ algebra, exactly the same sequence of products and grade negations will produce a scalar (generally a different one) in all other algebras with $n=3$. Also note that the determinant norm in this case is determined by two terms, $b_0^2$ and $b_7^2$, the difference of which must be nonzero for an invertible multivector $\m{A}$ to exist. The condition $b_0^2-b_7^2\ne 0$ exactly matches the MV invertibility condition, which according to the 8-periodicity table may be obtained by calculating the determinant of a matrix representation of the MV. {\bf Four-dimensional quadratic space.} In the $n=4$ case, in trying to eliminate as many grades as possible we can proceed in two alternative ways. Firstly, we can negate simultaneously the grades $1$ and $2$ and then, in the next step, the grades $3$ and $4$. Alternatively, we can eliminate grades $2$ and $3$, and then grades $1$ and $4$. Both choices are valid. However, if we choose the $1$ and $3$, and then $2$ and $4$ grade combinations, neither one will do the job. Thus, we find the following formulas for the determinant norm: $s_4= \m{A}\m{A}_{\bar{1},\bar{2}}(\m{A} \m{A}_{\bar{1},\bar{2}})_{\bar{3},\bar{4}}= \m{A} \m{A}_{\bar{2},\bar{3}} (\m{A}\m{A}_{\bar{2},\bar{3}})_{\bar{1},\bar{4}}$ and $s_4^\prime= (\reverse{\m{A}}_{\bar{1},\bar{2}}\reverse{\m{A}})_{\bar{3},\bar{4}}\,\reverse{\m{A}}_{\bar{1},\bar{2}}\reverse{\m{A}}=(\reverse{\m{A}}_{\bar{2},\bar{3}}\reverse{\m{A}})_{\bar{1},\bar{4}}\reverse{\m{A}}_{\bar{2},\bar{3}} \reverse{\m{A}}$. It is easy to check that both expressions indeed give determinant norms. The occurrence of the symbol $\m{A}$ as often as four times in the products $s_4$ and $s_4^{\prime}$ is again what we expect from the matrix representations for $n=4$. The total degree of the determinant, considered as a polynomial function of the coefficients of the MV, is $4$ for all algebras in the case $n=4$. Despite their different forms, the expanded explicit expressions for $s_m$ and $s_m^\prime$ were found to be equal, $s_m=s_m^\prime$, as they should be~\cite{Hitzer2016}. Our computations show that the formulas $s_m$ and $s_m^\prime$ are the only possible equivalent ways to get the determinant norm in the $n=4$ case using the geometric product and negation operations. {\bf Five-dimensional quadratic space.} This is the largest dimension, $n=5$, for which consecutive grade elimination works by factorizing the determinant norm into a product of the initial MV and grade-negated ones. The grade elimination sequence is similar~\cite{Hitzer2016}: (1) Eliminate simultaneously grades $2$ and~$3$; (2) Then eliminate grades $1$ and~$4$; (3) Finally eliminate grade~$5$. Apart from the last step this sequence is exactly the same as the second alternative of the $n=4$ case. The final result is $s_8^\prime=(\m{F}\reverse{\m{A}})_{\bar{5}}\m{F}\reverse{\m{A}}$ with $\m{F}=(\reverse{\m{A}}_{\bar{2},\bar{3}}\reverse{\m{A}})_{\bar{1},\bar{4}} \reverse{\m{A}}_{\bar{2},\bar{3}}$ and $s_8=\m{A}\m{D}\, (\m{A}\m{D})_{\bar{5}}$ with $\m{D}=\m{A}_{\bar{2},\bar{3}} (\m{A}\m{A}_{\bar{2},\bar{3}})_{\bar{1},\bar{4}}$.
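As with the lower-dimensional cases, these factorizations are easy to test numerically. The sketch below (ours; the random integer coefficients and the choice of $Cl_{5,0}$ are arbitrary) builds $s_8$ from the elimination sequence above and checks that only grade $0$ survives, again assuming the \texttt{ga.py} helpers from Sec.~\ref{sec:2}.
\begin{verbatim}
# Numerical check that s8 = A D (A D)_{bar 5} collapses to grade 0,
# with D = A_{bar 2 bar 3}(A A_{bar 2 bar 3})_{bar 1 bar 4}; Cl(5,0)
# is chosen arbitrarily; gp and neg are the ga.py helpers.
import random
from itertools import combinations
from ga import gp, neg

random.seed(1)
sig = {i: 1 for i in range(1, 6)}
blades = [b for r in range(6) for b in combinations(range(1, 6), r)]
A = {b: random.randint(-5, 5) for b in blades}  # random general MV

H = gp(A, neg(A, {2, 3}), sig)                  # step (1): grades 2, 3 go
D = gp(neg(A, {2, 3}), neg(H, {1, 4}), sig)     # step (2): grades 1, 4 go
AD = gp(A, D, sig)
s8 = gp(AD, neg(AD, {5}), sig)                  # step (3): grade 5 goes
assert set(map(len, s8)) <= {0}
print("s8 =", s8.get((), 0))
\end{verbatim}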
The total degree of determinant polynomial in this case is $8$, which again exactly matches the number of MVs in the determinant norm product. With our program~\cite{AcusDargys2017} we have found that $s_8$ determinant norm can be written in 52~ different ways as presented in Table~\ref{dim5F}. \begin{table} {\small \[\begin{array}{llll} N\downarrow& \textrm{Abbreviation} \rightarrow& H=\m{A}\reverse{\m{A}}=\m{A}\m{A}_{\bar{2},\bar{3}}&H^\prime=\m{A}\cliffordconjugate{\m{A}}=\m{A}\m{A}_{\bar{1},\bar{2},\bar{5}} \\[5pt]\hline\\[-5pt] 13& H (H (H_{\bar{1},\bar{4}} H)_{\bar{5}})_{\bar{1},\bar{4}}& H (H (H H_{\bar{1},\bar{4}})_{\bar{5}})_{\bar{1},\bar{4}}& H (H (HH_{\bar{1},\bar{5}})_{\bar{4}})_{\bar{1},\bar{5}}\\ & H (H(HH_{\bar{4},\bar{5}})_{\bar{1}})_{\bar{4},\bar{5}}& H H_{\bar{1},\bar{4}}(H_{\bar{1},\bar{4}}H)_{\bar{5}}& \underline{H H_{\bar{1},\bar{4}}(HH_{\bar{1},\bar{4}})_{\bar{5}}}\\ & H H_{\bar{1},\bar{5}}(H_{\bar{1},\bar{5}}H)_{\bar{4}}& H H_{\bar{4},\bar{5}}(H_{\bar{4},\bar{5}}H)_{\bar{1}}& \\[5pt] 14& H (H_{\bar{1},\bar{4}}HH_{\bar{1},\bar{5}})_{\bar{4},\bar{5}}& H (H_{\bar{1},\bar{4}}HH_{\bar{4},\bar{5}})_{\bar{1},\bar{5}}& H (H_{\bar{1},\bar{5}}H_{\bar{1},\bar{4}}H)_{\bar{4},\bar{5}}\\ & H (H_{\bar{1},\bar{5}}H_{\bar{4},\bar{5}}H))_{\bar{1},\bar{4}}& H (H_{\bar{1},\bar{5}}HH_{\bar{1},\bar{4}})_{\bar{4},\bar{5}}& H (H_{\bar{4},\bar{5}}H_{\bar{1},\bar{4}}H)_{\bar{1},\bar{5}}\\ & H (H_{\bar{4},\bar{5}}H_{\bar{1},\bar{5}}H)_{\bar{1},\bar{4}}& H (H_{\bar{4},\bar{5}}HH_{\bar{1},\bar{4}})_{\bar{1},\bar{5}}& H (H(H_{\bar{1},\bar{5}}H)_{\bar{3},\bar{4}})_{\bar{1},\bar{5}}\\ & H (H(H_{\bar{4},\bar{5}}H)_{\bar{1},\bar{3}})_{\bar{4},\bar{5}}& H (HH_{\bar{1},\bar{4}}H_{\bar{1},\bar{5}})_{\bar{4},\bar{5}}& H (HH_{\bar{1},\bar{4}}H_{\bar{4},\bar{5}})_{\bar{1},\bar{5}}\\ & H (HH_{\bar{1},\bar{5}}H_{\bar{4},\bar{5}})_{\bar{1},\bar{4}}& H (HH_{\bar{4},\bar{5}}H_{\bar{1},\bar{5}})_{\bar{1},\bar{4}}& H H_{\bar{1},\bar{4}} H_{\bar{1},\bar{5}} H_{\bar{4},\bar{5}}\\ & H H_{\bar{1},\bar{4}}H_{\bar{4},\bar{5}}H_{\bar{1},\bar{5}}& \underline{H H_{\bar{1},\bar{5}}(HH_{\bar{1},\bar{5}})_{\bar{3},\bar{4}}}& H H_{\bar{1},\bar{5}}H_{\bar{4},\bar{5}}H_{\bar{1},\bar{4}}\\ & \underline{H H_{\bar{4},\bar{5}}(HH_{\bar{4},\bar{5}})_{\bar{1},\bar{3}}}& H H_{\bar{4},\bar{5}}H_{\bar{1},\bar{5}}H_{\bar{1},\bar{4}} & \\[5pt] 15& H (H_{\bar{1},\bar{4}}(H_{\bar{1},\bar{5}}H)_{\bar{3}})_{\bar{4},\bar{5}}& H (H_{\bar{1},\bar{4}}(H_{\bar{4},\bar{5}}H)_{\bar{3}})_{\bar{1},\bar{5}}& H (H_{\bar{1},\bar{5}}(HH_{\bar{4},\bar{5}})_{\bar{3}})_{\bar{1},\bar{4}}\\ & H (H_{\bar{4},\bar{5}}(HH_{\bar{1},\bar{5}})_{\bar{3}})_{\bar{1},\bar{4}}& H (H(H_{\bar{1},\bar{5}}H_{\bar{1},\bar{4}})_{\bar{3}})_{\bar{4},\bar{5}}& H (H(H_{\bar{4},\bar{5}}H_{\bar{1},\bar{4}})_{\bar{3}})_{\bar{1},\bar{5}}\\ & H H_{\bar{1},\bar{5}}(H_{\bar{1},\bar{4}}H_{\bar{4},\bar{5}})_{\bar{3}}& H H_{\bar{4},\bar{5}}(H_{\bar{1},\bar{4}}H_{\bar{1},\bar{5}})_{\bar{3}} & \\[5pt] 17& H (H_{\bar{1},\bar{4}}(H_{\bar{1},\bar{4}}H_{\bar{1},\bar{5}})_{\bar{1}})_{\bar{1},\bar{5}}& H (H_{\bar{1},\bar{4}}(H_{\bar{1},\bar{4}}H_{\bar{4},\bar{5}})_{\bar{4}})_{\bar{4},\bar{5}}& H (H_{\bar{1},\bar{5}}(H_{\bar{1},\bar{5}}H_{\bar{1},\bar{4}})_{\bar{1}})_{\bar{1},\bar{4}}\\ & H (H_{\bar{1},\bar{5}}(H_{\bar{1},\bar{5}}H_{\bar{4},\bar{5}})_{\bar{5}})_{\bar{4},\bar{5}}& H (H_{\bar{1},\bar{5}}(H_{\bar{4},\bar{5}}H_{\bar{1},\bar{5}})_{\bar{5}})_{\bar{4},\bar{5}}& H (H_{\bar{4},\bar{5}}(H_{\bar{1},\bar{5}}H_{\bar{4},\bar{5}})_{\bar{5}})_{\bar{1},\bar{5}}\\ & H 
(H_{\bar{4},\bar{5}}(H_{\bar{4},\bar{5}}H_{\bar{1},\bar{4}})_{\bar{4}})_{\bar{1},\bar{4}}& H (H_{\bar{4},\bar{5}}(H_{\bar{4},\bar{5}}H_{\bar{1},\bar{5}})_{\bar{5}})_{\bar{1},\bar{5}} & \\[5pt] 18& H (H_{\bar{1},\bar{4}}(H_{\bar{1},\bar{5}}H_{\bar{1},\bar{4}})_{\bar{1},\bar{3}})_{\bar{1},\bar{5}}& H (H_{\bar{1},\bar{4}}(H_{\bar{4},\bar{5}}H_{\bar{1},\bar{4}})_{\bar{3},\bar{4}})_{\bar{4},\bar{5}}& H (H_{\bar{1},\bar{5}}(H_{\bar{1},\bar{4}}H_{\bar{1},\bar{5}})_{\bar{1},\bar{3}})_{\bar{1},\bar{4}}\\ & H (H_{\bar{4},\bar{5}}(H_{\bar{1},\bar{4}}H_{\bar{4},\bar{5}})_{\bar{3},\bar{4}})_{\bar{1},\bar{4}} & & \\[5pt] 15& H^\prime (H^\prime (H^\prime H^\prime_{\bar{3}})_{\bar{4}})_{\bar{3}}& H^\prime H^\prime_{\bar{3}}(H^\prime_{\bar{3}} H^\prime)_{\bar{4}} & \\[5pt] 16& H^\prime (H^\prime (H^\prime_{\bar{3}} H^\prime)_{\bar{1},\bar{4}})_{\bar{3}}& H^\prime (H^\prime_{\bar{3}} (H^\prime H^\prime_{\bar{3}})_{\bar{1},\bar{4}} & \\ \end{array} \] \caption{\label{dim5F} Alternative formulas for MV determinant norm for GAs of vector space dimension $n=5$ listed by increasing number of negations. For example, the number of negations $N$ in the first line is determined by four $H=\m{A}\m{A}_{\bar{2},\bar{3}}$, each including $2$ negations, $(\bar{2},\bar{3})$, and five explicit negations $(\bar{1},\bar{4})+\bar{5}+(\bar{1},\bar{4})$ in the formula, resulting in $2*4+5=13$ total negations. Computationally preferred forms that contain the largest number of repeating pieces are underlined.} } \end{table} \vspace{2mm} \textit{Example~1}. Let's take $\m{A}=3+\e{2}+\e{5}-\e{12}-\e{15}+3 \e{125}$ in $Cl_{4,1}$. It is easy to check that $\m{A}\m{A}_{\bar{2},\bar{3}}=0$. In Table~\ref{dim5F}, the formulas with $H=\m{A}\m{A}_{\bar{2},\bar{3}}$ immediately allow to conclude that the determinant norm of this MV is zero and therefore the inverse of $\m{A}$ does not exist. Now let's try to find the determinant of $\m{A}$ with formula that does not contain $\m{A}\m{A}_{\bar{2},\bar{3}}=0$, for example with $H^\prime=\m{A}\m{A}_{\bar{1},\bar{2},\bar{5}}$ (see the first line in Table~\ref{dim5F}), from which the final result is not so obvious. First, we calculate $H^\prime= 18+18 \e{125}$, which is not zero. However, computing the next step $H^\prime (H^\prime)_{\bar{3}}$ and $ (H^\prime)_{\bar{3}} H^\prime$ in the last two lines in Table~\ref{dim5F} we get zero again. \vspace{2mm} \textit{Example~2}. Given $\m{A}=1+2 \e{1}+3 \e{23}+4 \e{2345}$ in $Cl_{5,0}$ let us find $\m{A}^{-1}$ using a couple of alternative formulas. First, we shall use new computationally efficient formula $D_{1}=H H_{\bar{1},\bar{5}} (H H_{\bar{1},\bar{5}})_{\bar{3},\bar{4}}$ (underlined in Table~\ref{dim5F}). Computation of $H$ yields $H=\m{A}\m{A}_{\bar{2},\bar{3}}=30+4 \e{1}+8 \e{2345}+16 \e{12345}$. Then $H H_{\bar{1},\bar{5}}=692+352 \e{2345}$. And lastly, $D_1=354960$. Then the inverse is $\m{A}^{-1}=\frac{\m{A}_{\bar{2},\bar{3}}H_{\bar{1},\bar{5}} (H H_{\bar{1},\bar{5}})_{\bar{3},\bar{4}}}{D_1}= \frac{1}{354960}(3576+96 \e{1}-53832 \e{23}-15072 \e{45}-8592 \e{123}-28992 \e{145}+47424 \e{2345}-8256 \e{12345})$. Now let's compute the determinant norm of $\m{A}$ using $D_2=H H_{\bar{4},\bar{5}} H_{\bar{1},\bar{5}} H_{\bar{1},\bar{4}}$, which computationally is less efficient, because it contains smaller number of repeating parts. Computation of $H H_{\bar{4},\bar{5}}$ yields $596-16 \e{1}$. Then, $H H_{\bar{4},\bar{5}} H_{\bar{1},\bar{5}}$ gives $17944-2864 \e{1}+5024 \e{2345}-9664 \e{12345}$. Finally, $D_2=354960$. 
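Both routes of Example~2 are easy to reproduce numerically. A minimal sketch (ours), reusing the \texttt{ga.py} helpers from Sec.~\ref{sec:2}, confirms the intermediate multivectors and the value $354960$ for the computationally preferred form $D_1$, and checks the resulting inverse:
\begin{verbatim}
# Reproducing Example 2 in Cl(5,0); gp, neg are the ga.py helpers.
from fractions import Fraction
from ga import gp, neg

sig = {i: 1 for i in range(1, 6)}
A = {(): 1, (1,): 2, (2, 3): 3, (2, 3, 4, 5): 4}

H = gp(A, neg(A, {2, 3}), sig)     # 30 + 4 e1 + 8 e2345 + 16 e12345
HH = gp(H, neg(H, {1, 5}), sig)    # 692 + 352 e2345
D1 = gp(HH, neg(HH, {3, 4}), sig).get((), 0)
print("D1 =", D1)                  # 354960

# A^{-1} = A_{bar 2,3} H_{bar 1,5} (H H_{bar 1,5})_{bar 3,4} / D1
F = gp(neg(A, {2, 3}), gp(neg(H, {1, 5}), neg(HH, {3, 4}), sig), sig)
Ainv = {b: Fraction(c, D1) for b, c in F.items()}
assert gp(A, Ainv, sig) == {(): 1}
\end{verbatim}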
Of course, these formulas give the same explicit expressions for the inverse MV in symbolic form as well. \vspace{2mm} Can the above-described determinant computation procedure be extended beyond $n=5$? The short answer is `yes' if, as shown below, we allow linear combinations of grade-negated self-products with properly chosen numerical coefficients. Before describing this case let us derive some useful formulas for even subalgebras when $n\le6$. The even subalgebras are directly related to the spinor groups, which are very important in quantum mechanics~\cite{Vaz2016}. \section{Inverse of even multivectors\label{sec:3}} \begin{table} \[\begin{array}{ll} Cl_{p,q}&\qquad \bigl(\m{A}^{+}\bigr)^{-1} \\[5pt]\hline p+q=2&\qquad \frac{B \bigl(\m{A}^{+} B\bigr)} {\bigl(\m{A}^{+} B\bigr)^2}, \qquad B =(v)_{\bar{1}}=-v \\[4pt] p+q=3 &\qquad \frac{C \bigl(\m{A}^{+} C\bigr)} {\bigl(\m{A}^{+} C\bigr)^2}, \qquad C = (\m{A}^{+} v)_{\bar{1}} \\[4pt] p+q=4&\qquad \frac{C \bigl(\m{A}^{+} C\bigr)} {\bigl(\m{A}^{+} C\bigr)^2}, \qquad C = (\m{A}^{+} v)_{\bar{1}} \\[4pt] p+q=5&\qquad \frac{D \bigl(\m{A}^{+} D\bigr)} {\bigl(\m{A}^{+} D\bigr)^2}, \qquad D = (\m{A}^{+} v)_{\bar{3}} \bigl(\m{A}^{+} (\m{A}^{+} v)_{\bar{3}}\bigr)_{\bar{1}} \\[4pt] p+q=6&\qquad \frac{D \bigl(\m{A}^{+} D\bigr)_{\bar{6}}} {\bigl(\m{A}^{+} D\bigr)\bigl(\m{A}^{+} D\bigr)_{\bar{6}}}, \quad D = (\m{A}^{+} v)_{\bar{3}} \bigl(\m{A}^{+} (\m{A}^{+} v)_{\bar{3}}\bigr)_{\bar{1}} \end{array} \] \caption{\label{inveven} Explicit formulas for the inverse of even MVs in a coordinate-free form for Clifford algebras of dimension $n=p+q\le 6$. The quantities $\bigl(\m{A}^{+} B\bigr)^2/v^2$, $\bigl(\m{A}^{+} C\bigr)^2/v^2$, $\bigl(\m{A}^{+} D\bigr)^2/v^4$ and ${\bigl(\m{A}^{+} D\bigr)\bigl(\m{A}^{+} D\bigr)_{\bar{6}}}/v^4$ are the determinant norms of the respective MVs. Here, the nonisotropic and unnormalized vector $v$ can also be replaced by one of the orthonormal base vectors $\e{i}$ for computational efficiency.} \end{table} When a MV consists of even-grade elements only, i.~e. $\m{A}^{+}=\langle\m{A}\rangle_{0}+\langle\m{A}\rangle_{2}+\langle\m{A}\rangle_{4}+\cdots$, simpler formulas for the inverse MVs can be constructed, as shown in Table~\ref{inveven}. The nonisotropic unnormalized vector $v$ in these formulas plays the role of a dummy variable. It is interesting to observe that, in contrast to the general case considered in the next section, there exists a single self-negated product for the inverse of an even MV (see Table~\ref{inveven}) that is not a linear combination. This property is to be expected (compare with the general MV form for $n=5$ in Table~\ref{invall}), because there exists the well-known isomorphism between the even subalgebra of the algebra whose vector space dimension is larger by one and the full lower-dimensional Clifford algebra, \begin{equation}\label{iso} \cl{p}{q+1}^+\cong\cl{p}{q},\quad\cl{p+1}{q}^+\cong\cl{q}{p}\,. \end{equation} Because we can write the inverse MV as a single self-negated product for dimension $n=5$ (see Table~\ref{invall}), it is not surprising that, according to the isomorphisms~\eqref{iso}, we can do this for the even subalgebra of dimension $n=6$. Unfortunately, as we shall see in Sec.~\ref{sec:4}, this property does not extend to the full six-dimensional Clifford algebra. \section{Inverse of general MV in 6-dimensional quadratic space\label{sec:4}} \subsection{Insufficiency of a single self-negated product in the $n=6$ case\label{sec:4a}} Let us take the $Cl_{6,0}$ algebra.
\section{Inverse of general MV in 6-dimensional quadratic space\label{sec:4}} \subsection{Insufficiency of a single self-negated product in the $n=6$ case\label{sec:4a}} Let us take the $Cl_{6,0}$ algebra. Using the \textit{Mathematica} GA package~\cite{AcusDargys2017}, after some experimentation with a pair of self-negated products formed from a general MV, it is not difficult to ascertain that we can simultaneously eliminate either grades $1$, $2$, $5$ and $6$ (then grades $0$, $3$ and $4$ survive) or, alternatively, grades $2$, $3$ and $6$ (then grades $0$, $1$, $4$ and $5$ survive). In both cases grade $4$ remains and, therefore, cannot be eliminated by the method of a simple self-negated product used till now. This conclusion strictly follows from an attempt to simultaneously nullify all coefficients of the grade-$4$ basis elements using all $2^6=64$ possible combinations of grade negations in the two-term product. Grade $4$ is therefore distinct from the other grades and deserves special examination. Computer experiments show that there exists a subalgebra, with basis formed by the elements $\{1, \e{1256}, \e{1346},\e{2345}\}$, such that any self-product of the multivector $\m{B}=a_{0}+a_{47} \e{1256}+a_{49} \e{1346}+a_{52} \e{2345}$ by a grade-negated $\m{B}$ yields a new MV having at least one grade-$4$ basis element present. This is the reason why such a single self-product fails to eliminate the grade-$4$ part and, as we shall see, one has to use a combination of at least a pair of self-products. \subsection{Linear combination of self-products\label{sec:4b}} Before considering the general case, we note that the determinant of a matrix representation (computed using symbolic coefficients) of a restricted MV, which consists of just a scalar and a general grade-$4$ element, can be written as the square of some polynomial $s_4$ of the MV coefficients, i.~e. $\det \bigl(\langle\m{A}\rangle_{0}+\langle\m{A}\rangle_{4}\bigr)=s_8=(s_4)^2$. Thus, we assume that one can always extract the square root of~$s_8$. It follows that, in the search for the inverse of $\langle\m{A}\rangle_{0}+\langle\m{A}\rangle_{4}$, as a first step one can try to test only linear combinations of products of four negated MVs instead of the eight required in the general case. Due to the above arguments we assume that the square root of the determinant norm may be written as a linear combination of the following form \begin{equation}\begin{split} &s_{4+4}=s_{4f}+s_{4g}=\\ &b_1 B f_5\Bigl(f_4(B) f_3\bigl(f_2(B) f_1(B)\bigr)\Bigr) +b_2 B g_5\Bigl(g_4(B) g_3\bigl(g_2(B) g_1(B)\bigr)\Bigr)\label{s1}, \end{split}\end{equation} where each of $f_j$ and $g_j$ is either the identity mapping or the grade-$4$ negation, and $b_1, b_2$ are unknown scalar coefficients of the linear combination. Once the pattern~\eqref{s1} of the linear combination for the square root of the determinant norm has been fixed, we can calculate the explicit symbolic form of the matrix representation of the above-mentioned MV $\m{B}=a_{0}+a_{47} \e{1256}+a_{49} \e{1346}+a_{52} \e{2345}$, then compute the matrix determinant, take the square root, and compare it with the GA expression~\eqref{s1} after the same GA multivector $\m{B}$ has been inserted. The negation functions $f_j$ and $g_j$ that control the signs of grades can be modelled as multiplication by an unknown coefficient $p_{4jk}$, where the index $j$ denotes the number of the involution $f_j,g_j$ in the self-product and $k$ is the term number in the linear combination (when $n=6$, $k=1$ for $f$ and $k=2$ for $g$). Later, when we deal with negations of other grades, the first index $i$ in $p_{ijk}$ will indicate the possibly negated grade~$i$.
For the moment the index $i$ is fixed to $4$, which corresponds to the current nontrivial grade of $\m{B}$ (we ignore negations of the scalar, because such a negation is equivalent to negating all other remaining MV grades). The coefficients $p_{ijk}$ take the values $\pm 1$ only, where $-1$ means that the involution which changes the sign of grade $i$ is to be applied, while $+1$ means the identity map. In the considered $n=6$ case, comparison with the square root of the determinant of the matrix representation of $\m{B}$ yields a system of four equations, one for each basis element (including the scalar) of $\m{B}$. The system is too long to be presented here in full, therefore a small characteristic part of it is written down in truncated form below, \begin{equation} \left\{ \begin{array}{l} a_{0}^4 - 2 a_{0}^2 a_{47}^2 + b_2 a_{0}^2 a_{47}^2 p_{412} p_{422}+ <\textrm{72 monomials}>= 0\\ b_1 a_{0}^3 a_{52}+2b_2 a_0 a_{47}^2 a_{52} p_{412} p_{422} p_{432} p_{442} p_{452} + <\textrm{72 monomials}>= 0\\ <\textrm{74 monomials}>=0\\ <\textrm{74 monomials}>=0 \end{array}\right.\label{explicitEq} \end{equation} We see that even for a simple $\m{B}$, which contains only 4~basis elements (the scalar, $\e{1256}$, $\e{1346}$ and $\e{2345}$), the system is highly nonlinear in $p_{4jk}$. We can, however, substitute concrete values for $p_{4jk}$ one by one to get much simpler systems that contain only the variables $b_1$, $b_2$ and $a_i$. Then we can try to solve each of the simple systems separately with respect to $b_1, b_2$ for arbitrary coefficients $a_i$. Only a few of them have solutions. In fact, we have solved the systems with the {\it Mathematica} command \textbf{SolveAlways[~]}. After testing all $2^{10}=1024$ possible values of $p_{4jk}$ we have found two sets of solutions, $\{b_1=-2/3,\; b_2=-1/3,\; p_{411}=-1,\; p_{412}=1,\; p_{421}=-1,\; p_{422}=1,\; p_{431}=-1,\; p_{432}=-1,\; p_{441}=-1,\; p_{442}=1,\; p_{451}=-1,\; p_{452}=1\}$ and $\{b_1=-1/3,\; b_2=-2/3,\; p_{411}=1,\; p_{412}=-1,\; p_{421}=1,\; p_{422}=-1,\; p_{431}=-1,\; p_{432}=-1,\; p_{441}=1,\; p_{442}=-1,\; p_{451}=1,\; p_{452}=-1\}$. The obtained solution is unique up to the permutation of the two terms. The common sign of the coefficients $b_1$ and $b_2$ is not fixed at this stage, because only the square root of the determinant norm has been calculated so far. It is easy to check that the above solution (though computed from a highly simplified MV, namely, the scalar plus three grade-$4$ basis elements) works flawlessly for a more general MV (the scalar plus any number of grade-$4$ elements) $\m{C}= a_0+a_{42} \e{1234}+a_{43} \e{1235}+a_{44} \e{1236}+a_{45} \e{1245}+a_{46} \e{1246}+a_{47} \e{1256}+a_{48} \e{1345}+a_{49} \e{1346}+a_{50} \e{1356}+a_{51} \e{1456}+a_{52} \e{2345}+a_{53} \e{2346}+a_{54} \e{2356}+a_{55} \e{2456}+a_{56} \e{3456} $. This is what one expects, because a grade involution acts on elements of the same grade in an identical way. Starting with a simple MV and then augmenting it until the number of solutions cannot be decreased further allows one to keep the size of the whole calculation manageable. This considerably speeds up the search procedure for the involutions and balances calculation complexity against speed. Once $f_j, g_j$ and the coefficients $b_k$ in equations~\eqref{explicitEq} and \eqref{s1} have been determined, we can renew the search for the other negation involutions in exactly the same way.
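The solution can be tested numerically as well. Assuming again the Python \texttt{clifford} package, the sketch below builds a random MV of the form scalar plus grade-$4$ part in $Cl_{6,0}$ and checks that the combination~\eqref{s1}, with $|b_1|=2/3$, $|b_2|=1/3$ and the grade-$4$ negation pattern of the first solution set (the overall sign is irrelevant for this check), collapses to a pure scalar:
\begin{verbatim}
import numpy as np
from clifford import Cl  # pip install clifford

layout, b = Cl(6)  # Cl(6,0)

def n4(A):
    # negation of the grade-4 part of A
    return sum((-1 if k == 4 else 1) * A(k) for k in range(7))

rng = np.random.default_rng(1)
blades4 = [k for k in b if k.startswith('e') and len(k) == 5]  # 15 blades
B = 1 + sum(int(c)*b[k] for c, k in zip(rng.integers(-3, 4, 15), blades4))

# f-term: all f_j negate grade 4; g-term: only g_3 negates grade 4
s = (2/3)*B*n4(n4(B)*n4(n4(B)*n4(B))) + (1/3)*B*(B*n4(B*B))
print(s - s(0))  # expected: 0, i.e., s is the pure scalar +/- s_4
\end{verbatim}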
Our next task is to double the number of multivectors in the products~\eqref{s1}, because we know that the determinant of a general MV requires 8~multipliers in order to match the total degree of the polynomial given by the determinant of the MV matrix representation. It was already mentioned that in the product $\m{A}\m{A}_{i,j,...}$ one can simultaneously eliminate either grades $1$, $2$, $5$ and $6$ (then grades $0$, $3$ and $4$ remain) or, alternatively, grades $2$, $3$ and $6$ (then grades $0$, $1$, $4$ and $5$ survive). All in all there are $\frac{6!}{2!4!}+\frac{6!}{3!3!} +1=36$ elements of grades $2$, $3$ and $6$ which can be eliminated simultaneously. This is more than the $28$ elements of grades $1$, $2$, $5$ and $6$. Therefore, if we replace $\m{B}$ in equation~\eqref{s1} by the self-negated product $\m{A}\m{A}_{{\bar 2},{\bar 3},{\bar 6}}$, we are left to deal with a self-negated product of four multivectors of grades $0$, $1$, $4$, $5$ only, from which we can ignore grades $0$ and $4$, for which $f_j$ and $g_j$ have already been established. Thus, repeating the same procedure, we can find a number of valid solutions for the coefficients $p_{1jk}$, $p_{2jk}$ and $p_{5jk}$. In order to speed up the derivation we first used the multivector $ a_{0}+a_{1} \e{1}+a_{22} \e{123}+a_{47} \e{1256}$. The obtained result was then explicitly verified (i.~e. proved by explicit expansion in an orthogonal basis) using the most general $Cl_{6,0}$ multivector with symbolic coefficients. Finally, we found $320$ valid forms of the determinant norm. After removing all superfluous involutions, the number of possible determinant forms was reduced to $16+4=20$. All these forms are presented in Table~\ref{dim6F}. After the determinant norm has been found, the inverse multivector can be immediately written down using equation~\eqref{inverse}. The results for inverse MVs for algebras with $n\le 6$ are summarized in Table~\ref{invall}.
\begin{table} \[\begin{array}{ll} \cl{p}{q}&\qquad \m{A}^{-1} \\[5pt]\hline\\[-5pt] p+q=0&\qquad\frac{\m{B}}{\m{A}\m{B}},\qquad\qquad\qquad\qquad\m{B}=1\\[4pt] p+q=1&\qquad \frac{\m{B}(\m{A}\m{B})_{\bar{1}}}{\m{A}\m{B}\,(\m{A}\m{B})_{\bar{1}}} = \frac{\gradeinverse{\m{A}}}{\m{A} \gradeinverse{\m{A}}},\quad\qquad\ \m{B}=1\\[4pt] p+q=2&\qquad \frac{\m{C}}{\m{A}\m{C}}=\frac{\cliffordconjugate{\m{A}}}{\m{A} \cliffordconjugate{\m{A}}} ,\qquad\qquad\quad\ \m{C}=\m{A}_{\bar{1},\bar{2}}\\[4pt] p+q=3 &\qquad \frac{\m{C}(\m{A}\m{C})_{\bar{3}}}{\m{A}\m{C}\,(\m{A}\m{C})_{\bar{3}}} =\frac{\cliffordconjugate{\m{A}}\gradeinverse{\m{A}}\reverse{\m{A}}}{\m{A} \cliffordconjugate{\m{A}}\gradeinverse{\m{A}}\reverse{\m{A}}} ,\qquad\ \m{C}=\m{A}_{\bar{1},\bar{2}}\\[4pt] p+q=4&\qquad\frac{\m{D}}{\m{A}\m{D}},\qquad\qquad\qquad\qquad \m{D}=\m{A}_{\bar{2},\bar{3}} (\m{A}\m{A}_{\bar{2},\bar{3}})_{\bar{1},\bar{4}}\quad \\ &\qquad\qquad\qquad\qquad\qquad\textrm{or}\quad \m{D}=\m{A}_{\bar{1},\bar{2}}(\m{A} \m{A}_{\bar{1},\bar{2}})_{\bar{3},\bar{4}} \\[4pt] p+q=5&\qquad\frac{\m{D}(\m{A}\m{D})_{\bar{5}}}{\m{A}\m{D}\, (\m{A}\m{D})_{\bar{5}}},\qquad\qquad\qquad \m{D}=\m{A}_{\bar{2},\bar{3}} (\m{A}\m{A}_{\bar{2},\bar{3}})_{\bar{1},\bar{4}}\\[4pt] p+q=6&\qquad\frac{\m{G}}{\m{A}\m{G}},\qquad\m{G}=\frac{1}{3} \m{A}_{\bar{2},\bar{3},\bar{6}} \Bigl(\m{H} (\m{H} \m{H})_{\bar{1},\bar{4},\bar{5}} + 2 \bigl(\m{H}_{\bar{4}} (\m{H}_{\bar{4}} \m{H}_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}\bigr)_{\bar{4}}\Bigr)\\ &\quad\textrm{or}\quad \m{G}=\frac{1}{3} \m{A}_{\bar{2},\bar{3},\bar{6}} \Bigl(\bigl(\m{H} (\m{H} \m{H}_{\bar{1},\bar{5}})_{\bar{4}}\bigr)_{\bar{1},\bar{5}} + 2 \bigl(\m{H}_{\bar{4},\bar{5}} (\m{H}_{\bar{4},\bar{5}} \m{H}_{\bar{1},\bar{4}})_{\bar{4}}\bigr)_{\bar{1},\bar{4}}\Bigr)\\ &\qquad \qquad \qquad\textrm{with}\quad \m{H}=\m{A} \m{A}_{\bar{2},\bar{3},\bar{6}}=\m{A}\reverse{\m{A}} \end{array} \] \caption{\label{invall}Summary of formulas for inverse multivectors $\m{A}^{-1}$ in Clifford algebras of dimension $n\le 6$.} \end{table} The denominators in these expressions, which are real scalars, are the determinant norms of the respective MVs. At the same time they give the condition for the existence of the inverse MV. It was found that the determinant norms exactly match the expressions for the determinants calculated from the matrix representations of the respective MVs (in fact, the whole derivation of the determinant norm formulas relied on this match). All formulas were proved by explicit calculation of symbolic inverses of the most general MV for concrete Clifford algebras with all possible signatures $(p,q)$. For this task we wrote a {\it Mathematica} package for symbolic calculations in Clifford algebras~\cite{AcusDargys2017}, which can also be found in the {\it Mathematica} package database \url{http://packagedata.net/}. A specific property of our GA program is that it can work simultaneously with a number of Clifford algebras having different signatures $(p,q)$, which was a great help in the computer-assisted verifications.
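The $p+q=6$ entry of Table~\ref{invall} can likewise be verified numerically on a random multivector. A minimal sketch, assuming the Python \texttt{clifford} package (for other signatures replace \texttt{Cl(6)} by \texttt{Cl(p, q)}):
\begin{verbatim}
import numpy as np
from clifford import Cl  # pip install clifford

layout, b = Cl(6)  # Cl(6,0)

def gneg(A, grades, n=6):
    # grade negation: flip the sign of the listed grades of A
    return sum((-1 if k in grades else 1) * A(k) for k in range(n + 1))

rng = np.random.default_rng(2)
A = 1 + sum(int(c)*bl for c, bl in zip(rng.integers(-3, 4, len(b)), b.values()))

H = A * gneg(A, {2, 3, 6})          # H = A A_{-2,-3,-6}
G = (1/3) * gneg(A, {2, 3, 6}) * (
        H * gneg(H*H, {1, 4, 5})
        + 2*gneg(gneg(H, {4}) * gneg(gneg(H, {4})*gneg(H, {4}), {1, 4, 5}), {4}))
det = A * G                          # determinant norm
print(det - det(0))                  # expected: 0 (det is a real scalar)
print(A * G / det.value[0])          # expected: 1
\end{verbatim}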
\begin{table} \[\begin{array}{ll} N \downarrow& \textrm{Abbreviation} \rightarrow H=A\reverse{A}=AA_{\bar{2},\bar{3},\bar{6}} \\[5pt]\hline\\[-5pt] 38&\frac{1}{3} H (H (H H_{\bar{1},\bar{5}})_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H (H_{\bar{1},\bar{4}} (H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}})_{\bar{4}})_{\bar{4},\bar{5}}\\ &\frac{1}{3} H H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H)_{\bar{4}}+\frac{2}{3} H (H_{\bar{1},\bar{4}} (H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}})_{\bar{4}})_{\bar{4},\bar{5}}\\ &\frac{1}{3} H (H (H H_{\bar{1},\bar{5}})_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H (H_{\bar{4},\bar{5}} (H_{\bar{4},\bar{5}} H_{\bar{1},\bar{4}})_{\bar{4}})_{\bar{1},\bar{4}}\\ &\frac{1}{3} H H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H)_{\bar{4}}+\frac{2}{3} H (H_{\bar{4},\bar{5}} (H_{\bar{4},\bar{5}} H_{\bar{1},\bar{4}})_{\bar{4}})_{\bar{1},\bar{4}}\\[5pt] & \\ 34&\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}\\ & \\ 36&\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{4}})_{\bar{4}}\\ &\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}\\ &\frac{1}{3} H (H_{\bar{1},\bar{5}} (H H)_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}\\ &\frac{1}{3} H H (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{4}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}\\ & \\ 38&\frac{1}{3} H (H_{\bar{1},\bar{5}} (H H)_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{4}})_{\bar{4}}\\ &\frac{1}{3} H H (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{4}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{4}})_{\bar{4}}\\ &\frac{1}{3} H (H_{\bar{1},\bar{5}} (H H)_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}\\ &\frac{1}{3} H H (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{4}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}\\ & \\ 42&\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}}\\ &\frac{1}{3} H (H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}\\ & \\ 44&\frac{1}{3} H (H_{\bar{1},\bar{5}} (H H)_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}}\\ &\frac{1}{3} H H (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{4}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}}\\ &\frac{1}{3} H (H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{1},\bar{4},\bar{5}} 
(H)_{\bar{1},\bar{4},\bar{5}})_{\bar{4}})_{\bar{4}}\\ &\frac{1}{3} H (H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}\\ & \\ 50&\frac{1}{3} H (H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{5}}+\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}} \end{array} \] \caption{\label{dim6F} The formulas for the determinant norm of an MV $\m{A}$ for Clifford algebras of vector space dimension $n=6$. $N$ counts the total number of negations in the norm (see Table~\ref{dim5F} for an explanation).} \end{table} \section{Classification and examples\label{sec:5}} In Table~\ref{dim6F} we have collected all formulas obtained by the described method that represent the determinant norm at $n=6$ in the pattern~\eqref{s1}. All expressions for the determinant norm naturally split into two types. The first four formulas at the beginning of Table~\ref{dim6F} ($N=38$) constitute the first type or class. All four expressions of this class can be obtained by forming pairs from the two sets \[\begin{split}&\lbrace\frac{1}{3} H (H (H H_{\bar{1},\bar{5}})_{\bar{4}})_{\bar{1},\bar{5}}, \ \frac{1}{3}H H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H)_{\bar{4}}\rbrace,\quad\text{and}\\ &\{\frac{2}{3} H (H_{\bar{1},\bar{4}} (H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}})_{\bar{4}})_{\bar{4},\bar{5}}, \frac{2}{3} H (H_{\bar{4},\bar{5}} (H_{\bar{4},\bar{5}} H_{\bar{1},\bar{4}})_{\bar{4}})_{\bar{1},\bar{4}}\} \end{split}\] in any combination. This gives the first four ($2\times 2=4$) formulas. The remaining 16 formulas, which constitute the second class, can be obtained similarly by forming all possible pairs from the sets {\small\[\begin{split} &\{\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}},\ \frac{1}{3} H (H_{\bar{1},\bar{5}} (H H)_{\bar{4}})_{\bar{1},\bar{5}}, \frac{1}{3} H H (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{4}},\\ &\frac{1}{3} H (H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{5}}\}, \end{split}\]} and {\small\[\begin{split} &\{\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}},\frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{4}} (H)_{\bar{4}})_{\bar{4}})_{\bar{1},\bar{4},\bar{5}},\\ &\frac{2}{3} H ((H)_{\bar{4}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{4}})_{\bar{4}}, \frac{2}{3} H ((H)_{\bar{1},\bar{4},\bar{5}} ((H)_{\bar{1},\bar{4},\bar{5}} (H)_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}})_{\bar{1},\bar{4},\bar{5}}\}. \end{split}\]} This yields the remaining $4\times 4=16$ formulas. Both classes are well defined, because the symbolic expressions for each of the separate terms in a pair always coincide within a class but differ between the classes. Representatives of both classes were included in Table~\ref{invall}. In the first class only grade-3 and grade-4 cancellation can occur between the two terms of a pair. In the second class, the MVs in each pair are generally made up of grades 1, 4 and 5, which cancel out in the final result (see \textit{Example}~4).
It is also interesting to note that the formulas of Table~\ref{dim6F} can be rewritten as a sum of three different terms with all weight coefficients equal to $1/3$, as shown in Table~\ref{treetermformula}. The inverse MV formula can be constructed by taking one expression from each of the sets $S_1$, $S_2$ and $S_3$, or one from each of the sets $T_1$, $T_2$ and $T_3$ listed in Table~\ref{treetermformula}. For example, the formulas \[\begin{split} &\frac{1}{3} H (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}} )_{\bar{1},\bar{4},\bar{5}} )_{\bar{4}}+\frac{1}{3} H ((H_{\bar{4}} H_{\bar{4}} )_{\bar{4}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{4},\bar{5}}+\frac{1}{3} H H (H H )_{\bar{1},\bar{4},\bar{5}}\quad \mathrm{and}\\ &\frac{1}{3} H ( H_{\bar{4},\bar{5}} (H_{\bar{4},\bar{5}} H_{\bar{1},\bar{4}} )_{\bar{4}} )_{\bar{1},\bar{4}} +\frac{1}{3} H (( H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}} )_{\bar{4}} H_{\bar{4},\bar{5}} )_{\bar{1},\bar{4}} +\frac{1}{3} H(H(H H_{\bar{1},\bar{5}} )_{\bar{4}} )_{\bar{1},\bar{5}}\end{split}\] are both valid choices. In general, Table~\ref{treetermformula} allows one to make $4^3=64$ different triplets from the sets $S_i$ and $2^3=8$ triplets from the sets $T_i$. Thus, all in all we can construct $64+8=72$ different triplet formulas for the inverse in $n=6$ algebras, without taking into account possible reversed forms. \begin{table} \[\begin{array}{ll} S_1=&\{\frac{1}{3} H (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}} )_{\bar{1},\bar{4},\bar{5}} )_{\bar{4}},\quad \frac{1}{3} H (H_{\bar{4}} (H_{\bar{1},\bar{4},\bar{5}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{4}} )_{\bar{4}},\\ &\quad\frac{1}{3} H (H_{\bar{1},\bar{4},\bar{5}} (H_{\bar{4}} H_{\bar{4}} )_{\bar{4}} )_{\bar{1},\bar{4},\bar{5}},\quad \frac{1}{3} H (H_{\bar{1},\bar{4},\bar{5}} (H_{\bar{1},\bar{4},\bar{5}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{4},\bar{5}}\}, \\ S_2=&\{\frac{1}{3} H ((H_{\bar{4}} H_{\bar{4}} )_{\bar{1},\bar{4},\bar{5}} H_{\bar{4}} )_{\bar{4}}, \quad \frac{1}{3} H ((H_{\bar{4}} H_{\bar{4}} )_{\bar{4}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{4},\bar{5}},\\ &\quad\frac{1}{3} H ((H_{\bar{1},\bar{4},\bar{5}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{4}} H_{\bar{4}} )_{\bar{4}}, \quad\frac{1}{3} H ((H_{\bar{1},\bar{4},\bar{5}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{4},\bar{5}} H_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{4},\bar{5}}\}, \\ S_3=&\{\frac{1}{3} H (H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}} )_{\bar{1},\bar{4},\bar{5}} )_{\bar{1},\bar{5}}, \quad\frac{1}{3} H (H_{\bar{1},\bar{5}} (H H )_{\bar{4}} )_{\bar{1},\bar{5}},\\ &\quad \frac{1}{3} H H (H_{\bar{1},\bar{5}} H_{\bar{1},\bar{5}} )_{\bar{4}}, \frac{1}{3} H H (H H )_{\bar{1},\bar{4},\bar{5}}\}. \\[5pt] T_1=&\{\frac{1}{3} H (H_{\bar{1},\bar{4}} (H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}} )_{\bar{4}} )_{\bar{4},\bar{5}},\quad \frac{1}{3} H ( H_{\bar{4},\bar{5}} (H_{\bar{4},\bar{5}} H_{\bar{1},\bar{4}} )_{\bar{4}} )_{\bar{1},\bar{4}}\}, \\ T_2=&\{ \frac{1}{3} H (( H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}} )_{\bar{4}} H_{\bar{4},\bar{5}} )_{\bar{1},\bar{4}},\quad \frac{1}{3} H ((H_{\bar{4},\bar{5}} H_{\bar{1},\bar{4}} )_{\bar{4}} H_{\bar{1},\bar{4}} )_{\bar{4},\bar{5}}\},\\ T_3=&\{ \frac{1}{3} H(H(H H_{\bar{1},\bar{5}} )_{\bar{4}} )_{\bar{1},\bar{5}},\quad \frac{1}{3} H H_{\bar{1},\bar{5}} (H_{\bar{1},\bar{5}} H)_{\bar{4}}\}. \end{array} \] \caption{The sets for the construction of different triplets of weight $\tfrac{1}{3}$ in the inversion formulas for $n=6$.
For details see text.\label{treetermformula}} \end{table} \vspace{2mm} \textit{Example~3}. In $Cl_{4,2}$, let us take $\m{A}=2+\e{1}+\e{5}-2 \e{15}+3 \e{26}+3 \e{1256}$ and find the inverse with the formula from Table~\ref{invall}, $\m{A}\m{G}=\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}+\frac{2}{3} H (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}$, which has a minimal number of negations. We obtain $H=\m{A}\m{A}_{\bar{2},\bar{3},\bar{6}}=8 \e{1}+8 \e{5}$. In the next step, however, we get zero, since $H H = 0$ as well as $H_{\bar{4}} H_{\bar{4}} = 0$. In fact, the MV in the considered example was constructed by multiplying the MV $1+2 \e{1}+3 \e{126}$ by the isotropic vector $(\e{1}+\e{5})$ from the left. It is easy to check that the last-mentioned MV $\m{A}^{\prime}=1+2 \e{1}+3 \e{126}$ is non-invertible as well. Indeed, $H=\m{A}^{\prime}\m{A}^{\prime}_{\bar{2},\bar{3},\bar{6}}=-4+4 \e{1}$, then $H H =32-32 \e{1}$, and finally $H H (H H)_{\bar{1},\bar{4},\bar{5}}=0$. In a similar way $H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}=0$. One can check that all formulas in Table~\ref{dim6F} yield the same result, namely, zero. More information on MVs that contain isotropic multipliers can be found in \cite{Helmstetter2014}. \vspace{5mm} \textit{Example~4}. In $Cl_{1,5}$, let us find the inverse of $\m{A}=2+\e{1}+4 \e{3}+\e{15}+3 \e{126}$ using the same generic formula with the minimal number of negations. The computation steps are: {\small 1) $H=\m{A}\m{A}_{\bar{2},\bar{3},\bar{6}}=-3+4 \e{1}+16 \e{3}-2 \e{5}-24 \e{1236}$, 2) $H H = -811-24 \e{1}-96 \e{3}+12 \e{5}+144 \e{1236}-96 \e{12356}$, 3) $\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}=\frac{1}{3} (678025+27648 \e{5}+2304 \e{1236}+18432 \e{1256}-4608 \e{2356}+3456 \e{12356})$, 4) $(H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}}=-811+24 \e{1}+96 \e{3}-12 \e{5}+144 \e{1236}-96 \e{12356}$, 5) $(H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}=-2487-3316 \e{1}-13264 \e{3}-646 \e{5}+19704 \e{1236}-1536 \e{1256}+384 \e{2356}+864 \e{12356}$, 6) $\frac{2}{3} H (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}=\frac{2}{3} (678025-13824 \e{5}-1152 \e{1236}-9216 \e{1256}+2304 \e{2356}-1728 \e{12356})$.} From 3) and 6) we find that the determinant norm of $\m{A}$ equals $678025$. Doing the calculations in a similar way we find the inverse:\\ {\small$\m{A}^{-1}=\frac{1}{678025}\bigl(\frac{1}{3} (44766-9765 \e{1}-95588 \e{3}+1841 \e{15}+8412 \e{26}-5176 \e{35}-71355 \e{126}-12112 \e{135}+20568 \e{236}-1554 \e{1256}+19608 \e{2356}-7488 \e{12356})+\frac{2}{3} (44766-9765 \e{1}-95588 \e{3}+1841 \e{15}+8412 \e{26}+8 \e{35}-71355 \e{126}-12112 \e{135}+18840 \e{236}-8466 \e{1256}+21336 \e{2356}-4032 \e{12356})\bigr)=\\ \frac{1}{678025}(44766-9765 \e{1}-95588 \e{3}+1841 \e{15}+8412 \e{26}-1720 \e{35}-71355 \e{126}-12112 \e{135}+19416 \e{236}-6162 \e{1256}+20760 \e{2356}-5184 \e{12356})$}. If a different formula from Table~\ref{dim6F} is employed to find the inverse, for example the first line $\frac{1}{3} H (H (H H_{\bar{1},\bar{5}})_{\bar{4}})_{\bar{1},\bar{5}}+\frac{2}{3} H (H_{\bar{1},\bar{4}} (H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}})_{\bar{4}})_{\bar{4},\bar{5}}$, then the intermediate results will be different. In particular, for the first term $\frac{1}{3} H (H (H H_{\bar{1},\bar{5}})_{\bar{4}})_{\bar{1},\bar{5}}$ we find $678025/3$, and for the second term $\frac{2}{3} H (H_{\bar{1},\bar{4}} (H_{\bar{1},\bar{4}} H_{\bar{4},\bar{5}})_{\bar{4}})_{\bar{4},\bar{5}}$ we find $1356050/3$.
Thus, in this case no cancellation between nonzero grades of the two terms in the pair occurs. \vspace{5mm} \textit{Example~5}. This example shows that, in principle, one can use the $n=6$ formula to find the determinant norm and inverse MVs of all smaller algebras, $n<6$, as well. At first sight this may be a bit unexpected. The following example explains how this happens. Let us take the $\cl{2}{2}$ MV, $\m{A}=45+55 \e{1}+84 \e{12}+39 \e{134}+93 \e{234}+15 \e{1234}$, and use the determinant norm formula of $n=6$ instead of $n=4$: $\m{A}\m{G}=\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}+\frac{2}{3} H (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}$, where $H=\m{A}\m{A}_{\bar{2},\bar{3},\bar{6}}$. The negation of absent grades, of course, should be ignored. The computation steps are: {\small 1) $H= 22501+7740 \e{1}-10410 \e{2}-8880 \e{1234}$, 2) $H H = 753425101+348315480 \e{1}-468470820 \e{2}-399617760 \e{1234}$, 3) $\frac{1}{3} H H (H H)_{\bar{1},\bar{4},\bar{5}}=67166445910339801/3$, 4) $H_{\bar{4}} H_{\bar{4}}=753425101+348315480 \e{1}-468470820 \e{2}+399617760 \e{1234}$, 5) $\frac{2}{3} H (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}=2\times 67166445910339801/3$, 6) $\m{A}\m{G}=67166445910339801=(259164901)^2$,} \noindent where $\sqrt{\m{AG}}=259164901$ is exactly the determinant norm of the MV calculated by the $n=4$ formula. So, the calculation using the $n=6$ norm formula yields the squared determinant of the matrix representation of the $\cl{2}{2}$ multivector. It is then easy to check that the $n=6$ formula yields the inverse of the $\cl{2}{2}$ MV as well. Indeed, {\small 7) $H (H H)_{\bar{1},\bar{4},\bar{5}}+2 (H_{\bar{4}} (H_{\bar{4}} H_{\bar{4}})_{\bar{1},\bar{4},\bar{5}})_{\bar{4}}= 17494408312203-6017809001220 \e{1}+8093719858230 \e{2}+6904152962640 \e{1234}$ 8) Finally, the inverse of the $\cl{2}{2}$ MV calculated using the $n=6$ formula is \[\begin{split} \m{A}^{-1} =&\frac{1}{3\cdot 67166445910339801}\bigl(559831173421635-630567641500575 \e{1}\\ &+ 127983403060830\e{2} - 1024375706022402 \e{12}+61927453093950 \e{34}\\ &- 560876126302467 \e{134} - 1156984425071379 \e{234} - 302208303582585 \e{1234}\bigr)\\ =& \frac{1}{259164901} \bigl( 720045-811025 \e{1}+164610 \e{2}-1317534 \e{12}+79650 \e{34}\\ &-721389 \e{134}-1488093 \e{234}-388695 \e{1234}\bigr) \end{split} \] } \noindent which does coincide with the inverse MV of $\cl{2}{2}$ computed using the inverse formula for $n=4$. And this is not a coincidence. Explicit symbolic computations confirm that the $n=6$ determinant norm formula, when applied to an MV of any algebra with vector space dimension $n<6$, indeed yields either the determinant of the matrix representation of the MV (for $n=5$), or the determinant raised to the power 2 for algebras with vector space dimension $n=3$ and $n=4$, or raised to the power 4 for $n=2$ and $n=1$. Exactly the same happens when the determinant norm formula for $n=5$ is applied to the $n=4,3,2$ and $n=1$ cases. Similarly, when the determinant norm formula for $n=4$ is applied to an $n=3$ MV, we obtain the determinant of the matrix representation of the MV, whereas the same procedure for $n=2$ and $n=1$ yields the determinant raised to the power 2, etc. So, the determinant norms disclose a very interesting onion-like structure, in which each higher-dimensional formula embraces all lower-dimensional ones. We think that this property may be important in further investigations.
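Example~5 is also easy to replay numerically. The sketch below assumes the Python \texttt{clifford} package with the convention that $\e{1},\e{2}$ square to $+1$ and $\e{3},\e{4}$ to $-1$; since $67166445910339801>2^{53}$, double-precision arithmetic reproduces the determinant only up to float64 rounding, so we print the recovered square root instead:
\begin{verbatim}
import numpy as np
from clifford import Cl  # pip install clifford

layout, b = Cl(2, 2)  # Cl(2,2)

def gneg(A, grades, n=4):
    # grade negation; negations of the absent grades 5 and 6 are ignored
    return sum((-1 if k in grades else 1) * A(k) for k in range(n + 1))

A = 45 + 55*b['e1'] + 84*b['e12'] + 39*b['e134'] + 93*b['e234'] + 15*b['e1234']
H = A * gneg(A, {2, 3})  # paper: 22501 + 7740 e1 - 10410 e2 - 8880 e1234
AG = (1/3)*H*H*gneg(H*H, {1, 4}) \
   + (2/3)*H*gneg(gneg(H, {4})*gneg(gneg(H, {4})*gneg(H, {4}), {1, 4}), {4})
print(round(np.sqrt(AG.value[0])))   # expected: 259164901
\end{verbatim}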
\section{Conclusions\label{sec:6}} From this computer-aided study we conjecture that the inverse of a general MV of an arbitrary Clifford algebra can always be expressed as a linear combination of a proper number of specially constructed self-negated products, where the initial MV may stay in the left-most or right-most position. For the first time we have explicitly derived compact formulas for the inverse MV that consist of sums of two grade-negated products, Tables~\ref{invall} and \ref{dim6F}, for all Clifford algebras of dimension $n=6$. The formulas are independent of a particular algebra signature $(p,q)$ and may be useful in numerical and especially in symbolic programming. They reduce to known expressions for the inverses of vectors, bivectors etc., or their combinations. We have also presented formulas for the inverse in even Clifford subalgebras for dimensions $n\le 6$ (Table~\ref{inveven}), which are important for spinor algebras. A number of previously unknown explicit expressions were provided for $n=5$ dimensional algebras, Table~\ref{dim5F}, some of which may be interesting from a computational point of view as well. We believe that the formulas in Table~\ref{dim5F} exhaust all possible single-term forms of writing the determinant for $n=5$ algebras using the geometric product and involutions only. We have shown that the determinant norm formulas for $n=6$ split into two classes, Tables~\ref{dim6F} and \ref{treetermformula}. We also presented the inverses for $n=6$ in a more symmetric three-term linear combination with all weights equal to $1/3$ (Table~\ref{treetermformula}). Though this form is not preferred from a computational point of view, it may be interesting when searching for similar formulas for higher-dimensional Clifford algebras. The computation of the inverse MV by the formulas in Table~\ref{invall} requires a considerably smaller number of multiplications, since many pieces in the formulas are repeated many times. The speed of calculation can be improved further if the MV multiplication algorithm is implemented in such a way that it can skip the calculation of some specific grades, because we know in advance that some grades must vanish from the product. In this case a large number of multiplications can be avoided. The number $m$ of multipliers in a self-negated product of the determinant norm is determined by the total degree of the polynomial given by the determinant of the representation matrix in the 8-periodicity table~\cite{Lounesto97}. For an algebra with $n=2$ the determinant norm consists of just two multipliers, whereas for $n=3$ as well as for $n=4$ one has to take products of four MV multipliers, in accord with the dimension of the respective matrix in the 8-periodicity table. Clifford algebras of vector space dimension $5$ or $6$ require eight multivectors to construct the determinant norm. Then, looking at the $8$-periodicity table, one may expect that the determinant norm of algebras of vector space dimension $7$ and $8$ can be constructed from multilinear combinations of products of $16$ multivectors, etc. From the considerations above it should be clear that the complexity of finding an explicit expression for the general MV inverse grows rapidly with increasing algebra dimension.
For example, a direct head-on symbolic multiplication of eight general multivectors in an $n=6$ dimensional vector space in the worst case requires $(2^6)^8=281\,474\,976\,710\,656$ geometric multiplications of basis elements, which fortunately can be bypassed due to the significant grade reduction in the determinant formula, as discussed in this article. However, in practice this does not help much when searching for the correct shape of the determinant norm formula. This is why reduced (shorter) MVs are always worth trying first; under a good choice they may greatly reduce the number of candidates for possible linear combinations. It should be stressed that the determinant norms and inverse multivectors obtained in the paper make computations self-contained within the geometric product and negation structure, without any need to resort to matrix representations. Finally, we demonstrated that the $n=6$ formula {\it alone} can generate the MV determinant norms and respective inverses of all algebras of vector space dimension $n\le 6$ and of any signature, \textit{albeit} using more geometric multiplications. The identified onion-like structure of the determinant norms and the numerous shapes of inverse multivectors presented in this article incline us to believe that similar formulas (which include only the operations of grade negation, geometric product and summation) may exist for Clifford algebras of all dimensions.
{ "timestamp": "2018-03-15T01:07:06", "yymm": "1712", "arxiv_id": "1712.05204", "language": "en", "url": "https://arxiv.org/abs/1712.05204" }
\section*{Supplemental Material} \setcounter{figure}{0} \setcounter{equation}{0} \setcounter{table}{0} \renewcommand\theequation{S\arabic{equation}} \renewcommand\thesection{S\arabic{section}} \renewcommand\thefigure{S\arabic{figure}} \renewcommand\thetable{S\Roman{table}} \section{overlap between the displaced Fock states} In this section, we study the overlap between two oppositely displaced Fock states, $\hat{D}(g/\omega)|n\rangle_{\rm o}$ and $\hat{D}(-g/\omega)|n\rangle_{\rm o}$, where $\hat{D}$ is the displacement operator, $g$ is the coupling constant, $\omega$ is the resonance frequency of the oscillator, and $|n\rangle_{\rm o}$ is the $n$-photon Fock state of the oscillator. The wave function of $\hat{D}(g/\omega)|n\rangle_{\rm o}$ in the coordinate basis is given by $\phi_n(x,g/\omega) = {}_{\rm o}\langle x|\hat{D}(g/\omega)|n\rangle_{\rm o}$, where $|x\rangle_{\rm o}$ is an eigenstate of the coordinate operator $\hat{x} = (\hat{a} + \hat{a}^\dagger)/2$, and $\hat{a}$ and $\hat{a}^\dagger$ are the annihilation and creation operators. Up to a normalization factor, it is given by \begin{equation} \phi_n(x,g/\omega) = \exp\{-[x-(g/\omega)]^2\}H_n(\sqrt{2}[x-(g/\omega)]), \label{Eq:HOphi} \end{equation} where $H_n$ is the Hermite polynomial; $H_0(x) = 1$, $H_1(x) = 2x$, $H_2(x) = 4x^2-2$, and so on. Note that $\phi_n(x,g/\omega)$ is real. The overlap integral, which appears in the second line of Eq.~(3) in the main text, can be calculated as \begin{eqnarray} I_n(g/\omega) & = & {}_{\rm o}\langle n | \hat{D}^\dagger (-g/\omega)\hat{D}(g/\omega)|n\rangle_{\rm o}\\ & = & \int_{-\infty}^{\infty} \phi_n(x,-g/\omega)\phi_n(x,g/\omega)dx. \label{Eq:OLI} \end{eqnarray} To be concrete, in the following we consider the case of $n = 2$ as an example. Figures~\ref{Fig:He}(a)--(e) show the wave functions of the displaced two-photon Fock states $\phi_2(x,\pm g/\omega)$ and their product $\phi_2(x,-g/\omega)\phi_2(x,g/\omega)$ for five different values of $g/\omega$. The values of $g/\omega$ are chosen such that $I_2(g/\omega)$ is either maximal [(a) and (e)], minimal (c), or zero [(b) and (d)] [see Fig.~\ref{Fig:He}(f)]. When the positions of the peaks or dips of $\phi_2(x,\pm g/\omega)$ coincide, the product $\phi_2(x,-g/\omega)\phi_2(x,g/\omega)$ is mostly positive, and hence $I_2(g/\omega)$ becomes maximal [Figs.~\ref{Fig:He}(a) and (e)]. On the other hand, when the peak positions of one wave function coincide with the dip positions of the other, the product $\phi_2(x,-g/\omega)\phi_2(x,g/\omega)$ is mostly negative, and hence $I_2(g/\omega)$ becomes minimal [Fig.~\ref{Fig:He}(c)]. \begin{figure} \includegraphics{WF_SM1.pdf} \caption{(a)--(e) Wave functions of the oppositely displaced two-photon Fock states $\phi_2(x,-g/\omega)$ (red) and $\phi_2(x,g/\omega)$ (blue) and their product $\phi_2(x,-g/\omega)\phi_2(x,g/\omega)$ (black) for various values of $g/\omega$: (a)~0, (b)~0.386, (c)~0.622, (d)~0.924, and (e)~1.27. (f) Normalized overlap integral $I_2(g/\omega)/I_2(0)$. Solid blue circles indicate the values of $g/\omega$ used for panels (a)--(e). } \label{Fig:He} \end{figure}
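Since $\hat{D}^\dagger(-g/\omega)\hat{D}(g/\omega)=\hat{D}(2g/\omega)$, the overlap also has the standard closed form $I_n(g/\omega)=e^{-2(g/\omega)^2}L_n(4(g/\omega)^2)$, where $L_n$ is the Laguerre polynomial [cf.~Eq.~(\ref{Eq:Dn_SM})]. This provides a quick numerical cross-check of Eq.~(\ref{Eq:OLI}); a minimal sketch, assuming the QuTiP and SciPy Python packages:
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre
from qutip import basis, displace  # pip install qutip

N, n = 60, 2   # Fock-space cutoff and photon number (n = 2 as in Fig. S1)

for x in [0.0, 0.386, 0.622, 0.924, 1.27]:   # g/omega values of Fig. S1
    psi_m = displace(N, -x) * basis(N, n)    # D(-g/w)|n>
    psi_p = displace(N,  x) * basis(N, n)    # D(+g/w)|n>
    I_num = psi_m.overlap(psi_p).real        # <n|D^dag(-x) D(x)|n>
    I_cf = np.exp(-2*x**2) * eval_laguerre(n, 4*x**2)
    print(f"g/w = {x:5.3f}:  {I_num:+.6f}  vs  {I_cf:+.6f}")
\end{verbatim}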
\section{symmetry of quantum Rabi model and state assignment from the spectra} The parity operator of the qubit-oscillator system is defined as $\hat{P} = \hat{P}_{\rm q}\otimes\hat{P}_{\rm o} \equiv \hat{\sigma}_z(-1)^{\hat{a}^\dagger \hat{a}}$, where $\hat{P}_{\rm q} \equiv \hat{\sigma}_z$ and $\hat{P}_{\rm o}\equiv (-1)^{\hat{a}^\dagger \hat{a}}$ are respectively the parity operators of the qubit and the oscillator. The parities of states and operators are defined as follows: the parity of a state $|\phi\rangle$ is even (odd) when $\hat{P}|\phi\rangle = |\phi\rangle$ $(-|\phi\rangle)$. The parity of an operator $\hat{A}$ is even when $[\hat{A},\hat{P}] = \hat{A}\hat{P} - \hat{P}\hat{A} = 0$, and is odd when $\{\hat{A},\hat{P}\} = \hat{A}\hat{P} + \hat{P}\hat{A} = 0$. The parity symmetry of the states and operators that appear in the quantum Rabi Hamiltonian, \begin{equation} \hat{H}_{\rm Rabi} = -\frac{\hbar\Delta}{2} \hat{\sigma}_z+\hbar\omega \hat{a}^{\dagger}\hat{a}+ \hbar g \hat{\sigma}_x \left( \hat{a} + \hat{a}^{\dagger} \right), \label{Eq:RabiHami_SM} \end{equation} is summarized in Table~\ref{Tab:P}. Here, $\Delta$ is the qubit's transition frequency. Because both $\hat{\sigma}_x$ and $(\hat{a}+\hat{a}^\dagger)$ have negative parities, their product has a positive parity, meaning that all three terms in $\hat{H}_{\rm Rabi}$ have positive parities. Therefore, $[\hat{H}_{\rm Rabi}, \hat{P}] = 0$, and hence the energy eigenstates are also eigenstates of $\hat{P}$ and have well-defined parities. Note that this property does not depend on the values of $\Delta$, $\omega$, and $g$. Although the energy eigenstates and eigenenergies of $\hat{H}_{\rm Rabi}$ cannot be described analytically for arbitrary values of $\Delta$, $\omega$, and $g$, the symmetry allows one to define the energy eigenstates and eigenenergies of $\hat{H}_{\rm Rabi}$ as $|in\rangle$ and $\omega_{in}$, where $i$ (= g, e) indicates whether the qubit is in the ground (g) or the excited (e) state and $n$ is the number of real photons in the oscillator. Since the parity of $(\hat{a} + \hat{a}^\dagger)$ is odd, the transition matrix elements $\langle im|(\hat{a} + \hat{a}^\dagger)|jn\rangle$ may have non-zero values when the parities of the energy eigenstates $|im\rangle$ and $|jn\rangle$ are opposite, whereas they always vanish when the parities are the same. From the transition frequencies alone, the energy eigenstates and the eigenenergies cannot be determined uniquely. However, by using the parity symmetry discussed above, the energy eigenstates and eigenenergies are recursively determined, as long as $\Delta < \omega$, in the following way. (i) The ground and the first excited states of a coupled circuit are respectively $|\textrm{g}0\rangle$ and $|\textrm{e}0\rangle$, since there is no energy-level crossing between them. The corresponding eigenenergies are respectively $\omega_{\textrm{g}0}$ and $\omega_{\textrm{e}0}$. (ii) Among the $(2n+2)$th and $(2n+3)$th excited states ($n \ge 0$), the state having a nonzero transition matrix element with the state $|\textrm{g}n\rangle$ is $|\textrm{g}n+1\rangle$ and the other is $|\textrm{e}n+1\rangle$. The corresponding eigenenergies are respectively $\omega_{\textrm{g}n+1}$ and $\omega_{\textrm{e}n+1}$. In this way, the photon-number-dependent qubit frequency $\Delta_n \equiv \omega_{\textrm{e}n} - \omega_{\textrm{g}n}$ can be uniquely determined for all the parameter sets in this work.
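The parity structure is straightforward to check numerically. The following sketch, assuming the QuTiP package~\cite{Johansson13CPC} and illustrative parameter values, verifies that $\hat{H}_{\rm Rabi}$ commutes with $\hat{P}$ and prints the parities of the lowest energy eigenstates:
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor, sigmax, sigmaz, expect

N = 40                              # Fock-space cutoff
Delta, w, g = 0.933, 1.0, 0.5       # illustrative values (hbar = 1)
a  = tensor(qeye(2), destroy(N))
sz = tensor(sigmaz(), qeye(N))
sx = tensor(sigmax(), qeye(N))
H  = -0.5*Delta*sz + w*a.dag()*a + g*sx*(a + a.dag())

# parity P = sigma_z (-1)^(a^dag a), with (-1)^n built as exp(i pi n)
P = tensor(sigmaz(), (1j*np.pi*destroy(N).dag()*destroy(N)).expm())
print(abs((H*P - P*H).full()).max())      # expected: ~0 (H and P commute)

evals, ekets = H.eigenstates()
for k in range(6):                        # each eigenstate has parity +/-1
    print(k, round(float(np.real(expect(P, ekets[k]))), 6))
\end{verbatim}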
\begin{table} \begin{tabular}{l @{\hspace{0.2cm}}|@{\hspace{0.2cm}} c @{\hspace{0.5cm}} c} \hline parity & even & odd\\ \hline \hline \rule{0pt}{5ex}qubit state & ${\displaystyle |\textrm{g}\rangle_{\rm q} = \frac{\left |\circlearrowleft \right\rangle_{\rm q} + \left |\circlearrowright \right\rangle_{\rm q}}{\sqrt{2}}}$ & ${\displaystyle |\textrm{e}\rangle_{\rm q} = \frac{\left |\circlearrowleft \right\rangle_{\rm q} - \left |\circlearrowright \right\rangle_{\rm q}}{\sqrt{2}}}$\\ photon state & $|$even number$\rangle_{\rm o}$ & $|$odd number$\rangle_{\rm o}$\\ qubit operator & $\hat{\sigma}_z$ & $\hat{\sigma}_x$\\ photon operator &$\hat{a}^\dagger\hat{a}$ & $\hat{a} + \hat{a}^\dagger$\\ \hline \end{tabular} \caption{The parity symmetry of the states and operators that appear in $\hat{H}_{\rm Rabi}$.} \label{Tab:P} \end{table} \section{coupler inductance and flux bias points} The coupler inductance for sets B--I is a dc superconducting quantum interference device (SQUID) consisting of two parallel Josephson junctions, as shown in Fig.~1(c) in the main text. Its Josephson inductance is given by \begin{equation} L_{\rm J} = \frac{\Phi_0}{2\pi \sqrt{(2I_{\rm c}\cos |\pi n_{\phi \rm c}|)^2-I_{\rm b}^2}}, \label{Eq:Ic_coupler} \end{equation} where $I_{\rm c}$ is the critical current of each Josephson junction, $n_{\phi \rm c}$ is the normalized flux bias through the loop in units of the superconducting flux quantum $\Phi_0 = h/(2e)$, and $I_{\rm b}$ is the current flowing through the SQUID. An external superconducting magnet produces a uniform magnetic field, and flux biases are applied to the qubit and the coupler in proportion to the areas of their loops. The area ratio of the loops, $r_{\rm c} = A_{\rm coupler}/A_{\rm qubit}$, is approximately 0.05. The flux bias through the coupler loop, $n_{\phi \rm c} = r_{\rm c}n_{\phi}$, depends on the normalized flux bias of the qubit $n_{\phi}$, which in most cases is around a symmetry point of the qubit, i.e., $n_{\phi} = \pm 0.5, \pm 1.5$, and so on. \section{background transmission coefficient} The amplitudes of the measured transmission spectra $|S_{21}^{\rm meas}(\varepsilon,\omega_{\rm p})|$ are fitted by the following formula: \begin{eqnarray} |S_{21}^{\rm meas}(\varepsilon,\omega_{\rm p})| = |S_{21}^{\rm bg}(\omega_{\rm p})S_{21}(\varepsilon,\omega_{\rm p})|, \label{Eq:S21meas} \end{eqnarray} where \begin{eqnarray} S_{21}(\varepsilon,\omega_{\rm p}) = 1-\frac{(Q_{\rm L}/Q_{\rm e})e^{i\phi}}{1+2iQ_{\rm L}\frac{\omega_{\rm p}-\omega_0(\varepsilon)}{\omega_0(\varepsilon)}}, \label{Eq:S21} \end{eqnarray} and we assumed that the background transmission coefficient $S_{21}^{\rm bg}(\omega_{\rm p})$ is independent of the energy bias $\varepsilon$ and can be written as a polynomial in the probe photon frequency $\omega_{\rm p}$. Eq.~(\ref{Eq:S21}) applies to a transmission line that is inductively and capacitively coupled to an LC oscillator~\cite{Khalil12JAP}, where $Q_{\rm L}$ is the total quality factor of the oscillator, $Q_{\rm e}$ is the external quality factor due to the coupling to the transmission line, and $\phi$ is a phase factor that accounts for the asymmetry of the resonance line shape. Note that $|S_{21}(\varepsilon,\omega_{\rm p})|$ may become larger than 1, depending on the value of $\phi$.
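As a concrete fit model, Eq.~(\ref{Eq:S21}) takes only a few lines of Python (a sketch with illustrative parameter values; the function name is ours):
\begin{verbatim}
import numpy as np

def S21(wp, w0, QL, Qe, phi):
    # resonator transmission model of this section: probe frequency wp,
    # resonance w0, loaded/external quality factors QL/Qe, asymmetry phi
    return 1 - (QL/Qe)*np.exp(1j*phi) / (1 + 2j*QL*(wp - w0)/w0)

wp = np.linspace(0.999, 1.001, 7) * 6.0e9         # Hz, illustrative sweep
print(np.abs(S21(wp, 6.0e9, 1.0e4, 2.0e4, 0.3)))  # |S21| may exceed 1
\end{verbatim}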
\section{avoided crossings in two-tone spectroscopy} In this section, we discuss the physical origin of the avoided crossings observed in Fig.~3 of the main text. The Hamiltonian of a three-level system under the application of a drive field with frequency $\omega_{\rm d}$ can be written as \begin{eqnarray} \nonumber \mathcal{H}'_{\omega_{\rm b} < \omega_{\rm c}} & = & \hbar(\omega_{\rm a}\hat{\sigma}_{\rm aa} + \omega_{\rm b}\hat{\sigma}_{\rm bb} + \omega_{\rm c}\hat{\sigma}_{\rm cc})\\ && + \hbar \omega_{\rm d}\hat{b}^\dagger\hat{b} + \hbar\left[\chi_{\rm ab}(\hat{\sigma}_{\rm ab}\hat{b}^\dagger + \hat{\sigma}_{\rm ba}\hat{b}) + \chi_{\rm bc}(\hat{\sigma}_{\rm bc}\hat{b}^\dagger + \hat{\sigma}_{\rm cb}\hat{b})\right], \label{Eq:3lsbc} \end{eqnarray} where $\hat{\sigma}_{ij}=|i\rangle\langle j|$, $\hat{b}$ and $\hat{b}^\dagger$ are respectively the annihilation and creation operators of the oscillator representing the drive field, and $\chi_{ij}$ describes the interaction strength between the drive-field mode and the transition dipole moment. Here we assume that $\omega_{\rm a} < \omega_{\rm b} < \omega_{\rm c}$ and that the transition $|\mathrm{a}\rangle \to |\mathrm{c}\rangle$ is forbidden. This situation applies to the energy eigenstates involved in the avoided crossings observed in Figs.~3(a) and (b) in the main text. Namely, $|\mathrm{a}\rangle = |\mathrm{g}0\rangle$, $|\mathrm{b}\rangle = |\mathrm{g}1\rangle$, and $|\mathrm{c}\rangle = |\mathrm{g}2\rangle$ for Fig.~3(a), and $|\mathrm{a}\rangle = |\mathrm{e}0\rangle$, $|\mathrm{b}\rangle = |\mathrm{e}1\rangle$, and $|\mathrm{c}\rangle = |\mathrm{e}2\rangle$ for Fig.~3(b). The energy-level diagram of the coupled system (three-level system and drive field) is shown in Fig.~\ref{Fig:AC}(a). Here, the states $|i,k\rangle$ ($i$ = a, b, c, and $k$ is the number of drive photons) are energy eigenstates of $\mathcal{H}'_{\omega_{\rm b} < \omega_{\rm c}}$ when the off-diagonal terms are ignored. The corresponding energies are given by $\omega_i + k\omega_{\rm d}$. Under the application of the probe field, transitions occur between the energy eigenstates of $\mathcal{H}'_{\omega_{\rm b} < \omega_{\rm c}}$. The $|\mathrm{a},N\rangle \to |\mathrm{b},N\rangle$ and $|\mathrm{a},N\rangle \to |\mathrm{c},N-1\rangle$ transition frequencies shown in Fig.~\ref{Fig:AC}(a) are given by $\omega_{\rm ab}$ and $\omega_{\rm ac} - \omega_{\rm d}$ ($\omega_{ij} = \omega_j - \omega_i$), which correspond to the horizontal and diagonal dotted lines in Fig.~\ref{Fig:AC}(b). Note that the transition $|\mathrm{a},N\rangle \to |\mathrm{c},N-1\rangle$ is accompanied by the absorption of one photon from the drive field, and hence the transition frequency decreases linearly with $\omega_{\rm d}$ with slope $-1$. Note also that because the transition $|\mathrm{a},N\rangle \to |\mathrm{c},N-1\rangle$ is forbidden when the drive field is off, the diagonal line does not appear in the spectrum. The off-diagonal terms of $\mathcal{H}'_{\omega_{\rm b} < \omega_{\rm c}}$ renormalize the energy eigenstates, which become qubit-oscillator-drive doubly dressed states, and induce the avoided crossings. In particular, for $\omega_{\rm d} \simeq \omega_{\rm bc}$, we observe that $|\mathrm{b},N\rangle$ and $|\mathrm{c},N-1\rangle$ are nearly degenerate whereas the other states are largely separated from each other.
Then, the relevant eigenenergies of $\mathcal{H}'_{\omega_{\rm b} < \omega_{\rm c}}$ are approximately given by $\omega_{\rm a} + N\omega_{\rm d}$ and $[\omega_{\rm b} + \omega_{\rm c} + (2N - 1)\omega_{\rm d}]/2 \pm \sqrt{(\omega_{\rm bc}-\omega_{\rm d})^2/4 + \Omega^2_{\rm bc}}$, where $\Omega_{ij} = \chi_{ij}\sqrt{N}$ and $N$ is the number of photons in the drive field. The transition frequencies are then $(\omega_{\rm ab} + \omega_{\rm ac}-\omega_{\rm d})/2 \pm \sqrt{(\omega_{\rm bc}-\omega_{\rm d})^2/4 + \Omega^2_{\rm bc}}$, which correspond to the solid lines in Fig.~\ref{Fig:AC}(b). When $\omega_{\rm a} < \omega_{\rm c} < \omega_{\rm b}$ and $\chi_{\rm ac} = 0$, the Hamiltonian is given by \begin{eqnarray} \nonumber \mathcal{H}'_{\omega_{\rm c} < \omega_{\rm b}} & = & \hbar(\omega_{\rm a}\hat{\sigma}_{\rm aa} + \omega_{\rm b}\hat{\sigma}_{\rm bb} + \omega_{\rm c}\hat{\sigma}_{\rm cc})\\ && + \hbar \omega_{\rm d}\hat{b}^\dagger\hat{b} + \hbar\left[\chi_{\rm ab}(\hat{\sigma}_{\rm ab}\hat{b}^\dagger + \hat{\sigma}_{\rm ba}\hat{b}) + \chi_{\rm bc}(\hat{\sigma}_{\rm cb}\hat{b}^\dagger + \hat{\sigma}_{\rm bc}\hat{b})\right]. \label{Eq:3lscb} \end{eqnarray} This situation applies to the energy eigenstates involved in the avoided crossing observed in Fig.~3(c) in the main text. Namely, $|\mathrm{a}\rangle = |\mathrm{g}0\rangle$, $|\mathrm{b}\rangle = |\mathrm{g}1\rangle$, and $|\mathrm{c}\rangle = |\mathrm{e}1\rangle$. The energy-level diagram is shown in Fig.~\ref{Fig:AC}(c). The $|\mathrm{a},N\rangle \to |\mathrm{b},N\rangle$ and $|\mathrm{a},N\rangle \to |\mathrm{c},N + 1\rangle$ transition frequencies shown in Fig.~\ref{Fig:AC}(c) are given by $\omega_{\rm ab}$ and $\omega_{\rm ac} + \omega_{\rm d}$, which correspond to the horizontal and diagonal dotted lines in Fig.~\ref{Fig:AC}(d). Note that the transition $|\mathrm{a},N\rangle \to |\mathrm{c},N+1\rangle$ is accompanied by the emission of one photon into the drive field, and hence the transition frequency increases linearly with $\omega_{\rm d}$ with slope $+1$. For $\omega_{\rm d} \simeq \omega_{\rm cb}$, we observe that $|\mathrm{b},N\rangle$ and $|\mathrm{c},N+1\rangle$ are nearly degenerate whereas the other states are largely separated from each other. The relevant eigenenergies of $\mathcal{H}'_{\omega_{\rm c} < \omega_{\rm b}}$ are approximately given by $\omega_{\rm a} + N\omega_{\rm d}$ and $[\omega_{\rm b} + \omega_{\rm c} + (2N + 1)\omega_{\rm d}]/2 \pm \sqrt{(\omega_{\rm cb}-\omega_{\rm d})^2/4 + \Omega^2_{\rm bc}}$. The transition frequencies are then $(\omega_{\rm ab} + \omega_{\rm ac}+\omega_{\rm d})/2 \pm \sqrt{(\omega_{\rm cb}-\omega_{\rm d})^2/4 + \Omega^2_{\rm bc}}$, which correspond to the solid lines in Fig.~\ref{Fig:AC}(d).
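These branch frequencies are simple to evaluate numerically. The short sketch below (illustrative numbers of our choosing) traces the two branches of Fig.~\ref{Fig:AC}(b) and confirms that the minimal splitting equals $2\Omega_{\rm bc}$:
\begin{verbatim}
import numpy as np

w_ab, w_ac, Om_bc = 5.0, 11.0, 0.05   # GHz, illustrative
w_bc = w_ac - w_ab

wd = np.linspace(w_bc - 0.5, w_bc + 0.5, 201)   # drive-frequency sweep
root = np.sqrt((w_bc - wd)**2/4 + Om_bc**2)
upper = (w_ab + w_ac - wd)/2 + root             # solid lines in Fig. S2(b)
lower = (w_ab + w_ac - wd)/2 - root

i = np.argmin(np.abs(wd - w_bc))                # on resonance, wd = w_bc
print(upper[i] - lower[i], 2*Om_bc)             # gap equals 2*Omega_bc
\end{verbatim}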
\begin{figure} \includegraphics{AvoidedCrossingsFY3.pdf} \caption{(a),(c) Energy-level diagrams for (a) $\omega_{\rm a} < \omega_{\rm b} < \omega_{\rm c}$ and (c) $\omega_{\rm a} < \omega_{\rm c} < \omega_{\rm b}$. Here, the states $|i,k\rangle$ ($i$ = a, b, c, and $k$ is the number of drive photons) are energy eigenstates of $\mathcal{H}'_{\omega_{\rm b} < \omega_{\rm c}}$ and $\mathcal{H}'_{\omega_{\rm c} < \omega_{\rm b}}$ when the off-diagonal terms are ignored. We have kept only the states with drive photon numbers $N-1$ and $N$ in (a), and $N$ and $N+1$ in (c). The gray arrows indicate allowed transitions in the three-level system. The magenta arrow indicates the drive frequency $\omega_{\rm d}$. The dotted and solid green arrows indicate the transitions with transition frequencies around $\omega_{\rm ab}$ for $\Omega_{\rm bc} = 0$ and $\Omega_{\rm bc} \neq 0$, respectively. (b),(d) Transition frequencies as functions of $\omega_{\rm d}$ for (b) $\omega_{\rm a} < \omega_{\rm b} < \omega_{\rm c}$ and (d) $\omega_{\rm a} < \omega_{\rm c} < \omega_{\rm b}$. The dotted and solid green lines correspond to the transition frequencies for $\Omega_{\rm bc} = 0$ and $\Omega_{\rm bc} \neq 0$, respectively. } \label{Fig:AC} \end{figure} \section{numerically calculated $\Delta_n$} In Fig.~\ref{Fig:GauLag_SM}, the normalized photon-number-dependent qubit frequencies $\Delta_n/\Delta$ obtained from the two-tone spectroscopies are plotted as open stars for set~E, which has the largest value of $\Delta/\omega = 0.933$. The solid lines are the theoretically predicted values in the limit $\Delta \ll \omega$: \begin{eqnarray} \Delta_n(g/\omega) & \simeq & \Delta \exp(-2g^2/\omega^2)L_n(4g^2/\omega^2), \label{Eq:Dn_SM} \end{eqnarray} which is also given in the main text. The dotted lines are values calculated numerically from $\hat{H}_{\rm Rabi}$ with the parameter $\Delta/\omega = 0.933$. Here, the eigenenergies and energy eigenstates are calculated by diagonalizing $\hat{H}_{\rm Rabi}$, where Fock states with up to 40 photons are taken into account, which gives sufficient accuracy. Some of our calculations were performed using the QuTiP simulation package~\cite{Johansson13CPC}. Once we have the eigenenergies and the energy eigenstates, $\Delta_n/\Delta$ can be obtained as discussed in Section~S2. Note that the numerically calculated values approach the solid lines given by Eq.~(\ref{Eq:Dn_SM}) as the parameter $\Delta/\omega$ approaches zero. Although there are clear deviations at smaller values of $g/\omega$, the qualitative behaviors of the solid and dotted lines are similar. Unexpectedly, the blue open star (the measured $\Delta_2$) is close to the solid line rather than to the dotted line, although the latter is expected to give a more accurate prediction for the circuit described by $\hat{H}_{\rm Rabi}$. The numerically calculated $\Delta_2$ in the range $0.8 \lesssim g/\omega \lesssim 1.1$ is larger than the $\Delta_2$ given by Eq.~(\ref{Eq:Dn_SM}), and hence the agreement of the blue open star with the solid line is a coincidence. \begin{figure} \includegraphics{GauLagDataHRabi_SM.pdf} \caption{Photon-number-dependent normalized qubit frequencies $\Delta_n/\Delta$ as functions of $g/\omega$. The parameters $\Delta$, $\omega$, and $g$ are obtained from the transmission spectra. The black, red, and blue solid lines are respectively $\Delta_0$, $\Delta_1$, and $\Delta_2$ obtained from Eq.~(\ref{Eq:Dn_SM}). The dotted lines are the numerically calculated $\Delta_n$ from $\hat{H}_{\rm Rabi}$ for $\Delta/\omega = 0.933$, corresponding to set~E. The open stars are the qubit frequencies obtained from the two-tone spectroscopies for set~E. } \label{Fig:GauLag_SM} \end{figure}
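The dotted curves can be regenerated with a few lines of QuTiP code implementing the assignment procedure of Section~S2 (a sketch; the function name and parameter values are illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre
from qutip import destroy, qeye, tensor, sigmax, sigmaz

def delta_n(Delta, w, g, nmax=2, N=40):
    # photon-number-dependent qubit frequencies, following Section S2
    a  = tensor(qeye(2), destroy(N))
    sz, sx = tensor(sigmaz(), qeye(N)), tensor(sigmax(), qeye(N))
    H = -0.5*Delta*sz + w*a.dag()*a + g*sx*(a + a.dag())
    evals, kets = H.eigenstates()
    X = a + a.dag()
    gs, es = [(evals[0], kets[0])], [(evals[1], kets[1])]  # step (i)
    for n in range(nmax):                                  # step (ii)
        k = 2*n + 2
        m1 = abs(X.matrix_element(gs[n][1].dag(), kets[k]))
        m2 = abs(X.matrix_element(gs[n][1].dag(), kets[k+1]))
        kg, ke = (k, k+1) if m1 > m2 else (k+1, k)
        gs.append((evals[kg], kets[kg]))
        es.append((evals[ke], kets[ke]))
    return np.array([e[0] - g0[0] for g0, e in zip(gs, es)])

Delta, w = 0.933, 1.0
for gw in (0.2, 0.6, 1.0):
    print(gw, np.round(delta_n(Delta, w, gw*w)/Delta, 4),      # numeric
          np.round(np.exp(-2*gw**2)*eval_laguerre(np.arange(3), 4*gw**2), 4))
\end{verbatim}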
{ "timestamp": "2018-05-08T02:14:15", "yymm": "1712", "arxiv_id": "1712.05039", "language": "en", "url": "https://arxiv.org/abs/1712.05039" }
\section{Introduction} The out-of-time-order (OTO) four-point function $F(\hat{u}) = \langle V(\hat{u}) W(0) V(\hat{u}) W(0) \rangle/(\langle VV \rangle \langle WW \rangle)$ in a thermal state serves as a diagnostic of quantum chaos \cite{Larkin:1969aa,Shenker:2013pqa,Shenker:2013yza,Leichenauer:2014nxa,Maldacena:2015waa,Kitaev:2015aa}. A manifestation of this is the existence of a time regime where the (connected and regularized) part of $F(\hat{u})$ grows exponentially:\footnote{Throughout this letter, we denote Euclidean times as $u$ and real times as $\hat{u}$.} $F(\hat{u})_{conn.} \sim e^{\lambda_L(\hat{u}-\hat{u}_*)}$. The {\it scrambling time} $\hat{u}_*$ is larger than the typical timescale of thermal dissipation by a factor of the logarithm of the entropy of the system. It has thus been suggested that it quantifies a more fine-grained aspect of thermalization, a process that has been coined {\it scrambling} \cite{Hayden:2007cs,Sekino:2008he,Lashkari:2011yi}. In this letter we aim to explore generalizations of these statements. We consider higher-point correlation functions in OTO configurations. We will suggest a particular generalization of the four-point chaos correlator, which we call the ``maximally braided'' OTO correlator. As we will see, the maximally braided $2k$-point function is a function of $k$ Lorentzian insertion times and has several interesting features: \begin{enumerate} \item There exist Lorentzian insertion time configurations for which it exhibits exponential growth up until a time $\hat{u}_*^{(k)} \sim (k-1) \hat{u}_*$. These configurations are such that the correlator is {\it maximally OTO}, i.e., they display the highest possible number of switchbacks in real time. \item The Lyapunov exponent describing the speed of this growth is the same $\lambda_L$ as for the four-point function. The longer time scales are associated with the higher-point correlators being more fine-grained quantities, which can thus be made progressively smaller initially. \end{enumerate} We demonstrate these features in a particular model, which is known to be maximally chaotic (i.e., the Lyapunov exponent is as large as universally allowed in any quantum system, $\lambda_L = \frac{2\pi}{\beta}$ \cite{Maldacena:2015waa,Maldacena:2016hyu,Kitaev:2017awl}): the Schwarzian theory of a single time reparametrization mode, describing the fluctuations of the location of the boundary in $AdS_2$ gravity coupled to scalar matter fields. \section{OTO Correlation Functions} \subsection{Backreaction in $AdS_2$} Our starting point is the calculation of the backreaction of matter fields in Euclidean $AdS_2$ space. We follow previous discussions in \cite{Almheiri:2014cka,Maldacena:2016upp,Engelsoy:2016xyb,Jensen:2016pah}, which the reader is invited to consult for further details. The gravitational action reduces to a boundary term, which describes the dynamics of the soft mode $t(u)$: \begin{equation} \label{eq:Igrav} -I_{grav}=\frac{1}{\kappa^2}\int du \, \bigg[-\frac{1}{2}\left(\frac{t''}{t'}\right)^2+\left(\frac{t''}{t'}\right)' \bigg] \end{equation} This is the Schwarzian action, which is determined by a pattern of spontaneous and explicit conformal symmetry breaking. The coupling $\kappa$ is our expansion parameter: in gravity it is proportional to $G_N^{1/2}$ (the bulk Newton constant) and it scales as $N^{-1/2}$ in the SYK model. Note that the SYK model \cite{Kitaev:2015aa,Maldacena:2016hyu} has an additional energy scale $J$, which appears in the gravity calculation as a UV cutoff.
The dominance of the soft modes over the massive modes of the SYK model, for certain quantities, stems from those quantities being UV sensitive. We believe this is the case for the special class of correlation functions discussed here, and therefore that the time scales we unravel are also relevant to the SYK model. However, for simplicity we restrict our attention to the purely gravitational calculation, representing the contribution of the soft mode to correlation functions. We couple the gravity theory to a matter action which represents external massless particles: \begin{equation} \label{matter} -I_{matter}= \frac{1}{2\pi}\int du_1 du_2 \frac{t'(u_1) t'(u_2)}{(t(u_1)-t(u_2))^2}\,j(u_1)j(u_2) \end{equation} where $j$ is a source for the (dimension 1) operator whose correlator we are calculating. To compute correlators perturbatively in a black hole background, we transform $t(u)=\tan(\frac{\tau(u)}{2})$, corresponding to working with temperature $\beta = 2 \pi$, and expand around the saddle: $\tau(u)=u+\kappa \, \upvarepsilon(u)$. To leading order in $\kappa$ the Schwarzian action gives a quadratic term, and hence a propagator for the mode $\upvarepsilon(u)$. This propagator can be written as \begin{equation} \label{prop} \begin{split} \langle \upvarepsilon(u)\upvarepsilon(0)\rangle&= \frac{1}{2 \pi}\bigg[\frac{ 2 \sin \, u - (\pi +u)}{2} \,(\pi+u) \\ &\qquad\qquad\qquad + 2 \pi \Theta(u) (u-\sin \, u) \bigg] \end{split} \end{equation} where we take the coefficients $a,b$ appearing in \cite{Maldacena:2016upp} to zero (this corresponds to a gauge choice). Further expansion of the Schwarzian action gives self-interaction terms for $\upvarepsilon(u)$, suppressed by factors of $\kappa$. These are required for calculating general correlation functions. We will see that those interaction terms are not needed for our purposes. Similarly, we can expand the matter action (\ref{matter}). We write the expansion in $\kappa$ as \begin{equation} -I_{matter} = \frac{1}{2\pi} \int du_1 du_2 \, \frac{j(u_1)j(u_2)}{4\sin^2 (\frac{u_{12}}{2})} \sum_{p\geq 0} \kappa^p \, {\cal B}^{(p)}(u_1,u_2) \end{equation} where $u_{12} \equiv u_1-u_2$. The leading order contribution comes from the two-point function in the absence of backreaction. It is the conformal correlator at finite temperature, i.e., $\B[0]= 1$. We will also need the first and second order expansions, corresponding to the way the matter sources the soft mode $\upvarepsilon(u)$ to orders $\kappa$ and $\kappa^2$ \cite{Sarosi:2017ykf}: \begin{equation} \begin{split} \B[1](u_1,u_2)&= \upvarepsilon'(u_1)+\upvarepsilon'(u_2)-\frac{\upvarepsilon(u_1)-\upvarepsilon(u_2)} {\tan(\frac{u_{12}}{2})} \\ \B[2](u_1,u_2)&= \frac{1}{4 \sin^2(\frac{u_{12}}{2})} \Big[ (2+\cos u_{12})\, (\upvarepsilon(u_1)-\upvarepsilon(u_2))^2 \\ & + 4 \sin^2 \big(\frac{u_{12}}{2}\big)\,\upvarepsilon'(u_1)\upvarepsilon'(u_2) \\ & - 2 \,\sin u_{12} \,\left( \upvarepsilon(u_1) - \upvarepsilon(u_2) \right) \left( \upvarepsilon'(u_1)+\upvarepsilon'(u_2)\right) \Big] \end{split} \label{Bfactor} \end{equation} In order to compute a Euclidean $2k$-point function up to ${\cal O}(\kappa^n)$, one has to sum the relevant diagrams arising from this expansion: first, one writes all possible products of $k$ instances of $\B[p_i](u_{2i-1},u_{2i})$, which are relevant at $n$-th order in perturbation theory (i.e., $\sum_i p_i \leq n$).
In this product, one then contracts $\upvarepsilon$'s either with propagators \eqref{prop}, or with higher-point vertices arising from expanding the action \eqref{eq:Igrav} to higher orders in $\kappa$. This quickly gets complicated (see appendix \ref{app:sixpt} for examples). We will now present a particularly interesting class of observables for which this task simplifies considerably. \subsection{Systematics of the Calculation} \label{sec:conventions} Consider coupling the Schwarzian theory, describing gravity in $AdS_2$ space, to $k$ distinguishable matter fields representing the coupling to external operators $V_i$ with $i=1,...,k$. Our aim is to calculate $2k$-point correlation functions involving the operators $V_1(u_1), V_1(u_2), \ldots , V_k(u_{2k -1}) ,V_k(u_{2k})$. We proceed as follows: $(1)$ We calculate the Euclidean correlators. Without loss of generality, for each pair of insertions of the same operator, say $V_i(u_{2i-1})$ and $V_i(u_{2i})$, we order the Euclidean times as $u_{2i-1}>u_{2i}$. The remaining relations between Euclidean insertion times determine the order in which the operators occur in the correlation function. $(2)$ Then, to discuss Lorentzian times we analytically continue by setting $u_r \rightarrow \delta_r + i \hat{u}_r$ for all $r=1,\ldots,2k$. We then analyze the late time dependence on Lorentzian times $\hat{u}_r$. $(3)$ Ultimately we are interested in putting equivalent operators at coincident Lorentzian times, $\hat{u}_{2i-1} = \hat{u}_{2i}$. The short time regulators $\delta_r$ (which are ordered in the same way as the original Euclidean times) serve to regulate the divergence in this limit. We write below terms at leading order in $\delta_{ij} \equiv \delta_i - \delta_j$, which are universal in the sense that they contain the exponential behavior we are interested in (see the discussion in \cite{Roberts:2014ifa}). We start by discussing the computation of Euclidean correlators. The Euclidean time ordering determines the ordering of operators in the correlator. We are interested in a specific set of orderings, which we will call {\it maximally braided} correlators, for which the calculation becomes particularly simple. To describe those correlators we need to introduce some conventions. The backreaction calculation involves, in intermediate steps, Heaviside $\Theta$-functions resulting from the propagator of the soft mode \eqref{prop}. Organizing these will be crucial. We choose to write all step functions canonically as $\Theta(u_i-u_j)$ with $i>j$, using $\Theta(x)=1-\Theta(-x)$. We then use the configuration of these step functions to uniquely characterize the different possible operator orderings of the correlation function. For example, the time ordered correlator $\langle V_1(u_1) V_1(u_2) \cdots V_k(u_{2 k -1}) V_k(u_{2k}) \rangle$, with the canonical ordering $u_1 > u_2> \ldots > u_{2k}$, is characterized as being the term in the generic Euclidean $2k$-point function with no step functions. Since we are interested in the exponential growth in the chaos regime, we will only keep terms that are dominant at late times. The longest-lived contribution can be characterized as the coefficient of the term in the generic Euclidean correlator with the maximum number of step functions. This coefficient is simpler to evaluate, and subtracting off all other time orderings does not influence the information we are interested in.
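As a schematic illustration of this bookkeeping in the simplest case $k=2$ (anticipating the four-point example below): relative to the canonical ordering $u_1>u_2>u_3>u_4$, the single exchange $u_2 \leftrightarrow u_3$ is recorded by one canonically ordered step function,
\begin{equation*}
\frac{\langle V_1(u_1) V_2(u_3) V_1(u_2) V_2(u_4) \rangle}{\langle V_1 V_1 \rangle \langle V_2 V_2 \rangle}
= \big(\text{terms without } \Theta\big) + \Theta(u_{32})\, F_4(u_1,\ldots,u_4)\,,
\end{equation*}
so the quantity of interest is simply the coefficient of $\Theta(u_{32})$.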
\subsection{Maximally Braided Correlator} Our subtracted maximally braided correlator can be characterized by the appearance of precisely $k-1$ step functions, ``braiding'' every pair of operators with the consecutive pair. Elementary combinatorics shows that this characterization is equivalent to computing a product of commutators (see appendix \ref{app:simplifications} for details). We thus define our basic observable of interest as \begin{widetext} \begin{equation} \label{eq:F2kdef} F_{2k}(u_1,\ldots,u_{2k}) = \frac{ \big{\langle} V_1(u_1) [V_2(u_3), V_1(u_2)]\, [V_3(u_5), V_2(u_4)]\, [V_4(u_7), V_3(u_6)] \cdots [V_{k}(u_{2k-1}), V_{k-1}(u_{2k-2})] V_{k}(u_{2k}) \big{\rangle} }{ \langle V_1(u_1) V_1(u_2) \rangle \cdots \langle V_k(u_{2k-1}) V_k(u_{2k}) \rangle} \end{equation} \end{widetext} The maximally braided configuration is obtained by dropping all commutator brackets (see Fig.\ \ref{general}). The commutators in $F_{2k}$ serve to subtract subleading pieces from the maximally braided configuration. $F_{2k}$ is then just the coefficient of a term in the generic Euclidean correlator with $k-1$ step functions. We argue below that to leading order in perturbation theory, $F_{2k}$ can be computed using only the Feynman diagrams of the type illustrated in Fig.\ \ref{general}. \begin{figure} \includegraphics[width=.28\textwidth]{chained2.png} \put(-139,13){$1$} \put(-70,0){$2$} \put(-102,-7){$3$} \put(-50,18){$5$} \put(-42,40){$4$} \put(-50,71){$7$} \put(-68,93){$6$} \put(-97,100){$9$} \put(-123,93){$8$} \put(-153,30){$2k$} \put(-150,40){$\vdots$} \put(-137,83){$\iddots$} \put(0,80){$\;= \; \B[1](u_{2i-1},u_{2i})$} \put(0,46){$\;= \; \B[2](u_{2i-1},u_{2i})$} \put(0,15){$\;= \; \langle \upvarepsilon \upvarepsilon \rangle$} $\qquad\qquad\qquad$ \caption{General maximally braided $2k$-point correlator (first term obtained by expanding out commutators in $F_{2k}$): only diagrams of the type shown contribute to the connected correlator $F_{2k}$ at leading order in $\kappa$. The arrangement of insertions along the circle indicates the ordering in Euclidean time.} \label{general} \end{figure} Note that thus far we are discussing the Euclidean time ordering, or equivalently the operator ordering in the correlator. This determines the combinatorics of the calculation, and is the source of the simplification we exploit. We discuss the independent issue of the Lorentzian time ordering, which is crucial to the understanding of the different time scales involved in the correlator we compute, further below. \subsection{Example: OTO Four-Point Function} Consider the correlator $\langle V_1(u_1)[V_2(u_3),V_1(u_2)]V_2(u_4) \rangle$. We demonstrate here the simplified calculation that picks out this particular combination (which describes precisely the dominant term in the chaos regime), without the need to calculate the full Euclidean or Lorentzian 4-point function. We then generalize that process for higher-point functions.
Using the simplifications described in appendix \ref{app:simplifications}, we compute $F_4$ as the coefficient of $\Theta(u_{32})$ in the exchange of a soft mode between two bilinears: \begin{equation} \label{2OTO} \begin{split} F_4 &=\kappa^2 \, \langle \B[1](u_1,u_2) \B[1](u_3,u_4)\rangle\big{|}_{\Theta(u_{32})} + {\cal O}(\kappa^3) \\ & = \left\{ \frac{4\,\kappa^2}{\delta_{12}\delta_{34}}\, (u_{23}-\sin u_{23}) + {\cal O}(\delta_{ij}^{-1}) \right\} + {\cal O}(\kappa^3) \end{split} \end{equation} Recalling $\delta_{ij}\equiv \delta_i - \delta_j$, we have already used the benefit of hindsight and extracted the leading divergence as $\delta_{ij}\rightarrow 0$ for the analytic continuation $u_r \rightarrow \delta_r + i \hat{u}_r$ with $\hat{u}_1=\hat{u}_2$, $\hat{u}_3 = \hat{u}_4$. We can thus complete the analytic continuation to the OTO chaos region by simply setting $u_{23} \rightarrow i \hat{u}_{23}$.\footnote{Note that $\Theta(u)= \frac{1}{2\pi i} \int d\omega \, \frac{e^{i \omega u}}{\omega- i \epsilon^+}$ depends only on the real part of $u$. In our context this means the step functions are sensitive to the operator ordering, but not to the Lorentzian time ordering.} The term $\sin u_{23}$ in (\ref{2OTO}) then gives an exponentially growing term $e^{\lambda_L |\hat{u}_2-\hat{u}_3|}$, since $\sin(i\hat{u}_{23}) = i \sinh \hat{u}_{23}$ grows like $e^{|\hat{u}_{23}|}$ at late times; the Lyapunov exponent is $\lambda_L = 1 = \frac{2 \pi}{\beta}$ (for $\beta = 2\pi$), as expected. The time scale associated with this exponential growth, where the correlator becomes of order one (i.e., $\kappa^2 e^{\hat{u}_*} \sim 1$), is the {\it scrambling time} $\hat{u}_* \sim \log(\kappa^{-2}) \sim \log(G_N^{-1}) \sim \log(N)$, or with units: $\hat{u}_* \sim \frac{\beta}{2\pi} \, \log(\frac{2\pi}{\beta \kappa^2}) $. Indeed, \eqref{2OTO} is the result obtained by evaluating the full 4-point function, specializing to the operator ordering $\langle V_1(u_1)V_2(u_3)V_1(u_2)V_2(u_4) \rangle$, subtracting off the time-ordered part, and expanding in small $\delta_{ij}$ (cf.\ \cite{Maldacena:2016upp}). Note that the exponentially growing factor is associated with one soft mode propagator, relating the even and odd parts of two matter perturbations. We see below that such a pattern persists for higher-point correlators, where one such exponential factor is associated with any exchange of operators relative to the canonical ordering. Any such exchange is reflected by the presence of a (canonically ordered) step function we use to organize the calculation. Each such step function is accompanied by a similar propagator factor and hence by an exponentially growing mode. This is the basic structure of the results derived below. \section{Higher-Point Correlators} Consider the six-point function $F_6$ as defined in \eqref{eq:F2kdef}, following the process outlined and demonstrated in the previous section. The combination $\langle V_1(u_1) [V_2(u_3), V_1(u_2)] [V_3(u_5), V_2(u_4)] V_3(u_6)\rangle $ is obtained from the generic Euclidean six-point function by isolating the terms involving the product of step functions $\Theta(u_{32}) \Theta(u_{54})$. We claim that the necessary presence of this product of step functions specifies a unique diagram that can contribute to the (connected and subtracted) maximally braided correlator, to leading order in $\kappa$. Indeed, the diagram depicted in Fig.\ \ref{general} (for $k=3$) contains the minimal ingredients necessary to produce the two step functions defining the maximally braided ordering we are interested in. Such a diagram is of order $\kappa^4$.
Other diagrams of the same order, for example disconnected ones or those involving a 3-point self-interaction of the soft mode, will have fewer step functions. They contribute only to other correlators, where the braiding is less than maximal, or get subtracted off in the combination $F_6$. Similarly, diagrams involving more than two $\upvarepsilon$-propagators contribute to $F_6$ but at higher orders in $\kappa$. We are therefore faced with the relatively easy calculation of the following contribution to Fig.\ \ref{general}: \begin{equation} \label{eq:F6} F_6 = \kappa^4 \, \langle \B[1](u_1,u_2) \, \B[2](u_3,u_4)\,\B[1](u_5,u_6)\rangle \big{|}_{\Theta(u_{32})\Theta(u_{54})} \end{equation} up to corrections of ${\cal O}(\kappa^5)$. Since we will eventually set $u_r = \delta_r + i \hat{u}_r$, we can further use the simplifications of appendix \ref{app:simplifications}. The result to leading order in $\kappa$ and to leading order in the regulators $\delta_{ij}$ is \begin{equation} \label{eq:F6res} \begin{split} F_6&\sim\frac{24\,\kappa^4}{\delta_{12}\delta_{34}^2\delta_{56}}\, (u_{23}-\sin u_{23})(u_{45}-\sin u_{45}) \end{split} \end{equation} In appendix \ref{app:sixpt} we illustrate how to calculate the full six-point function and reproduce this simple result for the maximally braided subtracted correlator. The calculation of the eight-point function is similar. To leading order in $\kappa$ and $\delta_{ij}$ we find: \begin{equation} \label{eq:F8res} \begin{split} F_8 &\sim \frac{144\,\kappa^6}{\delta_{12}\delta_{34}^2\delta_{56}^2\delta_{78}}\, \prod_{i=1}^3\; (u_{2i,2i+1} - \sin u_{2i,2i+1}) \end{split} \end{equation} Similar results are obtained for higher order maximally braided correlators $F_{2k}$. Those continue to obey the pattern evident from extrapolating \eqref{eq:F6res} and \eqref{eq:F8res}. \section{Lorentzian Times} \label{sec:lorentzian} We now turn to the analytic continuation $u_r \rightarrow \delta_r + i \hat{u}_r$ in more detail. Our assumptions so far concerned the Euclidean time ordering: the first term in $F_{2k}$ (dropping all commutators) corresponds to the choice $\delta_1 > \delta_3 > \delta_2 > \delta_5 > \ldots$. The late time growth indicating quantum chaos is, however, sensitive to the ordering of real Lorentzian times $\hat{u}_r$. As we will now see, there is an independent way to characterize the real time ordering of the correlator. The {\it proper-OTO number} of $F_{2k}$ is determined by the real time ordering and it affects both the associated Lyapunov exponents and the associated scrambling time scales. We will see that the correlator we discuss involves the time scale $\hat{u}_*$, but also longer time scales, depending on the proper-OTO number. \subsection{Types of OTO Correlators} Our maximally braided correlators involve $k-1$ swaps of neighbouring operators as compared to the canonical (time ordered) configuration. They also have the distinguishing feature that they can be {\it maximally OTO}: their analytic continuation allows for configurations that are as much out of time order as any $2k$-point function can be. The {\it proper-OTO number} indicates the minimal number of switchbacks in the complex time contour that is required to represent a correlator \cite{Haehl:2017qfl}. The proper-OTO number of a $2k$-point function is at most $k$. In the case of $F_{2k}$, the maximal OTO number is achieved by the real time ordering $\hat{u}_1=\hat{u}_2 > \hat{u}_3 = \hat{u}_4 > \ldots > \hat{u}_{2k-1} = \hat{u}_{2k}$, which we focus on.
The associated contour is shown in Fig.\ \ref{contour}. Most other configurations of real times lead to a smaller proper-OTO number (i.e., the correlator can be represented on a contour with fewer switchbacks). We now show the significance of this characterization of the possible Lorentzian time orderings of our correlators. \begin{figure} \includegraphics[width=.4\textwidth]{chainContour.png} \put(-183,168){$V_k$} \put(-183,129){$V_k$} \put(-153,155){$V_{k-1}$} \put(-153,105){$V_{k-1}$} \put(-122,130){$V_{k-2}$} \put(-122,80){$V_{k-2}$} \put(-91,105){$V_{k-3}$} \put(-55,31){$V_{2}$} \put(-24,19){$V_{1}$} \put(-24,57){$V_{1}$} \put(-197,12){$\beta$} \caption{Complex time contour representation of the maximally braided correlator. We show the first (and dominant) term in the expansion of commutators in $F_{2k}$. Lorentzian time runs horizontally. We depict the Lorentzian time configuration which is maximally OTO. Operators are separated by small imaginary times, which enforces the operator ordering along the contour.} \label{contour} \end{figure} \subsection{Time Scales} Let us now discuss the physical significance of the proper-OTO number. Using the result from the previous section, we have the following behavior for real time separations $|\hat{u}_{2i} - \hat{u}_{2i+1}| \gg 1 =\frac{\beta}{2\pi}$:\footnote{ This assumption about separation of time scales is simply to extract a clear exponential time dependence from the trigonometric factors in $F_{2k}$. Dropping it would allow transient effects where the time dependence interpolates between different regimes. Note that this assumption is mild: the time scale $\frac{\beta}{2\pi}$ is sometimes referred to as the {\it dissipation time} over which, e.g., two-point functions decay. It is parametrically smaller than the scrambling time scales we are interested in.} \begin{equation}\label{eq:F2exp} \begin{split} F_{2k} &\sim {\cal N}\, \frac{\exp \left( \sum_{i=1}^{k-1}|\hat{u}_{2i} - \hat{u}_{2i+1}|-(k-1)\, \hat{u}_* \right)}{\delta_{12}\delta_{34}^2 \cdots \delta_{2k-3,2k-2}^2\delta_{2k-1,2k}} \end{split} \end{equation} with scrambling time $\hat{u}_* \sim \log (\kappa^{-2})$, associated with the growth of the 4-point function. The normalization ${\cal N}$ is ${\cal O}(1)$ and has an alternating sign depending on the sign of $\hat{u}_{2i}-\hat{u}_{2i+1}$. Note the appearance of the term $(k-1)\, \hat{u}_{*}$ in the exponent, reflecting the fact that the connected $2k$-point functions are proportional to $\kappa^{2 (k -1)}$. Depending on the real time ordering, the connected $2k$-point function $F_{2k}$ exhibits different growth patterns as a function of different time separations. We focus on the proper $k$-OTO configurations: these are maximally OTO, i.e., $\hat{u}_{2i} > \hat{u}_{2i+1}$ for all $i$. The time differences in the exponent in \eqref{eq:F2exp} are then all positive and cancel telescopically (recalling that we set $\hat{u}_{2i} = \hat{u}_{2i-1}$ for all $i$), yielding $F_{2k} \sim e^{\hat{u}_1 - \hat{u}_{2k-1} - (k-1) \hat{u}_*}$. Thus the correlator in this case is a function of a single time separation $\hat{u}_{1,2k-1}$, corresponding to a measurement which is only sensitive to the total duration of the experiment. Despite being scrambled in different ``channels'', the chaotic growth of $F_{2k}$ does not saturate after the scrambling time $\hat{u}_*$ and continues unabated until $\hat{u}_{1,2k-1}$ reaches the {\it $k$-scrambling time} \begin{equation} \hat{u}_*^{(k)} \sim (k-1) \hat{u}_* \,.
\end{equation} The Lyapunov exponent for this growth is still $\lambda_L^{(k)} = 1 = \frac{2\pi}{\beta}$, but the longer time scale is associated with our chosen correlators being sensitive to more fine-grained quantum chaos: they start off smaller and continue to grow for a longer time. Let us now discuss briefly configurations with less than maximal OTO-number. For example, proper $(k-1)$-OTO configurations are obtained by swapping the order of a single pair of real times, say $\hat{u}_1$ and $\hat{u}_3$, giving a correlator which can be represented on a contour with only $k-1$ switchbacks. The exponents in \eqref{eq:F2exp} do not quite add up anymore, and we get $F_{2k} \sim e^{2\hat{u}_3 - \hat{u}_1 - \hat{u}_{2k-1} - (k-1) \hat{u}_*}$. There is now a two-dimensional space of time dependence, on both $\hat{u}_{31}$ and $\hat{u}_{3,2k-1}$. If, e.g., $1\ll \hat{u}_{31}\ll \hat{u}_*$, then after a total duration of the experiment $\hat{u}_\text{tot} = \hat{u}_{3,2k-1} \sim (k-2) \hat{u}_*$ the observable $F_{2k}$ already reaches a size of ${\cal O}(1)$. Working recursively, we see that less than maximal OTO configurations can exhibit intermediate time scales and transient behavior. It would be interesting to explore this in more detail. \section{Discussion} We have argued that there exists new, physically interesting data in higher-point out-of-time-order (OTO) correlation functions. These are qualitatively similar to the OTO four-point function used to diagnose quantum chaos. However, the observables $F_{2k}$ we constructed in \eqref{eq:F2kdef} display an exponential growth for a longer time $\hat{u}_*^{(k)} \sim (k-1) \, \hat{u}_*$. That is, there exists a hierarchy of timescales associated with scrambling, probed by increasingly fine grained (OTO) observables. This is reminiscent of similar hierarchies encountered in the context of unitary $k$-designs in quantum circuit complexity \cite{Roberts:2016hpo,Cotler:2017jue}. It would be interesting to explore this connection. Similarly, it would be an intriguing task to explore the experimental relevance, or the precise operational meaning of the hierarchy of $k$-scrambling times (for instance, along the lines of \cite{Halpern:2016zcm,Halpern:2017abm}). An interpretation in terms of echo experiments, or more theoretically as quantifying operator growth by the size of repeated commutators, seems possible. Several other questions immediately spring to mind: It would be interesting to repeat the calculation in the Lorentzian setting, as a variant of the standard shock wave calculation \cite{Shenker:2013pqa,Shenker:2013yza,Stanford:2014jda} (one would have to interpret the maximal braiding in that context). Similarly, one would like to make precise the connection to the formalism of \cite{Mertens:2017mtv}. Extensions to higher dimensions (e.g., \cite{Gu:2016oyy}) and exploration of butterfly velocities would be interesting, for example in the context of 2-dimensional CFTs at large central charge \cite{Roberts:2014ifa}. It is also interesting to explore whether those $k$-OTO correlators obey some bounds along the lines of \cite{Maldacena:2015waa} (see also \cite{Tsuji:2017fxs}). Finally, we hope to explore other types of $2k$-point OTO correlators, such as the (suitably regularized) ``tremolo'' correlator $\langle (W(t) V(0))^k \rangle$. This might shed light on the physical significance of abstract arguments about the structure of OTO correlators \cite{Haehl:2017qfl,Haehl:2017eob}.
\begin{acknowledgments} We thank Ahmed Almheiri, Pawel Caputa, Nicole Yunger Halpern, Kristan Jensen, Rob Myers, Dan Roberts, Brian Swingle and Beni Yoshida for helpful discussions. FH is grateful for hospitality by University of California, Santa Barbara, where part of this work was done. FH is supported through a fellowship by the Simons Collaboration `It from Qubit'. MR is supported by a Discovery grant from NSERC. \end{acknowledgments}
{ "timestamp": "2017-12-22T02:10:30", "yymm": "1712", "arxiv_id": "1712.04963", "language": "en", "url": "https://arxiv.org/abs/1712.04963" }
\section*{Abstract } \noindent \textbf{Motivation: }Automatically testing changes to code is an essential feature of continuous integration. For open-source code, without licensed dependencies, a variety of continuous integration services exist. The COnstraint-Based Reconstruction and Analysis (COBRA) Toolbox is a suite of open-source code for computational modelling with dependencies on licensed software. A novel automated framework for continuous integration in a semi-licensed environment is required for the development of \texttt{the COBRA Toolbox} and related tools of the COBRA community. \noindent \textbf{Results:} \texttt{ARTENOLIS} is a general-purpose infrastructure software application that implements continuous integration for open-source software with licensed dependencies. It uses a \emph{master-slave} framework, tests code on multiple operating systems and with multiple versions of licensed software dependencies. \texttt{ARTENOLIS} ensures the stability, integrity, and cross-platform compatibility of code in \texttt{the COBRA Toolbox} and related tools. \noindent \textbf{Availability and Implementation: }The continuous integration server, core of the reproducibility and testing infrastructure, can be freely accessed under \href{http://prince.lcsb.uni.lu/jenkins}{artenolis.lcsb.uni.lu}. The continuous integration framework code is located in the \mcode{.ci} directory and at the root of the repository freely available under \href{http://github.com/opencobra/cobratoolbox}{github.com/opencobra/cobratoolbox}. \noindent \textbf{Contact: }\href{mailto:ronan.mt.fleming@gmail.com }{ronan.mt.fleming@gmail.com } \noindent \textbf{Supplementary information: }Supplementary data are available at Bioinformatics online. \noindent \rule[0.5ex]{1\textwidth}{0.5pt} \begin{multicols}{2} \section{Introduction} Implementation of measures to ensure reproducibility of computational analyses is a fundamental aspect of scientific credibility {\color{ForestGreen}\citep{Bak16, Mun17}}, in particular in computational biology {\color{ForestGreen}\citep{Bea17}}. Analysis software developed collaboratively offers the potential for synergistic effort, but is prone to instability over time due to varying degrees of specialist domain knowledge of code contributors, laborious manual integration of individual code contributions, misinterpretation of the intended operation of code, or, especially in academia, the lack of personnel continuity. Tracking of versions and merging of code is typically done with version control software, e.g., \texttt{git} ({\color{blueTOC}\href{https://git-scm.com/}{git-scm.com}}). The process of integrating individual code contributions can be semi-automated using a \emph{continuous integration} approach {\color{ForestGreen}\citep{Duv07}}, which involves automatically testing proposed software changes before they are considered for merger with the main code base {\color{ForestGreen}\citep{Sta14}}. Continuous integration enables immediate evaluation of the system-wide impact of local code changes and accelerates the development of quality code by enabling the early detection and tracking of errors, defects, or compatibility issues that can arise when a proposed software change undermines some previously established functionality in an unanticipated manner.
Continuous integration systems, such as \texttt{Travis} ({\color{blueTOC}\href{https://travis-ci.org}{travis-ci.org}}) or \texttt{Jenkins} ({\color{blueTOC}\href{https://jenkins.io/}{jenkins.io}}), are tailored to continuously integrate code and are commonly used for testing open-source code written in Python {\color{ForestGreen}\citep{Ebr13}} or Julia {\color{ForestGreen}\citep{Hei17}} on a publicly available infrastructure. \texttt{The COBRA Toolbox} is a collaboratively developed software tool for creation, analysis and mechanistic modelling of genome-scale biochemical networks {\color{ForestGreen}\citep{HeiArr17}}. It is distributed as open-source code ({\color{blueTOC}\href{http://github.com/opencobra/cobratoolbox}{github.com/opencobra/cobratoolbox}}), but as it is dependent on MATLAB (The Mathworks, Inc.), using public infrastructure for continuous integration is hardly possible. There is a need for a customisable and fully-controllable environment that offers several types of continuous integration jobs, allows for 20 or more concurrent builds, is expandable, permits cross-repository testing on multiple operating systems (including \emph{macOS}), and that satisfies short and long-term financial constraints. \begin{figure*}[t] \noindent \begin{centering} \includegraphics[width=0.99\textwidth]{setup} \vspace*{-0.4cm} \par\end{centering} \caption{The core of the infrastructure is \texttt{Jenkins}, the open source automation server for continuous delivery {\color{ForestGreen}\citep{Fer11}}. The continuous integration infrastructure consists of a cascade of 3 distinct layers and follows a \emph{master-slave} architecture: a \emph{public interaction layer} (top), the \texttt{\emph{Jenkins}}\emph{ server layer}, and the \emph{layer with 4 computing nodes} behind a firewall, each running a different operating system. A change made in a public repository on \texttt{GitHub} ({\color{blueTOC}\href{https://github.com}{github.com}}) is seen by the \texttt{Jenkins} server (\emph{master}), which in turn triggers multiple builds on the 4 computing nodes (\emph{slaves}) simultaneously (see the \emph{Supplementary Information} section for details).} \label{fig:Continuous-integration-setup.} \vspace{-2mm} \end{figure*} This need is apparent when aiming at integrating publicly available code that is dependent on licensed software, such as MATLAB, and even more so with regard to technical challenges, such as the activation of licensed software or the size of proprietary and licensed software container images. Here we present a general-purpose infrastructure software application for continuous integration that is compatible with dependencies on licensed software. We illustrate its utility for development of software within \texttt{the COBRA Toolbox}, but \texttt{ARTENOLIS} is ready to be used with other tools that have licensed dependencies. This approach ensures that existing high-quality \texttt{COBRA Toolbox} code is stably maintained to ensure reproducibility, yet permits the rapid integration of improvements to existing methods as well as the addition of novel COBRA methods by an active and geographically dispersed openCOBRA development community ({\color{blueTOC}\href{https://opencobra.github.io/}{opencobra.github.io}}). \vspace*{-0.8cm} \section{Implementation} \label{sec:Implementation} \texttt{ARTENOLIS} shown in Figure~\ref{fig:Continuous-integration-setup.} ensures that the rarely smooth and seamless process of integrating individual contributions, the so-called \textit{Integration Hell}, is avoided.
The implementation of \texttt{ARTENOLIS} includes the configuration of \texttt{Jenkins} and its slaves, the continuous integration code that couples \texttt{ARTENOLIS} to the repository, and customisation code for repository testing and code quality evaluation. Key to the present multi-lingual implementation are the unique cross-platform triggering mechanism, the multiple versions of MATLAB and dependencies, and the customised and on-the-fly tutorial and documentation deployment. Prompt feedback on the quality and the stability of a submission is provided through a comprehensive console output and build badges before the merger of a code contribution, the evaluation of the actual stability, and the deployment of the documentation. The tutorials are automatically and regularly tested to ensure error-free execution with the fast-developing code base. \vspace{-0.5cm} \section{Conclusions} \label{sec:Conclusions} \texttt{ARTENOLIS} offers a flexible continuous integration solution for development of any open-source software with licensed dependencies. Applied to software development in \texttt{the COBRA Toolbox}, it ensures the stability required for reproducibility of research results yet accelerates the evolution of software implementations of novel constraint-based modelling methods. \vspace{-0.5cm} \section*{Acknowledgements and funding} The authors acknowledge the support of Thomas Pfau for extensively testing the continuous integration server. The Reproducible Research Results (R3) team of the Luxembourg Centre for Systems Biomedicine is acknowledged for support of the project and for promoting reproducible research. \texttt{ARTENOLIS} was funded by the National Centre of Excellence in Research (NCER) on Parkinson's disease, the U.S. Department of Energy, Offices of Advanced Scientific Computing Research and the Biological and Environmental Research as part of the Scientific Discovery Through Advanced Computing program, grant no. DE-SC0010429, and the Luxembourg National Research Fund (FNR) ATTRACT program (FNR/A12/01) and OPEN (FNR/O16/11402054) grants. \noindent \emph{Conflict of Interest:} none declared. \vspace{-0.5cm} \bibliographystyle{bioinformatics} \fontsize{7}{9}\selectfont \section{Background and overview} Software engineers in industry routinely apply continuous integration approaches to ensure that code developed in a collaborative way results in stable and cross-platform compatible software products \citep{Sil17} and to shorten turnaround times. Continuous integration is used increasingly often in computational biology, especially for reproducing data-driven research results \citep{Gar13,Nar16}, but rarely when software has a dependency on commercial or licensed software. Continuous integration practices are adopted at a different pace for industrial and open-source software projects as both face different challenges in software development \citep{Hol03}. Typically, in a research environment, code might not be used beyond the scientist who authored the code. Researchers focus on obtaining results swiftly rather than writing prototype code that is compatible with several licensed software versions or operating systems. A common practice is to store multiple versions of code on various media with regular backups. This setup may work well for a small code base used by a single user, but is prohibitive in a collaborative environment, and more so when intending to guarantee reproducibility of results and aiming to provide a stable code base.
According to \citet{Duv07} and \citet{Fow06}, who analysed similar situations, a continuous integration system should be central to a development process aiming for stability, high quality, reproducible results, bug-free code execution, and a pleasant end-user experience. Metrics for code quality, such as code grade or code coverage, help in developing functional, high-quality code and, ultimately, code that ensures reproducibility of results (see Section \ref{subsec:Code-quality}). For growing projects, scaling up the continuous integration system could be envisaged \citep{Mey14}. Nonetheless, not only must the continuously integrated code repositories be maintained; the same holds for the running reproducibility and testing environment \citep{Rog04}. Integrated software development and analysis environments, such as MATLAB, make it straightforward to start coding and modify existing code, even for novice users. Within the COBRA community, end users run their code on a variety of operating systems, such as \emph{Windows}, \emph{macOS,} or \emph{Linux}. \texttt{ARTENOLIS} benefits all end users, especially \emph{Windows} users, considerably by ironing out incompatibilities and defects. The underlying server architecture is explained in detail in Section \ref{subsec:Server-architecture} and the configuration of \texttt{ARTENOLIS} is described in Section \ref{sec:Configuration-of-Continuous}. \vspace*{-2mm} \section{Cascade of the Continuous Integration system} \label{sec:Continuous-integration-system} The cascade shown in Figure~1 is typical of a continuous integration system of a reproducibility and testing infrastructure, but includes specificities tailored to \texttt{the COBRA Toolbox}. \subsubsection*{Public interaction layer} The main code base, as well as all of its \emph{forks}, are located on GitHub, e.g., the repository of \texttt{the COBRA Toolbox} is located at \href{http://github.com/opencobra/cobratoolbox}{github.com/opencobra/cobratoolbox}. A \emph{fork} is an individual, modifiable copy of the main code base owned by the contributor with \emph{<userName>}, and is located under \href{http://github.com/<userName>/cobratoolbox}{github.com/<userName>/cobratoolbox}. Contributors may contribute to the main code base by opening a \emph{pull request }(PR), which is reviewed and tested. In particular, developers of \texttt{the COBRA Toolbox} repository may easily submit pull requests using the \texttt{MATLAB.devTools} (\href{https://github.com/opencobra/MATLAB.devTools}{github.com/opencobra/MATLAB.devTools}). The public interaction layer also includes the badges of the build status (see Section \ref{subsec:Build-status}) and the publicly accessible website \href{http://opencobra.github.io/cobratoolbox}{opencobra.github.io/cobratoolbox}, which hosts the documentation and tutorials. The details on the generation and deployment of the documentation are provided in Section \ref{sec:Documentation-and-tutorials}. The repository of \texttt{the COBRA Toolbox} is structured according to common practices of continuous integration (e.g., \href{https://github.com/JuliaLang/julia}{github.com/JuliaLang/julia}) and is explained in detail in Section \ref{subsec:Repository-structure-and}. \subsubsection*{Jenkins server layer} The main element of the \texttt{Jenkins} server layer is the \texttt{Jenkins} software installed on a virtual machine, which acts as a \emph{master} node.
The \texttt{Jenkins} server is accessible from the Internet and is constantly listening for activity on the main GitHub repository via \emph{web-hooks}. Once a PR is submitted by a contributor, builds are triggered on the slave machines through secure Java Web Start communication. The \texttt{Jenkins} server layer partially includes the \texttt{codecov} and \texttt{Documenter.py} elements. The coverage report is prepared on the \texttt{Jenkins} server and is made publicly available through \texttt{codecov} under \href{http://codecov.io/gh/opencobra/cobratoolbox}{codecov.io/gh/opencobra/cobratoolbox}. Similarly, the documentation is generated on the \texttt{Jenkins} server before being deployed to the \href{http://opencobra.github.io/cobratoolbox}{opencobra.github.io/cobratoolbox} website. \subsubsection*{Computing nodes layer} The computing nodes layer contains 4 computing nodes, each running a different operating system. Most importantly, the continuous integration system set up for \texttt{the COBRA Toolbox} runs on the most popular operating systems in the COBRA community: \emph{Linux}, \emph{macOS}, \emph{Windows 7}, and \emph{Windows 10}. A virtualisation environment bears key advantages for a continuous integration system and is described in Section \ref{subsec:Server-architecture}. When triggered, each of the 4 computing nodes launches the last 4 stable versions of MATLAB (R2014b, R2015b, R2016b, and R2017b). Each MATLAB version then runs the dedicated testing suite (see Section \ref{subsec:Continuous-integration-test}), which makes use of multiple solvers, such as CPLEX \citep{Ibm17}, Gurobi \citep{Gur17}, Tomlab \citep{Hol04}, Mosek \citep{And00}, GLPK (GNU Linear Programming Kit), PDCO \citep{Che98}, and DQQ \citep{Ma17}. \vspace*{-2mm} \section{Architecture} \label{subsec:Server-architecture} \subsection{Computing nodes and resources} \label{subsec:Computing-nodes-and} \begin{table*}[t] \noindent \begin{centering} \small{% \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}>{\centering}m{1.5cm}>{\centering}m{2.8cm}ccc>{\centering}m{1.5cm}>{\centering}m{2.1cm}>{\centering}m{2.2cm}} \textbf{Physical machine } & \textbf{Name} & \textbf{Type} & \textbf{Mode} & \textbf{Memory} & \textbf{Number of cores} & \textbf{Operating System} & \textbf{Storage}\tabularnewline \hline 1 & \emph{Jenkins server} & Virtual & master & 8 GB ECC & 4 & \emph{Ubuntu 16.04} & 34 GB (OS) + 120 GB (data)\tabularnewline \hline \multirow{3}{1.5cm}{\centering2} & \emph{Linux node} & \multirow{3}{*}{Virtual} & \multirow{3}{*}{slave} & \multirow{3}{*}{18 GB ECC} & \multirow{3}{1.5cm}{\centering20} & \emph{Ubuntu 16.04} & 46 GB (OS) + 200 GB (data)\tabularnewline \cline{2-2} \cline{7-8} & \emph{Windows 7 node} & & & & & \emph{Windows 7} & 60 GB (OS) + 200 GB (data)\tabularnewline \cline{2-2} \cline{7-8} & \emph{Windows 10 node} & & & & & \emph{Windows 10} & 50 GB (OS) + 200 GB (data)\tabularnewline \hline 3 & \emph{Mac node} & Physical & slave & 24 GB ECC & 12 & \emph{macOS 10.12} & 250 GB\tabularnewline \hline \end{tabular*}} \par\end{centering} \vspace{1mm} \caption{Technical specifications of \emph{master }and\emph{ slave} nodes (computing nodes layer) within \texttt{ARTENOLIS}.} \label{tab:Specifications-of-machines} \end{table*} The advantage of virtual machines over physical machines is that various computing environments can co-exist on a single physical machine. A setup with fewer physical machines but with multiple virtual environments is more economical and occupies a smaller space in the server racks.
Thanks to a virtualisation layer and a hypervisor monitoring the health of the system (see Section \ref{subsec:Virtualization-layer}), new virtual machines can be created or deleted on demand and based on the available capacity of the physical server. The \texttt{Jenkins} virtual machine running the \emph{Linux Ubuntu} operating system is a shared resource, is referred to as the \emph{master} node, handles HTTPS requests, and orchestrates the slave nodes. The \emph{Linux} (Debian based), \emph{Windows 7}, and \emph{Windows 10} operating systems are virtual machines and are running on the same physical computing node, whereas the \emph{macOS} operating system is running on a dedicated physical machine. The specifications of the computing nodes are provided in Table \ref{tab:Specifications-of-machines}. A limited amount of storage is usually provided to each virtual machine and only satisfies the need to run and store small amounts of data. For larger data, such as build data, an NFS (Network File System) or SMB (Server Message Block) mount point is provided on a central storage system. \subsection{Virtual machine management and access control} \label{subsec:Virtual-machine-management} Security and appropriate access control to each virtual machine and associated services are provided through \emph{FreeIPA} (\href{http://freeipa.org}{freeipa.org}). \emph{FreeIPA} provides a centralised resource to control authentication, authorisation, and account information by storing all information about users, virtual machines, and other sets of objects required to manage the security of the virtual machines. In order to reduce maintenance and initialisation time of new virtual machines, the virtual machines are deployed using \emph{Foreman} (\href{http://theforeman.org}{theforeman.org}), a virtualisation-environment-agnostic web tool typically used to manage the complete lifecycle of virtual machines. All physical and virtual servers (except the \emph{macOS} node) are configured via \emph{Puppet}, which is a configuration management tool that facilitates standardised configurations across a pool of servers, such as firewall definitions, administration SSH (Secure Shell) keys, default packages, and other settings. In the current setup, the compliance of all machine configurations is constantly monitored, while periodic health reports are provided to \emph{Foreman}. Besides the configuration monitoring and deployment, the performance of the virtual machines is monitored using \emph{netdata} (\href{https://github.com/firehol/netdata}{github.com/firehol/netdata}), which is particularly useful when evaluating the performance of the continuous integration test suite (see Section \ref{subsec:Job-definitions}). \subsection{Virtualisation layer} \label{subsec:Virtualization-layer} Hypervisor software runs on a server handling virtual machines (VMs). In the present case, \emph{oVirt} (\href{http://ovirt.org}{ovirt.org}) is installed on each of the physical servers, and is a key element in VM management. \emph{Kernel-based Virtual Machine} (KVM), a virtualisation infrastructure, is used as the hypervisor layer. \emph{oVirt} provides a graphical user interface (GUI) to manage all physical and logical resources needed for the virtual infrastructure (e.g., storage, network, data centres). In the current continuous integration system, the 2 physical servers (not the \emph{macOS} node, see Table \ref{tab:Specifications-of-machines}) are running \emph{CentOS 7.3} and are virtualised using \emph{oVirt 4.1.0}.
\vspace{-3mm} \section{Configuration of ARTENOLIS} \label{sec:Configuration-of-Continuous} \subsection{Repository structure and branches} \label{subsec:Repository-structure-and} In Table \ref{tab:Directory-structure-of}, the main directories of \texttt{the COBRA Toolbox} repository are listed together with their respective purpose. The test suite is located in \emph{/test}, while the source files of the code base are located in the \emph{/src} directory. These two directories, together with \emph{/tutorials}, are key components. The \emph{/.ci} directory contains scripts required for continuous integration. The scripts \emph{travis.yml }and \emph{codecov.yml} located at the root of the repository are required to trigger the jobs (see Section \ref{subsec:Continuous-integration-(CI)}) and to report code coverage (see Section \ref{subsec:Code-coverage}), respectively. \vspace{-2mm} \begin{table}[H] \noindent \begin{centering} \small{% \begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}l>{\raggedright}m{5cm}} \textbf{Directory/file name} & \textbf{Purpose}\tabularnewline \hline \emph{/.ci} & Directory with continuous integration bash-scripts \tabularnewline \hline \emph{/src} & Directory with code source files\tabularnewline \hline \emph{/test} & Directory with test files and \emph{testAll.m} \tabularnewline \hline \emph{/tutorials} & Directory with tutorial \emph{.mlx} files \tabularnewline \hline \emph{travis.yml} & YAML trigger script\tabularnewline \hline \emph{codecov.yml} & YAML script for code coverage report \tabularnewline \hline \end{tabular*}} \par\end{centering} \vspace{1mm} \caption{Structure and key directories and files of \texttt{the COBRA Toolbox} repository.} \label{tab:Directory-structure-of} \end{table} \vspace{-5mm} As explained in Section \ref{subsec:Job-definitions}, \texttt{the COBRA Toolbox} follows the common development model of a stable \emph{master} branch and a \emph{develop} branch for development. This development model is particularly well suited for a reproducibility and testing infrastructure, such as \texttt{ARTENOLIS}. It is against the \emph{develop} branch that new pull requests are raised by developers, while it is the \emph{master} branch that contains the stable version of \texttt{the COBRA Toolbox}. A regular merge strategy from the \emph{develop} to the \emph{master} branch ensures that the latest features are adopted in the stable version. As the development of new features takes place on separate branches that are merged into the \emph{develop} branch only through Pull Requests (PRs), and as each PR is tested by the continuous integration system, the risk of the \emph{develop} branch failing to build is very low. The resulting stability of the \emph{master} branch is particularly important in a fast-moving research environment that relies on the reproducibility of data-driven results.
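For illustration, the raw \texttt{git} commands behind this contribution model might look as follows. This is a minimal sketch in which the branch name is a placeholder; in practice, developers are encouraged to use the \texttt{MATLAB.devTools} instead.
\begin{lstlisting}[style=bashStyle, frame=single, basicstyle=\small\ttfamily]
# Clone the personal fork and track the main repository as upstream.
git clone https://github.com/<userName>/cobratoolbox.git
cd cobratoolbox
git remote add upstream https://github.com/opencobra/cobratoolbox.git
git fetch upstream

# Develop a new feature on a dedicated branch based on develop.
git checkout -b myFeature upstream/develop
# ... edit code and tests ...
git commit -am "describe the new feature"
git push origin myFeature    # then open a PR against develop
\end{lstlisting}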
\vspace{-2mm} \subsection{Local and GitHub user accounts} \label{subsec:Local-and-Github} \begin{table*}[t] \noindent \begin{centering} \small{% \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}l>{\raggedright}p{3.5cm}cccc} \textbf{Job name} & \textbf{Description} & \textbf{R2017b} & \textbf{R2016b} & \textbf{R2015b} & \textbf{R2014b}\tabularnewline \hline COBRAToolbox-branches-auto-\textbf{linux} & \multirow{4}{3.5cm}{Build the \emph{develop} and \emph{master} branches (automatic trigger)} & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \cline{1-1} \cline{3-6} COBRAToolbox-branches-auto-\textbf{macOS} & & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \cline{1-1} \cline{3-6} COBRAToolbox-branches-auto-\textbf{windows7} & & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \cline{1-1} \cline{3-6} COBRAToolbox-branches-auto-\textbf{windows10} & & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \hline \hline COBRAToolbox-pr-auto-\textbf{linux} & \multirow{4}{3.5cm}{Build any newly submitted Pull Request } & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \cline{1-1} \cline{3-6} COBRAToolbox-pr-auto-\textbf{macOS} & & & $\star$ & & \tabularnewline \cline{1-1} \cline{3-6} COBRAToolbox-pr-auto-\textbf{windows7} & & & $\star$ & & \tabularnewline \cline{1-1} \cline{3-6} COBRAToolbox-pr-auto-\textbf{windows10} & & & $\star$ & & \tabularnewline \hline \hline COBRAToolbox-branches-manual-linux & Build the \emph{develop} and \emph{master} branches by SHA1 (Secure Hash Algorithm 1)\\ \textit{(manual trigger)} & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \hline COBRAToolbox-pr-manual-linux & Build a Pull Request by SHA1 \\ \textit{(manual trigger)} & $\star$ & $\star$ & $\star$ & $\star$\tabularnewline \hline \end{tabular*}} \par\end{centering} \vspace{1mm} \caption{Job definitions for the continuous integration setup of \texttt{the COBRA Toolbox} repository.\label{tab:Job-definitions-for}} \end{table*} A local non-administrative user account on the slave nodes is set up in order to run MATLAB as a specific user during a continuous integration build, have proper read/write permissions, and allow for simplified repository maintenance and/or debugging. This user is not bound to a physical administrator, and can be associated with licenses or perform repetitive tasks, such as setting the build badges (see Section \ref{subsec:Continuous-integration-feedback}). This local user account is also not intended to be used to perform administrative tasks on the computing node. Within \texttt{ARTENOLIS}, this local user account is named \emph{jenkins}, and is independent of server-wide access control as explained in Section \ref{subsec:Virtual-machine-management}. In order to perform certain repository-related tasks, such as deploying the documentation (see Section \ref{sec:Documentation-and-tutorials}), a dedicated GitHub account that is not related to a physical person has been created. This GitHub account is a bot-type account, named \emph{cobrabot}, and is only used for administrative tasks or committing/pushing automatically to the GitHub server. \subsection{\emph{master} and \emph{slave} nodes} \label{subsec:Configuration-of-master} The configuration of the continuous integration system exploits the \emph{master/slave} functionality integrated into \texttt{Jenkins}. A large number of jobs running on various operating systems (see Section \ref{subsec:Job-definitions}) may be triggered from a single \texttt{Jenkins} server.
The workload of multiple jobs can be delegated by the \emph{master} node to multiple \emph{slave} nodes, which allows for a single \texttt{Jenkins} installation. The \emph{master} node serves the web interface of \texttt{Jenkins} (\href{http://artenolis.lcsb.uni.lu/}{artenolis.lcsb.uni.lu}) and acts as a portal to the entire farm of slave nodes. The load of the jobs is distributed by the \emph{master} node, while the \emph{slave} nodes are triggered accordingly. The slaves are connected to the master node through Java Web Start (Java Network Launch Protocol - JNLP). An agent running on each slave listens for a triggering signal from \texttt{Jenkins} on the master node. Once triggered, the \emph{slave} node executes the job and reports a build status back to the \emph{master} node as explained in Section \ref{subsec:Build-status}. Importantly, on all computing nodes, the service running the agent must be configured to launch upon startup of the server, as \texttt{Jenkins} on the \emph{master} node is not able to wake the agent on the slaves. On \emph{macOS}, it is important to run \emph{Caffeine} (\href{http://lightheadsw.com/caffeine}{lightheadsw.com/caffeine}), a tiny program that prevents the \textit{macOS} system from automatically activating the sleep function. \subsection{Job definitions} \label{subsec:Job-definitions} A job is a configuration of a build pipeline on the \texttt{Jenkins} server; each job has a specific purpose, runs on a different slave/operating system, or builds a different branch of the repository. An example of such a configuration file is given under \href{http://artenolis.lcsb.uni.lu/userContent/configExample.yml}{artenolis.lcsb.uni.lu/userContent/configExample.yml}. The job definitions in \texttt{Jenkins} on the master node are configured accordingly as indicated in Table \ref{tab:Job-definitions-for}. In essence, one job is defined per operating system, and in this particular continuous integration setup, one job per slave. This setup has been chosen in order to provide the highest robustness of \texttt{ARTENOLIS}. A \texttt{Jenkins} job may also be parameterised. In the case of \texttt{the COBRA Toolbox} repository, a matrix of sub-jobs is generated for different MATLAB versions using the \mcode{MATLAB_VER} parameter (see the sketch below). In order to streamline the continuous integration setup, there are two distinct job types: jobs that trigger builds of the \emph{develop} and \emph{master} branches (marked with \mcode{-branches}) and jobs marked with \mcode{-pr} that build any newly submitted pull request (PR). Each job type is either triggered automatically by \texttt{Jenkins} or manually by an administrator using the SHA1 of the commit. As the \emph{develop} and \emph{master} branches must be tested for all supported MATLAB versions, the \mcode{-branches} jobs run for each supported MATLAB version on each slave. However, whenever a Pull Request is submitted, only the job running on the \emph{Linux} node triggers the test suite on all supported MATLAB versions. On all other slaves (\emph{macOS}, \emph{Windows} \emph{7}, and \emph{Windows 10}), only the most stable and most used version of MATLAB is tested. This setup of the reproducibility and testing infrastructure is tailored to reduce the required computational resources to a minimum. In order to ensure flexibility of the continuous integration system and in case of emergency, the same job types may be manually triggered on the \textit{Linux} platform.
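To make the parameterisation concrete, the effect of the \mcode{MATLAB_VER} axis can be emulated by a simple loop over the supported versions. The following is a hypothetical sketch only: the actual sub-jobs are defined and launched (in parallel) by \texttt{Jenkins}, and the \mcode{INSTALLDIR} value is a placeholder.
\begin{lstlisting}[style=bashStyle, frame=single, basicstyle=\small\ttfamily]
#!/bin/sh
# Hypothetical emulation of the MATLAB_VER build matrix on the Linux slave.
for MATLAB_VER in R2014b R2015b R2016b R2017b; do
    ARCH=Linux MATLAB_VER=$MATLAB_VER INSTALLDIR=/usr/local \
        bash .ci/runtests.sh || exit 1
done
\end{lstlisting}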
Whenever the \emph{develop} or \emph{master} branches fail to build, or a pull request fails to be triggered properly, the specific SHA1 of the failing commit may be built separately. In addition, the job may also be manually relaunched. The manual trigger is only available on the \emph{Linux} node in order to permit swift debugging, rapid intervention, and lowest resource consumption. Another good practice is to configure a job for testing purposes (e.g., a job that only builds a specific branch) before making any production changes on critical jobs (i.e., the \mcode{-branches} jobs). Although \emph{Mathworks Inc.} releases two versions of MATLAB per year, only the last released versions (version \emph{b}) are officially supported by \texttt{the COBRA Toolbox} and run with various solver packages on each supported operating system. Certain solvers are on a different release schedule, and the compatibility of certain solvers is generally only ensured a long time after the release of a new MATLAB version. Such combinations are prone to incompatibilities. The compatibility of \texttt{the COBRA Toolbox} has been tested and evaluated separately such that incompatibility issues between solver versions, operating systems, and MATLAB versions do not cause crashes on the continuous integration server or on the user system. A solver compatibility matrix has been established for all supported operating systems and actively supported solver interfaces with their respective versions. This compatibility matrix is used by MATLAB during runtime to determine whether a certain user setup is compatible or not (\href{http://opencobra.github.io/cobratoolbox/docs/compatibility.html}{opencobra.github.io/cobratoolbox/docs/compatibility.html}), and ensures that certain tests on the continuous integration setup are skipped, which would otherwise lead to a build failure. Each job can either succeed or fail. The build stability and the build trend of each job are monitored and can be retrieved from \href{http://artenolis.lcsb.uni.lu/job/<jobName>/buildTimeTrend}{artenolis.lcsb.uni.lu/job/<jobName>/buildTime-Trend}, where \emph{<jobName>} is the name of a job as listed in Table \ref{tab:Job-definitions-for}. Currently, a job on the continuous integration system takes on average around 30--40 minutes to finish (all MATLAB versions). Although all MATLAB versions are launched in parallel, and despite some tests of the test suite requesting a parallel pool of workers, the memory consumption is moderate. Each job requests about 12 GB of memory, while the CPUs of the virtual machines are utilised at up to 60\%. For each job, input/output (IO) on the slaves is high at the beginning during the repository cloning phase, but is negligible during the test run itself. If more CPU power or RAM were needed in the future, the technical characteristics of the virtual machines could effortlessly be changed, which is another advantage of the \emph{master-slave} setup and of the virtual machines as explained in Sections \ref{subsec:Configuration-of-master} and \ref{subsec:Computing-nodes-and}. As shown in Table \ref{tab:Specifications-of-machines}, the hard-drive of each of the slaves has limited capacity. All builds are stored on a central storage server, and old builds are discarded. The workspace itself is cleaned after each job in order to reduce the storage needs. The performance of the slaves and of the hypervisor are monitored internally using \emph{netdata} (\href{http://github.com/firehol/netdata}{github.com/firehol/netdata}).
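As a hypothetical illustration of such monitoring, the recent CPU utilisation of a slave node can be retrieved from its \emph{netdata} instance through the \emph{netdata} REST API (the node name is a placeholder; 19999 is the default \emph{netdata} port):
\begin{lstlisting}[style=bashStyle, frame=single, basicstyle=\small\ttfamily]
# Query the netdata REST API of a slave for the CPU chart of the last minute.
curl -s "http://<slaveNode>:19999/api/v1/data?chart=system.cpu&after=-60"
\end{lstlisting}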
Over time, the \texttt{Jenkins} memory usage on the \emph{master} node increases gradually. In order to prevent the \emph{master} node from swapping memory, \texttt{Jenkins} is restarted every night when idle. A regular restart also ensures that the configuration files of \texttt{Jenkins} are reloaded and that the latest configuration files of \texttt{Jenkins} and the jobs are used. \subsection{Continuous integration scripts} \label{subsec:Continuous-integration-(CI)} When triggered, each job starts by cloning the repository and checking out the latest commit or the commit to be built. On the \texttt{Jenkins} node, the Java web call triggers the interpretation of a YAML (YAML Ain't Markup Language) script, namely the \emph{.travis.yml} script shown in Listing \ref{lst:travisScript}. Together with build-specific environment variables, an executable shell script, also known as the Hudson shell file, is generated. This Hudson shell file is then sent to and executed on each slave in a shell-like environment. The continuous integration process is consequently started through the \emph{.travis.yml} script placed at the root of the repository directory. \begin{lstlisting}[caption={.travis.yml script interpreted by Jenkins and used to trigger the \mcode{runtests.sh} script.}\vspace{1mm},label={lst:travisScript}, style=yamlStyle, frame=single, basicstyle=\small\ttfamily]
language: bash

before_install:
  # fresh clone of the repository
  - if [[ -a .git/shallow ]]; then git fetch --unshallow; fi

script:
  # launch the tests
  - bash .ci/runtests.sh
\end{lstlisting} \vspace{3mm} As a shell script offers cross-platform flexibility, a dedicated shell script, \emph{runtests.sh}, located in the \emph{/.ci} folder, is called from the YAML script. The shell script \emph{runtests.sh} is shown in Listing \ref{lst:runtestsScript} and runs on all supported platforms. Each platform is identified by the environment variable \mcode{ARCH}, which is set by the job definitions (see Section \ref{subsec:Job-definitions}). The simplest launch command is set for UNIX operating systems (\emph{Linux} and \emph{macOS}). As explained in Section \ref{subsec:Configuration-of-master}, the need for the tiny \emph{caffeine} program becomes apparent when running the script on \emph{macOS}. On UNIX, any output of MATLAB while running the \emph{testAll.m} script is routed directly to the shell of the slave, and ultimately, to the console publicly accessible on \texttt{Jenkins} under \href{http://artenolis.lcsb.uni.lu/job/<jobName>/<buildNumber>/MATLAB_VER=<MATLABversion>,label=<platform>/console}{artenolis.lcsb.uni.lu/job/<jobName>/<buildNumber>/MATLAB\_{}VER=\\<version>,label=<platform>/console}, where \emph{<jobName>} is the name of the job as defined in Table \ref{tab:Job-definitions-for}, \emph{<buildNumber>} is the number of the build, \emph{<platform>} is the label of the operating system, and \emph{<version>} is the MATLAB version.
\begin{lstlisting}[caption={\mcode{runtests.sh} script used to launch the MATLAB sessions}\vspace{1mm}, label={lst:runtestsScript}, style=bashStyle, frame=single, float=*, basicstyle=\small\ttfamily]
#!/bin/sh

if [ "$ARCH" == "Linux" ]; then
    $INSTALLDIR/MATLAB/$MATLAB_VER/bin/./matlab -nodesktop -nosplash < test/testAll.m

elif [ "$ARCH" == "macOS" ]; then
    caffeinate -u &
    $INSTALLDIR/MATLAB_$MATLAB_VER.app/bin/matlab -nodesktop -nosplash < test/testAll.m

elif [ "$ARCH" == "windows" ]; then
    # change to the build directory
    echo " -- changing to the build directory --"
    cd "D:\\jenkins\\workspace\\COBRAToolbox-windows\\MATLAB_VER\\$MATLAB_VER\\label\\$ARCH"

    echo " -- launching MATLAB --"
    unset Path
    nohup "D:\\MATLAB\\$MATLAB_VER\\bin\\matlab.exe" -nojvm -nodesktop -nosplash -useStartupFolderPref -logfile output.log -wait -r "restoredefaultpath; cd test; testAll;" &
    PID=$!

    # follow the log file
    tail -n0 -F --pid=$! output.log 2>/dev/null

    # wait until the background process is done
    wait $PID
fi

CODE=$?
exit $CODE
\end{lstlisting} A key challenge with setting up \texttt{ARTENOLIS} is to trigger a build on a \emph{Windows} platform and launch MATLAB while providing live feedback, similar to UNIX platforms. As no native \emph{bash} console exists on DOS-based systems, the output of MATLAB is not directly routed to the console. Instead, \texttt{git Bash} (\href{http://git-scm.com/download/win}{git-scm.com/download/win}) is used, and the Hudson script launching MATLAB is executed in \emph{sh.exe}. As the output from a shell-like environment is not routed back to the \emph{master} node directly, a computational trick must be used in order to nevertheless display a live console output on the \texttt{Jenkins} web interface. This trick ensures the homogeneity of \texttt{ARTENOLIS} despite the large differences between the DOS and UNIX operating systems supported by \texttt{ARTENOLIS}. The output of MATLAB is routed to a file (\emph{output.log}), while MATLAB is run as a background process marked with \mcode{&}. The process ID is saved as \mcode{PID}. While the background process is running, the log file is constantly read in (or \emph{followed}) by the system command \mcode{tail}, whose output is redirected to \texttt{Jenkins}. The shell script \emph{runtests.sh} then only exits once the MATLAB process with \mcode{PID} has terminated. Any error code thrown during this process is caught in the variable \mcode{$CODE}. This is the exit code of the script \emph{runtests.sh} that is returned to \texttt{Jenkins} running on the \textit{master} node. More details on how this feedback code is interpreted are given in Section \ref{subsec:Continuous-integration-feedback}. \subsection{GitHub interaction} \label{subsec:GitHub-interaction} In order for a build to be triggered from GitHub on the \texttt{Jenkins} server, a so-called \emph{web-hook} has been installed on GitHub. This hook listens to events that occur on the GitHub repository of \texttt{the COBRA Toolbox}, which include the opening of a pull request, a new commit, or another status modification. On \texttt{Jenkins}, a cookie is stored that allows \texttt{Jenkins} to listen to that particular hook and ensures that builds cannot be triggered by any other hook. A valuable feature of the continuous integration system is that \texttt{Jenkins} integrates seamlessly with the version control server GitHub.
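The status updates themselves are plain GitHub API v3 calls. The following Python sketch shows how a commit status of the kind described in Section \ref{subsec:Build-status} could be set manually; the repository path is real, but the token, SHA1, and context string are placeholders, and \texttt{Jenkins} normally performs this call itself.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
import requests  # third-party HTTP library

API = "https://api.github.com/repos/opencobra/cobratoolbox/statuses/{sha}"

def set_status(sha, state, token, description=""):
    """Set a commit status; state is one of 'pending', 'success',
    'error', or 'failure' (the states defined by the GitHub API v3)."""
    response = requests.post(
        API.format(sha=sha),
        headers={"Authorization": "token " + token},
        json={
            "state": state,
            "context": "continuous-integration/jenkins",  # placeholder
            "description": description,
        },
    )
    response.raise_for_status()

# set_status("<commit SHA1>", "success", "<access token>", "build finished")
\end{lstlisting}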
Next to each commit that has triggered a build on the continuous integration system, a status is displayed (see Section \ref{subsec:Build-status}). This visual system is standard for continuously integrated repositories on GitHub, and allows developers to swiftly check and track the status of a build from GitHub directly. \section{Evaluation of code stability} \label{sec:MATLAB-testsuite-and} \subsection{Continuous integration test suite} \label{subsec:Continuous-integration-test} The test-suite relies on testing framework functions and is designed to test the functionality and performance of the MATLAB code of \texttt{the COBRA Toolbox}. Once MATLAB is running on the continuous integration server, the test-suite is launched from the \mcode{testAll} script. The test-suite makes use of the dedicated unit testing functions implemented in MATLAB. The \mcode{testAll} script is structured such that \texttt{MOcov} and \texttt{JsonLab} are added to the path first, global variables are defined, and the code quality grade is determined (see Section \ref{subsec:Code-quality-grade}) before running any unit-test functions. The \mcode{runtests} command runs in serial (or in parallel) all test files in the \emph{/test} folder. Once the \mcode{runtests} command has completed, the number of tests that failed and/or are incomplete is evaluated, and the code coverage percentage is computed (see Section \ref{subsec:Code-coverage} for details). A test, part of the continuous integration test suite, is tailored to run a function in the \emph{/src} directory such that a result is output, or a warning or an error is thrown. A test must evaluate the result returned by the function against a pre-computed reference result. This evaluation within the test is performed using the \mcode{assert} function. All tests for \texttt{the COBRA Toolbox} follow a template, and a guide helps developers get started (\href{http://opencobra.github.io/cobratoolbox/docs/contributing.html}{opencobra.github.io/cobratoolbox/docs/contributing.html}). The \mcode{runtests} command prints a table with the test name, the running time of each test, and whether the test failed or succeeded. As the exit status code returned by \mcode{runtests} determines the exit code of the MATLAB process (see Section \ref{subsec:Continuous-integration-(CI)}), the continuous integration test suite is launched within a \mcode{try...catch} statement. The exit code is explicitly set to \mcode{1} if the number of failed or incomplete tests is not $0$. The \mcode{exit(exit_code)} command is called on the continuous integration server before the end of the \mcode{try...catch} statement. Any exception thrown, such as a crash of MATLAB, is caught in the \mcode{catch} statement. On the continuous integration server, this leads to a returned exit code of \mcode{1}, while on a user computer, the exception is explicitly rethrown. The core of the continuous integration system is the test suite. In other words, the number and quality of tests determine the quality of the code in the \emph{/src} directory. Writing tests may be a long process, in particular for bug-prone code. In the case of \texttt{the COBRA Toolbox}, the community picked up writing tests for their own functions, leading to a steady increase in code quality (see Section \ref{subsec:Code-quality}).
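The exit-code logic described above can be paraphrased in a few lines. The following Python sketch mirrors the control flow of the \mcode{testAll} script; the function names are stand-ins, and the actual implementation is MATLAB code.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
import sys

def run_test_suite(run_all_tests, on_ci_server):
    """Paraphrase of the try...catch exit-code logic of testAll."""
    try:
        results = run_all_tests()   # cf. MATLAB's runtests command
        failed = sum(r["failed"] or r["incomplete"] for r in results)
        if on_ci_server:
            sys.exit(1 if failed != 0 else 0)   # cf. exit(exit_code)
    except Exception:
        if on_ci_server:
            sys.exit(1)   # any crash is reported as a failed build
        raise             # on a user machine, the exception is rethrown

# A passing run on a user machine returns without raising:
run_test_suite(lambda: [{"failed": False, "incomplete": False}], False)
\end{lstlisting}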
\subsection{Continuous integration feedback} \label{subsec:Continuous-integration-feedback} \label{subsec:Build-status} An execution of a job on the continuous integration server, or a run of the MATLAB test suite, is considered a \emph{build}. For each commit on the \emph{develop} and \emph{master} branches, a job- and build-specific build status is set on GitHub and updated continuously while the build is running. This commit build status reflects the actual status of the build on the server. The build status is either a \emph{green check mark} for \emph{success}, a \emph{red cross mark} for \emph{failure}, or a \emph{yellow dot} for a job that is pending. A commit can hence cause the failure or the success of a job. The trigger build status, or global build status, is set as successful when all jobs ran without errors on a certain operating system. Commonly, one or multiple badges are prominently displayed on the first page of a GitHub repository, indicating the status of the latest builds of the \emph{develop} branch. The badges are a visual aid for quickly determining the stability and reproducibility of the development of a repository. In the case of \texttt{the COBRA Toolbox} repository, the MATLAB test suite is run on multiple operating systems. As a build status is returned for each build (i.e., for each MATLAB version), a matrix of build statuses is defined, which can be consulted under \href{http://opencobra.github.io/cobratoolbox/docs/builds.html}{opencobra.github.io/cobratoolbox/docs/builds.html}. In order to simplify the readout for the end-user, who is primarily interested in the stability of \texttt{the COBRA Toolbox} on a specific platform, a build status is retrieved for each operating system. To this end, a Julia script continuously retrieves the list of build statuses for the last commit on the \emph{develop} branch using \texttt{GitHub.jl} (\href{http://github.com/JuliaWeb/GitHub.jl}{github.com/JuliaWeb/GitHub.jl}), which provides a Julia interface to the GitHub API v3. The overall status of the build is determined for each operating system, and a visual badge is set on the \texttt{Jenkins} web server. In the rare case of a build failing because of a misconfiguration of a continuous integration server node, or in case of emergency, a build status of a commit may also be changed manually using the GitHub API v3. \begin{figure*}[t] \noindent \begin{centering} \includegraphics[width=1\textwidth]{documenter} \par\end{centering} \vspace{-1mm} \caption{Pipeline for generating, building, and deploying the documentation using \texttt{Documenter.py}. The documentation is automatically deployed from the continuous integration server and generated based on the function headers.} \label{fig:pipelineDocumentation} \end{figure*} Another valuable feedback mechanism is to alert the administrator of the continuous integration system of build failures by email. Other health monitoring tools, such as those explained in Section \ref{subsec:Virtual-machine-management}, help determine the cause of failure in cases where the failure of a build might be system related (e.g., high memory usage or faulty disks). \vspace{-5mm} \section{Documentation and tutorials} \label{sec:Documentation-and-tutorials} The documentation of \texttt{the COBRA Toolbox} is generated and deployed automatically once a push is made on the \emph{master} or \emph{develop} branch of the GitHub repository, and if the build is successful.
The building of the documentation is done in three phases: the generation of the tutorials, the generation of the documentation, and its deployment to a host server. Figure \ref{fig:pipelineDocumentation} represents all the steps involved in the creation and deployment of tutorials and documentation, and summarises the complete pipeline. As MATLAB does not provide a full documentation system to publish documentation of user-created MATLAB code, a specific documentation pipeline that uses the popular Python documentation tool \emph{Sphinx} (\href{http://www.sphinx-doc.org/en/stable/index.html}{sphinx-doc.org}) has been developed. \texttt{Documenter.py} (\href{https://github.com/syarra/Documenter.py}{github.com/syarra/Documenter.py}) is at the heart of the documentation pipeline. The package is designed to combine \emph{reStructuredText} files and inline \textit{docstrings} from MATLAB functions into a single interlinked and publishable documentation. A \textit{docstring} of a function is a string literal specified in the source code that is used to document the function in an easily comprehensible way. In order for the documentation to be generated properly, the docstrings must be written using the keywords listed in Table~\ref{tab:keywords-documentation}. A keyword defines the start of a block of documentation within the header of a function, must be followed by non-empty lines, and is separated from the next block by an empty line. \vspace{-5mm} \begin{table}[H] \begin{centering} \small{% \begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}l|>{\raggedright}p{5cm}} \textbf{Keyword} & \textbf{Purpose}\tabularnewline \hline \mcode{USAGE:} & defines how to use the function\tabularnewline \hline \mcode{INPUT:} or \mcode{INPUTS:} & describes input argument(s)\tabularnewline \hline \mcode{OUTPUT:} or \mcode{OUTPUTS:} & describes the output argument(s) \tabularnewline \hline \mcode{EXAMPLE:} & shows example of code (MATLAB syntax)\tabularnewline \hline \mcode{NOTE:} & displays a highlighted box with text\tabularnewline \hline \mcode{AUTHOR:} & lists author(s) of the function\tabularnewline \hline \end{tabular*}} \par\end{centering} \vspace{1mm} \caption{\label{tab:keywords-documentation}Main keywords in the function headers (docstrings) used for documentation extraction.} \vspace{-3mm} \end{table} The first phase in the generation of the full documentation is the generation of tutorials (steps \textbf{I.a}-\textbf{I.d} in Figure \ref{fig:pipelineDocumentation}). MATLAB provides a very convenient way of writing tutorials using \emph{Live Script}, which is a document containing both computer code and text elements, such as paragraphs, equations, figures, or links, and which is saved as a \emph{.mlx} file. In order to allow for swift consultation of a tutorial through a web browser, or even printing a hard copy, the \emph{Live Scripts} are converted automatically into PDF and HTML formats by executing the \emph{prepareTutorials.sh} script (steps \textbf{I.a} and \textbf{I.b}) based on MATLAB functionality. During steps \textbf{I.c} and \textbf{I.d}, the \emph{.pdf} and \emph{.html} versions of the tutorials are further modified to fit the webpage style and moved to the web server location. A user running MATLAB R2016a or above may consult the tutorials in three different formats: as a document (\emph{.pdf}), on the web (\emph{.html}), or locally directly in MATLAB (\emph{.mlx}). The Live Script functionality is not available for versions of MATLAB older than R2016a.
For convenience, the \emph{.pdf} documents are converted to \emph{.png} image files and displayed directly within the README.md files on GitHub when browsing the \emph{/tutorials} directory (\href{https://github.com/opencobra/cobratoolbox/tree/master/tutorials}{github.com/opencobra/cobratoolbox/tree/master/tutorials}). The second phase consists of extracting the docstrings from the MATLAB functions using the \texttt{Sphinx} package and the \texttt{matlabdomain} plugin (\href{https://pypi.python.org/pypi/sphinxcontrib-matlabdomain}{pypi.python.org/pypi/sphinxcontrib-matlabdomain}) (steps \textbf{II.a}-\textbf{II.d}), based on the keywords shown in Table \ref{tab:keywords-documentation}. The docstrings are then combined with the \emph{reStructuredText} files located in the \textit{/docs/source/modules} directory of the original repository to produce HTML pages (steps \textbf{II.e}-\textbf{II.h}) that can be displayed on the web server. Style and layout are matched to the style of the web documentation for Julia packages thanks to the \texttt{Sphinx} COBRA Theme package (\href{https://github.com/uni-lu/sphinx_cobra_theme}{github.com/uni-lu/sphinx\_{}cobra\_{}theme}). The Julia Sphinx theme (\href{https://github.com/uni-lu/sphinx_julia_theme}{github.com/uni-lu/sphinx\_{}julia\_{}theme}) is adopted in order to provide a harmonised package suite together with \texttt{COBRA.jl} \citep{Hei17} (\href{https://opencobra.github.io/COBRA.jl/stable}{opencobra.github.io/COBRA.jl/stable}). The third and last phase of the documentation generation is the deployment to the host web server (\href{https://opencobra.github.io/cobratoolbox}{opencobra.github.io/cobratoolbox}) via the \emph{gh-pages} branch of \texttt{the COBRA Toolbox} repository (\href{https://github.com/opencobra/cobratoolbox/tree/gh-pages}{github.com/opencobra/cobratoolbox/tree/gh-pages}). As explained in Section \ref{subsec:Local-and-Github}, the user \emph{cobrabot} pushes the newest changes and publishes the latest version of the documentation. \section{Evaluation of code quality} \label{subsec:Code-quality} \subsection{Code coverage} \label{subsec:Code-coverage} Functional coverage provides information about which scenarios have been tested. In software development, it is essential to track test coverage, or in other words, which functions and source lines are executed during the test run. The coverage report helps to estimate how much of the code base is tested or executed without crashing, and how much code is not covered through testing. Code coverage is reported through a coverage report generator for MATLAB and GNU Octave, namely the \texttt{MOcov} toolbox (\href{http://github.com/MOcov/MOcov}{github.com/MOcov/MOcov}), and through the free and GitHub-integrated \href{https://codecov.io/}{codecov.io} service. A code coverage report reveals which lines have been added and whether they have been tested. The code coverage difference can consequently be determined for every pull request. The code coverage reflects the scope of the test suite and is computed as the ratio of the number of executed source code lines to the total number of executable lines of code. Executable lines of code are counted as all lines in the \emph{.m} files in the \emph{/src} folder that do not start with \mcode{\%} or the language keywords \mcode{end}, \mcode{otherwise}, \mcode{switch}, \mcode{else}, \mcode{case}, or \mcode{function}. Tracking the code coverage provides a measure of the stability of the code base and the breadth of the test suite.
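The counting rule for executable lines can be made explicit, as in the following Python sketch; treating blank lines as non-executable and matching the keywords by prefix are simplifications of our own, and \texttt{MOcov} remains the authoritative implementation.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
# Prefix matching is a simplification: a line such as "endTime = 5"
# would be misclassified by this sketch.
NON_EXECUTABLE = ("%", "end", "otherwise", "switch", "else", "case",
                  "function")

def executable_lines(path):
    """Count the executable lines of a .m file per the rule above."""
    count = 0
    with open(path) as m_file:
        for line in m_file:
            stripped = line.strip()
            if stripped and not stripped.startswith(NON_EXECUTABLE):
                count += 1
    return count

# coverage = executed_lines / sum(executable_lines(f) for f in m_files)
\end{lstlisting}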
The limitations of code coverage reports include that the intended functionality of the code is not verified and that the quality of the code is not assessed. Although there is no precise measure for code quality itself, the efficiency of code can certainly be graded (see Section \ref{subsec:Code-quality-grade}). \subsection{Code efficiency grade} \label{subsec:Code-quality-grade} The code efficiency grade is a valuable measure of how MATLAB code can be improved, and helps detect potential problems and opportunities for code improvement. The built-in function \mcode{checkcode} is run on each source code file and the number of MATLAB \texttt{Code Analyzer} messages is recorded. \begin{table}[H] \noindent \begin{centering} \small{% \begin{tabular}{cc} \textbf{Code efficiency grade} & \textbf{Percentage range}\tabularnewline \hline A & 0\% - 3\%\tabularnewline \hline B & 3\% - 6\%\tabularnewline \hline C & 6\% - 9\%\tabularnewline \hline D & 9\% - 12\%\tabularnewline \hline E & 12\% - 15\%\tabularnewline \hline F & > 15\%\tabularnewline \hline \end{tabular}} \par\end{centering} \vspace{1mm} \caption{Conversion chart from the code grade percentage to the code efficiency grade.} \label{tab:Code-grade-percentage} \end{table} \vspace{-3mm} The average number of messages per source code file is divided by the actual number of executable source code lines, which yields the code grade percentage. For ease of use, the code grade percentage is converted to a letter grade as shown in Table \ref{tab:Code-grade-percentage}. \subsection{Code linting} Linting is the practice of harmonising a given code style across an entire code base. The need for code linting of an open-source code base is obvious: many developers contributing to the same project and source files may not share the same discipline of writing code according to style guidelines. A clean, homogeneous, and easily readable code base is required for easy debugging and swift maintenance. \texttt{Automatlab.py} (\href{https://github.com/syarra/automatlab}{github.com/syarra/automatlab}) is a tool designed to ensure that MATLAB code complies with the style conventions defined at \href{https://opencobra.github.io/cobratoolbox/docs/styleGuide.html}{opencobra.github.io/cobratoolbox/docs/styleGuide.html}. \texttt{Automatlab.py} runs in two steps: first, a compliance analysis is performed to check the adherence to the style guide; then, formatting is fixed directly in the MATLAB code. Linting is a common practice when coding in Python (e.g., the package \texttt{AutoPEP8}: \href{http://github.com/hhatto/autopep8}{github.com/hhatto/autopep8}) or Julia (e.g., the package \texttt{Lint.jl}: \href{http://github.com/tonyhffong/Lint.jl}{github.com/tonyhffong/Lint.jl}). Several state-of-the-art editors, such as \emph{Atom} (\href{https://atom.io/}{atom.io}), provide built-in linting functionality to advise the developer of best coding practices.
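Returning to the grading scheme of Section \ref{subsec:Code-quality-grade}, the conversion from the code grade percentage to a letter grade is a simple threshold lookup, sketched below in Python; assigning the shared boundary values (e.g., exactly 3\%) to the better grade is our own choice, as Table \ref{tab:Code-grade-percentage} leaves the boundaries open.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
def efficiency_grade(percentage):
    """Map the code grade percentage to a letter grade (A best)."""
    for grade, upper in (("A", 3), ("B", 6), ("C", 9),
                         ("D", 12), ("E", 15)):
        if percentage <= upper:
            return grade
    return "F"

assert efficiency_grade(2.5) == "A"
assert efficiency_grade(10.0) == "D"
assert efficiency_grade(16.0) == "F"
\end{lstlisting}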
\hfill{}$\blacksquare$ \section*{List of Acronyms} \vspace{-3mm} \noindent \begin{center} \begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}>{\raggedright}p{2cm}l} \textbf{Acronym} & \textbf{Designation}\tabularnewline \hline ARTENOLIS & Automated Reproducibility and Testing Environment\\ & for Licensed Software\tabularnewline \hline COBRA & COnstraint-Based Reconstruction and Analysis\tabularnewline \hline HTTP & Hypertext Transfer Protocol\tabularnewline \hline JNLP & Java Network Launch Protocol \tabularnewline \hline KVM & Kernel-based Virtual Machine\tabularnewline \hline NFS & Network File System\tabularnewline \hline PR & Pull request\tabularnewline \hline SHA1 & Secure Hash Algorithm 1\tabularnewline \hline SMB & Server Message Block\tabularnewline \hline SSH & Secure Shell \tabularnewline \hline YAML & YAML Ain't Markup Language\tabularnewline \hline \end{tabular*} \par\end{center} \bibliographystyle{bioinformatics} \fontsize{7}{9}\selectfont
{ "timestamp": "2017-12-15T02:08:15", "yymm": "1712", "arxiv_id": "1712.05236", "language": "en", "url": "https://arxiv.org/abs/1712.05236" }
\section{Introduction} The Dark Matter Particle Explorer (DAMPE) experiment recently published the energy spectrum of electrons and positrons from about $10\ensuremath{\,\text{GeV}}$ to about $4\ensuremath{\,\text{TeV}}$\cite{Ambrosi:2017wek}. The spectrum, by eye, contained two interesting features: a break at about $1\ensuremath{\,\text{TeV}}$ and a monochromatic excess at about $1.4\ensuremath{\,\text{TeV}}$. The DAMPE analysis itself contained no statistical analysis of the excess, which, nevertheless, stirred much interest\cite{Liu:2017obm,Ding:2017jdr,Yang:2017cjm,Cao:2017sju,Ghorbani:2017cey,Nomura:2017ohi,Gu:2017lir,Zhu:2017tvk,Li:2017tmd,Chen:2017tva,Chao:2017emq,Niu:2017hqe,Gao:2017pym,Jin:2017qcv,Cholis:2017ccs,Huang:2017egk,Duan:2017qwj,Gu:2017bdw,Chao:2017yjg,Tang:2017lfb,Zu:2017dzm,Liu:2017rgs,Cao:2017ydw,Athron:2017drj,Gu:2017gle,Duan:2017pkq,Fang:2017tvj,Fan:2017sor,Yuan:2017ysv,Jin:2016kio,Okada:2017pgr,Sui:2017qra,Zhao:2017nrt,Ge:2017tkd}. In particular, dark matter (DM) was invoked to explain the excess. DM with a mass of about $1.4\ensuremath{\,\text{TeV}}$ could annihilate into electrons in a subhalo within about a kpc, resulting in a narrow spike in the spectrum. It is thus important to estimate the statistical significance of the excess. We do so with frequentist statistics in \refsec{sec:freq} and Bayesian statistics in \refsec{sec:bayes}. In each case, we fit the spectrum by three toy models: \begin{itemize} \item A single power-law (PL), \begin{equation} \Phi(E) = \Phi_0 \left(\frac{E}{100\ensuremath{\,\text{GeV}}}\right)^{-p}, \end{equation} described by a normalisation $\Phi_0$ and a power $p$. \item A smoothly-broken power-law (SBPL), \begin{equation} \begin{split} \Phi(E) = {}&{} \Phi_b \left(\frac{E}{100\ensuremath{\,\text{GeV}}}\right)^{-p_1} \times\\ {}&{} \left[1 + \left(\frac{E}{E_b}\right)^{(p_2 - p_1) / \Delta} \right]^{-\Delta}, \end{split} \end{equation} described by a normalisation $\Phi_b$, powers $p_1$ and $p_2$, a break $E_b$ and a smoothing parameter $\Delta$. This approximately equals two power-laws, which are smoothly matched at the break at $E_b$ with a smoothness governed by $\Delta$. \item A half-normal distribution upon a smoothly-broken power-law (signal), in which the half-normal term \begin{equation} \frac{A}{\sqrt{2\pi}\sigma}e^{-\frac{(E - m_\chi)^2}{2\sigma^2}} \end{equation} for $E \le m_\chi$, and zero elsewhere, is added to the SBPL flux. This template is motivated by DM particles of mass $m_\chi$ annihilating into electrons in a nearby subhalo, resulting in a signal of amplitude $A$ and width $\sigma$. \end{itemize} The PL, SBPL and signal models have 2, 5 and 8 parameters, respectively. The toy models capture the behaviour of possible spectra from underlying physical processes. The unknown relationships between fundamental and toy model parameters cannot impact our frequentist analysis; however, they could influence suitable choices of prior in our Bayesian analysis. This is especially so for the width and amplitude of the signal, which could, in principle, be related to the DM annihilation cross section, subhalo properties and diffusion equations governing the propagation of charged cosmic rays. DAMPE measured the average flux in 38 energy bins. We may predict the average flux in the $i$-th bin by \begin{equation} \bar{\Phi}_i \equiv \frac{1}{b_i - a_i} \int_{a_i}^{b_i} \Phi(E) \,\text{d}E, \end{equation} where the bin spans energies $a_i$ to $b_i$.
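As a concrete illustration of this binning, the following Python sketch evaluates $\bar{\Phi}_i$ for the half-normal signal term alone by direct numerical integration; the parameter values are illustrative, of the order preferred by the fits described below.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
import numpy as np
from scipy.integrate import quad

# Illustrative values: amplitude (/s/sr/m^2), DM mass (GeV), width (GeV)
A, m_chi, sigma = 1e-5, 1400.0, 10.0

def signal(E):
    """Half-normal signal term; identically zero above the DM mass."""
    if E > m_chi:
        return 0.0
    norm = A / (np.sqrt(2.0 * np.pi) * sigma)
    return norm * np.exp(-(E - m_chi) ** 2 / (2.0 * sigma ** 2))

def bin_average(a, b):
    """Average of the signal term over a bin spanning [a, b]."""
    value, _ = quad(signal, a, min(b, m_chi))  # vanishes above m_chi
    return value / (b - a)

print(bin_average(1300.0, 1500.0))
\end{lstlisting}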
DAMPE associated their measurement in the $i$-th bin with the energy $\langle E_i \rangle$ at which the predicted flux equals the predicted average flux in that bin for the best-fit SBPL model\cite{Lafferty:263644}, i.e.,\xspace $\langle E_i \rangle$ is defined by \begin{equation} \Phi(\langle E_i \rangle) = \bar{\Phi}_i. \end{equation} The SBPL and PL fluxes are approximately linear on scales similar to the bin width such that $\Phi(\langle E_i \rangle) \approx \bar{\Phi}_i $ for the SBPL and PL models. The signal model, however, contains a peak that may be narrower than the bin width and we must explicitly calculate $\bar{\Phi}_i $ as it is not approximated by $\Phi(\langle E_i \rangle)$. This subtlety means that previous calculations of the required amplitude of a DM signal are underestimates by a factor of approximately the bin width divided by the signal width, $\Delta E/\sigma \approx \text{5 -- 20}$. \section{Frequentist analysis}\label{sec:freq} We performed two hypothesis tests: an SBPL versus a single PL under the hypothesis of a single PL, and an SBPL versus a signal under the hypothesis of an SBPL. We performed the former to validate our methodology against a result published by DAMPE. We used chi-squared test-statistics, \begin{equation} \Delta \chi^2 = \min \chi^2(H_0) - \min \chi^2(H_1). \end{equation} We minimised the chi-squared with respect to each model's parameters with a CMA-ES evolutionary algorithm\cite{2016arXiv160400772H} implemented in \texttt{stochopy}\cite{stochopy}. The chi-squared itself was \begin{equation} \chi^2 =\sum\limits_i \frac{\left(\bar\Phi_i - \mu_i\right)^2}{\sigma_i^2}, \end{equation} where $\bar\Phi_i$ and $\mu_i$ were the predicted and measured average flux in the $i$-th bin, we summed over bins from $55\ensuremath{\,\text{GeV}}$ to $2.63\ensuremath{\,\text{TeV}}$ (matching the DAMPE analysis), and we added statistical and systematic errors in quadrature. We found the distributions of our test-statistics by Monte Carlo. To do so, we generated 1000 pseudo-datasets from the best-fit single PL and best-fit SBPL models and reminimised the test-statistic for each dataset and model. Thus, we estimated the \ensuremath{\text{\textit{p}-value}}\xspace, \begin{equation} \ensuremath{\text{\textit{p}-value}}\xspace = \cond{\Delta \chi^2 \ge \Delta \chi^2_\text{obs}}{H_0} \end{equation} by the fraction of pseudo-experiments in which the test-statistic exceeded that observed. We, furthermore, calculated $68\%$ Clopper-Pearson intervals for the \ensuremath{\text{\textit{p}-value}}\xspace \see{agresti2003categorical}. \begin{figure}[tbp] \centering \includegraphics[height=0.3\textwidth]{dampe.pdf} \caption{Scaled energy spectrum of electrons and positrons measured by DAMPE (blue). Fits with a PL (yellow), SBPL (green) and an SBPL plus a DM signal (red) are also shown.} \label{fig:spectrum} \end{figure} We found no differences in chi-squared between the PL and SBPL models as extreme as that observed in 1000 pseudo-experiments under the PL hypothesis. This resulted in a \ensuremath{\text{\textit{p}-value}}\xspace associated with the PL model of at most $0.002$, which is equivalent to at least $2.9\sigma$. DAMPE applied Wilks' theorem to estimate the significance, finding $6.6\sigma$; however, in the limit $p_1 \to p_2$ the SBPL reduces to the single PL with no other parameters and, thus, Wilks' theorem cannot strictly apply. We found about $7\sigma$ with a similar procedure. 
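For reference, the Monte Carlo \ensuremath{\text{\textit{p}-value}}\xspace estimate and its $68\%$ Clopper-Pearson interval amount to the following short computation, shown here as a Python sketch with a synthetic null distribution in place of our pseudo-experiments.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
import numpy as np
from scipy.stats import beta

def mc_pvalue(null_stats, observed, level=0.68):
    """p-value as the fraction of pseudo-experiments at least as
    extreme as observed, with a Clopper-Pearson interval."""
    null_stats = np.asarray(null_stats)
    n = len(null_stats)
    k = int(np.sum(null_stats >= observed))
    lower = beta.ppf((1 - level) / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - (1 - level) / 2, k + 1, n - k) if k < n else 1.0
    return k / n, (lower, upper)

rng = np.random.default_rng(1)
print(mc_pvalue(rng.chisquare(1, size=1000), observed=9.0))
\end{lstlisting}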
Although we could not populate the tail of the distribution by Monte Carlo, the observed test-statistic of about $56$ lies so far in the extreme tail of the distribution that we expected the \ensuremath{\text{\textit{p}-value}}\xspace to be negligible. Only 11 of our 1000 pseudo-experiments under the SBPL hypothesis had differences in chi-squared between the SBPL and signal models as extreme as that observed, resulting in a global significance of about $2.2\sigma$ -- $2.4\sigma$. This includes a two-dimensional look-elsewhere effect in the mass and width of the excess and corresponds to a \ensuremath{\text{\textit{p}-value}}\xspace of about $1\%$. The local significance was about $3.6\sigma$, assuming a $\frac12\chi^2$ distribution for the test-statistic. To validate our methodology, we checked that our Monte Carlo reproduced a $\frac12\chi^2_1$ distribution from a model with a fixed mass and width. We show best-fit spectra for our three models in \reffig{fig:spectrum}. There were degeneracies in the fits, especially in the amplitude and width of the signal. The amplitude of the narrow excess demonstrates that previous analyses underestimated the amplitude required to fit the anomalous bin. We show in \reffig{fig:DM}, furthermore, confidence regions for the DM mass and width of the signal. The DM signal must have a mass of about $1300\ensuremath{\,\text{GeV}}$ to $1500\ensuremath{\,\text{GeV}}$, a width of less than about $100\ensuremath{\,\text{GeV}}$, and an amplitude of about $10^{-5}\ensuremath{/\text{s}/\text{sr}/\text{m}^2}$. This amplitude corresponds to a peak flux of about $10^{-7}\ensuremath{/\text{GeV}/\text{s}/\text{sr}/\text{m}^2}$ for a signal width of $\sigma = 10\ensuremath{\,\text{GeV}}$. \begin{figure}[tbp] \centering \includegraphics[height=0.3\textwidth]{DM.pdf} \caption{Two-dimensional confidence regions for the DM mass and the width of the DM signal.} \label{fig:DM} \end{figure} \section{Bayesian analysis}\label{sec:bayes} We considered Bayes factors between the three competing models of the spectrum. Bayes factors update the relative plausibility of two hypotheses with experimental data (see \refcite{Gregory}); \begin{equation} \text{Posterior odds} = \text{Bayes factor} \times \text{Prior odds}. \end{equation} The Bayes factor itself may be written \begin{equation} B = \frac{\cond{D}{M_1}}{\cond{D}{M_2}}, \end{equation} for data $D$, and models $M_1$ and $M_2$. This is a ratio of evidences, \begin{equation} \cond{D}{M} = \int \cond{D}{M, x}\,\cond{x}{M} \,\text{d}x, \end{equation} where $x$ represents a model's parameters, $\cond{D}{M, x} = e^{-\frac12\chi^2}$ is our likelihood function and $\cond{x}{M}$ are our priors for the model's parameters. We calculated evidences with \texttt{(Py-)MultiNest-3.10}\cite{Buchner:2014nha,Feroz:2007kg,Feroz:2008xx,2013arXiv1306.2144F}. We list our priors in \reftable{tab:priors}. We picked flat priors for the exponents in the PL and SBPL models and logarithmic priors for all other parameters. Since we a priori knew the order of magnitude of the exponents, the choice of flat or logarithmic prior was moot. We found that, as anticipated, the SBPL model was favoured against the single PL model by a Bayes factor of about $10^{10}$. Since this was resounding and agreed with our frequentist analysis, we considered the matter settled and did not investigate prior sensitivity.
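For orientation, the Bayes factor computed from two log-evidences, and the posterior odds it implies, amount to the following Python sketch; the log-evidence values are placeholders rather than our computed evidences.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
import numpy as np

def bayes_factor(log_z1, log_z2):
    """B = Z1 / Z2 from log-evidences returned by nested sampling."""
    return np.exp(log_z1 - log_z2)

def posterior_odds(log_z1, log_z2, prior_odds=1.0):
    """Posterior odds = Bayes factor x prior odds."""
    return bayes_factor(log_z1, log_z2) * prior_odds

# A log-evidence difference of ~0.7 gives B ~ 2, comparable to our
# signal-versus-SBPL result below.
print(posterior_odds(log_z1=-50.0, log_z2=-50.7))
\end{lstlisting}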
\begin{table} \begin{ruledtabular} \begin{tabular}{lll} Parameter & Range & Prior\\ \hline \multicolumn{3}{l}{Single power-law}\\ \hline $\Phi_0$ & ($10^{-5}$ -- $10^{-3}$)\ensuremath{/\text{GeV}/\text{s}/\text{sr}/\text{m}^2} & Log\\ $p$ & 3 -- 4 & Linear\\ \hline \multicolumn{3}{l}{Smoothly-broken power-law}\\ \hline $\Phi_b$ & ($10^{-5}$ -- $10^{-3}$)\ensuremath{/\text{GeV}/\text{s}/\text{sr}/\text{m}^2} & Log\\ $p_1$ & 3 -- 4 & Linear\\ $p_2$ & 3 -- 5 & Linear\\ $E_b$ & (55 -- 2630)\ensuremath{\,\text{GeV}} & Log\\ $\Delta$ & $10^{-3}$ -- $1$ & Log\\ \hline \multicolumn{3}{l}{DM Signal}\\ \hline $A$ & ($10^{-7}$ -- $10^{-4}$)\ensuremath{/\text{s}/\text{sr}/\text{m}^2} & Log\\ $m_\chi$ & (55 -- 2630)\ensuremath{\,\text{GeV}} & Log\\ $\sigma$ & (10 -- 500)\ensuremath{\,\text{GeV}} & Log\\ \end{tabular} \end{ruledtabular} \caption{\label{tab:priors}Priors for the model parameters in the Bayesian analysis of the DAMPE electron and positron spectrum.} \end{table} We found that the signal model was favoured versus an SBPL by a Bayes factor of about 2. We anticipate that changes in priors for the SBPL parameters, which are present in each model, could not substantially modify the Bayes factor. We found that the Bayes factor increased to 4 with linear rather than logarithmic priors for the mass, amplitude and width of the DM signal. Our prior range for the amplitude spanned only three orders of magnitude about that favoured by the $1.4\ensuremath{\,\text{TeV}}$ excess, and that for the width spanned fewer than two orders of magnitude; arguably, they should have been more diffuse, which would decrease the Bayes factor. Our prior for the mass spanned the range searched by DAMPE, $55\ensuremath{\,\text{GeV}}$ to $2.63\ensuremath{\,\text{TeV}}$; shrinking it to between $1\ensuremath{\,\text{TeV}}$ and $2.63\ensuremath{\,\text{TeV}}$ could increase the Bayes factor to about 4. The maximum Bayes factor achievable with any priors is about $500$, which is obtained for Dirac delta functions at the best-fit mass, width and amplitude of a DM signal. Nevertheless, it seems difficult to make a reasonable case that the Bayes factor is compelling, especially since the narrow signal and substantial amplitude preferred by DAMPE were, if anything, a priori implausible as such a signal must originate from a nearby subhalo with a substantial DM density. \vspace{0.3cm} \section{Conclusions}\label{sec:concs} The DAMPE energy spectrum of electrons and positrons contained two interesting features: a spectral break and a monochromatic excess. We performed a Bayesian and frequentist analysis of the features by testing three models: a single power-law, a smoothly-broken power-law, and a smoothly-broken power-law with a signal feature motivated by dark matter annihilation in a nearby subhalo. We found global \ensuremath{\text{\textit{p}-value}}\xspace{}s through 1000 pseudo-experiments, including refits of models with 2, 5 and 8 parameters with evolutionary algorithms. We found Bayesian evidences by nested sampling. The break in the spectrum was significant with frequentist and Bayesian statistics --- we bounded the \ensuremath{\text{\textit{p}-value}}\xspace at about $0.1\%$ and the Bayes factor was about $10^{10}$. We expect in fact that $\ensuremath{\text{\textit{p}-value}}\xspace \lll 0.1\%$; our Monte Carlo may be unsuitable and specialised techniques such as Gross-Vitells\cite{Gross:2010qma} may be more appropriate. The excess, on the other hand, was present at $3.6\sigma$ local and $2.3\sigma$ global significance.
The Bayes factor was sensitive to our choices of priors for the mass, amplitude and width of the signal, but for our choices favoured a signal by about $2$. Thus, whilst intriguing, the excess is not currently compelling. We hope that this serves as an example of using frequentist and Bayesian methods for analysing anomalies in high-energy physics\cite{Fowlie:2016rmn}.
{ "timestamp": "2017-12-15T02:03:30", "yymm": "1712", "arxiv_id": "1712.05089", "language": "en", "url": "https://arxiv.org/abs/1712.05089" }
\section{Introduction}\label{sec:intro} The activated random walk model (ARW) is a system of interacting particles on a graph $G=(V,E)$. Together with Abelian and Stochastic Sandpiles, it belongs to a class of systems which have been introduced in order to study a physical phenomenon known as self-organized criticality. Moreover, it can be interpreted as a toy model for an epidemic spreading, with infected individuals moving diffusively on a graph. The model is defined as follows. Every particle is in one of two states, A (active) or S (inactive, sleeping). Initially, the number of particles at each vertex of $G$ is an independent Poisson random variable with mean $\mu\in(0,\infty)$, usually called the \emph{particle density}, and all particles are of type A. Active particles perform an independent, continuous-time random walk on $G$ with jump rate $1$, and with each jump being to a uniformly random neighbour. Moreover, every A-particle has a Poisson clock of rate $\lambda>0$ (called \emph{sleeping rate}). When the clock of a particle rings, if the particle does not share the vertex with other particles, the particle becomes of type S; otherwise nothing happens. Each S-particle does not move and remains sleeping until another particle jumps into its location. At such an instant, the S-particle is activated and turns into type A. For any value of $\lambda$, a phase transition as $\mu$ varies is expected to occur. When $\mu$ is small, there is a lot of free space between the particles. This allows every particle to turn into type S eventually and never become active again. When this happens, we say that ARW \textit{fixates}. This is not expected to occur when $\mu$ is large, since the active particles will repeatedly jump on top of other particles, activating the ones that had turned into type S. In this case, we say that ARW is \textit{active}. In a seminal paper \cite{Rolla}, Rolla and Sidoravicius prove a 0-1 law (i.e., the process is either active or fixates with probability 1) and a monotonicity property with respect to $\mu$. This leads to the existence of a critical curve $\mu_c = \mu_c(\lambda)$, \begin{equation} \mu_c=\mu_c\left(\lambda\right) := \inf\lrc{ \mu \geq 0 \, : \, \mathbb{P}(\text{ARW is active}) > 0 }, \label{eq:criticaldensity} \end{equation} which is such that, for any $\mu>\mu_c$ the system is almost surely active, and for any $\mu<\mu_c$ the system fixates almost surely. Though \cite{Rolla} is restricted to the case of $G$ being $\mathbb{Z}^d$, the above properties hold for any vertex-transitive graph. Throughout this paper we always consider that $G$ is an infinite simple graph that is locally finite and vertex-transitive, which ensures the existence of $\mu_c$. In recent years considerable effort has been made to prove basic properties of the critical curve $\mu_c = \mu_c(\lambda)$ \cite{Amir, Basu, Rolla, Rolla3, Rolla2, Sidoravicius, Stauffer, Taggi}. A quite natural bound for this curve is $\mu_c \leq 1$ for any value of $\lambda \in (0,\infty)$, which was proved in \cite{Amir, Rolla, Shellef}. Indeed, one does not expect fixation when the average number of particles per vertex is more than one, since a particle can be in the S-state only if it is alone on a given vertex and, for this reason, there is not enough space for all the A-particles to turn to the S-state. A more challenging question is whether $\mu_c$ is strictly less than one for any value of $\lambda \in (0, \infty)$, which is expected to hold true under wide generality.
In other words, one expects that, for any value of $\lambda \in (0,\infty)$, there exists a value of $\mu$ which is strictly less than one such that, even though there is enough space for all the particles to turn into the S-state, particle motion prevents this from happening, so the system does not fixate. This question was asked by Rolla and Sidoravicius in their seminal paper \cite{Rolla} and appears also in \cite{Basu, Dickman}. Such a question received much attention in the last few years \cite{Rolla, Basu, Rolla2, Stauffer, Taggi} but, despite much effort, a complete answer was provided only in two cases: on vertex-transitive graphs where the random walk has a positive speed \cite{Stauffer} and for a simplified model on $\mathbb{Z}^d$ where the jump distribution of active particles is biased in a fixed direction \cite{Taggi}. A partial answer which requires the assumption that $\lambda$ is smaller than a finite constant $\lambda_0 < \infty$ was also provided in \cite{Stauffer} when $G$ is vertex-transitive and transient and in \cite{Basu} when $G = \mathbb{Z}$. The first main result of this paper is the next theorem, which provides a positive answer to this question for any $\lambda \in (0, \infty)$ on $\mathbb{Z}^d$, when $d \geq 3$, for the original model where active particles jump uniformly to nearest neighbours. More generally, our result holds for any vertex-transitive amenable graph where the random walk is transient. \begin{theorem} \label{theo1:transient graph} If $G$ is vertex-transitive, amenable and transient, then $$\mu_c(\lambda) < 1~~~~~ \mbox{$\forall \lambda \in (0, \infty)$.}$$ Moreover, $\limsup\limits_{\lambda \rightarrow 0} \frac{\mu_c(\lambda)}{\lambda^{\frac{1}{2}}} < \infty$. \end{theorem} A second basic question concerning the behaviour of the critical curve $\mu_c=\mu_c(\lambda)$ is whether its value is positive. A positive answer has been proved by Sidoravicius and Teixeira in \cite{Sidoravicius} when $G = \mathbb{Z}^d$ by means of renormalization techniques. A shorter proof was also provided by Stauffer and Taggi in \cite{Stauffer} when $G$ is amenable and vertex-transitive and when $G$ is a regular tree. The proofs of \cite{Sidoravicius, Stauffer} crucially rely on the amenability property of the graph or on the assumption that $G$ is a regular tree. Our second main theorem provides a positive answer to this question on vertex-transitive graphs that are non-amenable, establishing the occurrence of a phase transition for this class of graphs and extending the previous results \cite{Sidoravicius, Stauffer}. Moreover, we also obtain that $\lim_{\lambda \rightarrow \infty} \mu_c(\lambda)=1$. \begin{theorem} \label{theo: non amenable} If $G$ is vertex-transitive and non-amenable, then $\mu_c(\lambda) > 0$ for any value of $\lambda \in (0,\infty)$. More specifically, $$ \mu_c(\lambda) \geq \frac{\lambda}{1 + \lambda} ~~~~~ \mbox{$\forall \lambda \in (0, \infty)$.} $$ \end{theorem}
Furthermore, it has been proved in \cite{Rolla3} that if the initial particle configuration is a spatially ergodic distribution with density $\mu$, then ARW is a.s. active whenever $\mu > \mu_c(\lambda)$ and fixates a.s. whenever $\mu < \mu_c(\lambda)$. \paragraph{Description of the proofs} Our proofs are simple and rely on a graphical representation, which is called \textit{Diaconis-Fulton} and has been introduced in \cite{Rolla}, and on \textit{weak stabilization}, a procedure introduced in \cite{Stauffer} which consists of using the random instructions of such a representation following a certain strategy. A fundamental quantity for the mathematical analysis of the activated random walks is the number of times $m_{B_L}$ the origin is visited by a particle when the dynamics take place in a finite ball of radius $L$, $B_L$, with particles being absorbed whenever they leave $B_L$. As was proved in \cite{Rolla}, activity for ARW is equivalent to the limit $L \rightarrow \infty$ of this quantity being infinite almost surely. A quantity that plays a central role in this paper is the probability $Q(x,B_L)$ that an S-particle is at a vertex $x \in B_L$ when $B_L$ becomes stable. This quantity is important since the values $\{ \, Q(x,B_L) \, \}_{x \in B_L}$ are related to the expectation of $m_{B_L}$ by mass-conservation arguments. Thus, one can deduce whether the system is active by estimating these values. The proof of Theorem \ref{theo1:transient graph} consists of bounding away from one the probabilities $\{ Q(x,B_L)\}_{x \in B_L}$ for any $\lambda \in (0, \infty)$ uniformly in $L$ and in $x \in B_L$. This improves the upper bound that was provided in \cite{Stauffer}, where the probabilities $\{ Q(x,B_L)\}_{x \in B_L}$ were bounded away from one only for $\lambda$ small enough. Such an enhancement is obtained by introducing a stabilization procedure that allows one to recover independence from sleep instructions at one vertex. This gained independence, together with the fact that we count only jump instructions rather than the total number of instructions, allows us to obtain an additional factor in the upper bound for $Q(x,B_L)$ which prevents this bound from exploding when $\lambda$ is arbitrarily large. Our upper bound on $Q(x,B_L)$ implies that for any $\lambda \in (0, \infty)$ one can find $\epsilon>0$ and set the value of $\mu$ such that $ 1 > \mu \geq Q(x,B_L) + \epsilon$ for all $L$ and $x \in B_L$. This implies that a positive density $\epsilon$ of particles eventually leaves $B_L$ and, as was proved in \cite{Rolla2}, that the system is active, proving Theorem \ref{theo1:transient graph}. Theorem \ref{theo: non amenable} extends to non-amenable graphs the analogous result that was proved in \cite{Stauffer} for amenable graphs. The idea of the proof that is presented in \cite{Stauffer} is that one assumes activity and uses this assumption and the weak stabilization procedure to show that for any $\epsilon>0$, there exists a large enough constant $r_0=r_0(\epsilon)$ such that, for any large enough $L$ and for any vertex $x \in B_L$ which has a distance at least $r_0$ from the boundary of $B_L$, \begin{equation}\label{eq:intro} Q(x,B_L) \geq \frac{\lambda}{1+\lambda}- \epsilon. \end{equation} This leads to the conclusion that the particle density after the stabilization of $B_L$ is at least $ \frac{\lambda}{1+\lambda}$.
The amenability assumption is crucial here, since the number of particles which start `close' to the boundary, for which (\ref{eq:intro}) does not hold, can be neglected only if the graph is amenable (i.e. their number is of order $o(|B_L|)$). Since the initial particle density is $\mu$ and since the particle density cannot increase, we conclude that $\mu \geq \frac{\lambda}{1+\lambda}$. Since this is a consequence of activity, we obtain that $\mu_c \geq \frac{\lambda}{1+\lambda}$. In this paper, we use a different strategy that allows us to extend this result to non-amenable graphs. By assuming that the system is active and by using (\ref{eq:intro}), one obtains that the particle density in a small ball $B_{(1-\delta)L} \subset B_L$ after the stabilization of the larger ball $B_L$ is at least $ \frac{\lambda}{1+\lambda}$, for some $\delta>0$ and all $L$ large enough. Thus, if we set $\mu < \frac{\lambda}{1+\lambda}$, this means that the particle density inside the smaller ball must have increased during the stabilization of the larger ball. Due to the conservation law, the only way this might have happened is if a large number of particles which started from $B_L \setminus B_{(1-\delta) L}$ turn into the S-state for the last time in $B_{(1-\delta) L}$. We show that, if the graph is non-amenable, this cannot happen simply because, even though the number of the boundary particles is not negligible compared to $|B_{(1-\delta) L}|$, the bias towards the outside of the ball allows only a few of them to penetrate inside the ball. So, the particle density in the smaller ball cannot increase, and this leads to the conclusion that $\mu \geq \frac{\lambda}{1+\lambda}$. Since this is a consequence of activity, we obtain that $\mu_c \geq \frac{\lambda}{1+\lambda}$. \vspace{0.2cm} The remaining part of the paper is organized as follows. In Section \ref{sec:Diaconis} we introduce the Diaconis-Fulton representation following \cite{Rolla}, we recall the notion of weak stabilization following \cite{Stauffer} and we fix the notation. In Section \ref{sec:enforced stabilization} we provide an explicit upper bound for $Q(x,B_L)$, which is presented in Theorem \ref{theo:boundsQ}, and we prove Theorem \ref{theo1:transient graph}. Finally, Section \ref{sec: Fixation on non-amenable graphs} is dedicated to the proof of Theorem \ref{theo: non amenable}. \section{Diaconis-Fulton representation and weak stabilization} \label{sec:Diaconis} In this section we describe the Diaconis-Fulton graphical representation for the dynamics of ARW, following~\cite{Rolla}, and we recall the notion of \textit{weak stabilization}, following \cite{Stauffer}. Before starting, we fix the notation. \textbf{Notation} A graph is denoted by $G=(V,E)$ and is always assumed to be simple, infinite and locally finite. The simple random walk measure is denoted by $P_x$, where $x$ is the starting vertex of the random walk. The expectation with respect to $P_x$ is denoted by $E_x$. For any set $Z \subset V$ and any pair of vertices $x, y \in V$, we let $$ G_{Z}(x,y) = {E}_x \big ( \, \sum\limits_{t=0}^{\tau_Z-1} \mathbbm{1} \{ \, X(t) = y \, \} \, \big) $$ be the expected number of times a discrete time random walk $X(t)$ starting from $x$ hits $y$ before reaching $Z$ (Green's function), where $\tau_Z$ is the hitting time of the set $Z$. If $Z = \emptyset$, then we set $\tau_Z = \infty$ and we simply write $G(x,y)$. We also denote by $\tau^+_Z$ the return time to $Z$. The origin of the graph will be denoted by $0 \in V$.
We let $B_r(x) = \{ y \in V \, \, : \, \, d(x,y) < r\}$ be the ball of radius $r>0$ centred at $x$, where $d( \cdot, \cdot )$ is the graph distance, and write $B_r$ for $B_r(0)$. \subsection{Stabilization} \paragraph{Diaconis-Fulton representation} For a graph $G=(V,E)$, the space of configurations is $\Omega=\{0,\rho,1,2,3,\ldots\}^V$, where a vertex being in state $\rho$ denotes that the vertex has one S-particle, while being in state $i\in\{0,1,2,\ldots\}$ denotes that the vertex contains $i$ A-particles. We employ the following order on the states of a vertex: $0 < \rho < 1 < 2 < \cdots$. In a configuration $\eta\in \Omega$, a vertex $x \in V$ is called \textit{stable} if $\eta(x) \in \{0, \rho \}$, and it is called \textit{unstable} if $\eta(x) \geq 1$. We fix an array of \textit{instructions} $\tau = ( \tau^{x,j}: \, x \in V, \, j \in \mathbb{N})$, where $\tau^{x,j}$ can either be of the form $\tau_{xy}$ or $\tau_{x\rho}$. We let $\tau_{xy}$ with $x,y\in V$ denote the instruction that a particle from $x$ jumps to vertex $y$, and $\tau_{x\rho}$ denote the instruction that a particle from $x$ falls asleep. Henceforth we call $\tau_{xy}$ a \emph{jump instruction} and $\tau_{x\rho}$ a \emph{sleep instruction}. Therefore, given any configuration $\eta$, performing the instruction $\tau_{xy}$ in $\eta$ yields another configuration $\eta'$ such that $\eta'(z)=\eta(z)$ for all $z\in V\setminus\{x,y\}$, $\eta'(x)=\eta(x)-\ind{\eta(x)\geq 1}$, and $\eta'(y)=\eta(y)+\ind{\eta(x)\geq 1}$. We use the convention that $1+\rho=2$. Similarly, performing the instruction $\tau_{x\rho}$ to $\eta$ yields a configuration $\eta'$ such that $\eta'(z)=\eta(z)$ for all $z\in V\setminus\{x\}$, and if $\eta(x)=1$ we have $\eta'(x)=\rho$, otherwise $\eta'(x)=\eta(x)$. Let $h = ( h(x)\, : \, x \in V)$ count the number of instructions used at each vertex. We say that we \textit{use} an instruction at $x$ (or that we \emph{topple} $x$) when we act on the current particle configuration $\eta$ through the operator $\Phi_x$, which is defined as \begin{equation} \label{eq:Phioperator} \Phi_x ( \eta, h) = ( \tau^{x, h(x) + 1} \, \eta, \, h + \delta_x), \end{equation} where $\delta_x(y)=1$ if $y=x$ and $\delta_x(y)=0$ otherwise. The operation $\Phi_x$ is \textit{legal} for $\eta$ if $x$ is unstable in $\eta$, otherwise it is \textit{illegal}. \vspace{0.8cm} \noindent {\textbf{Properties}} We now describe the properties of this representation. Later we discuss how they are related to the stochastic dynamics of ARW. For a sequence of vertices $\alpha = ( x_1, x_2, \ldots x_k)$, we write $\Phi_{\alpha} = \Phi_{x_k} \Phi_{x_{k-1}} \ldots \Phi_{x_1}$ and we say that $\Phi_{\alpha}$ is \textit{legal} for $\eta$ if $\Phi_{x_\ell}$ is legal for $\Phi_{(x_{\ell-1}, \ldots, x_1)} (\eta,h) $ for all $\ell \in \{ 1, 2, \ldots k \}$. Let $m_{\alpha} = ( m_{\alpha}(x) \, : \,x \in V )$ be given by $m_{\alpha}(x) \, = \, \sum_{\ell} \ind{x_\ell = x}$, the number of times the vertex $x$ appears in $\alpha$. We write $m_{\alpha} \geq m_{\beta}$ if $m_{\alpha} (x) \, \geq \, m_{\beta} (x) \, \, \, \forall x \in V$. Analogously we write $\eta' \geq \eta$ if $\eta' (x) \, \geq \, \eta(x)$ for all $x \in V$. We also write $(\eta', h') \geq (\eta, h)$ if $\eta' \geq \eta$ and $h' = h$. Let $\eta, \eta'$ be two configurations, $x$ be a vertex in $V$ and $\tau$ be a realization of the array of instructions. Let $V'$ be a finite subset of $V$. A configuration $\eta$ is said to be \textit{stable} in $V'$ if all the vertices $x \in V'$ are stable.
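As a concrete illustration, which plays no role in the proofs, the instruction semantics just defined can be transcribed into code. The following Python sketch implements the effect of $\tau_{xy}$ and $\tau_{x\rho}$ on a configuration, encoding the state $\rho$ explicitly.

\begin{lstlisting}[language=Python, frame=single, basicstyle=\small\ttfamily]
RHO = "rho"   # the state of a vertex holding one sleeping particle

def use_jump(eta, x, y):
    """Apply tau_{xy}: a particle jumps from x to y (no effect if x
    is stable, i.e. the operation would be illegal)."""
    if eta.get(x, 0) in (0, RHO):
        return
    eta[x] -= 1
    eta[y] = 2 if eta.get(y, 0) == RHO else eta.get(y, 0) + 1  # 1+rho=2

def use_sleep(eta, x):
    """Apply tau_{x rho}: a lone A-particle at x falls asleep."""
    if eta.get(x, 0) == 1:
        eta[x] = RHO

eta = {0: 1, 1: RHO}
use_sleep(eta, 0)    # eta[0] becomes "rho"
use_jump(eta, 0, 1)  # x = 0 is stable, so eta is left unchanged
\end{lstlisting}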
We say that $\alpha$ is contained in $V'$ if all its elements are in $V'$, and we say that $\alpha$ \textit{stabilizes} $\eta$ in $V'$ if every $x \in V'$ is stable in $\Phi_\alpha \eta$. The following lemmas give fundamental properties of the Diaconis-Fulton representation. For the proof, we refer to \cite{Rolla}. \begin{lemma}[Abelian Property]\label{prop:lemma2} Given any $V'\subset V$, if $\alpha$ and $\beta$ are both legal sequences for $\eta$ that are contained in $V'$ and stabilize $\eta$ in $V'$, then $m_{\alpha} = m_{\beta}$. In particular, $\Phi_{\alpha} \eta = \Phi_{\beta} \eta$. \end{lemma} For any subset $V'\subset V$, any $x\in V$, any particle configuration $\eta$, and any array of instructions $\tau$, we denote by $m_{V^{\prime},\eta,\tau}(x)$ the number of times that $x$ is toppled in the stabilization of $V'$ starting from configuration $\eta$ and using the instructions in $\tau$. Note that by Lemma~\ref{prop:lemma2}, we have that $m_{V^{\prime},\eta,\tau}$ is well defined. \begin{lemma}[Monotonicity]\label{prop:lemma3} If $V' \subset V''\subset V$ and $\eta \leq \eta'$, then $m_{V', \eta, \tau} \leq m_{V'', \eta', \tau}$. \end{lemma} By monotonicity, given any growing sequence of subsets $V_1\subseteq V_2 \subseteq V_3\subseteq \cdots \subseteq V$ such that $\lim_{m\to\infty} V_m=V$, the limit $$ m_{\eta, \tau} = \lim\limits_{m\to \infty} m_{V_m, \eta, \tau} $$ exists and does not depend on the particular sequence $\{V_m\}_m$. We now introduce a probability measure on the space of instructions and of particle configurations. We denote by $\mathcal{P}$ the probability measure according to which, for any $x \in V$ and any $j \in \mathbb{N}$, $\mathcal{P} ( \tau^{x,j} = \tau_{x\rho} ) = \frac{\lambda}{1 + \lambda}$ and $\mathcal{P} ( \tau^{x,j} = \tau_{xy} ) = \frac{1}{d(1 + \lambda)}$ for any $y\in V$ neighboring $x$, where $d$ is the degree of each vertex of $G$ and the $\tau^{x,j}$ are independent across different values of $x$ or $j$. Finally, we denote by $\mathcal{P}^\nu=\mathcal{P}\otimes \nu$ the joint law of $\eta$ and $\tau$, where $\nu$ is a distribution on $\Omega$ giving the law of $\eta$. Let $\mathbb{P}^\nu$ denote the probability measure induced by the ARW process when the initial distribution of particles is given by $\nu$. We shall often omit the dependence on $\nu$ by writing $\mathcal{P}$ and $\mathbb{P}$ instead of $\mathcal{P}^\nu$ and $\mathbb{P}^\nu$. The following lemma relates the dynamics of ARW to the stability property of the representation. \begin{lemma}[0-1 law] \label{prop:lemma4} Let $\nu$ be a translation-invariant, ergodic distribution with finite density. Let $x\in V$ be any given vertex of $G$. Then $\mathbb{P}^{\nu} (\text{ARW fixates} ) = \mathcal{P}^{\nu} ( m_{\eta, \tau} (x) < \infty ) \in \{0, 1 \}$. \end{lemma} Roughly speaking, the next lemma states that removing a sleep instruction cannot decrease the number of instructions used at a given vertex for stabilization. In order to state the lemma, consider an additional instruction $\iota$ besides $\tau_{xy}$ and $\tau_{x\rho}$. The effect of $\iota$ is to leave the configuration unchanged; i.e., $\iota \, \eta = \eta$, so we will call this instruction \textit{neutral}.
Given two arrays $\tau = \left( \tau^{x,j} \right)_{x ,\, j }$ and $\tilde{\tau} = \left( \tilde{\tau}^{x,j} \right)_{x, \, j }$, we write $\tau \leq \tilde{\tau}$ if for every $x \in V$ and $j \in \mathbb{N}$, we either have $\tilde{\tau}^{x,j} = {\tau}^{x,j}$, or we have $\tilde{\tau}^{x,j} = \iota$ and ${\tau}^{x,j} = \tau_{x\rho}$.

\begin{lemma}[Monotonicity with enforced activation] \label{prop:lemma5} Let $\tau$ and $\tilde{\tau}$ be two arrays of instructions such that $\tau \leq \tilde{\tau}$. Then, for any finite subset $V' \subset V$ and configuration $\eta \in \Omega$, we have $m_{V', \eta, \tau} \leq m_{V', \eta, \tilde{\tau}}.$ \end{lemma}

When we average over $\eta$ and $\tau$ using the measure $\mathcal{P}$, we will simply write $m_{V'}$ instead of $m_{V',\eta,\tau}$, and we will do the same for the other quantities that will be introduced later.

\subsection{Weak stabilization}

We now recall the notion of weak stabilization following \cite{Stauffer}.

\begin{definition}[weakly stable configurations]\label{def:wstable} We say that a configuration $\eta$ is \emph{weakly stable} in a subset $K\subset V$ with respect to a vertex $x\in K$ if $\eta(x)\leq 1$ and $\eta(y)\leq\rho$ for all $y\in K\setminus\{x\}$. For conciseness, we just write that $\eta$ is weakly stable for $(x, K)$. \end{definition}

\begin{definition}[weak stabilization] Given a subset $K\subset V$ and a vertex $x\in K$, the \emph{weak stabilization} of $(x,K)$ is a sequence of topplings of unstable vertices of $K\setminus\{x\}$ and of topplings of $x$ whenever $x$ has at least two active particles, until a weakly stable configuration for $(x,K)$ is obtained. The order of the topplings of a weak stabilization can be arbitrary. \end{definition}

The Abelian property (Lemma \ref{prop:lemma2}), the monotonicity property (Lemma \ref{prop:lemma3}) and monotonicity with enforced activation (Lemma \ref{prop:lemma5}) hold for weak stabilization as well. Since the proofs of these lemmas are the same as for stabilization, we refer to \cite{Rolla}. For any given particle configuration $\eta$ and instruction array $\tau$, we let $m^1_{(x,K), \eta, \tau}(y)$ be the number of instructions that are used at $y$ for the weak stabilization of $(x,K)$. By the Abelian property, this quantity is well defined.

We now formulate the Least Action Principle for weak stabilization of $(x, K)$. In order to state the lemma, we need to extend the notions of unstable vertex and of legal operation to the weak stabilization of $(x,K)$. We call a vertex $y$ \textit{WS-unstable} (that is, unstable for weak stabilization) in $\eta \in \Omega$ if $\eta(y) \geq 1 + \delta_x(y)$, where $\delta_x(y)=1$ if $x=y$ and $\delta_x(y)=0$ otherwise. We call a vertex $y$ \textit{WS-stable} in $\eta \in \Omega$ if it is not WS-unstable. We call the operation $\Phi_y$ defined in (\ref{eq:Phioperator}) \textit{WS-legal} for $\eta$ if $y$ is WS-unstable in $\eta$. Note that a WS-legal operation is always legal, but a legal operation is not necessarily WS-legal. For a sequence of vertices $\alpha = ( x_1, x_2, \ldots, x_k)$, we say that $\Phi_{\alpha}$ is {WS-legal} for $\eta$ if $\Phi_{x_\ell}$ is WS-legal for $\Phi_{(x_{\ell-1}, \ldots, x_1)} (\eta,h) $ for all $\ell \in \{ 1, 2, \ldots, k \}$. We say that $\alpha$ \textit{stabilizes} $\eta$ weakly in $(x,K)$ if every $y \in K$ is WS-stable in $\Phi_\alpha \eta$.
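Both stabilization and weak stabilization are straightforward to implement on top of the toppling operator sketched earlier; by the Abelian property, the order in which (WS-)unstable vertices are processed is immaterial. A minimal sketch (ours; it assumes the functions from the previous snippets, that \texttt{eta} also contains entries for the outer boundary of $K$, where exiting particles accumulate, and that each list \texttt{tau[y]} holds sufficiently many pre-sampled instructions):

\begin{verbatim}
def stabilize(eta, h, K, tau):
    """Topple unstable vertices of K (in any order) until eta is stable
    in K; eta and h are modified in place."""
    while True:
        unstable = [y for y in K if is_unstable(eta, y)]
        if not unstable:
            return
        for y in unstable:
            topple(eta, h, y, tau)

def weak_stabilize(eta, h, x, K, tau):
    """Weak stabilization of (x, K): x is toppled only while it carries
    at least two active particles; all other vertices of K are toppled
    as in ordinary stabilization."""
    def ws_unstable(y):
        threshold = 2 if y == x else 1
        return isinstance(eta[y], int) and eta[y] >= threshold
    while True:
        unstable = [y for y in K if ws_unstable(y)]
        if not unstable:
            return
        for y in unstable:
            topple(eta, h, y, tau)
\end{verbatim}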
\begin{lemma}[Least Action Principle for weak stabilization of $(x,K)$]\label{prop:lemma1bis} If $\alpha$ and $\beta$ are sequences of topplings for $\eta$ such that $\alpha$ is legal and stabilizes $\eta$ weakly in $(x,K)$, and $\beta$ is WS-legal and is contained in $K$, then $m_{\beta} \leq m_{\alpha}$. \end{lemma}

For the proof of the lemma, we refer to \cite{Stauffer}. We now introduce a stabilization procedure for $K$ consisting of a sequence of weak stabilizations of $(x,K)$. This stabilization procedure is called \textit{stabilization via weak stabilization} and was also used in \cite{Stauffer}. From now on, we will omit the dependence of the quantities on $\eta$ and $\tau$, unless necessary, in order to lighten the notation.

\textbf{Stabilization via weak stabilization} Let $\eta$ be the initial particle configuration. \textit{At the first step,} we perform the weak stabilization of $(x,K)$. Recall that $m^{1}_{(x,K)}(y)$ is the total number of instructions that are used at $y$ for the weak stabilization of $(x,K)$, and let $\eta_1$ be the resulting particle configuration. If $\eta_1$ has no particle at $x$, then $\eta_1$ is stable and the stabilization procedure is complete. If $\eta_1$ has an active particle at $x$, then we move to the next step. \textit{At the $i$th step}, $i \geq 2$, we use an instruction at $x$ and, if that instruction is not a sleep instruction, we perform a weak stabilization of $(x,K)$. We let $m^{i}_{(x,K)}(y)$ be the number of instructions that have been used at $y \in K$ up to this time, and we denote by $\eta_i$ the configuration thus obtained. We iterate the procedure until we obtain a stable configuration, and we let $T_{(x,K)}$ denote the number of iterations. More precisely, $$ T_{(x,K)} := \min\{n \in \mathbb{N}_{>0} \, \, : \, \, \eta_n \mbox{ is stable } \}. $$ Note that $T_{(x,K)}$ is strictly positive for any $\eta$ and $\tau$, and that if $T_{(x,K)}=1$, then the stable configuration $\eta_{T_{(x,K)}}$ hosts no particle at $x$. For consistency, for any $i > T_{(x, K)}$, let $\eta_i$ be the stable configuration obtained after stabilizing $K$ and, for any $y \in K$, define $m^{i}_{(x, K)}(y)=m_K(y)$, which is the total number of instructions used at $y$ for the complete stabilization of $K$. By the Abelian property, the quantities $T_{(x,K)}$ and $m^i_{(x,K)}$ are all well defined. Note that the quantity $T_{(x,K)}$ is defined slightly differently than in \cite{Stauffer}.

In Section \ref{sec:enforced stabilization} we will show that the number of weak stabilizations of $(x,K)$ that one needs to perform in order to stabilize $K$ is related to the probability that the stabilization of $K$ ends with one particle at $x$, which is an important quantity for the proof of Theorem \ref{theo1:transient graph}. In Section \ref{sec:enforced stabilization} we will bound this probability from above by introducing a new stabilization procedure.

\section{Active phase on transient graphs}\label{sec:enforced stabilization}

In this section we prove Theorem \ref{theo1:transient graph}. We first state Theorem \ref{theo:boundsQ}, which shows that the probability $Q(x,K)$ that the vertex $x \in K$ hosts an S-particle after the stabilization of the finite set $K \subset V$ is bounded away from one for any value of $\lambda \in (0, \infty)$. The next theorem does not require the graph $G$ to be vertex-transitive.

\begin{theorem} \label{theo:boundsQ} Let $G=(V, E)$ be a locally-finite graph and let $K \subset V$ be a finite set.
Then, for any vertex $x \in K$ and any positive integer $H$, \begin{equation} \label{eq:ubound} Q(x,K) \leq 1 - \Big( 1 - \frac{G_{K^c}(x,x)}{H+1} \Big) \Big ( \frac{1}{1 + \lambda} \Big)^H. \end{equation} \end{theorem}

Theorem \ref{theo1:transient graph} is proved at the end of this section and will be a direct consequence of Theorem \ref{theo:boundsQ}. We now introduce a new stabilization procedure, which consists of ignoring sleep instructions at one fixed vertex, and prove some auxiliary lemmas that are necessary for the proof of Theorem \ref{theo:boundsQ}. After that, we prove Theorem \ref{theo:boundsQ} and Theorem \ref{theo1:transient graph}.

We introduce the function $T^x$ that associates to any instruction array $\tau$ a new instruction array $T^x(\tau)$, obtained from $\tau$ by ignoring all sleep instructions at $x$. More precisely, we define, for any $y \in V$ and $j \in \mathbb{N}$, $$ \Big ( \, T^x(\tau)\, \Big )^{y,j} : =\begin{cases} \tau^{y,j} &\mbox{ if $y\neq x$, }\\ \tau^{y,j} &\mbox{ if $y = x$ and $\tau^{y,j} \neq \tau_{y\rho}$, } \\ \iota & \mbox{ if $y= x$ and $\tau^{y,j} = \tau_{y\rho}$,} \end{cases} $$ recalling that $\iota$ denotes a neutral instruction. Moreover, for any $y, x\in V$, we let \begin{equation}\label{eq:defenforced} m^e_{(x,K), \eta, \tau}(y) : = m_{K, \eta, T^x(\tau)}(y) \end{equation} be the number of instructions that are used at $y$ when we stabilize the set $K$ by ignoring sleep instructions at $x$. This function plays an important role in this section.

For the proof of Theorem \ref{theo:boundsQ}, we will not count the total number of instructions, but only the number of jump instructions. Thus, for any $y \in K$, we let $$ M^e_{(x,K), \eta, \tau}(y): = \Big | \, \Big \{ \tau^{y, j} \, \, : \, \, \, j \in [0, m^e_{(x,K), \eta, \tau}(y)], \, \, \, \tau^{y, j} \neq \tau_{y\rho} \Big \} \, \Big | $$ be the number of jump instructions that are used at $y$ when we stabilize $K$ by ignoring sleep instructions at $x$. Similarly, we let $M_{K, \eta, \tau}(y)$ be the number of jump instructions that are used at $y$ for the stabilization of $K$, and $M^1_{(x,K), \eta, \tau}(y)$ be the number of jump instructions that are used at $y$ for the weak stabilization of $(x,K)$. In the next lemma we state some simple but important relations between these quantities.

\begin{lemma}\label{lemma:independence} Let $\eta$ be an arbitrary particle configuration, let $\tau$ be an arbitrary instruction array, and let $\tilde \tau = T^x(\tau)$ be obtained from $\tau$ by turning all the sleep instructions at $x$ into neutral instructions. Then we have that, for any vertex $y \in K$, \begin{align}\label{eq:independ1} m^1_{(x,K), \eta, \tau} (y) \, &= \, m^1_{(x,K), \eta, \tilde \tau} (y) ~~~~~ M^1_{(x,K), \eta, \tau} (y) \, = \, M^1_{(x,K), \eta, \tilde \tau} (y) \\ \label{eq:independ4} m^e_{(x,K), \eta, \tau} (y) \, & \geq \, m_{K, \eta, \tau} (y) ~~~~~~~~~ M^e_{(x,K), \eta, \tau} (y) \, \geq \, M_{K, \eta, \tau} (y). \end{align} \end{lemma}

\begin{proof} When we perform the weak stabilization of $(x,K)$, we topple $x$ only if $x$ contains at least two particles, so the sleep instructions at $x$ have no effect. This leads to (\ref{eq:independ1}). The relations (\ref{eq:independ4}) follow from a direct application of monotonicity with enforced activation for stabilization (Lemma \ref{prop:lemma5}). \end{proof}
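In the setting of the earlier sketches, the map $T^x$ admits a direct implementation (ours, purely illustrative):

\begin{verbatim}
def enforce_activation_at(tau, x):
    """The map T^x: replace every sleep instruction at x by a neutral
    instruction, leaving the instructions at all other vertices unchanged."""
    out = dict(tau)
    out[x] = [("neutral",) if ins[0] == "sleep" else ins for ins in tau[x]]
    return out
\end{verbatim}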
For the next lemma we need to recall the stabilization-via-weak-stabilization procedure that has been introduced in Section \ref{sec:Diaconis}. Let \begin{equation}\label{eq:Afunction} A_{(x,K), \eta, \tau} : = M^e_{(x,K), \eta, \tau} (x) - M^1_{(x,K), \eta, \tau} (x) \end{equation} be the total number of jump instructions that are used at $x$ when we stabilize $K$ by ignoring sleep instructions at $x$ and that are not used for the weak stabilization of $(x,K)$.

\begin{lemma}\label{lemma:Qandenforced} Let $G=(V,E)$ be an arbitrary locally-finite graph and let $K \subset V$ be a finite set. Let $\eta^{\prime}$ be the particle configuration that is obtained after the stabilization of $K$. Then, for any $x \in K$ and any integer $\ell \geq 2$, \begin{equation}\label{eq:mainupperbound} \mathcal{P} \big( \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = \ell \big) \leq \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell-2} \, \, \mathcal{P} \big( A_{(x,K)} \geq \ell-2 \big). \end{equation} \end{lemma}

\begin{proof} First of all, note that for any integer $\ell \geq 2$, \begin{align} \mathcal{P} \big( \, \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = \ell \, \big ) & = \mathcal{P} \big( \, T_{(x,K)} \geq \ell, \, \, \tau^{x, m_{(x,K)}^{\ell-1}(x)+1} = \tau_{x \rho} \, \big ) \\ \label{eq:firstrelation} & = \frac{\lambda}{1+\lambda} \, \, \mathcal{P} \big( \, T_{(x,K)} \geq \ell \, \big ). \end{align} The first equality holds true since, if after having completed $\ell-1$ weak stabilizations the next instruction at $x$ is a sleep instruction, then the stabilization is completed with one particle at $x$. The second equality follows from independence of the instructions.

Recall that $\eta_1$ is the particle configuration that we obtain when the first weak stabilization of $(x,K)$ is complete. Note that \begin{multline}\label{eq:secondrelation} \Big \{ T_{(x,K)} \geq \ell \Big \} = \Big \{ \eta_1(x) = 1 \Big \} \cap \Big \{ \forall i \in [1, \ell-2], \, \, \, \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \Big \} \subset \\ \Big \{ M_K(x) - M_{(x,K)}^1(x) \geq \ell - 2 \Big \}. \end{multline} The equality holds true since, in order for the stabilization-via-weak-stabilization procedure to consist of at least $\ell \geq 2$ weak stabilizations, it is necessary that the first weak stabilization ends with one particle at $x$ (if the first weak stabilization ended with no particle at $x$, then the stabilization of $K$ would be completed and $T_{(x,K)}$ would be equal to one) and that, at each of the next $\ell-2$ steps, the next instruction at $x$ is a jump instruction. From the previous formula we obtain that \begin{multline}\label{eq:thirdrelation} \mathcal{P} \big( \, T_{(x,K)} \geq \ell \, \big ) = \\ \mathcal{P} \Big (\big \{ \eta_1(x) = 1 \big \} \, \cap \, \big \{ \forall i \in [1, \ell-2], \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \big \} \,\cap \, \{ M_K(x) - M_{(x,K)}^1(x) \geq \ell - 2 \} \Big) = \\ \sum\limits_{ \tilde \eta \in \Omega\, \, : \, \, \, \tilde \eta(x) = 1} \mathcal{P} \Big ( \big \{ \forall i \in [1, \ell-2], \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \big \} \,\cap \, \{ M_K(x) - M_{(x,K)}^1(x) \geq \ell - 2\} \, \, \big | \, \, \eta_1 = \tilde \eta \Big) \mathcal{P} \big ( \eta_1 = \tilde \eta \big ).
\end{multline} Now note that, for any particle configuration $\tilde \eta$ in the previous sum, we have \begin{multline}\label{eq:fourthrelation} \mathcal{P} \Big ( \big \{ \forall i \in [1, \ell-2], \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \big \} \,\cap \, \{ M_K(x) - M_{(x,K)}^1(x) \geq \ell-2 \} \, \, \big | \, \,\eta_1 = \tilde \eta \Big) \leq \\ \mathcal{P} \Big ( \big \{ \forall i \in [1, \ell-2], \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \big \} \,\cap \, \{ A_{(x,K)} \geq \ell-2 \} \, \, \big | \, \, \eta_1 = \tilde \eta \Big) = \\ \mathcal{P} \Big ( \forall i \in [1, \ell-2], \, \, \tau^{x, m^i_{(x,K)}(x)+1} \, \neq \tau_{x\rho} \, \big | \, \,\eta_1 = \tilde \eta \Big) \mathcal{P} \Big ( A_{(x,K)} \geq \ell-2 \, \big | \, \, \eta_1 = \tilde \eta \Big) \leq \\ \big( \frac{1}{1 + \lambda} \big )^{\ell-2} \, \, \mathcal{P} \Big ( A_{(x,K)} \geq \ell-2 \, \big | \, \, \eta_1 = \tilde \eta \Big), \end{multline} where the first inequality follows from (\ref{eq:independ4}) and the equality follows from independence of the instructions. Indeed, from Lemma \ref{lemma:independence} we have that, for any realization of $\eta$ and $\tau$, neither $M_{(x,K)}^{e}(x)$ nor $M^1_{(x,K)}(x)$ changes if we turn the sleep instructions at $x$ into neutral instructions, so the function $A_{(x,K)} = M_{(x,K)}^{e}(x) - M^1_{(x,K)}(x)$ is independent of the sleep instructions located at $x$. By substituting (\ref{eq:fourthrelation}) into (\ref{eq:thirdrelation}) we conclude the proof of the lemma. \end{proof}

\begin{remark} In \cite{Stauffer} the quantity on the left-hand side of (\ref{eq:mainupperbound}) is bounded from above by the probability that at least $\ell-2$ instructions are used at $x$ after the first weak stabilization, without distinguishing between jump and sleep instructions. Our enhancement is obtained by considering only jump instructions and, more importantly, by introducing a stabilization procedure (\ref{eq:defenforced}) that ignores sleep instructions at one vertex, on which the quantity $A_{(x,K)}$ depends. Indeed, even though replacing $M_K(x) - M^1_{(x,K)}(x)$ by $A_{(x,K)}$ in the first inequality of (\ref{eq:fourthrelation}) might make the probability larger, it allows us to recover independence from the sleep instructions and, thus, to split the second term of (\ref{eq:fourthrelation}) into the product of two factors, which are then bounded from above separately. \end{remark}

In the next lemma we bound from above the expectation of $A_{(x,K)}$. This will lead to an upper bound on the second factor of the last term in (\ref{eq:fourthrelation}).

\begin{lemma}\label{lemma:upperboundA} Let $G$ be a locally-finite graph and let $K \subset V$ be a finite set. Then, for any $x \in K$, \begin{equation}\label{eq:upperboundA} \boldsymbol{E} \Big ( A_{(x,K)} \Big ) \leq G_{K^c}(x, x), \end{equation} where $\boldsymbol{E}$ is the expectation with respect to $\mathcal{P}$. \end{lemma}

\begin{proof} Note that the expectation of $A_{(x,K)}$ can be written as follows, \begin{equation}\label{eq:claim0} \boldsymbol{E} \Big ( A_{(x,K)} \Big ) = \sum\limits_{k=0}^{\infty} \, Poi_{\mu}(k) \, \, \Big [ \boldsymbol{E}_k \big ( M^e_{(x,K)}(x) \big ) \, - \, \boldsymbol{E}_k \big ( M^1_{(x,K)}(x)\big ) \, \, \Big ] \end{equation} where $\boldsymbol{E}_k$ is the expectation $\boldsymbol{E}$ conditional on having precisely $k$ particles at $x$ at time $0$, and $Poi_{\mu}(k)$ is the probability that a Poisson random variable with mean $\mu$ takes the value $k$.
We claim that, for any $k \in \mathbb{N}$, \begin{equation} \label{eq:claim1} \boldsymbol{E}_{k+1} \Big ( M^1_{(x,K)}(x) \Big) = \boldsymbol{E}_k \Big (M^e_{(x,K)}(x) \Big ), \end{equation} and that \begin{equation} \label{eq:claim3} \boldsymbol{E}_{k+1} \Big ( M^1_{(x,K)}(x) \Big ) \leq \boldsymbol{E}_{k} \Big ( M^1_{(x,K)}(x) \Big ) \, + \, G_{K^c}(x,x). \end{equation} By using (\ref{eq:claim1}) and (\ref{eq:claim3}) we obtain from (\ref{eq:claim0}) that \begin{align*} \boldsymbol{E} \Big ( A_{(x,K)} \Big ) & = \sum\limits_{k=0}^{\infty} \, Poi_{\mu}(k) \, \, \Big [ \boldsymbol{E}_{k+1} \big ( M^1_{(x,K)}(x) \big ) \, - \, \boldsymbol{E}_k \big ( M^1_{(x,K)}(x)\big ) \, \, \Big ] \\ & \leq \sum\limits_{k=0}^{\infty} \, Poi_{\mu}(k)\, \, G_{K^c}(x, x) \\ & = G_{K^c}(x, x), \end{align*} which is the desired inequality (\ref{eq:upperboundA}). So, in order to conclude the proof, it remains to prove (\ref{eq:claim1}) and (\ref{eq:claim3}).

The equality (\ref{eq:claim1}) holds true since adding one particle at $x$ and never moving that particle is equivalent to stabilizing $K$ by ignoring all the sleep instructions at $x$. For a formal proof, let $\eta^{k+1}$ be an arbitrary particle configuration with $k+1$ particles at $x$ and let $\eta^k$ be obtained from $\eta^{k+1}$ by removing one of the particles at $x$. Let $\tau$ be an arbitrary array and let $\tilde \tau$ be obtained from $\tau$ by turning the sleep instructions at $x$ into neutral instructions. We use the instructions of $\tau$ for $\eta^{k+1}$ and the instructions of $\tilde \tau$ for $\eta^k$ simultaneously. More specifically, let $\alpha = (x_1, x_2, \ldots, x_{|\alpha|})$ be a sequence that stabilizes $\eta^k$ in $K$ by using the instructions of $\tilde \tau$. Since any step of $\alpha$ is legal for $\eta^{k}$ when we use $\tilde \tau$ (a neutral instruction is always legal), it is also WS-legal for $\eta^{k+1}$ when we use $\tau$. Moreover, since $\Phi_{\alpha} \eta^k$ is stable in $K$ and has no particle at $x$ when we use $\tilde \tau$, then $\Phi_{\alpha} \eta^{k+1}$ is weakly stable in $(x,K)$ when we use $ \tau$. Thus, the sequence $\alpha$ stabilizes $\eta^k$ in $K$ when we use the instructions of $\tilde \tau$ and stabilizes $\eta^{k+1}$ weakly in $(x,K)$ when we use the instructions of $\tau$. From the Abelian property we deduce that, for any $y \in K$, $ m^1_{(x,K), \eta^{k+1}, \tau}(y) = m^e_{(x,K), \eta^{k}, \tau}(y). $ This implies (\ref{eq:claim1}).

We now prove (\ref{eq:claim3}), adapting the steps of a similar proof that appears in \cite{Stauffer} to our setting. Let $\eta$ be an arbitrary particle configuration with $k+1$ particles at $x$. In the first step, we move one of the particles at $x$ until it leaves the set $K$, ignoring any sleep instruction. During this step of the procedure we might use some instruction at $x$ that is WS-illegal (but legal). The expected number of times a jump instruction is used at $x$ during this step is $G_{K^c}(x,x)$. In the second step, we perform the weak stabilization of $(x,K)$ with the remaining particles. Let $M_{K, \eta, \tau}^{\prime}(x)$ be the total number of jump instructions that are used at $x$ in the two steps. By monotonicity with enforced activation and by the Least Action Principle for weak stabilization, $$ M^1_{(x,K), \eta, \tau}(x) \, \, \leq \, \, M_{K, \eta, \tau}^{\prime}(x) .
$$ Moreover, since in the second step we start from a configuration with $k$ particles at $x$ and the instructions are independent, $$ \boldsymbol{E}_{k+1} \big ( \, M_K^{\prime} (x) \, \big ) = \boldsymbol{E}_k \big ( \, M^1_{(x,K)}(x) \, \big ) + G_{K^c}(x,x) . $$ By combining the two previous relations we obtain (\ref{eq:claim3}). \end{proof}

\subsection{Proof of Theorem \ref{theo:boundsQ}}

Theorem \ref{theo:boundsQ} is a direct consequence of Lemma \ref{lemma:Qandenforced} and Lemma \ref{lemma:upperboundA}. From Lemma \ref{lemma:Qandenforced} we obtain that \begin{align*} Q(x,K) & = \sum\limits_{ \ell=2}^{\infty} \, \mathcal{P} \big( \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = \ell \big) \\ & \leq \sum\limits_{ \ell=2}^{\infty} \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell-2} \, \, \mathcal{P} \big( A_{(x,K)} \geq \ell -2 \big), \end{align*} having used the fact that $\mathcal{P} \big( \eta^{\prime}(x) = \rho, \, \, T_{(x,K)} = 1 \big) =0$ and that $T_{(x,K)} > 0 $ almost surely. We now perform simple calculations in order to prove the quantitative upper bound of Theorem \ref{theo:boundsQ}. By Markov's inequality and Lemma \ref{lemma:upperboundA}, we obtain that for any positive integer $H$, \begin{align*} \mathcal{P} \big( \eta^{\prime}(x) = \rho \big) & \leq \sum\limits_{ \ell=0}^{H-1} \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell} \, +\, \sum\limits_{ \ell=H}^{\infty} \frac{\lambda}{1+\lambda} \, \, \big( \frac{1}{1+\lambda} \big)^{\ell} \mathcal{P} \big( A_{(x,K)} \geq \ell \big) \\ & \leq \frac{\lambda}{1+\lambda} \, \, \Big [ \sum\limits_{ \ell=0}^{H-1} \, \big( \frac{1}{1+\lambda} \big)^{\ell} \, +\, \, \frac{G_{K^c}(x,x)}{H+1} \, \sum\limits_{ \ell=H}^{\infty} \, \, \, \big( \frac{1}{1+\lambda} \big)^{\ell} \Big ] \\ & = \frac{\lambda}{1+\lambda} \, \, \Big [ \frac{1}{1 - \frac{1}{1 + \lambda}} \, \, - \, \, \Big( 1 - \frac{G_{K^c}(x,x)}{H+1} \Big) \Big ( \frac{1}{1 + \lambda} \Big)^H \, \, \frac{1}{ \Big ( 1 - \frac{1}{1 + \lambda} \Big)}\, \, \Big ] \\ & = 1 - \Big( 1 - \frac{G_{K^c}(x,x)}{H+1} \Big) \Big ( \frac{1}{1 + \lambda} \Big)^H. \end{align*} This concludes the proof of Theorem \ref{theo:boundsQ}.

\subsection{Proof of Theorem \ref{theo1:transient graph}}

Suppose that the graph is vertex-transitive and transient. We have that, for any set $K \subset V$ and any vertex $x \in K$, $G_{K^c}(x,x) \leq G(0,0) < \infty$. Define $$ g(\lambda) := \inf \big \{ 1 - \Big( 1 - \frac{G(0,0)}{H+1} \Big) \Big ( \frac{1}{1 + \lambda} \Big)^H \, \, : \, \, H \in \mathbb{N} \big \} , $$ and note that from Theorem \ref{theo:boundsQ} it follows that \begin{equation}\label{eq:bound2} \forall K \subset V, \, \, \, \, \, \, \, \, \forall x \in K, \, \, \, \, \, \, \, \,Q(x,K) \leq g(\lambda)<1 . \end{equation} By choosing $H^{*} := \lceil \sqrt{ \frac{G(0,0)}{\log(1 + \lambda)} } \rceil$, we deduce that $\limsup_{\lambda \rightarrow 0} \frac{g(\lambda)}{\lambda^{\frac{1}{2}}} < \infty$. Suppose now that $\mu > g(\lambda)$. Since from (\ref{eq:bound2}) we have that the expected number of particles remaining in $K$ after the stabilization of $K$ is at most $g(\lambda) \, |K|$, it follows that the expected number of particles leaving $K$ during the stabilization of $K$ is at least $ ( \, \mu - g(\lambda) \, ) \, \, | K|$. Since a positive density of particles leaves the set, since the graph is amenable, and since $K$ is an arbitrary finite set, we deduce from \cite{Rolla2}[Proposition 2] that the system is active.
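Before concluding, we note that $g(\lambda)$ and the choice $H^{*}$ are explicit enough to evaluate numerically; a minimal sketch (ours, for illustration only; the cutoff \texttt{H\_max} is an arbitrary truncation):

\begin{verbatim}
import math

def g_of_lambda(G00, lam, H_max=100000):
    """Evaluate g(lambda) = inf over H >= 1 of
    1 - (1 - G00/(H+1)) * (1+lam)**(-H), by direct scan."""
    return min(1.0 - (1.0 - G00 / (H + 1)) * (1.0 + lam) ** (-H)
               for H in range(1, H_max))

def H_star(G00, lam):
    """The choice H* = ceil(sqrt(G00 / log(1+lam))) used in the proof."""
    return math.ceil(math.sqrt(G00 / math.log(1.0 + lam)))
\end{verbatim}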
We have thus shown that $\mu_c(\lambda) \leq g(\lambda)$ for any $\lambda \in (0, \infty)$, which concludes the proof.

\section{Fixation on non-amenable graphs} \label{sec: Fixation on non-amenable graphs}

In this section we prove Theorem \ref{theo: non amenable}. We start by stating an auxiliary lemma, which provides an upper bound and a lower bound for the expected number of times that particles starting `close' or `far' from the boundary of a ball visit the centre of that ball. Afterwards, we state Proposition \ref{prop:fixation non-amemable}, which connects the values of the probabilities $\{Q(x,B_{L}) \}_{x \in B_L}$ to the expected number of times a particle visits the origin. Finally, we prove Theorem \ref{theo: non amenable} by showing that, if one assumes that ARW is active and that $\mu < \frac{\lambda}{1 + \lambda}$, then Proposition \ref{prop:fixation non-amemable} cannot hold. This leads to the desired contradiction.

\begin{lemma}\label{lemma:RWboundary} Let $G$ be a vertex-transitive graph on which the random walk has positive speed. There exists $C_1=C_1(G) \in (0,\infty)$ such that, for any $\delta \in (0,1)$, there exists an infinite increasing sequence of integers $\{L_n\}_{n \in \mathbb{N}}$ such that \begin{equation}\label{eq:ineq1} \sum\limits_{x \in B_{L_n} \setminus B_{(1 - \delta) \, L_n} } G_{B_{L_n}^c} \big (x, 0 \big ) \, \leq \, C_1 \, \delta \, L_n. \end{equation} Moreover, there exists $C_2= C_2(G) \in (0, \infty)$ such that, for any $L \in \mathbb{N},$ \begin{equation}\label{eq:ineq2} \sum\limits_{y \in B_{ (1-\delta) \, L }} G_{B_{L}^c}(y,0) \geq \, \,C_2 \, \, (1-\delta) \, L . \end{equation} \end{lemma}

\begin{proof} We start with the proof of (\ref{eq:ineq1}). For any pair of real numbers $r_2 > r_1$, let $$\Xi(r_1,r_2) : = E_0 \Big ( \sum\limits_{t=0}^{\infty} \mathbbm{1}\{ X(t) \in B_{r_2}\setminus B_{r_1} \} \Big)$$ be the expected amount of time that the random walk spends in the annulus $B_{r_2} \setminus B_{r_1}$, with $\Xi(0,r_2) $ being the expected amount of time that the random walk spends in the ball $B_{r_2}$. By regularity of the graph, for any integer $n$ and $x \in B_{n}$, we have \begin{equation}\label{eq:regularity} P_{x}\big ( \tau_0 < \tau_{B_n^c} \big ) \, = G_{B_n^c \cup \{0\}}(x,x) \, \, P_0 \big ( \tau_x < \tau_{ \{0\} \cup B_n^c}^+ \big). \end{equation} Then, for any $\delta^{\prime} \in (0,1)$, \begin{align} \sum\limits_{x \in B_n \setminus B_{(1 - \delta^{\prime}) \, n}} G_{B_n^c}(x,0) \, \, & = \, \, G_{B_n^c}(0,0) \sum\limits_{x \in B_n \setminus B_{(1 - \delta^{\prime}) \, n}} P_{x}\big ( \tau_0 < \tau_{B_n^c} \big ) \\ \label{eq:condtris} & \leq G(0,0)^2 \, \, \sum\limits_{x \in B_n \setminus B_{(1 - \delta^{\prime}) \, n}} P_{0}\big ( \tau_x < \tau_{B_n^c} \big ) \\ \label{eq:cond4} & \leq G(0,0)^2 \, \, \Xi \Big ( \, (1 - \delta^{\prime}) n , \, n \, \Big ), \end{align} where we used (\ref{eq:regularity}) and vertex-transitivity. We have that \begin{equation}\label{eq:cond1} \forall n \in \mathbb{N} ~~~~~ \Xi \big ( 0, n \big ) \geq \sum\limits_{\ell=1}^{ \lfloor \frac{1}{\delta^{\prime}} \rfloor } \, \Xi \big ( \, \, \delta^{\prime} \, n \, (\ell-1),\, \delta^{\prime} \, n \, \ell \, \, \big ) . \end{equation} Since the random walk has positive speed, there exists $K=K(G)$ such that \begin{equation}\label{eq:cond2} \forall n \in \mathbb{N} ~~~~~ \Xi \big ( 0, n \big ) \, \leq \, K \, \, n. \end{equation}
(see for example \cite{Stauffer}[eq. (5.16)] for a proof, and note the two typos present there: $P( X(\Delta_{(2k - 1)L}) > L ) $ should be replaced by $P( \forall \ell \in \{1, \ldots, k\}, X(\Delta_{(2\ell - 1)L}) \in B_{L}), $ which is bounded from above by $\xi^k$). Conditions (\ref{eq:cond1}) and (\ref{eq:cond2}) imply that \begin{equation}\label{eq:cond3} \forall n \in \mathbb{N} ~~~~~~ \exists \ell_n \in [\frac{1}{2 \delta^{\prime}} , \frac{1}{\delta^{\prime}}] ~~~~~~ \mbox{s.t.} ~~~~~~\Xi \big ( \, \delta^{\prime} \, n \, (\ell_n-1) ,\, \delta^{\prime} \, n \, \ell_n \, \big ) \leq 4 \, K \, \delta^{\prime} \, n. \end{equation} For any $n \in \mathbb{N}$, define now $L_n :=\lfloor \delta^{\prime} \, n \, \ell_n \rfloor$. From (\ref{eq:cond3}) we obtain that, for any large enough $n$, \begin{multline} \Xi \big ( \, L_n ( 1 \, - \, \frac{\delta^{\prime}}{2} ) , \, \, L_n \, \big ) \, \leq \Xi \big (\, L_n ( \frac{\delta^{\prime} n \ell_n}{L_n} \, - \delta^{\prime}), L_n \big ) \leq \Xi \big (\, L_n ( \frac{\delta^{\prime} n \ell_n}{L_n} \, - \frac{1}{\ell_n}) , L_n \big ) = \\ \Xi \big (\, \delta^{\prime} n \ell_n\, - \frac{L_n}{\ell_n} , L_n \big ) \leq \Xi \big (\,\delta^{\prime} n (\ell_n\, - 1), \delta^{\prime} n \ell_n \big ) \leq \, 4 \, K \, \, \frac{\delta^{\prime} \, n \, \ell_n}{\ell_n} \leq 5 K \frac{L_n}{\ell_n} \leq 10 K \delta^{\prime} L_n. \end{multline} The proof of (\ref{eq:ineq1}) follows by defining $\delta = \frac{\delta^{\prime}}{2}$ and $C_1 = 20 \, K \, G(0,0)^2$, and by selecting an infinite increasing subsequence of $\{L_n\}_{n \in \mathbb{N}}$.

We now prove (\ref{eq:ineq2}). We have that \begin{align} \sum\limits_{y \in B_{(1-\delta) L}} G_{B^c_{L}}(y,0) & = G_{B_{L }^c}(0,0) \, \, \mathbb{E}_0 \Big ( \sum\limits_{t=0}^{\tau^+_{B^c_L \cup \{0\}}} \mathbbm{1} \big\{ X(t) \in B_{(1-\delta)L} \big\} \Big ) \\ & \geq G_{B_{L}^c}(0,0) \, \, p \, \, (1-\delta) \, L \\ & \geq C_2 (1-\delta) \, L, \end{align} where for the first inequality we used conditional expectation, letting $p = P_0 \big ( X(t) \neq 0 \, \, \forall t >0 \big ) >0$ be the probability that the random walk does not return to its starting vertex, which is positive since the random walk has positive speed and is therefore transient. \end{proof}

The next proposition relates the expected number of particles visiting the origin to the quantities $\{ Q(y,B_L)\}_{y \in B_L}$. The proof of Theorem \ref{theo: non amenable} will only use the fact that (\ref{eq:relationQm}) is nonnegative.

\begin{proposition}\label{prop:fixation non-amemable} Let $G$ be a graph. For any $L \in \mathbb{N}$, \begin{equation}\label{eq:relationQm} \mathbb{E}_{B_L}\big ( M_{B_L}(0) \big) = \sum\limits_{y \in B_L} \, G_{B^c_L}(y, 0) \, \, \big ( \, \mu - Q(y, B_L) \, \big). \end{equation} \end{proposition}

\begin{proof} In order to prove (\ref{eq:relationQm}), we use the ghost explorer technique, similarly to \cite{Shellef, Stauffer}. First, we let the particles move until the ball $B_L$ is stable. This means that some particles leave $B_L$ and are absorbed at the boundary, while other particles remain in $B_L$ after having turned into the S-state. We now let a \textit{ghost particle} start an independent simple random walk from every vertex that is occupied by an S-particle in $B_L$. Ghost particles are `killed' whenever they visit $B_L^c$. We let $R_{B_L}(x)$ be the total number of visits at $x \in B_L$ by ghost particles, and we let $W_{B_L}(x)$ be the total number of visits at $x$ by normal particles or by ghost particles.
Now we have that $$ M_{B_L}(x) = W_{B_L}(x) - R_{B_L}(x). $$ As both particles and ghost particles stop only when they leave $B_L$, and as the random walks are independent, we have that $$ \tilde{E}_{B_L}[ W_{B_L}(x)] = \mu \, \sum\limits_{y \in B_L} \, G_{B^c_L}(y, x), $$ where $\tilde {E}_{B_L}$ is the expectation in the enlarged probability space of activated random walks and ghost particles. Moreover, since precisely one ghost leaves from every vertex where an S-particle is located, $$ \tilde{E}_{B_L}[ R_{B_L}(x)] = \sum\limits_{y \in B_L} \, G_{B^c_L}(y, x) \, \, Q(y, B_L), $$ by linearity of expectation. The proof of (\ref{eq:relationQm}) is concluded again by using linearity of expectation. \end{proof}

\begin{proof}[\textbf{Proof of Theorem \ref{theo: non amenable}}] The proof is by contradiction. First of all, note that from Lemma \ref{lemma:RWboundary} and (\ref{eq:relationQm}) we obtain that, for any $\delta \in (0,1)$, there exists an infinite increasing sequence of integers $\{L_n\}_{n \in \mathbb{N}}$ such that for any $\mu >0$ and $\lambda \in (0, \infty)$, $$ \mathbb{E}\big (\, M_{B_{L_n}}(0) \, \big ) \leq \, \, \sum\limits_{y \in B_{(1 - \delta) L_n }} \, \, \Big ( \, \mu - Q(y,B_{L_n}) \, \Big ) \, \, G_{B_{L_n}^c}(y,0) \, \, \, \, + \, \, \,\mu \, C_1 \, \delta \, L_n. $$ Since the number of visits cannot be negative, we obtain that \begin{equation}\label{eq:deductionconstraintQ} \sum\limits_{y \in B_{(1 - \delta) \, L_n}} \, \, \, G_{B_{L_n}^c} (y,0) \, \, \, \big ( Q(y,B_{L_n}) \, - \, \mu \big ) \, \, \, \leq \mu \, \, C_1 \, \, \delta\, L_n. \end{equation} We will show that, if $G$ is such that the random walk has positive speed, then condition (\ref{eq:deductionconstraintQ}) cannot be satisfied when $\mu < \frac{\lambda}{1+\lambda}$ and ARW is active, obtaining a contradiction and concluding that ARW fixates when $\mu < \frac{\lambda}{1+\lambda}$.

Thus, first note that, for any finite set $K \subset V$ and any $x \in K$, \begin{equation}\label{eq:lowerboundQ} Q(x, K) \geq \, \mathcal{P} \big ( \, m_{K}(x) \geq 1 \, \big ) \, \frac{\lambda}{1+\lambda} . \end{equation} This inequality was proved in \cite{Stauffer} and follows from the relation \begin{equation}\label{eq:relation1} \{ m^1_{(x, K)}(x) \geq 1\} \, \, \cap \, \, \{ \tau^{x, m_{(x,K)}^1(x) + 1} = \tau_{x \rho} \} \, \, \subset \, \, \{ \eta^{\prime}(x) = \rho \}. \end{equation} Indeed, if one concludes the weak stabilization of $(x,K)$ with one particle at $x$ and the next instruction at $x$ is a sleep instruction, then the stabilization is completed with one particle at $x$. Moreover, at least one instruction is used at $x$ during the stabilization of $K$ if and only if at least one instruction is used at $x$ during the weak stabilization of $(x,K)$. By independence of the instructions, one obtains (\ref{eq:lowerboundQ}).

Thus, assume that $\mu < \frac{\lambda}{1+\lambda}$ and that ARW is active, and let $D : = \frac{\frac{\lambda}{1+\lambda} - \mu}{2} >0$. We have that, for any $\delta>0$ and for any $L$ large enough depending on $\delta$, \begin{align} \forall x \in B_{(1-\delta) L }, ~~~~~~~ Q(x, B_L) \, & \geq \, \mathcal{P} \big ( \, m_{B_{\delta L} (x)}(x) \geq 1 \, \big ) \, \frac{\lambda}{1+\lambda} \label{eq:step1bis} \\ & \geq \, \mathcal{P} \big ( \, m_{B_{\delta L}}(0) \geq 1 \, \big ) \, \frac{\lambda}{1+\lambda} \label{eq:step1tris} \\ & \geq \mu + D,
\label{eq:step2} \end{align} where the first inequality follows from (\ref{eq:lowerboundQ}) and from monotonicity (Lemma \ref{prop:lemma3}), for the second inequality we used vertex-transitivity, and for the third inequality we used the definition of activity and Lemma \ref{prop:lemma4}. Then, for any $\delta \in (0,1)$ and any $L$ large enough, we deduce from (\ref{eq:step2}) and (\ref{eq:ineq2}) that \begin{equation}\label{eq:violation} \sum\limits_{y \in B_{ (1-\delta) \, L }} G_{B_{L}^c}(y,0)\Big ( Q(y,B_L ) \, - \, \mu \Big ) \geq \, \, D \, C_2 \, (1-\delta) \, L . \end{equation} Choose now $\delta \in (0,1)$ small enough such that, for any $L$ large enough, \begin{equation*} \sum\limits_{y \in B_{ (1-\delta) \, L }} G_{B_{L}^c}(y,0)\Big ( Q(y,B_L ) \, - \, \mu \Big ) > \mu \, C_1\, \delta \, L. \end{equation*} Since the previous condition holds for any $L$ large enough, we deduce that an infinite increasing sequence $\{L_n\}_{n \in \mathbb{N}}$ satisfying (\ref{eq:deductionconstraintQ}) cannot exist when $\mu < \frac{\lambda}{1+\lambda}$ and ARW is active. This leads to the desired contradiction. \end{proof}

\section*{Acknowledgements} The author thanks Elisabetta Candellero and Alexandre Stauffer for very interesting discussions, Shirshendu Ganguly and Leonardo Rolla for useful comments, and the anonymous referee for useful suggestions.
{ "timestamp": "2018-11-20T02:18:27", "yymm": "1712", "arxiv_id": "1712.05292", "language": "en", "url": "https://arxiv.org/abs/1712.05292" }
\section{Introduction} \label{sec:Intro}

Over a period of some years, a framework has been developed for gauge theory which allows continuum computations without fixing the gauge. This is achieved by utilising the freedom to design manifestly gauge invariant versions of the continuum realisation of Wilson's renormalization group (christened the exact RG in ref. \cite{Wilson:1973}). Such manifest gauge invariance was first incorporated into the exact RG in ref. \cite{Morris:1995he}, although in the limited context of pure $U(1)$ gauge theory. Following ref. \cite{Morris:1998kz} it was generalised and extensively studied, first for $SU(N)$ Yang-Mills theory, then QCD \cite{Morris:2006in} and QED \cite{Arnone:2005vd,Rosten:2008zp}. For these gauge theories, regularisation is based on gauge-invariant higher derivatives set at some ultraviolet cutoff scale $\Lambda$, supplemented by gauge invariant Pauli-Villars fields \cite{Morris:1999px} with particular flavours and interactions so that their regularisation properties are preserved under RG flow. It was later realised that the resulting structure could be simply understood as arising from spontaneously broken $SU(N|N)$ super-Yang-Mills theory \cite{Morris:2000fs,Morris:2000jj}. In this scheme, the original gauge field $A^1_\mu$ is joined by a copy gauge field $A^2_\mu$ with wrong sign action, and a complex fermionic (\ie wrong-statistics) gauge field $B_\mu$: \begin{equation} \label{A} \mathcal{A}_\mu = \begin{pmatrix} A^1_\mu \, \, & \, \, B_\mu\, \\ \bar{B}_\mu \, & A^2_\mu \\ \end{pmatrix} \end{equation} This extra regularisation works because these degrees of freedom cancel each other, as happens with Parisi-Sourlas supersymmetry \cite{Parisi:1979ka}, at least sufficiently that, together with appropriately chosen covariant cutoff functions, the theory is then regularised to all orders in perturbation theory \cite{Arnone:2000bv, Arnone:2000qd,Arnone:2001iy}. The symmetry is then broken spontaneously along the fermionic directions, endowing the $B_\mu$ with a mass at the cutoff scale $\Lambda$.

The computational methods were generalised in refs. \cite{Arnone:2002yh,Arnone:2002qi,Arnone:2002fa,Arnone:2003pa,Arnone:2002cs} so that universal results could be extracted in a way which is manifestly independent of the detailed form of the regularisation structure, and such that general group invariants could be handled \cite{Arnone:2005fb}. Using these techniques, the initial computation of the one-loop $\beta$ function at infinite $N$ \cite{Morris:1998kz} was generalised to finite $N$ \cite{Arnone:2002qi,Arnone:2002fb,Gatti:2002kc,Arnone:2002cs}, then to two loops \cite{Morris:2005tv,Rosten:2004aw,Arnone:2005fb,Rosten:2005qs,Rosten:2005ka}, extended to all loops in refs. \cite{Rosten:2005ep,Rosten:2006tk}, and to the computation of gauge invariant operators in refs. \cite{Rosten:2006qx,Rosten:2006pd}. For reviews and further advances see refs. \cite{Arnone:2006ie,Rosten:2010vm,Rosten:2011ty}.

In ref. \cite{Morris:2016nda} the first steps were made in generalising these ideas so as to yield a manifestly diffeomorphism invariant exact RG for use in quantum gravity.\footnote{For an alternative attempt, see ref.
\cite{Wetterich:2016ewc}.} On the one hand, the renormalization group structure of quantum gravity is surely of importance \cite{Stelle:1976gc,Adler:1982ri,Weinberg:1980,Reuter:1996}, and on the other hand one can expect conceptual and computational advances from a framework which allows computations to be done while keeping exact diffeomorphism invariance at every stage, \ie without gauge fixing. Indeed, as shown in ref. \cite{Morris:2016nda}, it turns out that these computations can then be done without first choosing the space-time manifold and in particular without introducing a separate background metric dependence. A solution to the difficult issue of background independence is thus automatic in this formalism.\footnote{For a discussion of this issue in the asymptotic safety literature see \eg refs. \cite{Reuter:2009kq,Becker:2014qya,Dietz:2015owa,Labus:2016lkh,Morris:2016spn,Percacci:2016arh,Ohta:2017dsq}.} However, only the flow equation for classical gravity was developed in ref. \cite{Morris:2016nda}.

In this paper we make the first step towards including manifestly diffeomorphism invariant quantum effects involving gravity. We will be concerned with the conformal, {a.k.a.}\ Weyl or trace, anomaly \cite{Capper1974b,Duff1977,Duff1994}\footnote{The Weyl anomaly has been investigated in flow equations in refs. \cite{Machado:2009ph,Codello2013,Codello:2015ana}.} generated by gauge fields. Although this does not involve dynamical gravity, it is clearly important to understand how the known universal answer can arise in this framework, \ie such that gauge invariance is maintained at all stages. Indeed, since the conformal anomaly can be read off from the logarithmically divergent curvature-squared terms at one loop, it is proportional to the signed number of fields (\ie with fermionic fields appearing with opposite sign). Thus in the usual calculation \cite{Capper1974b,Duff1977,Duff1994}, the ghost degrees of freedom are indispensable. The manifestly gauge invariant formalism reviewed above proceeds without ghosts. Thus the question arises: working on a curved spacetime, is gauge fixing now necessary to recover the correct Weyl anomaly or not? As we will see, the correct Weyl anomaly is in fact reproduced without gauge fixing. This is thus a dramatic confirmation of a formalism that was developed and tested only in flat space calculations. Needless to say, it is also the first time that a manifestly gauge invariant computation has been achieved for the gauge field contribution to the conformal anomaly. As a by-product, we will also compute, without gauge fixing, the one-loop gauge field contribution to the (non-universal) divergent corrections to Newton's constant and the cosmological constant.

For this exercise it is sufficient to consider Maxwell theory, \ie free $U(1)$ gauge fields. As already mentioned, manifest gauge invariance can be straightforwardly incorporated in flow equations for pure $U(1)$ gauge theory in flat space \cite{Morris:1995he}, where in fact only the gauge field $A^1_\mu$ appears. Even for manifestly gauge invariant QED, the gauge field degrees of freedom are not altered or supplemented: only the Dirac fields need regularisation with opposite statistics Pauli-Villars partners \cite{Arnone:2005vd,Rosten:2008zp}.
This is because it is straightforward to regularise a $U(1)$ gauge field gauge invariantly using only a cutoff profile $c$ which is a function of partial derivatives rather than covariant derivatives: \begin{equation} \mathcal{L} = \frac14 F^1_{\mu\nu}\, c^{-1}(-\partial^2/\Lambda^2) F^{1\,\mu\nu}\,. \end{equation} (Throughout we will be working with Euclidean signature.) However, the arguments above already show that such a framework could not possibly give the correct Weyl anomaly. In fact, once we use a non-flat metric (and thus replace the partial derivatives in $c^{-1}$ with covariant derivatives) we introduce interactions with the metric which destroy the regularisation, since this is then again effectively covariant higher derivative regularisation, which is known to fail at one loop \cite{Slavnov:1972sq,Lee:1972fj,Morris:2016nda}. Therefore, even for pure Maxwell theory, we need a wrong-statistics counterpart to play the r\^ole of the Pauli-Villars field. Following the same chain of reasoning as reviewed above, in order for this to be embedded in an exact RG framework, we are led to develop versions of spontaneously broken $U(1|1)$ theory for this purpose. As we will see, the wrong-statistics fields that are introduced then ensure the correct Weyl anomaly. In fact the result can be directly compared to a more conventional calculation, although only after rearranging contributions from the wrong-statistics vector and Goldstone fields, reflecting the fact that the supergauge invariance, while spontaneously broken, is nevertheless manifest throughout.

Actually, a wholesale adaptation of the previously developed manifestly gauge invariant methods is not quite what we want, because the $A^2_\mu$ sector is left unbroken. For the purposes of flat space computations in non-Abelian Yang-Mills theory, this is not a problem \cite{Appelquist:1974tg,Symanzik:1973vg,Arnone:2000bv, Arnone:2000qd,Arnone:2001iy} because all interactions with this sector are irrelevant, starting with \begin{equation} {\rm tr} F^1_{\alpha\beta} F^1_{\gamma\delta}\, {\rm tr} F^2_{\epsilon\zeta} F^2_{\eta\theta} \end{equation} (with indices contracted in some way), and thus the $A^2$ sector decouples in the continuum limit, provided we work in $D\le4$ dimensions \cite{Arnone:2001iy}. For a computation of the conformal anomaly, however, and more generally of the purely gravitational action at one loop, the $A^2$ sector will also contribute, and thus we expect to find twice the right answer whatever the space-time dimension.\footnote{The wrong sign in the action does not contribute to the metric dependence at one loop.} We will confirm that this is indeed the case.

While the above framework is enough to work out the Yang-Mills contribution to the pure gravitational action at one loop, its use would clearly be limited beyond this while the unphysical $A^2$ sector remains as part of the continuum theory. We therefore also build an alternative formulation with spontaneous breaking of both the $B_\mu$ and the $A^2_\mu$ fields, so that all these fields gain masses at the regularisation scale $\Lambda$. This then leaves only the original unbroken Maxwell field $A^1_\mu$ at low energies, which, as we will see, gives exactly the correct value for the conformal anomaly. Given that we get the correct value for the conformal anomaly without gauge fixing, it must be that if we use this regularisation to calculate in the standard way by gauge fixing, the resulting extra contributions all cancel amongst themselves.
Adapting the earlier development of such a gauge fixed framework \cite{Arnone:2000bv, Arnone:2000qd,Arnone:2001iy}, we verify that this is the case. The cancellations arise because the gauge fixing must be done for the full local supergroup, which thus also implements Parisi-Sourlas supersymmetry.

\section{Differential operators} \label{sec:differentialOps}

Before proceeding, it is useful to collect together properties of the curved space differential operators that will naturally appear when working with gauge fields. For scalar fields $\omega$, the operator that naturally appears, \eg as the kernel in the kinetic term, is just the Laplace-Beltrami operator \begin{equation} \label{Delta0} \Delta_0\, \omega := - \nabla^\mu\nabla_\mu\, \omega. \end{equation} (With this sign, it is positive semi-definite.) However, for a, \eg $U(1)$, gauge field\footnote{For simplicity, in this section we write $A_\mu\equiv A^1_\mu$, and trust the reader not to be confused with later usage.} the kernel from the simplest action \begin{equation} \label{actionU1} \frac14\!\int\!\!d^Dx \sqrt{g}\, F_{\mu\nu} F^{\mu\nu} = \frac12\!\int\!\!d^Dx \sqrt{g}\, A^{\mu} \Delta_1^T A_\mu \end{equation} (where $F=dA$, or in components $F_{\mu\nu} =\nabla_\mu A_\nu -\nabla_\nu A_\mu$) is the differential operator $\Delta^T_1=\delta d$, where $d$ is the exterior derivative and $\delta$ the co-differential. In components: \begin{equation} \label{Delta1T} \Delta^T_1 A_{\mu} := \Delta_1 A_{\mu} + \nabla_{\mu} \nabla^{\nu} A_{\nu} = -\nabla^2 A_{\mu} + \nabla^{\nu} \nabla_{\mu} A_{\nu}. \end{equation} Here we have also introduced the (positive semi-definite) Laplace--de Rham operator \begin{equation} (d+\delta)^2=d\delta+\delta d, \end{equation} which on a one-form is explicitly \begin{equation} \label{Delta1} \Delta_1 A_{\mu} := -\nabla^2 A_{\mu} + R_{\mu}\,^{\nu} A_{\nu} \end{equation} (coinciding with the Lichnerowicz Laplacian). Abelian gauge invariance (\ie $d^2=0$) ensures that $\Delta^T_1$ annihilates longitudinal one-forms, as is easily verified explicitly: \begin{equation} \Delta^T_1 \nabla_{\mu} \omega = -\nabla^2 \nabla_{\mu} \omega + \nabla^{\nu} \nabla_{\mu}\nabla_{\nu} \omega = 0. \end{equation} On the other hand, since $d$ and $\delta$ commute with $(d+\delta)^2$, while on a scalar field de Rham $=$ Beltrami: \begin{equation} (d+\delta)^2\omega = \delta d\,\omega = \Delta_0\,\omega, \end{equation} we must have \begin{equation} \label{Delta-properties} \nabla^\mu \Delta_1 A_\mu = \Delta_0 \nabla^\mu A_\mu,\qquad \Delta_1 \nabla_\mu\, \omega = \nabla_\mu \Delta_0\, \omega, \end{equation} as is also readily verified using the component formulae. Thus, using \eqref{Delta1T}, we see that $\Delta_1$ and $\Delta^T_1$ commute. Ignoring normalisable zero-modes (or working on a manifold which has none), $\Pi_L = d \frac1{(d+\delta)^2} \delta$ is a longitudinal projector for one-forms; equivalently \begin{equation} \Pi_{L} A_{\mu}:= - \nabla_{\mu} \frac{1}{\Delta_0} \nabla^{\nu} A_{\nu}. \end{equation} Therefore the transverse projector is \begin{equation} \Pi_{T} := 1 - \Pi_{L}. \end{equation} By $d$, $\delta$ algebra, or using \eqref{Delta-properties}, we have \begin{equation} \Delta_1 \Pi_{T} A_{\mu} = \Pi_{T} \Delta_1 A_\mu = \Delta_1^T A_\mu, \end{equation} \ie $\Delta_1^T$ is just the transverse projection of $\Delta_1$.
Splitting \begin{equation} A_{\mu} = \Pi_T A_\mu + \Pi_L A_\mu =: A_{\mu}^T + \nabla_{\mu} A^L, \end{equation} the transverse eigenmodes of $\Delta_1$ are the non-zero eigenmodes of $\Delta_1^T$, while the longitudinal eigenmodes of $\Delta_1$ correspond to eigenmodes of $\Delta_0$, \ie $\Delta_0 A^L = \lambda A^L$, since then \begin{equation} \Delta_1 \nabla_{\mu} A^L = \lambda \nabla_{\mu} A^L. \end{equation} As a result, a trace involving $\Delta_1$ projected onto the transverse modes can be expressed as \begin{equation} \label{trace-T} {\rm Tr}_{T} f(\Delta_1) \equiv {\rm Tr}\, \Pi_T f(\Delta_1) = {\rm Tr} f(\Delta_1) - {\rm Tr} f(\Delta_0), \end{equation} while the trace over the longitudinal sector is \begin{equation} \label{trace-L} {\rm Tr}_{L} f(\Delta_1) \equiv {\rm Tr} f(\Delta_1) - {\rm Tr} \Pi_T f(\Delta_1) = {\rm Tr} f(\Delta_0). \end{equation}

\section{Manifestly gauge invariant flow equation on a curved spacetime} \label{sec:flow}

We give a brief review of manifestly gauge invariant flow equations for Yang-Mills theory, making some minimal adaptations so that they apply to Maxwell theory propagating in a curved spacetime. As explained in the introduction, this will actually yield a $U(1)\times U(1)$ theory, where the second copy has wrong sign action. From this we can nevertheless extract the one-loop pure gravitational contribution. Then in sec. \ref{sec:improved} we will give an improved flow equation which leaves behind only a single physical Maxwell gauge field.

Recall that the basic idea is that the flow of the Boltzmann measure $\exp(-S)$ should be a total functional derivative, \ie for some generic fields $\phi$: \begin{equation} \label{reparam} \Lambda\partial_\Lambda \,{\rm e}^{-S} = {\delta\over\delta\phi}\left(\Psi \,{\rm e}^{-S}\right) \end{equation} (corresponding to the statement that each RG step is equivalent to an infinitesimal field redefinition $\phi\mapsto \phi+\Psi\, \Lambda^{-1} \delta\Lambda$) \cite{Morris:1998kz,Latorre:2000qc}. Importantly, this ensures that the partition function ${\cal Z}=\int\!\!{\cal D}\phi\, \exp(-S)$, and hence the physics derived from it, is invariant under the RG flow. Working with a fixed background metric $g_{\mu\nu}$ and in general $D$ dimensions, we will show how to generalise the previous formulations for Yang-Mills while preserving this crucial feature, and in such a way that the solution remains straightforwardly calculable.

As we sketched in the introduction, the previous formulations were developed over a number of years to cope with the most general cases. For our purposes we can closely follow the one set out in ref. \cite{Arnone:2002cs}. Indeed, we will see that the flow equation still takes the generic form \begin{equation} \label{sunnfl} \Lambda \partial_{\Lambda} S = - a_0[S,\Sigma_\gamma]+a_1[\Sigma_\gamma], \end{equation} where \begin{equation} \label{Sigma} \Sigma_\gamma=\gamma^2S-2\hat{S}, \end{equation} $\hat{S}$ being the so-called seed action, and $\gamma$ being the gauge coupling which, since we work in general $D$ spacetime dimensions, has mass dimension $2-D/2$. The coupling has been factored out so that it plays the r\^ole of a loop counting parameter: the loop expansion of the effective action is given by \begin{equation} \label{Sloope} S={1\over \gamma^2} S_0+S_1+\gamma^2 S_2 +\cdots, \end{equation} where $S_0$ is the classical effective action, $S_1$ the one-loop correction, and so on. This ensures that (super-)gauge invariance is manifestly maintained at each order in $\gamma$.
As a consequence, the super-covariant derivative is given by \begin{equation} \label{gauge-covariant-derivative} \mathcal{D}_\mu = \nabla_\mu -i\mathcal{A}_\mu. \end{equation} The bilinear functional operator that generates the tree-level contributions is manifestly supergauge invariant: \begin{equation} \label{a0} a_0[S,\Sigma_\gamma] ={1\over2}\,\frac{\delta S}{\delta {\cal A}^{\mu}}\{\dot{\triangle}^{\!{\cal A}\A}\}\frac{\delta \Sigma_\gamma}{\delta {\cal A}_{\mu}}+{1\over2}\,\frac{\delta S}{\delta {\cal C}}\{\dot{\triangle}^{{\cal C}\C}\} \frac{\delta \Sigma_\gamma}{\delta {\cal C}}, \end{equation} as is the linear functional that generates the loop corrections: \begin{equation} \label{a1} a_1[\Sigma_\gamma] = {1\over2}\,\frac{\delta }{\delta {\cal A}^{\mu}}\{\dot{\triangle}^{\!{\cal A}\A}\}\frac{\delta \Sigma_\gamma}{\delta {\cal A}_{\mu}} + {1\over2}\,\frac{\delta }{\delta {\cal C}}\{\dot{\triangle}^{{\cal C}\C}\} \frac{\delta \Sigma_\gamma}{\delta {\cal C}}. \end{equation} These expressions are exactly as in ref. \cite{Arnone:2002cs}.\footnote{\label{footnote:supergauge} Since super-gauge invariance ensures $\mathcal{D}_\mu \frac{\delta S}{\delta{\cal A}_\mu} = i[{\cal C},\frac{\delta S}{\delta{\cal C}}]$, longitudinal terms can be absorbed into the ${\cal C}$ part \cite{Arnone:2002cs}.} We now explain what the various terms mean.

Unlike in ref. \cite{Arnone:2002cs}, we are interested in regularising $U(1)$ gauge theory. We therefore need to use a gauge field ${\cal A}$ valued as a generator of $U(1|1)$. The gauge field thus takes the same form as given in eqn. \eqref{A}, except that here each field is a single component rather than a matrix and, unlike in the $SU(N|N)$ case, there is no resulting restriction to ${\rm str} {\cal A}=0$. The same goes for the superscalar field \begin{equation} \label{defC} {\cal C} = \left( \begin{array}{cc} C^1 & D \\ \bar{D} & C^2 \end{array} \right), \end{equation} which will inflict spontaneous symmetry breaking on the supergauge invariance, breaking it down to the diagonal $U(1)\times U(1)$ carried by the bosonic gauge fields $A^i_\mu$, while supplying cutoff-size masses to the complex fermionic $B_\mu$ field, which thus turns into a gauge invariant Pauli-Villars regulator field.

As explained in ref. \cite{Arnone:2002cs}, an elegant way to impose that the spontaneous symmetry breaking scale tracks the cutoff scale $\Lambda$ is to take ${\cal C}$ to be dimensionless, so we will do the same here. Then we can choose a potential so that the effective vacuum expectation value can be \cite{Arnone:2002cs}: \begin{equation} \label{vev} \langle {\cal C} \rangle = \sigma \end{equation} at any scale $\Lambda$. Following previous convention, we write the third Pauli matrix as \begin{equation} \label{sigma} \sigma \equiv \sigma_3 = \begin{pmatrix} 1 & 0\cr 0 & -1 \end{pmatrix}. \end{equation} This matrix appears frequently also as a result of the supergroup symmetry, for example through the supertrace: \begin{equation} {\rm str} X := {\rm tr} (\sigma X), \end{equation} $X$ being a supermatrix and str being the supergroup invariant version of the trace. The result of \eqref{vev} is precisely to give a mass $\sim\Lambda$ to the off-diagonal entries in ${\cal A}$, \ie to the complex $B$ field.
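The matrix structure behind this statement is easy to check: heuristically, a term quadratic in ${\cal A}$ built from the vacuum value $\sigma$ involves the commutator $[{\cal A}_\mu,\sigma]$, which retains only the fermionic entries. A quick symbolic check (ours, purely illustrative; for this structural point we may treat the entries as commuting symbols even though $B$ and $\bar B$ are really fermionic):

\begin{verbatim}
import sympy as sp

A1, A2, B, Bbar = sp.symbols('A1 A2 B Bbar')
sigma = sp.Matrix([[1, 0], [0, -1]])   # the Pauli matrix sigma_3
A = sp.Matrix([[A1, B], [Bbar, A2]])   # one component of the supergauge field

comm = A * sigma - sigma * A           # [A, sigma]
print(comm)  # Matrix([[0, -2*B], [2*Bbar, 0]]): only B, Bbar survive,
             # so <C> = sigma generates a mass for B alone
\end{verbatim}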
Since for the $U(1|1)$ theory ${\cal A}$ is not subject to a constraint, both the ${\cal A}$ and ${\cal C}$ functional derivatives are freely acting and are thus defined as follows: \begin{equation} \label{dCdef} {\delta \over {\delta{\cal C}}} := { \left(\!{\begin{array}{cc} {\delta / {\delta C^1}} & - {\delta / {\delta \bar{D}}} \\ {\delta / {\delta D}} & - {\delta / {\delta C^2}} \end{array}} \!\!\right)}, \end{equation} or in components \begin{equation} \label{Cdumbdef} {\delta \over {\delta{\cal C}}}^i_{\hspace{0.05in} j} := {\delta \over {\delta{\cal C}}^k_{\hspace{0.05in} i}}\sigma^k_{\hspace{0.05in} j}, \end{equation} the supergauge functional derivatives being defined in the same way. The advantage of this definition is that the $U(1|1)$ invariance remains manifest; for example we have: \begin{equation} \label{sow} {\partial\over\partial{\cal C}}\ {\rm str} \,{\cal C} Y = Y, \end{equation} and thus \begin{equation} {\rm str} X{\partial\over\partial{\cal C}}\ {\rm str} \,{\cal C} Y = {\rm str} XY, \end{equation} and \begin{equation} \label{split} {\rm str} {\partial\over\partial{\cal C}}X{\cal C} Y = {\rm str} X\ {\rm str} Y \end{equation} where $X$ and $Y$ are arbitrary constant supermatrices \cite{Morris:2000fs,Arnone:2002cs}.

In order to maintain local supergauge invariance in \eqref{sunnfl} it is then only necessary to ensure that the bi-local kernels $\dot{\triangle}^{\!{\cal A}\A}(x,y)$ and $\dot{\triangle}^{{\cal C}\C}(x,y)$ are suitably covariantized by including ${\cal A}$ interactions, after which an invariant is constructed by taking an overall supertrace. This is essentially the meaning of the curly brackets. In fact it proved helpful to extend the definition so that\footnote{In non-abelian Yang-Mills, the couplings of $A^2$ and $A^1$ run differently, motivating further decorations \cite{Arnone:2005fb}, however this is not needed for the calculation pursued here.} \begin{equation} \label{wev} X\{W\}Y = X\,\{W\}_{\!\!{}_{\cal A}}Y -{1\over4} [{\cal C},X]\,\{W_{m}\}_{\!\!{}_{\cal A}} [{\cal C},Y], \end{equation} where $X(x)$ and $Y(y)$ are supermatrix fields produced by the functional derivatives in \eqref{a0}, $W_m(x,y)$ is a new kernel that simplifies calculations in the broken phase, and $\{\cdots\}_{\!\!{}_{\cal A}}$ stands for the gauge covariantization just described.

In flat space, the most general form of gauge covariantization is described in ref. \cite{Arnone:2002cs}, following \cite{Morris:2000fs,Morris:1999px,Arnone:2002qi}, and can be couched in terms of a path integral over Wilson lines. We will not need the details for this paper. However, we will need a covariantization to cope with a non-trivial metric. The most general case can again be couched in terms of an integral over Wilson lines for the Levi-Civita connection, as remarked in ref. \cite{Morris:2016nda}. In the latter paper we however made the simplest choice, promoting space-time partial derivatives to covariant derivatives in a prescribed way (corresponding implicitly to some specific choice of measure for the Wilson lines). We will do something similar in this paper. Thus for a scalar flat-space kernel \begin{equation} \label{kdef} W(x,y) =\int\!\!{d^D\!p\over(2\pi)^D} \,W(p^2,\Lambda)\,{\rm e}^{ip.(x-y)} \ =\ W(-\partial_x^2,\Lambda)\, \delta(x-y), \end{equation} we make the replacement \begin{equation} \label{covariantize} W(-\partial^2_x,\Lambda) \mapsto W(\Delta_{0\,x},\Lambda)/\sqrt{g(x)}, \end{equation} where $\Delta_0$ is the Laplace-Beltrami operator introduced in \eqref{Delta0}.
Following the framework of \eg ref. \cite{Morris:2016nda}, the factor of $1/\sqrt{g}$ is inserted to give the kernel the correct overall density weight of $-1$, so that combined with the $\sqrt{g}$ factors from the two functional derivatives in \eqref{a0}, and after integrating over $x$ and $y$ (without further factors of $\sqrt{g}$), the result is clearly generally covariant. (Note that $\nabla_\alpha$ commutes with $1/\sqrt{g}$, which can have either $x$ or $y$ dependence, and thus the kernel is symmetric as assumed.) Recognising that the vector kernel $\dot{\triangle}^{\!{\cal A}\A}$ is associated with one-forms, the elegant choice is to make this a function of the Laplace--de Rham operator $\Delta_1$, \cf \eqref{Delta1}. We will see in the remainder of the paper how this choice ensures that computations remain almost as simple as their flat-space counterparts. As in ref. \cite{Arnone:2002cs}, we discard the terms where the left-most ${\cal C}$ functional derivative in \eqref{a1} hits the ${\cal C}$ decorations in \eqref{wev}. This can be imposed by a limiting procedure \cite{Morris:1998kz}, see also \cite{Morris:2000fs,Morris:1999px}. $\hat{S}$ is used to determine the form of the classical effective kinetic terms and the kernels $\dot{\triangle}$. It therefore has to incorporate the covariant higher derivative regularisation and allow the spontaneous symmetry breaking we require. As we will review shortly, the kernels $\dot{\triangle}$ are determined by the requirement that after spontaneous symmetry breaking, the two-point vertices of the classical effective action $S_0$ and of $\hat{S}$ can be set equal. This is imposed as a useful technical device, since it allows classical vertices to be immediately solved in terms of already known quantities. \section{Kernels and two-point vertices in a curved background} In this paper we are interested only in the one-loop contribution to pure gravity. This arises by first solving for the classical action $S_0$. For this we extract the $1/\gamma^2$ part of \eqref{sunnfl}, using \eqref{Sigma} and \eqref{Sloope}: \begin{equation} \Lambda{\partial\over\partial\Lambda}S_0 = -a_0[S_0,S_0-2\hat{S}]. \label{ergcl} \end{equation} Then the one-loop piece $S_1$ can be solved for by substituting back into the flow equation \eqref{sunnfl}. We see that the pure gravity contribution arises from $a_1[S_0-2\hat{S}]$, where we need only the ${\cal C}$ and ${\cal A}$ two-point vertices in $S_0$ and $\hat{S}$. We also see we can dispense with the $U(1|1)$ gauge covariantization $\{\cdots\}_{\!\!{}_{\cal A}}$. Just as in ref. \cite{Arnone:2002cs}, there are no ${\cal A}$ one-point vertices (\eg as a result of Poincar\'e invariance or charge conjugation invariance). Expanding around ${\cal C}\mapsto {\cal C}+\sigma$, where by design ${\cal C}=\sigma$ is at the minimum of the potential, there are no ${\cal C}$ one-point vertices either. Therefore we also do not need the $U(1|1)$ gauge covariantization in the classical flow \eqref{ergcl} of the two-point vertices. As stated in the previous section, these are set equal to the seed action two-point vertices. Since $\hat{S}$ is our choice, the flow actually serves to determine the kernels. Indeed, specialising to the two-point vertices, \eqref{ergcl} now simply becomes \begin{equation} \Lambda{\partial\over\partial\Lambda}\hat{S} = a_0[\hat{S},\hat{S}]\qquad\hbox{(for two-point vertices)}.
\label{ergcl2} \end{equation} Universal quantities are however independent of the choices made, which are part of the freedom in \eqref{reparam} to reparametrise the fields \cite{Morris:1998kz,Latorre:2000qc}. From \eqref{Sigma}, since $\gamma$ has mass dimension $2-D/2$, $\hat{S}$ has dimension $4-D$, \ie the Lagrangian component has dimension four, independent of space-time dimension (similarly from \eqref{Sloope} for $S_0$). Since the above gives structurally the same equations for the flow of the two-point vertices as in ref. \cite{Arnone:2002cs}, we can follow closely in this section the derivations given there. We thus split the supermatrix fields into their block (off-)diagonal components $A_\mu=d_+{\cal A}_\mu$, $B_\mu= d_-{\cal A}_\mu$, $C=d_+{\cal C}$ and $D=d_-{\cal C}$ where \begin{equation} \label{d} d_\pm X = {1\over2}(X\pm\sigma X\sigma). \end{equation} Explicitly, for a supermatrix $X=\left(\begin{array}{cc} a & b\\ c & d\end{array}\right)$, $d_+X$ retains the block-diagonal (bosonic) entries $a$ and $d$, while $d_-X$ retains the off-diagonal (fermionic) entries $b$ and $c$. Choosing a single supertrace form for $\hat{S}$, we need to determine the differential operators that form the two-point vertices $\hat{S}^{AA}_{\mu\nu}$, $\hat{S}^{BB}_{\mu\nu}$, $\hat{S}^{BD\sigma}_\mu$, $\hat{S}^{CC}$ and $\hat{S}^{DD}$ \cite{Arnone:2002cs}. In each case the superscript gives the order of the fields as they appear in the supertrace (up to cyclicity), for example $\hat{S}^{BD\sigma}_\mu(\nabla)$ sits in the seed action as the term \begin{equation} \label{BD} {\rm str} \,\,\sigma\!\! \int\!\!d^Dx \sqrt{g}\, B^\mu \hat{S}^{BD\sigma}_\mu D, \end{equation} where we have used cyclicity of the supertrace to put $\sigma$ first. The flow equations resulting from \eqref{ergcl2} then take exactly the same form as in ref. \cite{Arnone:2002cs}: \begin{eqnarray} \label{fl2} \Lambda \partial_{\Lambda} \hat{S}^{CC} &= &\hat{S}^{CC}\dot{\triangle}^{CC}\hat{S}^{CC},\nonumber\\ \Lambda \partial_{\Lambda} \hat{S}^{AA} &= &\hat{S}^{AA}\dot{\triangle}^{AA} \hat{S}^{AA},\nonumber\\ \Lambda \partial_{\Lambda}\hat{S}^{BB}_{\mu\nu} &= &\left(\hat{S}^{BB}\dot{\triangle}^{BB}\hat{S}^{BB}\right)_{\!\mu\nu}\!\! +\hat{S}^{BD\sigma}_\mu\dot{\triangle}^{DD}\hat{S}^{BD\sigma}_\nu,\nonumber\\ \Lambda \partial_{\Lambda}\hat{S}^{BD\sigma}_\mu &= &\hat{S}^{BB}\dot{\triangle}^{BB}\hat{S}^{BD\sigma}_\mu +\hat{S}^{BD\sigma}_\mu\dot{\triangle}^{DD}\hat{S}^{DD},\nonumber\\ \Lambda \partial_{\Lambda}\hat{S}^{DD} &= &\hat{S}^{DB\sigma\, \mu}\dot{\triangle}^{BB}\hat{S}^{BD\sigma}_\mu +\hat{S}^{DD}\dot{\triangle}^{DD}\hat{S}^{DD}, \end{eqnarray} (where the last two follow from the third by spontaneously broken supergauge invariance) and where the kernels \begin{equation} \label{bzptA} \dot{\triangle}^{AA} = \dot{\triangle}^{{\cal A}\A},\quad \dot{\triangle}^{CC} = \dot{\triangle}^{{\cal C}\C} \end{equation} and \begin{equation} \label{bzptB} \dot{\triangle}^{BB} = \dot{\triangle}^{{\cal A}\A}+\dot{\triangle}^{{\cal A}\A}_m,\quad \dot{\triangle}^{DD} = \dot{\triangle}^{{\cal C}\C}+\dot{\triangle}^{{\cal C}\C}_m, \end{equation} are also of the same form. Following the standard convention in gravitation, see \eg eqns. \eqref{actionU1} and \eqref{Delta1T}, the indices on the differential operators $\hat{S}^{AA}$, $\hat{S}^{BB}$, $\dot{\triangle}^{{\cal A}\A}$, and $\dot{\triangle}^{BB}$ have been suppressed where it is unambiguous to do so, and in the last line of \eqref{fl2} we recognise that the first vertex appearing on the right-hand side now needs to be distinguished from the second (as discussed below). The only changes to the solutions found in ref.
\cite{Arnone:2002cs} are thus induced by the covariantizations of the seed-action two-point vertices required to cope with the background metric $g_{\mu\nu}$. We are free to choose these. Thus we set \begin{equation} \label{hSAC} \hat{S}^{AA} = 2 \Delta^T_1/c_1, \quad \hat{S}^{CC} = \Lambda^2\Delta_0/\tilde{c}_0 + 2\lambda\Lambda^4, \end{equation} and \begin{equation} \label{hSBD1} \hat{S}^{BB} = 2 \Delta^T_1/c_1+4\Lambda^2/\tilde{c}_1,\quad \hat{S}^{DD} = \Lambda^2\Delta_0/\tilde{c}_0, \end{equation} where $\lambda>0$ is a constant dimensionless parameter \cite{Arnone:2002cs}, and $c_i=c(\Delta_i/\Lambda^2)$ and $\tilde{c}_i = \tilde{c}(\Delta_i/\Lambda^2)$ are cutoff functions \cite{Arnone:2002cs} of the appropriate Laplace--de Rham operator. Similarly \begin{equation} \label{hSBD} \hat{S}^{BD\sigma}_\mu = 2i\Lambda^2 \nabla_\mu \,\tilde{c}_0^{-1} = 2i\Lambda^2 \tilde{c}_1^{-1} \,\nabla_\mu\qquad\mathrm{and}\qquad \hat{S}^{DB\sigma}_\mu = 2i\Lambda^2 \nabla_\mu \,\tilde{c}_1^{-1} = 2i\Lambda^2 \tilde{c}_0^{-1} \,\nabla_\mu, \end{equation} where we used \eqref{Delta-properties}. The second version follows from integration by parts in \eqref{BD}, followed by cycling the supertrace and anticommuting $\sigma$. In ref. \cite{Arnone:2002cs} it was not needed, since in flat space the two coincide. Substituting \eqref{hSAC}, \eqref{hSBD1} and \eqref{hSBD} into \eqref{fl2} yields the kernels, and thus the integrated kernels defined via \begin{equation} \label{prop} \dot{\triangle} = -\Lambda \partial_{\Lambda} \triangle. \end{equation} The integration constant is determined by ensuring that the corresponding $\triangle$ vanishes at large eigenvalues. We now show that the (integrated) kernels are the obvious covariantizations of the results found in \cite{Arnone:2002cs}. The first two equations in \eqref{fl2} are solved by straightforward integration: indeed, $\Lambda\partial_\Lambda \hat{S} = \hat{S}\dot{\triangle}\hat{S}$ is equivalent to $\Lambda\partial_\Lambda \hat{S}^{-1} = -\dot{\triangle} = \Lambda\partial_\Lambda\triangle$, so that $\triangle=\hat{S}^{-1}$ once the integration constant is fixed as above. Thus \begin{equation} \label{propCCAA} \triangle^{CC} = \left(\hat{S}^{CC}\right)^{-1}=\frac1{\Lambda^2} \frac{\tilde{c}_0}{\Delta_0+2\lambda\Lambda^2\tilde{c}_0},\qquad \triangle^{AA} = \frac{c_1}{2\Delta_1}, \end{equation} where the second is the inverse of $\hat{S}^{AA}$ in the transverse space. Multiplying the third equation in \eqref{fl2} by the transverse projector $\Pi_T$ isolates \begin{equation} \label{propBB} \triangle^{BB} \equiv \triangle^{BB}(\Delta_1) = \frac12\frac{c_1\tilde{c}_1}{\Delta_1\tilde{c}_1+2\Lambda^2c_1}, \end{equation} the inverse of $\Pi_T\hat{S}^{BB}$ in the transverse space. Substituting for the vertices and rearranging the last equation in \eqref{fl2} gives \begin{equation} \label{propDD} \triangle^{DD} = {\tilde{c}_0}/({\Lambda^2\Delta_0})-4\triangle^{BB}_0/\Delta_0 = \frac1{\Lambda^2}\frac{\tilde{c}^2_0}{\tilde{c}_0\Delta_0+2\Lambda^2c_0}, \end{equation} where $\triangle^{BB}_0 \equiv \triangle^{BB}(\Delta_0)$, \ie \eqref{propBB} with $\Delta_1$ replaced by $\Delta_0$. The above formulae are indeed direct maps of the results in ref. \cite{Arnone:2002cs}. \section{Twice the manifestly gauge invariant conformal anomaly} \label{sec:one-loop} From \eqref{sunnfl}, \eqref{Sigma} and \eqref{Sloope}, and the equality of classical and seed-action two-point vertices, we have that the purely gravitational part of the one-loop effective action is computed from \begin{equation} \label{flone-loop} \Lambda \partial_{\Lambda} S_1 = -a_1[\hat{S}]. \end{equation} The functional derivatives in \eqref{a1} are evaluated using \eqref{split}, after expressing the block (off-)diagonal fields in terms of the originals via \eqref{d}.
The supergroup contribution is then $\pm\tfrac{1}{2}({\rm str}\sigma)^2=\pm 2$, \ie the signed number of each flavour. Combined with the sign in \eqref{flone-loop} and the $1/2$ from \eqref{a1}, we thus have that the purely gravitational piece satisfies: \begin{equation} \label{dS1} \Lambda \partial_{\Lambda} S_1 = {\rm Tr}\left[ -\hat{S}^{AA}\dot{\triangle}^{AA} +\hat{S}^{BB}\dot{\triangle}^{BB} -\hat{S}^{CC}\dot{\triangle}^{CC} +\hat{S}^{DD}\dot{\triangle}^{DD} \right], \end{equation} where ${\rm Tr}$ stands for a space-time trace, taking into account the relevant Lorentz representation. Thus for $A$ this is a trace over transverse modes, for $B$ a trace over all vector modes, and for $C$ and $D$ it is a trace over scalar modes. The bosonic contributions are straightforward to simplify using the equations of the previous section, noting that $\triangle=\hat{S}^{-1}$ implies ${\rm Tr}\,\hat{S}\dot{\triangle} = -{\rm Tr}\,\hat{S}\,\Lambda\partial_\Lambda\hat{S}^{-1} = \Lambda\partial_\Lambda{\rm Tr}\ln\hat{S}$: \begin{equation} {\rm Tr}\left[ \hat{S}^{AA}\dot{\triangle}^{AA}+\hat{S}^{CC}\dot{\triangle}^{CC}\right] = \Lambda \partial_{\Lambda}{\rm Tr}\ln(\Delta^T_1/c_1) +\Lambda \partial_{\Lambda}{\rm Tr}\ln(\Lambda^2\Delta_0/\tilde{c}_0 + 2\lambda\Lambda^4). \end{equation} Noting the first equality in \eqref{propDD}, we have \begin{equation} {\rm Tr}\, \hat{S}^{DD}\dot{\triangle}^{DD} = \Lambda \partial_{\Lambda} {\rm Tr} \ln(\Lambda^2\Delta_0/\tilde{c}_0) -4\Lambda^2\,{\rm Tr}\,\dot{\triangle}^{BB}_0/\tilde{c}_0, \end{equation} while using the first equation of \eqref{Delta1T} and cyclicity of the spacetime trace, \begin{equation} {\rm Tr}\, \hat{S}^{BB}\dot{\triangle}^{BB} = \Lambda \partial_{\Lambda} {\rm Tr}\ln(\Delta_1/c_1+2\Lambda^2/\tilde{c}_1)-2{\rm Tr}\,\Delta_0\dot{\triangle}^{BB}_0/c_0. \end{equation} Combining the last terms in the above two equations gives $\hat{S}^{BB}_0\dot{\triangle}^{BB}_0$, which again simplifies. Thus, substituting everything back into \eqref{dS1}, we can trivially integrate with respect to $\Lambda$. Also cancelling ${\rm Tr}\ln\Lambda^2$ between the $D$ and $C$ sectors, we thus get \begin{equation} S_1 = {\rm Tr}\left[ -\ln(\Delta^T_1/c_1) +\ln(\Delta_1/c_1+2\Lambda^2/\tilde{c}_1) -\ln(\Delta_0/c_0+2\Lambda^2/\tilde{c}_0) +\ln(\Delta_0/\tilde{c}_0) -\ln(\Delta_0/\tilde{c}_0 + 2\lambda\Lambda^2) \right]. \end{equation} From \eqref{trace-L}, the third term on the right-hand side is just subtracting the longitudinal $B$ contribution. Indeed, using \eqref{trace-T}, we can alternatively write: \begin{equation} \label{twiceStandard} S_1 = -{\rm Tr}_T\left[ \ln(\Delta^T_1/c_1) -\ln(\Delta_1^T/c_1+2\Lambda^2/\tilde{c}_1) \right] +{\rm Tr}\left[\ln(\Delta_0/\tilde{c}_0) -\ln(\Delta_0/\tilde{c}_0 + 2\lambda\Lambda^2) \right]. \end{equation} In this form we recognise that the result coincides with twice what would be produced by more conventional calculational methods, reflecting the fact that we have two copies $A^i$ of the $U(1)$ gauge field. Thus the first trace is the contribution of two transverse vector fields regularised by covariant higher derivatives and Pauli-Villars. The second trace coincides with twice the Jacobian from the change of variables $A_\mu = A^T_\mu +\nabla_\mu\omega$, again regulated by covariant higher derivatives and a Pauli-Villars field. Using \eqref{trace-T} we can map \eqref{twiceStandard} to a calculation which is even closer to a standard textbook exposition.
By replacing the transverse trace by a trace over the full vector representation we get: \besp \label{twiceStandardFull} S_1 = -{\rm Tr}\left[ \ln(\Delta_1/c_1) -\ln(\Delta_1/c_1+2\Lambda^2/\tilde{c}_1) \right]\\ +{\rm Tr}\left[ \ln(\Delta_0/c_0) -\ln(\Delta_0/c_0+2\Lambda^2/\tilde{c}_0) \right] +{\rm Tr}\left[\ln(\Delta_0/\tilde{c}_0) -\ln(\Delta_0/\tilde{c}_0 + 2\lambda\Lambda^2) \right]. \eesp Now from \eqref{Delta1} the first term is the one-loop contribution for two $U(1)$ gauge fields in Feynman gauge, regulated by covariant higher derivatives and a Pauli-Villars field, whereas the second two terms can be identified with the ghost contributions regulated by covariant higher derivatives and Pauli-Villars fields (with the option to choose different parameters for the regularisation in the second ghost action). Either way we see that following now-standard treatments, for example computing the Schwinger--DeWitt coefficients in a heat kernel expansion (see \eg \cite{Duff1994}), yields twice the trace anomaly contribution from massless vector fields: \begin{equation} \label{text-book-x2} (4\pi)^2 g^{\alpha\beta}\langle T_{\alpha\beta}\rangle = 2b \left(C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+\frac23\nabla^2 R\right) + 2b'\, {}^*\! R_{\mu\nu\rho\sigma} {}^*\! R^{\mu\nu\rho\sigma}\,, \end{equation} where $C$ is the Weyl tensor, and $b=1/10$ and $b'=-31/180$ are the standard values. \section{Twice the contribution to the gravitational beta functions} \label{sec:beta} The coefficients $b$ and $b'$ also appear in the gravitational beta functions induced by the gauge fields. These are obtained by differentiating \eqref{twiceStandard} with respect to $\Lambda$, giving the traces \begin{equation} \label{dS1W} \Lambda \partial_{\Lambda} S_1 = {\rm Tr}_1 [ W_1(\Delta_1)] +{\rm Tr}_0 [ W_0(\Delta_0)] \end{equation} with the functions \begin{eqnarray} \label{Wfunctions} W_1(\Delta_1) &=& \frac{4 \left(\tilde{c}\left(\frac{\Delta _1}{\Lambda ^2}\right) \left(\Lambda ^2 c\left(\frac{\Delta _1}{\Lambda ^2}\right)-\Delta _1 c'\left(\frac{\Delta _1}{\Lambda ^2}\right)\right)+\Delta _1 c\left(\frac{\Delta _1}{\Lambda ^2}\right) \tilde{c}'\left(\frac{\Delta _1}{\Lambda ^2}\right)\right)}{\tilde{c}\left(\frac{\Delta _1}{\Lambda ^2}\right) \left(\Delta _1 \tilde{c}\left(\frac{\Delta _1}{\Lambda ^2}\right)+2 \Lambda ^2 c\left(\frac{\Delta _1}{\Lambda ^2}\right)\right)} \\ W_0(\Delta_0) &=& \frac{4 \Delta _0 \tilde{c}\left(\frac{\Delta _0}{\Lambda ^2}\right) c'\left(\frac{\Delta _0}{\Lambda ^2}\right)-4 c\left(\frac{\Delta _0}{\Lambda ^2}\right) \left(\Delta _0 \tilde{c}'\left(\frac{\Delta _0}{\Lambda ^2}\right)+\Lambda ^2 \tilde{c}\left(\frac{\Delta _0}{\Lambda ^2}\right)\right)}{\tilde{c}\left(\frac{\Delta _0}{\Lambda ^2}\right) \left(\Delta _0 \tilde{c}\left(\frac{\Delta _0}{\Lambda ^2}\right)+2 \Lambda ^2 c\left(\frac{\Delta _0}{\Lambda ^2}\right)\right)} \nonumber \\ && + \frac{4 \lambda \left(\Delta _0 \tilde{c}'\left(\frac{\Delta _0}{\Lambda ^2}\right)-\Lambda ^2 \tilde{c}\left(\frac{\Delta _0}{\Lambda ^2}\right)\right)}{2 \lambda \Lambda ^2 \tilde{c}\left(\frac{\Delta _0}{\Lambda ^2}\right)+\Delta _0}\,. \end{eqnarray} Evaluating the traces \eqref{dS1W} using the early-time heat kernel expansion up to second order in curvature, we have \begin{eqnarray} {\rm Tr} [W_0(\Delta_0)] &=& \frac{1}{(4\pi)^2} \left( Q_2[W_0] B_{0}(\Delta_0) + Q_1[W_0] B_{1}(\Delta_0) + Q_0[W_0] B_{2}(\Delta_0) + ...
\right)\,, \end{eqnarray} and \begin{eqnarray} {\rm Tr} [W_1(\Delta_1)] &=& \frac{1}{(4\pi)^2} \left( Q_2[W_1] B_{0}(\Delta_1) + Q_1[W_1] B_{1}(\Delta_1) + Q_0[W_1] B_{2}(\Delta_1) \right) + ... \,, \end{eqnarray} where $B_n(\Delta_i)$ are the traced heat kernel coefficients for the operators $\Delta_0$ and $\Delta_1$, and $Q_m [W_i]$ are functionals of the functions \eqref{Wfunctions}. Explicitly the heat kernel coefficients are given by \begin{eqnarray} B_0(\Delta_1) &=&4 \int d^Dx \sqrt{g} \,, \nonumber \\ B_1(\Delta_1) &=& -\frac{1}{3} \int d^Dx \sqrt{g} R \,, \nonumber \\ B_2(\Delta_1) &=& \int d^4x \sqrt{g} \left( -\frac{1}{30} \nabla^2 R + \frac{7}{60} C^2 - \frac{8}{45} {}^*\! R_{\mu\nu\rho\sigma} {}^*\! R^{\mu\nu\rho\sigma} + \frac{1}{36} R^2 \right) \,, \nonumber \\ B_0(\Delta_0) &=& \int d^Dx \sqrt{g} \,, \nonumber \\ B_1(\Delta_0) &=& \frac{1}{6} \int d^Dx \sqrt{g} R \,, \nonumber \\ B_2(\Delta_0) &=& \int d^4x \sqrt{g} \left( \frac{1}{180} \left( \frac{3}{2} C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma} -\frac{1}{2} {}^*\! R_{\mu\nu\rho\sigma} {}^*\! R^{\mu\nu\rho\sigma} \right) + \frac{1}{72} R^2 + \frac{1}{30} \nabla^2 R \right) \nonumber \,. \end{eqnarray} For $m>0$ the $Q_m[W_i]$ functionals are given by the scheme dependent integrals \begin{equation} Q_m[W_i] = \frac{1}{\Gamma(m)} \int_0^\infty dz\, z^{m-1} W_i(z)\,, \end{equation} whereas the $Q_0$ functionals are given by \begin{eqnarray} Q_0[W_1] = W_1(0) = 2\,, \,\,\,\,\,\,\, Q_0[W_0] = W_0(0) = -4\,, \end{eqnarray} which are independent of the choice of cutoffs $c$ and $\tilde{c}$. Consequently we find that the logarithmic terms give the trace anomaly \begin{equation} \label{beta} \Lambda \partial_{\Lambda} S_1 = 2 \frac{1}{(4\pi)^2} \int d^4x \sqrt{g} \left[ \Lambda^4 a_0 + a_1 \Lambda^2 R + b \left(C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+ \nabla^2 R\right) + b'\, {}^*\! R_{\mu\nu\rho\sigma} {}^*\! R^{\mu\nu\rho\sigma} +... \right] \end{equation} with $b=1/10$ and $b'=-31/180$ as in \eqref{text-book-x2}. The scheme dependent coefficients $a_0 = 4 Q_2[W_1] + Q_2[W_0]$ and $a_1 = - \frac{1}{3} Q_1[W_1] + \frac{1}{6} Q_1[W_0]$ are non-universal, since they depend on the form of the cutoff functions $c$ and $\tilde{c}$; they determine the running of the vacuum energy and of Newton's constant. \section{Spontaneous symmetry breaking by the vector representation} \label{sec:SSB} We have just seen that we get twice the desired gravitational contribution, because the $A^2$ part of the regularisation structure remains massless. We now repair this problem with the regularisation. Up until now we have treated the two diagonal entries in \eqref{A} equally, using $A = d_+{\cal A}$, where $d_+$ is defined in \eqref{d}. Splitting this further into $A= A^1\sigma_+ +A^2\sigma_-$, where $\sigma_\pm = (\mathbb{1}\pm\sigma)/2$, and \begin{equation} \label{defA1A2} A^1 = \sigma_+{\cal A}\sigma_+,\qquad A^2 = \sigma_-{\cal A}\sigma_-, \end{equation} we want to give a mass to $A^2$ while leaving $A^1$ massless. Therefore we must spontaneously break the $\sigma_-$ direction while leaving the $\sigma_+$ direction unbroken. That is not possible using only commutators (roughly speaking, the supergroup adjoint representation) since $\mathbb{1}$ commutes with anything and thus \[ [\sigma_+,X] =-[\sigma_-,X]\,, \] for an arbitrary supermatrix.
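Before developing this further, we record a short independent check (our own sketch in \texttt{sympy}; the exponential cutoff $c(x)=\tilde{c}(x)={\rm e}^{-x}$ is a hypothetical choice, any smooth normalised cutoff works) of the universal numbers $Q_0[W_1]=2$ and $Q_0[W_0]=-4$, and of the anomaly coefficients $b$, $b'$ obtained by combining them with the $C^2$ and ${}^*\!R{}^*\!R$ terms of $B_2(\Delta_1)$ and $B_2(\Delta_0)$ above: \begin{verbatim}
import sympy as sp
from fractions import Fraction as F

# W_i(0) for the concrete (assumed) cutoff choice c(x) = ct(x) = exp(-x)
z, L, lam = sp.symbols('z L lam', positive=True)
c  = sp.exp(-z/L**2)
ct = sp.exp(-z/L**2)
cp, ctp = sp.diff(c, z)*L**2, sp.diff(ct, z)*L**2   # derivatives w.r.t. x = z/L^2
W1 = 4*(ct*(L**2*c - z*cp) + z*c*ctp)/(ct*(z*ct + 2*L**2*c))
W0 = (4*z*ct*cp - 4*c*(z*ctp + L**2*ct))/(ct*(z*ct + 2*L**2*c)) \
     + 4*lam*(z*ctp - L**2*ct)/(2*lam*L**2*ct + z)
assert sp.simplify(W1.subs(z, 0)) == 2     # Q_0[W_1]
assert sp.simplify(W0.subs(z, 0)) == -4    # Q_0[W_0]

# anomaly coefficients from the B_2 data: 2b from C^2, 2b' from *R*R
twob  = 2*F(7, 60)  + (-4)*F(3, 2)*F(1, 180)
twobp = 2*F(-8, 45) + (-4)*F(-1, 2)*F(1, 180)
assert twob == F(1, 5) and twobp == F(-31, 90)   # b = 1/10, b' = -31/180
\end{verbatim} The $R^2$ terms cancel in the same combination, $2\cdot\frac{1}{36}-4\cdot\frac{1}{72}=0$, as they must for a consistent anomaly.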
The next simplest thing to do therefore is to introduce a `fundamental' {a.k.a.}\ vector representation,\footnote{Since $SU(N|N)$ is an example of a supergroup that is reducible but not decomposable, terms such as fundamental and adjoint carry caveats \cite{Arnone:2001iy}.} redefining \begin{equation} \label{new-C} {\cal C} = \begin{pmatrix} D \\ C \end{pmatrix}\,. \end{equation} For regularising $SU(N|N)$ this would not have worked, firstly because the number of degrees of freedom is incorrect to be eaten by $B$ (and $A^2$), and secondly again because $\mathbb{1}$ commutes with anything. We pause briefly to sketch why the latter property would have led to an issue. In $SU(N|N)$, the supergauge field can be alternatively expanded as ${\cal A}={\cal A}^0\mathbb{1}+{\cal A}^AT_A$, where the generators $T_A$ are both traceless and supertraceless, since ${\rm str} {\cal A}=0$ has forbidden the appearance of a $\sigma$ term. When interactions are built on commutators, this furthermore implies that ${\cal A}^0$ appears nowhere in the action, resulting in the ``no-${\cal A}^0$'' shift symmetry $\delta{\cal A}^0_\mu(x)=\lambda_\mu(x)$ \cite{Arnone:2001iy,Arnone:2002cs}, which then needs to be imposed as a consistency condition. (The alternative procedure of redefining the Lie bracket in the gauge sector to exclude terms $\propto\mathbb{1}$ leads to equivalent consistency conditions \cite{Arnone:2001iy,Arnone:2002cs}.) The second problem with breaking the $SU(N|N)$ symmetry using a representation \eqref{new-C} is that ${\cal A}^0$ would now couple to the action exclusively through such terms. It would therefore act as a Lagrange multiplier field and force an unpromising non-linear constraint. This issue is analogous to the problems which arise in regularised $SU(N|N)$ theory if one attempts to impose that the matrix scalar field \eqref{defC} is supertraceless \cite{Arnone:2001iy}. But $SU(N|N)$ is not what we are interested in here. Instead it turns out that the single superfield representation \eqref{new-C} is exactly what is needed. First we notice that the number of degrees of freedom is just right to give $B_\mu$, $\bar{B}_\mu$ and $A^2_\mu$ masses, if ${\cal C}$ is taken to be complex, and if the fermionic directions and one of the bosonic directions are broken. In particular, to achieve the breaking of $A^2$'s $U(1)$, we need to get a vacuum expectation value in the bottom half. That is why we placed the fermionic component in the upper half and the bosonic component in the lower half. Now suppose that \begin{equation} \label{vev-again} \langle{\cal C}\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\,. \end{equation} Under the supergroup, the fields transform as $\delta{\cal A}_\mu = [\mathcal{D}_\mu,\Omega]$ and $\delta{\cal C} = i\Omega\,{\cal C}$. Writing \cite{Arnone:2002cs} \[ \Omega = \begin{pmatrix} \omega^1 & \tau \\ \bar{\tau} & \omega^2 \end{pmatrix} \] (where the $\omega^i$ are real and $\tau$ is complex), we see that the Goldstone modes are \[ i\Omega \begin{pmatrix} 0 \\ 1 \end{pmatrix} = i \begin{pmatrix} \tau \\ \omega^2 \end{pmatrix}\,, \] so indeed the fermionic and $A^2$ directions are completely broken as required. Furthermore, shifting ${\cal C}\mapsto \langle{\cal C}\rangle+{\cal C}$, we see that unitary gauge consists in setting ${\cal C}=\langle{\cal C}\rangle (1+C_R/\sqrt{2})$, where $C_R/\sqrt{2}$ is the real part of $C$.
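It is worth tallying the degrees of freedom explicitly (a bookkeeping check spelling out the counting asserted above): the doublet \eqref{new-C} carries one complex fermionic scalar $D$ and one complex bosonic scalar $C=(C_R+iC_I)/\sqrt{2}$. The Goldstone modes $\tau$, $\bar{\tau}$ are eaten by $B_\mu$, $\bar{B}_\mu$, while $\omega^2$ removes $C_I$ and supplies the mass of $A^2_\mu$, leaving just the single real Higgs field $C_R$, in agreement with the unitary-gauge statement above.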
Since from \eqref{gauge-covariant-derivative}, \[ \mathcal{D}_\mu\bar{{\cal C}} = \nabla_\mu\bar{{\cal C}}+i\bar{{\cal C}}{\cal A}_\mu\,, \] we have that in unitary gauge the kinetic term for ${\cal C}$ reduces to \begin{equation} \label{kinC} -\Lambda^2\mathcal{D}_\mu\bar{{\cal C}}\mathcal{D}^\mu{\cal C} = -\frac{\Lambda^2}2 \nabla_\mu C_R\,\nabla^\mu C_R+ \Lambda^2(1+C_R/\sqrt{2})^2 \left( B^\mu\bar{B}_\mu -A^2_\mu A^{2\,\mu}\right)\,. \end{equation} The choice of sign for the kinetic term thus provides the right-sign mass for both $B$ and $A^2$, and shows that $C_R$ is a regulator field with the wrong-sign action. (Expanding the supertrace one sees that $\nabla B\nabla\bar{B}$ is the ordering that appears in the kinetic term with positive sign.\footnote{\label{footnote:signid}The choice of sign in \eqref{kinC} is already implicit in the previous formulation, as can be seen by taking the right-hand column of \eqref{defC} and forming the supertrace.}) As before \cite{Arnone:2002cs}, we can ensure this Higgs field gets a mass term that tracks the cutoff, by making it dimensionless and assuming an appropriate potential. The minimal Lagrangian would be \begin{equation} \label{C-minimal} \mathcal{L}_{\cal C} = -\Lambda^2\mathcal{D}_\mu\bar{{\cal C}}\mathcal{D}^\mu{\cal C} -\frac{\lambda}{4}\Lambda^4 (\bar{{\cal C}}{\cal C}-1)^2\,, \end{equation} supplying a mass squared $\lambda\Lambda^2$ for the Higgs field $C_R$ in the broken phase. \section{Manifestly gauge invariant flow equation for Maxwell theory} \label{sec:improved} Now we implement this spontaneous symmetry breaking scheme within a manifestly gauge invariant flow equation. Following sec. \ref{sec:flow} we keep the definitions \eqref{sunnfl}, \eqref{Sigma} and \eqref{Sloope}, but evidently \eqref{a0} and thus \eqref{a1} should be replaced by \begin{eqnarray} \label{a0-again} a_0[S,\Sigma_\gamma] &=& {1\over2}\,\frac{\delta S}{\delta {\cal A}^{\mu}}\{\dot{\triangle}^{\!{\cal A}\A}\}\frac{\delta \Sigma_\gamma}{\delta {\cal A}_{\mu}}+{1\over2}\,\frac{\delta S}{\delta {\cal C}}\{\dot{\triangle}^{{\cal C}\C}\} \frac{\delta \Sigma_\gamma}{\delta \bar{{\cal C}}}, \\ \label{a1-again} a_1[\Sigma_\gamma] &=& {1\over2}\,\frac{\delta }{\delta {\cal A}^{\mu}}\{\dot{\triangle}^{\!{\cal A}\A}\}\frac{\delta \Sigma_\gamma}{\delta {\cal A}_{\mu}} + {1\over2}\,\frac{\delta }{\delta {\cal C}}\{\dot{\triangle}^{{\cal C}\C}\} \frac{\delta \Sigma_\gamma}{\delta \bar{{\cal C}}}, \end{eqnarray} where the functional derivatives with respect to ${\cal C}$ and $\bar{{\cal C}}$ are just the functional derivatives with respect to the components, except that the functional derivative in $\delta S/\delta{\cal C} := \delta_r S/\delta{\cal C}$ should be regarded as acting on the action (and thus also $\Sigma_\gamma$) from the right, so as not to introduce unnecessary signs into the Grassmann components.\footnote{Equivalently one takes a left-derivative using the bottom row of \eqref{dCdef}, and includes an overall minus sign for the overall supertrace, consistent with the identification in footnote \ref{footnote:signid}.} The covariantization of the kernels replaces \eqref{wev} with \begin{equation} \label{wev-A} \frac{\delta}{\delta{\cal A}^\mu}\{\dot{\triangle}^{{\cal A}\A}\}\frac{\delta}{\delta{\cal A}_\mu} = \frac{\delta}{\delta{\cal A}^\mu}\,\{\dot{\triangle}^{{\cal A}\A}\}_{\!\!{}_{\cal A}}\frac{\delta}{\delta{\cal A}_\mu} -\bar{{\cal C}}\frac{\delta}{\delta{\cal A}^\mu}\,\{\dot{\triangle}^{{\cal A}\A}_{m}\}_{\!\!{}_{\cal A}}\frac{\delta}{\delta{\cal A}_\mu}{\cal C}\,,
\end{equation} where $W_m$ therefore now propagates a vector representation, and \begin{equation} \label{wev-C} \frac{\delta}{\delta{\cal C}}\{\dot{\triangle}^{{\cal C}\C}\}\frac{\delta}{\delta\bar{{\cal C}}} = \frac{\delta}{\delta{\cal C}}\,\{\dot{\triangle}^{{\cal C}\C}\}_{\!\!{}_{\cal A}}\frac{\delta}{\delta\bar{{\cal C}}} +\left(\frac{\delta}{\delta{\cal C}}\otimes{\cal C} -\bar{{\cal C}}\otimes \frac{\delta}{\delta\bar{{\cal C}}}\right)\!\{\dot{\triangle}^{{\cal C}\C}_{m}\}\!\left(\frac{\delta}{\delta{\cal C}}\otimes{\cal C} -\bar{{\cal C}}\otimes \frac{\delta}{\delta\bar{{\cal C}}}\right)\,, \end{equation} where $\dot{\triangle}^{{\cal C}\C}_m$ thus propagates the matrix ($\sim$ `adjoint') representation. This tensor-product type ${\cal C}$ decoration can be understood as arising from supergauge invariance (compare footnote \ref{footnote:supergauge}): \begin{equation} {\rm str}\left(\!\Omega\, \mathcal{D}_\mu \frac{\delta S}{\delta{\cal A}_\mu}\right) = i\left( \frac{\delta S}{\delta{\cal C}}\Omega\,{\cal C} - \bar{{\cal C}}\Omega\frac{\delta S}{\delta\bar{{\cal C}}}\right)\,, \end{equation} \ie as before the ${\cal C}$ decoration in \eqref{wev-C} can be exchanged for longitudinal terms in \eqref{wev-A}. The rest of the definition of the flow equation is as in sec. \ref{sec:flow}. In particular, functional derivatives do not act on the terms that decorate the kernels, only on the relevant action $S$ or $\Sigma_\gamma$. Clearly the resulting flow equation manifestly preserves local $U(1|1)$ invariance. \section{Kernels and two-point vertices for the Maxwell flow equation} \label{sec:kernels-again} At the two-point level in the broken phase, the ${\cal C}$ and $\bar{{\cal C}}$ decorations are replaced with the vacuum expectation value \eqref{vev-again}. Defining $C=(C_R+iC_I)/\sqrt2$ and $D$ via the components in \eqref{new-C}, and the vector fields by their components, or equivalently and more conveniently via $B=d_-{\cal A}$ and \eqref{defA1A2}, we can find the resulting two-point flow equations analogous to \eqref{fl2}. Bearing in mind that we ensure that the vacuum expectation value \eqref{vev-again} solves the effective equations of motion, unpacking \eqref{wev-A} and \eqref{wev-C} reveals that the $B$ and $D$ kernels again collect as in \eqref{bzptB}, the $A_2$ and $C_I$ kernels coincide with these, and the new unbroken sector adopts the old $A$ and $C$ expressions: \begin{equation} \label{bzpt1} \dot{\triangle}^{A_1A_1} = \dot{\triangle}^{{\cal A}\A},\quad \dot{\triangle}^{C_RC_R} = \dot{\triangle}^{{\cal C}\C},\quad \dot{\triangle}^{A_2A_2}= \dot{\triangle}^{BB},\quad \dot{\triangle}^{C_IC_I} = \dot{\triangle}^{DD}. \end{equation} As before, the seed action is our choice (subject to preservation of all the required symmetries, in particular spontaneously broken $U(1|1)$ invariance), and requiring that the two-point vertices of the classical effective action and the seed action can be set equal then determines the kernels. The $\hat{S}$ kinetic terms for the vector fields follow from covariant higher derivative regularisation of the super-field strength squared \cite{Arnone:2002cs}, while the ${\cal C}$-sector is a similarly regularised version of \eqref{C-minimal}. By adjusting normalisations, we can arrange for the seed action vertices to closely parallel our previous expressions.
In fact we can get exactly \eqref{hSBD1} and \eqref{hSBD} for $B$ and $D$, while \eqref{hSAC} can be adopted by $A_1$ and $C_R$: \begin{equation} \label{hSA1CR} \hat{S}^{A_1A_1} = 2 \Delta^T_1/c_1, \quad \hat{S}^{C_RC_R} = \Lambda^2\Delta_0/\tilde{c}_0 + 2\lambda\Lambda^4. \end{equation} That leaves only the $A_2$ and $C_I$ kinetic terms and the $C_I A_2$ mixing term. These are constrained by spontaneously broken $U(1|1)$ invariance, and with our choice of normalisations can be taken to coincide with the $B$--$D$ sector: \begin{equation} \label{hSA2CI} \hat{S}^{C_IC_I} = \hat{S}^{DD},\quad \hat{S}^{A_2A_2} = \hat{S}^{BB},\quad \hat{S}^{A_2C_I}_\mu = \hat{S}^{BD\sigma}_\mu,\quad \hat{S}^{C_IA_2}_\mu = -\hat{S}^{DB\sigma}_\mu. \end{equation} In view of the matches \eqref{bzpt1} we already found for the kernels, we see that the flow equations for these two-point vertices coincide with \eqref{fl2}, in the sense that the first equation is now for $\hat{S}^{C_RC_R}$, the second for $\hat{S}^{A_1A_1}$, and the last three again apply to the $B$--$D$ sector but also get copied over to the $A_2$--$C_I$ sector using the maps \eqref{bzpt1} and \eqref{hSA2CI}. The solutions for the integrated kernels are thus already given in \eqref{propCCAA} -- \eqref{propDD}, where now we should rename $\triangle^{CC}$ as $\triangle^{C_RC_R}$, $\triangle^{AA}$ as $\triangle^{A_1A_1}$, and recognise that $\triangle^{A_2A_2}=\triangle^{BB}$ and $\triangle^{C_IC_I}=\triangle^{DD}$. \section{Manifestly gauge invariant conformal anomaly} \label{sec:one-loop-again} There is almost nothing left to do. Clearly equation \eqref{dS1} is replaced by \besp \Lambda \partial_{\Lambda} S_1 = \frac12{\rm Tr}\Big[ -\hat{S}^{A_1A_1}\dot{\triangle}^{A_1A_1}-\hat{S}^{A_2A_2}\dot{\triangle}^{A_2A_2} -\hat{S}^{C_RC_R}\dot{\triangle}^{C_RC_R}-\hat{S}^{C_IC_I}\dot{\triangle}^{C_IC_I}\\ +2\hat{S}^{BB}\dot{\triangle}^{BB} +2\hat{S}^{DD}\dot{\triangle}^{DD} \Big], \eesp but using the identifications of the previous section we see that the $A_2$ and $C_I$ parts just cancel half of the $B$ and $D$ parts, and thus this becomes, in the old notation: \begin{equation} \label{dS1-Maxwell} \Lambda \partial_{\Lambda} S_1 = \frac12{\rm Tr}\left[ -\hat{S}^{AA}\dot{\triangle}^{AA} +\hat{S}^{BB}\dot{\triangle}^{BB} -\hat{S}^{CC}\dot{\triangle}^{CC} +\hat{S}^{DD}\dot{\triangle}^{DD} \right], \end{equation} \ie exactly half the result in \eqref{dS1}. Therefore we obtain half the expression in \eqref{twiceStandard}, \eqref{text-book-x2} and \eqref{beta}, \ie precisely the standard trace anomaly, now computed while maintaining manifest gauge invariance at every stage. \section*{Acknowledgments} TRM thanks Roberto Percacci for helpful conversations on the Weyl anomaly, and acknowledges support from STFC through Consolidated Grant ST/L000296/1. \bibliographystyle{hunsrt}
{ "timestamp": "2017-12-15T02:01:36", "yymm": "1712", "arxiv_id": "1712.05011", "language": "en", "url": "https://arxiv.org/abs/1712.05011" }
\section{Introduction} Entanglement is one of the most significant features of quantum physics, and plays an important role in understanding quantum many-body physics, quantum field theory, quantum information, as well as quantum gravity. In quantum field theory, the entanglement entropy (EE) measures the entanglement between an arbitrary subregion $A$ and its complement $\bar{A}$. It is defined as the von Neumann entropy of the reduced density matrix \begin{equation}\label{eq:ee} S_A=-\mathrm{tr}\rho_A \log \rho_A, \end{equation} where $\rho_A=\mathrm{tr}_{\bar{A}}\rho$ is the reduced density matrix of $A$ with respect to the density matrix of the whole system. In practice, it is more convenient to compute the R\'enyi entropy first, which is defined as \begin{equation}\label{eq:Renyi} S_A^{(n)}=\frac{1}{1-n} \log\, \mathrm{tr} \rho_A^n, \end{equation} and then read off the entanglement entropy by taking the limit \begin{equation} \lim_{n\to 1}S_A^{(n)}=S_A, \end{equation} provided that the continuation in $n$ is well-defined. In quantum field theory, the computation of the R\'enyi entropy leads to the replica trick \cite{replicaHLW1994,replicaCalabrese2004,replicaCalabrese2005} \begin{equation} \mathrm{tr}\rho_A^n=\frac{Z_n(C^n_A)}{Z^n}\,, \end{equation} where $Z_n$ and $Z$ are the partition functions of the theory on the conical spacetime $C_A^n$ and on the original spacetime, respectively. The manifold $C_A^n$ comes from the identifications of the fields along the entangling surface. Equivalently, one may introduce twist operators to induce the field identifications between different replicas, so that the partition function can be computed from the correlation functions of the twist operators in a replicated theory. In general, it is difficult to compute the entanglement entropy directly, owing to the infinitely many degrees of freedom in a field theory. In the past decade, the holographic entanglement entropy (HEE) has been studied intensively since its proposal in 2006 by Ryu and Takayanagi \cite{RT2006}. For a CFT dual to Einstein AdS gravity, the entanglement entropy of the boundary subregion $A$ is given by the area of an extremal surface $\gamma_A$ in the dual bulk \begin{equation}\label{AL} S_A=\frac{Area(\gamma_A)}{4G_N}. \end{equation} Here $G_N$ is the Newton constant, and $\gamma_A$ shares the common boundary $\partial A$ with $A$ and is homologous to $A$. This so-called Ryu-Takayanagi (RT) formula is reminiscent of the Bekenstein-Hawking formula for the black hole entropy \cite{Bekenstein1973, BCH1973}. Actually, from the Euclidean gravity point of view, it has been proved that the holographic EE can be taken as a kind of gravitational entropy \cite{ML2013}, a generalization of the black hole entropy. The holographic entanglement entropy not only provides a new way to compute the entanglement entropy, but more importantly sheds new light on holography and the AdS/CFT correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}. Various aspects of the holographic entanglement entropy can be found in the nice reviews \cite{Nishioka:2009un,Rangamani:2016dms}. For two disjoint subregions $A$ and $B$, one can define the mutual information (MI) between them as \begin{equation} I(A, B)=S(A)+S(B)-S(A\cup B)\,. \end{equation} Different from the entanglement entropy, which is divergent in a field theory, the mutual information is free of ultraviolet (UV) divergences and is positive.
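As a minimal finite-dimensional illustration of the definitions \eqref{eq:ee}, \eqref{eq:Renyi} and of the mutual information just introduced (our own toy example; the field-theoretic quantities of course require the replica trick), consider a two-qubit Bell state: \begin{verbatim}
import numpy as np

def entropy(rho):
    # von Neumann entropy S = -tr(rho log rho), via eigenvalues
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w*np.log(w)).sum())

def renyi(rho, n):
    # Renyi entropy S^(n) = log tr(rho^n) / (1 - n)
    w = np.linalg.eigvalsh(rho)
    return float(np.log((w**n).sum())/(1 - n))

# Bell state (|00> + |11>)/sqrt(2); density matrix indexed rho[a,b,a',b']
psi = np.zeros(4); psi[0] = psi[3] = 1/np.sqrt(2)
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)
rho_A = rho.trace(axis1=1, axis2=3)              # trace out B
rho_B = rho.trace(axis1=0, axis2=2)              # trace out A

print(entropy(rho_A))                            # log 2 ~ 0.6931
print(renyi(rho_A, 2), renyi(rho_A, 1.001))      # -> log 2 as n -> 1
S_AB = entropy(rho.reshape(4, 4))                # 0 for a pure state
print(entropy(rho_A) + entropy(rho_B) - S_AB)    # I(A,B) = 2 log 2
\end{verbatim} For this pure global state $S_{A\cup B}=0$, so the mutual information saturates at $2\log 2$.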
It measures the correlation between two subregions: two entangled subsystems are correlated because they share an amount of information that is not foreseen classically. Actually, the mutual information satisfies an inequality \cite{Wolf} \begin{equation} I({A,B})\geq \frac{\mathcal C(M_A,M_B)^2}{2\parallel M_A\parallel^2 \parallel M_B\parallel^2},\label{inequality} \end{equation} where $M_A$ and $M_B$ are observables in the regions $A$ and $B$ respectively, and $\mathcal C(M_A,M_B):=\langle M_A\otimes M_B \rangle-\langle M_A \rangle \langle M_B\rangle $ is the connected correlation function of $M_A$ and $M_B$. This indicates that the mutual information of two disjoint regions is generically nonvanishing, even when the two regions are far apart, due to the quantum correlations between them. Holographically, according to the RT formula, it is easy to verify that the holographic mutual information (HMI) undergoes a phase transition from nonzero to zero when the separation distance $r$ exceeds a critical value $r_c$ \cite{Headrick:2010zt}; see the sketch below. Thus, when the two regions are far enough apart, the holographic mutual information simply vanishes. However, according to the inequality (\ref{inequality}), the mutual information should not vanish. The discrepancy comes from the fact that the RT formula is given by the on-shell action of the gravitational configuration and only captures the leading order contribution to the entanglement entropy. Once the quantum corrections are included, the holographic mutual information is nonzero \cite{FLM2013,Barrella:2013wja}. In other words, the mutual information provides a nice probe to study the AdS/CFT correspondence beyond the classical order. In particular, for the two-dimensional (2D) holographic CFT with a large central charge and a sparse light spectrum, which is dual to the semiclassical AdS$_3$ gravity, the study of the R\'enyi mutual information allows us to read off the 1-loop and even the 2-loop quantum corrections in gravity \cite{Chen:2013kpa,OPE}. The direct computation of the mutual information is difficult, since the replica trick leads to a conical geometry which may have not only singularities but also nontrivial topology. For example, in two dimensions, the pasting of multiple intervals leads to a higher genus Riemann surface. Nevertheless, when two disjoint regions are far apart, one may use the operator product expansion (OPE) of the twist operators to compute the large distance expansion of the (R\'enyi) mutual information. This turns out to be quite effective for 2D CFT \cite{Headrick:2010zt,Chen:2013kpa,Calabrese:2010he}. It can actually be applied to the higher dimensional case as well. In \cite{Cardy:2013nua}, the leading order mutual information of disjoint spheres for free scalars was discussed by using the OPE of the spherical twist operator \cite{Hung:2014npa}, and found to be consistent with the numerical results \cite{Shiba:2012np,Casini:2008wt}. The discussion was generalized to the next-to-leading order mutual information in \cite{Agon:2015twa} and to the R\'enyi mutual information in \cite{Chen:2016mya,Long:2016vkg} for free scalars. It is definitely interesting to have a better understanding of the mutual information in a general CFT, beyond the free scalar theories.
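The HMI phase transition is easy to exhibit in the simplest setting. The following sketch (our own illustration; it assumes the standard 2D vacuum result $S=\frac{c}{3}\log\frac{\ell}{\epsilon}$ for a single interval and the two competing RT configurations of \cite{Headrick:2010zt}) computes the leading-order HMI of two intervals; note that the UV cutoff $\epsilon$ cancels in $I(A,B)$: \begin{verbatim}
import numpy as np

def holographic_mi(x1, x2, x3, x4, c=1.0):
    """Leading-order HMI of intervals [x1,x2], [x3,x4], x1<x2<x3<x4.

    S(A u B) is the smaller of the disconnected RT configuration,
    pairing (x1,x2),(x3,x4), and the connected one, pairing
    (x1,x4),(x2,x3); the cutoff eps drops out of the difference.
    """
    disconnected = (x2 - x1)*(x4 - x3)
    connected    = (x4 - x1)*(x3 - x2)
    return max(0.0, (c/3)*np.log(disconnected/connected))

print(holographic_mi(0, 1, 1.2, 2.2))   # nearby intervals: I > 0
print(holographic_mi(0, 1, 5.0, 6.0))   # well separated: I = 0 exactly
\end{verbatim}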
At first sight this turns out to be a formidable problem, because even for the simplest two-sphere case the computation in the OPE of the twist operator involves the one-point functions of the primary operators in the conical geometry, which requires detailed knowledge of the CFT. Therefore it is really surprising to find that the mutual information of two disjoint spheres presents universal behaviors at the first few leading orders \cite{CCHL2017}. For a generic CFT, it was further proposed in \cite{CCHL2017} that the mutual information can be expanded in terms of the conformal blocks \begin{equation} I({A,B})=\sum_{\Delta,J}{ b_{\Delta,J}}{ G_{\Delta,J}(u,v)}, \end{equation} where $\Delta$ and $J$ are the conformal dimension and the spin of the primary operator propagating between the two spheres, and $G_{\Delta,J}$ is the conformal block. As the conformal block in the diagonal limit can be approximated by \begin{equation} G_{\Delta,J}(z) \simeq z^\Delta+ \cdots, \end{equation} the contributions are dominated by the operators of the first few lowest dimensions. The remarkable point is that the coefficients $b_{\Delta,J}$ take universal forms for the operators giving the dominant contributions. For example, for a CFT in which the primary operator $\mathcal{O}$ of the lowest dimension $\Delta$ is of scalar type, the leading contribution comes from the bilinear operators $O^{(s)(j_1j_2)}=\mathcal{O}^{(j_1)}\mathcal{O}^{(j_2)}$ ($j_i$ labels the replica and the superscript $(s)$ stands for operators constructed from scalar type operators) with the coefficient \cite{AF2015} \begin{equation} b^{(s)}_{2\Delta,0}=\frac{\sqrt{\pi}\Gamma[2\Delta+1]}{4^{2\Delta+1}\Gamma[2\Delta+\frac{3}{2}]}, \end{equation} while the next-to-leading one could be from the bilinear operators with a derivative\footnote{Strictly speaking, whether or not this is the operator giving the next-to-leading contribution depends on the spectrum of the theory and the dimension $\Delta$. Here we assume that the operator of the next lowest dimension is of dimension at least $1/2$ higher, and that the lowest operator is of dimension greater than $1/2$ as well.} \begin{equation} O^{(s)(j_1j_2)}_{\mu}=\mathcal{O}^{(j_1)}\partial_{\mu}\mathcal{O}^{(j_2)}-(j_1\leftrightarrow j_2), \end{equation} with the coefficient \begin{equation} b^{(s)}_{2\Delta+1,1}=-\frac{\sqrt{\pi}\Delta\Gamma[2\Delta+1]}{2^{4\Delta+3}\Gamma[2\Delta+\frac{5}{2}]}. \end{equation} The coefficients $b$ are independent of the OPE coefficients of the theory. These universal behaviors persist no matter whether the operator of the lowest dimension is of fermionic, vector or tensor type. Of course, one needs to know the exact spectrum of the CFT in order to determine the leading contributions to the mutual information. Once the spectrum of the CFT is known, for example by using bootstrap techniques, the leading contributions can be read off. In this paper, we would like to understand these universal behaviors in a holographic way. In \cite{FLM2013}, Faulkner, Lewkowycz and Maldacena (FLM) proposed that the quantum corrections to the HEE are essentially given by the bulk entanglement entropy between the bulk region $A_b$ enclosed by $\gamma_A \cup A$ and its complement $\bar{A_b}$. While this proposal gives us a prescription for calculating the quantum corrections to the entanglement entropy, it is technically challenging to carry out such computations.
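Schematically, the FLM proposal refines \eqref{AL} to \begin{equation} S_A=\frac{Area(\gamma_A)}{4G_N}+S_{\rm bulk}(A_b)+\mathcal{O}(G_N), \end{equation} where $S_{\rm bulk}(A_b)$ is the entanglement entropy of the bulk fields across $\gamma_A$; it is this $\mathcal{O}(G_N^0)$ piece that carries the long-distance mutual information studied in this paper.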
One technical difficulty is that the bulk geometry corresponding to the replicated geometry is hard to determine, due to the large backreaction \cite{ML2013,Dong2016}. However, for the holographic mutual information we are interested in, the backreaction can be ignored. Consequently one can compute the mutual information holographically by using the OPE of the twist operators. Just like other non-local Wilson-line and Wilson-loop operators \cite{Berenstein:1998ij,Gomis:2009xg,Chen:2007zzr}, the OPE of the twist operator can be computed in a holographic way. In \cite{AF2015}, Ag\'on and Faulkner computed the leading order mutual information coming from a scalar field holographically and found agreement with the field theory result. In this work, we study the quantum corrections to the holographic MI in more general cases, including the higher order contributions coming from the scalar field and the leading order contributions coming from non-scalar fields, including the massless vector boson, the massless graviton, the fermion, and also the massive fields. We reproduce the universal behaviors found in \cite{CCHL2017} exactly. The remainder of this paper is organized as follows. In the next section, after giving a brief review of the spherical twist operator and its OPE expansion, we introduce the field theory computation of the mutual information from scalar, vector and tensor type operators in CFT \cite{CCHL2017}. In section 3, we investigate the bulk computation. By doing the operator product expansion of the extremal surface operator, we obtain the quantum corrections of the scalar, the gauge boson, the graviton and the fermion to the holographic mutual information. In particular, we find that the gauge parts of the propagators of the massless vector boson and the massless graviton play an important role in the computation. We end with conclusions and discussions in section 4. In the appendices, we collect our computations on the massive vector boson and the massive graviton, as well as the formulae for the graviton propagator. We work in Euclidean signature throughout this paper. \section{Field theory results} Let us consider the mutual information of two disjoint spheres in a $d$-dimensional CFT. By using the global conformal symmetry, we can always set the radii of the two spheres to $R$, and the centers of the two spheres to be one at the origin and the other at $x_1=1, x_i=0, i\geq 2$. Now the only independent conformal invariant quantity is the cross ratio \begin{equation} z=\bar{z}=4R^2,\hspace{5ex}u=z\bar{z},\hspace{3ex}v=(1-z)(1-\bar{z}).\nonumber \end{equation} In the disjoint case, we have $0<z<1$. We would like to compute the mutual information of the two disjoint spheres. The mutual information is given by \begin{equation} I(A, B)=\lim_{n\to 1}I^{(n)}(A, B)=-\lim_{n\to 1}\frac{1}{1-n}\log\left(\frac{Z_n(C_{A\cup B}^{(n)})Z^{n}}{Z_n(C_{A}^{(n)})Z_n(C_{B}^{(n)})}\right)\,,\label{eq:RenyiMI} \end{equation} where $I^{(n)}(A, B)$ is the R\'enyi mutual information. The partition functions can be calculated using the nonlocal twist operators $\mathcal T^{(n)}$: \begin{equation} \frac{Z_n(C_{A\cup B}^{(n)})}{Z^{n}} = \left\langle \mathcal T_{A}^{(n)}\mathcal T_{B}^{(n)}\right\rangle _{M^{n}}. \end{equation} Here $M^n$ stands for $n$ copies of the original space. $\mathcal T_{A}^{(n)}$ and $\mathcal T_{B}^{(n)}$ stand for the nonlocal twist operators corresponding to the regions $A$ and $B$ respectively.
In the large distance regime, we can treat the twist operator as a semi-local operator. It can be expanded in terms of the primary operators of the replicated theory \begin{equation} \mathcal T^{(n)}=\langle\mathcal T^{(n)}\rangle\sum_{\{\Delta,J\}}c_{\Delta,J}Q[O_{\Delta,J}], \end{equation} where $Q[O_{\Delta,J}]$ denotes all the operators generated from the primary operator $O_{\Delta,J}$ of dimension $\Delta$ and spin $J$. Note that the summation is over all the primary operators in the $n$-replicated CFT. The coefficient $c_{\Delta,J}$ is read off from the one-point function of the primary operator in the presence of the spherical twist operator. Equivalently, it can be computed from the one-point function of the primary operator in the conical geometry. In 2D, the coefficient can be read off by using the uniformization map. In higher dimensions, it is difficult to compute, except when the theory is free, in which case one can use the method of images. The R\'enyi mutual information is captured by \setlength\arraycolsep{2pt} \begin{eqnarray} \frac{ \langle \mathcal T^{(n)}_A \mathcal T^{(n)}_B\rangle}{ \langle \mathcal T^{(n)}_A \rangle \langle \mathcal T^{(n)}_B \rangle}&=& \sum_{\{\Delta,J\}}c^2_{\Delta,J}\langle Q_A [O_{\Delta,J}]Q_B [O_{\Delta,J}] \rangle\nonumber\\ &=&\sum_{\{\Delta,J\}}s_{\Delta,J}G_{\Delta,J}(u,v), \end{eqnarray} where the building block is the two-point function of the primary module, the conformal block \cite{Dolan:0011,Dolan:0309}. The coefficient $s_{\Delta,J}$ is given by \begin{equation} s_{\Delta,J}=f_{\Delta,J}\sum_{O_{\Delta,J}}\frac{a^2_{\Delta,J}}{N_{\Delta,J}}, \label{sDJ} \end{equation} where the summation is over all the primary operators with the same $(\Delta,J)$ in the replicated theory, and $a_{\Delta,J}$ is determined by the one-point function of the operator $O_{\Delta,J}$ in the planar conical geometry \begin{equation} \langle O_{\Delta,J}(x)\rangle_n=a_{\Delta,J}\frac{T_J}{|x|^\Delta} \end{equation} with $T_J$ being a kind of tensor structure. $N_{\Delta,J}$ in (\ref{sDJ}) is the normalization factor in the two-point function in the flat spacetime: \begin{equation} \langle{O}_{\Delta,J}(x){O}_{\Delta,J}(x^{\prime})\rangle=N_{\Delta,J}\frac{T^{\prime}_{J}(x-x^{\prime})}{(x-x^{\prime})^{2\Delta}} \end{equation} with $T'_J$ being the tensor structure relating to the operator with spin $J$. The coefficient $f_{\Delta,J}$ can be determined by considering a single spherical twist operator and mapping it to a half plane. It depends only on the tensor structure of the operator\footnote{For the detailed study on the tensor structure and the coefficient $f$, please refer to \cite{CCHL2017}.}. In terms of the conformal blocks, the R\'enyi mutual information can be expressed as \begin{equation} I^{(n)}({A,B})=-\frac{1}{1-n}\log (1+\sum_{\{\Delta,J\}}s_{\Delta,J}G_{\Delta,J}), \end{equation} and, since $s_{\Delta,J}$ vanishes at $n=1$, the mutual information is just \begin{equation} I({A,B})=\sum_{\{\Delta,J\}}b_{\Delta,J}G_{\Delta,J}(u,v) , \end{equation} with the coefficient $b_{\Delta,J}$ given by the first order in the expansion of $s_{\Delta,J}$ in powers of $(n-1)$, \begin{equation} b_{\Delta,J}=\frac{\partial s_{\Delta,J}}{\partial n}\big|_{n=1}, \end{equation} as the toy computation below illustrates. This is the conformal block expansion of the mutual information. As the conformal block in the diagonal limit is approximated by \cite{Matthijs:1305} \begin{equation} G_{\Delta,J}(z) \simeq z^\Delta+ \cdots, \end{equation} the leading contribution to the mutual information is from the primary operator with the lowest dimension and nonvanishing coefficient.
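To make the extraction of $b_{\Delta,J}$ concrete, here is a toy \texttt{sympy} computation (our own illustration; \texttt{zD} stands in for a single conformal block $G_{\Delta,J}$, and $s(n)=(n-1)b+(n-1)^2 c_2$ is a hypothetical coefficient vanishing at $n=1$, as $s_{\Delta,J}$ does): \begin{verbatim}
import sympy as sp

n, zD, b, c2 = sp.symbols('n zD b c2')

s  = (n - 1)*b + (n - 1)**2*c2     # s vanishes at n = 1
In = -sp.log(1 + s*zD)/(1 - n)     # Renyi MI, one block kept

# the n -> 1 limit picks out the first (n-1) coefficient of s:
print(sp.limit(In, n, 1))          # -> b*zD, i.e. I = b G
\end{verbatim} The higher orders of $s$ in $(n-1)$, and the higher powers of $s$ from expanding the logarithm, all drop out in the limit.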
As the one-point functions of the operators living purely in a single replica are simply zero, they give vanishing mutual information. It turns out that the dominant contribution comes from the bilinear operators composed of the operators in different replicas. For example, for a CFT in which the primary operator $\mathcal{O}$ of the lowest dimension $\Delta$ is of scalar type, the leading contribution comes from the bilinear operators $O^{(s)(j_1j_2)}=\mathcal{O}^{(j_1)}\mathcal{O}^{(j_2)}$ ($j_1\neq j_2$). The next-to-leading one comes from the bilinear operators with a derivative \begin{equation} O^{(s)(j_1j_2)}_{\mu}=\mathcal{O}^{(j_1)}\partial_{\mu}\mathcal{O}^{(j_2)}-(j_1\leftrightarrow j_2).\label{Jscalar} \end{equation} One important point is that the primary operators of the replicated theory are not just the tensor products of the operators $\mathcal{O}^{(j)}$ in different replicas. The above bilinear operator with a derivative is a typical example. This shows that the spectrum of the replicated theory is much more involved. Even for the free scalar, there is no systematic way to construct the primary operators in the replicated theory \cite{Chen:2016mya}. However, if we are satisfied with the leading contributions to the mutual information, the relevant operators can be constructed explicitly\footnote{Note that for the 2D orbifold CFT, there could be other ways to construct the spectrum in terms of the operators in the twist sectors rather than the normal sector, depending on how to insert the complete set of state basis \cite{Chen:2014hta,Chen:2014ehg,Chen:2015kua}.}. It is remarkable that the coefficients for the leading contributions take universal forms, which means that they depend only on the scaling dimensions and the spins of the primary operators and have nothing to do with the detailed construction of the CFT itself. Naively one cannot expect such universal behaviors, as the one-point function of a primary operator in a conical space cannot be determined in a simple way. It becomes feasible because the one-point function simplifies in the $n\to 1$ limit. This leads to the so-called $1/n$ prescription \cite{CCHL2017}. The origin of the $1/n$ prescription is as follows. Let $G_n$ be any periodic function in the conical geometry whose angular direction is identified as $\theta \sim \theta + 2\pi n$. It satisfies \begin{equation} G_n(r,\theta,y^i) = G_n(r,\theta + 2\pi n,y^i). \end{equation} In the limit $n\rightarrow 1$, it returns to the usual function on the original flat space, $G_1(r,\theta,y^i) = \lim_{n\rightarrow 1} G_n(r,\theta,y^i)$. Expanding in Fourier modes, $G_n = \sum_k a_k(n)\, {\rm e}^{ik\theta/n}$, and noting that the coefficients $a_k(n)$ depend smoothly on $n$, one can show that \begin{equation} G_n(r,\theta,y^i) = G_1(r,\theta /n,y^i)+{O}(n-1). \end{equation} That is, the periodic function on the conifold in the limit $n\rightarrow 1$ is related to the function on the original space by dividing the angular variable by $n$. This is called the $1/n$ prescription \cite{CCHL2017}. Consequently, we have \begin{equation} \lim_{n\rightarrow 1}\sum_{q=1}^{n-1} \frac{G_n^2(2\pi q)}{n-1} = \lim_{n\rightarrow 1}\sum_{q=1}^{n-1} \frac{G_1^2(2\pi q/n)}{n-1}. \end{equation} This is very useful for calculating the expansion coefficients $b_{\Delta,J}$.
For the examples we discussed before, the coefficient of the bilinear operator $O^{(s)(j_1j_2)}$ turns out to be \begin{equation} b^{(s)}_{2\Delta,0}=\frac{\sqrt{\pi}\Gamma[2\Delta+1]}{4^{2\Delta+1}\Gamma[2\Delta+\frac{3}{2}]},\label{coeffscalar} \end{equation} while the coefficient for the operator (\ref{Jscalar}) is \begin{equation} b^{(s)}_{2\Delta+1,1}=-\frac{\sqrt{\pi}\Delta\Gamma[2\Delta+1]}{2^{4\Delta+3}\Gamma[2\Delta+\frac{5}{2}]}.\label{coeffJs} \end{equation} The coefficient (\ref{coeffscalar}) was first derived in \cite{AF2015}. \subsection{Scalar type operator} Let us first review the calculation of the leading order contribution to the mutual information from a primary scalar operator in the boundary CFT by using the $1/n$ prescription. In this case, the operator giving the leading order contribution is of the type $O^{(s)(jj')}=\mathcal{O}^{(j)} \mathcal{O}^{(j')}$, where $\mathcal{O}^{(j)}$ is a scalar primary operator with the lowest dimension $\Delta$ living on the $j$-th replica. For this operator, its two-point function at the leading order in the large distance limit is given by \begin{equation} \langle O^{(s)(0j)} (x_A)\, O^{(s)(0j)} (x_B)\rangle_{M^n}\simeq \frac{1}{|x_A-x_B|^{4\Delta}}\,, \end{equation} where we have used the two-point function on the plane in CFT, \begin{equation}\label{eq:2pointscalar} \langle \mathcal{O}^{(j)}(x_A)\mathcal{O}^{(j)}(x_B)\rangle_M=\frac{1}{|x_A-x_B|^{2\Delta}}. \end{equation} In order to compute the OPE coefficients, we perform a conformal transformation \begin{equation} x'^\mu=\fft{x^\mu+c^\mu}{\Omega}-\fft{c^\mu}{2|c|^2}\,,\quad \Omega=c^2+2\,c\cdot x+x^2\,,\label{ct} \end{equation} where $c^\mu$ is a $d$-dimensional constant vector, given by $c^\mu=(0,R,0,\dots,0)$. Under this transformation, the original conifold geometry ${C}_A^{(n)}$ with spherical conical singularity ($t_{\mathrm{E}}=0\,,\delta_{ij}x^i x^j=R^2$) is conformally transformed into a new conifold geometry ${C}_A^{'(n)}$ with the singularity located at the plane ($t'_{\mathrm{E}}=0\,,x'^1=0$). Moreover, the infinity $x_{\infty}=(t_{\mathrm{E}}=0\,,x^i_\infty)$ is transformed into a finite point $x'_{\infty}=(t'_{\mathrm{E}}=0\,,x'^i=-\fft{c^i}{2c^2})$. The Jacobian is approximately (note that $\Omega\mid_{x\rightarrow \infty}\approx |x|^2$) \begin{equation} \fft{\partial x'^\mu}{\partial x^\nu}\approx\Omega^{-1} I^\mu_\nu(x)\,,\qquad I_{\mu\nu}(x)=\delta_{\mu\nu}-2\hat{x}^\mu\hat{x}^\nu \,,\end{equation} where $\hat{x}^\mu=x^\mu/|x|$ is a unit vector. Under the transformation, we find that at infinity a general spin-$s$ primary operator with the scaling dimension $\Delta$ transforms as \begin{equation} \label{spin} \langle O_{\mu_1\mu_2\dots \mu_s}(x_\infty)\rangle_{{C}_A^{(n)}}\approx \Omega^{-\Delta}(x_\infty)I^{\nu_1}_{\mu_1}(x_\infty)I^{\nu_2}_{\mu_2}(x_\infty)\dots I^{\nu_s}_{\mu_s}(x_\infty)\langle O_{\nu_1\nu_2\dots \nu_s}(x'_\infty)\rangle_{{C}_A^{'(n)}} \,. \end{equation} It should be emphasized that the right-hand side of this equation is only the leading term, which however is sufficient for us to calculate the OPE coefficients. Then for a scalar operator, we find \begin{equation} C^A_{(jj')}=\langle \mathcal{O}^{(j)}(x'_\infty)\mathcal{O}^{(j')}(x'_\infty) \rangle_{{C}_A^{'(n)}}\,. \end{equation} The one-point function on the new conifold geometry ${C}_{A}^{'(n)}$ can be computed using two different methods, as was done in \cite{AF2015} and \cite{CCHL2017}. For a general CFT, the coefficient is theory-dependent.
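As a quick numerical sanity check of the transformation \eqref{ct} (our own sketch, in $d=4$ with a hypothetical radius $R=0.3$), one can verify that points on the entangling sphere land on the plane $x'^1=0$ and that infinity is mapped to $-c^\mu/(2c^2)$: \begin{verbatim}
import numpy as np

R = 0.3
c = np.array([0.0, R, 0.0, 0.0])         # c^mu = (0, R, 0, ..., 0)

def x_prime(x):
    Om = c @ c + 2*(c @ x) + x @ x       # Omega = c^2 + 2 c.x + x^2
    return (x + c)/Om - c/(2*(c @ c))

# points on the sphere t_E = 0, |x| = R map to the plane x'^1 = 0
for th in np.linspace(0, 2*np.pi, 8)[:-1]:
    x = np.array([0.0, R*np.cos(th), R*np.sin(th), 0.0])
    assert abs(x_prime(x)[1]) < 1e-12

# a point far away approximates x_infinity -> -c/(2 c^2)
print(x_prime(np.array([0.0, 1e8, 1e8, 1e8])), -c/(2*(c @ c)))
\end{verbatim}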
In \cite{AF2015}, it was shown that the correlators in the conical space can be transformed to the correlators on the hyperbolic space at finite temperature via the map proposed by Casini et al. in \cite{Casini:2011kv}. Moreover, by using the analyticity and the properties of the thermal field theory, the authors of \cite{AF2015} read off the contribution from the bilinear operator to the mutual information \begin{equation} \label{smi} I(A\,,B)=\fft{\sqrt{\pi}\,\Gamma(2\Delta+1)}{4^{2\Delta+1}\Gamma(2\Delta+3/2)}z^{2\Delta} \,. \end{equation} This is the leading contribution of a scalar operator with the scaling dimension $\Delta$ to the mutual information. Now we would like to use the $1/n$ prescription to derive the same result. As proposed in \cite{CCHL2017}, the Green function $G_n(\theta)$ with a period $2\pi n$ living on the conifold can be expanded as $G_n(\theta)=G_1({\theta}/{n})+{O}(n-1)$, where we suppose $G_n$ is analytically continuable in $n$ and $\lim_{n\to 1}G_n(\theta)=G_1(\theta)$. In other words, when $n$ is close to unity, the Green function $G_n(\theta)$ at the leading order on the conifold geometry $C_A^{(n)}$ is simply given by its counterpart on the plane with the angle coordinate $\theta$ divided by $n$. For the bilinear scalar operators, we have \begin{equation} C_{(jj')}^A\mid_{n\to 1}=\lim_{n\to 1}\langle \mathcal{O}^{(j)}(x'_\infty)\mathcal{O}^{(j')}(x'_\infty)\rangle_{{C}_A^{'(n)}}=\lim_{n\to 1} R_A^{2\Delta}\sin^{-2\Delta}\Big(\frac{\theta_j-\theta_{j'}}{2n}\Big)\,. \end{equation} Substituting the above formula into the mutual information, we get \begin{equation} I(A,B)=\lim_{n\to1}\frac{n}{2^{4\Delta+1}(n-1)}\sum_{j=1}^{n-1}\sin^{-4\Delta}\big(\frac{j\pi}{n}\big) z^{2\Delta}\,, \end{equation} where we have set $\theta_j=2\pi j$. Using the identity, valid to leading order in $(n-1)$ after the analytic continuation, \begin{equation} \sum_{j=1}^{n-1}\sin^{-\Delta}\frac{j\pi}{n}=(n-1)\frac{\sqrt{\pi}}{2}\frac{\Gamma(\frac{\Delta}{2}+1)}{\Gamma(\frac{\Delta}{2}+\frac{3}{2})}+O\big((n-1)^2\big)\,, \end{equation} we immediately arrive at the same answer (\ref{smi}). The essence of the $1/n$ prescription is that the Green's function in a conical geometry can be approximated by the Green's function in flat spacetime, order by order in $(n-1)$. At the leading order in $(n-1)$, the Green's function is directly related to the one in flat spacetime. For the bilinear operator, its one-point function in the conical geometry is well approximated by the two-point function of single operators. This suggests that at the leading order in $(n-1)$ the operator $\mathcal{O}$ can be treated as an operator in a free CFT, with interactions neglected. Actually, if one naively takes the operator as a generalized free field, one can get the above result by using the method of images, which is applicable only in a free theory. In other words, to the leading order the relevant operators can be taken as the ones in a generalized free theory. To compare with the bulk computation in the next section, we list the other contributions from the operators in the replicated theory composed of the scalar operator in the mother CFT\footnote{The detailed computation of the coefficients of the bilinear operators constructed from the scalar, vector, and tensor type operators of the mother CFT can be found in \cite{CCHL2017}.}. Besides the bilinear one and the spin-1 one discussed before, there are other types of operators.
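The analytic continuation behind this identity can be tested in the exactly solvable case $\Delta=2$, where the finite sum has the elementary closed form $\sum_{j=1}^{n-1}\csc^2(\pi j/n)=(n^2-1)/3$, so that the coefficient of $(n-1)$ is $\lim_{n\to1}(n+1)/3=2/3$, in agreement with $\frac{\sqrt{\pi}}{2}\Gamma(2)/\Gamma(\frac{5}{2})=2/3$. A minimal numerical check (our addition, assuming NumPy and SciPy):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

# closed form of the sum for exponent Delta = 2
for n in (2, 3, 5, 10):
    j = np.arange(1, n)
    s = np.sum(np.sin(np.pi * j / n) ** (-2.0))
    assert np.isclose(s, (n**2 - 1) / 3.0)

# continuation coefficient from the identity, for Delta = 2
print(np.sqrt(np.pi) / 2 * gamma(2.0) / gamma(2.5), 2.0 / 3.0)
\end{verbatim}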
The next one is the spin-2 operator, defined by \begin{equation} O^{(s)(j_1j_2)}_{\mu\nu}=\frac{1}{2}\mathcal{P}^{\alpha\beta}_{\mu\nu}(\partial_{\alpha}\mathcal{O}^{(j_1)}\partial_{\beta}\mathcal{O}^{(j_2)} -\frac{\Delta}{\Delta+1}\mathcal{O}^{(j_1)}\partial_{\alpha}\partial_{\beta}\mathcal{O}^{(j_2)}+(j_1\leftrightarrow j_2)), \label{tscalar} \end{equation} where \begin{equation} \mathcal{P}^{\alpha\beta}_{\mu\nu}=\frac{1}{2}\left(\delta_{\mu}^{\alpha}\delta_{\nu}^{\beta}+\delta_{\nu}^{\alpha}\delta_{\mu}^{\beta}\right)-\frac{1}{d}\delta_{\mu\nu}\delta^{\alpha\beta} \end{equation} is the operator projecting a tensor onto its symmetric and traceless part. Its coefficient in the mutual information is \cite{CCHL2017} \begin{equation} b^{(s)}_{2\Delta+2,2}=\frac{(d-1)\sqrt{\pi}(2+4\Delta+3\Delta^2)\Gamma[2\Delta+3]}{d(2\Delta+1)^22^{4\Delta+5}\Gamma[2\Delta+\frac{7}{2}]}.\label{b2dp22} \end{equation} \subsection{Vector type operator} If the operator of the lowest dimension $\Delta$ in the mother CFT is of vector type $J^\mu$, then the bilinear operators giving the leading contribution to the mutual information could be of the following forms \begin{equation} O^{(v)(j_1j_2)}_{\mu\nu}=\mathcal{P}^{\alpha\beta}_{\mu\nu}(J_{\alpha}^{(j_1)}J_{\beta}^{(j_2)}),\hspace{3ex} O^{(v)(j_1j_2)}=J_{\mu}^{(j_1)}J^{(j_2)\mu}. \end{equation} The superscript $(v)$ stands for operators constructed from vector type operators. Their coefficients in the expansion of the mutual information are respectively \cite{CCHL2017} \setlength\arraycolsep{2pt} \begin{eqnarray} b^{(v)}_{2\Delta,0}&=&\frac{(d-2)^2}{d}\frac{\sqrt{\pi}\Gamma[2\Delta+1]}{4^{2\Delta+1}\Gamma[2\Delta+\frac{3}{2}]},\label{vector0}\\ b^{(v)}_{2\Delta,2}&=&\frac{(d-1)}{d}\frac{\sqrt{\pi}\Gamma[2\Delta+1]}{4^{2\Delta}\Gamma[2\Delta+\frac{3}{2}]}.\label{vector2} \end{eqnarray} \subsection{Tensor type operator} The construction can be generalized to other types of tensor operators. Here we only consider the symmetric spin-2 operator. One typical example of this type is the stress tensor, which satisfies the conservation law. We denote the spin-2 tensor as $T_{\mu\nu}$ but do not require it to be the stress tensor. Suppose the spin-2 operator is the operator of the lowest dimension $\Delta$ in the mother CFT. Its bilinear form can be decomposed into six classes, among which only three have a nonvanishing contribution to the mutual information. They are of the following forms, respectively, \setlength\arraycolsep{2pt} \begin{eqnarray} &&O^{(t)(j_1j_2)}=T_{\mu\nu}^{(j_1)}T^{(j_2)\mu\nu},\nonumber\\ &&O^{(t)(j_1j_2)}_{\mu\nu}=\mathcal{P}^{\alpha\beta}_{\mu\nu}(T_{\alpha\gamma}^{(j_1)}T_{\beta}^{(j_2)\gamma}),\nonumber\\ &&O^{(t)(j_1j_2)}_{\mu\nu\rho\sigma}=\mathcal{P}^{\alpha\beta\gamma\delta}_{\mu\nu\rho\sigma}(T_{\alpha\gamma}^{(j_1)}T_{\beta\delta}^{(j_2)}), \end{eqnarray} where the superscript $(t)$ stands for operators constructed from tensor type operators. $\mathcal{P}^{\alpha\beta}_{\mu\nu}$ is the projection operator defined in (\ref{tscalar}) and \begin{equation} \mathcal{P}_{\mu\nu\rho\sigma}^{\alpha\beta\gamma\delta}= \frac{1}{24}\delta_{(\mu}^{\alpha}\delta_{\nu}^{\beta}\delta_{\rho}^{\gamma}\delta_{\sigma)}^{\delta}-\frac{1}{12(d+4)}\delta_{(\mu\nu}\delta^{(\alpha\beta}\delta_{\rho}^{\gamma}\delta_{\sigma)}^{\delta)}+\frac{1}{3(d+2)(d+4)}\delta_{(\mu\nu}\delta_{\rho\sigma)}\delta^{(\alpha\beta}\delta^{\gamma\delta)}.
\end{equation} Their coefficients in the conformal block expansion of the mutual information are respectively \cite{CCHL2017} \setlength\arraycolsep{2pt} \begin{eqnarray} b^{(t)}_{2\Delta,4}&=&\frac{\sqrt{\pi}(d^2-1)\Gamma[2\Delta+1]}{(8 + 6 d + d^2)2^{4\Delta-2}\Gamma[\frac{3}{2} + 2 \Delta]},\label{tensor4}\\ b^{(t)}_{2\Delta,2}&=&\frac{\sqrt{\pi}(d-2)(d-1)\Gamma[2\Delta+1]}{(d+4)2^{4\Delta}\Gamma[\frac{3}{2} + 2 \Delta]},\label{tensor2}\\ b^{(t)}_{2\Delta,0}&=&\frac{\sqrt{\pi}(d-2)^2(d-1)\Gamma[2\Delta+1]}{(d+2)2^{4\Delta+3}\Gamma[\frac{3}{2} + 2 \Delta]}.\label{tensor0} \end{eqnarray} \section{Bulk mutual information} In \cite{FLM2013}, it was argued that quantum corrections to the holographic entanglement entropy are essentially given by the bulk entanglement entropy between the subregion enclosed by the RT surface and its complement. We refer to this as the FLM proposal. It is hard to test the FLM proposal, since the bulk computations of the entanglement entropy are in general very difficult. Fortunately, according to the FLM proposal, in the long distance regime the MI of two disjoint boundary subregions equals the bulk MI between the corresponding two bulk subregions surrounded by the RT surfaces and the boundary, as shown in Fig. \ref{fig:HMIDigram}. In particular, the bulk MI for two hemispheres can be analytically computed by adopting the OPE technique. This was first done in \cite{AF2015} for a free scalar field at the leading order in the large distance limit. The results support the FLM proposal. In this section, we first extend the study of a free scalar field to the next-to-leading order and the next-to-next-to-leading order. This is nontrivial, since we need to carefully construct the gravity duals of the primary operators at different replicas of the boundary CFT. We further calculate the bulk MI coming from the gauge boson, the graviton, and the fermion. In all these cases, our bulk results match well with the CFT results reported in \cite{CCHL2017} and hence verify the FLM proposal in a rather careful manner. \begin{figure}[h] \begin{centering} \includegraphics[scale=0.7]{SphereDiagram} \par\end{centering} \caption{\label{fig:HMIDigram} The mutual information between two boundary regions $A$ and $B$ can be computed holographically by exchanging the bulk fields. Here the distance between $A$ and $B$ is much larger than their radius. $A_b$ and $B_b$ are bulk regions enclosed by the corresponding minimal surfaces.} \end{figure} When adopting the OPE method in the bulk, we immediately encounter a difficult problem. The gravity dual of the R\'enyi entanglement entropy (a modular version) is one quarter of the area of a cosmic brane with the tension \cite{ML2013,Dong2016} \begin{equation} T_n=\frac{n-1}{4nG_N}\,, \end{equation} which is anchored on the boundary. If $n\neq 1$, the cosmic brane is heavy and would change the spacetime. Consequently, one has to solve the equations of motion of gravity coupled with the cosmic brane. Technically speaking, this is very difficult to handle, even numerically\footnote{In the semi-classical AdS$_3$/CFT$_2$, one can extend the Schottky uniformization into the bulk to find the gravitational configuration dual to the higher genus Riemann surface. The gravitational configurations are not the minimal surfaces \cite{Krasnov:2000zq,Hartman:2013mia,Faulkner:2013yia}.}. Fortunately, our goal is to compute the bulk MI rather than the general R\'enyi MI. This only requires us to consider a sufficiently light cosmic brane with $n$ close to unity.
In this case, the cosmic brane becomes effectively tensionless, such that we can work in the probe limit and ignore the backreaction. As a result, we can still treat a spherical twist operator as a hemisphere in the bulk, ignoring the deformation. In other words, the holographic description of the spherical twist operator is a nonlocal hemisphere in the bulk. Moreover, each hemisphere can be described by the operator product expansion. This is similar to the holographic description of the Wilson loop or surface operator and its OPE \cite{Berenstein:1998ij,Chen:2007zzr}. As we argued above, the holographic configuration corresponding to the sphere is a hemisphere. This is only true when we take the $n\to 1$ limit, which suggests that the dual configuration is an RT surface. However, when we apply the replica trick, the boundary sphere becomes a conical space, such that the dual configuration should be very different. Nevertheless, as we are going to take the $n\to 1$ limit, we expect that the bulk configuration is well approximated by the hemisphere with the transverse direction being a conical space. Simply speaking, the bulk configuration is approximated by a replicated geometry as well. Such a holographic twist operator can be expanded as \begin{equation} \mathcal T^{(n)}_A \sim 1+\sum_{j=0}^{n-1}C_{(j)}^AO^{(j)}(x_A)+\sum_{j\neq j'}C_{(jj')}^AO^{(jj')}(x_{A})+...\,, \end{equation} where the normalization factor has been dropped and the $C$'s are the expansion coefficients. The operator $O^{(j)}$ stands for the operator in one replica, while the operator $O^{(jj')}$ stands for the operator composed of the operators in two different replicas, and so on. Note that these operators may have nonzero spin. The expansion coefficients can be read off from the one-point functions of the operators in the presence of the twist operator. Actually, the one-point function of $O^{(j)}$ vanishes, and the first non-trivial contribution comes from the bilinear operator constructed from the fields in two different replicas. For example, from the scalar field $\phi$ dual to the scalar operator, there is a bulk bilinear operator $O^{(s)(jj')}=\phi^{(j)}\phi^{(j')}$. Its one-point OPE takes the form \begin{equation} \langle O^{(s)(jj')}(x)\rangle_{\mathcal{C}_A^{(n)}}=C_{(jj')}^A\langle\phi^{(j)}(x)\phi^{(j)}(x_A)\rangle_M\langle\phi^{(j')}(x)\phi^{(j')}(x_A)\rangle_M+\cdots\,. \end{equation} Note that in the transverse direction, we still have the identification $\theta \sim \theta +2\pi n$. Consequently, we can apply the $1/n$ prescription in the bulk computation as well. In other words, in the $n \to 1$ limit, the fields can always be taken as free fields, and the possible interactions can be safely ignored. Before doing the bulk calculations in detail, let us first explain our conventions. In the following we use capital letters $M,N,\cdots$ to denote the bulk indices, taking values from $0$ to $d$. The bulk coordinates are denoted by $r^M=(t_E\,,x^i,\, z)$ where $i=1,\dots,(d-1)$. We use $r^0$ for the Euclidean time $t_E$ and $r^d$ for the radial coordinate $z$. We work in the Poincar\'e coordinates for the bulk metric and set the AdS radius to unity. Since now $z$ denotes the radial coordinate in the bulk, the cross ratio will be denoted by $z_{cr}$.
For any two points $r=(t_E,x^i,z),r'=(t'_E,x'^i,z')$ in the $AdS_D$ ($D={d+1}$) vacuum, one can always connect them by a geodesic whose length is \begin{equation} \ell(r\,,r')=\log{\Big( \fft{1+\sqrt{1-\xi^2}}{\xi} \Big)}\,, \end{equation} where \begin{equation} \xi(r\,,r')=\fft{2zz'}{z^2+z'^2+(t_E-t'_E)^2+(\vec{x}-\vec{x}')^2}\,, \end{equation} is a biscalar. In many cases, it is convenient to introduce the chordal distance $u(r\,,r')$ \begin{equation} u\equiv \cosh{\ell}-1=\xi^{-1}-1 \,. \end{equation} We denote the bulk covariant derivative as $D_{M}$ and \begin{equation} \partial_M\equiv\fft{\partial}{\partial r^M}\,,\quad \partial_{M'}\equiv\fft{\partial}{\partial {r'}^{M}}\,. \end{equation} Since the distance between the two hemispheres is much larger than their radius, we have \begin{equation} \xi\simeq \fft{2zz'}{|x-x'|^2}\rightarrow 0\,, \end{equation} and hence \begin{equation} u\simeq 1/\xi\rightarrow \infty. \end{equation} This is a useful relation throughout this section. \subsection{Scalar field} As a warm-up, let us first reproduce the leading order MI from a free scalar field reported in \cite{AF2015}. A free scalar with mass squared $m^2=\Delta(\Delta-d)$ is dual to a scalar primary operator with dimension $\Delta$ in the boundary CFT. Its bulk-to-bulk propagator is \begin{equation}\label{scalarpropagator} \langle \phi(r)\phi(r') \rangle=C_\Delta \Big(\fft{\xi}{2} \Big)^\Delta F\Big(\fft{\Delta}{2}\,,\fft{\Delta+1}{2}\,,\nu+1;\xi^2\Big) \,, \end{equation} where $\nu=\sqrt{m^2+d^2/4}$ and $C_\Delta$ is a normalization constant. For the reference points $r_A\,,r_B$, in the large distance limit, we have $z_A \,,z_B\ll |x_A-x_B| $. Then \begin{equation} \langle \phi(r_A)\phi(r_B) \rangle\simeq C_\Delta \fft{z^\Delta_A z^\Delta_B}{|x_A-x_B|^{2\Delta}} \,. \end{equation} Thus, we find that \begin{equation} I(A,B)|_{s=0}=\fft{1}{|x_A-x_B|^{4\Delta}}\lim_{n\rightarrow 1} \fft{1}{2(n-1)}\sum_{j\neq j'}\widetilde{C}^A_{(jj')}\widetilde{C}^B_{(jj')} \,,\label{MIscalar} \end{equation} where $ \widetilde{C}^A_{(jj')}\equiv C_\Delta z^{2\Delta}_A C^A_{(jj')}.$ To calculate the OPE coefficients, we perform a coordinate transformation similar to (\ref{ct}) \begin{equation} r'^M=\fft{r^M+ n^M R_A}{\Omega}-\fft{n^M}{2R_A} \,,\label{bct} \end{equation} where $n^M=(0,1,0,\dots,0)$ is a $D$-dimensional unit vector, and \begin{equation} \Omega=R^2_A+2R_A\, x_1+z^2+t_E^2+\vec{x}^2\,. \end{equation} This transformation preserves the AdS metric. It should be emphasized that, unlike (\ref{ct}), this is no longer a conformal transformation. Under this transformation the original conifold geometry $\mathcal{C}_A^{(n)}$ with spherical conical singularity ($t_{\mathrm{E}}=0\,,z^2+\delta_{ij}x^i x^j=R_A^2$) is mapped to a new conifold geometry $\mathcal{C}_A^{'(n)}$ with the singularity located at the plane ($t'_{\mathrm{E}}=0\,,x'^1=0$). The infinity is mapped to a finite point \begin{equation}\label{rfp} r_{\infty}=(z_\infty\,,t_{\mathrm{E}}=0\,,x^i_\infty)\hspace{2ex}\longrightarrow \hspace{2ex}r'_{\infty}=(z'=\epsilon\,,t'_{\mathrm{E}}=0\,,x'^i=-\fft{n^i}{2R_A}) \,, \end{equation} where the large separation limit corresponds to $\epsilon\rightarrow 0$. To further simplify our calculations, we take the reference point to be $r_A=(z_A\,,t_E=0\,,x^i=0)$, which is mapped to \begin{equation}\label{rap} r_A=(z_A\,,t_E=0\,,x^i=0) \hspace{2ex}\longrightarrow \hspace{2ex}r'_A=\Big(z'_A=\fft{z_A}{R^2_A+z^2_A}\,,t'_E=0\,,x'^i=\fft{n^i}{2R_A}\fft{R^2_A-z^2_A}{R^2_A+z^2_A} \Big) \,.
\end{equation} For $r_\infty$, the conformal factor is $\Omega(r_\infty)\simeq x_\infty^2$, while for $r_A$, $\Omega(r_A)=R_A^2+z_A^2$. Under the coordinate transformation (\ref{bct}), at the leading order a bulk spin-$s$ operator transforms as \begin{equation}\label{eq:trans} \langle O_{M_1 M_2\dots M_s}(r_\infty)\rangle_{\mathcal{C}_A^{(n)}}\approx \Omega^{-s}(r_\infty)I^{N_1}_{M_1}(r_\infty)I^{N_2}_{M_2}(r_\infty)\dots I^{N_s}_{M_s}(r_\infty)\langle O_{N_1 N_2\dots N_s}(r'_\infty)\rangle_{\mathcal{C}_A^{'(n)}} \,. \end{equation} In fact, due to the rotational symmetry, we only need to consider the time-$\dots$-time component \cite{CCHL2017} \begin{eqnarray}\label{bulkspin} &&\langle O_{00\dots 0}(r_\infty)\rangle\simeq x_\infty^{-2s}\,\langle O_{00\dots 0}(r'_\infty)\rangle\,,\nonumber\\ &&\langle O_{00\dots 0}(r_A)\rangle= (R_A^2+z_A^2)^{-s}\,\langle O_{00\dots 0}(r'_A)\rangle\,. \end{eqnarray} With all these results in hand, we are ready to compute the OPE coefficients. For the scalar field, we find \begin{equation} C_{(jj')}^A =\Big(C_\Delta\epsilon^{\Delta}z_A^{\Delta}\Big)^{-2}\langle O^{(s)(jj')}(r'_\infty)\rangle_{\mathcal{C}_A^{'(n)}}\,. \end{equation} Here the one-point function on the right-hand side can be computed using the $1/n$ prescription \begin{equation} \left\langle O^{(s)(jj')}(r'_{\infty})\right\rangle _{\mathcal{C}_{A}^{'(n)}}=\langle \phi^{(j)}(r'_\infty)\phi^{(j')}(r'_\infty)\rangle_{\mathcal{C}_A^{'(n)}}= \left(\frac{2C_{\Delta}}{\nu}\right)\epsilon^{2\Delta}R_{A}^{2\Delta}\sin^{-2\Delta}\frac{\theta_{jj'}}{2n}\,, \end{equation} where $\theta_{jj'}\equiv\theta_j-\theta_{j'}$. So we get \begin{equation} C_{(jj')}^{A}=\frac{\nu R_{A}^{2\Delta}}{2C_{\Delta}(z_{A})^{2\Delta}}\sin^{-2\Delta}\frac{\theta_{jj'}}{2n}\,. \end{equation} Substituting the above results into (\ref{MIscalar}), we finally obtain \begin{equation} I(A,B)|_{s=0} = \frac{\sqrt{\pi}}{4^{2\Delta+1}}\frac{\Gamma(2\Delta+1)}{\Gamma(2\Delta+\frac{3}{2})}z_{cr}^{2\Delta} \end{equation} where $z_{cr}=\frac{4R_{A}R_{B}}{|x_{A}-x_{B}|{}^{2}}$. This exactly matches the boundary result (\ref{coeffscalar}) of a primary scalar operator with the scaling dimension $\Delta$, as expected. We continue to construct a spin-1 operator from the bulk scalar fields residing in different replicas. We propose that the vector operator \begin{equation} O^{(s)(jj')}_M\equiv \phi^{(j)}\partial_M \phi^{(j')}-(j\leftrightarrow j')\,, \end{equation} is dual to the spin-1 operator (\ref{Jscalar}) with the scaling dimension $2\Delta+1$ in the boundary theory. A straightforward calculation shows that the time-time component of its two-point function in the large distance limit is \begin{equation} \langle O^{(s)(jj')}_0(r_A)O^{(s)(jj')}_{0}(r_B) \rangle\simeq 4\Delta \fft{C^2_\Delta z_A^{2\Delta}z_B^{2\Delta} }{|x_A-x_B|^{2(2\Delta+1)}}\,, \end{equation} and \begin{equation} \langle O^{(s)(jj')}_0(r'_\infty)O^{(s)(jj')}_{0}(r'_A) \rangle=4\Delta C^2_\Delta \epsilon^{2\Delta} (R^2_A+z_A^2) \,. \end{equation} Using the coordinate transformation (\ref{eq:trans}) and the $1/n$ prescription, we read off \begin{equation} C_{(0j)}^{(A)0}=\frac{\nu R_{A}^{2\Delta+1}}{2C_{\Delta}z_{A}^{2\Delta}}\frac{1}{n}\cos\frac{\theta_{j}}{2n}\sin^{-2\Delta-1}\frac{\theta_{j}}{2n}. \end{equation} It follows that at the leading order the mutual information from the spin-1 field is given by \begin{equation} I(A,B)|_{s=1}= -\frac{\Delta}{2}\frac{\sqrt{\pi}}{4^{2\Delta+1}}\frac{\Gamma(1+2\Delta)}{\Gamma(\frac{5}{2}+2\Delta)}z_{cr}^{2\Delta+1}.
\end{equation} This exactly matches the boundary result (\ref{coeffJs}). Note that it is negative. Next, we construct a bulk spin-2 operator from the original scalar field as \begin{equation}\label{bulkspin2} O^{(s)(jj')}_{M N}\equiv \fft 12 P^{E F}_{M N}\Big( D_E \phi^{(j)}D_F \phi^{(j')}-\fft{\Delta}{\Delta+1} \phi^{(j)}D_E D_F \phi^{(j')}+(j\leftrightarrow j') \Big) \,, \end{equation} where the bulk projector is defined to be \begin{equation}\label{P2} P_{M N}^{E F}\equiv h^{E}_{(M}h^{F}_{N)}-\fft 1d\, h_{M N}h^{E F}\,,\quad h_{M N}=g_{M N}-n_M n_N. \end{equation} Here $n^M=(0\,,\cdots\,,0\,,\sqrt{g^{zz}})$ is the unit normal vector of the time-like hypersurface orthogonal to the radial direction in the Poincar\'e coordinates. $h_{M N}$ is the induced metric on the constant-$z$ hypersurface. Note that the projector is symmetric and traceless. The above spin-2 field is dual to the boundary spin-2 operator (\ref{tscalar}) with the scaling dimension $2(\Delta+1)$. However, there is something subtle in the above definition that should be clarified. To show this, we recall the definition of the boundary spin-2 operator \cite{CCHL2017} \begin{equation} O^{(s)(jj')}_{\mu\nu}=\fft 12 \mathcal{P}^{\alpha\beta}_{\mu\nu}\Big(\partial_\alpha \mathcal{O}^{(j)}\partial_\beta \mathcal{O}^{(j')}-\fft{\Delta}{\Delta+1}\mathcal{O}^{(j)}\partial_\alpha\partial_\beta \mathcal{O}^{(j')}+(j\leftrightarrow j') \Big) \,,\end{equation} where the boundary projector is defined in (\ref{tscalar}) using the Euclidean metric. Naively, one may expect that the gravity duals of the higher spin operators in the boundary CFT can be constructed via a minimal replacement rule, i.e., by replacing $\mathcal{O}\rightarrow \phi\,,\partial_\mu\rightarrow D_M\,,\delta_{\mu\nu}\rightarrow g_{MN}$ in the boundary operators, one would obtain the dual bulk fields. In this case, the bulk spin-2 projector would be \begin{equation} \widetilde{P}_{MN}^{EF}=g^{E}_{(M}g^{F}_{N)}-\fft 1 D\, g_{M N}g^{E F}\,,\end{equation} where a tilde is used to distinguish it from the projector (\ref{P2}). However, this is not correct and cannot produce the right answers. The correct bulk projector should be (\ref{P2}). It looks unnatural at first glance, since the projector is defined on a time-like hypersurface instead of the AdS bulk. Nonetheless, we have a simple interpretation of how it works. The projector plays a two-fold role. First, it maps a bulk operator onto the time-like hypersurface $z=\mathrm{const}$, suppressing all the radial components. Second, the operators on the hypersurface are projected to be symmetric and traceless. In this sense, the bulk spin-2 field defined in (\ref{bulkspin2}) can be viewed as living on the curved $d$-dimensional sub-manifold with $z=\mathrm{const}$, which can be obtained by extending the boundary (and its field theory content) into the deep bulk region. On the other hand, close to the boundary, one finds $\phi\rightarrow z^\Delta \mathcal{O}\,,h_{MN}\rightarrow z^{-2}\delta_{\mu\nu}$, such that $P_{MN}^{EF}\rightarrow \mathcal{P}_{\mu\nu}^{\alpha\beta}$ and $O^{(s)(jj')}_{MN}\rightarrow z^{2\Delta}O^{(s)(jj')}_{\mu\nu}$. Note that the prefactor in the spin-2 operator is $z^{2\Delta}$ instead of $z^{2\Delta+2}$. This should be so because the bulk spin-2 field has a scaling dimension $2\Delta$. Now it becomes clear that our bulk spin-2 field defined in (\ref{bulkspin2}) is indeed dual to the spin-2 operator in the boundary CFT.
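The stated properties of the projector (\ref{P2}) can also be checked numerically at a bulk point; the following sketch (our addition, with the arbitrary sample values $d=4$, $z=0.7$) verifies that it is traceless on both index pairs:
\begin{verbatim}
import numpy as np

# Check of the bulk projector (P2) at a bulk point in Poincare
# coordinates r^M = (t_E, x^i, z); d = 4, z = 0.7 are arbitrary.
d, z = 4, 0.7
D = d + 1
g = np.eye(D) / z**2                # AdS metric g_{MN}, unit radius
ginv = np.eye(D) * z**2
n_up = np.zeros(D); n_up[-1] = z    # unit normal n^M along z
n_dn = g @ n_up                     # n_M
h = g - np.outer(n_dn, n_dn)        # induced metric h_{MN}
h_mixed = ginv @ h                  # h^E_M
h_up = ginv @ h @ ginv              # h^{MN}

# P^{EF}_{MN} = h^E_(M h^F_N) - (1/d) h_{MN} h^{EF}
P = 0.5 * (np.einsum('EM,FN->EFMN', h_mixed, h_mixed)
           + np.einsum('EN,FM->EFMN', h_mixed, h_mixed)) \
    - np.einsum('MN,EF->EFMN', h, h_up) / d

# traceless on both pairs of indices, as stated in the text
print(np.allclose(np.einsum('MN,EFMN->EF', h_up, P), 0))
print(np.allclose(np.einsum('EF,EFMN->MN', h, P), 0))
\end{verbatim}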
Following the above discussion, it is easy to construct the gravity duals for the general higher spin operators in the boundary theory that are carefully studied in \cite{CCHL2017}. The remaining calculations are straightforward. In the large separation limit, the relevant two-point function is \begin{equation} \langle O^{(s)(jj')}_{00}(r_A) O^{(s)(jj')}_{00}(r_B) \rangle\simeq\fft{4(d-1)\Delta^2(2\Delta+1)}{d(\Delta+1)}\fft{C^2_\Delta z_A^{2\Delta}z^{2\Delta}_B}{|x_A-x_B|^{4(\Delta+1)}}\,, \end{equation} and \begin{equation} \langle O^{(s)(jj')}_{00}(r'_\infty) O^{(s)(jj')}_{00}(r'_A) \rangle=C^2_\Delta\, \epsilon^{2\Delta} z_A^{2\Delta}(R_A^2+z^2_A)^2\,\fft{4(d-1)\Delta^2(2\Delta+1)}{d(\Delta+1)}\,. \end{equation} The corresponding OPE coefficients are given by \begin{equation}\label{eq:C00jj} C^{(A)00}_{(jj')}=(R_A^2+z_A^2)^2\fft{\langle O^{(s)(jj')}_{00}(r'_\infty) \rangle_{C^{(n)}_{A'}}}{\langle O^{(s)(jj')}_{00}(r'_\infty) O^{(s)(jj')}_{00}(r'_A) \rangle}\,. \end{equation} According to the $1/n$ prescription, we find \begin{align} C_{(0j)}^{(A)00}=-\frac{\nu z_{A}^{2\Delta}R_{A}^{2\Delta+2}}{4C_{\Delta}\Delta(1+2\Delta)}\frac{(1+n^{2}+2\Delta)}{n^{2}}\left((1+2\Delta)\sin^{-2\Delta-2}\frac{\theta_{j}}{2n}-2\Delta\sin^{-2\Delta}\frac{\theta_{j}}{2n}\right). \end{align} Note that we must work in the $(r,\theta)$ coordinate system to derive the one-point function in the replicated geometry. After some simple calculations, we finally get \begin{equation} I(A,B)|_{s=2}=\frac{(d-1)}{d}\frac{2+4\Delta+3\Delta^{2}}{(1+2\Delta)^{2}}\frac{2\sqrt{\pi}}{4^{2\Delta+3}}\frac{\Gamma(2\Delta+3)}{\Gamma(2\Delta+\frac{7}{2})}z_{cr}^{2\Delta+2}. \end{equation} It matches exactly the boundary result (\ref{b2dp22}) of the spin-2 operator. \subsection{Gauge Bosons} Now we generalize the discussion to the massless gauge fields in the bulk, which are dual to the conserved currents in the boundary. For the vector-type operators in the boundary CFT, the bulk dual should be vector fields. In general, the vector field is massive and there is no gauge symmetry. In this subsection, we focus on the case that the bulk field is a gauge field, and leave the discussion of the massive case to Appendix A. The gauge field is interesting as it often appears in the spectrum of AdS supergravity. Moreover, the computation of the mutual information due to the exchange of the gauge field presents a novel feature, which we would like to report. For a U(1) gauge boson, the bulk-to-bulk propagator is given by \cite{Gauge1998} \begin{eqnarray}\label{gauge boson} G_{M N}(r,r')&=&-\partial_M\partial_{N'} u F(u)+\partial_ M \partial_ {N'} S(u)\nonumber\\ &=&\partial_ M u\partial_{N'} u S''(u)+\partial_ M \partial_{N'} u (S'(u)-F(u)), \end{eqnarray} where the function $F(u)$ is \begin{equation} F(u)=\frac{\Gamma \left(\frac{d-1}{2}\right)}{4\pi ^{(d+1)/2}}\frac{1}{\big(u(u+2)\big)^{(d-1)/2}},\end{equation} and $S(u)$ is a gauge artifact. In the Feynman gauge, it is determined by \cite{Gauge1998} \begin{equation} u(u+2)S'''+(d+3)(u+1)S''+(d+1)S'=2F.\end{equation} The explicit expression for $S(u)$ is complicated, but we only need its asymptotic behavior, since we are considering the MI between two widely separated regions. In the large separation limit, we have \begin{eqnarray} F(u)&\simeq&\frac{\Gamma \left(\frac{d-1}{2}\right)}{4\pi ^{(d+1)/2}}\frac{1}{u^{d-1}},\hspace{3ex} S'(u)\simeq \frac{\Gamma \left(\frac{d-1}{2}\right)}{4\pi ^{(d+1)/2}}\frac{1}{(2-d) u^{d-1}}.
\end{eqnarray} In general, the gauge part gives vanishing contributions when integrated against conserved currents, for example in the Witten diagrams considered in \cite{Gauge1998}. This is because the surface terms in the integration by parts vanish. However, in our case, the situation is quite different owing to the presence of additional boundaries: the two separated entangling surfaces. We find that the gauge part $S(u)$ of the propagator, besides the usual physical part $F(u)$, also contributes at the leading order of the MI. This seems to conflict with our usual understanding, since the MI should clearly be gauge independent. To clarify this, let us first present our results in detail. To compare with the boundary results, we construct two kinds of operators, with spin 0 and spin 2, respectively, \begin{align}\label{O0O1} O^{(v)(jj')}= & h^{MN}A_{M}^{(j)}A_{N}^{(j')},\hspace{3ex} O_{AB}^{(v)(jj')}= P_{AB}^{MN}A_{M}^{(j)}A_{N}^{(j')}, \end{align} where the bulk projector $P_{AB}^{MN}$ is defined in (\ref{P2}) and the superscript $(v)$ stands for operators constructed from the vector gauge boson. As will be shown shortly, the above two operators are dual to the boundary operators with the same spins constructed from a current operator with the scaling dimension $\Delta=d-1$. For the spin-0 operator, the calculation is similar to the discussion for the scalar operators, except that we now need a different propagator. Using the propagator for the gauge boson (\ref{gauge boson}), we find at the leading order in the large distance limit \begin{align} \left\langle O^{(v)(jj')}(r_{A})O^{(v)(jj')}(r_{B})\right\rangle _{M}\simeq \frac{\Gamma \left(\frac{d-1}{2}\right)^2}{16 \pi ^{d+1}}\frac{d (d-1)^2}{(d-2)^2}\frac{\left(2 z_A z_B\right){}^{2 (d-1)}}{|x_A-x_B|^{4 (d-1)}}. \end{align} It is worth emphasizing that this result is derived from the total bulk-to-bulk propagator of the gauge boson, including the gauge part. Applying the $1/n$ prescription, we get the OPE coefficient \begin{equation} C_{(0j)}^{A}=\frac{4\pi^{(1+d)/2}(d-2)}{\Gamma\left(\frac{d-1}{2}\right)d(d-1)2^{d-1}}z_{A}^{2-2d}R_{A}^{2d-2}\frac{(d-1)n^{2}-1}{n^{2}}\sin^{2-2d}\left(\frac{\theta_{j}}{2n}\right). \end{equation} The bulk MI turns out to be \begin{equation}\label{gaugespin0} I(A, B)|_{s=0}=\frac{(d-2)^2}{4^{2 d-1}d}\frac{\sqrt{\pi } \Gamma (2 d-1)}{\Gamma \left(2 d-\frac{1}{2}\right)}z_{cr}^{2(d-1)}. \end{equation} For the spin-2 operator, its contribution to the MI at the leading order is \begin{equation} I(A,B)|_{s=2}=\lim_{n\to1}\frac{n}{2 (n-1)}\sum _{j=1}^{n-1} C_{(0j)}^{(A)KL} C_{(0 j)}^{(B)MN} \left\langle O_{MN}^{(v)(0j)}\left(r_A\right) O_{KL}^{(v)(0j)}\left(r_B\right)\right\rangle. \end{equation} As discussed before, for the higher spin operators only the time-time component has a non-trivial contribution to the MI at the leading order. Hence, without loss of generality, we can drop the other components in the following. In the large separation limit, we find \begin{equation} \left\langle O_{00}^{(v)(jj')}\left(r_A\right) O_{00}^{(v)(jj')}\left(r_B\right)\right\rangle \simeq\frac{\Gamma^2(\frac{d-1}{2})}{4\pi ^{d+1}}\frac{(d-1)^3}{d(d-2)^2}\frac{(2z_Az_B)^{2d-4}}{|x_A-x_B|^{4d-4}}. \end{equation} Both the physical part $F(u)$ and the gauge part $S(u)$ of the propagator contribute. The OPE coefficient is \begin{equation} C_{(0j)}^{(A)00}= \frac{\pi^{(1+d)/2}2^{3-d}(2-d)}{(d-1)\Gamma\left(\frac{d-1}{2}\right)}z_{A}^{4-2d}R_{A}^{2d-2}\frac{(1+n^{2})}{n^{2}}\sin^{2-2d}\left(\frac{\theta_{j}}{2n}\right).
\end{equation} Then taking the limit $n\to 1$, we finally get \begin{equation}\label{gaugespin2} I(A, B)|_{s=2}= \frac{d-1}{d}\frac{\sqrt{\pi}}{4^{2d-2}}\frac{\Gamma(2d-1)}{\Gamma(2d-\frac{1}{2})}z_{cr}^{2(d-1)}. \end{equation} Now we are able to compare our bulk results with the boundary results reported in \cite{CCHL2017}. We find that our results perfectly match (\ref{vector0}) and (\ref{vector2}) for a current operator with scaling dimension $\Delta=d-1$. Indeed, according to the AdS/CFT correspondence, a gauge boson in the bulk is dual to a conserved current in the boundary CFT. In this sense, the above results may be expected from the very start. However, the subtlety is that throughout our calculations, the gauge part of the bulk-to-bulk correlator, associated with the function $S(u)$, makes a non-trivial contribution to the holographic MI for both the spin-0 and spin-2 operators. It is remarkable that, from our discussions, the so-called gauge artifact $S(u)$ always contributes significantly to the bulk mutual information. But this does not mean the result is gauge dependent. In fact, we find that, to the leading order in $u$, the contribution from the gauge part is independent of the gauge parameter. One possible interpretation for the nonvanishing contribution from the gauge part could be that the gauge symmetry is effectively broken around the entangling surfaces, giving rise to an extra physical degree of freedom living on the boundaries. In fact, from the leading order MI for a massive vector field in the bulk (the calculation details are given in Appendix A.1), we find that the results exactly match the boundary results of a current operator with a generic scaling dimension $\Delta$ as well. This supports our argument, since in general a massive vector field has one more degree of freedom than the gauge boson, yet both give the same form of the leading order contribution to the MI. An interesting question is how to understand the breaking of gauge symmetry on the entangling surfaces. We recall the definition of the entanglement entropy (\ref{eq:ee}) for any subsystem $A$. The key ingredient is the reduced density matrix of $A$, which is defined by tracing out the degrees of freedom in its complement $\bar A$ from the total density matrix. However, for gauge theories the Hilbert space of physical states cannot be factorized into a tensor product of the Hilbert spaces of the states localized in the spatial regions $A$ and $\bar A$. In \cite{Buividovich:2008gq}, it was argued that the elementary excitations in gauge theories are electric strings, which are closed loops rather than points in space. Hence, there inevitably exist closed loops that belong to both $A$ and $\bar A$. So the reduced density matrix of $A$ can only be well defined if the Hilbert space of physical states is extended by including the states of electric strings that open on the boundary of $A$. The endpoints of the electric strings on the boundary were previously pure gauge degrees of freedom but now become physical and hence break gauge symmetries, giving rise to extra contributions to the entanglement entropy. We refer the reader to the literature, e.g., \cite{Donnelly:2011hn,Casini:2013rba,Donnelly:2014gva,Donnelly:2016auv}, for more discussion of this issue. It would be of great interest to compute the quantum corrections of gauge fields to the entanglement entropy for a single entangling surface in the AdS/CFT correspondence. We leave this as a direction for future research.
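The claimed match can be verified symbolically; the following minimal check (our addition, assuming SymPy is available) substitutes $\Delta=d-1$ into (\ref{vector0}) and (\ref{vector2}) and compares with (\ref{gaugespin0}) and (\ref{gaugespin2}):
\begin{verbatim}
import sympy as sp

d, Dl = sp.symbols('d Delta', positive=True)

# Boundary coefficients (vector0), (vector2) for a current operator
b0 = (d-2)**2/d * sp.sqrt(sp.pi)*sp.gamma(2*Dl+1) \
     / (4**(2*Dl+1)*sp.gamma(2*Dl+sp.Rational(3,2)))
b2 = (d-1)/d * sp.sqrt(sp.pi)*sp.gamma(2*Dl+1) \
     / (4**(2*Dl)*sp.gamma(2*Dl+sp.Rational(3,2)))

# Bulk results (gaugespin0), (gaugespin2) for the gauge boson
I0 = (d-2)**2/(4**(2*d-1)*d) * sp.sqrt(sp.pi)*sp.gamma(2*d-1) \
     / sp.gamma(2*d-sp.Rational(1,2))
I2 = (d-1)/d * sp.sqrt(sp.pi)/4**(2*d-2) * sp.gamma(2*d-1) \
     / sp.gamma(2*d-sp.Rational(1,2))

# they coincide at Delta = d - 1
print(sp.simplify(b0.subs(Dl, d-1) - I0))   # -> 0
print(sp.simplify(b2.subs(Dl, d-1) - I2))   # -> 0
\end{verbatim}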
\subsection{Gravitons} Now we consider the contribution of the massless graviton, denoted by $\tilde{G}_{MN}$. This is dual to the conserved stress tensor in the boundary theory. For a general spin-2 operator in the boundary, the corresponding field could be a massive graviton, whose leading order contribution to the mutual information is given in Appendix A.2. Here we focus on the massless case. The bulk-to-bulk propagator of the graviton can be written as \begin{equation}\label{eq:gra} \langle \tilde{G}_{MN} (x_A) \tilde{G}_{E'F'}(x_B)\rangle = G_{MN,E'F'}(x_{A},x_{B})=g_{MN}g_{E'F'}T+\sum_{i=1}^{5}G^{(i)}O_{MN,E'F'}^{(i)}, \end{equation} in which the explicit forms of the coefficients $T,G^{(i)}$ and the tensor structures $O_{MN,E'F'}^{(i)}$ in the Landau gauge can be found in \cite{Graviton1999_1,Graviton1999_2}. For completeness, we list them in Appendix B. Actually, we only need the asymptotic behavior of the propagator in the large distance limit. Note that the physical part of the propagator has the form \begin{equation} G_{MN,EF}=(\partial_{M}\partial_{E'}u\partial_{N}\partial_{F'}u+\partial_{M}\partial_{F'}u\partial_{N}\partial_{E'}u)\tilde{G}(u)+g_{MN}g_{E'F'}\tilde{H}(u). \end{equation} The relations between the coefficients $\tilde{G}(u),\tilde{H}(u)$ and $T,G^{(i)}$ can be found in \cite{Graviton1999_1}. Note that, similar to the gauge boson case, the gauge part of the propagator gives significant contributions to the mutual information, such that the bulk result agrees with the boundary result. There are three kinds of bulk operators which give the leading contributions to the mutual information \begin{align} &O^{(t)(jj')}= h^{MN}h^{EF}\tilde{G}_{MN}^{(j)}\tilde{G}_{EF}^{(j')}\label{O21}\,,\\ &O_{IJ}^{(t)(jj')}= P_{IJ}^{AB}h^{CD}\tilde{G}_{AC}^{(j)}\tilde{G}_{BD}^{(j')}\label{O22}\,,\\ &{O}_{ABCD}^{(t)(jj')}= P_{ABCD}^{EFGK}\tilde{G}_{EG}^{(j)}\tilde{G}_{FK}^{(j')}\label{O4}\,, \end{align} in which the superscript $(t)$ stands for the operators constructed from the gravitons. $P_{IJ}^{AB}$ is defined by (\ref{P2}) and \begin{eqnarray} P_{ABCD}^{EFGK}&=&\frac{1}{24}h^{E}_{(A}h^{F}_{B}h^{G}_{C}h^{K}_{D)}-\frac{h_{(AB}h^{(EF}h^G_Ch^{K)}_{D)}}{12(d+4)} +\frac{h_{(AB}h_{CD)}h^{(EF}h^{GK)}}{3(d+2)(d+4)}. \end{eqnarray} For the spin-0 and the spin-2 cases, the calculations are similar to those for the U(1) gauge boson. In the large separation limit, the two-point function of the spin-0 operator is \begin{equation} \left\langle O^{(t)(jj')}(r_{A})O^{(t)(jj')}(r_{B})\right\rangle _{M}\simeq a_0(D)\frac{(2z_{A}z_{B})^{2d}}{|x_{A}-x_{B}|{}^{4d}}\,, \end{equation} in which \begin{align} a_{0}(D)=\frac{2^{2d+3}d^{2}}{(d+1)^{2}(d-1)(d+2)}b(D)^{2}. \end{align} Here the factor $b(D)$ is given in (\ref{eq:bn}) in Appendix B and $D=d+1$. Having the two-point function and using the $1/n$ prescription, we deduce the corresponding OPE coefficient \begin{equation} C_{(0j)}^{A}=\frac{4b(D)d\left(1-(d-1)n^{2}+\frac{(d+1)(d-2)}{2}n^{4}\right)}{a_{0}(D)(d+2)(d+1)(d-1)n^{4}}z_{A}^{-2d}R_{A}^{2d}\sin^{-2d}\frac{\theta_{j}}{2n}. \end{equation} It follows that the leading holographic MI is \begin{equation} I(A, B)\big|_{s=0}=\frac{1}{2^{4 d+3}}\frac{(d-1) (d-2)^2}{d+2}\frac{\sqrt{\pi } \Gamma (2 d+1)}{\Gamma \left(2 d+\frac{3}{2}\right)}z_{cr}^{2d}, \end{equation} which is in exact agreement with (\ref{tensor0}) provided $\Delta=d$.
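This agreement can again be verified symbolically; a minimal script (our addition, assuming SymPy) substitutes $\Delta=d$ into (\ref{tensor0}) and compares with the bulk result above:
\begin{verbatim}
import sympy as sp

d, Dl = sp.symbols('d Delta', positive=True)

# Boundary coefficient (tensor0) and the bulk graviton spin-0 result
b_t0 = sp.sqrt(sp.pi)*(d-2)**2*(d-1)*sp.gamma(2*Dl+1) \
       / ((d+2)*2**(4*Dl+3)*sp.gamma(2*Dl+sp.Rational(3,2)))
I_s0 = (d-1)*(d-2)**2/((d+2)*2**(4*d+3)) \
       * sp.sqrt(sp.pi)*sp.gamma(2*d+1)/sp.gamma(2*d+sp.Rational(3,2))

print(sp.simplify(b_t0.subs(Dl, d) - I_s0))   # -> 0 at Delta = d
\end{verbatim}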
For the spin-2 operator, the time-time component of the two-point function is \begin{align} \left\langle O_{00}^{(t)(jj')}(r_{A})O_{00}^{(t)(jj')}(r_{B})\right\rangle _{M}= & a_2(D)\frac{(2z_{A}z_{B})^{2d-2}}{|x_{A}-x_{B}|{}^{4d}}\,, \end{align} where \begin{equation} a_{2}(D)= 2^{2d+4}\frac{(d-2)(d+4)}{(d-1)(d+1)^{2}(d+2)^{2}}b(D)^{2}. \end{equation} After some calculations we get the OPE coefficient \begin{equation} C_{(0j)}^{(A)00}=\frac{b(D)}{a_{2}(D)}\frac{8\left(-2+(d-2)n^{2}+dn^{4}\right)}{(d+2)(d+1)n^{4}}z_{A}^{-2d+2}R_{A}^{2d}\sin^{-2d}\frac{\theta_{j}}{2n}, \end{equation} and the contribution of the spin-2 operator to the holographic MI \begin{equation} I(A, B)\big|_{s=2}=\frac{(d-2) (d-1)}{2^{4 d} (d+4)}\frac{\sqrt{\pi } \Gamma (2 d+1)}{\Gamma \left(2 d+\frac{3}{2}\right)}z_{cr}^{2d}, \end{equation} which is in agreement with (\ref{tensor2}) when $\Delta=d$. For the spin-4 operator (\ref{O4}), its contribution to the MI can be formally written as \begin{equation} I(A, B)|_{s=4}=\lim_{n\to 1}\frac{n}{2 (n-1)}\sum _{j=1}^{n-1} C^{(A)ABCD}_{(0j)} C^{(B)EFHI}_{(0j)} \left\langle O_{ABCD}^{(t)(0j)}\left(r_A\right) O_{EFHI}^{(t)(0j)}\left(r_B\right)\right\rangle\,, \end{equation} where $ C^{(A)ABCD}_{(0j)}$ is the corresponding expansion coefficient for the subregion $A$. As emphasized earlier, only the time-$\dots$-time component is relevant for our purpose, namely \begin{equation}\label{gravitonmi} I(A, B)|_{s=4}=\lim_{n\to 1}\frac{n}{2 (n-1)}\sum _{j=1}^{n-1} C^{(A)0000}_{(0j)} C^{(B)0000}_{(0j)} \left\langle O_{0000}^{(t)(0j)}\left(r_A\right) O_{0000}^{(t)(0j)}\left(r_B\right)\right\rangle. \end{equation} In the large separation limit, the two-point function of the operator at the leading order is given by \begin{equation} \left\langle O_{0000}^{(t)(jj')}(x_{A})O_{0000}^{(t)(jj')}(x_{B})\right\rangle\simeq a_4(D)\frac{(2z_{A}z_{B})^{2d-4}}{|x_{A}-x_{B}|{}^{4d}}\,, \end{equation} where \begin{equation} a_{4}(D)=\frac{4^{d+4}d^{2}}{(d-1)(d+1)(d+2)^{3}(d+4)}b(D)^{2}. \end{equation} The coefficient $ C^{(A)0000}_{(0j)} $ can be read off by using the $1/n$ prescription \begin{eqnarray} C_{(0j)}^{(A)0000}=-\frac{b(D)}{a_{4}(D)}\frac{2^{6}d(1+n^{2})^{2}}{(d+4)(d+2)^{2}n^{4}}z_{A}^{-2d+4}R_{A}^{2d}\sin^{-2d}\frac{\theta_{j}}{2n}. \end{eqnarray} Finally, plugging the above results into (\ref{gravitonmi}), we obtain \begin{equation} I(A, B)|_{s=4}=\frac{(d+1) (d-1)}{4^{2 d-1} (d+4) (d+2)}\frac{\sqrt{\pi } \Gamma (2 d+1)}{\Gamma \left(2 d+\frac{3}{2}\right)}z_{cr}^{2d}\,, \end{equation} which matches (\ref{tensor4}) when $\Delta=d$. All these contributions of the graviton to the holographic MI perfectly match those of a spin-2 primary operator with scaling dimension $\Delta=d$ in the boundary CFT. However, to compare the results with the stress tensor in the CFT, we need to clarify some subtleties on the CFT side. First, although exchanging a single operator $T_{\mu\nu}^{(j)}$ does not contribute to the mutual information, the derivative of the R\'enyi mutual information with respect to $n$ in the $n\rightarrow 1$ limit contains some universal information about the underlying theories as well. To be precise, we have \begin{equation}\label{renyi} I_n(A\,,B)=\fft{d(d-1)}{4\pi^2 C_T}\,\fft{h_n^2}{n(n-1)}z_{cr}^d+\dots \,,\end{equation} where $h_n$ is the conformal dimension of the higher dimensional twist operators.
For convenience, we introduce its definition from the long-distance behavior of the stress tensor, namely $|x|\rightarrow \infty$, \begin{equation} \langle T_{00}(0,x) \rangle_{\mathcal{C}_A^{(n)}}=\fft{d-1}{2\pi}\Big(\fft{2R_A}{|x|^2}\Big)^d h_n\,,\quad \langle T_{ij}(0,x) \rangle_{\mathcal{C}_A^{(n)}}=-\fft{\delta_{ij}}{2\pi}\Big(\fft{2R_A}{|x|^2}\Big)^d h_n \,,\end{equation} and all the other components are zero at $t_E=0$. Furthermore, it was proved in \cite{Hung:2014npa} that although the conformal dimension $h_n$ vanishes in the limit $n\rightarrow 1$, its derivative with respect to $n$ gives a non-trivial universal result \begin{equation} \partial_n h_n|_{n=1}=2\pi^{(d+2)/2}\fft{\Gamma{(d/2)}}{\Gamma{(d+2)}}\,C_T \,.\end{equation} Consequently, the mutual information does not receive contributions from the exchange of a single operator, but its derivative does. We find \begin{equation} \partial_n I_n|_{n=1}=d(d-1)\pi^d \fft{\Gamma^2{(d/2)}}{\Gamma^2{(d+2)}}\,C_T\,z_{cr}^d+\cdots \,.\end{equation} This result is consistent with (5.16)-(5.18) in \cite{Hung:2014npa}, in which the correlator of spherical twist operators around $n=1$ was derived by a different approach. It is easily seen that the derivative of the mutual information at the order $z_{cr}^d$ contains universal information about the underlying theories, which is, however, not visible in the leading order $z_{cr}^{2d}$ result for the mutual information itself. This is interesting and could probably be generalized to generic higher spin operators. Second, as the stress tensor is a quasi-primary operator, it is known that anomalous terms should be included when it transforms under a conformal transformation. One may worry that the calculations for the stress tensor become much more complicated than those for a primary spin-2 operator. Fortunately, we find that its connected two-point function transforms precisely as that of a spin-2 primary operator. By definition, we have \begin{equation} \langle T_{\mu\nu}(x_1)T_{\lambda\rho}(x_2) \rangle_{\mathcal{C}_A^{(n)}}^c=\langle T_{\mu\nu}(x_1)T_{\lambda\rho}(x_2) \rangle_{{\mathcal{C}_A^{(n)}}}-\langle T_{\mu\nu}(x_1) \rangle_{\mathcal{C}_A^{(n)}}\, \langle T_{\lambda\rho}(x_2) \rangle_{\mathcal{C}_A^{(n)}} \,. \end{equation} The second term on the right-hand side of this equation contributes at order $h_n^2$ to the OPE coefficients and at order $h_n^4$ to the R\'enyi mutual information. Hence this term does not contribute to the mutual information. In summary, we can safely conclude that the mutual information for various modes of the stress tensor can be obtained by simply setting $\Delta=d$ in the corresponding results for a primary spin-2 operator. As a result, we can claim that our bulk results from the graviton perfectly match the contributions of the stress tensor to the MI in the boundary CFT. The last remark is on the absence of the van Dam-Veltman-Zakharov discontinuity in the holographic mutual information. In the above, we showed the agreement between the bulk massless graviton and the boundary stress tensor. Actually, this agreement extends to the massive graviton and the corresponding spin-2 operator, as shown in Appendix A.2. From the field theory point of view, the dependence of the mutual information on the scaling dimension of the tensor operator is continuous.
On the bulk side, the exact agreement with the boundary result is remarkable, suggesting that the massless limit of the graviton is well-defined and the van Dam-Veltman-Zakharov discontinuity is absent \cite{Kogan2000uy,Porrati2000cp}. \subsection{Fermions} Now, we study the contribution from the fermionic field to the bulk mutual information. In $AdS_{d+1}$, the Dirac matrices $\Gamma$ have dimension $N=\mathrm{tr}\,I_{d+1}=2^{[\frac{d+1}{2}]}$, where $[\frac{d+1}{2}]$ denotes the largest integer not exceeding $\frac{d+1}{2}$. We choose the vielbein to be $e_{M }^a={\delta _{M }^a}/{z}$. The Dirac matrices in the tangent space satisfy $\{\gamma_a, \gamma_b\}=2\delta_{ab}$. The Gamma matrices in a curved space are defined by \begin{equation} \Gamma _{M }=\gamma _a e_{M }^a. \end{equation} Dual to a fermionic operator of dimension $\Delta$, there is a massive fermion in the bulk with mass $m=\Delta-d/2$. The fermion propagator in Euclidean AdS reads \cite{HS1998,fermion1999} \begin{equation} S(z,w)=-\sqrt{\frac{1}{w_0 z_0}} \left(G_{\Delta _-}(u) \left(\mathcal{P}_- \gamma _{\mu } z^{\mu }-\mathcal{P}_+ \gamma _{\mu } w^{\mu }\right)+G_{\Delta _+}(u) \left(\mathcal{P}_+ \gamma _{\mu } z^{\mu }-\mathcal{P}_- \gamma _{\mu } w^{\mu }\right)\right)\,, \end{equation} where $G_{\Delta_\pm}(u)$ is the scalar propagator given in (\ref{scalarpropagator}) and \begin{eqnarray} \mathcal{P}_{\pm }=\frac{1}{2} \left(1\pm \gamma _d\right)\,,\quad \Delta _{\pm }=\frac{d}{2}+\left(m\pm \frac{1}{2}\right). \end{eqnarray} In the large distance limit, we find \begin{equation} S(z,w)\overset{u\to \infty }{\approx }\mathcal{A} \left(I_N \left(w_d+z_d\right) -\gamma _{\mu } (z-w)^{\mu }+\gamma _i\gamma _0 \left(z^i-w^i\right)\right)\,, \end{equation} where \begin{equation} \mathcal{A}=-\frac{\Gamma \left(\frac{d+1}{2}+m\right)}{\left(\pi ^{d/2} 2^{\frac{d+3}{2}+m}\right) \Gamma \left(m+\frac{1}{2}\right) \sqrt{w_d z_d} u^{\frac{d+1}{2}+m}}\,.\end{equation} The gravity dual of the boundary spin-1 operator constructed from a fermionic operator is \begin{equation}\label{eq:o1} O_M^{(f)(jj')}(r)=\bar{\psi }^{(j)}(r) \Gamma _M \psi ^{(j')}(r)-\psi ^{(j)}(r) \Gamma _M \bar{\psi }^{(j')}(r)\,. \end{equation} Here the superscript $(f)$ stands for the operators constructed from the fermionic operators. All the other bilinear operators have vanishing contributions to the mutual information due to the antisymmetry of their indices \cite{CCHL2017}. Just like in the previous cases, only the time-time component of the two-point function is relevant for our purpose. We find in the large distance limit \begin{equation} \left\langle O_0^{(f)(jj')}\left(r_A\right) O_0^{(f)(jj')}\left(r_B\right)\right\rangle\simeq 16\mathcal{A}^2N \frac{(2z_A z_B)^{2m+d-1}}{|x_A - x_B|^{4m+2d}}\,. \end{equation} Using the $1/n$ prescription, we compute the OPE coefficient as \begin{equation} C^{(A)0}_{(jj')}= (-1)^{j+j'} \frac{2^{-m-(d+3)/2}(R_{A})^{2m+d} \sin^{-2m-d}\frac{\theta_{jj'}}{2n}}{\mathcal{A}\,N(z_{A})^{2m+d-1}}+O(n-1)\,. \end{equation} However, it is worth emphasizing that there are some subtleties in using the $1/n$ prescription for fermions. First, the fermion propagator in the conifold geometry $C_A^{(n)}$ satisfies the boundary condition $G_F^{(n)}(\theta+2\pi n)=(-1)^{(n-1)}G_F^{(n)}(\theta)$, which is periodic only when the replica parameter $n$ is odd. We expect that the odd-$n$ result is already sufficient to derive the OPE coefficients. Second, we have included a factor $(-1)^{j+j'}$ in the OPE coefficient.
This is correct since a fermion picks up a factor of $(-1)$ under a $2\pi$ rotation \cite{Herzog2015}. Finally, we find the fermion contribution to the bulk mutual information \begin{equation} I(A,B)|_{s=1}=\frac{\sqrt{\pi } 2^{\left\lfloor \frac{d+1}{2}\right\rfloor -2} \Gamma (d+2 m+1)}{4^{d+2 m} \Gamma \left(d+2 m+\frac{3}{2}\right)}z_{cr}^{d+2m}. \end{equation} Note that it is positive, which is opposite in sign to the vector operator constructed from the scalar operator. This is reasonable since it gives the leading order contribution to the mutual information. For odd $d$, the result is the same as that of a Dirac fermion in the boundary CFT \cite{CCHL2017}. However, for even $d$, it is just one half of that. This can be easily understood in the AdS/CFT correspondence. The bulk fermion has different duals in the boundary in different dimensions. When $d$ is odd, it is dual to a Dirac fermion operator in the boundary. However, when $d$ is even, it corresponds to a Weyl fermion operator, which can be viewed as one half of a Dirac fermion \cite{fermion1998,fermion1999_2,Iqbal2009}. We can write a Dirac fermion as $\Psi=\Psi_1 +\Psi_2$, in which $\Psi_{1,2}$ are Weyl fermions. For a general primary field $O$, its contribution to the mutual information is $I\sim\frac{\left\langle O\right\rangle \left\langle O\right\rangle }{\left\langle OO\right\rangle }$. Two independent fields $O_1$ and $O_2$ contribute as $I\sim\frac{\left\langle O_1\right\rangle \left\langle O_1\right\rangle }{\left\langle O_1 O_1\right\rangle }+\frac{\left\langle O_2\right\rangle \left\langle O_2\right\rangle }{\left\langle O_2 O_2\right\rangle }$. Since the bulk fermion is dual to only a Weyl fermion (half of a Dirac fermion) on the boundary when $d$ is even, its contribution to the bulk MI is only half of that contributed by a boundary Dirac fermion. In short, from the AdS/CFT correspondence, the bulk MI from the Dirac fermion in different dimensions matches the boundary results. \section{Conclusions} In this paper, we tried to understand holographically the universal behaviors in the leading orders of the mutual information between two disjoint spheres in a CFT. Such universal behaviors were found in \cite{CCHL2017} by using the operator product expansion of the twist operator and the $1/n$ prescription. Holographically, the spherical twist operator can be understood as a non-local hemisphere. In the large distance regime, we can still use the operator product expansion of the hemisphere to simplify the calculations. As we are interested in the mutual information, we can safely ignore the backreaction of the twist operator on the geometry. Effectively, we can still apply the replica trick in the bulk without worrying about the backreaction. Moreover, in the $n\to 1$ limit, the fields can be treated as in a generalized free theory, such that the interactions can be ignored. In the bulk computation, the fields are treated as free fields as well. Therefore, in the computation of the holographic mutual information, we consider free fields in a fixed background. In particular, to compute the OPE coefficients, we can focus on free fields in a space with a conical singularity, such that we may apply the $1/n$ prescription to read off the coefficients.
By explicit computation, we showed that the leading mutual information in a CFT, whether it comes from a scalar, vector, tensor, or fermionic operator, can be reproduced by the holographic computation of the corresponding dual field. In retrospect, the universal behaviors in the leading mutual information in a general CFT suggest that it is independent of the details of the AdS/CFT correspondence, namely the explicit construction of the AdS gravity and the dual CFT. Such behaviors rely only on the symmetry. From the general lesson of the AdS/CFT correspondence, a field in AdS is dual to an operator in the boundary theory. In other words, the behavior of the free fields in AdS can be captured by the dual operator constrained by the conformal symmetry, and vice versa. Our study gives another piece of evidence to support this picture, though in a subtler way. Even though the conclusion might not be a big surprise, the procedure to arrive at this picture is remarkable. The leading contribution is from the bilinear operator composed of the fields in different replicas. In the scalar field case, we could even discuss the next-to-leading order contribution, which comes from the bilinear operator with a derivative. In this case, we found a new form of the projection operator defined on the slice of fixed radius. It was defined to peel off the radial components so as to put the operators in the same form as in the boundary CFT. In the gauge field case, we can treat this as a particular gauge choice. However, when considering the massive fields, there is no such understanding. We expect the construction to be useful in other situations. Another remarkable point is on the gauge fields. For the fields with gauge symmetry, including the massless vector bosons and the massless gravitons, we found that the gauge parts in the propagators played an indispensable role in the calculation, even though the final results are gauge independent. We argue that this is due to the gauge symmetry breaking around the entangling surfaces, which gives rise to extra physical degrees of freedom contributing to the MI. In fact, we also calculated the MI from the massive bulk fields in Appendix A. The results match exactly the boundary results of the vector and tensor operators with generic scaling dimensions. This supports our arguments, since a massive vector field has one more degree of freedom than the gauge boson. As a byproduct, we showed the absence of the van Dam-Veltman-Zakharov discontinuity in the computation of the holographic mutual information. In our calculations, we treated the AdS spacetime as a fixed background. As a result, our results are meaningful only when we take $n\to 1$, so that the twist operator can be treated as a probe. Thus, it seems impossible to generalize the discussion to the R\'enyi entropy, since in this case the twist operator is heavy and would affect the background spacetime significantly. The key problem we face is how to expand the twist (or surface) operator. There are other methods to construct the gravity dual of a surface operator in CFT, like the ``bubbling'' surface operator mentioned in \cite{DGM}, which takes the backreaction into account, so that it could be used to obtain quantum corrections to the R\'enyi entropy. It would be interesting to investigate this possibility. In \cite{CCHL2017}, it was shown that the mutual information can be expanded in terms of the conformal blocks.
The conformal blocks carry the higher order contributions in the cross ratio. In the free fermion case, the conformal block expansion fits the numerical study better than the simple leading order term of the expansion. It would be interesting to see if one can find the conformal block expansion in the holographic picture. \section*{Acknowledgments} We would like to thank Jiang Long, Lin Chen and Peng-Xiang Hao for stimulating discussions. B. Chen would like to thank the participants of the advanced workshop ``Dark Energy and Fundamental Theory'' supported by the Special Fund for Theoretical Physics from the National Natural Science Foundations of China with Grant No. 11447613 for stimulating discussions. C.-Y. Zhang thanks the APCTP focus program ``Geometry and holography for quantum criticality'' in Pohang, Korea. Z.-Y. Fan is supported in part by NSFC Grants No. 11273009 and No. 11303006 and also supported by the Guangdong Innovation Team for Astrophysics (2014KCXTD014). C.-Y. Zhang is supported by the National Postdoctoral Program for Innovative Talents BX201600005. B. Chen and W.-M. Li were supported in part by NSFC Grant No.~11275010, No.~11325522, No.~11335012 and No.~11735001.
{ "timestamp": "2018-04-30T02:04:53", "yymm": "1712", "arxiv_id": "1712.05131", "language": "en", "url": "https://arxiv.org/abs/1712.05131" }
\section{\uppercase{Introduction}} \label{sec:Intro} \PARstart{M}{obile} network operators are experiencing a significant traffic demand increase because of the emergence of bandwidth-consuming applications, such as video streaming over new-generation mobile devices \cite{CiscoWhitePaper}. \emph{Long Term Evolution - Advanced} (LTE-A) \cite{giambene2014resource} systems have addressed this issue using both high-power nodes (macro cells) and low-power ones (small cells). These \emph{Heterogeneous Cellular Networks} (HetNets) \cite{chandrasekhar2008femtocell}, also denoted as multi-tier cellular systems, need \emph{Inter-Cell Interference Coordination} (ICIC) mechanisms in order to deal with co-tier and cross-tier interference \cite{RefEMTC},\cite{icc}. The disparity between macro and micro cell transmission powers causes a load imbalance \cite{loadbalancingTrans}. Hence, it is important to offload traffic from macro to micro cells in order to improve the user experience \cite{overviewloadbalancing}. In addition, the emergence of \emph{Fifth-Generation} (5G) cellular networks will lead to ultra-dense cell deployments to meet the new capacity needs. Thus, interference and load imbalance issues are becoming more critical to network performance. The commonly-used hexagonal arrangement of base stations has some limitations in modeling cellular networks. In reality, the positions of base stations do not exactly follow a regular hexagonal layout because of both variable traffic demands among different locations (e.g., rural, urban) and obstacles (e.g., mountains, forests, etc.) \cite{TractableApproach}. Since it is difficult to achieve an analytical model in realistic conditions \cite{WynerAccuracy},\cite{StochasticSurvey}, simulations are basically the only approach for performance evaluation in terms of capacity and outage probability. On the other hand, stochastic geometry is a very powerful mathematical tool for modeling wireless networks with random topologies \cite{StochasticGeoandApplications},\cite{RandomPlaneNetwork}. The best-known point process applied to the study of HetNets is the \emph{Poisson Point Process} (PPP) \cite{StochasticGeoandApplications}. The PPP model was first proposed in \cite{TractableApproach} to describe the distribution of cells in classical cellular networks. Then, this study was extended to HetNets in \cite{HetNEtFlexibleAssoc}. The significant advantage of using PPPs is that they make it possible to obtain closed-form formulas for important performance metrics, such as outage probability, average capacity, etc. \subsection{Related Works} \label{sec:relatedWorks} In the literature, the papers using stochastic geometry analysis usually focus on two alternative cases, which are \emph{spectrum partitioning} \cite{AUtilityPerspective}-\cite{RateCoverate} and \emph{spectrum sharing} schemes \cite{HetNEtFlexibleAssoc},\cite{OffloadinginHetNets},\cite{recentpaper}. With spectrum sharing, the entire spectrum is reused in every cell; with spectrum partitioning, instead, the bandwidth is divided into different parts to be reused among cells. For example, in two-tier HetNets, the bandwidth is divided into $F_1$ and $F_2$ segments with spectrum partitioning.
There are two ways to use $F_1$ and $F_2$: (\emph{i}) the macro cell tier uses $F_1$, while the micro cell tier uses $F_2$; (\emph{ii}) the macro cell tier uses only $F_1$, while the micro cell tier can use both $F_1$ and $F_2$: $F_2$ is used by micro cells for biased \emph{User Equipments} (UEs) only (i.e., those UEs, originally belonging to the macro cell tier, which are forced to associate with the micro cell tier) according to a traffic offloading scheme \cite{rangeExpansion},\cite{Qualcomm}. Both spectrum sharing and spectrum partitioning schemes cause UEs to suffer from co-tier and cross-tier interference. Most of the papers adopting a stochastic geometry model assume a cell association scheme based on maximum received power (or maximum-biased received power) \cite{HetNEtFlexibleAssoc}-\cite{recentpaper}. However, forcing UEs to associate with the cell providing the maximum (or maximum-biased) received power does not guarantee a \emph{Signal-to-Interference plus Noise Ratio} (SINR) higher than a minimum SINR threshold, so that UEs may experience outage. The works in \cite{AnalysisMaxSIR}-\cite{AnalysisMaxSINRConnectivity} provide an analytical framework to study the performance of HetNets with a max-SINR cell association scheme. However, these papers adopt the simplified assumption of frequency reuse of 1. Thus, co-tier and cross-tier interference cause a large number of UEs to experience outage conditions. Moreover, none of these papers takes the load balancing issue into account, so that the macro tier ends up with many more associated UEs than the micro tier. The survey paper \cite{ultimo} deals with stochastic geometry and frequency reuse, but in a single-tier scenario, thus oversimplifying the model by neglecting cross-tier interference and load balancing issues. Finally, the paper \cite{dopoultimo} uses reduced-power transmissions for macro cells in some sub-frames for a two-tier system with a PPP model. The association of UEs to micro cells is privileged by means of a range expansion approach. The limitation of this work is that only full frequency reuse is considered. \subsection{Contributions and Organization} In this study, we focus on the downlink, since it is more critical than the uplink in terms of traffic demand. We consider a scenario with \emph{Frequency Reuse} (FR) of $K$, so that the entire spectrum is divided into $K$ equal-size parts. In particular, each macro/micro cell in the system randomly uses one frequency band to reduce the interference from other cells. Moreover, we consider a cell association scheme based on the SINR at the UEs: we propose a SINR-based cell association scheme, where the micro tier has higher priority in the association, so as to achieve load balancing with the macro tier. In particular, a UE associates with a micro cell as long as it experiences SINR greater than a minimum SINR threshold $T$ from the micro tier; if the UE is in the outage area of the micro cell tier, it will consider associating with a macro cell. Even if SINR-based cell association is a common approach in cellular systems, its analysis in the PPP stochastic geometry case is not so common in the literature. We address this issue and provide analytical derivations of the outage probability, the average cell load, and the rate coverage probability, i.e., the probability that the UE bit-rate is greater than a certain threshold $R_T$.
On the basis of input parameters such as the bit-rate threshold $R_T$, the minimum SINR threshold $T$, the transmission powers of the base stations, and the UE density, the purpose of this study is to determine the $K$ value and the base station densities that fulfill the requirements in terms of outage probability (see Section \ref{sec:OutageAverage}) and rate coverage probability (see Section \ref{sec:Rate}). A simulation approach has also been provided in order to validate our analysis. The original contributions of this work can be summarized as follows: \begin{itemize}[leftmargin=.2in] \item The work in \cite{AnalysisMaxSIR} adopts a max-SINR cell association scheme without differentiation between macro and micro cells. This approach tends to underutilize the resources of the micro cell layer. In our work, we remove this issue by giving strict priority to the micro cells in the UE cell selection process (cell offloading). Only when no micro cell is available to serve the UE is it associated with a macro cell. \vspace{0.1cm}\item The works in \cite{HetNEtFlexibleAssoc},\cite{OffloadinginHetNets},\cite{recentpaper},\cite{AnalysisMaxSIR} adopt a full frequency reuse system. We remove this limitation: our analysis considers $K$ frequency segments that can be assigned at random to both macro and micro cells. \vspace{0.1cm}\item On the basis of the analysis provided in this paper, a cell planning optimization approach is proposed to select the reuse factor $K$ and the ratio of micro-to-macro cell densities, depending on the ratio of micro-to-macro transmission power levels and other system parameters.\\ \end{itemize} The rest of this paper is organized as follows: Section \ref{sec:ModelandAssumptions} presents the system model and assumptions for the study of two-tier HetNets with prioritized SINR-based cell association. In Section \ref{sec:OutageAverage}, we derive the outage probability and the average load on each tier. Section \ref{sec:Rate} provides the analysis of the rate coverage probability and the mean UE bit-rate. Section \ref{sec:Simulation} describes the simulation approach adopted in this study, along with the settings of the LTE-A-based HetNet system. Results are presented in Section \ref{sec:ResultsPPP}, followed by Section \ref{sec:Conclusions}, which provides the conclusions. \vspace{10pt} \section{\uppercase{System Model and Assumptions}} \label{sec:ModelandAssumptions} \subsection{Scenario Description} Let us consider a two-tier HetNet scenario, where macro cells, micro cells, and UEs are placed in the service area according to independent homogeneous PPPs. In particular, let $\Phi_{M}$ with density $\lambda_{M}$, $\Phi_{\mu}$ with density $\lambda_{\mu}$, and $\Phi_{u}$ with density $\lambda_{u}$ denote the PPPs characterizing macro eNBs (M-eNBs), micro eNBs (\textmu-eNBs), and UEs, respectively; the densities represent the average number of points of the processes per area unit. $\gamma_{M}$ and $\gamma_{\mu}$ denote the path loss exponents for the macro and micro cell layers. For the sake of simplicity, we assume $\gamma_{M} = \gamma_{\mu} = \gamma$. Moreover, M-eNBs use a transmission power $P_{M}$ that is higher than the transmission power $P_{\mu}$ used by \textmu-eNBs. The system bandwidth is denoted by $W$. Let $h$ denote the channel gain (power factor) due to Rayleigh fading; $h$ has an exponential distribution with unit mean. Vector $r_i$ denotes the location of eNB $i$ (be it an M-eNB or a \textmu-eNB), assuming that a reference UE is in the origin. Fig.
\ref{fig:PPPScenario} shows an example of a HetNet scenario according to our PPP assumption: dots indicate M-eNBs, while circles are \textmu-eNBs. The lines among macro cells are obtained by using a Voronoi diagram for the macro tier; however, they do not reflect the actual cell association scheme adopted in our study (i.e., these lines are not real cell borders in this case). \begin{figure} \centering \epsfxsize=9cm \epsfbox{Images/PPPScenario.pdf} \caption{Heterogeneous network with PPP distribution.}\label{fig:PPPScenario} \end{figure} In this study, because of the dense deployment of eNBs, we neglect the background noise with respect to the interference to simplify the analysis \cite{AUtilityPerspective}; therefore, from now on, SINR will simply become SIR. In this scenario, FR of $K$ is adopted to improve the SIR of UEs, especially those in edge areas, and to reduce the outage probability. The entire frequency band is divided into $K$ equal segments, denoted as $\{F_1, F_2,...,F_K\}$. Moreover, we assume that each cell randomly selects 1 out of the $K$ segments. An example of FR with $K = 3$ is shown in Fig. \ref{fig:FFRHetNet}, where the macro cell selects $F_1$ and the two micro cells use $F_2$ and $F_3$, respectively. As we can see, interference is significantly reduced in this case with respect to spectrum sharing schemes. \begin{figure} \centering \epsfxsize=6.5cm \epsfbox{Images/FFR.pdf} \caption{FR scheme adopted in the HetNet scenario for $K$ = 3.}\label{fig:FFRHetNet} \end{figure} \subsection{SIR Model} Assuming that the reference UE is located in the origin and associates with cell $i$ using frequency segment $k$, where $k \in \{1,2,...,K\}$, its SIR can be expressed as follows: \begin{equation}\label{eq:SINRmacro} \V{SIR}_i = \frac{P_i h_i \parallel r_i \parallel^{-\gamma}}{\sum_{j \in \left\{\Phi_{M,k} \cup \Phi_{\mu,k}\right\} \setminus i} P_j h_j \parallel r_j \parallel^{-\gamma}}, \end{equation} where $\Phi_{M,k} \cup \Phi_{\mu,k}$ denotes the set of M-eNBs and \textmu-eNBs using frequency segment $k$, and $\parallel r_i \parallel$ denotes the distance from eNB $i$ to the reference UE. $P_i$ can be either $P_M$ or $P_{\mu}$, depending on whether $i$ is a macro or a micro cell. Note that the background noise has been neglected in this formula, as explained in the previous sub-Section. Assuming a mapping between cell index $i$ and the frequency segment $k$ assigned to cell $i$, index $k$ has been omitted from the above SIR notation. \vspace{10pt} \section{\uppercase{Outage Probability}} \label{sec:OutageAverage} \subsection{Outage Probability Analysis} \label{subsec:OutageProb} Let $T$ denote the minimum SIR threshold. A UE is not in outage conditions if it experiences SIR higher than $T$ from at least one cell of the system. The outage probability is given by ${O} = 1 - {P}_c$, where ${P}_c$ denotes the coverage probability of the entire network. A UE is in the coverage area of an arbitrary cell $i$ (be it an M-eNB or a \textmu-eNB) in the network if $\V{SIR}_i > T$. Thus, the coverage probability of the entire network, including all macro and micro cells, can be expressed as follows: \begin{equation}\label{eq:coverageProb} {P}_c = {P}\left(\bigcup\limits_{i \in \Phi_{M} \cup \Phi_{\mu}}\left\{\V{SIR}_i > T\right\}\right), \end{equation} where $\Phi_{M}$ and $\Phi_{\mu}$ indicate the sets of macro and micro cells: $i \in \Phi_M \cup \Phi_{\mu}$ denotes one point (and then one eNB) belonging to the superposed process $\Phi_M \cup \Phi_{\mu}$.
This probability is difficult to obtain, since it involves calculating the coverage probability of every cell (i.e., ${P}\{ \V{SIR}_i > T \}$) and then considering the union. Note that a UE could be in the coverage areas of different cells at the same time, so that these coverage events are not disjoint. In order to deal with this issue, we consider ${P}_{c,k} = {P} (C_k)$, the probability of the event $C_k$ that the reference UE is in the coverage area of at least one of the cells that use frequency $F_k$ (considering together the micro and macro cells using $F_k$). In other words, we study the coverage event $C_k$ of each frequency segment, considering all the tiers together; then, we take the union of the coverage events of all frequency segments. Thus, we have: \begin{equation}\label{eq:coverageProb2} {P}_c = {P} \left( \bigcup\limits_{k \in \{1,2,...,K\}} C_k \right). \end{equation} By using the inclusion-exclusion principle, we obtain: \begin{equation}\label{eq:in-exclusion} \begin{split} & {P} \left( \bigcup\limits_{k \in \{1,2,...,K\}} C_k \right) =\\ & \sum_{k=1}^{K}\left( (-1)^{k+1} \sum_{x_1<x_2<...<x_k} {P}(C_{x_1}\cap C_{x_2}\cap ... \cap C_{x_k}) \right), \end{split} \end{equation} where $x_i \in \{1,2,...,K \}$. According to (\ref{eq:in-exclusion}), for a given $k$, we select all possible sets of $x_i$ values satisfying the condition ${x_1<x_2<...<x_k}$. For example, when $K = 3$, we have: \begin{equation}\label{eq:coverageProb3} \begin{split} {P}_c & = {P} (C_1) + {P} (C_2) + {P} (C_3) \\ & - \left[ {P}(C_1\cap C_2)+ {P}(C_2\cap C_3)+ {P}(C_1\cap C_3)\right] \\ & + {P}(C_1\cap C_2\cap C_3) \\ & = {P}_{c,1} + {P}_{c,2} + {P}_{c,3} - {P}(C_1\cap C_2) - {P}(C_2\cap C_3) \\ & - {P}(C_1\cap C_3) + {P}(C_1\cap C_2\cap C_3). \end{split} \end{equation} The difficulty is to compute the terms ${P}(C_1\cap C_2)$, ${P}(C_2\cap C_3)$, ${P}(C_1\cap C_3)$, and ${P}(C_1\cap C_2\cap C_3)$. It is important to note that, with FR of $K$, cells using frequency $F_x$ do not interfere with cells using frequency $F_y$ when $x \neq y$; thus, $C_x$ and $C_y$ are independent events. Nevertheless, the coverage areas associated with $C_x$ and $C_y$ can have some degree of overlap (their intersections can be non-empty)\footnote{We will not consider overlap among cells using the same frequency. As commented in Appendix A, this holds exactly for SIR threshold $T \ge 1$, while it is an approximation for $T < 1$.}. On the basis of this consideration, we can re-write the expressions in (\ref{eq:coverageProb3}) as follows: \begin{equation}\label{eq:jointProb1} {P}(C_x\cap C_y) = {P}(C_x) {P}(C_y) = {P}_{c,x} {P}_{c,y} \end{equation} and similarly \begin{equation}\label{eq:jointProb2} {P}(C_1\cap C_2\cap C_3) = {P}_{c,1} {P}_{c,2} {P}_{c,3}. \end{equation} Moreover, due to the symmetry of the problem and assuming a sufficiently large number of cells, we have ${P}_{c,1}= {P}_{c,2}= {P}_{c,3} = ...= {P}_{c,K}$; thus, formula (\ref{eq:in-exclusion}) can be further simplified by using the binomial theorem as follows: \begin{equation}\label{eq:coverageProb4} {P}_c = \sum_{k=1}^{K} (-1)^{k+1} \binom{K}{k} {P}_{c,1}^k = 1 - (1- {P}_{c,1})^K.
\end{equation} \myhl{From Theorem \ref{theoremOutage} below, we obtain the expression of ${P}_{c,1}$ in the special case when both macro and micro tiers have the same SIR threshold $T$ and noise is neglected (a more general expression of ${P}_{c,1}$ is provided in Appendix A): \begin{equation}\label{eq:theorem1} {P}_{c,1} \approx D(\gamma, T) \triangleq \frac{\pi}{C(\gamma)T^{2/\gamma}}, \end{equation} } where $C\left( \gamma \right) = \frac{{2\pi ^2 }}{{\gamma \;\sin \left( {\frac{{2\pi }}{\gamma }} \right)}}$ and the probability $D(\gamma, T)$ is defined in (\ref{eq:theorem1}) itself. In what follows, we will also use notations like $P_{c}(T)$ and $P_{c,1}(T)$ to stress that these quantities depend on the SIR threshold $T$. Finally, the outage probability can be expressed by means of (\ref{eq:coverageProb4}) as follows: \begin{equation}\label{eq:coverageProbFinal} {O} = 1 - {P}_c = (1- {P}_{c,1})^K. \end{equation} \begin{theorem} \label{theoremOutage} \textit{The coverage probability ${P}_{c,1}$ when only cells with frequency $F_1$ are considered, assuming that the macro and micro tiers have SINR thresholds $T_M$ and $T_{\mu}$, respectively, can be expressed as} \begin{equation}\label{th:generalizedCoverageProb} \begin{split} & {P}_{c,1}(T_M, T_{\mu}) \approx \sum_{j \in \{M, \mu\}} \frac{\lambda_j}{K} \times \\ &\int_{ {\mathbb{R}}^2} \exp\left[ -\left(\frac{T_j}{P_j}\right)^{2/\gamma} \parallel r_j \parallel^2 C(\gamma) \sum_{m \in \{M,\mu\}} \frac{\lambda_m}{K} P_m^{2/\gamma}\right] \times \\ & \exp\left(\frac{-T_j \sigma^2 \parallel r_j \parallel^\gamma}{P_j}\right) \V{d}r_j, \end{split} \end{equation} where the notations are detailed in Appendix A. \end{theorem} This Theorem is adapted from Theorem 1 in \cite{AnalysisMaxSIR}, taking frequency reuse into account. The proof is provided in Appendix A. In this theorem, we only refer to the (both macro and micro) cells using frequency $F_1$, out of all the cells using frequencies $F_1$, $F_2$, ..., $F_K$. In Appendix A, we also explain the approximation made to achieve the result in (\ref{th:generalizedCoverageProb}). \subsection{Average Cell Load} The average cell load represents the fraction of active UEs belonging to the micro or the macro tier or, equivalently, the probability that a random UE belongs to a tier, given that this UE is under the coverage area of the network (i.e., not in outage conditions). Cells with a higher number of associated UEs will have a higher average load. Usually, UEs experience the best SIR from the macro tier, causing this tier to be overloaded if a common max-SIR cell association scheme is adopted. In order to achieve a better load balancing between micro and macro cells, we adopt a prioritized SIR-based cell association as follows: the micro tier has higher priority than the macro tier when cell association is performed. In particular, a UE associates with the micro tier as long as it experiences $\V{SIR}>T$ from this tier. Only if the UE has $\V{SIR}<T$ (outage) from the micro tier will it consider associating with the macro tier, provided that the latter can guarantee $\V{SIR}>T$; otherwise, the UE experiences outage. This scheme allows reducing the number of UEs in macro cells while keeping the outage probability as low as possible. The rationale of this scheme is that, if a UE can be served by both macro and micro tiers, we prefer that this UE be associated with the micro tier every time this is possible, thus avoiding the use of macro cell resources that could be more useful in those cases where UEs can only be covered by the macro tier.
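For illustration purposes only, the following minimal sketch (in Python; the function and variable names are ours and purely hypothetical) summarizes the association rule just described, given the per-cell SIR values measured by the reference UE:
\begin{verbatim}
def associate(sir_micro, sir_macro, T):
    """Prioritized SIR-based cell association (illustrative helper).

    sir_micro, sir_macro: lists of linear-scale SIR values that the
    reference UE measures from every micro / macro cell.
    Returns the tier the UE associates with, or 'outage'.
    """
    if max(sir_micro, default=0.0) > T:  # micro tier has strict priority
        return 'micro'
    if max(sir_macro, default=0.0) > T:  # fall back to the macro tier
        return 'macro'
    return 'outage'                      # SIR <= T everywhere
\end{verbatim}
The strict priority of the micro tier is what differentiates this rule from a plain max-SIR association: the macro SIR values are examined only when no micro cell exceeds the threshold.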
This requires the macro and micro tiers to have some coverage overlap, so that UEs can be offloaded. Note that, even if a UE associates with the cell providing the highest SIR, this does not mean that the UE will have SIR $> T$ from this cell. As shown in Appendix A, when $K = 1$ and $T \geq 0$ dB, a UE cannot simultaneously be in the coverage area of two different cells \cite{AnalysisMaxSIR}; in these circumstances, there is no overlap area among cells, so that a UE cannot be offloaded from one cell to another. With $K = 1$ and $T < 0$ dB, there is some coverage overlap among cells; however, we neglect it to carry out the analysis in Appendix A. On the other hand, if $K > 1$, coverage overlap can exist for any SIR threshold $T$ value, as considered in sub-Section \ref{subsec:OutageProb}. In this case, our prioritized SIR-based cell association scheme can allow a better load balancing between the macro and micro tiers. \myhl{Even though we assume an interference-limited HetNet scenario, the previous considerations are still correct when background noise is included.} \begin{prop} \label{propositionAverageLoad} \textit{When prioritized SIR-based cell association with reuse of $K$ is adopted, the probability that a UE associates with the micro tier, under the condition that this UE is in the coverage area, can be expressed as} \begin{equation}\label{eq:microLoad} A_{\mu} = \frac{1-(1- {P}_{c,1,\mu})^K}{1-(1- {P}_{c,1})^K}, \end{equation} where ${P}_{c,1,\mu} = \frac{\lambda_{\mu}\pi P_{\mu}^{2/\gamma} T^{-2/\gamma}}{C(\gamma) \sum_{i \in \{M,\mu\}} \lambda_i P_i^{2/\gamma} }$ is the coverage probability of the micro tier when only cells (both macro and micro) using frequency $F_1$ are considered. \end{prop} The proof of Proposition \ref{propositionAverageLoad} is provided in Appendix B; it is adapted from \cite{AnalysisMaxSIR}, taking our prioritized SIR-based cell association scheme into account. According to our prioritized SIR-based cell association scheme, UEs in the coverage area of the network, but not in the coverage area of the micro tier, will associate with the macro tier. Thus, the probability that a reference UE associates with the macro tier (i.e., the average load on the macro tier under the condition that the UE is in the coverage area) is complementary with respect to (\ref{eq:microLoad}), as shown below: \begin{equation}\label{eq:macroLoad} A_M = 1-A_{\mu} = \frac{(1-{P}_{c,1,\mu})^K - (1-P_{c,1})^K}{1-(1-P_{c,1})^K}. \end{equation} \vspace{10pt} \section{\uppercase{Rate Coverage Probability}} \label{sec:Rate} The rate coverage probability represents the probability that a reference UE can achieve a bit-rate greater than or equal to a certain minimum value, denoted by $R_T > 0$\footnote{$R_T$ and the SIR threshold $T$ correspond to two different constraints: the set of UEs that have bit-rate higher than $R_T$ is a subset of the UEs in the coverage area (i.e., SIR $> T$), assuming that $R_T$ is greater than the bit-rate corresponding to the minimum SIR $= T$.} \cite{RateCoverate}. We consider $R_T$ as a given system value. Let $R$ denote the bit-rate (a random value) of a reference UE in the network. When this UE is in outage conditions (not under the coverage area of any tier), its bit-rate is 0, so that the probability that this UE achieves the target rate threshold is 0 as well. Let $C_M$ ($C_{\mu}$) denote the event that the reference UE associates with the macro (micro) tier according to our prioritized SIR-based cell association scheme.
A UE is associated with either the macro tier (event $C_M$) or the micro cell tier (event $C_{\mu}$), or it is in outage (event $C_O$). These events are disjoint. Moreover, $\bar{C}_M$ and $\bar{C}_{\mu}$ denote the complementary events of $C_M$ and $C_{\mu}$, respectively. We notice that $C_O = \bar{C}_M \cap \bar{C}_{\mu}$. Then, following the law of total probability, the probability of the event $\{R \geq R_T\}$ can be written as follows: \begin{equation}\label{eq:rateCover} \begin{split} {P}(R \geq R_T) & = {P} (R \geq R_T|C_M) {P}(C_M) \\ & + {P}(R \geq R_T|C_{\mu}) {P}(C_{\mu}) \\ & + {P}(R \geq R_T | \bar{C}_M \cap \bar{C}_{\mu}) {P} (\bar{C}_M \cap \bar{C}_{\mu}) \\ & = {P}(R \geq R_T|C_M) {P}(C_M) \\ & + {P}(R \geq R_T|C_{\mu}) {P}(C_{\mu}), \end{split} \end{equation} because ${P}(R \geq R_T | \bar{C}_M \cap \bar{C}_{\mu}) = 0$, since the UEs in the outage area have no throughput. By using the definition of conditional probability, we obtain: \begin{equation}\label{eq:rateCoverBayes} \begin{split} {P}(R \geq R_T) & = {P}(R \geq R_T, C_M) + {P}(R \geq R_T, C_{\mu}). \end{split} \end{equation} In order to calculate the rate coverage probability, we will separately derive the rate coverage probabilities of the macro and micro tiers in the next sub-Sections. \subsection{Micro UEs' Rate Coverage Probability} In this sub-Section, we compute the term ${P}(R \geq R_T, C_{\mu})$. The event $C_{\mu}$ implies that the UE, belonging to the micro tier, associates with the micro cell providing the best SIR. We assume that a \emph{Round Robin} (RR) scheduler is used to allocate resources to UEs within a cell. In particular, we divide the bandwidth assigned to a micro cell by the average number of UEs in that cell, in order to obtain the average bandwidth available per UE. Thus, \myhl{given that the reference UE associates with the micro tier, its bit-rate can be expressed as follows according to the Shannon capacity formula:} \begin{equation}\label{eq:microRate} R = \frac{{W/K}}{{A_\mu P_c \;\frac{{\lambda _u }}{{\lambda _\mu }}}} \log_2 \left(1+ \max_{j \in \Phi_{\mu}} \V{SIR}_j \right), \end{equation} where $A_\mu P_c \frac{{\lambda _u }}{{\lambda _\mu }}$ denotes the average number of UEs per micro cell. Let $\rho_{\mu} = 2^{\frac{R_T K A_{\mu} \lambda_u {P}_c}{W \lambda_{\mu}}} - 1$. We have: \begin{equation}\label{eq:microRateCover} \begin{split} & {P}(R \geq R_T, C_{\mu}) = \\ & = {P} \left( \frac{W\lambda_{\mu}}{K A_{\mu}\lambda_u {P}_c} \log_2 \left(1+ \max_{j \in \Phi_{\mu}} \V{SIR}_j \right) \geq R_T , C_{\mu}\right) \\ & = {P} \left(\max_{j \in \Phi_{\mu}} \V{SIR}_j \geq \rho_{\mu}, \cup_{j \in \Phi_{\mu}} \left\{\V{SIR}_j \geq T \right\}\right). \end{split} \end{equation} The event $\max_{j \in \Phi_{\mu}} \V{SIR}_j \geq \rho_{\mu}$ is equivalent to $\cup_{j \in \Phi_{\mu}} \left\{\V{SIR}_j \geq \rho_{\mu}\right\}$: if one of the SIR values from the micro cells is higher than $\rho_{\mu}$, the maximum SIR among those values is higher than $\rho_{\mu}$ as well, and vice versa. Note that, depending on the values of $K$, $R_T$, $\lambda_{\mu}$, $T$, and $\lambda_u$, $\rho_{\mu}$ can be larger or smaller than $T$ (i.e., the rate coverage requirement can be more or less stringent than the outage requirement). Let $T_{\mu} = \max(\rho_{\mu},T)$. Thus, (\ref{eq:microRateCover}) can be re-written as follows: \begin{equation}\label{eq:microRateCoverSemiFinal} {P}(R \geq R_T, C_{\mu}) = {P} \left(\cup_{j \in \Phi_{\mu}} \left\{\V{SIR}_j \geq T_{\mu}\right\}\right).
\end{equation} Equation (\ref{eq:microRateCoverSemiFinal}) is fairly easy to interpret: if $\rho_{\mu} < T$, the outage condition is more stringent than the $R_T$ constraint, so that all UEs in the micro tier coverage area have a bit-rate higher than $R_T$. Otherwise, if $\rho_{\mu} \geq T$, equation (\ref{eq:microRateCoverSemiFinal}) can be understood as the micro tier's coverage probability when the SIR threshold is raised from $T$ to $\rho_{\mu}$ [see equation (\ref{eq:coverageProb})]. Then, by adopting the same method as that used to obtain (\ref{eq:coverageProb4}), we derive the coverage probability of the micro tier with SIR threshold $T_{\mu}$ as follows: \begin{equation}\label{eq:microCoverageProb} {P}_{c,\mu} (T_{\mu}) = 1 - \left[1- {P}_{c,1,\mu} (T_{\mu})\right]^K, \end{equation} where ${P}_{c,1,\mu} (T_{\mu}) = \frac{\lambda_{\mu}\pi P_{\mu}^{2/\gamma} T_{\mu}^{-2/\gamma}}{C(\gamma) \sum_{i \in \{M,\mu\}} \lambda_i P_i^{2/\gamma} }$ is the micro tier coverage probability when only cells using frequency $F_1$ are considered and the SIR threshold is $T_{\mu}$. Appendix B provides the details on the derivation of (\ref{eq:microCoverageProb}). Finally, from (\ref{eq:microRateCoverSemiFinal}) and (\ref{eq:microCoverageProb}), we obtain the rate coverage probability of the micro tier as a function of $R_T$ (and then of $\rho_{\mu}$) as follows: \begin{equation}\label{eq:microRateCoverFinal} {P}(R \geq R_T, C_{\mu}) = 1 - \left[1- {P}_{c,1,\mu} (T_{\mu})\right]^K. \end{equation} \subsection{Macro UEs' Rate Coverage Probability} In this sub-Section, we compute the term ${P}(R \geq R_T, C_M)$. Let $\rho_M = 2^{\frac{R_T K A_M \lambda_u {P}_c}{W \lambda_M}} - 1$. The event $C_M$ happens when the reference UE experiences SIR lower than $T$ from all micro cells, but SIR higher than $T$ from at least one macro cell. Thus, $C_M$ is characterized by two joint events as follows: \begin{equation}\label{eq:macroCoverage} C_M = \left(\bigcup_{i \in \Phi_{M}} \left\{\V{SIR}_i \geq T\right\}\right) \cap \left(\bigcap_{j \in \Phi_{\mu}} \left\{\V{SIR}_j < T\right\}\right). \end{equation} Similarly to the previous case, the rate coverage probability of the macro tier can be re-written as follows: \begin{equation}\label{eq:macroRateCover} \begin{split} & {P}(R \geq R_T, C_M) \\ & = {P}\left(\max_{i \in \Phi_{M}} \V{SIR}_i \geq \rho_M, \cup_{i \in \Phi_{M}} \{\V{SIR}_i \geq T\},\cap_{j \in \Phi_{\mu}} \{\V{SIR}_j < T\} \right)\\ & = {P}\left(\cup_{i \in \Phi_{M}} \left\{\V{SIR}_i \geq \max(\rho_M,T)\right\}, \cap_{j \in \Phi_{\mu}} \{\V{SIR}_j < T\} \right). \end{split} \end{equation} In order to write the above, we have exploited the fact that $\max_{i \in \Phi_{M}} \V{SIR}_i \geq \rho_M$ is equivalent to $\cup_{i \in \Phi_{M}} \left\{\V{SIR}_i \geq \rho_M \right\}$, as in the previous case. Let $T_M = \max(\rho_M,T)$ and let $D_M$ be the event $\cup_{i \in \Phi_{M}} \{\V{SIR}_i \geq T_M\}$. Note that the event $\cap_{j \in \Phi_{\mu}} \{\V{SIR}_j < T\}$ is exactly $\bar{C}_{\mu}$. We consider the following relation among events and the corresponding probabilities: \begin{equation}\label{eq:setToEvent} \begin{split} & D_M \cup C_{\mu} = C_{\mu} \cup \left(D_M \cap \bar{C}_{\mu}\right) \Rightarrow \\ & \quad \quad {P} (D_M \cup C_{\mu}) = {P} \left(C_{\mu} \cup \left(D_M \cap \bar{C}_{\mu}\right)\right). \end{split} \end{equation} Since $C_{\mu}$ and $D_M \cap \bar{C}_{\mu}$ are disjoint events, we have: \begin{equation}\label{eq:eventProb} {P} (D_M \cup C_{\mu}) = {P}(C_{\mu}) + {P}(D_M \cap \bar{C}_{\mu}).
\end{equation} From (\ref{eq:macroRateCover}) and (\ref{eq:eventProb}), we have: \begin{equation}\label{eq:macroRateCoverFinal} \begin{split} & {P}(R \geq R_T, C_M) \\ & = {P}\left( D_M \cap \bar{C}_{\mu} \right) = {P} \left(D_M \cup C_{\mu}\right) - {P} (C_{\mu})\\ & = {P} \left( (\cup_{i \in \Phi_{M}} \{\V{SIR}_i \geq T_M\}) \cup (\cup_{j \in \Phi_{\mu}} \{\V{SIR}_j \geq T\}) \right) \\ & \quad \quad - {P} \left(\cup_{j \in \Phi_{\mu}} \{\V{SIR}_j \geq T\} \right) \\ & = 1 - \left[ 1 - {P}_{c,1}(T_M,T)\right]^K - 1 + \left[ 1- {P}_{c,1,\mu} (T) \right]^K \\ & = \left[1- {P}_{c,1,\mu}(T) \right]^K - \left[ 1 - {P}_{c,1}(T_M,T)\right]^K, \end{split} \end{equation} where ${P} \left( (\cup_{i \in \Phi_{M}} \{\V{SIR}_i \geq T_M\}) \cup (\cup_{j \in \Phi_{\mu}} \{\V{SIR}_j \geq T\}) \right)$ has been derived in Appendix A, referring only to the cells using $F_1$; then, we generalize to all frequency segments analogously to what is shown in (\ref{eq:coverageProb4}). Still in Appendix A, we show in (\ref{th:noiseIgnored}) that ${P}_{c,1}(T_M,T) = \frac{\pi}{C(\gamma)}\frac{\lambda_M P_M^{2/\gamma} T_M^{-2/\gamma} + \lambda_{\mu} P_{\mu}^{2/\gamma}T^{-2/\gamma}}{\sum_{i \in \{M,\mu\}} \lambda_i P_i^{2/\gamma}}$ is the coverage probability of the network (including both macro and micro tiers) when only the sub-set of cells using frequency $F_1$ is considered and the SIR threshold of the macro tier is $T_M = \max(\rho_M,T)$. \subsection{Combination of the Two Cases and Mean Bit-Rate} From (\ref{eq:rateCoverBayes}), (\ref{eq:microRateCoverFinal}), and (\ref{eq:macroRateCoverFinal}), we obtain the rate coverage probability as follows: \begin{equation}\label{eq:rateCoverFinal} \begin{split} {P}(R \geq R_T) = & 1 - \left[1- {P}_{c,1,\mu} (T_{\mu})\right]^K + \left[1- {P}_{c,1,\mu} (T) \right]^K \\ & - \left[ 1 - {P}_{c,1}(T_M,T)\right]^K. \end{split} \end{equation} Further elaborating (\ref{eq:rateCoverFinal}) by means of the Newton binomial formula, we obtain the expression in (\ref{eq:rateCoverFinal2}), shown at the top of the next page, where the only terms depending on $R_T$ are $T_\mu$ and $T_M$. This expression is useful in what follows for performing the integration over $R_T$. \begin{figure*} \begin{equation}\label{eq:rateCoverFinal2} \begin{split} {P}(R \geq R_T) & = \sum\limits_{k = 1}^K (-1)^{k+1} \binom{K}{k} \left\{ \left[ P_{c,1,\mu} \left( T_\mu \right) \right]^k - \left[ P_{c,1,\mu} \left( T \right) \right]^k + \left[ P_{c,1} \left( T_M, T \right) \right]^k \right\} \\ & = \sum\limits_{k = 1}^K (-1)^{k+1} \binom{K}{k} D^k(\gamma, T) \, \frac{\left( \frac{T_\mu}{T} \right)^{\frac{-2k}{\gamma}} + \sum\limits_{i = 1}^k \binom{k}{i} \left[ \left( \frac{\lambda_M}{\lambda_\mu} \right) \left( \frac{P_M}{P_\mu} \right)^{\frac{2}{\gamma}} \left( \frac{T_M}{T} \right)^{\frac{-2}{\gamma}} \right]^i}{\left[ 1 + \left( \frac{\lambda_M}{\lambda_\mu} \right) \left( \frac{P_M}{P_\mu} \right)^{\frac{2}{\gamma}} \right]^k}. \end{split} \end{equation} \end{figure*} We expect that ${P} (R \geq R_T)$ increases as $R_T$ decreases to 0.
When $R_T$ is sufficiently small that $\max(\rho_M,\rho_{\mu}) \leq T$, we have ${P}_{c,1,\mu} (T_{\mu}) \equiv {P}_{c,1,\mu} (T)$ and ${P}_{c,1}(T_M,T) \equiv {P}_{c,1}(T)$, and the rate coverage probability becomes: \begin{equation}\label{eq:rateSubCase} {P}(R \geq R_T) \equiv 1 - \left( 1 - {P}_{c,1} (T) \right)^K, \end{equation} which is exactly the coverage probability of the network, ${P}_c$. In this case, all UEs in the coverage area have a bit-rate satisfying the rate threshold. Thus, the maximum value of the rate coverage probability is equal to the coverage probability of the network. We can also consider the average UE bit-rate, which represents an important parameter characterizing the performance provided to a UE. We can compute it by using the distribution corresponding to the rate coverage probability as follows: \begin{equation}\label{average_UE_ratel} E\left[ R \right] = \int_0^{ + \infty } P \left\{ R > R_T \right\} \V{d}R_T. \end{equation} The mean UE bit-rate can be obtained by applying the above integral to the rate coverage probability expression in (\ref{eq:rateCoverFinal2}); then, by exploiting the linearity of the integral operator, we only need to integrate two types of terms, as detailed below: \begin{equation}\label{average_UE_ratel2} \begin{array}{l} \int_0^{ + \infty } T_\mu^{\frac{-2k}{\gamma}} \, \V{d}R_T = \int_0^{ + \infty } \left[ \max \left\{ \rho_\mu, T \right\} \right]^{\frac{-2k}{\gamma}} \V{d}R_T, \\[4pt] \int_0^{ + \infty } T_M^{\frac{-2k}{\gamma}} \, \V{d}R_T = \int_0^{ + \infty } \left[ \max \left\{ \rho_M, T \right\} \right]^{\frac{-2k}{\gamma}} \V{d}R_T. \end{array} \end{equation} \begin{figure*} [!h] \begin{equation}\label{average_UE_ratel3} \int_0^{ + \infty } \left[ \max \left\{ \rho, T \right\} \right]^{-b} \V{d}R_T = \int\limits_0^{\frac{1}{a}\log _2 \left( 1 + T \right)} \frac{1}{T^b} \, \V{d}R_T + \int\limits_{\frac{1}{a}\log _2 \left( 1 + T \right)}^{ + \infty } \frac{1}{\left( 2^{aR_T} - 1 \right)^b} \, \V{d}R_T = \frac{1}{aT^b}\log _2 \left( 1 + T \right) + \frac{B_{\frac{1}{1 + T}} \left( b,\;1 - b \right)}{a\ln \left( 2 \right)}. \end{equation} \end{figure*} These integrals can be expressed by means of the incomplete Beta function $B_x(\alpha,\beta)$ \cite{betafunction}, as shown in (\ref{average_UE_ratel3}) at the top of the next page, where $\rho = \rho_{\{\mu ~{\rm or}~M\}}$, $a = a_{\{\mu ~{\rm or}~M\}} = \frac{K A_{\left\{ \mu \;{\rm or}\;M \right\}} P_c \lambda_u }{W \lambda_{\left\{ \mu \;{\rm or}\;M \right\}}}$, and $b = 2k/\gamma$ ($k$ being a generic integer value from 1 to $K$). The minimum $R_T$ value (i.e., the value corresponding to the SIR threshold $T$) for the macro cell coverage is $\frac{1}{a_M}\log_2 \left( 1 + T \right)$, and the minimum $R_T$ value for the micro cell coverage is $\frac{1}{a_\mu}\log_2 \left( 1 + T \right)$. However, in equations (\ref{eq:rateCoverFinal}) and (\ref{eq:rateCoverFinal2}), $R_T$ can also be below these minimum bit-rate values (so that the integral in (\ref{average_UE_ratel}) starts from $R_T = 0$), because in these circumstances the rate coverage probability coincides with the coverage probability and its value is independent of $R_T$. Finally, numerical methods have to be used to compute (\ref{average_UE_ratel3}) and determine the mean UE bit-rate.
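As a practical illustration, the following minimal sketch (in Python, assuming \texttt{numpy} and \texttt{scipy} are available; the parameter values are those of the settings in Section \ref{sec:Simulation}) evaluates $E[R]$ by numerically integrating the rate coverage probability in (\ref{eq:rateCoverFinal}), as an alternative to the incomplete Beta function expression in (\ref{average_UE_ratel3}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Settings as in Section V: gamma = 4, T = 0 dB, W = 20 MHz, K = 3
gamma, T, K, W = 4.0, 1.0, 3, 20e6
lam_M, lam_mu, lam_u = 0.2, 0.8, 20.0        # densities per km^2
P_M, P_mu = 10**4.6, 10**3.0                 # 46 dBm and 30 dBm, in mW

C = 2 * np.pi**2 / (gamma * np.sin(2 * np.pi / gamma))
S = lam_M * P_M**(2/gamma) + lam_mu * P_mu**(2/gamma)

def Pc1(TM, Tmu):
    # Coverage probability restricted to cells on F1 (noise neglected)
    return (np.pi / C) * (lam_M * P_M**(2/gamma) * TM**(-2/gamma)
                          + lam_mu * P_mu**(2/gamma) * Tmu**(-2/gamma)) / S

def Pc1_mu(x):
    # Micro-tier coverage probability on F1 at SIR threshold x
    return np.pi * lam_mu * P_mu**(2/gamma) * x**(-2/gamma) / (C * S)

Pc = 1 - (1 - Pc1(T, T))**K                  # network coverage probability
A_mu = (1 - (1 - Pc1_mu(T))**K) / Pc         # micro-tier load
A_M = 1 - A_mu

def rate_coverage(RT):
    # P(R >= RT) under prioritized SIR-based association
    rho_mu = 2**(RT * K * A_mu * lam_u * Pc / (W * lam_mu)) - 1
    rho_M = 2**(RT * K * A_M * lam_u * Pc / (W * lam_M)) - 1
    T_mu, T_M = max(rho_mu, T), max(rho_M, T)
    return (1 - (1 - Pc1_mu(T_mu))**K
            + (1 - Pc1_mu(T))**K - (1 - Pc1(T_M, T))**K)

# E[R] = int_0^inf P(R > RT) dRT; 50 Mbps is large enough here
ER, _ = quad(rate_coverage, 0.0, 50e6, limit=200)
print(f"Pc = {Pc:.3f}, A_mu = {A_mu:.3f}, E[R] = {ER/1e6:.2f} Mbps")
\end{verbatim}
With these settings, the computed coverage probability is about 95\% for $K = 3$, consistently with the outage results reported in Section \ref{sec:ResultsPPP}.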
Note that the approach in (\ref{average_UE_ratel}) to determine the mean UE bit-rate is different from that adopted in \cite{AnalysisMaxSIR}: that work refers to a mean UE bit-rate conditioned on coverage, with a simple max-SINR cell association scheme and no frequency reuse, and it does not consider that many UEs share the cell capacity, as we do here by introducing the coefficients $a_{\mu}$ and $a_M$. In this analysis of outage, cell load, and rate coverage probabilities, the basic parameters are $\gamma$, $T$, $W$, and $K$. Instead, the other parameters ($\lambda_M$, $\lambda_\mu$, $P_M$, $P_\mu$, $\lambda_u$) influence the numerical results only via the following ratios: $\lambda_M / \lambda_\mu$, $P_M / P_\mu$, and $\lambda_u / \lambda_M$. Thus, in Section \ref{sec:ResultsPPP}, we present an optimization approach for both $K$ and $\lambda_{\mu}/\lambda_M$ (given the other parameters). Hence, using the obtained model and analysis, network operators can select $K$ and decide when it is convenient to increase the density of $\mu$-eNBs ($\lambda_{\mu}/\lambda_M$) in the system, so that the users' quality of experience is satisfied (see Figs. \ref{fig:selectedlamdda} and \ref{fig:selectedK} in Section \ref{sec:ResultsPPP}). Even if we consider a two-tier HetNet system, this work could be extended to more than two tiers. In particular, we could add a femto layer, where each cell selects a frequency segment at random as well. Then, we can apply our prioritized SIR-based cell association scheme, using the priority order femto $>$ micro $>$ macro for cell associations. The detailed scheme with more than two tiers is left to a future study. \vspace{10pt} \section{\uppercase{Simulation Approach and Settings}} \label{sec:Simulation} It is important to note that the simulation of a PPP-based cellular system is quite different from that of a regular hexagonal one, even though the main idea is the same. In the case of a regular hexagonal cellular network, a simulation is basically organized with a central macro cell and surrounding interfering cells, and performance results are extracted only from the central macro cell to avoid border effects. This is possible since all the cells have the same shape and size, so that the central cell can be taken as representative of the whole network. This method, however, is no longer applicable with stochastic geometry, where cells have different shapes, so that it is difficult to identify a central cell. In the case of stochastic geometry, we have to take a UE (not a cell) as a reference in the origin, around which all cells and all other UEs are scattered according to PPPs. Moreover, in the analysis, the area where the PPP is applied is an infinite plane, so that all interference sources are taken into account and border effects are not present. In the simulation, we implemented the PPPs on a very large area with the reference UE at the center, so that the interference from cells outside the area is negligible and border effects are eliminated. All macro and micro eNBs are placed on a plane with sizes $L \times L$, where $L$ is the length of the side of the plane (thus, the average number of macro cells is $N_c = \lambda_{M}L^2$, the average number of micro cells is $N_r = \lambda_{\mu}L^2$, and the average number of UEs is $N_u = \lambda_{u}L^2$). To generate points according to PPPs, we first determine the number of points in $L^2$ using Poisson random variables with mean values $N_c$, $N_r$, and $N_u$.
Then, conditioning on the number of points in the $L^2$ area of the PPPs of M-eNBs, \textmu-eNBs, and UEs, the position of each point is determined according to a uniform distribution in $L^2$. Each simulation is repeated $N = 10000$ times, regenerating at each run the positions of M-eNBs, \textmu-eNBs, and UEs according to the corresponding homogeneous PPPs. In each run, depending on the topology, the reference UE can associate with the macro or micro tier, or can be in the outage area. By repeating the simulation $N$ times and observing the results, we can estimate the probability that the UE is in the coverage area or in outage conditions, and the probability that the UE belongs to the macro tier or the micro tier (a minimal sketch of this procedure is given in Section \ref{sec:ResultsPPP}). The LTE-A HetNet scenario has been implemented in a Matlab simulator, using the Monte Carlo approach. In detail, we simulate a square area with side $L = 20$ km and an average number of 80 M-eNBs ($\lambda_{M} = \frac{80}{400} = 0.2$ $\V{M-eNBs/km}^2$). The density of UEs is $\lambda_u = 100 \lambda_{M}$. The transmission power of M- (\textmu-) eNBs is 46 dBm (30 dBm). The system bandwidth is $W$ = 20 MHz. Unless otherwise stated, the SIR threshold $T$ is set to $0$ dB, the micro cell density is $\lambda_{\mu} = 4\lambda_{M}$, the path loss exponent $\gamma$ is 4, and the rate threshold $R_T$ is set to 1 Mbps. \vspace{10pt} \section{\uppercase{Results}} \label{sec:ResultsPPP} In this Section, we verify our analysis via simulations. Moreover, we compare our prioritized SIR-based cell association scheme with other schemes in the literature, namely: max-\emph{Reference Signal Received Power} (RSRP)\footnote{In the LTE-A standard, RSRP is defined as the linear average over the power contributions of the \emph{Resource Elements} (REs) that carry cell-specific reference signals within the considered measurement frequency bandwidth \cite{TS36211}.} with spectrum sharing \cite{HetNEtFlexibleAssoc},\cite{OffloadinginHetNets}; max-biased RSRP with spectrum partitioning \cite{SpectrumAllocInfocom},\cite{AUtilityPerspective},\cite{RateCoverate},\cite{AnalysiswithCellUnderLoad}; and max-SIR with frequency reuse of $K$. Our aim is to show the advantages of our prioritized SIR-based scheme in terms of outage probability, load balancing, and rate coverage probability. We also validate the analysis of the mean UE bit-rate by means of simulations. Finally, we propose an optimization approach to select the $K$ and $\lambda_{\mu}/\lambda_M$ values that satisfy certain cell planning requirements. \subsection{Model and Analysis Validation} Let us focus on the first set of results (Figs. \ref{fig:OutagevsKGamma}, \ref{fig:LoadvsKGamma}, and \ref{fig:RatevsKGamma}), where we vary the frequency reuse factor $K$ from 1 to 8. Note that $K = 1$ means that every cell uses the entire spectrum, which is equivalent to the spectrum sharing scheme considered in \cite{AnalysisMaxSIR}. The path loss exponent can assume different values, such as 3.5, 4, or 5, which are compatible with an urban environment. We can see from Fig. \ref{fig:OutagevsKGamma} that the outage probability rapidly decreases with $K$. In particular, the outage probability for $\gamma = 4$ decreases from 36\% when $K = 1$ to 13\% when $K = 2$ and to 5\% when $K = 3$; with $K \geq 5$, there are almost no UEs in outage conditions. Fig. \ref{fig:LoadvsKGamma} shows that the average load on the micro tier $A_{\mu}$ increases from 38\% to nearly 60\% when $K$ increases from 1 to 3.
The reason is that the micro tier can provide a better SIR to UEs with a larger $K$ value, so that more UEs associate with the micro tier by means of the prioritized SIR-based cell association scheme. The rate coverage probability is shown in Fig. \ref{fig:RatevsKGamma}. It can be observed that $K=2$ gives the best results in this case ($\lambda_{\mu} = 4 \lambda_M$) for all the $\gamma$ values. Beyond this $K$ value, the rate coverage probability decreases, since the bandwidth share of each UE is smaller. The optimal value of $K$ changes when the ratio between the micro and macro cell densities changes, as discussed later in this Section. When the path loss exponent is higher, we achieve better performance. The reason is that, in this dense HetNet scenario, a high path loss exponent makes each cell more isolated from the rest of the network, thus reducing the interference among cells. As a final remark, we can see that the analysis results are very close to the simulation ones, thus validating our theoretical approach. \begin{figure} \centering \epsfxsize=8.5cm \epsfbox{Images/OutageChangeKandGamma.pdf} \caption{Outage probability as function of frequency reuse factor $K$.}\label{fig:OutagevsKGamma} \end{figure} \begin{figure} \centering \epsfxsize=8.2cm \epsfbox{Images/AverageLoadChangeKandGamma} \caption{Average load on the micro tier as function of frequency reuse factor $K$.}\label{fig:LoadvsKGamma} \end{figure} \begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/RateCoverageChangeKandGamma.pdf} \caption{Rate coverage probability as function of frequency reuse factor $K$.}\label{fig:RatevsKGamma} \end{figure} Figures \ref{fig:OutagevsSIR}, \ref{fig:LoadvsSIR}, and \ref{fig:RatevsSIR} show the outage probability, average load, and rate coverage probability for SIR threshold $T$ (the same for both the macro and micro tiers) ranging from $-4$ dB to $20$ dB, with different values of $K$. We can see from Fig. \ref{fig:OutagevsSIR} that the outage probability decreases with $K$. When $K = 1$, analytical results closely follow simulation ones for $T \geq -2$ dB; if $T < -2$ dB, however, the analysis provides a lower outage probability as compared with simulation results, because of the approximation of disjoint cells in the coverage analysis, as explained in Appendix A [see formula (\ref{th:coverageProb})]. Instead, the analysis provides results very close to simulations for $K > 1$, even if $T < 0$ dB, because the approximation improves with $K$, as stated in Appendix A. The SIR threshold $T$ also has an impact on both the average load on the micro tier, $A_{\mu}$, and the rate coverage probability, ${P} (R \geq R_T)$, as shown in Figs. \ref{fig:LoadvsSIR} and \ref{fig:RatevsSIR}. In particular, with lower $T$ values, there are more UEs under the coverage area, which leads to more UEs associated with the micro tier, as a result of our prioritized SIR-based cell association scheme. Thus, the SIR threshold $T$ can be used as a parameter to control the average load on each tier. Fig. \ref{fig:LoadvsSIR} also shows that the average load from the analysis does not depend on $T$ when $K = 1$: as we can see from (\ref{eq:microLoad}), for $K = 1$, $A_{\mu}$ is independent of $T$. The rate coverage probability has a maximum at $T = 0$ dB for $K = 2$, as shown in Fig. \ref{fig:RatevsSIR}.
The reason is that, when $T$ increases, there are more UEs in outage conditions, thus leading to more resources for the UEs that are still under the coverage area; however, the number of UEs in the coverage area decreases, so that the rate coverage probability decreases as well. \begin{figure} \centering \epsfxsize=8.5cm \epsfbox{Images/OutageChangeSINRandK.pdf} \caption{Outage probability as function of SIR threshold $T$ in dB.}\label{fig:OutagevsSIR} \end{figure} \begin{figure} \centering \epsfxsize=8.5cm \epsfbox{Images/AverageLoadChangeSINRandK.pdf} \caption{Average load on the micro tier as function of SIR threshold $T$ in dB.}\label{fig:LoadvsSIR} \end{figure} \begin{figure} \centering \epsfxsize=9.2cm \epsfbox{Images/RateCoverageChangeSINRandK.pdf} \caption{Rate coverage probability as function of SIR threshold $T$ in dB.}\label{fig:RatevsSIR} \end{figure} Figure \ref{fig:RatevsRatio} shows that different $K$ values maximize the rate coverage probability for different $\lambda_{\mu}/\lambda_M$ values ($T$ = 0 dB). In fact, when $\lambda_{\mu} = \lambda_M$, we should use $K = 1$. On the other hand, when $\lambda_{\mu} = 4\lambda_M$, it is better to use $K = 2$; instead, $K = 3$ should be adopted when $\lambda_{\mu} = 8\lambda_M$ or higher. This result can be justified because the more micro cells there are, the higher the interference in the network, so that larger $K$ values should be adopted. \begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/RateCoverageChangeKandChangeLambda.pdf} \caption{Rate coverage probability with different micro density-to-macro density ratios $\lambda_{\mu}/\lambda_M$.}\label{fig:RatevsRatio} \end{figure} In Fig. \ref{fig:RatevsThreshold}, we show the rate coverage probability as a function of the rate threshold $R_T$, ranging from 200 kbps to 2 Mbps. We can observe that higher $\lambda_{\mu}/\lambda_M$ values provide better performance for the same $K$. We note that $K = 3$ provides a higher rate coverage probability than $K = 2$ for small rate thresholds $R_T$. In fact, $K = 3$ can better control the interference in the network, thus improving the bit-rate of edge UEs. Moreover, with $K = 3$, more UEs are served by the micro tier, as shown in Fig. \ref{fig:LoadvsKGamma}, so that traffic is more balanced than with $K = 2$. Instead, $K = 2$ shows better results than $K = 3$ for high rate threshold values $R_T$, since center UEs (experiencing low interference) can benefit from more bandwidth with a smaller $K$. Finally, we can note that there is a very close agreement between analysis and simulations in all the cases. \begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/RateCoverageChangeThresholdandLambda.pdf} \caption{Rate coverage probability as function of rate threshold $R_T$.}\label{fig:RatevsThreshold} \end{figure} In Fig. \ref{fig:meanUErate}, we show the mean UE bit-rate as a function of the frequency reuse factor $K$ for different densities of the micro cells (given the density of macro cells), represented by $\lambda_{\mu}/\lambda_M$ equal to 4, 8, and 12. There is a very good agreement between the simulations and the analytical results obtained according to (\ref{average_UE_ratel})-(\ref{average_UE_ratel3}). As expected, the mean UE bit-rate decreases with $K$ and increases with $\lambda_{\mu}/\lambda_M$.
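To make the validation procedure of Section \ref{sec:Simulation} concrete, we also report a minimal Monte Carlo sketch (in Python with \texttt{numpy}, rather than the Matlab simulator actually used; the names and the reduced number of runs are our own simplifications): each run generates one PPP topology, computes the per-cell SIR as in (\ref{eq:SINRmacro}), and applies the prioritized SIR-based association.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, K, T, gamma = 20.0, 3, 1.0, 4.0        # km, reuse factor, T = 0 dB
lam = {'macro': 0.2, 'micro': 0.8}        # eNB densities per km^2
P = {'macro': 10**4.6, 'micro': 10**3.0}  # 46 dBm and 30 dBm, in mW

def drop_ppp(density):
    # One PPP realization on an L x L square centred on the reference UE
    n = rng.poisson(density * L**2)
    return rng.uniform(-L/2, L/2, size=(n, 2))

def run_once():
    rx = {}
    for tier in ('macro', 'micro'):
        pos = drop_ppp(lam[tier])
        d = np.linalg.norm(pos, axis=1)
        h = rng.exponential(1.0, size=len(d))      # Rayleigh fading power
        f = rng.integers(K, size=len(d))           # random segment per cell
        rx[tier] = (P[tier] * h * d**(-gamma), f)  # received powers
    best = {'macro': 0.0, 'micro': 0.0}
    for k in range(K):
        tot = sum(pw[fl == k].sum() for pw, fl in rx.values())
        for tier, (pw, fl) in rx.items():
            for p_i in pw[fl == k]:                # SIR of each cell on F_k
                best[tier] = max(best[tier], p_i / max(tot - p_i, 1e-12))
    if best['micro'] > T: return 'micro'           # prioritized association
    if best['macro'] > T: return 'macro'
    return 'outage'

runs = [run_once() for _ in range(2000)]           # N = 10000 in the paper
print({s: runs.count(s) / len(runs) for s in ('micro', 'macro', 'outage')})
\end{verbatim}
For $K = 3$ and $\gamma = 4$, the estimated outage fraction should be close to the 5\% analytical value discussed above.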
\begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/MeanBitRate.pdf} \caption{Mean UE bit-rate as a function of the frequency reuse factor $K$.}\label{fig:meanUErate} \end{figure} \subsection{Comparisons with Other Schemes} In this sub-Section, we compare our prioritized SIR-based cell association with other schemes in the literature. In particular, the following schemes are considered for comparison:\\ \begin{itemize}[leftmargin=.2in] \item Max-RSRP cell association with spectrum sharing \cite{HetNEtFlexibleAssoc},\cite{OffloadinginHetNets}: in this scheme, all macro and micro cells share the same frequency band ($K = 1$). A generic UE associates with the cell providing the highest RSRP value. \vspace{0.1cm} \item Max-RSRP cell association with $K = 3$: this scheme uses the same frequency reuse technique as in our study, but with an RSRP-based cell association criterion. \vspace{0.1cm}\item Max-RSRP cell association with \emph{Resource Partitioning 1} (RP1) \cite{SpectrumAllocInfocom},\cite{AUtilityPerspective}: in this scheme, the frequency band is divided into 2 parts, $F_1$ and $F_2$. All macro cells use $F_1$, while all micro cells use $F_2$. Thus, the two tiers do not interfere with each other. \vspace{0.1cm}\item Max-biased RSRP cell association with \emph{Resource Partitioning 2} (RP2): this scheme is used in \cite{AnalysiswithCellUnderLoad} and \cite{RateCoverate}. Radio resources are allocated differently from the previous RP1 scheme, as described below, considering three groups of UEs: macro UEs, micro UEs, and biased UEs. In particular, macro UEs experience higher received power (i.e., RSRP) from the macro tier even when bias is applied to the micro tier. Micro UEs experience higher received power from the micro tier even when bias is not used at the micro tier. Finally, biased UEs experience the best received power from the macro tier when bias is not applied, but obtain better received power from the micro tier when bias is applied, so that these UEs are forced to associate with the micro tier. Hence, the difference between max-RSRP and max-biased RSRP lies in the association of the biased UEs (associated with the macro tier under max-RSRP and with the micro tier under max-biased RSRP). Frequency band $F_1$ is used for both macro and micro UEs, while $F_2$ is dedicated only to biased UEs, to avoid strong interference from the macro tier. Thus, the macro tier uses only $F_1$, while the micro tier uses both $F_1$ and $F_2$ for separate groups of UEs. Note that, in this case, macro UEs and micro UEs suffer from both cross-tier and co-tier interference, while biased UEs only suffer from co-tier interference. \vspace{0.1cm}\item Max-SIR cell association: a scheme where a UE always associates with the cell providing the highest SIR (no matter whether it is a macro or a micro cell); it has been studied in \cite{AnalysisMaxSIR},\cite{CoverageandRateInKTier},\cite{AnalysisMaxSINRConnectivity} with a frequency reuse factor of 1. We adopt here max-SIR cell association with $K=1$ to represent the scheme in \cite{AnalysisMaxSIR}, where no frequency reuse is adopted, and max-SIR with $K=2$ and $K=3$ to improve the interference conditions.\\ \end{itemize} For the sake of fair comparisons, we use the same numerical settings as in \cite{RateCoverate}, where $\lambda_M = 1$ $\V{M-eNBs/km}^2$, $P_{\mu} = 26$ dBm, and $\lambda_{\mu} = 5 \lambda_M$. The other parameters are the same as described in Section \ref{sec:Simulation}.
With this configuration, according to \cite{RateCoverate}, the optimal bias value (RP2 case) of the micro tier to maximize the rate coverage probability is 15 dB, and the fraction of bandwidth for biased UEs ($F_2$) is 0.47. Figure \ref{fig:OutageCompare} shows the outage probability of our cell association scheme [denoted as ``prioritized SIR(-based)''] and of the other schemes in the literature, obtained from simulations. We can see that our cell association scheme with $K = 3$ achieves a much lower outage probability than the other RSRP-based schemes. This is because, in our scheme, UEs associate with cells providing SIR greater than the minimum SIR threshold $T$. Max-RSRP with $K=1$ (spectrum sharing) is characterized by the highest outage probability because of the high interference: all cells use the same frequency band. The max-biased RSRP cell association scheme with RP2 is also quite poor at managing interference, because both micro and macro UEs suffer from cross-tier and co-tier interference; using a larger bias value can also cause a higher outage probability, because of the co-tier interference experienced by biased UEs. With the max-RSRP and RP1 scheme, macro and micro tiers use different frequency segments and UEs associate with the cells providing the best received power, thus achieving a lower outage probability, since cross-tier interference is eliminated. Finally, the max-SIR scheme achieves the same outage probability as our scheme for $K = 3$, \myhl{because the prioritized SIR-based scheme does not change the coverage condition of max-SIR.} On the other hand, max-SIR has a much worse outage probability for $K = 1$; this is a further proof that the reuse of resources is a very important strategy for reducing the outage probability. \begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/OutageCompareChangeSINRandKRevise.pdf} \caption{Outage probability comparisons.}\label{fig:OutageCompare} \end{figure} The rate coverage probabilities of the different schemes are shown in Fig. \ref{fig:RateCompare}\footnote{\myhl{Note that the study in \cite{RateCoverate} for max-biased RSRP considers that UEs in outage can still have some throughput. This is different from our study, where UEs in outage have no throughput, so that the rate coverage probability cannot be higher than the cell coverage probability ${P}_c$.}}. We can see that our scheme outperforms max-RSRP with $K = 3$. This occurs because max-RSRP causes most of the UEs to associate with the macro tier, so that macro UEs suffer from a lack of resources (they cannot attain the required bit-rate). Instead, other schemes (such as max-biased RSRP and our prioritized SIR-based scheme) include load balancing mechanisms, so that they can achieve better performance. Our scheme with $K = 3$ outperforms the max-biased RSRP RP2 scheme when $R_T < 1$ Mbps, because our scheme provides a better SIR (see Fig. \ref{fig:OutageCompare}) and \myhl{better exploits the capacity of the micro cells (higher $A_{\mu}$ value)}, so that UEs in edge areas can satisfy the rate requirement. For $R_T \geq 1$ Mbps, the prioritized SIR-based scheme with $K = 3$ has a lower rate coverage probability, because of the smaller amount of bandwidth available for each UE\footnote{With $K$ = 3, we have a better SIR for UEs, thus the number of UEs in the coverage area is large. However, since each cell uses only 1/3 of the bandwidth (frequency reuse of 3), the bandwidth (and then the bit-rate) for each UE of the cell is small.
Thus, if $R_T$ is too large (the threshold being about 1 Mbps here), many UEs cannot satisfy the rate requirement, so that the rate coverage probability is low.}. It would then be better to use $K = 2$ with our scheme for $R_T \geq 1$ Mbps, in order to outperform max-biased RSRP. Moreover, our prioritized SIR-based scheme also achieves a better rate coverage probability than the max-SIR scheme for a wide range of $R_T$ values for both $K = 2$ and $K = 3$; in the $K = 1$ case, this is true only for small-to-medium $R_T$ values. These results of the comparison between our scheme and max-SIR, depending on $K$, can be justified because more UEs associate with the micro tier (denoted below as `micro UEs') with the prioritized SIR-based scheme than with max-SIR, thus reserving the radio resources of the macro tier for those UEs that can only associate with macro cells. On the other hand, max-SIR is better than prioritized SIR for the same $K$ value when $R_T$ is very high. \myhl{This occurs because micro UEs have higher bit-rates with max-SIR than with the prioritized SIR-based scheme. In fact, prioritized SIR causes more UEs to be associated with micro cells than max-SIR does. Thus, each micro UE can only have a smaller amount of bandwidth and its bit-rate is lower. Hence, only max-SIR allows the rate coverage probability requirement to be satisfied with high $R_T$ values.} \begin{figure} \centering \epsfxsize=7.5cm \epsfbox{Images/RateCoverageCompareChangeThresholRevise.pdf} \caption{Rate coverage probability comparisons.}\label{fig:RateCompare} \end{figure} \subsection{Optimization Approach for Cell Planning} Let us study how cellular network planning could be carried out for our prioritized SIR-based cell association scheme. We propose an optimization approach based on the analysis validated in the previous sub-Sections. We consider that the SIR threshold $T$ is determined by the technology of the air interface; $\lambda_u/\lambda_M$ is given and is related to the cost of macro installations with respect to the expected revenue per user; $P_{\mu}/P_M$ is set depending on the air interface technology; $R_T$ is also given and pertains to the quality of experience. Then, the optimization problem has to determine $K$ and $\lambda_{\mu}/\lambda_M$ according to a criterion that reduces the costs related to the installation of $\mu$-eNBs, under certain requirements on outage probability and rate coverage probability, as follows: \begin{equation}\label{def_optim} \begin{array}{l} \mathop {\min }\limits_{K,\frac{{\lambda _\mu }}{{\lambda _M }}} \frac{{\lambda _\mu }}{{\lambda _M }} \\ s.t. \\ O < 10\% \\ P\left\{ {R > R_{T} } \right\} \ge 50\%, \\ \end{array} \end{equation} \noindent where $T$, $\lambda_u / \lambda_M$, $P_M/P_\mu$, $\gamma$, $W$, and $R_{T}$ are given.\\ In this study, $P\left\{ {R > R_{T} } \right\}$ is given by (\ref{eq:rateCoverFinal}). We assume that the above rate coverage constraint is feasible and defines a curve $\Gamma : P\left\{ {R > R_{T} } \right\} - 0.5 = 0$ in the $K$ - $\lambda_{\mu}/\lambda_M$ plane. This is verified for a certain range of $R_T$ values; if higher $R_T$ values have to be guaranteed, a larger $W$ and/or a lower $\lambda_u/\lambda_M$ are needed. If the planning has to guarantee better performance, it is possible to increase the $R_T$ value and/or to consider a value larger than 50\% for the rate coverage probability constraint.
The outage probability constraint $O < 10\%$ can be further elaborated according to (\ref{eq:coverageProbFinal}) as a function of $P_{c,1}(T)$ given in (\ref{eq:theorem1}). Since $O = \left[ {1 - P_{c,1} \left( T \right)} \right]^K \approx \left[ {1 - D\left( {\gamma ,T} \right)} \right]^K$ and $\log _{10} \left( {1 - D\left( {\gamma ,T} \right)} \right) < 0$, the constraint is equivalent to: \begin{equation} K > \frac{{ - 1}}{{\log _{10} \left( {1 - D\left( {\gamma ,T} \right)} \right)}}. \end{equation} Let us denote $d \triangleq \left\lfloor { - \left[ {\log _{10} \left( {1 - D\left( {\gamma ,T} \right)} \right)} \right]^{ - 1} } \right\rfloor$; then, the set of $K$ values fulfilling the above outage constraint is \{$d$+1, $d$+2, ...\}. Other optimization approaches, based on the maximization of the mean UE bit-rate, could be considered. However, the mean UE bit-rate increases with $\lambda_{\mu}/\lambda_M$, so that an upper limit due to cost constraints should be adopted for the $\lambda_{\mu}/\lambda_M$ values, and in most cases the optimization would simply select the maximum allowed $\lambda_{\mu}/\lambda_M$ value. This is the reason why these approaches are less interesting. As an alternative, we could substitute the rate coverage probability constraint with a constraint on the mean UE bit-rate; in this way we would obtain a problem that can be solved with the same approach shown below, but with more complexity due to the integration needed to obtain the mean UE bit-rate. This is the reason why we do not deal with this case here. \begin{figure*}[!htbp] \begin{equation}\label{punto_selezione} \left\{ \begin{array}{l} \frac{\partial }{{\partial K}}P\left( {R > R_T } \right) = 0 \\ P\left( {R > R_T } \right) - 0.5 = 0 \\ \end{array} \right.\quad \Leftrightarrow \quad \left\{ \begin{array}{l} \frac{\partial }{{\partial K}}\left\{ {\left[ {1 - P_{c,1,\mu } \left( {T_\mu } \right)} \right]^K - \left[ {1 - P_{c,1,\mu } \left( T \right)} \right]^K + \left[ {1 - P_{c,1} \left( {T_M ,T} \right)} \right]^K } \right\} = 0 \\ 0.5 - \left[ {1 - P_{c,1,\mu } \left( {T_\mu } \right)} \right]^K + \left[ {1 - P_{c,1,\mu } \left( T \right)} \right]^K - \left[ {1 - P_{c,1} \left( {T_M ,T} \right)} \right]^K = 0 \\ \end{array} \right. \end{equation} \end{figure*} Since $K$ is an integer, the optimization in (\ref{def_optim}) is a mixed-integer programming problem that is basically non-convex. The classical approach for solving this problem requires finding the optimal solution considering $K$ as a continuous variable (relaxation) and then applying the Branch and Bound method. This optimization has been solved by means of a heuristic approach that can be explained using the graphical example in Fig. \ref{fig:ratecoverage}, showing level curves of the rate coverage probability in the $K$ - $\lambda_{\mu}/\lambda_M$ plane for $R_T$ = 1 Mbps and $P_\mu$ = 30 dBm. In this figure, we have to select the point in the plane with the lowest $\lambda_{\mu}/\lambda_M$ value that fulfills both the outage constraint (i.e., $K \ge 3$, $d$ = 2) and the rate coverage probability constraint. Because of the behavior of the rate coverage probability, this point can be found on the level curve $\Gamma$ (i.e., rate coverage probability equal to 0.5) as the point with the lowest $\lambda_{\mu}/\lambda_M$ value for integer $K \ge 3$. Then, in our example, the optimum point is achieved for $K$ = 3 and $\lambda_{\mu}/\lambda_M \approx 5$. This graphical solution suggests the following method to solve our problem.
We consider that the relaxed constraints (corresponding to curves $\Gamma$ and $K > d$) define a \emph{feasibility area} in the $K$ - $\lambda_{\mu}/\lambda_M$ plane where we have to find the point with minimum $\lambda_{\mu}/\lambda_M$. Then, as shown in the example in Fig. \ref{fig:ratecoverage}, the solution point is on the border of the area on the $\Gamma$ curve; this is an implicit function of $K$ for which we can find the minimum by means of the null derivative condition in (\ref{punto_selezione}) at the top of this page \cite{Dini}, where the unknown variables are $K$ and $\lambda_{\mu}/\lambda_M$ and the other parameters are given. Because of the complexity in formally expressing the solution of (\ref{punto_selezione}), this system has been solved numerically. We have two cases:\\ \begin{itemize}[leftmargin=.2in] \item If (\ref{punto_selezione}) has a solution $K^*$, then we consider the two closest integer values of $K$, denoted as $\left\lfloor {K^* } \right\rfloor$ and $\left\lfloor {K^*} \right\rfloor +1$, and we choose among them as follows: $$ \left\{ \begin{array}{l} {\rm if}\quad d + 1 \le \left\lfloor {K^* } \right\rfloor \quad \Rightarrow {\rm we ~select}~K = \left\lfloor {K^* } \right\rfloor {\rm } \\ \quad \;\;{\rm or}~K = \left\lfloor {K^* } \right\rfloor + 1~ {\rm depending~ on ~whichever ~of ~the } \\ \quad \;\;{\rm two ~allows ~the ~lower} ~\frac{{\lambda _\mu }}{{\lambda _M }}{\rm ~value ~on} ~\Gamma; \\ {\rm if}\quad \left\lfloor {K^* } \right\rfloor + 1 \le d + 1\quad \Rightarrow {\rm we ~select} ~K = d + 1{\rm ~and } \\ \quad \;\;{\rm the ~corresponding} ~\frac{{\lambda _\mu }}{{\lambda _M }}{\rm ~value ~on} ~\Gamma. \\ \end{array} \right. $$ \item If (\ref{punto_selezione}) has no solution, we just select $K$ = $d$ + 1 and the corresponding $\lambda_{\mu}/\lambda_M$ value on the $\Gamma$ curve.\\ \end{itemize} \begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/rate_coverage_3.pdf} \caption{Contour plot of the rate coverage probability as a function of $K$ and $\lambda_{\mu}/\lambda_M$.} \label{fig:ratecoverage} \end{figure} By applying our optimization approach, Figs. \ref{fig:selectedlamdda} and \ref{fig:selectedK} show the selected $K$ and $\lambda_{\mu}/\lambda_M$ values for different $P_\mu$ values (and $P_M$ = 46 dBm). We can see that the optimized values of $\lambda_{\mu}/\lambda_M$ decrease with $P_\mu$, because increasing the transmission power of the $\mu$-eNBs enlarges the micro cell service area; a similar effect on the optimized values of $\lambda_{\mu}/\lambda_M$ can be obtained by reducing $R_T$. Moreover, the selected $K$ values decrease with $P_\mu$, because the rate coverage constraint is satisfied with lower $K$ values if $P_\mu$ increases; starting from a certain $P_\mu$ value, we can use the lowest (integer) $K$ value fulfilling the outage probability constraint, that is, $d+1$ in our notation. By means of (\ref{eq:rateCoverFinal2}) we can also compute the mean UE bit-rate corresponding to the different optimized configurations depending on the $\mu$-eNB transmission power: the mean UE bit-rate decreases with $P_\mu$ (or, equivalently, increases with $\lambda_{\mu}/\lambda_M$) and increases with $R_T$. Hence, a good planning approach based on Figs. \ref{fig:selectedlamdda} and \ref{fig:selectedK} should select a $P_\mu$ value as low as possible, while taking into account the cost of a higher $\mu$-eNB density.
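As a quick numerical sanity check of the outage constraint, the short Python sketch below reproduces the value $d = 2$ (hence $K \ge 3$) used in the example above; the values $\gamma = 4$ and $T = 1$ (0 dB) are our own illustrative assumptions, chosen to be consistent with that example.
\begin{verbatim}
import math

gamma, T = 4.0, 1.0   # assumed path-loss exponent and SIR threshold (0 dB)
C = 2 * math.pi**2 / (gamma * math.sin(2 * math.pi / gamma))  # C(gamma)
D = math.pi / (C * T**(2 / gamma))       # D(gamma, T), see Appendix A
d = math.floor(-1 / math.log10(1 - D))   # floor of -[log10(1 - D)]^(-1)
print(D, d, d + 1)                       # D ~ 0.637, d = 2, smallest K = 3
\end{verbatim}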
\begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/rate_coverage_1.pdf} \caption{Optimized $\lambda_{\mu}/\lambda_M$ as a function of $P_\mu$ for different $R_T$ values.}\label{fig:selectedlamdda} \end{figure} \begin{figure} \centering \epsfxsize=8cm \epsfbox{Images/rate_coverage_2.pdf} \caption{Optimized $K$ as a function of $P_\mu$ for different $R_T$ values.}\label{fig:selectedK} \end{figure} \vspace{10pt} \section{\uppercase{Conclusions}} \label{sec:Conclusions} Using small cells in combination with macro ones is a key approach towards 5G cellular systems. In this paper, we have presented an analytical framework to characterize the performance of HetNets with stochastic geometry, frequency reuse of $K$, and a prioritized SIR-based cell association scheme. Analytical results have been achieved that allow us to characterize the outage probability, average load, rate coverage probability, and mean UE bit-rate. Simulation and analytical results show that our prioritized SIR-based cell association scheme can obtain low outage probability values, while providing a better rate coverage probability than other key schemes in the literature. The study carried out in this paper provides a closed-form analytical method that may help network operators when planning future cellular systems. Further work will be needed to provide an analytical framework for other frequency reuse schemes, such as Soft Frequency Reuse. Finally, more than two tiers (e.g., femtocells) will be included in a future study. \vspace{10pt} \section*{\uppercase{Appendix A}} \label{appendixA} \subsection*{Proof of Theorem \ref{theremOutage}} We provide the proof of Theorem \ref{theremOutage}. Out of all the cells of the cellular system, we consider here only those cells of the different tiers where $F_1$ is used (considering any other frequency segment would be equivalent). Let $\Phi_{M,1}$, $\Phi_{\mu,1}$ be the subsets of M-eNBs and \textmu-eNBs that use frequency $F_1$. ${P}_{c,1}(T_M, T_{\mu})$ denotes the coverage probability for the reference UE at the origin, referring only to those cells using $F_1$ and adopting threshold $T_M$ for macro and threshold $T_{\mu}$ for micro cells. The following derivations are carried out in the most general case with background noise of power $\sigma^2$, so that we refer here to SINR rather than to SIR. Note that the derivations in this Appendix are obtained by modifying the approach shown in \cite{AnalysisMaxSIR} to take the frequency reuse with factor $K$ into account. In particular, we can express ${P}_{c,1}(T_M, T_{\mu})$ as follows: \begin{equation}\label{th:coverageProb} \begin{split} {P}_{c,1}(T_M, T_{\mu}) & = {P} \left( \bigcup_{i \in \Phi_{M,1} \cup \Phi_{\mu,1}} \{\V{SINR}_i \geq T_i \} \right) \\ & = {P} \left( \bigcup_{i \in \Phi_{M,1} \cup \Phi_{\mu,1}} B_i\right), \end{split} \end{equation} where $B_{i}$ is the event $\{\V{SINR}_i \geq T_i\}$ and where $T_i$ has one of the two values $T_M$ or $T_{\mu}$, depending on whether subscript $i$ corresponds to an M-eNB or to a \textmu-eNB. We apply Lemma 1 of \cite{AnalysisMaxSIR} to solve (\ref{th:coverageProb}). In particular, when $T_i \geq 1$ (0 dB) and $K$ = 1, Lemma 1 states that ${P}(B_i\cap B_j) = 0$ for $i \neq j$, i.e., events $B_i$ and $B_j$ cannot happen at the same time (the coverage areas of different cells are disjoint). We can explain this fact as follows.
Let us consider a simple scenario where there are three signals from three eNBs using frequency $F_1$ that are received at the reference UE with power levels $b_1$, $b_2$, and $b_3$ ($b_1, b_2, b_3 > 0$). We assume that the SIR threshold $T$ is 1 (0 dB). Then, if $\V{SIR}_1$ = $\frac{b_1}{b_2 + b_3} > 1$ from eNB \#1, we have $b_1 > b_2 + b_3$, which leads to $\V{SIR}_2 = \frac{b_2}{b_1 + b_3} < 1$ from eNB \#2 and $\V{SIR}_3 = \frac{b_3}{b_1 + b_2} < 1$ from eNB \#3. Thus, the reference UE can experience a SIR greater than $T = 1$ from at most one eNB, that is eNB \#1 in this example. This result can be extended to any $T \geq 1$. Using the same reasoning, we can also see that the coverage areas are not disjoint in general when $T < 1$ (0 dB). In order to solve (\ref{th:coverageProb}) and to simplify the analysis, we consider that cell coverage areas are always disjoint, even if this is true only as a first approximation when $T < 1$ (0 dB). Note that the power $K$ in formula (\ref{eq:coverageProbFinal}) reduces the effect of this approximation, so that the differences between analysis and simulations are very small when $K > 1$ (see Fig. \ref{fig:OutagevsSIR}). With the $B_i$ being disjoint events, the probability of the union of events in (\ref{th:coverageProb}) is equal to the sum of the individual probabilities as follows: \begin{equation}\label{th:coverageProbReduced} \begin{split} {P}_{c,1}(T_M, T_{\mu}) & = {P} \left( \bigcup_{i \in \Phi_{M,1} \cup \Phi_{\mu,1}} B_i\right) \approx {E} \left[ \sum_{i \in \Phi_{M,1} \cup \Phi_{\mu,1}} {P}(B_i) \right] \\ & = \sum_{j = \{M,\mu\}} {E} \left[\sum_{i \in \Phi_{j,1}} {P}(B_i) \right], \end{split} \end{equation} where the operator $E[\cdot]$ is taken over the $\Phi$ processes. Note that this approximation provides an upper bound on the coverage probability. Let us consider a stationary point process $\Phi$ with constant density $\lambda$ and a function $f: {\mathbb{R}}^2 \rightarrow {\mathbb{R}}$ that is applied to the generic point $x \in \Phi$ in $ {\mathbb{R}}^2$. Then, applying the Campbell-Mecke Theorem \cite{StochasticGeoandApplications}, we have: \begin{equation}\label{eq:campbellMecke} \mathbf{E} \left[ \sum_{x \in \Phi} f(x)\right] = \lambda \int_{ {\mathbb{R}}^2} f(x) \V{d}x. \end{equation} Then, in our case, $x$ is the generic position of the macro or micro eNBs that use $F_1$ according to sub-process $\Phi_{j,1}$. Moreover, $\lambda$ is the density of the macro or micro cells of the sub-process $\Phi_{j,1}$, with $\lambda$ equal to $\frac{\lambda_j}{K}$, since the process with density $\lambda_j$ is split into $K$ parts because of the division of the spectrum into $K$ segments \cite{StochasticSurvey},\cite{StochasticGeometryTheory}. $f(x)$ corresponds to the probability ${P}(B_j) = {P}\left(\V{SINR}_j \geq T_j\right)$. Thus, we obtain: \begin{equation}\label{th:campbell} \begin{split} & {P}_{c,1}(T_M, T_{\mu}) \approx \sum_{j = \{M, \mu\}} \frac{\lambda_j}{K} \int_{ {\mathbb{R}}^2} {P} \left( \V{SINR}_j \geq T_j \right) \V{d}r_j \\ &= \sum_{j = \{M, \mu\}} \frac{\lambda_j}{K} \int_{ {\mathbb{R}}^2} {P} \left( \frac{P_j h_j \parallel r_j\parallel^{-\gamma}}{I_j + \sigma^2} \geq T_j \right) \V{d}r_j, \end{split} \end{equation} where $\parallel r_j \parallel$ denotes the distance between the reference UE and an eNB in tier $j$ (points $r_j$ correspond to points $x$ according to the previous notation), $I_j$ denotes the interference power received from the other eNBs using $F_1$, and $\sigma^2$ denotes the noise power.
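The disjointness claim for $T \geq 1$ used above is also easy to verify numerically. In the following minimal Monte Carlo sketch, the number of eNBs and the exponentially distributed power levels are arbitrary illustrative choices; the assertion never fails, in agreement with Lemma 1 of \cite{AnalysisMaxSIR}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                              # SIR threshold of 0 dB
for _ in range(10000):
    b = rng.exponential(size=8)      # received powers from 8 co-channel eNBs
    sir = b / (b.sum() - b)          # SIR_i = b_i / sum_{j != i} b_j
    assert (sir >= T).sum() <= 1     # at most one eNB can cover the UE
\end{verbatim}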
By using the same approach as in \cite{AnalysisMaxSIR}, we obtain the expression of ${P}_{c,1}(T_M, T_{\mu})$ with frequency reuse of $K$ as shown in (\ref{th:generalizedCoverageProb}) at the top of the next page, where $C(\gamma)= \frac{2 \pi^2}{\gamma \sin\left(\frac{2\pi}{\gamma}\right)}$. If noise is ignored ($\sigma^2 = 0$) and we change to polar coordinates, we have $\parallel r_j \parallel^2 = \upsilon^2$ and $\V{d}r_j = \upsilon \V{d}\upsilon \V{d}\theta$. Then, equation (\ref{th:generalizedCoverageProb}) becomes equation (\ref{th:noiseIgnored}), as shown at the top of the next page. \begin{figure*} \begin{equation}\label{th:generalizedCoverageProb} \begin{split} {P}_{c,1}(T_M, T_{\mu}) = \sum_{j = \{M, \mu\}} \frac{\lambda_j}{K} \int_{ {\mathbb{R}}^2} \exp\left[ -\left(\frac{T_j}{P_j}\right)^{2/\gamma} \parallel r_j \parallel^2 C(\gamma) \sum_{m = \{M,\mu\}} \frac{\lambda_m}{K} P_m^{2/\gamma}\right] \exp\left(\frac{-T_j \sigma^2 \parallel r_j \parallel^\gamma}{P_j}\right) \V{d}r_j. \end{split} \end{equation} \end{figure*} \begin{figure*} \begin{equation}\label{th:noiseIgnored} \begin{split} {P}_{c,1}(T_M, T_{\mu}) & \approx \sum_{j = \{M, \mu\}} \frac{\lambda_j}{K} \int_{0}^{+\infty}\int_{0}^{2\pi} \exp\left[ -\left(\frac{T_j}{P_j}\right)^{2/\gamma} \upsilon^2 C(\gamma) \sum_{m = \{M,\mu\}} \frac{\lambda_m}{K} P_m^{2/\gamma}\right] \upsilon \V{d}\upsilon \V{d}\theta \\ &= \sum_{j = \{M, \mu\}} \frac{\lambda_j}{K} \int_{0}^{2\pi} \V{d}\theta \int_{0}^{+\infty} \exp\left[ -\left(\frac{T_j}{P_j}\right)^{2/\gamma} \upsilon^2 C(\gamma) \sum_{m = \{M,\mu\}} \frac{\lambda_m}{K} P_m^{2/\gamma}\right] \frac{1}{2}\V{d}\upsilon^2 \\ &= \frac{\pi \sum_{j = \{M, \mu\}} \lambda_j P_j^{2/\gamma} T_j^{-2/\gamma}}{C(\gamma) \sum_{m = \{M,\mu\}} \lambda_m P_m^{2/\gamma}} = D(\gamma, T)\frac{{\left( {\frac{{T_\mu }}{T}} \right)^{ - \frac{2}{\gamma }} + \left( {\frac{{\lambda _M }}{{\lambda _\mu }}} \right)\left( {\frac{{P_M }}{{P_\mu }}} \right)^{\frac{2}{\gamma }} \left( {\frac{{T_M }}{T}} \right)^{ - \frac{2}{\gamma }} }}{{1 + \left( {\frac{{\lambda _M }}{{\lambda _\mu }}} \right)\left( {\frac{{P_M }}{{P_\mu }}} \right)^{\frac{2}{\gamma }} }}. \end{split} \end{equation} \end{figure*} If $T_M = T_{\mu} = T$, we obtain: \begin{equation}\label{th:finalProof} {P}_{c,1} = {P}_{c,1}(T, T) \approx D(\gamma, T) \triangleq \frac{\pi}{C(\gamma)T^{2/\gamma}}, \end{equation} which completes our proof. \vspace{10pt} \section*{\uppercase{Appendix B}} \label{appendixB} \subsection*{Proof of Proposition 1} Let ${P}_{c,1,\mu} (T_{\mu})$ denote the coverage probability (for the reference UE at the origin) of the micro tier with SINR threshold $T_{\mu}$ when only the subset of micro cells using frequency $F_1$ is considered. We have: \begin{equation}\label{pr:microCoverage} {P}_{c,1,\mu} (T_{\mu}) = {P} \left( \bigcup_{i \in \Phi_{\mu,1}} \{\V{SINR}_i \geq T_{\mu} \} \right). \end{equation} By using the same method as in Appendix A and assuming no noise, we obtain the formula of ${P}_{c,1,\mu} (T_{\mu})$ as follows: \begin{equation}\label{pr:micCoverage1Fre} \begin{split} {P}_{c,1,\mu} (T_{\mu}) &= \frac{\lambda_{\mu}\pi P_{\mu}^{2/\gamma} T_{\mu}^{-2/\gamma}}{C(\gamma) \sum_{m = \{M,\mu\}} \lambda_m P_m^{2/\gamma} }\\ & = D(\gamma,T)\frac{{\left( {\frac{{T_\mu }}{T}} \right)^{- \frac{2}{\gamma }} }}{{1 + \left( {\frac{{\lambda _M }}{{\lambda _\mu }}} \right)\left( {\frac{{P_M }}{{P_\mu }}} \right)^{\frac{2}{\gamma }} }}.
\end{split} \end{equation} If we consider $T_{\mu} = T$, we obtain the expression of $ {P}_{c,1,\mu}$ that is used in Proposition \ref{proprosionAverageLoad}. Moreover, by means of the same approach as in sub-Section \ref{subsec:OutageProb}, we obtain the coverage probability $ {P}_{c,\mu}$ of the micro tier as: \begin{equation}\label{pr:finalMicCoverage} {P}_{c,\mu} = 1 - \left( 1 - {P}_{c,1,\mu} \right)^K. \end{equation} Now let us derive the average load on the micro tier. According to the prioritized SIR-based cell association scheme, the reference UE will associate with the micro tier if it experiences a SIR value higher than threshold $T$ from at least one cell of this tier. Thus, we have: \begin{equation}\label{pr:microcellAssoc} \begin{split} A_{\mu} & = {P} \left( \bigcup_{j \in \Phi_{\mu} } \{\V{SIR}_j \geq T\} | \bigcup_{i \in \Phi_{M} \cup \Phi_{\mu}} \{\V{SIR}_i \geq T\} \right) \\ & = \frac{ {P} \left( \bigcup_{j \in \Phi_{\mu} } \{\V{SIR}_j \geq T \}, \bigcup_{i \in \Phi_{M}\cup \Phi_{\mu}} \{\V{SIR}_i \geq T \}\right) }{ {P} \left(\bigcup_{i \in \Phi_{M} \cup \Phi_{\mu}} \{\V{SIR}_i \geq T \}\right)} \\ & = \frac{ {P} \left( \bigcup_{j \in \Phi_{\mu} } \{\V{SIR}_j \geq T\} \right) }{ {P} \left(\bigcup_{i \in \Phi_{M}\cup \Phi_{\mu}} \{\V{SIR}_i \geq T \}\right)} \\ & = \frac{ {P}_{c,\mu}}{ {P}_{c}} = \frac{1-(1- {P}_{c,1,\mu})^K}{1-(1- {P}_{c,1})^K}, \end{split} \end{equation} where at the third step we have simplified the joint probability because $ \bigcup_{j \in \Phi_{\mu} } \{\V{SIR}_j \geq T\}$ is a subset of $\bigcup_{i \in \Phi_{M}\cup \Phi_{\mu}} \{\V{SIR}_i \geq T \}$.\\ \noindent This completes the proof. \vspace{10pt} \bibliographystyle{IEEE}
\section{Introduction} In this short note we discuss compactness results for K\"ahler metrics in a fixed K\"ahler class, under geometric assumptions. Convergence and compactness of K\"ahler metrics have many special properties compared with the convergence of Riemannian metrics in general. We fix a compact K\"ahler manifold $X$, of complex dimension $n$, with a K\"ahler class $[\omega]$. Up to a constant, a K\"ahler metric $\tilde \omega\in [\omega]$ can be written in terms of a K\"ahler potential, i.e., $\tilde \omega = \omega_\phi:= \omega + i\partial \bar \partial \phi$. Accordingly, we denote the space of K\"ahler potentials as \[ {\mathcal H}:=\{\phi\in C^{\infty}(X): \omega_\phi=\omega+\sqrt{-1} \partial\bar\partial \phi>0\}.\] The correspondence between metrics and potentials allows one to phrase compactness theorems about K\"ahler metrics (in the same class) in terms of their (normalized) potentials. A well-known result of this type shows up in Yau's proof of the Calabi conjecture \cite{y}. Stated in a technical way, it asserts that the set of K\"ahler potentials $\phi\in {\mathcal H}$ with $C^2$-bounded log-volume ratio $\log\frac{\omega_\phi^n}{\omega^n}$ is $C^{3, \alpha}$-compact for any $\alpha\in (0, 1)$ (i.e., the corresponding metrics are $C^{1, \alpha}$-compact). Using pluripotential theory, Kolodziej \cite{ko} proved that compactness of K\"ahler potentials holds under much weaker assumptions: if the volume ratio is in $L^p, \ p>1$, then the set of potentials is $C^{\alpha}$-compact. Using Kolodziej's estimates and integral methods, the first and the third author generalized Yau's result on the compactness of K\"ahler metrics and proved that if the log volume ratio is in $W^{1, p}, p>2n$, then the K\"ahler potentials are $C^{2, \alpha}$-compact \cite{ch2}. All these results were achieved by studying the complex Monge-Amp\`ere equation. Another problem of interest is to consider the compactness of K\"ahler potentials under curvature conditions, in particular Ricci curvature bounds, given the close relation between the Ricci curvature and the volume form. Clearly, additional assumptions are needed to guarantee compactness. By using Yau's technique \cite{y}, the first and third author proved that K\"ahler potentials are $C^{3, \alpha}$-compact given a uniform bound on the Ricci curvature and on the K\"ahler potentials \cite[Theorem 5.1]{ch1}, leading to applications related to the Calabi flow: \begin{theorem}[Chen-He]\label{tch} Consider the set of potentials $\phi \in \mathcal H$ with $\sup_X \phi =0$. If both $Ric \ {\omega_\phi}$ and $\|\phi\|_{C^0}$ are uniformly bounded, then there exists a $C >1$ such that \[\frac{1}{C}\omega\leq \omega_\phi\leq C\omega \ \textup{ and } \ \| \phi\|_{C^{3,\alpha}} \leq C, \] for all $\alpha \in (0,1)$. \end{theorem} The assumption on the $C^0$ bound is not satisfactory in various settings. For example, when considering equations that govern existence of canonical K\"ahler metrics, one is often initially led to energy bounds on the potentials instead of $C^0$ bounds. To put the problem in a more geometric context, we recall the $L^2 $ Mabuchi metric and its $L^p$ generalizations defined on ${\mathcal H}$: \[\|\psi\|_\phi^p=\int_X |\psi|^p \omega_\phi^n,\] where $\phi \in \mathcal H$ and $\psi \in T_\phi \mathcal H$ is a ``tangent vector''. These metrics have been studied extensively, and are closely related to existence of canonical K\"ahler metrics, per Donaldson's program \cite{do}.
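For concreteness, these norms are easy to evaluate in simple situations. The following symbolic computation is merely an illustration of the definition, on a toy example of our own choosing: the flat unit torus (total volume $1$), $\phi = 0$ and $\psi(x,y) = \cos(2\pi x)$, with $p = 2$.
\begin{verbatim}
import sympy as sp

# ||psi||_phi^2 = int_X |psi|^2 omega_phi^n, with phi = 0 on the flat torus
x, y = sp.symbols('x y', real=True)
psi = sp.cos(2 * sp.pi * x)
norm_sq = sp.integrate(sp.integrate(psi**2, (x, 0, 1)), (y, 0, 1))
print(norm_sq)  # 1/2
\end{verbatim}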
In \cite{c1} the first author solved Donaldson's conjecture on the $L^2$ geodesic equation and confirmed that the Mabuchi path length pseudo-distance $d_2$ is indeed a distance on $\mathcal H$. The $L^p$ analog of this same result was obtained in \cite{da}, i.e., it was shown that the $d_p$ path length pseudo-distances are bona fide distances on $\mathcal H$ for all $p \geq 1$. Along these lines, instead of assuming a $C^0$ bound, one can study the compactness of K\"ahler potentials with bounded Ricci curvature and bounded $L^2$ Mabuchi distance. This problem was proposed around 2006 by the first author. Unfortunately the estimates obtained on the Mabuchi distance at that time were not effective enough. Recently, in \cite{da} the second author proved effective estimates comparing the $d_p$ metrics to ``more friendly'' analytic expressions (see \eqref{doubleest} below). In particular the following estimate holds for all $d_p$ metrics (see \cite{DR} for more precise estimates related to the $d_1$ metric): \begin{equation}\label{eq: I_d_p_est} \mathcal I(\omega_u, \omega_v):=\int_X (v-u) (\omega_u^n-\omega_v^n) \leq C d_p(u,v), \ \ u,v \in \mathcal H, \end{equation} for some absolute constant $C:=C(n)>1$. The main result of this short note is a compactness theorem for K\"ahler potentials with bounded Ricci curvature and bounded ${\mathcal I}$ functional. In short, Theorem \ref{tch} still holds if we replace the $C^0$ bound by a bound on the ${\mathcal I}$-functional, with some refinements made along the way: \begin{theorem}[Theorem \ref{main}, Corollary \ref{cor1}] For $C>0$ consider the set of potentials $\phi \in \mathcal H$ with $\sup_X \phi =0$ and $\mathcal I(\omega, \omega_\phi) \leq C$. \begin{itemize} \item[(i)] If $Ric_{\omega_\phi} \leq C \omega_\phi$ then there exists $D:=D(X,\omega,J,C)>1$ such that $$0 \leq \omega_\phi \leq D \omega.$$ In particular, $\|\phi\|_{C^{1,\alpha}} \leq D'$ for any $\alpha \in (0,1)$. \item[(ii)] If $-C \omega_\phi \leq Ric_{\omega_\phi} \leq C \omega_\phi$ then $$\frac{1}{D} \omega \leq \omega_\phi \leq D \omega \ \textup{ and } \ \|\phi \|_{C^{3,\alpha}} \leq D.$$ \end{itemize} \end{theorem} By the discussion preceding the theorem, the same result holds if we replace the bound on $\mathcal I(\omega,\omega_\phi)$ with a corresponding bound on $d_p(0,\phi)$, for any $p \geq 1$. As in \cite{ch1}, this last compactness result has direct applications to the smooth Calabi flow. Recall that by the results of this latter paper, a uniform Ricci curvature bound along the Calabi flow implies existence of the flow for all times $t \geq 0$. Our theorem in this direction is the following: \begin{theorem}\label{thm: Calabif} Suppose $t \to c_t , \ t \in [0,\infty)$ is a smooth Calabi flow trajectory with uniformly bounded Ricci curvature. Then the following hold: \begin{itemize} \item[(i)] If there exists a cscK metric in $\mathcal H$ then $t \to c_t$ converges smoothly to one such metric as $t \to \infty$. \item[(ii)] If the Mabuchi K-energy is proper then there exists a cscK metric in $\mathcal H$ and $t \to c_t$ converges smoothly to one such metric. \end{itemize} \end{theorem} Properness of the K-energy is understood as introduced by Tian (see \cite[Chapter 7]{t}). Part (i) in the above theorem strengthens a theorem of the third author \cite{h}, proved in the case when $X$ does not admit non-trivial holomorphic vector fields.
We mention that according to recent work of Li-Wang-Zheng \cite{LWZ}, if an extremal metric exists, and the Calabi flow exists for all time with bounded scalar curvature (such a curvature bound is not needed on K\"ahler surfaces), then the Calabi flow converges to an extremal metric, after taking a subsequence. We also note that $L^1$ convergence of the Calabi flow to a cscK metric holds unconditionally, as proved in \cite{bdl1}, but convergence in H\"older norms is not yet known. Part (ii) strengthens \cite[Theorem 2]{sz}, where a bound on the full curvature tensor is assumed instead. Lastly, we consider the continuity path for the constant scalar curvature K\"ahler (cscK) equation, via twisted cscK metrics, as introduced by the first author \cite{c2}, generalizing a previous approach to K\"ahler-Einstein metrics. Twisted cscK metrics appear in the work of J. Fine \cite{Fine}, Song-Tian \cite{ST}, Stoppa \cite{Stoppa1} and Darvas-Berman-Lu \cite{bdl}, to highlight a few works in a fast expanding literature. The main advantage of Chen's continuity path is that a twisted cscK metric is a minimizer of the twisted K-energy, which is always strictly convex. In particular this implies that the kernel of the linearized operator (of fourth order) is always zero, hence openness holds. For more details and discussion, see Chen \cite{c2}. More precisely, for $t \in [0,1]$ we consider the following family of equations: \begin{equation}\label{CSCK} t(\underline R-R_\psi)+(1-t)(\mathop{\rm tr}\nolimits_{\omega_\psi}\omega-n)=0, \ \ \psi \in \mathcal H. \end{equation} For $t=1$ this equation reduces to the cscK equation. By the results of \cite{Zeng} and \cite{Hashimoto}, this equation is always solvable in a neighborhood of $t=0$. As alluded to above, the values $t \in [0,1]$ for which the above equation is solvable form an open set \cite{c2}. If the equation has a smooth solution up to $t=1$, then a desired cscK metric exists. Hence we can assume that there exists a maximal interval of the type $[0, T)$ for which solutions exist, and we want to study what happens at the maximal singular time $T$. In this direction we note the following result: \begin{theorem}\label{estimate}Suppose that the K-energy is proper on $\mathcal H$. If we assume that the Ricci curvature is bounded above along the continuity path, then $T=1$ and a smooth cscK metric exists in $\mathcal H$. \end{theorem} As we will see, this theorem will be a direct consequence of Theorem \ref{main}(i) above and \cite[Theorem 1.7]{HZ}. Ideally, we expect the assumption on the Ricci curvature bound to be a technicality. This result suggests that if we assume properness of the K-energy, then any upper bound for the Ricci curvature has to blow up as we approach the singular time. Finally, let us summarize by comparing our findings with known results in K\"ahler-Einstein and complex Monge-Amp\`ere theory. Our results giving $C^2$ bounds rely crucially on the Ricci upper bound. As a comparison, for the complex Monge-Amp\`ere equation, the K\"ahler-Einstein equation, or the K\"ahler-Ricci flow, one obtains the second order estimates using the $C^0$ estimate (the $C^0$ estimates in turn can be obtained through the equation directly, or using properness). Such a second order estimate is not known for fourth order equations like the cscK equation, or the Calabi flow. This is a crucial technical difficulty, and represents one of the major technical differences between second order equations and fourth order equations in K\"ahler geometry.
Our results suggest that the second order estimates and the Ricci upper bound are essentially equivalent for the cscK equation, assuming properness. However, it seems to be an extremely hard problem in general to obtain a Ricci upper bound, or even a Laplacian bound, along the Calabi flow or along the continuity path. \section{The compactness theorem} In this section we will prove Theorem 1.2. For this we need to use several delicate theorems from pluripotential theory that we now recall: \begin{theorem} \label{Skoda} Suppose $\mathcal L \subset \textup{PSH}(X,\omega)$ is $L^1$ weak compact and the Lelong numbers of all elements in $\mathcal L$ are zero. Then for any $\alpha > 0$ there exists $C(\alpha,\omega)>0$ such that $$\int_X e^{-\alpha u} \omega^n \leq C, \ u \in \mathcal L.$$ \end{theorem} This theorem is a well known consequence of Skoda's uniform integrability theorem, as proved in \cite[Corollary 3.2]{ze}, and will be one of the key ingredients in our argument. As a courtesy to the reader, we give a proof. \begin{proof} Let $ U_1, \ldots, U_k, V_1, \ldots, V_k \subset X$ be coordinate neighborhoods such that $\overline V_j \subset U_j$ for $j=1,\ldots,k$, the sets $V_1, \ldots, V_k$ cover $X$, and $i\partial\bar \partial \beta_j = \omega$ for some $\beta_j \in C^\infty(U_j), j =1,\ldots, k$. We introduce the following local families: $$\mathcal L_j = \{ \beta_j + v \ | \ v \in \mathcal L \} \subset \textup{PSH}(U_j).$$ As $\mathcal L \subset \textup{PSH}(X,\omega)$ is weak $L^1$ compact, for any $\alpha >0$ the families $\alpha \mathcal L_j \subset \textup{PSH}(U_j)$ are also weak $L^1$ compact. For analogous reasons, the elements of $\alpha \mathcal L_j$ have zero Lelong numbers, hence we can apply \cite[Corollary 3.2]{ze} which yields: $$\int_{V_j} e^{-\alpha u} \omega^n \leq C_j(\alpha,\mathcal L_j), \ u \in \mathcal L_j, j =1,\ldots, k.$$ Summing up over the coordinate patches yields the desired estimate. \end{proof} \begin{theorem}[Kolodziej's $C^0$ estimate]\label{Kolotheorem} Suppose $f \in L^1(X)$ is such that $f \geq 0$, $\int_X f \omega^n =1$ and $$\int_X f \log(1+f)^p \omega^n < C$$ for some $p > n$ and $C <\infty$. Then there exists $u \in PSH(X,\omega)\cap L^\infty$ with $\sup_X u =0$ and $\sup_X |u| \leq D(C,p,\omega)$ such that $$(\omega + i\partial\bar \partial u)^n = f\omega^n.$$ \end{theorem} The $d_p$ metric on $\mathcal H$ is the metric induced by the $L^p$ Mabuchi geometry on $\mathcal H$, as studied in \cite{da}. For a survey on these metrics and related matters, we refer to \cite{da2}. The only things of importance here are that every $d_p$ metric dominates the $d_1$ metric, which in turn dominates the weak $L^1$ topology of $\textup{PSH}(X,\omega)$. There is also the following double estimate: \begin{equation}\label{doubleest} \frac{1}{C}d_p(u,v) \leq \Big(\int_X |u-v|^p\omega_u^n\Big)^{1/p} + \Big(\int_X |u-v|^p\omega_v^n\Big)^{1/p} \leq C d_p(u,v). \end{equation} This last estimate implies that Aubin's $\mathcal I$ functional, recalled below, is dominated by all the $d_p$ metrics. \begin{equation}\label{AubinIest} \mathcal I(\omega_u, \omega_v)=\int_X (v-u) (\omega_u^n-\omega_v^n) \leq C d_p(u,v), \ u,v \in \mathcal H.
\end{equation} Lastly, we record the following compactness theorem, which is a consequence of Theorem \ref{Skoda} and \eqref{doubleest}: \begin{theorem}[strong compactness]\textup{\cite[Proposition 2.6, Theorem 2.17]{bbegz}} \label{bbegzcomp} Suppose $\{ u_k\}_{k \in \Bbb N} \subset \mathcal H$ is such that $|\sup_X u_k| \leq D$ and $\int_X \log \frac{\omega_{u_k}^n}{\omega^n} \omega^n_{u_k} \leq D$ for some $D \geq0$. Then there exists $u \in \mathcal E^1(X,\omega)$ and $k_l \to \infty$ such that $\int_X |u_{k_l}- u| \omega^n \to 0$. \end{theorem} Actually, we also have $\lim_{l \to \infty} d_1(u_{k_l},u)=0$ in the above theorem, but this will not be important for us. The crucial fact here is that the elements of $\mathcal E^1(X,\omega)$ have zero Lelong numbers \cite[Corollary 1.8]{gz}. Now we are ready to prove our main result in this section. \begin{theorem}\label{main}Let $\mathcal L \subset \mathcal H$ be a family for which there exists $C>0$ satisfying: $$\mathcal I(\omega, \omega_\phi) \leq C, \ Ric \omega_\phi \leq C \omega_\phi, \ \phi \in \mathcal L.$$ Then there exists $C'(\mathcal L) >0$ such that: $$0 \leq \omega +i\partial\bar\partial \phi \leq C' \omega, \ \phi \in \mathcal L.$$ \end{theorem} \begin{proof} Without loss of generality we can assume that $\int_X \omega^n =1$. First we establish the $C^0$ bound. We can suppose that $\sup_X \phi =0$ for all $\phi \in \mathcal L$. This implies that $\int_X \phi \omega^n$ is uniformly bounded, hence by the $\mathcal I$ functional bound, also $\int_X \phi \omega_{\phi}^n$ is uniformly bounded. Let $F_\phi= \log(\omega_\phi^n/\omega^n)$ for $\phi \in \mathcal L$. We have \begin{equation}\label{secondderivest} Ric \ \omega_\phi -Ric \ \omega = -\sqrt{-1} \partial \bar \partial F_\phi. \end{equation} We can suppose that $-Ric \ \omega \leq C\omega$ and $Ric \ \omega_\phi \leq C \omega_\phi$. Hence we have \[C(\omega + \omega_\phi)\geq -i\partial\bar\partial F_\phi.\] It follows that $\frac{1}{2}\phi + \frac{1}{2C}F_\phi\in \mathcal H$. By Jensen's inequality, we also have that $$ \int_X F_\phi \omega^n \leq \log \int_X \frac{\omega_\phi^n}{\omega^n} \omega^n=0.$$ Putting these facts together we obtain that there exists $D>0$ such that $ \sup_X (\frac{1}{2}\phi + \frac{1}{2C}F_\phi) \approx \int_X (\frac{1}{2}\phi + \frac{1}{2C}F_\phi)\omega^n \leq 0$, hence \begin{equation}\label{trivest}F_\phi+C\phi \leq D, \end{equation} for some $D>0$, which in turn implies that \begin{equation}\label{trivexpest} \omega_\phi^n \leq D' e^{-C\phi}\omega^n. \end{equation} Next we claim that the weak $L^1$ closure of $\mathcal L$ is a compact family (in the weak $L^1$ topology of $\textup{PSH}(X,\omega)$) with zero Lelong numbers. Compactness is guaranteed by the fact that $\sup_X \phi =0$ for all $\phi \in \mathcal L$. To verify the condition on zero Lelong numbers, let $\phi_k \in \mathcal L$ be such that $\phi_k \to_{L^1} \phi \in \overline {\mathcal L} \subset \textup{PSH}(X,\omega)$. If we can argue that $\phi \in \mathcal E^1(X,\omega)$, then we are done, as elements of $\mathcal E^1(X,\omega)$ have zero Lelong numbers \cite[Corollary 1.8]{gz}. But this follows as the conditions of Theorem \ref{bbegzcomp} are verified. Indeed, we have that $$ \int_X \log \frac{\omega_{\phi_k}^n}{\omega^n} \omega^n_{\phi_k} = \int_X F_{\phi_k} \omega_{\phi_k}^n\leq \int_X (D - C\phi_k) \omega^n_{\phi_k} \leq E,$$ where the last estimate follows from our choice of normalization at the beginning of the proof. Hence, we can apply Theorem \ref{bbegzcomp} which gives the claim.
Using the claim and Theorem \ref{Skoda} we conclude that for any $\alpha >0$ there exists $C(\alpha, \mathcal L) >0$ such that \begin{equation}\label{Skodaest} \int_X e^{-\alpha \phi} \omega^n \leq C, \ \phi \in \mathcal L. \end{equation} For any $p \geq 1$, we can then write: \begin{flalign*} \int_X \Big(\frac{\omega^n_\phi}{\omega^n}\Big)^p\omega^n \leq D \int_X e^{-Cp\phi}\omega^n \leq C(p,\mathcal L), \ \phi \in \mathcal L, \end{flalign*} where we have used \eqref{trivexpest} and \eqref{Skodaest}. Finally, by choosing $p \geq 2$, we can apply Kolodziej's estimates (Theorem \ref{Kolotheorem}) to conclude the proof of the uniform $C^0$ estimate. With the $C^0$ bound in hand, the bound on $\Delta_\omega \phi$ is derived using Yau's techniques \cite{y}. Fix a large constant $C_3>0$, which will eventually be under control. At the point $p \in X$ where $\exp(-C_3\phi)(n+\Delta_\omega \phi)$ is maximized we obtain: \begin{eqnarray*} \Delta_{\omega_\phi}\left\{\exp(-C_3\phi)(n+\Delta_\omega \phi)\right\}(p)\leq 0.\end{eqnarray*} Then, using normal coordinates, Yau's calculation yields: \begin{eqnarray}\label{yau}0&\geq& \Delta_\omega F_\phi-n^2\inf_{i\neq l}R_{i\bar{i}l\bar{l}}-C_3n(n+\Delta_\omega \phi)\nonumber\\ &&\quad+\left(C_3+\inf_{i\neq l}R_{i\bar{i}l\bar{l}}\right)\exp\left\{\frac{-F_\phi}{n-1}\right\}(n+\Delta_\omega \phi)^{n/(n-1)}.\end{eqnarray} Using \eqref{secondderivest}, we continue to write \begin{eqnarray}0&\geq& -C\Delta_\omega \phi-C_2-n^2\inf_{i\neq l}R_{i\bar{i}l\bar{l}}-C_3n(n+\Delta_\omega \phi)\nonumber\\ &&\quad+\left(C_3+\inf_{i\neq l}R_{i\bar{i}l\bar{l}}\right)\exp\left\{\frac{-F_\phi}{n-1}\right\}(n+\Delta_\omega \phi)^{n/(n-1)}.\end{eqnarray} This implies that $(n+\Delta_\omega\phi)(p)$ has an upper bound $C_0$ depending only on $\sup_X F_\phi$. It follows that \begin{eqnarray*} \exp(-C_3\phi)(n+\Delta_\omega \phi) &\leq& \exp(-C_3\phi(p))(n+\Delta_\omega \phi)(p)\\ &\leq& C_0\exp(-C_3\phi(p)).\end{eqnarray*} Because $\phi$ is uniformly bounded, we obtain \begin{equation}\label{5-5} 0<n+\Delta_\omega \phi \leq C(C_3, \sup_X F_\phi, X).\end{equation} Hence $\Delta_\omega \phi$ is uniformly bounded. \end{proof} As the next example shows, it is not possible to obtain a lower bound for the metrics without further assumptions. Let $X$ be the torus $\Bbb C / (\Bbb Z + i \Bbb Z)$ with the flat metric $\omega = i dz \wedge d\bar{z}$. For small enough $\varepsilon >0$ let $\alpha_\varepsilon: X \to \Bbb R$ be functions satisfying the following properties: $0 < \alpha_\varepsilon <2$, $\int_X \alpha_\varepsilon \omega=1$, $\alpha_\varepsilon\big|_{B(0,1/6)}=\varepsilon + |z|^2$ and $\alpha_\varepsilon\big|_{X\setminus B(0,1/4)}$ is independent of $\varepsilon$. Because $\dim(X) =1$, there exists $\beta_\varepsilon \in \mathcal H$ such that $\omega_{\beta_\varepsilon} = \alpha_\varepsilon \omega$. For the Ricci curvature of $\omega_{\beta_\varepsilon}$ we have $$Ric \ \omega_{\beta_\varepsilon}\Big|_{B(0,1/6)} = - \sqrt{-1} \partial \bar \partial \log \alpha_\varepsilon \Big|_{B(0,1/6)} = -i\frac{\varepsilon}{(\varepsilon + |z|^2)^2}dz \wedge d\bar{z} \leq 0.$$ Using this, one can see that $Ric \ \omega_{\beta_\varepsilon}$ is in fact uniformly bounded above on $X$. Hence, for small enough $\varepsilon$ we obtain a family of metrics $\{ \omega_{\beta_\varepsilon}\}_{\varepsilon>0}$ such that $\mathcal I(\omega,\omega_{\beta_\varepsilon})$ and $Ric \ \omega_{\beta_\varepsilon}$ are uniformly bounded above but $\omega_{\beta_\varepsilon}$ is not uniformly bounded away from zero.
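The coefficient computation in the example above can be double-checked symbolically, treating $z$ and $\bar z$ as independent variables; the following sketch only verifies the formula for $\partial \bar\partial \log \alpha_\varepsilon$ on $B(0,1/6)$.
\begin{verbatim}
import sympy as sp

# d dbar log(eps + z*zbar) has coefficient eps/(eps + z*zbar)^2, hence
# Ric(omega_beta) = -i eps/(eps + |z|^2)^2 dz wedge dzbar <= 0 on B(0,1/6)
z, zb = sp.symbols('z zbar')
eps = sp.symbols('epsilon', positive=True)
coeff = sp.simplify(sp.diff(sp.log(eps + z * zb), z, zb))
assert sp.simplify(coeff - eps / (eps + z * zb)**2) == 0
print(coeff)
\end{verbatim}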
Combining the conclusion of our last result with the compactness theorem \cite[Theorem 5.1]{ch1}, we obtain the following corollary: \begin{corollary}\label{cor1}Let $\mathcal L \subset \mathcal H$ be a family for which there exists $C>0$ satisfying: $$\mathcal I(\omega, \omega_\phi) \leq C, \ - C \omega_\phi \leq Ric \ \omega_\phi \leq C \omega_\phi, \ \phi \in \mathcal L.$$ Then for any $\alpha \in [0,1)$ there exists $C'(\mathcal L) >1$ such that: $$ \frac{1}{C'} \omega \leq \omega_\phi \leq C'\omega, \ \| \phi\|_{C^{3,\alpha}} \leq C', \ \phi \in \mathcal L.$$ \end{corollary} We note that this last result also improves \cite[Theorem 1]{sz}, where instead of bounded Ricci curvature the author assumes boundedness of the full curvature tensor, via a result of Schoen-Uhlenbeck \cite{su}. \begin{rmk} It would be very interesting to understand the compactness of potentials under weaker curvature conditions, for instance replacing the Ricci curvature by the scalar curvature. We ask: is a family of K\"ahler metrics in a fixed K\"ahler class with uniformly bounded scalar curvature and uniformly bounded potential compact (say in the $C^{1, \alpha}$ topology)? Namely, can we obtain second order estimates on the potential assuming a scalar curvature bound and a $C^0$ bound? \end{rmk} \section{Applications to the Calabi flow} We study applications to the Calabi flow in this section. Recall that in the presence of bounded Ricci curvature, the first and third named authors proved the long time existence of the Calabi flow. \begin{proof}[Proof of Theorem \ref{thm: Calabif}] First we prove part (i). If $\phi \in \mathcal H$ is cscK, then by the distance shrinking property of the Calabi flow and \eqref{AubinIest} we obtain that $$\mathcal I(\omega_\phi,\omega_{c_t}) \leq Cd_2(\phi,c_t)\leq Cd_2(\phi,c_0), \ t \geq 0.$$ By \cite[Corollary 4]{da} we have additionally that $ |\sup_X c_t| $ is bounded by $d_2(\phi,c_t)$. Given all this and the Ricci curvature bound, we can apply the compactness theorem of the previous section to conclude that $\{c_t \}_t$ is $C^{3,\alpha}$-compact. Hence there exists a $C^{3, \alpha}$ K\"ahler potential $c_\infty$ that minimizes the K-energy. Using \cite[Theorem 1.7]{HZ} (or more generally \cite[Theorem 1.1]{bdl}), we conclude that $c_\infty$ is in fact a smooth cscK potential. Part (ii) is argued in a similar way. Using the formalism of \cite{DR}, properness of the K-energy simply means that $\mathcal K(u) \geq C d_1(0,u) - D$ for all $u \in \mathcal H$ and some $C,D>0$ (see \cite[Proposition 5.5]{DR}). As $t \to c_t$ decreases the Mabuchi K-energy, it follows that $d_1(0,{c_t})$ is bounded. Using again \cite[Corollary 4]{da} we get that $|\sup_X c_t|$ is bounded. We can now apply our compactness result to conclude the argument, as in part (i). \end{proof} \section{Applications to the method of continuity} In this section, we denote by $\psi_t, t \in [0,1]$, the potential solutions to the twisted cscK equation along the continuity path \eqref{CSCK}.
It is well known that such $\psi_t$ minimize the twisted K-energy: $$\mathcal K_t(u)=\mathcal K(u) + \frac{1-t}{t}\mathcal J(u), \ u \in \mathcal H,$$ where $\mathcal K$ is Mabuchi's K-energy and $\mathcal J$ is the following functional: $$\mathcal J(u) = \frac{1}{V }\sum_{j=0}^{n-1}\int_X u \omega^j \wedge \omega_u^{n-1-j} - \frac{n}{(n+1) V }\sum_{j=0}^{n} \int_X u \omega^j \wedge \omega_u^{n-j}, \ \ u \in \mathcal H.$$ Finally, we give the proof of our last main result: \begin{proof}[Proof of Theorem \ref{estimate}] Since $\mathcal J \geq 0$, it follows that $\mathcal K_{t_1} \geq \mathcal K_{t_2}$ for any $t_1,t_2 \in (0,T)$ with $t_1 \leq t_2$. In particular, since $\psi_t$ minimizes $\mathcal K_t$, it follows that $t \to \mathcal K_{t}(\psi_t)$ is decreasing for $t \in [0,T)$. We know that $\mathcal K$ is proper, i.e., $\mathcal K(\cdot) \geq C \mathcal J(\cdot) - D$ for some $C,D>0$. Putting everything together, we obtain that $\mathcal J(\psi_t)$ is bounded for $t \in [0,T)$, hence so is $\mathcal I(\omega,\omega_{\psi_t})$. By Theorem \ref{main}(i), $\Delta_\omega \psi_t$ has to be uniformly bounded along the continuity path. On the other hand, using \cite[Theorem 1.7]{HZ} we obtain that under such circumstances $T=1$ and a smooth cscK metric exists. \end{proof} \paragraph{Acknowledgements.} The first named author has been partially supported by NSF grant DMS--1515795. The second named author has been partially supported by NSF grant DMS--1610202 and BSF grant 2012236. The third named author has been partially supported by NSF grant DMS--1611797.
\section{Introduction} In 2008 A. Premet and H. Strade~\cite{PrStr-VI} completed the classification of finite-dimensional simple Lie algebras over an algebraically closed field of characteristic $\neq 2,3$. As a result, the Block--Wilson--Premet--Strade classification theorem~\cite{Strade-bookII} establishes that every such Lie algebra is one of the following: \begin{itemize} \item ``classical'' simple Lie algebras, classified by Dynkin diagrams $A_l$--$D_l$, $E_6$, $E_7$, $E_8$, $F_4$, $G_2$. \item Cartan type Lie algebras in characteristic $p>3$. \item Melikian Lie algebras in characteristic $p=5$. \end{itemize} We say that a finite-dimensional central simple Lie algebra $\LL$ over a field $k$ is \emph{of Chevalley type}, if it is the central quotient of the tangent Lie algebra of a simply connected simple algebraic group over $k$, or, equivalently, the derived Lie algebra of the Lie algebra of an adjoint simple algebraic group. In~\cite{BdMS} we have established that if $\LL$ is a central simple Lie algebra of Chevalley type with a non-trivial $\ZZ$-grading over a field $k$ of characteristic $\neq 2,3$, then there is a simple Kantor pair $\K$ over $k$ such that $\LL$ is the $5$-graded Lie algebra associated to $\K$. If, moreover, $\Char k\neq 5$, then we showed that the converse is also true, namely, any central simple $5$-graded Lie algebra is of Chevalley type. Previously, the non-degeneracy of simple Kantor pairs over a ring with invertible $2,3,5$ was established in~\cite{GGLN}. Among the simple Lie algebras in the Block--Wilson--Premet--Strade classification, only the Lie algebras of Chevalley type are non-degenerate~\cite[p. 124]{Sel67}, while all simple Lie algebras of Cartan and Melikian type are degenerate (this observation goes back to A. I. Kostrikin and A. Premet; see e.g.~\cite{Wil76} and~\cite[Corollary 12.4.7]{Strade-bookII}). Thus, the non-degeneracy of simple Kantor pairs implies that the associated $5$-graded Lie algebras are of Chevalley type. In the present paper we extend the above results to the characteristic $5$ case. Using the Block--Wilson--Premet--Strade classification, we deduce that any central simple $5$-graded Lie algebra over a field of characteristic $\neq 2,3$ is of Chevalley type. Consequently, for any central simple structurable algebra or Kantor pair over such a field the associated $5$-graded Lie algebra is of Chevalley type, and hence their classification derives from the classification of isotropic simple algebraic groups. This allows us to extend to characteristic $5$ the classification of central simple structurable algebras over fields of characteristic $\neq 2,3,5$ due to Allison and Smirnov~\cite{A78,S}. We also describe all $5$-gradings on simple Lie algebras of Chevalley type over algebraically closed fields $k$ of characteristic $\neq 2,3$ that make them into $5$-graded Lie algebras associated with structurable algebras. The classification is combinatorial and independent of $k$. It derives from the classification of nilpotent classes in such algebras~\cite{LieSe-conj}. All commutative rings we consider are assumed to be associative and unital. All algebras are finite-dimensional over the respective scalars unless explicitly mentioned otherwise. \section{Graded Lie algebras} \begin{definition} Let $R$ be a commutative ring, and let $\LL=\bigoplus\nolimits_{i\in\ZZ}\LL_i$ be a $\ZZ$-graded Lie algebra over $R$. We say that $\LL$ is \emph{$(2n+1)$-graded}, if $\LL_i=0$ for all $i\in\ZZ$ such that $|i|>n$. The grading is~\emph{non-trivial}, if $\LL\neq\LL_0$.
\end{definition} \begin{construction}\label{constr:newL} For any $(2n+1)$-graded Lie algebra $\LL$ over $R$ we define the $(2n+1)$-graded Lie algebra \[ \newl \LL=\bigoplus\limits_{i\in\ZZ}\newl \LL_i \] over $R$ as follows. For any $i\neq 0$, $\newl \LL_i=\LL_i$. For $i=0$, we define $\newl \LL_0$ to be the set of all $\phi=(\phi_i)\in\prod\limits_{i\in\ZZ\setminus\{0\}}\End_R(\LL_i)$ satisfying the following conditions: \begin{align}\label{eq:newL} \phi_{i+j}([a,b]) &= [\phi_i(a),b]+[a,\phi_j(b)] \notag \\ &\text{for all } -n\leq i,j\leq n,\ i,j\neq 0,\ i\neq -j;\ a\in \LL_i,\ b\in \LL_j; \notag \\[1ex] \phi_j([[a,b],c]) &= [[\phi_i(a),b],c]+[[a,\phi_{-i}(b)],c]+[[a,b],\phi_{j}(c)] \notag \\ &\text{for all } -n\leq i,j\leq n,\ i,j\neq 0, \ a\in \LL_i,\ b\in \LL_{-i},\ c\in \LL_j. \end{align} In other words, $\phi\in\newl \LL_0$ behaves as a derivation of $\LL$ that preserves the grading, except that $\phi$ is not defined on $\LL_0$. Clearly, $\newl \LL_0$ is an $R$-module. We define the Lie bracket $[-,-]_{\newl \LL}$ on $\newl \LL$ in terms of the Lie bracket $[-,-]$ on $\LL$ as follows. For any $u\in \newl \LL_i,v\in \newl \LL_j$, $i,j\in\ZZ$, we let \begin{equation}\label{eq:[]} [u,v]_{\newl \LL}= \begin{cases} [u,v] & \text{if } i\neq -j,\ i,j\neq 0; \\ \bigl(\ad([u,v])|_{\LL_i}\bigr)_{i\in\ZZ\setminus\{0\}} & \text{if } i= -j,\ i\neq 0; \\ u(v) & \text{if } i=0,\ j\neq 0;\\ -v(u) & \text{if } i\neq 0,\ j=0;\\ uv-vu & \text{if } i=j=0. \end{cases} \end{equation} It is straightforward to check that $\newl \LL$ is indeed a $(2n+1)$-graded Lie algebra over~$R$. Since it is not likely to provoke confusion, in what follows we denote the Lie bracket on $\newl \LL$ simply by $[-,-]$. Note that the Lie algebra $\newl{\LL}$ by construction contains an element $\zeta\in\newl \LL_0$ which acts as a {\it grading derivation}: \[ [\zeta,x]=ix\quad\mbox{for any $i\in\ZZ$ and $x\in\newl \LL_i$.} \] It is also easy to see that there is a natural graded Lie algebra homomorphism $\LL\to\newl \LL$ that sends any element $x\in \LL_0$ to $\bigl(\ad(x)|_{\LL_i}\bigr)_{i\in\ZZ\setminus\{0\}}$. \end{construction} \begin{lemma}\label{lem:basechange-newL}\cite[Lemma 4.1.4]{BdMS} Let $\LL$ be a $(2n+1)$-graded finite-dimensional Lie algebra over a field $k$. Then for any commutative $k$\dash algebra $R$, there is a natural isomorphism of $(2n+1)$-graded $R$-Lie algebras \[ \newl{\LL}\otimes_k R\cong (\LL\otimes_k R)^{\newl{}}. \] \end{lemma} To simplify the notation, under the assumptions of Lemma~\ref{lem:basechange-newL} we will consider the Lie algebras $\LL$ and $\newl{\LL}$ as functors on the category of $k$\dash algebras $R$, so that, by definition, \[ \newl{\LL}(R)=\newl{\LL}\otimes_k R\cong (\LL\otimes_k R)^{\newl{}}. \] Similarly, $\Aut(\LL)$ and $\Der(\LL)$ will stand for the functors of Lie automorphisms and Lie derivations of $\LL$ respectively: \[ \Aut(\LL)(R)=\Aut_R(\LL\otimes_k R),\qquad \Der(\LL)(R)=\Der_R(\LL\otimes_k R). \] Note that $\Aut(\LL)$ and $\Der(\LL)$ are naturally represented by closed $k$\dash subschemes of the affine $k$\dash scheme of linear endomorphisms of $\LL$. In particular, Lemmas~\ref{lem:40} and~\ref{lem:basechange-newL} imply that there is an isomorphism of functors \begin{equation}\label{eq:L-Der} \newl{\LL}\cong\Der(\newl{\LL}). \end{equation} \begin{lemma}\label{lem:Gseparable}\cite[Lemma 4.1.5]{BdMS} Let $\LL$ be a central simple finite-dimensional $(2n+1)$-graded Lie algebra over a field $k$, and let $K$ be a field extension of $k$.
\begin{compactenum}[\rm (i)] \item\label{gsep:i} Let $I_i\subseteq\LL(K)_i$, $i\in\ZZ\setminus\{0\}$, be $K$-subspaces that satisfy the following conditions: for all $i,j\in\ZZ\setminus\{0\}$ such that $i\neq -j$ we have \begin{gather} \label{eq:Iij} [\LL(K)_i,I_j]\subseteq I_{i+j};\\ \label{eq:Iiii-1} [[\LL(K)_i,I_{-i}],\LL(K)_i]\subseteq I_i;\\ \label{eq:Iiii-2} [[\LL(K)_i,I_{-i}],\LL(K)_{-i}]\subseteq I_{-i}. \end{gather} Then either $I_i=0$ for all $i\in\ZZ\setminus\{0\}$, or $I_i=\LL(K)_i$ for all $i\in\ZZ\setminus\{0\}$. \item\label{gsep:ii} If $f\in\Aut(\newl\LL)(K)$ satisfies $f|_{\LL(K)_i}=\id_{\LL(K)_i}$ for all $i\in\ZZ\setminus\{0\}$, then $f=\id_{\newl\LL(K)}$. \end{compactenum} \end{lemma} \begin{definition}\label{def:gradingtorus} Let $\LL=\bigoplus\limits_{i\in\ZZ}\LL_i$ be a $\ZZ$-graded Lie algebra over a field $k$. Consider the 1-dimensional split $k$\dash subtorus $\GS\cong\Gm$ of $\Aut(\LL)$ defined as follows: for any $k$\dash algebra $R$, any $t\in\Gm(R)$, and any $i\in\ZZ$, $v\in\LL_i\otimes_{k} R$, we set $t\cdot v=t^iv$. We call $\GS$ \emph{the grading torus of $\LL$}. \end{definition} \begin{notation} Let $\LL$ be a finite-dimensional $5$\dash graded Lie algebra over a field $k$, $\Char k\neq 2,3$. For any commutative $k$\dash algebra $R$ and any $(x,s)\in (\LL\otimes_k R)_\sigma\oplus (\LL\otimes_k R)_{2\sigma}=(\LL_\sigma\oplus \LL_{2\sigma})\otimes_k R$ we set \[ e_\si(x,s)=\sum\limits_{i=0}^4\frac 1{i!}\ad(x+s)^i\in\End_R(\LL\otimes_k R). \] \end{notation} \begin{definition} Let $\LL$ be a finite-dimensional $5$\dash graded Lie algebra over $k$. The \emph{grading derivation} on $\LL$ is the derivation $\zeta\in\Der_k(\LL)$ such that \[ \zeta(x)=i\cdot x\quad\mbox{for any $-2\le i\le 2$ and any $x\in\LL_i$.} \] If $\LL$ contains an element $\zeta$ such that $\ad_\zeta$ is the grading derivation, we call $\zeta$ a grading derivation of $\LL$ by abuse of language. \end{definition} \begin{definition}\label{def:algebraic} Let $\LL$ be a finite-dimensional $5$\dash graded Lie algebra over a commutative ring $R$ such that $2,3\in R^\times$. We say that $x\in\LL$ is \emph{algebraic}, if the endomorphism $\sum\limits_{i=0}^4\frac 1{i!}\ad(x)^i$ is an automorphism of $\LL$ as an $R$-Lie algebra. We say that $\LL$ is \emph{algebraic}, if all elements $(x,s)\in \LL_\si\oplus\LL_{2\sigma}$ are algebraic. \end{definition} \begin{remark}\label{rem:algebraic} By~\cite[Lemma 3.1.7]{BdMS} any 3-graded Lie algebra, and hence any Jordan algebra over $R$ with $2,3\in R^\times $ is algebraic; any 5-graded Lie algebra, and hence any structurable algebra over $R$ with $2,3,5\in R^\times$ is algebraic. (Although that lemma is stated for algebras over a field, the proof is also valid over commutative rings.) By~\cite[Theorem 4.2.8]{BdMS} any central simple structurable {\em division} algebra over a field of characteristic $\neq 2,3$ is algebraic. \end{remark} \begin{lemma}\label{lem:ext-alg} Let $\LL$ be a finite-dimensional $5$\dash graded Lie algebra over a field $k$, $\Char k\neq 2,3$. (i) Let $R$ be any commutative associative unital $k$\dash algebra. If $(x,s)\in\LL_\si\oplus\LL_{2\si}$ is algebraic, then $(\lambda x,\mu s)$ is an algebraic element of $\LL\otimes_k R$ for any $\lambda,\mu\in R$.
(ii) Let $\LL'$ be another finite-dimensional $5$\dash graded Lie algebra over $k$, and let $f \colon \LL\to\LL'$ be a graded $k$\dash homomorphism of Lie algebras, such that $f|_{\LL_i} \colon \LL_i\to\LL'_i$ is a bijection for all $i\in\{\pm 1,\pm 2\}$. If $(x,s)\in\LL_\si\oplus\LL_{2\si}$ is algebraic in $\LL$, then $f(x,s)$ is algebraic in $\LL'$. \end{lemma} \begin{proof} The proof of (i) is the same as in~\cite[Lemma 3.1.6]{BdMS}. The proof of (ii) is the same as in~\cite[Lemma 3.1.8]{BdMS}. \end{proof} The following results relate algebraicity to the classification of simple Lie algebras. \begin{theorem}\cite[Theorem 4.1.8]{BdMS} Let $\LL$ be an algebraic central simple $5$\dash graded Lie algebra over a field $k$ of characteristic different from $2,3$, such that $\LL\neq\LL_0$. Then the algebraic $k$\dash group $\GG=\Aut(\LL)^\circ$ is an adjoint absolutely simple group of $k$\dash rank $\geq 1$, satisfying $\LL=[\Lie(\GG),\Lie(\GG)]$ and $\Lie(\GG)\cong\newl{\LL}$. \end{theorem} \begin{lemma}\cite[Lemma 4.2.4 (i)]{BdMS} Let $k$ be a field, $\Char k\neq 2,3$. Let $\GG$ be an adjoint simple algebraic group over $k$. Let $\LL=\Lie(\GG)$ be its Lie algebra. Let $\LL=\bigoplus\limits_{i=-2}^2\LL_i$ be any $5$\dash grading on $\LL$ such that $\LL_1\oplus\LL_{-1}\neq 0$. Then the $5$-graded Lie algebra $\LL$ is algebraic. \end{lemma} \begin{theorem}\label{thm:Che-alg} Let $\LL$ be a central simple $5$\dash graded Lie algebra over a field $k$ of characteristic different from $2,3$, such that $\LL\neq\LL_0$. Then $\LL$ is of Chevalley type if and only if $\LL$ is algebraic. \end{theorem} \begin{proof} This follows from the two previous results and Lemma~\ref{lem:ext-alg}, since Lie algebras of adjoint simple algebraic groups are by definition the central simple Lie algebras of Chevalley type. \end{proof} \section{5-Graded simple Lie algebras} \begin{lemma}\label{lem:L-1} Let $\LL\neq\LL_0$ be a simple $5$\dash graded Lie algebra over a field $k$ of characteristic $\neq 2,3$, and let $\mathbf{S}$ be the corresponding grading torus. Let $\LL=\bigoplus_{i=-r}^s\LL_{[i]}$ be another $\ZZ$-grading on $\LL$ such that $\bigoplus_{i=-r}^{-1}\LL_{[i]}$ is generated as a Lie algebra by $\LL_{[-1]}\neq 0$. If the second grading is preserved by $\mathbf{S}(k)$, then $\LL_{[-1]}\not\subseteq\LL_0$. \end{lemma} \begin{proof} Since $\mathbf{S}$ preserves the second grading, we have $$ \LL_{[i]}=\bigoplus_{j=-2}^2 (\LL_{[i]}\cap\LL_j) $$ for all $-r\le i\le s$. Assume that $\LL_{[-1]}\subseteq\LL_{0}$. Then $\LL_{[-i]}\subseteq\LL_0$ for all $1\le i\le r$. For any $x\in\LL_j$, $-2\le j\le 2$, we have $t\cdot x=t^j x$ for any $t\in\mathbf{S}(k)$. Since $\Char(k)\neq 2,3$, there is $t\in k$ such that $t^{\pm 1}\neq 1$ and $t^{\pm 2}\neq 1$. Then $\LL_j\subseteq\bigoplus_{i\ge 0}\LL_{[i]}$ for all $j\neq 0$. Since $\LL$ is simple, it is generated by $\LL_{j}$, $j\neq 0$, hence $\LL=\bigoplus_{i\ge 0}\LL_{[i]}$. However, this contradicts $\LL_{[-1]}\neq 0$. \end{proof} \begin{lemma}\label{lem:graded-cartan} Let $\LL$ be one of the simple graded Cartan type Lie algebras $X(m,\underline{n})^{(2)}$, $X\in\{W,S,H,K\}$, in the notation of~\cite{Strade-bookI}, over an algebraically closed field $k$ of characteristic $5$. Then $\LL$ does not have a non-trivial $5$-grading. \end{lemma} \begin{proof} Let $\LL=\bigoplus_{i=-2}^2\LL_i$ be a non-trivial $5$-grading on $\LL$, and let $\LL=\bigoplus_{i=-r}^s\LL_{[i]}$ be the standard grading on $\LL$. Note that $r=1$ for $X=W,S,H$, and $r=2$, $\dim(\LL_{[-2]})=1$ for $X=K$.
By~\cite[Theorem 7.4.1]{Strade-bookI} we can assume that the grading torus $\mathbf{S}$ of the $5$-grading preserves the standard grading. Then by Lemma~\ref{lem:L-1} the $5$-grading on $\LL_{[-1]}$ is non-trivial. Since $\LL_{[-1]}$ is contained in the restricted subalgebra $X(m,\underline{1})^{(2)}$ of $\LL$, the induced $5$-grading on the restricted subalgebra is non-trivial, and hence we can assume that $\LL=X(m,\underline{1})^{(2)}$ from the start. In particular, $\ad(\LL_{[-2]})^5=0$. Let $x\in\LL_{[-1]}$ be an element of non-zero $5$-grading. Then $\ad(x)^3(\LL)$ is contained in one of the nilpotent Lie subalgebras $\LL_{1}\oplus\LL_{2}$ or $\LL_{-1}\oplus\LL_{-2}$ of $\LL$. By~\cite[Theorem 2]{Wilson-auto} the space $\LL_{[-1]}$ is an irreducible representation for the group of Lie algebra automorphisms of $\LL$ preserving the standard grading. Thus, $\LL_{[-1]}$ has a basis consisting of elements $y$ such that $\ad(y)^3(\LL)$ is contained in a nilpotent Lie subalgebra of $\LL$. Recall that $\ad(\LL_{[-2]})^5=0$, since $\LL$ is restricted. Then the Jacobi identity readily implies that $$ \ad(\LL_{[-1]})^{2\cdot\dim(\LL_{[-1]})+1+2\cdot 4\cdot\dim(\LL_{[-2]})}(\LL)\subseteq\ad(y)^3(\LL) $$ for an element $y$ as above. On the other hand, we have $\LL_{[0]}\subseteq\ad(\LL_{[-1]})^s(\LL_{[s]})$. Since $\LL_{[0]}$ is not nilpotent, this implies that \begin{equation}\label{eq:main} s<2\cdot\dim(\LL_{[-1]})+1+2\cdot 4\cdot\dim(\LL_{[-2]}). \end{equation} If $X=W,S,H$, then $\dim(\LL_{[-1]})=m$. Hence~\eqref{eq:main} becomes $s<2m+1$. If $X=W$, then $m\ge 1$ and $s=4m-1$. If $X=S$, then $m\ge 2$ and $s=4m-2$. If $X=H$, then $m\ge 2$ and $s=4m-3$. This contradicts~\eqref{eq:main}: indeed, $4m-1\ge 2m+1$ for $m\ge 1$, $4m-2\ge 2m+1$ for $m\ge 2$, and $4m-3\ge 2m+1$ for $m\ge 2$. If $X=K$, then $\dim(\LL_{[-1]})=m-1$. Hence~\eqref{eq:main} becomes $s<2(m-1)+9=2m+7$. We have $s=4m$, if $m+3\equiv 0\pmod{5}$, and $s=4m+1$ otherwise. Since $m\ge 3$, both cases contradict~\eqref{eq:main}: in the first case $m+3\equiv 0\pmod 5$ together with $m\ge 3$ forces $m\ge 7$, so $4m\ge 2m+7$, while in the second case $4m+1\ge 2m+7$ already for $m\ge 3$. \end{proof} \begin{theorem}\label{thm:main} Let $\LL$ be a central simple $5$\dash graded Lie algebra over an algebraically closed field $k$ of characteristic different from $2,3$, such that $\LL\neq\LL_0$. Then $\LL$ is a classical simple (Chevalley) Lie algebra. \end{theorem} \begin{proof} If $\Char k\neq 5$, then $\LL$ is a Chevalley Lie algebra by Remark~\ref{rem:algebraic} combined with Theorem~\ref{thm:Che-alg}. Assume $\Char k=5$. According to the Block--Wilson--Premet--Strade classification theorem~\cite{Strade-bookI}, it is enough to check that $\LL$ is not of Cartan or Melikian type. Assume first that $\LL$ is a simple Lie algebra of Cartan type~\cite[Definition 4.2.4]{Strade-bookI}. Let $\LL=\LL_{(-r)}\supseteq\ldots\supseteq\LL_{(s)}$ be a standard filtration of $\LL$. By~\cite[Theorem 4.2.7 (3)]{Strade-bookI} the standard filtration is invariant under all automorphisms of $\LL$. In particular, it is invariant under the grading torus $\mathbf{S}$ of the $5$-grading. Let $$ Gr(\LL)=\bigoplus_{i=-r}^sGr(\LL)_{[i]} $$ be the associated graded Lie algebra. Hence $Gr(\LL)$ carries an induced $5$-grading. The induced $5$-grading is non-trivial, since there is $0\neq x\in\LL_i$, $i\neq 0$, and $t\in\mathbb{F}_5^\times\subseteq\mathbf{S}(k)$ such that $t\cdot x=t^ix\neq x$, and hence $t$ acts non-trivially on the image of $x$ in $Gr(\LL)$. The derived series of $Gr(\LL)$ also inherits the $5$-grading, hence $Gr(\LL)^{(\infty)}$ is $5$-graded. We show that the induced $5$-grading on $Gr(\LL)^{(\infty)}$ is also non-trivial. Indeed, if it were trivial, then $Gr(\LL)^{(\infty)}\subseteq Gr(\LL)_0$.
Since $Gr(\LL)$ is also a Lie algebra of Cartan type in the sense of~\cite[Definition 4.2.4]{Strade-bookI}, $Gr(\LL)^{(\infty)}$ is simple by~\cite[Theorem 4.2.7 (1)]{Strade-bookI}. Then we have $$ Gr(\LL)^{(\infty)}\subseteq (Gr(\LL)_0)^{(\infty)}\subseteq Gr(\LL)^{(\infty)}, $$ which implies $Gr(\LL)^{(\infty)}=(Gr(\LL)_0)^{(\infty)}$. Then by~\cite[Lemma 4.2.5]{Strade-bookI} we have $Gr(\LL)=Gr(\LL)_0$, which contradicts the non-triviality of the $5$-grading on $Gr(\LL)$. By~\cite[Theorem 4.2.7 (2)]{Strade-bookI} we have $Gr(\LL)^{(\infty)}=X(m,\underline{n})^{(2)}$ with $X\in\{W,S,H,K\}$. By Lemma~\ref{lem:graded-cartan} these algebras do not have non-trivial $5$-gradings, a contradiction. Now let $\LL=M(2,n_1,n_2)$ be a simple Lie algebra of Melikian type. By~\cite[Theorem 1.2]{BKMcG} we can assume that the grading torus corresponding to the $5$-grading under consideration preserves the standard grading $\LL=\bigoplus_{i=-3}^s\LL_{[i]}$ of $\LL$. Then the grading derivation $\zeta$ corresponding to the $5$-grading is a homogeneous derivation of $\LL$. Let $W(2,n_1,n_2)$ be the standard simple Witt subalgebra of $\LL$. Then by~\cite[Theorem 7.1.4]{Strade-bookI} any homogeneous derivation of $\LL$ acts non-trivially on $W(2,n_1,n_2)$, and hence this subalgebra carries a non-trivial $5$-grading induced by $\zeta$. However, this is not possible by Lemma~\ref{lem:graded-cartan}. \end{proof} \begin{theorem}\label{thm:Che-type} Let $\LL$ be a central simple $5$\dash graded Lie algebra over a field $k$ of characteristic different from $2,3$, such that $\LL\neq\LL_0$. Then $\LL$ is of Chevalley type. \end{theorem} \begin{proof} Let $\bar k$ be the algebraic closure of $k$. Then $\LL\otimes_k\bar k$ is a central simple Lie algebra over $\bar k$ with a non-trivial $5$-grading. Then by Theorem~\ref{thm:main} $\LL\otimes_k\bar k$ is a Lie algebra of Chevalley type. Then it is algebraic by Theorem~\ref{thm:Che-alg}. Hence $\LL$ is algebraic, since $\LL$ embeds into $\LL\otimes_k\bar k$. Again by Theorem~\ref{thm:Che-alg} we conclude that $\LL$ is of Chevalley type. \end{proof} \section{Structurable algebras and Kantor pairs associated to algebraic groups} \begin{definition} A {\em structurable algebra} over a field $k$ of characteristic not~$2$ or $3$ is a finite-dimensional, unital $k$\dash algebra with involution $(\A,\bar{\ })$ such that \begin{equation}\label{struct id} [V_{x,y}, V_{z,w}] = V_{\{x,y,z\},w} - V_{z,\{y,x,w\}} \end{equation} for $x,y,z,w \in \A$, where the left hand side denotes the Lie bracket of the two operators, and where \[ V_{x,y}z := \{x \ y \ z\} := (x\overline{y})z + (z\overline{y})x - (z\overline{x})y . \] For all $x,y,z \in \A$, we write $U_{x,y}z:=V_{x,z}y$ and $ U_xy:=U_{x,x}y$. The trilinear map $(x,y,z) \mapsto \{ x \ y \ z \}$ is called the {\em triple product} of the structurable algebra. \end{definition} In \cite{A78} and \cite{A79}, a structurable algebra is defined as an algebra with involution such that \begin{align}\label{eqdef} [T_z,V_{x,y}]=&V_{T_z x,y}-V_{x,T_{\overline{z}}y} \end{align} for all $x,y,z \in \A$ with $T_x:=V_{x,1}$. The equivalence of \eqref{struct id} and \eqref{eqdef} follows from \cite[Corollary 5.(v)]{A79}. \begin{definition} Let $(\A,\bar{\ })$ be a structurable algebra; then $\A=\mathcal{H}\oplus\Ss$ for \[ \mathcal{H}=\{h\in \mathcal{A}\mid \overline{h}=h\} \quad \text{and} \quad \Ss=\{s\in \mathcal{A}\mid \overline{s}=-s\}.
\] The elements of $\mathcal{H}$ are called {\em hermitian elements}, the elements of $\Ss$ are called {\em skew-hermitian elements} or briefly {\em skew elements}. The dimension of~$\Ss$ is called the {\em skew-dimension} of $\A$. \end{definition} As usual, the commutator and the associator are defined as \[ [x,y]=xy-yx,\quad [x,y,z]=(xy)z-x(yz),\] for all $x,y,z\in \A$. For each $s \in \Ss$, we define the operator $L_s \colon \A\to\A$ by \[ L_s x := sx . \] The following map is of crucial importance in the study of structurable algebras: \[\psi \colon \A\times \A\to\Ss \colon (x,y)\mapsto x\overline{y}-y\overline{x}.\] \begin{definition} An {\em ideal} of $\A$ is a two-sided ideal stabilized by $\barop$. A structurable algebra $(\A,\barop)$ is {\em simple} if its only ideals are $\{0\}$ and $\A$, and it is called {\em central} if its center \begin{multline*} Z(\A,\barop)=Z(\A)\cap \mathcal{H} \\ =\{c\in\A\mid [c,\A]=[c,\A,\A]=[\A,c,\A]=[\A,\A,c]=0\}\cap \mathcal{H} \end{multline*} is equal to $k1$. The {\em radical} of $\A$ is the largest solvable ideal of $\A$. A structurable algebra is {\em semisimple} if its radical is zero. \end{definition} If $\cha(k)\neq2,3,5$, a semisimple structurable algebra is the direct sum of simple structurable algebras~\cite[Section 2]{S}. Recall the generalization of the Tits--Kantor--Koecher construction that associates to any structurable algebra $\A$ a $5$-graded Lie algebra $K(\A)$. Let $\End(\A)$ be the ring of $k$\dash linear maps from $\A$ to $\A$. For each $A\in \End(\A)$, we define new $k$\dash linear maps \begin{align*} A^\eps&=A-L_{A(1)+\overline{A(1)}},\\ A^\delta&=A+R_{\overline{A(1)}}, \end{align*} where $L_x$ and $R_x$ denote left and right multiplication by an element $x \in \A$, respectively. Define the Lie subalgebra $\Strl(\A,\barop)$ of $\End(\A)$ as \begin{equation}\label{defstrl} \Strl(\A,\barop)=\{A\in \End(\A)\mid [A, V_{x,y}]=V_{Ax,y}+V_{x,A^\eps y}\}. \end{equation} (This definition follows from \cite[Corollary 5]{A78}.) It follows from the definition of structurable algebras that $V_{x,y}\in \Strl(\A,\barop)$, so we can define the Lie subalgebra \[\Instrl(\A, \barop)=\Span \{V_{x,y}\mid x,y\in \A\},\] which is, in fact, an ideal of $\Strl(\A,\barop)$. \begin{definition}\label{def:Lie alg} Consider two copies $\A_+$ and $\A_-$ of $\A$ with corresponding isomorphisms $\A \to \A_+ \colon x \mapsto x_+$ and $\A \to \A_- \colon x \mapsto x_-$, and let $\Ss_+\subset \A_+$ and $\Ss_-\subset \A_-$ be the corresponding subspaces of skew elements.
Let \[ K(\A)=\Ss_-\oplus \A_-\oplus \Instrl(\A) \oplus \A_+ \oplus \Ss_+ \] as a vector space; as in \cite[\S 3]{A79}, we make $K(\A)$ into a Lie algebra by extending the Lie algebra structure of $\Instrl(\A)$ as follows: \begin{alignat*}{2} \intertext{$\bt\ [\Instrl,K(\A)]$} [V_{a,b},V_{a',b'}]&=V_{\{a,b,a'\},b'}-V_{a',\{b,a,b'\}} &\in \Instrl(\A),&\\ [V_{a,b},x_+] &:= (V_{a,b}x)_+ \in \A_+ , \quad & [V_{a,b},y_-] &:= (V_{a,b}^\eps y)_- \\* &&&\ =(-V_{b,a}y)_-\in \A_-,\\ [V_{a,b},s_+] &:= (V_{a,b}^\delta s)_+ & [V_{a,b},t_-] &:= (V_{a,b}^{\eps\delta} t)_- \\* &\ = -\psi(a,sb)_+\in \Ss_+,&&\ =\psi(b,ta)_-\in \Ss_-,\\ \intertext{$\bt\ [\Ss_\pm,\A_\pm]$} [s_+,x_+] &:= 0, & [t_-,y_-] &:= 0,\\ [s_+,y_-] &:= (sy)_+\in \A_+,& [t_-,x_+] &:= (tx)_-\in \A_-,\\ \intertext{$\bt\ [\A_\pm,\A_\pm]$} [x_+,y_-] &:= V_{x,y}\in \Instrl(\A),\\ [x_+,x_+'] &:= \psi(x,x')_+\in \Ss_+,&[y_-,y_-'] &:= \psi(y,y')_-\in \Ss_-,\\ \intertext{$\bt \ [\Ss_\pm,\Ss_\pm]$} [s_+,s_+']&:=0,&[t_-,t_-']&:=0,\\ [s_+,t_-]&:=L_{s}L_{t}\in \Instrl(\A), \end{alignat*} for all $x,x',y,y'\in \A$, all $s,s',t,t'\in \Ss$, and all $V_{a,b},V_{a',b'}\in \Instrl(\A)$. \end{definition} From the definition of the Lie bracket we clearly see that the Lie algebra $K(\A)$ has a $5$\dash grading given by $K(\A)_j=0$ for all $|j|>2$ and \begin{multline*} K(\A)_{-2}=\Ss_-,\quad K(\A)_{-1}=\A_-,\quad K(\A)_{0}=\Instrl(\A),\\ K(\A)_{1}=\A_+,\quad K(\A)_{2}=\Ss_+. \end{multline*} In the case where $\A$ is a Jordan algebra, we have $\Ss=0$, and thus the Lie algebra $K(\A)$ has a $3$-grading; in this case $K(\A)$ is exactly the Tits--Kantor--Koecher construction of a Lie algebra from a Jordan algebra. It is shown in \cite[\S 5]{A79} that the structurable algebra $\A$ is simple if and only if $K(\A)$ is a simple Lie algebra, and that $\A$ is central if and only if $K(\A)$ is central. \begin{definition} A structurable algebra $\A$ is called \emph{algebraic}, if $K(\A)$ is algebraic. \end{definition} \begin{definition} Let $\A$ be a structurable algebra over a field $k$ of characteristic $\neq 2,3$. An element $x\in\A$ is called an \emph{absolute zero divisor} if $U_xy=0$ for any $y\in\A$. The algebra $\A$ is called \emph{non-degenerate} if it has no non-trivial absolute zero divisors. \end{definition} If an element $x\in K(\A)_\si=\A_\si$ is an absolute zero divisor of $K(\A)$, then it is represented by an absolute zero divisor of $\A$; this follows from the fact that by Definition~\ref{def:Lie alg}, \[ [x_\sigma, [x_\sigma, y_{-\sigma}]] = -V_{x,y} x \in \A_\sigma \] for all $x,y \in \A$. The following theorem strengthens~\cite[Theorem 4.1.1]{BdMS}. \begin{theorem}\label{thm:structurable} Let $\A$ be a central simple structurable algebra over a field $k$ of characteristic different from $2,3$. Then the algebraic $k$\dash group $\GG=\Aut(K(\A))^\circ$ is an adjoint absolutely simple group of $k$\dash rank $\geq 1$, and $K(\A)=[\Lie(\GG),\Lie(\GG)]$. \end{theorem} \begin{proof} The same claim was established in~\cite[Theorem 4.1.1]{BdMS} under the additional assumption that $\A$ is algebraic. The Lie algebra $K(\A)$ is a central simple Lie algebra with a non-trivial 5-grading, hence it is a Lie algebra of Chevalley type by Theorem~\ref{thm:Che-type}. Hence it is algebraic by Theorem~\ref{thm:Che-alg}, so that~\cite[Theorem 4.1.1]{BdMS} applies. \end{proof} Given a non-trivial $5$-grading on a Lie algebra of Chevalley type, there may not be a structurable algebra associated to it. However, we always obtain an associated Kantor pair by~\cite[Lemma 4.3.3]{BdMS}.
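Before recalling the notion of a Kantor pair, we illustrate Definition~\ref{def:Lie alg} in the smallest possible case; this is a sketch of a standard computation and is not taken from the references above. For the Jordan algebra $\A=k$ with the trivial involution one has $\Ss=0$ and $V_{x,y}z=xyz$, so $\Instrl(\A)=k\cdot\id_\A$ and, as a vector space,
\[
K(k)=k\,1_-\oplus k\,V_{1,1}\oplus k\,1_+, \qquad [1_+,1_-]=V_{1,1},\quad [V_{1,1},1_\pm]=\pm 1_\pm\,.
\]
Setting $e=1_+$, $f=2\cdot 1_-$ and $h=2V_{1,1}$ yields $[h,e]=2e$, $[h,f]=-2f$ and $[e,f]=h$, so that $K(k)\cong sl_2(k)$ with its Tits--Kantor--Koecher $3$-grading.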
\begin{definition}[\cite{AF99}]\label{def:Kantor} A \emph{Kantor pair} is a pair of finite-dimensional vector spaces $(K_+,K_-)$ over $k$ equipped with a trilinear product \[ \lK\cdot,\cdot,\cdot\rK \colon K_\si\times K_{-\si}\times K_\si\to K_\si,\quad \si\in\{-1,1\}, \] satisfying the following two identities: \begin{compactitem} \item[(KP1)] $[\VK_{x,y}, \VK_{z,w}] = \VK_{\lK x,y,z\rK,w} - \VK_{z,\lK y,x,w \rK}$; \item[(KP2)] $\KK_{a,b}\VK_{x,y}+\VK_{y,x}\KK_{a,b}=\KK_{\KK_{a,b}x,y}$; \end{compactitem} where $\VK_{x,y}z:=\lK xyz\rK$ and $\KK_{a,b}z:=\lK azb \rK -\lK bza \rK$. \end{definition} There is a tight connection between Kantor pairs and Lie triple systems. In~\cite{AF99}, a $\ZZ$-graded Lie triple system $\TT=\bigoplus\limits_{i\in\ZZ}\TT_i$ is called \emph{sign-graded}, if $\TT_i=0$ for all $i\neq\pm 1$. By~\cite[Theorem 7]{AF99}, any Kantor pair $(K_+,K_-)$ gives rise to a natural structure of a sign-graded Lie triple system on $\TT=K_+\oplus K_-$ with the two non-zero graded components $\TT_1=K_+$ and $\TT_{-1}=K_-$. Let $\TT=\TT_1\oplus\TT_{-1}$ be a sign-graded Lie triple system over $k$. In \cite[p.\@ 532]{AF99} the $5$\dash graded Lie algebra $\G(\TT)=\bigoplus\limits_{i=-2}^2\G(\TT)_i$ is defined, which is called the standard graded embedding of $\TT$. If $\TT=K_+\oplus K_-$ is the Lie triple system corresponding to a Kantor pair $(K_+,K_-)$, the $5$\dash graded Lie algebra $\G(K_+\oplus K_-)$ is also called the standard graded embedding of this Kantor pair. According to the following result, every $5$-grading on a simple Lie algebra of Chevalley type over a field of characteristic $\neq 2,3$ arises from a Kantor pair. In particular, every isotropic adjoint simple algebraic group can be constructed from a simple Kantor pair. \begin{lemma}\label{lem:4.3.3} Let $k$ be a field of characteristic different from $2,3$. Let $\GG$ be an adjoint simple algebraic group over $k$. Let $\LL=\Lie(\GG)$ be its Lie algebra, and let $\LL=\bigoplus\limits_{i=-2}^2\LL_i$ be any $5$\dash grading on $\LL$ such that $\LL_1\oplus\LL_{-1}\neq 0$. Then $(\LL_1,\LL_{-1})$ is a central simple Kantor pair with respect to the triple product operation $\LL_\sigma\times \LL_{-\sigma}\times \LL_{\sigma}\to \LL_\sigma$ given by \[ \lK x,y,z \rK=-[[x,y],z], \] and its standard $5$\dash graded Lie algebra $\G=\G(\LL_1\oplus\LL_{-1})$ is canonically isomorphic to the graded subalgebra $[\LL,\LL]+k\zeta$ of $\LL$. \end{lemma} \begin{proof} By~\cite[Lemma 4.3.3]{BdMS} $(\LL_1,\LL_{-1})$ is a Kantor pair, and $\G(\LL_1\oplus\LL_{-1})$ is as required. It remains to prove centrality and simplicity. By~\cite[Lemma 4.1.6]{BdMS} the $k$-Lie algebra $[\LL,\LL]$ is central simple, and differs from $\LL$ only in the grading $0$ component. Let $(\LL_1,\LL_{-1})'$ be the Kantor pair which differs from $(\LL_1,\LL_{-1})$ by the sign of the triple product, i.e. $\lK x,y,z \rK'=[[x,y],z]$. Then $[\LL,\LL]$ envelops the Kantor pair $(\LL_1,\LL_{-1})'$ in the sense of~\cite{AllFauSmi}. Then by~\cite[Theorem 4.20]{AllFauSmi} $(\LL_1,\LL_{-1})'$ is central simple. Clearly, this is equivalent to $(\LL_1,\LL_{-1})$ being central simple. \end{proof} Now we can establish the converse as well, namely, that any simple Kantor pair over a field of characteristic $\neq 2,3$ arises from an isotropic simple algebraic group. \begin{theorem}\label{thm:kp} Let $(K_+,K_-)$ be a central simple Kantor pair over a field $k$ of characteristic $\neq 2,3$, and let $\G(K_+\oplus K_-)$ be its standard $5$-graded Lie algebra.
Then the algebraic $k$\dash group $\GG=\Aut(\G(K_+\oplus K_-))^\circ$ is an adjoint absolutely simple group of $k$\dash rank $\geq 1$, and $\G(K_+\oplus K_-)\cong [\Lie(\GG),\Lie(\GG)]+k\zeta$. \end{theorem} \begin{proof} Set $\LL=\G(K_+\oplus K_-)$, so that $\LL_1=K_+$, $\LL_{-1}=K_-$. Let $(\LL_1,\LL_{-1})'$ be the Kantor pair which differs from $(\LL_1,\LL_{-1})$ by the sign of the triple product, i.e. $\lK x,y,z \rK'=-\lK x,y,z \rK=[[x,y],z]$. Since $(\LL_1,\LL_{-1})$ is central simple, $(\LL_1,\LL_{-1})'$ is also central simple. Let $\mathcal{K}=\mathfrak{K}((\LL_1,\LL_{-1})')$ be the 5-graded Lie algebra associated to $(\LL_1,\LL_{-1})'$ in~\cite[\S~4.3]{AllFauSmi}. By construction, $\G(K_+\oplus K_-)=k\zeta+\mathcal{K}$. By~\cite[Corollary 4.14]{AllFauSmi} $\mathcal{K}$ is central simple. Then by Theorem~\ref{thm:Che-type} $\mathcal{K}$ is of Chevalley type, and by Theorem~\ref{thm:Che-alg} it is algebraic. Hence by~\cite[Theorem 4.1.8]{BdMS} $\GG=\Aut(\mathcal{K})^\circ$ is an adjoint absolutely simple group of $k$\dash rank $\geq 1$, and $\mathcal{K}\cong [\Lie(\GG),\Lie(\GG)]$. Then by~\cite[Lemma 4.1.6]{BdMS} we have $\Aut(\G(K_+\oplus K_-))^\circ\cong\GG$. \end{proof} \begin{corollary} Let $(K_+,K_-)$ be a simple Kantor pair over a commutative ring $k$ such that $2,3\in k^\times$. Then $(K_+,K_-)$ is non-degenerate in the sense of~\cite{GGLN}. \end{corollary} \begin{proof} Since $(K_+,K_-)$ is simple, $k$ does not have non-trivial ideals, and hence is a field. Extending the base field $k$, we may assume that $(K_+,K_-)$ is central simple. Then by Theorem~\ref{thm:kp} its standard 5-graded embedding $\LL$ is a central simple Lie algebra of Chevalley type. Hence $\LL\otimes_k\bar k$ is non-degenerate, and so is $\LL$. Then by~\cite[Corollary 2.5]{GGLN} $(K_+,K_-)$ is non-degenerate. \end{proof} \begin{lemma}\label{lem:struct-kan} Let $R$ be a commutative ring with $2,3\in R^\times$. Let $\LL=\mathfrak{g}(\LL_{-1}\oplus\LL_1)$ be the 5-graded Lie algebra over $R$ associated with a Kantor pair $(\LL_{-1},\LL_1)$ over $R$. Then $(\LL_{-1},\LL_1)$ is the Kantor pair associated with a structurable algebra $\A$ over $R$ if and only if there exist $u\in\LL_{1}$, $v\in\LL_{-1}$ such that $\zeta=[u,v]$ is the grading derivation of $\LL$. \end{lemma} \begin{proof} Assume first that $\LL$ is graded-isomorphic to $K(\A)$. Let $1\in\LL_1$, $\hat 1\in\LL_{-1}$ denote the images of $1\in\A$. Then $[1,\hat{1}]\in\LL_0$ acts as the grading derivation $\zeta$ on $\LL\cong K(\A)$. Conversely, assume that $\zeta=[u,v]$ for some $u\in\LL_{1}$, $v\in\LL_{-1}$. By~\cite[Corollary 14]{AF99} an element $(x,0)\in\LL_1\oplus\LL_2$ is 1-invertible if and only if there is $\hat x\in\LL_{-1}$ such that $V_{x,\hat x}=2\id_{\LL_1}$, $V_{\hat x,x}=2\id_{\LL_{-1}}$. Since $[u,v]=\zeta$, we have $V_{u,v}=-\id_{\LL_1}$ and $V_{v,u}=-\id_{\LL_{-1}}$. Then $x=u$ is 1-invertible with $\hat x=-2v$. Since $(\LL_1,\LL_{-1})$ contains a 1-invertible element of the form $(x,0)$, by~\cite[Corollary 15]{AF99} it is isomorphic to a Kantor pair associated with a structurable algebra. \end{proof} \begin{lemma}\label{lem:A-zeta} Let $k$ be a field of characteristic different from $2,3$. Let $\GG$ be an adjoint simple algebraic group over $k$. Let $\LL=\Lie(\GG)$ be its Lie algebra, and let $\LL=\bigoplus\limits_{i=-2}^2\LL_i$ be any $5$\dash grading on $\LL$ such that $\LL_1\oplus\LL_{-1}\neq 0$, and let $\zeta\in\LL_0$ be the grading derivation.
Then $\zeta=[u,v]$ for some $u\in\LL_{1}$, $v\in\LL_{-1}$ if and only if there is a structurable algebra $\A$ over $k$ such that $[\LL,\LL]$ is graded-isomorphic to $K(\A)$. \end{lemma} \begin{proof} Assume first that $[\LL,\LL]$ is graded-isomorphic to $K(\A)$. Let $1\in\LL_1$, $\hat 1\in\LL_{-1}$ denote the images of $1\in\A$. Then $[1,\hat{1}]\in\LL_0$ acts as the grading derivation $\zeta$ on $[\LL,\LL]$. Since $\LL\cong\Der([\LL,\LL])$ by~\cite[Lemma 4.1.6]{BdMS}, $[1,\hat{1}]=\zeta$. Conversely, assume that $\zeta=[u,v]$ for some $u\in\LL_{1}$, $v\in\LL_{-1}$. Consider the Kantor pair $(\LL_{-1},\LL_1)$ with the triple product operation $\LL_\sigma\times \LL_{-\sigma}\times \LL_{\sigma}\to \LL_\sigma$ given by \[ \lK x,y,z \rK=-[[x,y],z]. \] By Lemma~\ref{lem:4.3.3} $\G(\LL_1\oplus\LL_{-1})$ is canonically isomorphic to the graded subalgebra $[\LL,\LL]+k\zeta$ of $\LL$. Then by Lemma~\ref{lem:struct-kan} the pair $(\LL_{-1},\LL_1)$ is associated to a structurable algebra $\A$. Then $K(\A)$ is graded-isomorphic to $[\G(\LL_1\oplus\LL_{-1}),\G(\LL_1\oplus\LL_{-1})]\cong [\LL,\LL]$. \end{proof} \section{$5$-Gradings that correspond to structurable algebras} \begin{definition}\label{def:rootsys} Let $\GG$ be an algebraic group over a field $k$ and $\GT\subseteq\GG$ be a split $n$-dimensional $k$\dash subtorus of $\GG$. Let $X^*(\GT)\cong \ZZ^n$ be the group of characters of $\GT$, and let \[ \Lie(\GG)=\bigoplus_{\alpha\in X^*(\GT)}\Lie(\GG)_\alpha \] be the $\ZZ^n$-grading on $\Lie(\GG)$ induced by the adjoint action of $\GT$. We call \[ \Phi(\GT,\GG)=\{\alpha\in X^*(\GT) \mid \Lie(\GG)_\alpha\neq 0\} \] the set of \emph{roots of $\GG$ with respect to $\GT$}. \end{definition} If $\GG$ is a reductive algebraic group over $k$ and $\GT$ is a maximal split $k$\dash subtorus of $\GG$, then $\Phi(\GT,\GG)\setminus \{0\}$ is a root system in the sense of Bourbaki~\cite{BorelTits}. By abuse of language, we call $\Phi(\GT,\GG)$ a root system of $\GG$. Let $\Phi$ be a root system and $\Pi\subseteq\Phi$ be a system of simple roots. For any $\alpha\in\Phi$ we write \[ \alpha=\sum\limits_{\beta\in\Pi}m_\beta(\alpha)\beta, \] where the coefficients $m_\beta(\alpha)$ are either all non-negative, or all non-positive. Once $\Pi$ is fixed, we denote the corresponding sets of positive and negative roots by $\Phi^\pm$. Recall that for any pair of opposite parabolic subgroups $\GP_{\pm }$ of $\GG$ with unipotent radicals $\GU_{\pm}$, there is a maximal split $k$\dash subtorus $\GT$ of $\GG$ such that \[ \GT\subseteq\GP_+\cap\GP_-, \] and a system of simple roots $\Pi\subseteq\Phi$ and a non-empty subset $J\subseteq\Pi$ such that \begin{equation}\label{eq:lieP} \begin{aligned} &\Phi(\GT,\GP_\sigma)=\Phi^{\sigma}\cup\bigl(\Phi\cap\ZZ(\Pi\setminus J)\bigr),\\ &\Phi(\GT,\GU_\sigma)=\Phi^{\sigma}\setminus\ZZ (\Pi\setminus J),\\ &\Phi(\GT,\GP_+\cap\GP_-)=\Phi\cap\ZZ (\Pi\setminus J). \end{aligned} \end{equation} The set $t(\GP_+)=\Pi\setminus J$ is called the type of $\GP_+$; it is a system of simple roots of the root system of the Levi subgroup $\GP_+\cap\GP_-$ of $\GP_+$. In general, the type of a parabolic subgroup $P$ is defined as the type of its base change to an algebraic closure of the field of definition. \begin{notation} Let $G$ be a reductive algebraic group over a field $k$, and let $\lambda:\Gm\to G$ be a cocharacter of $G$ over $k$. Then $\lambda$ induces a $\ZZ$-grading on $\Lie(G)$, and we denote by $\Lie(G)(\lambda,i)$ the $i$-th component of this grading, i.e.
$$ \Lie(G)(\lambda,i)=\{ v\in\Lie(G)\ |\ \lambda(t)\cdot v=t^iv\mbox{ for any }t\in\Gm(k)=k^\times\}. $$ By~\cite[Exp.\@~XXVI, Prop. 6.1]{SGA3} there is a unique pair of (not necessarily proper) opposite parabolic subgroups $P(\lambda)$ and $P(-\lambda)$ in $G$ such that $$ \Lie(P(\lambda))=\bigoplus_{i\ge 0}\Lie(G)(\lambda,i)\quad\mbox{and}\quad \Lie(P(-\lambda))=\bigoplus_{i\ge 0}\Lie(G)(\lambda,-i). $$ We denote by $C_G(\lambda)$ the centralizer of $\lambda(\Gm)$ in $G$; this is a Levi subgroup $P(\lambda)\cap P(-\lambda)$ of $P(\lambda)$. \end{notation} In order to classify $5$-gradings that correspond to structurable algebras, we rely on the theory of nilpotent orbits in Lie algebras of simple algebraic groups over an algebraically closed field. This theory originates from the work of E. Dynkin (1952)~\cite{Dyn}, with further developments by B. Kostant, G. E. Wall, R. W. Richardson, T. A. Springer, R. Steinberg, G. B. Elkington, P. Bala and R. Carter, K. Pommerening, and many others. As a result, the particular statements we need are widely scattered in the literature, and we cite them from more recent sources, where they appear in the most explicit form. We recall the essence of Dynkin's classification of nilpotent elements in complex simple Lie algebras. Let $\LL_{\CC}$ be a simple Lie algebra over $\CC$, and let $e\in\LL_{\CC}$ be a nilpotent element. By the Jacobson--Morozov theorem, $\LL_{\CC}$ contains an $sl_2$-triple of the form $\{e,h,f\}$. Let $\mathcal{H}\le\LL_{\CC}$ be a Cartan subalgebra of $\LL_{\CC}$ containing $h$. Let $\Phi$ be the root system of $\LL_{\CC}$, and let $e_\alpha$, and $h_\alpha=[e_\alpha,e_{-\alpha}]$, $\alpha\in\Phi$, be the standard root vectors in a Chevalley basis of $\LL_{\CC}$ with respect to $\mathcal{H}$. There is a choice of a system of simple roots $\Pi\subseteq\Phi$ such that for any $\alpha\in\Pi$ one has $\alpha(h)\ge 0$. Dynkin established that, moreover, $\alpha(h)\in\{0,1,2\}$. The Dynkin diagram of $\Phi$ with the integers $\alpha(h)$ associated to the nodes corresponding to roots $\alpha\in\Pi$ is called a \emph{weighted Dynkin diagram} of $e$. It is uniquely determined by $e$, and the weighted diagrams of two nilpotents $e,e'$ coincide if and only if $e$ and $e'$ are conjugate by an inner automorphism of $\LL_{\CC}$~\cite[Theorems 8.1 and 8.3]{Dyn}. Let $\LL_{\ZZ}$ be the $\ZZ$-Lie subalgebra of $\LL_{\CC}$ generated by all $e_\alpha$ and $h_\alpha$, $\alpha\in\Phi$. Then $\LL_{\ZZ}$ is a $\ZZ$-form of $\LL_{\CC}$, i.e. $\LL_{\CC}=\LL_{\ZZ}\otimes_{\ZZ}\CC$. Let $k$ be any algebraically closed field, and let $G^{sc}$ be a simply connected simple algebraic group over $k$ of the same type $\Phi$ as $\LL_{\CC}$, and let $G^{ad}$ be the corresponding adjoint group. Then $\Lie(G^{sc})\cong\LL_{\ZZ}\otimes_{\ZZ} k$, see e.g.~\cite[Theorem 23.72]{Milne}. Define the cocharacter $\lambda:\Gm\to G^{ad}\le \Aut(\Lie(G^{sc}))$ in such a way that $\lambda(t)\cdot e_{\alpha}=t^{\alpha(h)}e_\alpha$ and $\lambda(t)\cdot h_\alpha=h_\alpha$ for any $\alpha\in\Pi\cup(-\Pi)$ and $t\in k^\times$. Thus, we associate to any weighted Dynkin diagram a $k$-cocharacter $\lambda:\Gm\to G^{ad}$. The classification of weighted Dynkin diagrams corresponding to nilpotents involves the notion of a distinguished parabolic subgroup. Note that the classification of types of distinguished parabolic subgroups, as defined below, is independent of the characteristic of the base field.
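For orientation, here is the smallest instance of this construction (a routine $sl_2$-computation, included for illustration). Let $\LL_{\CC}=sl_2(\CC)$ be of type $A_1$, and let $e=e_\alpha$, $h=h_\alpha$, $f=e_{-\alpha}$ be the standard $sl_2$-triple. Then $[h,e]=2e$, i.e. $\alpha(h)=2$, so the weighted Dynkin diagram of the regular nilpotent $e$ is the single node with label $2$, and the associated cocharacter satisfies
\[
\lambda(t)\cdot e_{\pm\alpha}=t^{\pm 2}e_{\pm\alpha},\qquad \lambda(t)\cdot h_\alpha=h_\alpha\,,
\]
so that $\Lie(G)(\lambda,\pm 2)=k\,e_{\pm\alpha}$ and $\Lie(G)(\lambda,0)=k\,h_\alpha$. The zero nilpotent corresponds to the diagram with label $0$.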
\begin{definition}\cite[\S~2.6]{LieSe-conj} Let $G$ be a semisimple algebraic group over a field $k$, and let $\Phi=\Phi(T,G)$ be the root system of $G$. A parabolic subgroup $P$ of $G$ is called \emph{distinguished} if \begin{equation}\label{eq:dist} \dim L_P=\dim \hspace{-2ex}\bigoplus\limits_{\substack{\alpha\in\Phi \colon \\[.6ex] \sum\limits_{\beta\in J}m_\beta(\alpha)=1}} \hspace*{-2ex}\Lie(G)_\alpha, \end{equation} where $L_P$ is a Levi subgroup of $P$, and $\Pi\setminus J$ is the type of $P$ in a system of simple roots $\Pi$ of $\Phi$. \end{definition} \begin{theorem}\cite{Premet03,LieSe-conj}\label{thm:nilp-orbits} Let $G$ be an adjoint simple algebraic group over an algebraically closed field $k$ of type $\Phi$ such that $\Char(k)\neq 2$ if $\Phi=B_l,C_l$ ($l\ge 2$) or $\Phi=D_l$ ($l\ge 4$), and $\Char(k)\neq 2,3$ if $\Phi=E_6,E_7,E_8,G_2,F_4$. \begin{enumerate} \item Let $\lambda:\Gm\to G$ be a $k$-cocharacter of $G$ corresponding to a weighted Dynkin diagram of a nilpotent element in a complex simple Lie algebra of the same type as $G$. Then \begin{enumerate}[(i)] \item $C_G(\lambda)$ has a unique dense open orbit $V$ in $\Lie(G)(\lambda,2)$. \item For any $e\in V(k)$, $C_G(e)\le P(\lambda)$. \item For any $e\in V(k)$, let $C(\lambda,e)=C_G(\lambda)\cap C_G(e)$. Then $C(\lambda,e)^\circ$ is a reductive subgroup of $G$, and for any maximal torus $S$ of $C(\lambda,e)$ the parabolic subgroup $Q=P(\lambda)\cap H$ is a distinguished parabolic subgroup of the semisimple group $H=[C_G(S),C_G(S)]$. Moreover, $\lambda(\Gm)\le H$, and $e$ lies in the dense open orbit of $C_H(\lambda)$ in $\Lie(H)(\lambda,2)$. \end{enumerate} \item Conversely, for any nilpotent element $e\in\Lie(G)$ there is a cocharacter $\lambda:\Gm\to G$ as above, such that $e$ belongs to the unique dense open orbit of $C_G(\lambda)$ in $\Lie(G)(\lambda,2)$. \end{enumerate} \end{theorem} \begin{proof} (1) Set $\LL=\Lie(G)$, $P=P(\lambda)$, $L_P=C_G(\lambda)$, and $L_Q=C_H(\lambda)$ for short. Clearly, $L_P$ is a Levi subgroup of $P$ and $L_Q$ is a Levi subgroup of $Q$. Assume first that $G$ is not of type $E_8$ if $\Char(k)=5$. Then the characteristic of $k$ is good for $G$. There is a reductive group $\tilde G$ over $k$ such that $[\tilde G,\tilde G]$ is the simply connected group isogenous to $G$, and $\Lie(\tilde G)$ admits a non-degenerate $\tilde G$-invariant trace form, see~\cite[2.3]{Premet03}. To prove our theorem, clearly, we can replace $G$ by $\tilde G$; this makes other results of Premet applicable. By the discussion before~\cite[Theorem 2.3]{Premet03} $L_P$ has a dense open orbit $V$ in $\LL(\lambda,2)$. By~\cite[Theorem 2.3 (ii)]{Premet03} $C_G(e)\le P$ for any $e\in V(k)$. By~\cite[Theorem 2.3 (iii)]{Premet03} $C(\lambda,e)$ is a reductive subgroup of $G$. The remaining statements of (iii) follow from~\cite[Proposition 2.5]{Premet03}. Assume that $G$ is of type $E_8$ and $\Char(k)\neq 2,3$. Consider the table~\cite[Table 22.1.1]{LieSe-conj}. By~\cite[Theorem 15.1]{LieSe-conj} the weighted diagrams of complex nilpotent elements are exactly the ones in the 2nd column of this table, and, conversely, for any $\lambda$ corresponding to such a diagram, and any field $k$ as above, there is a nilpotent element $e\in\LL(\lambda,2)$ such that $e^P$ is dense in $\bigoplus_{i\ge 2}\LL(\lambda,i)$ and $C_G(e)\le P$. This implies that $e^{L_P}=V$ is a dense open orbit of $L_P$ in $\LL(\lambda,2)$. Since any two such orbits would intersect, this orbit is unique.
Clearly, it is enough to establish (iii) for this particular element $e\in V(k)$. By~\cite[Theorem 1 (c)]{LieSe-conj} $C_G(e)=C(\lambda,e)R_u(C_G(e))$, where $R_u(C_G(e))$ is the unipotent radical of $C_G(e)$ and $C(\lambda,e)^\circ$ is a reductive group (note that, contrary to our conventions, in~\cite{LieSe-conj} reductive groups are not required to be connected). Let $S$ be a maximal torus of $C(\lambda,e)$; then $C_G(S)$ is a Levi subgroup of a parabolic subgroup of $G$ by~\cite[Lemma 2.2]{LieSe-conj}. Set $H=[C_G(S),C_G(S)]$. Clearly, $e\in \LL(\lambda,2)\cap\Lie(H)$, since $e$ is nilpotent and belongs to $\Lie(C_G(S))$. By the proof of~\cite[Lemma 2.13]{LieSe-conj} $e$ is a distinguished element of $H=[C_G(S),C_G(S)]$, i.e. $C_{H}(e)^\circ$ is a unipotent group. Since $S\le C_G(e)$, this implies that $S=\Cent(C_G(S))^\circ$. On the other hand, by the actual statement of~\cite[Lemma 2.13]{LieSe-conj}, $C_G(S)$ is conjugate to the Levi subgroup $\bar L$ of a parabolic subgroup of $G$ used in the original construction of the 1-dimensional torus $\lambda(\Gm)=T$ given in~\cite[Lemma 15.3 (i)]{LieSe-conj}. Let $\bar S=\Cent(\bar L)^\circ$, then $\bar S\le C_G(e)$. Since $S$ and $\bar S$ are conjugate in $G$, they have the same dimension, and hence they are both maximal tori in $C_G(e)$. Moreover, they are both contained in $C(\lambda,e)^\circ\le C_G(e)^\circ$, hence they are conjugate in this group, i.e. by an element centralizing $T$. Then the remaining statements of our claim (iii) follow from the corresponding properties of $T$ with respect to $\bar L$ stated in~\cite[Lemma 15.3 (i)]{LieSe-conj}. (2) Let $e\in\Lie(G)$ be any nilpotent element. The classification of nilpotent classes~\cite[Theorem 1 (c)]{LieSe-conj} implies that there is a cocharacter $\lambda:\Gm\to G$ that corresponds to a weighted Dynkin diagram and such that $e$ belongs to the dense open orbit of $C_G(\lambda)$ in $\Lie(G)(\lambda,2)$. Moreover, the explicit classification of occurring weighted Dynkin diagrams~\cite[Theorem 3.1; Tables 22.1.1--22.1.5]{LieSe-conj} is independent of the ground field under the assumption $\Char k\neq 2,3$, hence any such weighted Dynkin diagram is a diagram of a complex nilpotent. \end{proof} \begin{lemma}\label{lem:H-2} In the setting of Theorem~\ref{thm:nilp-orbits} (1) (iii), assume moreover that $H$ is not of type $E_8$ if $\Char(k)=5$. Then $[\Lie(H)(\lambda,-2),\, e]=\Lie(H)(\lambda,0)$. \end{lemma} \begin{proof} Let $\Psi$ be the root system of $H$, let $\Sigma$ be a system of simple roots of $\Psi$, and let $J\subset\Sigma$ be the set of simple roots corresponding to the parabolic subgroup $Q=P(\lambda)\cap H$ of $H$. Since $Q$ is distinguished, every $\alpha\in J$ has weight $2$ with respect to $\lambda$, see~\cite[Lemma 10.3]{LieSe-conj}. Hence one has $$ \dim\bigl(\Lie(H)(\lambda,2)\bigr)=\dim\bigl(\Lie(H)(\lambda,-2)\bigr)=\dim\bigl(\Lie(H)(\lambda,0)\bigr). $$ Hence it remains to prove that $[u,e]\neq 0$ for any $0\neq u\in\Lie(H)(\lambda,-2)$. Since $(\Psi,\Char(k))\neq (E_8,5)$ by assumption, this follows from~\cite[Theorem 2.3 (iv)]{Premet03}. \end{proof} Assume that $k$ is a field of characteristic $\neq 2,3$, and $G$ is an adjoint simple algebraic group over $k$. Denote by $\bar k$ an algebraic closure of $k$. Let $\LL=\Lie(G)$ be the Lie algebra of $G$, and let $\LL=\bigoplus\limits_{i\in\ZZ}\LL_i$ be any $\ZZ$-grading on $\LL$ such that $\LL\neq\LL_0$.
By~\cite[Lemma 4.1.6]{BdMS} one has $G=\Aut(\Lie(G))^\circ$, hence there is a unique closed embedding of a 1-dimensional split $k$\dash torus $\lambda:\GS\to G$, such that $\LL_i=\Lie(G)(\lambda,i)$ for all $i\in\ZZ$. \begin{definition} Assume that the $\ZZ$-grading $\Lie(G)=\bigoplus_{i\in\ZZ}\Lie(G)_i$ of the Lie algebra of an adjoint simple algebraic group $G$ corresponds to the cocharacter $\lambda:\Gm\to G$. We define the \emph{type} of the grading to be the type $t(P(\lambda))$ of the corresponding positive parabolic subgroup $P(\lambda)$. \end{definition} \begin{theorem}\label{thm:A-e} Let $\LL=\bigoplus\limits_{i=-2}^2\LL_i$ be a $5$-grading on the Lie algebra $\LL=\Lie(G)$ of $G$ such that $\LL_1\oplus\LL_{-1}\neq 0$, and let $\lambda:\Gm\to G$ be the corresponding cocharacter of $G$. Let $\Delta$ be the weighted Dynkin diagram such that simple roots in $t(P(\lambda))$ have weight 0, and roots in $\Pi\setminus t(P(\lambda))$ have weight 2. Then $[\LL,\LL]$ is graded-isomorphic to $K(\A)$ for a structurable algebra $\A$ over $k$ if and only if $\Delta$ is a weighted Dynkin diagram of a nilpotent element in a complex simple Lie algebra of the same type as $G$. \end{theorem} \begin{proof} Consider the cocharacter $2\lambda:\Gm\to G$, so that $\LL_i=\Lie(G)(2\lambda,2i)$, $i\in\ZZ$. Then $\Delta$ is the weighted Dynkin diagram corresponding to $2\lambda$ over $\bar k$. Assume the condition on $\Delta$ is satisfied. We show that there are $u\in\LL_{-1}$, $e\in\LL_1$ such that $[u,e]=\zeta$, the grading derivation of $\LL$; then $[\LL,\LL]$ is graded-isomorphic to $K(\A)$ by Lemma~\ref{lem:A-zeta}. Let $k[\epsilon]$, $\epsilon^2=0$, be the ring of dual numbers, and let $1+\epsilon\in\Gm(k[\epsilon])$ be the element corresponding to the unit element of $\Lie(\Gm)(k)\cong k$. Then $2\zeta=2\lambda(1+\epsilon)\in\Lie(G)$ is the grading derivation of $\Lie(G)$ with respect to $2\lambda$. Assume for the moment that $k$ is algebraically closed. We have $$ 2\zeta\in\Lie(G)(2\lambda,0)\cap\Lie(2\lambda(\Gm))\subseteq\Lie(H)(2\lambda,0), $$ where $H$ is as in Theorem~\ref{thm:nilp-orbits}. By Lemma~\ref{lem:H-2} there is a dense open orbit $V\subseteq\LL_1=\Lie(G)(2\lambda,2)$ of $L_P$, such that for any $e\in V(k)$ one has $[\Lie(H)(2\lambda,-2),\, e]=\Lie(H)(2\lambda,0)$. In particular, $\zeta\in [\Lie(G)(2\lambda,-2),e]=[\LL_{-1},e]$. Now let $k$ be not necessarily algebraically closed, and let $\bar k$ be its algebraic closure. Note that $P(\pm\lambda)$ and $C_G(\lambda)=P(\lambda)\cap P(-\lambda)$ are defined over $k$. Since $C_G(\lambda)\times_k \bar k$ has a unique dense open orbit $V$ in $\LL_1\otimes_k \bar k$, the open subvariety $V$ of the affine space $\LL_1$ is defined over $k$ (although the action of $C_G(\lambda)(k)$ on it does not have to be transitive). If $k$ is infinite, then $V(k)\neq\emptyset$ just because $V$ is an open subvariety of an affine space. If $k$ is finite, then $V(k)\neq\emptyset$ by~\cite[2.7]{SpSt-conj}, since $V$ is a homogeneous space for $C_G(\lambda)$. Thus, there is a $k$-point $e\in V(k)$. Then $\zeta\in [\LL_{-1},e]$, since the same holds over $\bar k$. Next, assume that $[\LL,\LL]$ is graded-isomorphic to $K(\A)$, and let us show that $\Delta$ is a weighted Dynkin diagram of a nilpotent element in a complex simple Lie algebra of type $\Phi$. We can assume that $k$ is algebraically closed without loss of generality. Let $e=1_+\in\LL_1$ and $f=1_-\in\LL_{-1}$ be the elements representing the unit of the structurable algebra $\A$. By the very definition of $K(\A)$, we conclude that $[e,f]=\zeta\in\LL_0$.
Furthermore, for any $0\neq x\in\A$ one has $V_{1,x}(1)=2\bar x-x\neq 0$, since $2\bar x=x$ implies $2x=\bar x$ and $3\bar x=0$, whence $x=0$. By Theorem~\ref{thm:nilp-orbits} there is a cocharacter $\lambda':\Gm\to G$ that corresponds to a weighted Dynkin diagram of a complex nilpotent, and such that $e$ belongs to the dense open orbit of $C_G(\lambda')$ in $\Lie(G)(\lambda',2)$. Let $\zeta'$ be the grading derivation of $\LL$ corresponding to $\lambda'$. Then $[\zeta',e]=2e$. Assume for the moment that the subgroup $H$ corresponding to $\lambda'$ and $e$ is not of type $E_8$ if $\Char k=5$. Then by Lemma~\ref{lem:H-2} there is $f'\in\Lie(G)(\lambda',-2)$ such that $[e,f']=\zeta'$. We have $\lambda(\Gm),\lambda'(\Gm)\le N_G(k\cdot e)$. Hence after conjugating $\lambda'(\Gm)$ by an element of $C_G(e)(k)\le N_G(k\cdot e)(k)$ we can assume that $\lambda'(\Gm)$ and $\lambda(\Gm)$ lie in the same maximal torus of $N_G(k\cdot e)$, and thus centralize each other. In particular, $\zeta'\in\LL_0$, and hence without loss of generality $f'\in\LL_{-1}$. Then $2\zeta-\zeta'=[e,2f-f']$ and $[2\zeta-\zeta',e]=0$. In other words, $[[e,2f-f'],e]=0$. However, $[e,2f-f']$ acts on $\LL_1$ as $V_{1,2f-f'}$ acts on $\A$, whence $2f-f'=0$ by the above computation. Thus $2\zeta=\zeta'$, and we are done. Assume that $H=G$ has type $E_8$; then $P(\lambda')$ is a distinguished parabolic subgroup of $G$. We claim that this case cannot occur in our setting. The types of distinguished parabolic subgroups are listed in~\cite[Table 13.2]{LieSe-conj}. Denote by $\LL=\bigoplus_{i\in\ZZ}\LL_{[i]}$ the grading on $\LL$ induced by $\lambda'$. In all cases, one readily sees that $\LL_{[2i]}\neq 0$ for all $1\le i\le N$ for some $N\ge 5$. Then by~\cite[proof of Propositions 13.4 and 13.5]{LieSe-conj}, under the assumption $\Char k\neq 2,3$ there is a nilpotent $e'\in\LL_{[2]}$ such that ${e'}$ lies in the dense orbit of $C_G(\lambda')$ in $\LL_{[2]}$, and one has $\bigl(\ad(e')|_{\bigoplus_{i\ge 0}\LL_{[i]}}\bigr)^5\neq 0$. Clearly, $e$ and $e'$ are $C_G(\lambda')$-conjugate. However, since $e\in\LL_1$, and $\LL$ is $5$-graded, one has $\ad(e)^5=0$, hence $\ad(e')^5=0$ as well, a contradiction. \end{proof} \begin{lemma}\label{lem:5-lambda} Let $\LL=\bigoplus_{i=-2}^2\LL_i$ be a $5$-grading on the Lie algebra $\LL=\Lie(G)$ of $G$ such that $\LL_1\oplus\LL_{-1}\neq 0$, and let $\lambda:\Gm\to G$ be the corresponding cocharacter of $G$. Let $\tilde\alpha$ be the root of maximal height in $\Phi$ with respect to $\Pi$. Set $J=\Pi\setminus t(P(\lambda))$. Then either (a) $J=\{\alpha_1\}$, or (b) $J=\{\alpha_1,\alpha_2\}$, for some simple roots $\alpha_1,\alpha_2\in\Pi$. If (a) holds, then $m_{\alpha_1}(\tilde\alpha)=1$ or $2$. If (b) holds, then $m_{\alpha_1}(\tilde\alpha)=m_{\alpha_2}(\tilde\alpha)=1$. In both cases $\lambda(t)\cdot e_\alpha=t e_\alpha$ for any $\alpha\in J$ and $t\in k^\times$. \end{lemma} \begin{proof} By~\cite[Lemma 4.2.2]{BdMS} the root system $\Phi\cap\ZZ J$ is of type $A_1$, $BC_1$, $A_2$, or $A_1\times A_1$, and for any $-2\leq i\leq 2$ we have \begin{equation}\label{eq:Li} \LL_i = \hspace{-2ex}\bigoplus\limits_{\substack{\alpha\in\Phi \colon \\[.6ex] \sum\limits_{\beta\in J}m_\beta(\alpha)=i}} \hspace*{-2ex}\Lie(G)_\alpha. \end{equation} Then $\lambda(t)e_\alpha=t e_\alpha$ for $\alpha\in J$ by the definition of $\lambda$. Since $\sum_{\alpha\in J}m_\alpha(\tilde\alpha)\le 2$, the remaining claims are clear. \end{proof} \begin{theorem} Let $G$ be an adjoint simple algebraic group over a field $k$, $\Char k\neq 2,3$.
Let $\LL=\bigoplus\limits_{i=-2}^2\LL_i$ be a $5$-grading on the Lie algebra $\LL=\Lie(G)$ of $G$ such that $\LL_1\oplus\LL_{-1}\neq 0$. Then $[\LL,\LL]$ is graded-isomorphic to $K(\A)$ for a structurable $k$-algebra $\A$ if and only if the type of the grading is the complement in $\Pi$ of one of the sets $J$ of simple roots of the root system $\Phi$ of $G$ listed in the following table. \end{theorem} {\renewcommand{\arraystretch}{1.3} \begin{tabular}{r|l} $\Phi$& $J$\\ \cline{1-2} $A_l$, $l\ge 1$ & $\{\alpha_i,\alpha_{l+1-i}\}$, $1\le i\le (l+1)/3$;\\ & $\{\alpha_{(l+1)/2}\}$, if $l$ is odd\\ \cline{1-2} $B_l$, $l\ge 2$ & $\{\alpha_i\}$, $1\le i\le (2l+1)/3$\\ \cline{1-2} $C_l$, $l\ge 3$ & $\{\alpha_{i}\}$, $1\le i\le 2l/3$, $i$ is even; $\{\alpha_l\}$\\ \cline{1-2} $D_l$, $l\ge 4$ & $\{\alpha_i\}$, $1\le i\le 2l/3$; \\ & $\{\alpha_{l-1}\}$ and $\{\alpha_l\}$, if $l$ is even\\ \cline{1-2} $E_6$ & $\{\alpha_1,\alpha_6\}$; $\{\alpha_2\}$\\ \cline{1-2} $E_7$ & $\{\alpha_1\}$; $\{\alpha_2\}$; $\{\alpha_6\}$; $\{\alpha_7\}$\\ \cline{1-2} $E_8$ & $\{\alpha_1\}$; $\{\alpha_8\}$\\ \cline{1-2} $F_4$ & $\{\alpha_1\}$; $\{\alpha_4\}$\\ \cline{1-2} $G_2$ & $\{\alpha_2\}$\\ \end{tabular} } \begin{proof} Let $\lambda:\Gm\to G$ be the cocharacter of $G$ corresponding to the grading. By Theorem~\ref{thm:A-e} it remains to check whether $2\lambda$ is a $k$-cocharacter corresponding to a weighted Dynkin diagram of a nilpotent element in the complex case. By Lemma~\ref{lem:5-lambda} one has $2\lambda(t)\cdot e_\alpha=t^2e_\alpha$ for all $\alpha\in J$ and $2\lambda(t)\cdot e_\alpha=t^0e_\alpha$ for all $\alpha\in\Pi\setminus J$. If $\Phi$ is of exceptional type, then one readily checks that in all cases $2\lambda$ is as required, since its weighted Dynkin diagram occurs in the Tables 22.1.1--22.1.5 of nilpotent conjugacy classes in~\cite{LieSe-conj}. Assume $\Phi$ is of classical type. We use the descriptions of weighted Dynkin diagrams of nilpotent orbits given in~\cite{CMc}. {\bf Case $\Phi=A_l$.} In the notation of~\cite[\S~3.6]{CMc}, we have $n=l+1$ and $h_1-h_n=2|J|\in\{2,4\}$. We need to describe the suitable partitions $n=\sum_{i=1}^n d_i$ with non-negative parts. By definition, $d_i\ge 1$ for $1\le i\le k$ and $d_i=0$ for $k+1\le i\le n$. If $h_1-h_n=2$, then $d_i\in \{1,2\}$ for all $1\le i\le k$. Then the sequence $h_1\ge h_2\ge\ldots\ge h_n$ contains numbers $1$ and $-1$ repeated $m\ge 1$ times each, and $k-m$ zeroes. Since $h_i-h_{i+1}=2$ only for one $i$ (a zero in the sequence would produce a label equal to $1$, while all labels of $\Delta$ are $0$ or $2$), we have $k-m=0$ and $n=2k$. In particular, $l$ is odd, and $J=\{\alpha_{k}\}=\{\alpha_{(l+1)/2}\}$. If $h_1-h_n=4$, then $d_i\in\{1,2,3\}$ for all $1\le i\le k$. Then the sequence $h_1\ge h_2\ge\ldots\ge h_n$ contains numbers $2,0,-2$ repeated $m\ge 1$ times each, numbers $1,-1$ repeated $m'$ times each, and $k-m-m'$ zeroes. Since $h_i-h_{i+1}=2$ for exactly two $i$'s, we conclude that $m'=0$. Then $l+1=n=3m+(k-m)$, hence $l+1\ge 3m$ and $J=\{\alpha_m,\alpha_{l+1-m}\}$, as required. {\bf Case $\Phi=B_l$.} In the notation of~\cite[Lemma 5.3.3]{CMc}, we have $n=l$. Nilpotent orbits are classified by partitions $2n+1=\sum_{i=1}^{2n+1}d_i$ of $2n+1$ in which even parts occur with even multiplicity~\cite[Theorem 5.1.2]{CMc}. In~\cite[Lemma 5.3.3]{CMc}, $h_1$ is the sum of labels of the weighted Dynkin diagram, hence $h_1=2$. Then $d_i\in\{1,2,3\}$. If $h_n=2$, then $h_1=h_2=\ldots=h_n=2$, which is not possible, since there are not enough zeroes. Hence the label $2$ occurs only for one $i$ between $1$ and $n-1$, and $h_1=\ldots=h_i=2$, $h_{i+1}=\ldots=h_n=0$.
Then the partition consists of $i$ numbers $3$ and $2n+1-3i$ numbers $1$. The condition on parity is vacuous. Since the partition contains $2n+1-3i\ge 0$ parts equal to $1$, we get $3i\le 2n+1$. {\bf Case $\Phi=C_l$.} In the notation of~\cite{CMc}, we have $n=l$. Nilpotent orbits are classified by partitions $2n=\sum_{i=1}^{2n}d_i$ of $2n$ in which odd parts occur with even multiplicity~\cite[Theorem 5.1.3]{CMc}. In~\cite[Lemma 5.3.1]{CMc}, $h_1+h_n=2$ is the sum of labels of the weighted Dynkin diagram. Since $2h_n$ is the label of $\alpha_l$, we have $h_n=0$ or $h_n=1$. If $h_n=1$, then $h_1=1$, and hence $h_i=1$ for all $i$. Then $d_i=2$ for all non-zero $d_i$, hence the partition is $2n=2+\ldots+2$. The condition on odd parts is satisfied, hence $J=\{\alpha_l\}$ is valid. If $h_n=0$, then $h_1=2$ and $d_i\in\{1,2,3\}$ for $1\le i\le 2n$. Since $h_i-h_{i+1}=2$ only for one $i$ between $1$ and $n-1$, we conclude that $h_1=\ldots=h_i=2$ and $h_{i+1}=\ldots=h_{n-1}=0$. Then $2n$ is partitioned into the sum of $i$ times number $3$, and $2n-3i\ge 0$ times number $1$, where $i$ is even. Summing up, $J=\{\alpha_i\}$ with even $i=2m$, $1\le m\le l/3$, or $J=\{\alpha_l\}$. {\bf Case $\Phi=D_l$.} In our notation, $l=n$. Nilpotent orbits are classified by partitions $2n=\sum_{i=1}^{2n}d_i$ of $2n$ in which even parts occur with even multiplicity, except that ``very even''{} partitions with only even parts (each having even multiplicity) correspond to two different orbits~\cite[Theorem 5.1.4]{CMc}. Since all weights of the Dynkin diagram in our setting are even, all numbers $h_1,\ldots,h_n$ have the same parity. Assume first that the partition is not very even, i.e. the numbers $d_i$ are odd, and the numbers $h_i$ are even. By~\cite[Lemma 5.3.4]{CMc} the sum of labels on the Dynkin diagram equals $h_1+h_{n-1}\in\{2,4\}$. Since $h_1\ge h_{n-1}$, we have $h_1=2$, $h_{n-1}=0$ or $h_1=h_{n-1}=2$. If $h_1=h_{n-1}=2$, then $h_1=h_2=\ldots=h_{n-1}=2$ and $h_n=0$, which is not possible (not enough zeroes). Hence $h_1=2$, $h_{n-1}=0$. Then $h_n=0$ as well. The partition consists of $i$ times $3$, where $i\ge 1$, and $2n-3i$ numbers $1$. The multiplicity condition is vacuous. Since numbers $3$ produce triples $2,0,-2$, one has $2n\ge 3i$. This case corresponds to $J=\{\alpha_i\}$, $1\le i\le 2n/3$. Assume the partition is very even. Then by~\cite[Lemma 5.3.5]{CMc} the sum of labels of vertices $\alpha_n$ and $\alpha_{n-1}$ equals $2$. Then either all other labels are $0$, or there is a label $2$ at $\alpha_1$ (by Lemma~\ref{lem:5-lambda}, the only other simple root with $m_\alpha(\tilde\alpha)=1$ is $\alpha_1$). In the first case we have $h_1=h_2=\ldots=h_{n-1}$, and since all $d_i$ are even, this implies that the partition is $n$ numbers $2$. Since it is very even, $n$ is even. In the second case $h_1-h_2=2$, $h_2=\ldots=h_{n-1}$. Since $h_1>h_2\ge h_n$, the part $d_1=h_1+1$ occurs with multiplicity one, which contradicts the requirement that every part of a very even partition occurs with even multiplicity. Therefore, this case does not occur in our setting. Summing up, very even partitions occur with $J=\{\alpha_{n-1}\}$ and $J=\{\alpha_n\}$ for $n$ even. \end{proof}
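As a quick sanity check of the table, consider $\Phi=G_2$ and $J=\{\alpha_2\}$; the following computation uses only the root data and is included for illustration. The highest root is $\tilde\alpha=3\alpha_1+2\alpha_2$, so $m_{\alpha_2}(\tilde\alpha)=2$, in accordance with Lemma~\ref{lem:5-lambda}. Sorting the roots by their $\alpha_2$-coefficient as in~\eqref{eq:Li} gives
\[
\LL_{\pm 1}=\bigoplus_{\beta}\Lie(G)_{\pm\beta},\quad \beta\in\{\alpha_2,\ \alpha_1+\alpha_2,\ 2\alpha_1+\alpha_2,\ 3\alpha_1+\alpha_2\}, \qquad \LL_{\pm 2}=\Lie(G)_{\pm(3\alpha_1+2\alpha_2)}\,,
\]
so $\dim\LL_{\pm 1}=4$ and $\dim\LL_{\pm 2}=1$: the corresponding structurable algebra is $4$-dimensional of skew-dimension $1$.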
{ "timestamp": "2018-10-02T02:25:41", "yymm": "1712", "arxiv_id": "1712.05288", "language": "en", "url": "https://arxiv.org/abs/1712.05288" }
\section{Introduction} \label{sec:intro} We consider the evolution of a scalar quantity $u: \mathbb{R}\times (0,\infty) \to U\subset\mathbb{R}$, $(x,t) \mapsto u(x,t)$, which is governed by the Cauchy problem \begin{align} \sdiff{u}{t} + \sdiff{}{x} f(u) &= \RieszFeller u &&\text{for } (x,t)\in\mathbb{R}\times (0,\infty), \label{eq:FCL} \\ u(x,0) &= u_0(x) &&\text{for } x\in\mathbb{R}, \nonumber \end{align} with an initial datum $u_0: \mathbb{R} \to U \subset \mathbb{R}$, a flux function $f: U\subset\mathbb{R} \to \mathbb{R}$ and a Riesz-Feller operator~$\RieszFeller$ for some $1<\alpha \le 2$ and $\abs{\theta}\leq 2-\alpha$. Equation~\eqref{eq:FCL} models nonlinear transport and nonlocal diffusion of a quantity~$u(x,t)$ in space over time. The flux function~$f$ is assumed to be smooth and convex as well as to satisfy w.l.o.g. $f(0)=0$. The Riesz-Feller operator can be defined as a Fourier multiplier operator, see also~\cite{Mainardi+etal:2001}. Precisely, the Riesz-Feller operator~$\RieszFeller$ of order~$\alpha$ and skewness~$\theta$ is defined as \begin{equation} \label{eq:RF:Fourier} \operator{F} [\RieszFeller v](k) = \RFsymbol (k) \operator{F} [v] (k) \,,\qquad k \in\mathbb{R} \,, \end{equation} with symbol \begin{equation} \label{eq:RF:symbol} \RFsymbol(k) = -|k|^\alpha \exp\left( \mathrm i\, \sgn(k)\, \theta \tfrac{\pi}{2}\right) = -|k|^\alpha \left( \cos( \theta \tfrac{\pi}{2} ) + \mathrm i\, \sgn(k)\, \sin(\theta\tfrac{\pi}{2}) \right) \end{equation} and parameters $0<\alpha\leq 2$ and $|\theta| \leq \min\{\alpha, 2-\alpha\}$, where $\mathcal{F}$ denotes the Fourier transform. \medskip \begin{remark} (i) Riesz-Feller operators $\RieszFeller$ with $\theta = 0$ are also known as fractional Laplacians $\Riesz=-(-\sdifff{}{x}{2})^{\alpha/2}$ with $0< \alpha \leq 2$ and Fourier symbol $-|k|^\alpha$. In particular, the Laplacian $D_0^2=\sdifff{}{x}{2}$ is a special case with $\alpha=2$ and $\theta=0$. (ii) For $0 < \gamma <1$, Riesz-Feller operators $\RieszFeller$ with $\alpha = \gamma$ and $\theta = -\gamma$ can be identified with fractional Caputo derivatives of order $0<\gamma<1$: \begin{equation} \label{Caputo} -(\Caputo u)(x)= -\frac{1}{\Gamma(1-\gamma)} \integrall{-\infty}{x}{ \frac{u'(y)}{(x-y)^{\gamma}} }{y} \Xx{for} x\in\mathbb{R} \,, \end{equation} which have Fourier symbol $-(-\mathrm i k)^\gamma$. The symbol $(-\mathrm i k )^\gamma$ is multi-valued; however, only the choice $(-\mathrm i k)^\gamma = \left(|k|\exp(-\mathrm i\sgn(k)\, \tfrac\pi2)\right)^\gamma=|k|^\gamma \exp(-\mathrm i\sgn(k)\, \gamma\tfrac\pi2)$ yields a causal operator. For details, see \cite{Kempfle+etal:2002}. Moreover, its derivative $\sdiff{}{x} (\Caputo u)$ is a Riesz-Feller operator with $\alpha = 1+\gamma$ and $\theta = 2 -\alpha$. \end{remark} Taking $\alpha = 2$ and $\theta = 0$ in \eqref{eq:FCL}, we formally obtain a classical viscous conservation law: \begin{equation} \label{eq:VCL} \sdiff{u}{t} + \sdiff{}{x} f(u) = \sdifff{u}{x}{2} \Xx{for} (x,t)\in\mathbb{R}\times (0,\infty). \end{equation} The existence and asymptotic stability of traveling wave solutions of equation~\eqref{eq:VCL} have been studied thoroughly. A first example of equation~\eqref{eq:FCL} with nonlocal diffusion is \begin{equation} \label{eq:FCL:FL} \partial_t u + \partial_x f(u) = \Riesz u \Xx{for} (x,t)\in\mathbb{R}\times (0,\infty)\,, \end{equation} with a fractional Laplacian $\Riesz$, $0<\alpha\leq 2$, which has been studied e.g. in~\cite{Biler+etal:1998,Droniou+etal:2003}.
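Before proceeding, we record two elementary consistency checks of~\eqref{eq:RF:symbol}; these are direct computations, included for the reader's convenience. For $\alpha=2$ and $\theta=0$, the symbol reduces to $-|k|^2=-k^2$, which is precisely the Fourier symbol of $\sdifff{}{x}{2}$, since $\operator{F}[v''](k)=(\mathrm i k)^2\operator{F}[v](k)=-k^2\operator{F}[v](k)$. Moreover, for all admissible parameters,
\begin{equation*}
\operatorname{Re}\RFsymbol(k) = -|k|^\alpha \cos(\theta\tfrac{\pi}{2}) \le 0 \qquad\text{for } k\in\mathbb{R}\,,
\end{equation*}
since $|\theta|\leq\min\{\alpha,2-\alpha\}\leq 1$; thus the Fourier multiplier $\RieszFeller$ is dissipative.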
For $1<\alpha\leq 2$, the Cauchy problem for~\eqref{eq:FCL:FL} with $f\in C^\infty(\mathbb{R})$ and essentially bounded initial data has a global-in-time mild solution which becomes smooth for positive times, see~\cite{Droniou+etal:2003} and its extension to~\eqref{eq:FCL} in~\cite{Achleitner+etal:2012}. Other examples of equation~\eqref{eq:FCL} with nonlocal diffusion appear in viscoelasticity~\cite{Sugimoto+Kakutani:1985} and fluid dynamics~\cite{Kluwick+etal:2010}. In particular, \begin{equation} \label{eq:FCL:onesided} \sdiff{u}{t} + \sdiff{}{x} f(u) = \sdiff{}{x} \Caputo u \Xx{for} (x,t)\in\mathbb{R}\times (0,\infty)\,, \end{equation} with $0<\gamma<1$ is used as a model for the far-field behavior of uni-directional viscoelastic waves~\cite{Sugimoto+Kakutani:1985}, and derived as a model for the internal structure of hydraulic jumps in near-critical single-layer flows~\cite{Kluwick+etal:2010}. Moreover, the nonlocal operator $\Caputo[1/3]$ appears in Fowler's equation \begin{equation} \label{eq:Fowler} \partial_t u + \partial_x u^2= \partial_x^2 u-\partial_x \Caputo[1/3] u \,, \end{equation} which models the uni-directional evolution of sand dune profiles~\cite{Fowler:2002}. In the theory of water waves, similar models $\partial_t u + \partial_x u^2 =\mathcal{N}[u]$ with different (nonlocal) Fourier multiplier operators $\mathcal{N}$ are studied, see the book~\cite{Naumkin+Shishmarev:1994} and references therein. \medskip To explain our main results, we introduce traveling wave solutions for equation~\eqref{eq:FCL}. Traveling wave solutions (TWS) are of the form $u(x,t)=\overline u(\xi)$ for some profile~$\overline u$ with $\xi = x - s t$ and (constant) wave speed~$s\in\mathbb{R}$. We are interested in TWS with profiles~$\overline u$ connecting distinct endstates~$u_{\pm}$ such that \begin{equation}\label{endpoint} \lim_{x \to \pm \infty} \overline u(x) = u_{\pm} \ . \end{equation} Using this ansatz in equation \eqref{eq:FCL} and assumption \eqref{endpoint}, we find that the wave speed $s$ has to satisfy the Rankine-Hugoniot condition \begin{equation} \label{RH} s = \frac{f(u_+) - f(u_-)}{u_+ -u_-}. \end{equation} Here, an extension of Riesz-Feller operators to non-integrable functions is needed, see Appendix~\ref{app:RF}. Due to translational invariance of equation~\eqref{eq:FCL}, traveling wave solutions are unique only up to a shift. For classical viscous conservation laws~\eqref{eq:VCL}, the profile of a TWS satisfies an ordinary differential equation $\overline u' = f(\overline u)-s\overline u - (f(u_-)-su_-)$. In fact, TWS exist only for parameters $(\um,\up;\wavespeed)$ satisfying~\eqref{RH} and $u_+<u_-$. In case of equation~\eqref{eq:FCL:onesided}, the existence and asymptotic stability (without decay rates) of traveling wave solutions for parameters $(\um,\up;\wavespeed)$ satisfying~\eqref{RH} and $u_+<u_-$ have been shown~\cite{Achleitner+Hittmeir+Schmeiser:2011,Cuesta+Achleitner:2017}. Here, a profile satisfies a fractional differential equation $\Caputo \overline u = f(\overline u)-s\overline u - (f(u_-)-su_-)$. The proof of existence relies on the causality of the Caputo derivative~$\Caputo$, i.e., to evaluate $\Caputo\overline u$ at $x$, only the values of the profile $\overline u$ on $(-\infty,x)$ are needed.
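For orientation, consider the Burgers flux $f(u)=u^2$; this is a standard worked example and not specific to the references above. The Rankine-Hugoniot condition~\eqref{RH} reduces to
\begin{equation*}
s=\frac{u_+^2-u_-^2}{u_+-u_-}=u_++u_-\,,
\end{equation*}
and in the classical case $\alpha=2$, $\theta=0$ the profile equation becomes $\overline u'=f(\overline u)-s\overline u-(f(u_-)-su_-)=(\overline u-u_+)(\overline u-u_-)$, which is solved for $u_+<u_-$ by the monotone decreasing profile
\begin{equation*}
\overline u(\xi)=\frac{u_++u_-}{2}-\frac{u_--u_+}{2}\,\tanh\Bigl(\frac{u_--u_+}{2}\,\xi\Bigr)\,,
\end{equation*}
as one verifies by direct differentiation.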
In contrast, the profile for a TWS of a nonlocal conservation law~\eqref{eq:FCL:FL} for $1<\alpha<2$ has to satisfy \[ \Riesz \overline u (x) =\integral{\mathbb{R}}{ \frac{\overline u(x+\xi)-\overline u(x)-\overline u'(x)\,\xi}{\xi^{1+\alpha}} }{\xi} = \sdiff{}{x} \big(f(\overline u)-s\overline u - (f(u_-)-su_-) \ . \] Thus $\Riesz \overline u (x)$ depends on the whole profile~$\overline u$. For fractal Burgers equation, i.e. equation~\eqref{eq:FCL:FL} with $1<\alpha<2$ and Burgers flux function $f(u)=u^2$, the existence of traveling wave solutions has been proven recently~\cite{Chmaj:2014}. The idea is to approximate the operators $\Riesz$ by convolution operators $\cK_\epsilon[u] =K_\epsilon\ast u -u$ for suitable convolution kernels $K_\epsilon\in L^1(\mathbb{R})$. The existence of TWS for the approximate equations is known and the TWS is established as the limit of this family. It is conceivable to use this approach to prove the existence of traveling wave solutions in the general case~\eqref{eq:FCL} for convex flux functions~$f$ with $1<\alpha<2$ and $|\theta|\leq 2-\alpha$. For fractal Burgers equation~\eqref{eq:FCL:FL} results in the complementary cases $\alpha\in (0,1)$ and/or $u_-\lequ_+$ are also available: For example, for $\alpha\in (0,1)$ and~\eqref{endpoint} no traveling wave solutions of~\eqref{eq:FCL:FL} with smooth profile exists~\cite{Biler+etal:1998}. Whereas under the assumption $u_-<u_+$ the solution of~\eqref{eq:FCL:FL} converges as $t\to\infty$ to a rarefaction wave of the underlying Burgers equation if $\alpha\in (1,2)$ and to a self-similar solution if $\alpha=1$; see~\cite{Karch+Miao+Xu:2008} and~\cite{Alibaud+Imbert+Karch:2010}, respectively. \medskip The asymptotic stability of traveling wave solutions of classical viscous conservation laws~\eqref{eq:VCL} has been studied thoroughly. At first, historically, Il'in and Oleinik \cite{IO60} proved the asymptotic stability of nonlinear waves for viscous conservation laws \eqref{eq:VCL} by making use of the maximum principle for linear parabolic equations. For Burgers' equation, i.e.~equation~\eqref{eq:VCL} with Burgers' flux function~$f(u) = u^2$, Nishihara \cite{N85} obtained the decay estimates toward traveling wave solutions by making use of the explicit solution formula. And, Kawashima and Matsumura \cite{Kawashima+Matsumura:1985} generalized Nishihara's time decay result to a class of viscous conservation laws. They considered weighted $L^2$ spaces and used a weighted energy method. Furthermore, Kawashima, Nishibata and Nishikawa \cite{Kawashima+etal:2004} extended the $L^2$ energy method to general $L^p$ spaces. Their techniques have been applied to a model system for compressible viscous gas in \cite{MN85} and a hyperbolic system with relaxation in \cite{U09}. Assuming the existence of a traveling wave solution of~\eqref{eq:FCL} with monotone decreasing profile, we show that asymptotic stability of a traveling wave solution in a Sobolev space setting follows from a standard Lyapunov functional argument: To investigate the stability of the traveling wave solution with profile~$\overline u$, we consider initial data $u_0$ such that $u_0 - \overline u$ is integrable and determine the unique shift $x_0$ which yields $ \integrall{-\infty}{\infty}{ (u_0(\xi) - \overline u(\xi + x_0)) }{\xi} = 0. 
Moreover, we restrict the domain of initial data $u_0$ further such that $ W_0(\xi)=\integrall{-\infty}{\xi}{ (u_0(\eta)-\overline u(\eta))}{\eta} $ exists (using the suitably shifted profile~$\overline u$) and satisfies $W_0 \in H^2$. (For details, we refer to \cite{U09}.) More precisely, we can derive the following theorem. \begin{theorem}\label{theorem:AS} Suppose $1<\alpha \le 2$ and $\abs{\theta}\leq\min\{\alpha,2-\alpha\}$. Let the flux function $f\in C^2(\mathbb{R})$ be convex and let $u(x,t)=\overline u(x- s t)$ be a traveling wave solution of~\eqref{eq:FCL} with monotone decreasing profile~$\overline u$. Let $u_0$ be an initial datum for~\eqref{eq:FCL} such that $W_0(\xi)=\integrall{-\infty}{\xi}{ (u_0(\eta)-\overline u(\eta)) }{\eta}$ satisfies $W_0\in H^2(\mathbb{R})$. Then there exists a positive constant $\delta_0$ such that if $\norm{W_0}_{H^2} \le \delta_0$, then the Cauchy problem \eqref{eq:FCL} has a unique global solution converging to the traveling wave in the sense that $$ \| (u - \overline u)(t)\|_{L^\infty} \longrightarrow 0 \qquad \text{for} \quad t \to \infty. $$ \end{theorem} The proof of Theorem~\ref{theorem:AS} for the general equation~\eqref{eq:FCL} is similar to the one of~\cite[Theorem 4]{Achleitner+Hittmeir+Schmeiser:2011} for the special case~\eqref{eq:FCL:onesided}, without decay rates. Our main result is to prove the asymptotic stability with algebraic-in-time decay rate for traveling wave solutions of~\eqref{eq:FCL} with monotone decreasing profiles. \begin{theorem}\label{theorem:CR} Suppose the same assumptions as in Theorem~\ref{theorem:AS} hold and $f\in C^\infty(\mathbb{R})$. For all $W_0\in W^{1,\infty}(\mathbb{R})\cap W^{1,1}(\mathbb{R})$, the Cauchy problem~\eqref{eq:FCL} has a unique global solution. Moreover, there exists a positive constant $\delta_1$ such that if $\norm{W_0}_{W^{1,1}} \le \delta_1$ then the unique global solution~$u$ satisfies \begin{equation}\label{decay} \|(u - \overline u)(t)\|_{L^2} \leq C E_1 (1+t)^{-1/(2\alpha)} \end{equation} for $t \ge 0$, where $E_1 := \|W_0\|_{H^1} + \|W_0\|_{W^{1,1}}$ and $C$ is a constant which is independent of time~$t$. \end{theorem} \begin{remark} We employ sharp interpolation inequalities in Sobolev spaces to derive~\eqref{decay}. In this way, optimal decay estimates for the asymptotic stability of viscous rarefaction waves in scalar viscous conservation laws~\eqref{eq:VCL} have been derived in~\cite{HUK09}. \end{remark} \begin{remark} We want to explain the functional setting in Theorem~\ref{theorem:CR}: We considered the function spaces $H^2(\mathbb{R})\cap W^{2,1}(\mathbb{R}) \subset W^{1,\infty}(\mathbb{R})\cap W^{1,1}(\mathbb{R}) \subset H^1(\mathbb{R})\cap W^{1,1}(\mathbb{R})$ in variants of Theorem~\ref{theorem:CR}. The choice $H^1(\mathbb{R})\cap W^{1,1}(\mathbb{R})$ leads to the restriction $\alpha\in(3/2,2)$ if we use an estimate of the nonlinearity in the spirit of Dix~\cite{Dix:1992,Dix:1996} to establish the existence of solutions for the Cauchy problem. Assuming higher regularity of the initial data removes the need for this restriction: Under the assumptions of Theorem~\ref{theorem:AS} with $W_0\in H^2(\mathbb{R})\cap W^{2,1}(\mathbb{R})$, the solution constructed in Theorem~\ref{theorem:AS} satisfies \begin{equation*} \|(u - \overline u)(t)\|_{H^1} \leq C \widetilde{E}_1 (1+t)^{-1/(2\alpha)} \end{equation*} for $t \ge 0$, where $\widetilde{E}_1 := \|W_0\|_{H^2} + \|W_0\|_{W^{2,1}}$ and $C$ is a constant independent of time $t$.
Our choice $W_0 \in W^{1,\infty}(\mathbb{R})\cap W^{1,1}(\mathbb{R})$ in Theorem~\ref{theorem:CR} leads to the technical assumption $f\in C^\infty(\mathbb{R})$, since we use a result on the existence of global-in-time solutions for the Cauchy problem with essentially bounded initial data~\cite{Droniou+etal:2003,Achleitner+etal:2012}. The assumption $f\in C^2(\mathbb{R})$ of Theorem~\ref{theorem:AS} could be retained by aiming for less regularity in that approach. \end{remark} \medskip Unfortunately, it is difficult to apply the weighted energy method of \cite{Kawashima+Matsumura:1985} to our problem in order to derive the convergence rate. Instead, we employ a technique based on interpolation inequalities in Sobolev spaces; this argument is utilized, for example, in \cite{HUK09}. The contents of this paper are as follows. In Section~\ref{sec:reformulation}, we reformulate our problem and consider the well-posedness of the new one. In Section~\ref{sec:AS}, we derive the asymptotic stability result via uniform energy estimates, which serve as {\it a-priori} estimates of solutions in the Sobolev space $H^2$. Furthermore, our main result on the asymptotic stability with explicit algebraic decay rate in Theorem~\ref{theorem:CR} is proved in Section~\ref{sec:Rates}, by using the energy method combined with an $L^2$--$L^1$ interpolation argument. In Appendix~\ref{app:RF}, we collect results on the singular integral representation of Riesz-Feller operators. \bigskip \noindent {\bf Notation.} Before closing this section, we fix some notation used in this paper. For $v$ in the Schwartz space~$\SchwartzTF$, we define the Fourier transform as \begin{align*} \hat{v}(k) = \operator{F} [v](k) &:= \integral{\mathbb{R}}{ e^{-\mathrm i k x} v(x) }{x} \Xx{for} k \in \mathbb{R} \,, \intertext{and the inverse Fourier transform as} \Fourier^{-1} [v](x) &:= \frac{1}{2\pi} \integral{\mathbb{R}}{ e^{\mathrm i k x} v(k) }{k} \Xx{for} x\in\mathbb{R} \,. \end{align*} The Fourier transform and its inverse are linear operators, and $\operator{F}$ and $\Fourier^{-1}$ will also denote their respective extensions to $L^2(\mathbb{R})$. For $1 \le p \le \infty$, we denote by $L^p=L^p(\mathbb{R})$ the usual Lebesgue space over $\mathbb{R}$ with norm $\| \cdot \|_{L^p}$, and by $W^{s,p}=W^{s,p}(\mathbb{R})$ the usual Sobolev space over $\mathbb{R}$ with norm $\| \cdot \|_{W^{s,p}}$. We use the short-hand notation $H^{s}(\mathbb{R}) := W^{s,2}(\mathbb{R})$ with norm $\| \cdot \|_{H^{s}}$. Moreover, we set $\norm{W(t)}_{W^{1,\infty}} =\max\{\norm{W(t)}_{L^\infty},\ \norm{\sdiff{W}{\xi}(t)}_{L^\infty}\}$, and analogously for $\norm{W(t)}_{W^{\ell,\infty}}$ with $\ell\in\mathbb{N}$. Finally, for a nonnegative integer~$\ell$, $C^{\ell}(I;X)$ (respectively $C^{\ell}_b(I;X)$) denotes the space of $\ell$-times continuously differentiable functions (respectively with bounded derivatives) on the interval $I$ with values in the Banach space $X$. The constants in our estimates may change their value from line to line. \section{Reformulation of the problem} \label{sec:reformulation} In the special case~\eqref{eq:FCL:onesided}, the existence and asymptotic stability of traveling wave solutions $u(x,t)=\overline u(x-s t)$ with monotone decreasing profile~$\overline u$ has been proven without rates of decay~\cite{Achleitner+Hittmeir+Schmeiser:2011,Cuesta+Achleitner:2017}.
However, if we assume in the general case~\eqref{eq:FCL} the existence of a traveling wave solution $u(x,t)=\overline u(x-s t)$ with monotone decreasing profile~$\overline u$, then the proof of asymptotic stability generalizes with obvious modifications: To prove the asymptotic stability of a traveling wave solution~$\overline u$ of~\eqref{eq:FCL}, one can follow the standard approach called the anti-derivative method, introduced in \cite{Kawashima+Matsumura:1985} for viscous conservation laws. It is convenient to cast~\eqref{eq:FCL} in a moving coordinate frame $(x,t)\mapsto (\xi,t)$ with $\xi = x - st$, such that \begin{equation} \label{CLND+MCF} \partial_t u + \partial_\xi (f(u)-s u) = \RieszFeller u \,, \end{equation} and $\overline u$ is a stationary solution of~\eqref{CLND+MCF}. We consider the Cauchy problem for~\eqref{CLND+MCF} with initial datum $u_0$. If its solution $u$ is considered as a perturbation of the traveling wave solution $\overline u$, then this perturbation $U(\xi,t) := u(\xi,t) - \overline u(\xi)$ satisfies the Cauchy problem \begin{equation} \label{CP:U} \begin{split} \partial_t U + \partial_\xi (f(\overline u+U)-f(\overline u)) -s \partial_\xi U &= \RieszFeller U, \\ U(\xi,0) &=U_0(\xi), \end{split} \end{equation} where $U_0(\xi):=u_0(\xi)-\overline u(\xi)$. To obtain the desired result, we try to construct an $L^2$-energy estimate for $U$ by employing the energy method. However, because of the decreasing property of traveling wave solutions, it is hard to construct the $L^2$-energy estimate for $U$ directly. To overcome this difficulty, we apply the anti-derivative method. Precisely, we introduce the new function $W(\xi,t)$ which satisfies $\partial_\xi W = U$. Then we can formally rewrite \eqref{CP:U} as \begin{equation}\label{CP:W} \begin{split} \sdiff{W}{t} + f(\overline u+\sdiff{W}{\xi})-f(\overline u)-s \sdiff{W}{\xi} &= \RieszFeller W,\\ W(\xi,0) &= W_0(\xi). \end{split} \end{equation} If a global-in-time solution of \eqref{CP:W} with $W_0(\xi) = \integrall{-\infty}{\xi}{ U_0(\eta) }{\eta}$ is sufficiently smooth, then its derivative $\partial_\xi W$ satisfies the Cauchy problem \eqref{CP:U}. Therefore, we construct a global-in-time solution of \eqref{CP:W} instead of \eqref{CP:U}. For this purpose, we discuss the well-posedness of problem \eqref{CP:W} in this section. The well-posedness of the Cauchy problem~\eqref{CP:W} will follow from a contraction argument. Assuming $f(u)=u^2$ and $\alpha >3/2$ allows one to estimate the nonlinearity in the fashion of Dix~\cite{Dix:1992,Dix:1996}, implying well-posedness in $H^1$. For general flux functions and $\alpha \in (1,2]$, we have to require more regularity of the initial data, e.g.\ $W_0\in H^2$. \begin{proposition} \label{prop:CP:H2} Let $f\in C^2(\mathbb{R})$, $1<\alpha \le 2$ and $\abs{\theta}\leq \min\{\alpha,2-\alpha\} =2-\alpha$. Suppose $M$ is an arbitrary positive constant and suppose $W_0\in H^2(\mathbb{R})$ such that $\|W_0\|_{H^2} \le M$. Then there exists a positive constant $T$, which depends on $M$, such that the Cauchy problem~\eqref{CP:W} has a unique mild solution $W \in C([0,T];H^2)$ with $\|W(t)\|_{H^2} \le 2M$ for $t \in [0,T]$. \end{proposition} To prove Proposition~\ref{prop:CP:H2}, we first present some properties of the fundamental solution of $\sdiff{u}{t}= \RieszFeller u$.
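Throughout, the decisive fact is that the symbol $\psi^\alpha_\theta$ defined in~\eqref{eq:RF:symbol} satisfies, by the computation in the proof of Lemma~\ref{lem:ker} below, \[ \abs{e^{t \psi^\alpha_\theta(k)}} = e^{-t \abs{k}^\alpha \cos \bigPar{ \theta \tfrac{\pi}{2} }} \Xx{with} \cos \bigPar{ \theta \tfrac{\pi}{2} } > 0 \,, \] since $\abs{\theta}\leq 2-\alpha<1$ for $1<\alpha\leq 2$; hence all Fourier modes $k\neq 0$ of the linear evolution decay in time.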
\begin{lemma}[{\cite[Lemma 2.1]{Achleitner+Kuehn:2015}}] \label{lem:SSPM} For $1<\alpha\leq 2$ and $\abs{\theta} \leq \min\{\alpha,2-\alpha\} =2-\alpha$, $\Green(x,t) := \mathcal{F}^{-1}[e^{t \psi^\alpha_\theta (\cdot)}](x)$ with $\psi^\alpha_\theta$ defined in~\eqref{eq:RF:symbol} is the fundamental solution of $\sdiff{u}{t} =\RieszFeller u$. Moreover, $\Green$ satisfies for all $(x,t)\in \mathbb{R} \times (0,\infty)$ the properties \begin{enumerate}[label=(G\arabic*)] \item \label{K:SFDE:prop0} $\Green(x,t)\geq 0$, \item \label{K:SFDE:prop1} $\Green(x,t)=t^{-1/\alpha} \Green (x t^{-1/\alpha},1)$, \item \label{K:SFDE:prop2} $\norm{\Green(\cdot,t)}_{L^1(\mathbb{R})}=1$, \item \label{K:SFDE:prop3} $\Green(\cdot,s)\ast \Green(\cdot,t) = \Green(\cdot,s+t)$ for all $s,t\in(0,\infty)$, \item \label{K:SFDE:prop4} $\norm{\Green(\cdot,t)}_{L^p(\mathbb{R})}\leq \norm{\Green(\cdot,1)}_{L^p(\mathbb{R})} t^{-\frac1\alpha (1-\frac1p)}$ for all $1\leq p <\infty$, \item \label{K:SFDE:prop5} $\Green\in C^\infty_0(\mathbb{R}\times (0,\infty))$, \item \label{K:SFDE:prop7} there exists a constant $\cK$ such that $\norm{\sdiff{G}{x} (\cdot,t)}_{L^1(\mathbb{R})} \leq \cK t^{-1/\alpha}$ for all $t>0$. \end{enumerate} \end{lemma} Due to the properties of $\Green$, it is easy to show that $\RieszFeller$ generates a semigroup. \begin{lemma} \label{lem:SFDE:semigroup} For $1<\alpha\leq 2$, $\abs{\theta} \leq \min\{\alpha,2-\alpha\} =2-\alpha$, the Riesz-Feller operator~$\RieszFeller$ generates a strongly continuous convolution semigroup \[ S_t: L^p(\mathbb{R}) \to L^p(\mathbb{R})\,, \quad u_0 \mapsto S_t u_0 = \Green(\cdot,t)\ast u_0 \] with $\Green$ defined in Lemma~\ref{lem:SSPM}. Moreover, the semigroup satisfies the dispersion property \begin{equation} \label{prop:SG:dispersion} \norm{S_t u}_{L^p(\mathbb{R})} \leq C_p\; t^{-\frac1\alpha (1-\frac1p)} \norm{u}_{L^1(\mathbb{R})} \end{equation} for all $u\in L^1(\mathbb{R})$, all $1\leq p<\infty$ and some $C_p>0$. \end{lemma} \begin{proof} Due to~\ref{K:SFDE:prop2} and Young's inequality for convolutions, \[ \norm{S_t u}_{L^p} \leq \norm{\Green(\cdot ,t)}_{L^1} \norm{u}_{L^p} = \norm{u}_{L^p} \] for all $u\in L^p(\mathbb{R})$. Therefore $S_t:L^p(\mathbb{R})\rightarrow L^p(\mathbb{R})$ are well-defined bounded linear operators for all~$t\geq 0$. $(S_t)_{t\geq 0}$ is a semigroup, since $S_{t+s}=S_t S_s$ for all $s,t\geq 0$ holds due to \ref{K:SFDE:prop3} and $S_0:=\Id$. Strong continuity of~$(S_t)_{t\geq 0}$ follows from a standard result about convolutions~\cite[p.64]{Lieb+Loss:2001} and~\ref{K:SFDE:prop1}. The dispersion property~\eqref{prop:SG:dispersion} can be proved using \ref{K:SFDE:prop4} and Young's inequality \cite[p.98-99]{Lieb+Loss:2001}. \end{proof} \begin{lemma} \label{lem:ker} Let $1<\alpha \le 2$ and $\abs{\theta}\leq \min\{\alpha,2-\alpha\}$. The fundamental solution~$\Green$ defined in Lemma~\ref{lem:SSPM} satisfies for all $\ell\in\mathbb{N}_0$ and $0 \le r \le \ell$ the following estimates: \begin{equation} \label{ker_est} \| \partial_x^{\ell} \big( \Green (t) \ast \phi\big)\|_{L^2} \le C t^{- (\ell -r)/\alpha} \|\partial_x^r \phi\|_{L^2}, \qquad t>0\,, \end{equation} where $C$ is a certain positive constant. If $r = \ell$, then inequality \eqref{ker_est} holds with the optimal constant $C=1$.
\end{lemma} \begin{proof} By using Plancherel's theorem, we compute that \begin{multline*} \| \partial_x^{\ell} \big( \Green (t) \ast \phi\big)\|_{L^2} = \|(\mathrm i k)^{\ell} e^{t \psi^\alpha_\theta (k)} \hat{\phi}\|_{L^2} \\ \le \|(\mathrm i k)^{\ell - r}e^{t \psi^\alpha_\theta (k)}\|_{L^\infty} \|(\mathrm i k)^{r} \hat{\phi}\|_{L^2} \le C t^{- (\ell -r)/\alpha} \|\partial_x^{r}\phi\|_{L^2} \,, \end{multline*} where we used $\|(\mathrm i k)^{\ell - r}e^{t \psi^\alpha_\theta (k)}\|_{L^\infty} = \sup_{k \in \mathbb{R}} |k|^{\ell -r}e^{-t |k|^\alpha \cos (\theta \pi/2)} \le C t^{- (\ell -r)/\alpha}$, due to the positivity of $\cos (\theta \pi/2)$ under the assumptions of Lemma~\ref{lem:ker}. If $r = \ell$, then we obtain $\| \partial_x^{\ell} \big( \Green (t) \ast \phi\big)\|_{L^2} \leq \|\Green(\cdot,t)\|_{L^1} \|\partial_x^{\ell}\phi\|_{L^2} = \|\partial_x^{\ell}\phi\|_{L^2}$, by using the fact that $\Green$ is a non-negative integrable function with mass one. \end{proof} \begin{lemma} \label{lem:conti} Suppose that the same assumption as in Lemma~\ref{lem:ker} holds, and $\phi \in H^\sigma$ for $\sigma \ge 0$. Then the fundamental solution satisfies $\Green \ast \phi \in C([0,\infty);H^\sigma)$. \end{lemma} \begin{proof} For arbitrary constants $t_1, t_2 \in [0,\infty)$, we have $$ \norm{\Green(t_1)\ast \phi - \Green(t_2) \ast \phi}^2_{H^\sigma} \le \integral{\mathbb{R}}{ (1+|k|)^{2\sigma}|e^{t_1\RFsymbol(k)} - e^{t_2\RFsymbol(k)}|^2|\hat{\phi}(k)|^2 }{k}, $$ where the integral is bounded by $4 \|\phi\|^2_{H^\sigma}$. Thus, the Dominated Convergence Theorem allows us to pass to the limit under the integral sign, which completes the proof. \end{proof} \bigskip \begin{proof}[Proof of Proposition~\ref{prop:CP:H2}] Using the fundamental solution~$\Green$ of the linear evolution equation $\sdiff{u}{t}= \RieszFeller u$, the mild formulation of~\eqref{CP:W} reads \begin{equation} \label{mildform:W} W(t) = \Green(t)\ast W_0 - \integrall{0}{t}{ \Green(t-\tau)\ast F(\overline u,\sdiff{W}{\xi}) }{\tau}, \end{equation} where $F(\overline u,\sdiff{W}{\xi}) := f(\overline u+\sdiff{W}{\xi})-f(\overline u)-s \sdiff{W}{\xi}$. To employ a fixed-point argument, we consider the mapping $\operator{G}[W]$ defined by \begin{equation}\label{map:G} \operator{G}[W](t) := \Green(t)\ast W_0 - \integrall{0}{t}{ \Green(t-\tau)\ast F(\overline u,\sdiff{W}{\xi})}{\tau}, \end{equation} on the Banach space $X:= C([0,T];H^2)$ with norm $\norm{W}_X:=\sup_{t\in[0,T]} \norm{W(t)}_{H^2}$. Then we show that $\operator{G}$ is a contraction mapping on a closed convex subset $S_R$ of $X$, where $S_R := \{W \in X ; \|W\|_X \le R\}$ for some parameter $R > 0$ which will be determined later. Due to a Sobolev embedding, $\|W\|_{X} \le R$ implies that $\|W(t)\|_{W^{1,\infty}} \le R$ for $t \in [0,T]$.
Thus, if $\|W\|_{X} \le R$ and $\ell = 0, 1$, then we compute that \begin{equation*} \begin{split} \|\partial_\xi^{\ell}(\operator{G}[W] &- \operator{G}[V])(t)\|_{L^2} \\ &\le \integrall{0}{t}{ \|\partial_\xi^{\ell} \Green(t-\tau)\ast \{F(\overline u,\sdiff{W}{\xi}) - F(\overline u,\sdiff{V}{\xi})\}\|_{L^2} }{\tau} \\ &\le C \integrall{0}{t}{ (t-\tau)^{-\ell/\alpha}\|\{F(\overline u,\sdiff{W}{\xi}) - F(\overline u,\sdiff{V}{\xi})\}(\tau)\|_{L^2} }{\tau} \\ &\le C(C(R) + |s|) \integrall{0}{t}{ (t-\tau)^{-\ell/\alpha}\|\partial_\xi(W-V)(\tau)\|_{L^2} }{\tau} \\ &\le C_\ell(R)\ t^{1-\ell/\alpha}\ \|W-V\|_{X} \end{split} \end{equation*} where we used Lemma~\ref{lem:ker} and the identity \begin{multline*} F(\overline u,\sdiff{W}{\xi}) - F(\overline u,\sdiff{V}{\xi}) = f(\overline u + \sdiff{W}{\xi}) - f(\overline u + \sdiff{V}{\xi}) -s \partial_\xi(W - V) \\ = \integrall{0}{1}{\big[f'(\overline u + \sigma \partial_\xi W +(1-\sigma)\partial_\xi V)-s\big]\, \partial_\xi(W - V) }{\sigma}. \end{multline*} Similarly, we can calculate that \begin{equation*} \begin{split} \|\partial_\xi^2 (\operator{G}[W] &- \operator{G}[V])(t)\|_{L^2} \\ &\le \integrall{0}{t}{ \|\partial_\xi \Green(t-\tau)\ast \partial_\xi \{F(\overline u,\sdiff{W}{\xi}) - F(\overline u,\sdiff{V}{\xi})\}\|_{L^2} }{\tau} \\ &\le C \integrall{0}{t}{ (t-\tau)^{-1/\alpha}\|\partial_\xi \{F(\overline u,\sdiff{W}{\xi}) - F(\overline u,\sdiff{V}{\xi})\}(\tau)\|_{L^2} }{\tau} \\ &\le C(C(R) + |s|) \integrall{0}{t}{ (t-\tau)^{-1/\alpha} \| (W-V)(\tau)\|_{H^2} }{\tau} \\ &\le C_2(R)\ t^{1-1/\alpha}\ \|W-V\|_{X}. \end{split} \end{equation*} Combining the above estimates, we obtain \begin{equation*} \begin{split} \|\operator{G}[W] - \operator{G}[V]\|_{X} \le \{C_0(R)T^{1/\alpha} + C_1(R) + C_2(R)\}T^{1 - 1/\alpha} \|W-V\|_{X}. \end{split} \end{equation*} Therefore, letting $T = \min \{ 1, (2C_*(R))^{-\alpha/(\alpha-1)} \}$, we deduce \begin{equation}\label{W-V} \|\operator{G}[W] - \operator{G}[V]\|_{X} \le \frac12 \|W-V\|_{X}, \end{equation} where $C_*(R) := C_0(R) + C_1(R) + C_2(R)$. On the other hand, letting $V \equiv 0$ in \eqref{W-V}, we get \begin{equation*} \|\operator{G}[W] \|_{X} \le \|\operator{G}[0]\|_X + \frac12 \|W\|_{X} \le \|W_0\|_{H^2} + \frac12 \|W\|_{X} \le M + \frac12 R, \end{equation*} where we used \eqref{ker_est} with $\ell = r$. Therefore, choosing $R = 2M$, we obtain $\|\operator{G}[W] \|_{X} \le 2 M$. Finally, we discuss the continuity of $\operator{G} [W]$ in time $t$. It follows from the continuity at time~$0$ and the semigroup property~\ref{K:SFDE:prop3} of $\Green$. Due to Lemma~\ref{lem:conti}, for $W_0\in H^\sigma(\mathbb{R})$ with $\sigma\geq 0$, the convergence $\lim_{t\searrow 0} \Green(\cdot,t)\ast W_0 = W_0$ in $H^\sigma$ holds. Moreover, for $t\in[0,T]$ and $s\geq 0$ the identity \begin{align*} \operator{G} [W](s+t) &= \Green(\cdot,s+t)\ast W_0 - \integrall{0}{s+t}{ \Green(\cdot,s+t-\tau)\ast F(\overline u,\sdiff{W}{\xi}(\tau)) }{\tau} \\ &= \Green(\cdot,s)\ast \operator{G} [W](t) - \integrall{t}{s+t}{ \Green(\cdot,s+t-\tau)\ast F(\overline u,\sdiff{W}{\xi}(\tau)) }{\tau} \end{align*} holds due to the semigroup property~\ref{K:SFDE:prop3}, where the last integral converges to zero for $s\to 0$.
Thus, for~$t_1, t_2 \in [0,T]$ with $t_1<t_2$ (without loss of generality), we have \begin{multline} \label{conti_G} \operator{G}[W](t_1) - \operator{G}[W](t_2) = \operator{G}[W](t_1) - \operator{G}[W]((t_2-t_1) +t_1) \\ = \operator{G}[W](t_1) - \Green(\cdot,t_2 -t_1)\ast \operator{G} [W](t_1) + \integrall{t_1}{t_2}{ \Green(\cdot,t_2-\tau)\ast F(\overline u,\sdiff{W}{\xi}(\tau)) }{\tau} . \end{multline} Therefore, by the fact that $W_0 \in H^2$, $W \in X$ and Lemma~\ref{lem:conti}, we find that the right hand side of \eqref{conti_G} tends to zero in $H^2$ as $t_1 \to t_2$. Hence, we deduce the continuity of $\operator{G}[W]$ in $t$ and that $\operator{G}[W] \in S_{2M}$ for $W \in S_{2M}$. Consequently, we conclude that there exists $T=T(M)$ such that $\operator{G}$ is a contraction mapping on $S_{2M}$. By Banach's fixed-point theorem, the mapping $\operator{G}$ admits a unique fixed point $W$ in $S_{2M}$, i.e. $W = \operator{G}[W]$. Hence the proof of Proposition~\ref{prop:CP:H2} is complete. \end{proof} \section{Asymptotic stability of traveling waves} \label{sec:AS} In this section, we consider the asymptotic stability of traveling wave solutions of~\eqref{eq:FCL} with monotone decreasing profile. To this end, we establish the existence of global-in-time solutions of the evolution equation~\eqref{CP:W} and show that these perturbations decay. Precisely, we prove the following theorem. \begin{theorem}\label{theorem:ASW} Suppose that the same assumptions as in Theorem~\ref{theorem:AS} hold. Then the Cauchy problem \eqref{CP:W} has a unique global solution $W(\xi,t)$ satisfying $W \in C([0,\infty);H^2) \cap C^1([0,\infty);H^1)$ and \begin{equation}\label{energy_est} \|W(t)\|_{H^2}^2 + C \sum_{\ell = 0}^2 \integrall{0}{t}{ \|W(\tau)\|^2_{\dot{H}^{\alpha/2 + \ell}} }{\tau} - \integrall{0}{t}{ \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} \le \|W_0\|_{H^2}^2 \end{equation} for some positive constant $C$ and for all $t \ge 0$. Furthermore, the solution $W(\xi,t)$ converges to zero in the sense that \begin{equation}\label{asyW} \|W(t)\|_{W^{1,\infty}} \longrightarrow 0 \qquad \text{for} \quad t \to \infty. \end{equation} \end{theorem} We note that the third term on the left hand side of \eqref{energy_est} is non-negative, since the flux function $f\in C^2$ is convex such that $f''\geq 0$, and the profile $\overline u$ is monotone decreasing, i.e. $\overline u'\leq 0$. For the solution $W$ constructed in Theorem~\ref{theorem:ASW}, it is easy to check that $\partial_\xi W$ satisfies the Cauchy problem \eqref{CP:U}. Consequently, we obtain Theorem~\ref{theorem:AS}. Global existence will be the consequence of the existence of a Lyapunov functional, which also allows us to deduce the asymptotic stability of traveling waves, see also~\cite[Theorem~4]{Achleitner+Hittmeir+Schmeiser:2011} for the special case $\theta=2-\alpha$. \begin{lemma}\label{lem:a_priori} Suppose that the same assumptions as in Theorem~\ref{theorem:AS} hold. Let~$W$ be a solution to~\eqref{CP:W} satisfying $W \in C([0,T];H^2)$ for some $T > 0$. Then there exists some positive constant $\delta_1$ independent of $T$ such that if $\sup_{0 \le t \le T}\|W(t)\|_{H^2} \le \delta_1$, the \emph{a-priori} estimate \eqref{energy_est} holds for $t \in [0,T]$.
\end{lemma} \begin{proof} We rewrite the first equation of~\eqref{CP:W}, \[ \sdiff{W}{t} + (f(\overline u+\sdiff{W}{\xi}) -f(\overline u) -f'(\overline u)\sdiff{W}{\xi}) +(f'(\overline u) -s) \sdiff{W}{\xi} = \RieszFeller W , \] and test it with $W$, \begin{multline*} \frac12 \sdiff{(W^2)}{t} +\frac12 \sdiff{}{\xi} \{(f'(\overline u)-s)\, W^2\} -\frac12 f''(\overline u)\overline u' W^2 -W \RieszFeller W \\ = -\big(f(\overline u +\partial_\xi W) -f(\overline u) -f'(\overline u)\sdiff{W}{\xi}\big)\, W . \end{multline*} Integrating with respect to $\xi \in \mathbb{R}$, we obtain \begin{multline*} \frac12 \sdiff{}{t} \|W\|_{L^2}^2 - \frac12 \integral{\mathbb{R}}{ f''(\overline u)\overline u' W^2 }{\xi} + \cos \bigPar{ \theta \tfrac{\pi}{2} } \|W\|^2_{\dot{H}^{\alpha/2}} \\ = - \integral{\mathbb{R}}{ \integrall{0}{1}{ \integrall{0}{\sigma}{ f''(\overline u +\gamma \sdiff{W}{\xi}) (\sdiff{W}{\xi})^2 }{\gamma} }{\sigma} W }{\xi} \\ \leq L(\norm{\partial_\xi W}_{L^\infty}) \norm{W}_{L^\infty} \norm{\partial_\xi W}_{L^2}^2 \end{multline*} where $L$ is a positive non-decreasing function. Due to a Sobolev embedding and the assumption on $W$, we deduce $\norm{W(t)}_{W^{1,\infty}} \leq \norm{W(t)}_{H^2}\leq \delta_1$ for all $t\in [0,T]$. Thus the energy estimate becomes \begin{equation} \label{energy:W} \frac12 \sdiff{}{t} \|W\|_{L^2}^2 - \frac12 \integral{\mathbb{R}}{ f''(\overline u)\overline u' W^2 }{\xi} + \cos \bigPar{ \theta \tfrac{\pi}{2} } \|W\|^2_{\dot{H}^{\alpha/2}} \leq 2C_{\delta_1} \norm{W}_{L^\infty} \norm{\partial_\xi W}_{L^2}^2 \end{equation} for some positive constant $C_{\delta_1}$ depending on $\delta_1$. Note that we keep $\norm{W}_{L^\infty}$ for further reference. Here we used that \begin{equation*} \integral{\mathbb{R}}{ W \RieszFeller W }{\xi} = \integral{\mathbb{R}}{ \RFsymbol(k) |\hat{W}(k)|^2 }{k} = -\cos \bigPar{ \theta \tfrac{\pi}{2} } \|W\|^2_{\dot{H}^{\alpha/2}} \end{equation*} due to Plancherel's theorem and $\sgn(k)|\hat{W}(k)|^2$ being an odd function. Similarly, we multiply the first equation of \eqref{CP:U} by $U$, obtaining \begin{multline*} \frac12 \partial_t (U^2) + \partial_\xi \Big\{(f(\overline u + U) - f(\overline u))U - \integrall{0}{U}{ (f(\overline u + \eta) - f(\overline u)) }{\eta} - \frac12 s U^2 \Big\} \\ + \overline u' \integrall{0}{U}{ (f'(\overline u + \eta) - f'(\overline u)) }{\eta} - U \RieszFeller U = 0. \end{multline*} Thus, integrating with respect to $\xi \in \mathbb{R}$, we have \begin{equation} \label{energy:U} \frac12 \sdiff{}{t} \|U\|_{L^2}^2 + \cos \bigPar{ \theta \tfrac{\pi}{2} } \|U\|^2_{\dot{H}^{\alpha/2}} \le \tfrac12 \norm{\overline u'}_{L^\infty} L(\norm{U}_{L^\infty})\ \|U\|_{L^2}^2 \leq \breve{C}_{\delta_1} \norm{U}_{L^2}^2 \end{equation} with a positive constant $\breve{C}_{\delta_1}$ depending on $\delta_1$. Next, we differentiate \eqref{CP:U}, obtaining $\partial_t \partial_\xi U + \partial_\xi^2 \{f(\overline u+U)-f(\overline u)\} -s \partial_\xi^2 U = \RieszFeller \partial_\xi U$. Testing this equation by $\partial_\xi U$ yields \begin{multline*} \frac12 \partial_t (|\partial_\xi U|^2) +\frac12 \partial_\xi \{(f'(\overline u + U) -s )(\partial_\xi U)^2\} -\partial_\xi U \RieszFeller \partial_\xi U \\ = -\frac12 \partial_\xi f'(\overline u +U)\, (\partial_\xi U)^2 -\partial_\xi \big((f'(\overline u +U) -f'(\overline u))\, \overline u'\big)\, \partial_\xi U .
\end{multline*} Integrating with respect to $\xi \in \mathbb{R}$, we get \begin{multline*} \frac12 \sdiff{}{t} \|\partial_\xi U\|_{L^2}^2 + \cos \bigPar{ \theta \tfrac{\pi}{2} } \|\partial_\xi U\|^2_{\dot{H}^{\alpha/2}} \\ =-\frac12 \integral{\mathbb{R}}{ \partial_\xi f'(\overline u +U)\, (\partial_\xi U)^2 }{\xi} - \integral{\mathbb{R}}{ \partial_\xi \big((f'(\overline u +U) -f'(\overline u))\, \overline u'\big)\, \partial_\xi U }{\xi} , \end{multline*} and hence \begin{equation} \label{energy:U'} \frac12 \sdiff{}{t} \|\partial_\xi U\|_{L^2}^2 + \cos \bigPar{ \theta \tfrac{\pi}{2} } \|\partial_\xi U\|^2_{\dot{H}^{\alpha/2}} \le \widetilde{C}_{\delta_1} \big(\norm{U}_{H^1}^2 + \|\partial_\xi U\|_{L^3}^3\big), \end{equation} where $\widetilde{C}_{\delta_1}$ is a positive constant depending on $\delta_1$. By combining \eqref{energy:W}, \eqref{energy:U} and \eqref{energy:U'}, we construct the desired energy estimate. For this purpose, we prepare some useful interpolation inequalities. For $0 \le \sigma \le 2$ and $\varepsilon > 0$, we obtain \begin{equation}\label{interpol} \|v\|^2_{\dot{H}^1} \leq \varepsilon^{\sigma -2} \|v\|^2_{\dot{H}^{\sigma/2}} + \varepsilon^{\sigma} \|v\|^2_{\dot{H}^{\sigma/2+1}}. \end{equation} The inequality \eqref{interpol} is proved as follows. For arbitrary constants $\varepsilon > 0$ and $k \in \mathbb{R}$, we put $h = \varepsilon k$. Then, by the fact that $h^2 \le |h|^{\sigma} + |h|^{2+\sigma}$ for all $h\in\mathbb{R}$ and $0 \le \sigma \le 2$, we obtain $k^2 \le \varepsilon^{\sigma -2}|k|^{\sigma} + \varepsilon^{\sigma}|k|^{2+\sigma}$. Thus, by using this inequality and Plancherel's theorem, we arrive at \eqref{interpol}. On the other hand, for $\sigma > 1/4$, we have \begin{equation}\label{GN} \|v\|_{L^3}^3 \leq C_0 \|v\|_{L^2} \|v\|^2_{H^{\sigma}} \le 2^{\sigma} C_0 \|v\|_{L^2} (\|v\|_{L^2}^2 + \|v\|^2_{\dot{H}^{\sigma}}), \end{equation} where $C_0$ is a certain positive constant. The first interpolation inequality of \eqref{GN} is a generalization of the celebrated Gagliardo-Nirenberg inequalities (see e.g. \cite{Henry:1981}) to Sobolev spaces of fractional order, which was proven by Amann~\cite[Proposition~4.1]{Amann:1985}. The second inequality holds as a consequence of $(1 + |k|^2)^{\sigma} \le 2^{\sigma}(1 + |k|^{2 \sigma})$ for all $k \in \mathbb{R}$. We multiply \eqref{energy:U} by $\gamma_1$ and combine the resultant inequality with \eqref{energy:W}, obtaining \begin{multline*} \frac12 \sdiff{}{t} (\|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2) - \frac12 \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} + \cos \bigPar{ \theta \tfrac{\pi}{2} } (\|W\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \|U\|^2_{\dot{H}^{\alpha/2}} ) \\ \le \gamma_1 \breve{C}_{\delta_1} \|U\|_{L^2}^2 + 2C_{\delta_1} \norm{W}_{L^\infty} \|\partial_\xi W\|^2_{L^{2}}, \end{multline*} where $\gamma_1$ is a positive constant to be determined later. By the fact that $\partial_\xi W = U$, we can apply \eqref{interpol} with $v = W$ and $\sigma = \alpha$ to the above inequality, and get \begin{multline*} \frac12 \sdiff{}{t} (\|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2) - \frac12 \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} \\ + \{ \cos \bigPar{ \theta \tfrac{\pi}{2} } - \varepsilon_1^{\alpha - 2}\gamma_1 \breve{C}_{\delta_1} \} \|W\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \{\cos \bigPar{ \theta \tfrac{\pi}{2} } - \varepsilon_1^{\alpha}\breve{C}_{\delta_1}\} \|U\|^2_{\dot{H}^{\alpha/2}} \\ \le 2C_{\delta_1} \norm{W}_{L^\infty} \|\partial_\xi W\|^2_{L^{2}}.
\end{multline*} Therefore, we choose $\varepsilon_1$ satisfying $4 \varepsilon_1^\alpha \breve{C}_{\delta_1} = \cos(\theta \pi/2)$, and $\gamma_1 = \varepsilon_1^2$ to get \begin{multline} \label{energy:WU} \frac12 \sdiff{}{t} (\|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2) - \frac12 \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} \\ + \frac34 \cos \bigPar{ \theta \tfrac{\pi}{2} } (\|W\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \|U\|^2_{\dot{H}^{\alpha/2}} ) \le 2C_{\delta_1} \norm{W}_{L^\infty} \|\partial_\xi W\|^2_{L^{2}}. \end{multline} Similarly we multiply \eqref{energy:U'} by $\gamma_2$ and combine the resultant inequality with~\eqref{energy:WU}. Furthermore, applying \eqref{interpol} to the resultant inequality, we have \begin{multline*} \frac12 \sdiff{}{t} (\|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2 + \gamma_2 \|\partial_\xi U\|_{L^2}^2) - \frac12 \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} + \{ \frac34 \cos \bigPar{ \theta \tfrac{\pi}{2} } - \varepsilon_2^{\alpha - 2}\gamma_2 \widetilde{C}_{\delta_1} \} \|W\|^2_{\dot{H}^{\alpha/2}} \\ + \{ \frac34 \gamma_1 \cos \bigPar{ \theta \tfrac{\pi}{2} } - (1 + \varepsilon_2^{-2}) \varepsilon_2^{\alpha}\gamma_2 \widetilde{C}_{\delta_1}\} \|U\|^2_{\dot{H}^{\alpha/2}} + \gamma_2 \{\cos \bigPar{ \theta \tfrac{\pi}{2} } - \varepsilon_2^{\alpha}\widetilde{C}_{\delta_1}\} \|\partial_\xi U\|^2_{\dot{H}^{\alpha/2}} \\ \le 2C_{\delta_1} \norm{W}_{L^\infty} \|\partial_\xi W\|^2_{L^{2}} + \gamma_2 \widetilde{C}_{\delta_1} \|\partial_\xi U\|_{L^3}^3. \end{multline*} Then, choosing $\varepsilon_2$ such that $4 \varepsilon_2^\alpha \widetilde{C}_{\delta_1} = \cos(\theta \pi/2)$, and $\gamma_2 = \min\{ \varepsilon_2^2, \gamma_1(1 + \varepsilon_2^{-2})^{-1}\}$, yields \begin{multline}\label{energy:WUU'} \frac12 \sdiff{}{t} (\|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2 + \gamma_2 \|\partial_\xi U\|_{L^2}^2) - \frac12 \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} \\ + \frac12 \cos \bigPar{ \theta \tfrac{\pi}{2} } (\|W\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \|U\|^2_{\dot{H}^{\alpha/2}} + \gamma_2 \|\partial_\xi U\|^2_{\dot{H}^{\alpha/2}}) \\ \le 2C_{\delta_1} \norm{W}_{L^\infty} \|\partial_\xi W\|^2_{L^{2}} + \gamma_2 \widetilde{C}_{\delta_1} \|\partial_\xi U\|_{L^3}^3. \end{multline} We introduce the energy and dissipation norms as follows. \begin{align*} E(t)^2 &:= \sup_{0 \le \tau \le t}(\|W(\tau)\|_{L^2}^2 + \gamma_1 \|U(\tau)\|_{L^2}^2 + \gamma_2 \|\partial_\xi U(\tau)\|_{L^2}^2), \\ D(t)^2 &:= \integrall{0}{t}{(\|W(\tau)\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \|U(\tau)\|^2_{\dot{H}^{\alpha/2}} + \gamma_2 \|\partial_\xi U(\tau)\|^2_{\dot{H}^{\alpha/2}}) }{\tau}. \end{align*} Then, integrating \eqref{energy:WUU'} with respect to $t$, we have \begin{multline*} \|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2 + \gamma_2 \|\partial_\xi U\|_{L^2}^2 + \cos \bigPar{ \theta \tfrac{\pi}{2} } D(t)^2 - \integrall{0}{t}{\integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} \\ \le E_0^2 + \integrall{0}{t}{ \big(4C_{\delta_1} \norm{W}_{L^\infty} \|U\|^2_{L^2} + 2 \gamma_2 \widetilde{C}_{\delta_1} \|\partial_\xi U\|_{L^3}^3\big) }{\tau}, \end{multline*} where we define $E_0^2 := \|W_0\|_{L^2}^2 + \gamma_1 \|U_0\|_{L^2}^2 + \gamma_2 \|\partial_\xi U_0\|_{L^2}^2$. 
Thus, by employing \eqref{interpol} and \eqref{GN} with $v = \partial_\xi U$ and $\sigma = \alpha/2$, we arrive at \begin{equation*} E(t)^2 + \cos \bigPar{ \theta \tfrac{\pi}{2} } D(t)^2 - \integrall{0}{t}{ \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} \le E_0^2 + C E(t)D(t)^2 \end{equation*} for some positive constant~$C$. Finally, using that $E(t) \le C \delta_1$ and choosing $\delta_1$ sufficiently small, the term $C E(t)D(t)^2$ can be absorbed into the left hand side, and we arrive at the desired a-priori estimate. \end{proof} \bigskip \begin{proof}[Proof of Theorem~\ref{theorem:ASW}] The existence of global-in-time solutions to the initial value problem \eqref{CP:W} can be obtained by the continuation argument based on the local existence result in Proposition~\ref{prop:CP:H2} combined with the {\it a-priori} estimate in Lemma~\ref{lem:a_priori}. Because the argument is standard, we omit the details here. In the rest of this proof, we prove only the asymptotic stability result \eqref{asyW}. To this end, we prepare the following interpolation inequality. For $0 \le \sigma \le 2$, we have \begin{equation*} \|v\|_{\dot{H}^\sigma} \leq 2(\|v\|_{\dot{H}^{\sigma/2}} + \|v\|_{\dot{H}^{\sigma/2+1}}), \end{equation*} by using the fact that $k^{2\sigma} \le 2(|k|^{\sigma} + |k|^{2+\sigma})$. By virtue of this interpolation inequality, \eqref{interpol}, and the first equation of \eqref{CP:U}, we have \begin{equation*} \begin{split} \|\partial_t U\|_{L^2} & \le \|\RieszFeller U\|_{L^2} + \|\{f'(\overline u+U)-f'(\overline u)\}\overline u'\|_{L^2} + \| \{f'(\overline u + U) -s\}\partial_\xi U\|_{L^2} \\ & \le \|U\|_{\dot{H}^\alpha} + C \|U\|_{H^1} \le C \sum_{\ell = 0}^2 \|W\|_{\dot{H}^{\alpha/2 + \ell}}. \end{split} \end{equation*} Thus, by the above estimate, we compute that \begin{equation*} \begin{split} \Big| \sdiff{}{t} \| U \|^2_{L^2} \Big| \le \|U\|_{L^2}^2 + \|\partial_t U\|_{L^2}^2 \le C \sum_{\ell = 0}^2 \|W\|_{\dot{H}^{\alpha/2 + \ell}}^2. \end{split} \end{equation*} This estimate and \eqref{interpol} combined with \eqref{energy_est} tell us that $\|U(\cdot)\|_{L^2}^2 \in W^{1,1}(0,\infty)$, and hence $\|U(t)\|_{L^2} \to 0$ as $t \to \infty$. Finally, employing the Sobolev inequality $\| v \|_{L^\infty} \le \sqrt{2} \|v\|_{L^2}^{1/2} \|\partial_\xi v\|_{L^2}^{1/2}$, we arrive at the desired result. \end{proof} \section{Convergence rate toward traveling waves} \label{sec:Rates} We consider the convergence rate of the solution toward the corresponding traveling waves. Kawashima, Nishibata and Nishikawa~\cite{Kawashima+etal:2004} proposed an $L^p$ energy method to study the asymptotic stability and the associated convergence rates of planar viscous rarefaction waves of multi-dimensional viscous conservation laws. To obtain the convergence estimate, these authors derived an $L^1$ estimate by using the energy method associated with the sign function. This approach is useful; it is, however, difficult to apply directly in our setting because of the nonlocal Riesz-Feller operator. To overcome this difficulty, we employ not only the energy method but also the representation of the mild solution. More precisely, our purpose in this section is to derive the following theorem. \begin{theorem}\label{theorem:CRW} Suppose that the same assumptions as in Theorem~\ref{theorem:AS} hold and $f\in C^\infty(\mathbb{R})$.
Then the Cauchy problem \eqref{CP:W} with $W_0\in W^{1,1}(\mathbb{R}) \cap W^{1,\infty}(\mathbb{R})$ has a unique global solution $W(\xi,t)$ satisfying \[ W \in C([0,\infty);W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R}))\cap L^\infty(0,\infty;W^{1,\infty}(\mathbb{R})) \] with estimates \eqref{low_energy_est} and~\eqref{est-L1}. Moreover, there exists a positive constant~$\delta_1$ such that if $\norm{W_0}_{W^{1,1}}\leq \delta_1$ then \begin{equation}\label{decay_est_W} \|W(t)\|_{H^1} \le C E_1\ (1+t)^{-1/(2\alpha)} \end{equation} for $t \ge 0$, where $E_1 := \|W_0\|_{H^1} + \|W_0\|_{W^{1,1}}$ and $C$ is a certain positive constant independent of $t$. \end{theorem} The proof of the existence of global-in-time solutions is based on results for the Cauchy problem~\eqref{eq:FCL} with the fractional Laplacian~\cite{Droniou+etal:2003} and their extension to the Cauchy problem~\eqref{eq:FCL} with Riesz-Feller operators~\cite{Achleitner+etal:2012}. There the assumption $f\in C^\infty(\mathbb{R})$ is made to simplify the presentation. The method is applicable also in the case of $f\in C^k(\mathbb{R})$, $k\geq 2$, but yields a lower regularity for the unique solution $u$. \begin{lemma} \label{lem:CP:W11nW1infinity} Suppose that $f\in C^\infty(\mathbb{R})$ and $W_0\in W^{1,1}(\mathbb{R})\cap W^{1,\infty}(\mathbb{R})$. Then the Cauchy problem~\eqref{CP:W} has a unique mild solution $W \in C([0,T];W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R}))\cap L^\infty(0,T;W^{1,\infty}(\mathbb{R}))$ for any $T>0$ with \begin{align} \norm{W(t)}_{L^1} &\leq \norm{W_0}_{L^1} + L(\sup_{\tau\in[0,t]} \norm{\sdiff{W}{\xi}(\tau)}_{L^\infty}) \norm{\sdiff{W_0}{\xi}}_{L^1}\ t \ , \label{est:W:L1} \\ \norm{\sdiff{W}{\xi}(t)}_{L^1} &\leq \norm{\sdiff{W_0}{\xi}}_{L^1} \ , \label{est:U:L1} \\ \norm{W(t)}_{L^\infty} &\leq \norm{\sdiff{W_0}{\xi}}_{L^1} \ , \label{est:W:Linfinity} \\ \norm{\sdiff{W}{\xi}(t)}_{L^\infty} &\leq \norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty} \ , \label{est:U:Linfinity} \end{align} for $0 \le t \le T$, where $L$ is a positive non-decreasing function. Moreover, for any positive time $t_0>0$, $W\in C_b^\infty(\mathbb{R}\times (t_0,\infty))$ and it is a classical solution of the first equation of \eqref{CP:W}. \end{lemma} \begin{proof} We use again $U =\sdiff{W}{\xi}$ and first analyze the Cauchy problem~\eqref{CP:U} with initial datum $U_0 :=\sdiff{W_0}{\xi} \in L^1(\mathbb{R})\cap L^{\infty}(\mathbb{R})$. We recall $U = u -\overline u$, where $u$ and $\overline u$ solve equation~\eqref{CLND+MCF}, and $\overline u$ is a monotone decreasing function satisfying $\lim_{\xi\to\pm\infty} \overline u(\xi) =u_{\pm}$. Thus, $u_0 := U_0 +\overline u$ is essentially bounded. Due to~\cite[Theorem~1]{Droniou+etal:2003} and its extension to equations with Riesz-Feller operators in~\cite{Achleitner+etal:2012}, the Cauchy problem for~\eqref{CLND+MCF} with initial datum $u_0\in L^\infty(\mathbb{R})$ has a (unique) solution which satisfies $\norm{u(t)}_{L^\infty(\mathbb{R})}\leq \norm{u_0}_{L^\infty(\mathbb{R})}$ for all $t\geq 0$; in fact, the solution $u$ takes values between the essential lower and upper bounds of $u_0$. Therefore, $U(t) = u(t) -\overline u \in L^\infty(\mathbb{R}_\xi)$ for all $t\geq 0$ and estimate~\eqref{est:U:Linfinity} follows.
Due to~\cite[Remark 1.2]{Droniou+etal:2003} and its extension to equations with Riesz-Feller operators, equation~\eqref{CLND+MCF} supports an $L^1$ contraction principle: If $u_0$, $v_0 \in L^\infty(\mathbb{R})$ satisfy $u_0 -v_0 \in L^1(\mathbb{R})$, then the associated solutions $u$ and $v$ of the Cauchy problem for~\eqref{CLND+MCF} satisfy $\norm{u(t) -v(t)}_{L^1(\mathbb{R})} \leq \norm{u_0 -v_0}_{L^1(\mathbb{R})}$ for all $t\geq 0$. Therefore, $U(t) = u(t) -\overline u \in L^1(\mathbb{R}_\xi)$ with $\norm{U(t)}_{L^1}\leq \norm{u_0 -\overline u}_{L^1} =\norm{U_0}_{L^1}$ for all $t\geq 0$, which implies estimate~\eqref{est:U:L1}. Moreover, its primitive satisfies $W(t) \in L^\infty(\mathbb{R}_\xi)$ for all $t\geq 0$, since \[ \norm{W(t)}_{L^\infty} = \Norm{ \integrall{-\infty}{\xi}{ \sdiff{W}{y}(y,t) }{y} }_{L^\infty} \leq \integrall{-\infty}{\infty}{ \abs{\sdiff{W}{y}(y,t)} }{y} = \norm{\sdiff{W}{\xi}(t)}_{L^1} \ . \] It remains to prove that $W(t) \in L^1(\mathbb{R}_\xi)$ for all $t\geq 0$ and the stated continuity in time. Considering the mild formulation~\eqref{mildform:W}, we obtain the estimate \begin{align} \|W(t)\|_{L^1} &\leq \|\Green(t)\ast W_0 \|_{L^1} + \integrall{0}{t}{ \| \Green(t-\tau) \ast \{f(\overline u+U)-f(\overline u)-s U\} \|_{L^1} }{\tau} \nonumber \\ &\leq \|W_0 \|_{L^1} + \integrall{0}{t}{ \|f(\overline u+U)-f(\overline u)-s U\|_{L^1} }{\tau} \nonumber \\ &\leq \|W_0\|_{L^1} + \integrall{0}{t}{ \big(\widetilde{L}(\norm{U(\tau)}_{L^\infty})\ \|U(\tau)\|_{L^1}\big) }{\tau} \nonumber \\ &\leq \|W_0\|_{L^1} + \widetilde{L}(\norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty})\ \|U_0\|_{L^1}\ t \ , \label{low_L1_W} \end{align} for $t \ge 0$, by using the local Lipschitz continuity of $f$ and the previous estimates on $U =\sdiff{W}{\xi}$; again, $\widetilde{L}$ is a positive non-decreasing function. Moreover, for any positive time $t_0>0$, $U \in C_b^\infty(\mathbb{R}\times (t_0,\infty))$ and $U = \partial_\xi W$ satisfies the first equation of \eqref{CP:U} in the classical sense, see~\cite{Droniou+etal:2003,Achleitner+Hittmeir+Schmeiser:2011}. Due to the integrability of $U$, $W$ is also a global-in-time solution of \eqref{CP:W}, and $W\in C_b^\infty(\mathbb{R}\times (t_0,\infty))$ is a classical solution of the first equation of \eqref{CP:W} for all $t\geq t_0>0$. To prove that $W \in C([0,T];W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R}))$, we will use the mild formulation \begin{equation} \label{mildform:W:2} W(t) = \Green(t)\ast W_0 - \integrall{0}{t}{ \Green(t-\tau)\ast F(\overline u,\sdiff{W}{\xi}) }{\tau}, \end{equation} where $F(\overline u,\sdiff{W}{\xi}) := f(\overline u+\sdiff{W}{\xi})-f(\overline u)-s \sdiff{W}{\xi}$. The first summand on the right hand side satisfies $\Green(\cdot)\ast W_0 \in C([0,T];W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R}))$, due to the assumptions on $W_0$ and the strong continuity of the semigroup in Lemma~\ref{lem:SFDE:semigroup}. To prove continuity of the second summand, \[ \mathcal{G}_2[W](t) :=\integrall{0}{t}{ \Green(t-\tau)\ast F(\overline u,\sdiff{W}{\xi}) }{\tau} \ , \] we use the estimates~\eqref{est:W:L1}--\eqref{est:U:Linfinity} and the strong continuity of the semigroup in Lemma~\ref{lem:SFDE:semigroup}. In particular, we assume w.l.o.g.
$0<t_1<t_2$ and rewrite \begin{align*} &\mathcal{G}_2[W](t_1) -\mathcal{G}_2[W](t_2) \\ &\ = \integrall{0}{t_1}{ (\Green(t_1-\tau) -\Green(t_2-\tau))\ast F(\overline u,\sdiff{W}{\xi}) }{\tau} +\integrall{t_1}{t_2}{ \Green(t_2-\tau)\ast F(\overline u,\sdiff{W}{\xi}) }{\tau} \\ &\ = \integrall{0}{t_1}{ \big[\Green(t_1-\tau)\ast F(\overline u,\sdiff{W}{\xi}) -\Green(t_2-t_1) \ast\big(\Green(t_1-\tau)\ast F(\overline u,\sdiff{W}{\xi})\big) \big] }{\tau} \\ &\ \quad +\integrall{t_1}{t_2}{ \Green(t_2-\tau)\ast F(\overline u,\sdiff{W}{\xi}) }{\tau} \end{align*} using the semigroup property~\ref{K:SFDE:prop3}. The first summand converges to zero as $t_2\to t_1$ in the $W^{1,p}$-norms, $p=1,2$, due to the Dominated Convergence Theorem, the strong continuity of the semigroup in Lemma~\ref{lem:SFDE:semigroup} and the fact that $\integrall{0}{t_1}{ \big(\Green(t_1-\tau) \ast F(\overline u,\sdiff{W}{\xi})\big) }{\tau} \in W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R})$. Similarly, the second summand converges to zero as $t_2\to t_1$ in the $W^{1,p}$-norms, $p=1,2$, since $\Green(t_2-\cdot) \ast F(\overline u,\sdiff{W}{\xi}) \in L^1((t_1,t_2);W^{1,1}(\mathbb{R})\cap W^{1,\infty}(\mathbb{R}))$. Thus, the right hand side of~\eqref{mildform:W:2} is continuous in time with respect to the $W^{1,p}$-norms, $p=1,2$, hence $W \in C([0,T];W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R}))$. Finally, $W \in L^\infty(0,T;W^{1,\infty}(\mathbb{R}))$ follows from the estimates~\eqref{est:W:Linfinity}--\eqref{est:U:Linfinity}. \end{proof} Next, we prove the following {\it a-priori} estimates for the solutions obtained in Lemma~\ref{lem:CP:W11nW1infinity}. \begin{lemma}\label{lem:a_priori_CR} Suppose that the same assumptions as in Theorem~\ref{theorem:CRW} hold. Let $W(\xi,t)$ be a solution to \eqref{CP:W} satisfying $W \in C([0,T]; W^{1,1}(\mathbb{R})\cap H^1(\mathbb{R}))\cap L^\infty(0,T;W^{1,\infty}(\mathbb{R}))$ for any $T > 0$. Then there exists some positive constant $\delta_1$ independent of $T$ such that if $\norm{W_0}_{W^{1,1}} \le \delta_1$, the {\it a-priori} estimates \begin{multline}\label{low_energy_est} \|W(t)\|_{H^1}^2 + C \integrall{0}{t}{(\|W(\tau)\|^2_{\dot{H}^{\alpha/2}} + \|W(\tau)\|^2_{\dot{H}^{\alpha/2 + 1}}) }{\tau} - \integrall{0}{t}{ \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} \le \|W_0\|_{H^1}^2 \ , \end{multline} \begin{equation}\label{est-L1} \|W(t)\|_{W^{1,1}} \leq C (\|W_0\|_{W^{1,1}} + \|W_0\|_{H^1}^2) \ , \end{equation} hold for $t \in [0,T]$, where $C$ is a constant independent of time $t$. \end{lemma} \begin{proof} Following the proof of Lemma~\ref{lem:a_priori}, we deduce again estimate~\eqref{energy:WU}, i.e. \begin{multline*} \frac12 \sdiff{}{t} (\|W\|_{L^2}^2 + \gamma_1 \|U\|_{L^2}^2) - \frac12 \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} \\ + \frac34 \cos \bigPar{ \theta \tfrac{\pi}{2} } (\|W\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \|U\|^2_{\dot{H}^{\alpha/2}} ) \le L(\norm{\sdiff{W}{\xi}}_{L^\infty})\ \norm{W}_{L^\infty} \|\partial_\xi W\|^2_{L^{2}} \end{multline*} for some positive non-decreasing function~$L$. Integrating this inequality with respect to time and using \eqref{interpol}, the estimates~\eqref{est:W:Linfinity}--\eqref{est:U:Linfinity} as well as the smallness of $\norm{W_0}_{W^{1,1}}$, we arrive at \eqref{low_energy_est}. Thus it remains to prove \eqref{est-L1}. Due to Lemma~\ref{lem:CP:W11nW1infinity}, for all $t_0>0$, $W\in C_b^\infty(\mathbb{R}\times (t_0,\infty))$ and it is a classical solution of the first equation of \eqref{CP:W}.
Therefore, we can adapt the $L^1$ energy method introduced by Kawashima, Nishibata and Nishikawa~\cite{Kawashima+etal:2004}. For a non-negative function $\rho: \mathbb{R}\to\mathbb{R}$ satisfying $\rho\in C_0^\infty(\mathbb{R})$ and $\integral{\mathbb{R}}{\rho(x)}{x}=1$, the convolution operator $\rho_\delta\ast$ with $\rho_\delta(x) = \delta^{-1} \rho(x/\delta)$ is a Friedrichs mollifier. We introduce the functions \[ s_\delta(x) := (\rho_\delta \ast \sgn)(x) \XX{and} S_\delta(x) := \integrall{0}{x}{ s_\delta(\xi) }{\xi} \,, \] where the sign function~$\sgn(x)$ is defined by \[ \sgn(x) := \begin{cases} -1 & \text{for } x<0 \,, \\ 0 & \text{for } x=0 \,, \\ 1 & \text{for } x>0 \,. \end{cases} \] Note that $s_\delta(x)\to\sgn(x)$ as $\delta\to 0$ in the sense of weak-$\star$ convergence in $L^\infty(\mathbb{R})$ and of strong convergence in $L_{loc}^q(\mathbb{R})$, $1\leq q < \infty$. Choosing $\rho$ to be an even function, the function $s_\delta(x)$ satisfies $s'_\delta(x)=2\rho_\delta(x)\geq 0$ and $s_\delta(0)=0$. Moreover, $S_\delta(x)\to |x|$ strongly in $L^1(\mathbb{R})$ as $\delta\to 0$. To estimate $\norm{W(t)}_{W^{1,1}}$, we recall that $\norm{U(t)}_{L^1} \leq \norm{U_0}_{L^1}$ for all $t\in [0,T]$, due to estimate~\eqref{est:U:L1} in Lemma~\ref{lem:CP:W11nW1infinity}. Next, we show that \begin{equation}\label{est-L1-W} \|W(t)\|_{L^1} \leq C \|W_0\|_{W^{1,1}} + C\|W_0\|_{H^1}^2 \end{equation} for $t\in [0,T]$. We will use estimate~\eqref{est:W:L1} for small times $t\leq 1$, and derive~\eqref{est-L1-W} for large times $t\geq 1$: We multiply the first equation of \eqref{CP:W} by $s_\delta(W)=(\rho_\delta\ast\sgn)(W)$ and obtain \begin{equation} \label{eq:U:primitive:L1} \partial_t S_\delta(W) + s_\delta(W) \{h(\overline u+U)-h(\overline u)\} = s_\delta(W) \RieszFeller W \ , \end{equation} where $h(v):=f(v)-s v$ is a convex function. We integrate equation~\eqref{eq:U:primitive:L1} over $\mathbb{R} \times [t_0,t]$ and derive \begin{multline}\label{eq:U:primitive:L1:approximate} \integrall{t_0}{t}{ \integral{\mathbb{R}}{ \partial_t S_\delta(W) }{x} }{\tau} + \integrall{t_0}{t}{ \integral{\mathbb{R}}{ s_\delta(W) \{h(\overline u+U)-h(\overline u)\} }{x} }{\tau} = \integrall{t_0}{t}{ \integral{\mathbb{R}}{ s_\delta(W) \RieszFeller W }{x} }{\tau}. \end{multline} The first integral satisfies, due to Fubini's theorem and the strong convergence of $S_\delta$ in $L^1$, \begin{multline}\label{St-est} \integrall{t_0}{t}{ \integral{\mathbb{R}}{ \partial_t S_\delta(W) }{x} }{\tau} = \integral{\mathbb{R}}{ \{ S_\delta(W(x,t))-S_\delta(W(x,t_0)) \} }{x} \to \|W(t)\|_{L^1} - \|W(t_0)\|_{L^1} \end{multline} as $\delta \to 0$. Next, we prove that the integral on the right-hand side of~\eqref{eq:U:primitive:L1:approximate} is non-positive, \begin{equation}\label{DW-est} \integrall{t_0}{t}{ \integral{\mathbb{R}}{ s_\delta(W) \RieszFeller[W] }{x} }{\tau} \leq 0. \end{equation} Indeed, $S_\delta\in C^2(\mathbb{R})$ is a convex function with $S_\delta'=s_\delta$ and $S_\delta''=s_\delta'=2\rho_\delta\geq 0$. Moreover, under our assumptions, $W(\cdot,t)\in H^1(\mathbb{R})$ for $t\geq 0$ and $W\in C_b^\infty(\mathbb{R}\times (t_0,\infty))$ for $t_0 >0$. Thus, $\lim_{\xi\to\pm\infty} W(\xi,t) = 0$ and $S_\delta(W) \in C^2_b$ with \[ s_\delta(W)\, \RieszFeller[W] = S_\delta'(W)\, \RieszFeller[W] \leq \RieszFeller[S_\delta(W)] \,, \] due to Lemma~\ref{lemma:CI}.
Consequently, \[ \integral{\mathbb{R}}{ s_\delta(W)\, \RieszFeller[W] }{x} \leq \integral{\mathbb{R}}{ \RieszFeller[S_\delta(W)] }{x} = 0\,, \] due to Proposition~\ref{prop:RieszFeller:estimate}. We estimate the second term on the left-hand side of~\eqref{eq:U:primitive:L1:approximate} as follows. Using the fact that $|s_\delta (W)| \le 1$ and $h(\overline u+U)-h(\overline u) = h'(\overline u)U + O(|U|^2)$, we have \begin{equation*} \integral{\mathbb{R}}{ s_\delta(W) \{h(\overline u+U)-h(\overline u)\} }{\xi} = \integral{\mathbb{R}}{ s_\delta(W) h'(\overline u)U }{\xi} + R \end{equation*} with $|R| \le L(\norm{U}_{L^\infty})\ \|U\|_{L^2}^2 /2$. Furthermore, using $U = \partial_\xi W$ and integrating by parts, we compute that \begin{equation*} \integral{\mathbb{R}}{ s_\delta(W) h'(\overline u)U }{\xi} = - \integral{\mathbb{R}}{ S_\delta(W) h''(\overline u) \overline u' }{\xi} \geq 0, \end{equation*} since the function $S_\delta$ is non-negative with $S_\delta(0)=0$, $h\in C^2(\mathbb{R})$ is a convex function, and $\overline u$ is a monotone decreasing traveling wave profile. Therefore, employing the previous estimates and taking the limit $\delta\to 0$ in equation~\eqref{eq:U:primitive:L1:approximate} yields \begin{multline}\label{high_L1_W} \|W(t)\|_{L^1} \leq \|W(t_0)\|_{L^1} + L(\norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty})\ \integrall{t_0}{t}{ \|U(\tau)\|_{L^2}^2 }{\tau} \\ \leq \|W(t_0)\|_{L^1} + C\|W_0\|_{H^1}^2 \end{multline} for $t \ge t_0 >0$ and some positive constant~$C$; here we used~\eqref{low_energy_est} and~\eqref{interpol}. The estimate \eqref{high_L1_W} is valid for an arbitrary positive constant $t_0$. Thus we can estimate from~\eqref{high_L1_W} and~\eqref{low_L1_W} that \begin{equation*} \|W(t)\|_{L^1} \leq \|W(1)\|_{L^1} + C\|W_0\|_{H^1}^2 \leq \|W_0\|_{L^1} + C\|U_0\|_{L^1} + C\|W_0\|_{H^1}^2 \end{equation*} for $t \ge 1$. Finally, combining this estimate and \eqref{low_L1_W} again, we arrive at the desired estimate \eqref{est-L1-W}; together with estimate~\eqref{est:U:L1} for $\norm{\partial_\xi W(t)}_{L^1}$, this proves \eqref{est-L1}. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:CRW}] The existence of the global solution follows from Lemma~\ref{lem:CP:W11nW1infinity} and the a-priori estimates in Lemma~\ref{lem:a_priori_CR}. Here we derive only the decay estimate \eqref{decay_est_W}. To this end, we first introduce the following Nash inequality: \begin{equation} \label{ineq:Nash} \|v\|_{L^2}^{2(1+2\sigma)} \leq C_\sigma \|v\|_{L^1}^{4\sigma} \|v\|_{\dot{H}^\sigma}^2 \end{equation} for $\sigma > 0$ and $v \in L^1(\mathbb{R}) \cap H^\sigma (\mathbb{R})$, where $C_\sigma$ is a positive constant which depends on $\sigma$. Inequality~\eqref{ineq:Nash} follows by splitting the Plancherel representation of $\|v\|_{L^2}^2$ at a frequency $|k|=\varrho$, estimating the low frequencies via $\norm{\hat v}_{L^\infty}\leq\norm{v}_{L^1}$ and the high frequencies via $\varrho^{-2\sigma}\|v\|_{\dot{H}^\sigma}^2$, and optimizing in $\varrho$. Following the proof of Lemma~\ref{lem:a_priori}, we deduce again estimate~\eqref{energy:WU}.
Multiplying this inequality by $(1+\tau)^\beta$ for $\beta \in \mathbb{R}$ and integrating over $\tau\in [0,t]$, we obtain \begin{align*} \mathcal{E}_{\beta}(t)^2 & -\integrall{0}{t}{ (1+\tau)^\beta \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} + \frac32 \cos \bigPar{ \theta \tfrac{\pi}{2} } \integrall{0}{t}{ \mathcal{D}_\beta(\tau)^2 }{\tau} \\ &\le \|W_0\|_{L^2}^2 + \gamma_1 \|U_0\|_{L^2}^2 + \beta \integrall{0}{t}{ \mathcal{E}_{\beta-1}(\tau)^2 }{\tau} \\ & \qquad + L(\norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty})\ \integrall{0}{t}{ (1+\tau)^\beta \|W\|_{L^\infty} \|\partial_\xi W\|^2_{L^{2}}}{\tau} \end{align*} where $\mathcal{E}_\beta(t)^2 := (1+t)^\beta (\|W(t)\|_{L^2}^2 + \gamma_1 \|U(t)\|_{L^2}^2)$, and \begin{equation*} \mathcal{D}_\beta(t)^2 := (1+t)^\beta (\|W(t)\|^2_{\dot{H}^{\alpha/2}} + \gamma_1 \|U(t)\|^2_{\dot{H}^{\alpha/2}} ). \end{equation*} We compute via Nash's inequality~\eqref{ineq:Nash} with $\sigma=\alpha/2$ and Young's inequality that \begin{align*} (1+t)^{\beta-1} \|v\|_{L^2}^2 & \leq C (1+t)^{\beta-1} \|v\|_{\dot{H}^{\alpha/2}}^{\frac{2}{1+\alpha}} \|v\|_{L^1}^{\frac{2\alpha}{1+\alpha}} \\ & = C \{ (1+t)^{\beta} \norm{v}_{\dot{H}^{\alpha/2}}^2 \}^{\frac{1}{1+\alpha}} \{ (1+t)^{\beta-\frac{1+\alpha}{\alpha}} \|v\|_{L^1}^2 \}^{\frac{\alpha}{1+\alpha}}\\ & \le \epsilon (1+t)^{\beta} \norm{v}_{\dot{H}^{\alpha/2}}^2 + C_\epsilon (1+t)^{\beta-\frac{1+\alpha}{\alpha}} \|v\|_{L^1}^2 \ , \end{align*} for all $\epsilon>0$ and some positive constant $C_\epsilon$. Thus we get $\mathcal{E}_{\beta-1}(t)^2 \le \epsilon \mathcal{D}_{\beta}(t)^2 + C_\epsilon (1+t)^{\beta-\frac{1+\alpha}{\alpha}} (\|W\|_{L^1}^2 + \gamma_1 \|U\|_{L^1}^2) $. Therefore, employing this estimate and \eqref{est-L1}, we obtain \begin{align*} \mathcal{E}_{\beta}(t)^2 & -\integrall{0}{t}{ (1+\tau)^\beta \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} + \Big\{ \frac32 \cos \bigPar{ \theta \tfrac{\pi}{2} } - \epsilon \beta \Big\}\integrall{0}{t}{ \mathcal{D}_\beta(\tau)^2 }{\tau} \\[-2mm] & \le \|W_0\|_{L^2}^2 + \gamma_1 \|U_0\|_{L^2}^2 + \beta C_\epsilon \integrall{0}{t}{ \ (1+\tau)^{\beta-\frac{1+\alpha}{\alpha}} (\|W\|_{L^1}^2 + \gamma_1 \|U\|_{L^1}^2) }{\tau} \\[-2mm] &\qquad + L(\norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty})\ \integrall{0}{t}{ (1+\tau)^\beta \|W\|_{L^\infty} \|\partial_\xi W\|^2_{L^{2}}}{\tau} \\[-2mm] & \le C\|W_0\|_{H^1}^2 + \beta C_\epsilon (\|W_0\|_{H^1}^2 + \|W_0\|_{W^{1,1}})^2 \integrall{0}{t}{ \ (1+\tau)^{\beta-\frac{1+\alpha}{\alpha}} }{\tau} \\[-2mm] &\qquad + L(\norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty})\ \integrall{0}{t}{ (1+\tau)^\beta \|W\|_{L^\infty} \|\partial_\xi W\|^2_{L^{2}}}{\tau}. \end{align*} For this inequality, we take $\beta$ and $\epsilon$ which satisfy $$ \beta-\frac{1+\alpha}{\alpha} > 1, \qquad \frac32 \cos \bigPar{ \theta \tfrac{\pi}{2} } - \epsilon \beta > 0, $$ and note that then $\integrall{0}{t}{ (1+\tau)^{\beta-\frac{1+\alpha}{\alpha}} }{\tau} \le C (1+t)^{\beta-\frac{1+\alpha}{\alpha}+1} = C (1+t)^{\beta-1/\alpha}$, obtaining \begin{align*} \mathcal{E}_{\beta}(t)^2 &- \integrall{0}{t}{ (1+\tau)^\beta \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} + c \integrall{0}{t}{ \mathcal{D}_\beta(\tau)^2 }{\tau} \\[-2mm] &\le C(\|W_0\|_{H^1}^2 + \|W_0\|_{W^{1,1}})^2 \ (1+t)^{\beta-1/\alpha} \\ & \qquad + L(\norm{\sdiff{W_0}{\xi}}_{L^\infty} +2\norm{\overline u}_{L^\infty})\ \integrall{0}{t}{ (1+\tau)^\beta \|W\|_{L^\infty} \|U\|^2_{L^{2}}}{\tau} \ , \end{align*} for some positive constant~$c$.
Finally, using \eqref{interpol}, the estimates~\eqref{est:W:Linfinity}--\eqref{est:U:Linfinity} and the smallness of $\norm{W_0}_{W^{1,1}}$, we arrive at \begin{multline*} \mathcal{E}_{\beta}(t)^2 - \integrall{0}{t}{ (1+\tau)^\beta \integral{\mathbb{R}}{ f''(\overline u) \overline u' W^2 }{\xi} }{\tau} + c \integrall{0}{t}{ \mathcal{D}_\beta(\tau)^2 }{\tau} \\[-1mm] \le C(\|W_0\|_{H^1}^2 + \|W_0\|_{W^{1,1}})^2 (1+t)^{\beta-1/\alpha} \le C E_1^2 \ (1+t)^{\beta-1/\alpha} . \end{multline*} Since $U = \partial_\xi W$ and hence $\mathcal{E}_{\beta}(t)^2 \geq \min\{1,\gamma_1\}\, (1+t)^{\beta}\, \|W(t)\|_{H^1}^2$, dividing by $(1+t)^{\beta}$ yields the desired estimate \eqref{decay_est_W}. \end{proof}
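For the reader's convenience, we record one admissible choice of the parameters in the proof above; this is only an illustration, since any pair $(\beta,\epsilon)$ satisfying the two displayed conditions works. Because $\frac{1+\alpha}{\alpha} = 1 + \frac{1}{\alpha} \leq 2$ for $\alpha\in(1,2]$, the choice \[ \beta = 4 \Xx{and} 0<\epsilon < \tfrac38 \cos \bigPar{ \theta \tfrac{\pi}{2} } \] satisfies $\beta - \frac{1+\alpha}{\alpha} \geq 2 > 1$ and $\frac32 \cos \bigPar{ \theta \tfrac{\pi}{2} } - \epsilon \beta > 0$; recall that $\cos \bigPar{ \theta \tfrac{\pi}{2} } > 0$, since $\abs{\theta}\leq 2-\alpha < 1$.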
{ "timestamp": "2017-12-15T02:07:09", "yymm": "1712", "arxiv_id": "1712.05199", "language": "en", "url": "https://arxiv.org/abs/1712.05199" }
\section{Introduction}\label{Intro} Be stars as a group are among the most rapidly rotating stars, both in terms of their spin angular velocity rate ($\Omega_{\rm eq}/\Omega_{\rm crit}$) and their equatorial velocity ($V_{\rm eq}$). The defining observed characteristics of these main-sequence (MS), intermediate-mass stars include the presence of hydrogen and metallic lines in emission in their spectra, an infrared colour excess, as well as short- and long-term photometric and spectroscopic variability. These observed characteristics, together with radio observations, polarimetric signatures and interferometric data, can be globally explained within the framework of the viscous decretion disk model \citep[for a recent review on this subject, we refer the reader to][]{Rivinius2013}. However, despite all the progress in understanding the characteristics of Be stars and their circumstellar disks, the underlying mechanism(s) triggering the formation of such a circumstellar envelope remains elusive. According to \citet{Sana2012}, over 70$\%$ of stars with masses larger than 8~M$_{\sun}$ exchange mass with a companion during some period of their evolution. Therefore, a significant number of binaries is expected among B stars, in particular among earlier spectral types. For instance, the bimodal distribution of rotational velocities for the single early B-type stars found in the 30 Doradus region by \citet{Dufton2013} could be partly due to evolutionary effects related to binarity. An incidence of 30$\%$ of binarity among Be stars was found by \citet{Oudmaijer2010}, consistent with the incidence among normal MS B stars. According to these authors, binarity may not be a key aspect involved in the appearance of the Be phenomenon; however, when the companion is close enough to the Be star, it could affect the circumstellar disk, for instance, by truncating it or triggering disk oscillations \citep[e.g.][]{Okazaki2002,Oktariani2016}. In other binary systems, episodes of mass transfer, or even mergers, could lead to the formation of a rapidly rotating star that could potentially become a Be star. Understanding how the existence of Be stars depends on metallicity, spectral type, and evolutionary stage can help to understand the mechanism(s) involved in the appearance of the Be phenomenon. Open clusters constitute ideal laboratories to study the conditions in which Be stars form and evolve. We can assume that cluster stars come from the same primordial cloud and share a common spatial location, proper motions, initial chemical composition, and age. It has long been known that the detected fraction of Be stars with colour excess increases with wavelength \citep[e.g.][]{Dougherty1994}, as expected if the excess emission comes from free-free and bound-free processes occurring in their circumstellar disks. That is why the near-infrared (near-IR) spectral regions are particularly useful in detecting and confirming Be stars. IR surveys such as {\it Spitzer} (the fourth and final mission of the NASA Great Observatories program, an infrared space telescope launched in 2003) and AKARI (an infrared astronomy satellite developed by the Japan Aerospace Exploration Agency, in cooperation with institutes in Europe and Korea, launched in 2006) have shown that the near-IR spectral region allows photometric detection and confirmation of Be stars \citep{Ita2010,Bonanos2010}.
The Wide-Field Infrared Survey Explorer \citep[WISE,][]{Wright2010}, which surveyed the sky in four bandpasses between 3.4\,$\mu$m and 22\,$\mu$m, provides a better understanding of the infrared sky. The AllWISE source catalogue \citep{Cutri2013} gives observations with good angular resolution that are suitable for studying the Be stellar population in open clusters. In the present article, we use the IR photometry provided by the AllWISE source catalogue \citep{Cutri2013} for a group of five open clusters with ages between 10 and 30 Myr, well known for hosting Be stars, which have been extensively studied in the literature. We explore how these cluster Be stars are distributed in the WISE colour-magnitude diagram (CMD) and investigate whether the location of Be stars in these plots can provide global information on the characteristics of the circumstellar disks and activity cycles of these stars. In order to interpret the observed near-IR characteristics of Be stars in open clusters, we generate a grid of synthetic WISE magnitudes and colours for star-plus-disk systems using the {\sc beray} code \citep{Sigut2011a}. This paper is organized as follows: in Section~2 we describe the observations and selection method, Section~3 presents our results, Section~4 describes our disk model predictions, and conclusions are provided in Section~5. \section{WISE photometry of young open clusters} \subsection{WISE observations} For the present work, we use data available in the AllWISE source catalogue \citep{Cutri2013}. This program extended the work of the successful WISE survey \citep{Wright2010} by combining data from different phases of the mission. AllWISE provides astrometry and photometry in four bandpasses, at 3.4\,$\mu$m (W1), 4.6\,$\mu$m (W2), 12\,$\mu$m (W3) and 22\,$\mu$m (W4), for nearly 750 million objects, with a better sensitivity than the WISE All-Sky Release Catalogue. Faint-source flux biases were corrected, and a more robust estimation of the background level was obtained. The angular resolutions in the W1, W2, W3 and W4 bands are 6.1", 6.4", 6.5", and 12.0", respectively \citep{Wright2010}. As described in the AllWISE Data Processing documents, the AllWISE Source Catalogue is intended to be a highly reliable and complete set of single, unique detections of compact objects on the sky, so targets in severely crowded regions are not included in the AllWISE Source Catalogue. That is why in the present study, even though we might miss some targets, we are not affected by severely crowded regions. For the W1 and W2 filters the saturation limits are W1$=$8 mag and W2$=$7 mag, and the limiting magnitudes, beyond which the background level becomes an issue in crowded cluster regions, occur at W1$=$14 mag and W2$=$13.5 mag \citep{Cutri2013}. Therefore, it is important for our work to select clusters in which MS B stars are fainter than the saturation limit and brighter than the limiting magnitude. This way, we ensure that most of the cluster members have small errors in the W1 and W2 bands. It is worth noting that some targets located in rather crowded cluster regions, or with a nearby bright star, could suffer from a poor background determination, which may lead to an underestimation of the brightness and larger error bars. Most stars with brightnesses between the saturation limit and the background level in the W1 and W2 bands are fainter than the background sky level in the W4 band.
That is why we focus only on the W1 and W2 data and, where available, on the W3 data for those objects with good-quality observations. \subsection{The selected sample of open clusters} As we are interested in studying the general behaviour of Be stars, we selected five young Galactic open clusters broadly studied in the literature: NGC 663, NGC 3766, NGC 4755, and the double cluster NGC 869--NGC 884. Not only are they known for being particularly rich in Be stars, but the B-type stars within these clusters also have brightnesses below the saturation limit and above the limiting magnitude, as discussed in the previous section. Even though various authors give different values for the cluster parameters (e.g. \citet{Phelps1994, Pigulski2001, Fabregat2005} for NGC 663, \citet{Slesnick2002, Keller2001, Marco2001, Maciejewski2007} for NGC 869 and NGC 884, \citet{Aidelman2012, Piatti1998, McSwain2005b} for NGC 3766 or \citet{Aidelman2012, Balona1994, Sanner2001} for NGC 4755), we chose to use those reported by \citet{Kharchenko2013}, who provide a homogeneous database. For all the clusters under study the parameters given by these authors agree with others in the literature. The names, ages, distance moduli, reddenings, and radii assumed for the selected clusters are listed in Table~\ref{TablaCumulos}. The estimates of the errors given by \citet{Kharchenko2013} for E(B-V), age, distance and radius are 7\%, 39\%, 11\% and 25\%, respectively. The error estimate in distance of 11\% corresponds to an error of the absolute distance modulus of 0.275 mag. To select the probable early-type cluster MS members, we used the cluster radius given by \citet{Kharchenko2013} and the 2MASS photometry provided together with the WISE photometry \citep{Cutri2013}. We converted the observed J magnitude and (J-H) colour to absolute values using the tabulated distances and E(B-V) excesses of Table \ref{TablaCumulos} and the empirical relations for the extinction A$_{\rm J}$ and excess E(J-H) given by \citet{Yuan2013}. The extinction coefficients given by these authors agree with the average values obtained by \citet{Davenport2014} as determined from 5$\times$10$^5$ stars. \subsection{Synthetic Populations with SYCLIST} Even though a detailed cluster parameter determination is beyond the scope of the present article, we generated synthetic stellar populations with SYCLIST, the Geneva population synthesis code \citep{Georgy2014}, with the aim of determining the regions in the CMD where we expect to find MS and red supergiant stars at different cluster ages. We seek to check whether the parameters chosen from the literature are adequate to describe the observations. We decided to build stellar populations (as described in the next paragraph), instead of using typical isochrones, in order to account for the effects of stellar rotation on the evolution, as well as for some observational effects. Synthetic populations of $50000$ stars with masses between 1.7\,M$_{\sun}$ and 15\,M$_{\sun}$ at the ZAMS were created following a Salpeter initial mass function (IMF) and the initial rotational velocity distribution given by \citet{Huang2010a}. The inclination angles (the angle between the line of sight and the stellar rotation axis) are assumed to follow a random distribution. For these synthetic clusters, a fraction of $30\%$ was adopted for unresolved binaries, which produces a broadening of the sequence in the low-mass range, where unresolved pairs are mainly equal-mass binaries (a minimal sketch of this sampling is given below).
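For illustration only (this is not the SYCLIST code itself), a minimal Python sketch of such a sampling, under the stated assumptions of a Salpeter IMF between 1.7 and 15\,M$_{\sun}$, randomly oriented rotation axes (i.e. uniform in $\cos i$), and a $30\%$ fraction of unresolved binaries treated here as equal-mass pairs, could read:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
M_MIN, M_MAX, ALPHA = 1.7, 15.0, 2.35   # ZAMS mass range; Salpeter slope

def sample_salpeter(n):
    # Inverse-transform sampling of dN/dM ~ M^(-ALPHA) on [M_MIN, M_MAX]
    a, b = M_MIN**(1 - ALPHA), M_MAX**(1 - ALPHA)
    return (a + rng.random(n) * (b - a))**(1.0 / (1 - ALPHA))

masses = sample_salpeter(N)

# Random orientation of the rotation axes: cos(i) uniform in [0, 1],
# equivalent to p(i) proportional to sin(i)
incl_deg = np.degrees(np.arccos(rng.random(N)))

# 30% unresolved binaries, simplified to equal-mass pairs: the pair is
# twice as bright, i.e. 2.5*log10(2) ~ 0.75 mag brighter than a single star
is_binary = rng.random(N) < 0.30
delta_mag = np.where(is_binary, -2.5 * np.log10(2.0), 0.0)
\end{verbatim}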
The adopted binary fraction follows \citet{Oudmaijer2010}, who found the incidence of binaries among Be stars to be $30\%$. We used the colour-effective temperature calibration from \citet{Worthey2011}. Because we do not intend to do detailed model fitting to the observations, we have not introduced artificial errors in magnitude and colour for the stars in our synthetic clusters. Figures~\ref{NGC663_2017}, \ref{NGC869_2017}, \ref{NGC884_2017}, \ref{NGC3766_2017} and \ref{NGC4755_2017} (left panels) show the synthetic populations (red dots) corresponding to the ages given by \citet{Kharchenko2013}, together with the cluster observations, in the 2MASS M$_{\rm J}$ versus M$_{\rm J}$-M$_{\rm H}$ CMD. All observed objects within the given cluster radius are indicated as gray crosses. For NGC~3766, blue circles correspond to a synthetic population with a younger age than the one provided by \citet{Kharchenko2013}, as has been proposed by different authors for this cluster \citep{Moitinho1997,Tadross2001,Aidelman2012}, and as also appears in WEBDA\footnote{http://www.univie.ac.at/webda/webda.html}. The younger age is in better agreement with the existence of red supergiant (RSG) stars and the large number of Be stars found in this cluster \citep{Granada2013a}. The synthetic populations indicate that objects with J$\le$1.5 and intrinsic colour J-H$<$0.15 correspond to MS stars of spectral type earlier than A0, and to blue supergiant (BSG) stars. Then, from all the stars within the given cluster radius, we selected objects within these magnitude and colour limits. By doing this, we removed foreground and background non-cluster members, pre-MS stars \citep[see e.g.][]{Bonatto2006}, as well as red giants and RSG stars. These objects are plotted as black points in Figures~\ref{NGC663_2017} to \ref{NGC4755_2017} (left panels). For all the clusters, the region in the CMD occupied by MS stars, indicated with black points, is correctly traced by the models with the ages given in Table \ref{TablaCumulos}, including the position of the cluster turn-off and the location of RSG stars. \subsection{The selection of WISE B-type stars} As mentioned in the previous subsection, the left panels of Figures~\ref{NGC663_2017} to \ref{NGC4755_2017} show the 2MASS CMD of absolute magnitude M$_{\rm J}$ versus the intrinsic colour J-H for the synthetic clusters and the observations. Our targets, stars that are likely OB MS cluster members selected using the procedure described above, are shown as black and green symbols. For the stars in each cluster, we used the empirical relations given by \citet{Yuan2013} for the extinction $A(W1)=0.19\,E(B-V)$ and colour excess $E(W1-W2)=0.036\,E(B-V)$, as well as the colour excesses and distance modulus ($\mu_0$) given by \citet{Kharchenko2013}, to convert the observed W1 magnitude and W1-W2 colour to absolute magnitudes and intrinsic colours. An error of 0.1 mag in E(B-V), larger than the estimates given for the clusters under study, leads to a difference smaller than 0.004\,mag in the determination of W1-W2, while the typical observational error for this colour is larger than 0.02 mag. The effect of such an error in colour excess on the WISE magnitudes is also small, particularly in comparison with the 0.275\,mag error in the determination of the cluster distances. Errors in the determination of $\mu_0$ neither produce significant changes in the spectral type nor affect the main results of this work regarding the colour excesses of Be stars.
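As a concrete illustration of this conversion, a minimal sketch using the \citet{Yuan2013} relations and the Table~\ref{TablaCumulos} parameters quoted above could read as follows (the input magnitudes of the example star are hypothetical):
\begin{verbatim}
def to_absolute_wise(w1, w1w2, mu0, ebv):
    # Observed W1 and W1-W2 -> absolute magnitude and intrinsic colour,
    # with A(W1) = 0.19 E(B-V) and E(W1-W2) = 0.036 E(B-V)
    M_w1 = w1 - mu0 - 0.19 * ebv
    w1w2_0 = w1w2 - 0.036 * ebv
    return M_w1, w1w2_0

# Hypothetical star towards NGC 663 (Table 1: mu0 = 11.61, E(B-V) = 0.700):
# W1 = 9.30, W1-W2 = 0.12  ->  M_W1 = -2.44, (W1-W2)_0 = 0.095
M_w1, w1w2_0 = to_absolute_wise(9.30, 0.12, 11.61, 0.700)
\end{verbatim}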
The WISE CMDs for the clusters are presented in the right panels of Figures~\ref{NGC663_2017} to \ref{NGC4755_2017}. The colours of the points in these figures indicate different ranges of absolute J magnitude. From our models, we find that MS stars with effective temperatures between 10000\,K and 30000\,K, corresponding to the B-type range, have absolute J magnitudes between -4 and 1; we subdivide this range in J magnitude as -4$\leq$J$<$-3 (green points), -3$\leq$J$<$-2 (blue points), -2$\leq$J$<$-1 (magenta points), -1$\leq$J$<$0 (cyan points), and 0$\leq$J$<$1 (yellow points). Red and black points with J$<$-4 correspond to O-type stars and supergiants. Objects with J$>$1 are indicated with black full circles. Open squares indicate the known Be stars. The Be nature of these objects has been previously determined either from spectroscopic observations \citep[e.g.][]{McSwain2005b,McSwain2009a,Marsh2012,Huang2010a,Mathew2011} or from H$\alpha$ narrow-band photometry \citep[e.g.][]{Pigulski2001}.
\begin{figure*} \figurenum{1} \gridline{\fig{./NGC663_Nov2017.eps}{0.85\textwidth}{}} \caption{Colour magnitude diagrams of the cluster NGC 663. Left: M$_{\rm J}$ versus M$_{\rm J}$-M$_{\rm H}$ plot. The gray symbols indicate all the objects within the cluster field. Black symbols correspond to cluster early-type MS stars and green small squares indicate known Be stars. Red points correspond to a synthetic stellar population computed with SYCLIST. Right: WISE M$_{\rm W1}$ versus M$_{\rm W1}$-M$_{\rm W2}$ for cluster members. Different colours correspond to different M$_{\rm J}$ magnitudes, a proxy for spectral type. Black open squares indicate known Be stars. \label{NGC663_2017}} \end{figure*}
\begin{figure*} \figurenum{2} \gridline{\fig{./NGC869_Nov2017.eps}{0.85\textwidth}{}} \caption{The same as Fig. \ref{NGC663_2017} for the cluster NGC 869. \label{NGC869_2017}} \end{figure*}
\begin{figure*} \figurenum{3} \gridline{\fig{./NGC884_Nov2017.eps}{0.85\textwidth}{}} \caption{The same as Fig. \ref{NGC663_2017} for the cluster NGC 884. \label{NGC884_2017}} \end{figure*}
\begin{figure*} \figurenum{4} \gridline{\fig{./NGC3766_Nov2017.eps}{0.85\textwidth}{}} \caption{The same as Fig. \ref{NGC663_2017} for the cluster NGC 3766. \label{NGC3766_2017}} \end{figure*}
\begin{figure*} \figurenum{5} \gridline{\fig{./NGC4755_Nov2017.eps}{0.85\textwidth}{}} \caption{The same as Fig. \ref{NGC663_2017} for the cluster NGC 4755. \label{NGC4755_2017}} \end{figure*}
\begin{table}[h] \caption{Selected Open Clusters} \label{TablaCumulos} \begin{center} \begin{tabular}{c l c c l } \hline\hline Name & Age$^*$ & $\mu_{0}$$^*$ & E(B-V)$^*$ & R$_{\rm cl}$$^*$ \\ &[log(yr)]& [mag] & [mag] & [arcmin] \\ \hline NGC 663 & 7.50 & 11.61 & 0.700 & 14.4 \\ NGC 869 & 7.28 & 11.81 & 0.520 & 14.4$^{\dagger\dagger}$ \\ NGC 884 & 7.20 & 11.85 & 0.560 & 10.5$^{\dagger\dagger}$ \\ NGC 3766& 7.40$^{\dagger}$ & 11.13 & 0.208 & 13.8 \\ NGC 4755& 7.30 & 11.47 & 0.396 & 13.5 \\ \hline\hline \end{tabular}\\ \end{center} {\tiny $*$ \citet{Kharchenko2013} estimate errors for E(B-V), age, distance and radius of 7\%, 39\%, 11\% and 25\%, respectively.
The error estimate in distance of 11\% corresponds to an error of the absolute distance modulus of 0.275 mag.} {\tiny $\dagger$ This cluster age, which adequately describes the cluster turnoff and the presence of red supergiant stars using SYCLIST, was not taken from \citet{Kharchenko2013}.} {\tiny $\dagger\dagger$ The cluster radii considered for these two clusters correspond to the angular radius of the central part (R1) given by \citet{Kharchenko2013}, instead of the angular radius of the cluster (R2). This is to avoid cluster overlap.} \end{table} The WISE CMD using all OB stars from the five open clusters (with absolute magnitude J$<$1 and intrinsic colour J-H$<$0.15) is presented in Figure \ref{5Clusters_a}. Because of the low quality of the data for most objects with W1-W2$<$-0.25, we removed stars bluer than this colour limit; this is why the B star sample is incomplete, particularly towards later spectral types. Stars with W1-W2$\geq$0.5 are likely not MS B stars but Class II young stellar objects \citep{Koenig2012}, so we removed them from the sample of B stars as well. Panel a) of Figure \ref{5Clusters_a} shows all the stars as gray points with error bars, and panel b) highlights stars with a previous Be classification as coloured squares. Full squares indicate stars of B spectral type (yellow, cyan, magenta, blue and green points) and open squares correspond to earlier-type emission line stars (red and black). This sample of B and Be stars will be analyzed in the following section. Table \ref{BeStars} lists the 95 stars studied in this work with a previous Be classification. Their coordinates are tabulated in columns 1 and 2; their WISE and 2MASS intrinsic magnitudes, colours and errors are listed in columns 3 to 14; and a number indicating the WISE variability of the star is given in column 15 (0 where no variability flag could be assigned, 1 for stars that are stable, 2 for those without a clear variability signature and 3 for stars that are variable). The spectral type (0 corresponds to B0, 1 to B1, and so on) and luminosity class are indicated in columns 16 and 17, respectively; column 18 gives the name of the object as available in SIMBAD, together with references relevant to the Be classification of the object. In the last column, we give the Be class assigned to the object in this work: early, mid or late Be star. \begin{figure*} \figurenum{6} \gridline{\fig{./CMD_WISE_ALL_errorbars_01Oct2017.eps}{0.465\textwidth}{(a)} \fig{./CMD_WISE_ALL_OB_01Oct2017.eps}{0.465\textwidth}{(b)} } \caption{Colour magnitude diagram of the 5 clusters. a) All datapoints are shown with error bars and the vertical line indicates the colour limit for {\it naked} stars; b) All known Be stars are indicated with coloured squares, with the different colours representing different absolute J magnitudes, an indication of spectral type.
\label{5Clusters_a}} \end{figure*} \onecolumngrid \newpage \begin{deluxetable*}{ c c c c c c c c c c c c c c c c c p{3.5cm} c } \tabletypesize{\tiny} \tablewidth{0pt} \tablecolumns{19} \tablecaption{95 Be Stars analyzed in this work.\label{BeStars}} \tablehead{ \colhead{RA} & \colhead{declination} & \colhead{M$_{\rm W1}$} & \colhead{eM$_{\rm W1}$} & \colhead{M$_{\rm W12}$}&\colhead{eM$_{\rm W12}$}&\colhead{M$_{\rm W23}$} &\colhead{eM$_{\rm W23}$}&\colhead{M$_{\rm J}$}&\colhead{eM$_{\rm J}$} & \colhead{M$_{\rm H}$}&\colhead{eM$_{\rm H}$} & \colhead{M$_{\rm K}$}&\colhead{eM$_{\rm K}$} & \colhead{Var} &\colhead{ST} &\colhead{LC} & \colhead{SIMBAD name $\&$ remarks} &\colhead{Be class} } \startdata ($^o$)&($^o$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&&&&&\\\hline 26.525441 & 61.227595 & -2.55 & 0.10 & 0.32 & 0.11 & 1.21 & 0.05 & -2.47 & 0.03 & -2.52 & 0.04 & -2.78 & 0.02 & 0 & 0.5 & 5 & EM* VES 616, $^{(a)}$ & early\\ 26.558441 & 61.228838 & -0.77 & 0.02 & 0.13 & 0.03 & 0.50 & - & -0.48 & 0.02 & -0.48 & 0.03 & -0.66 & 0.02 & 2 & 5 & 5 & V* V979 Cas, $^{(a)}$ & late\\ 26.508573 & 61.250567 & -0.05 & 0.03 & 0.23 & 0.04 & 1.52 & 0.10 & 0.28 & 0.04 & 0.19 & 0.03 & 0.05 & 0.03 & 0 & 5 & 5 & NGC 663 84, $^{(a)}$ * & late\\ 26.584334 & 61.239349 & -1.56 & 0.02 & 0.17 & 0.03 & 1.07 & 0.06 & -1.71 & 0.02 & -1.74 & 0.03 & -1.90 & 0.02 & 0 & 2 & 5 & EM* GGA 98, $^{(a)}$ & mid\\ 26.497126 & 61.212676 & -3.28 & 0.03 & 0.15 & 0.04 & 0.64 & 0.04 & -2.69 & 0.03 & -2.65 & 0.03 & -2.66 & 0.02 & 0 & 0.5 & 5 & BD+60 332A, $^{(a)}$ & early\\ 26.483769 & 61.212565 & -2.17 & 0.02 & 0.22 & 0.03 & 0.86 & 0.04 & -1.85 & 0.03 & -1.94 & 0.03 & -2.15 & 0.02 & 3 & 2 & 5 & EM* GGA 97, $^{(a)}$ & mid\\ 26.612098 & 61.236493 & -1.38 & 0.02 & -0.03 & 0.03 & 0.48 & 0.07 & -1.23 & 0.03 & -1.21 & 0.04 & -1.20 & 0.02 & 1 & - & - & EM* GGA 99, $^{(b)}$ * & mid $\dagger$\\ 26.619315 & 61.230679 & -0.70 & 0.02 & -0.09 & 0.03 & 0.25 & 0.15 & -0.78 & 0.02 & -0.72 & 0.03 & -0.81 & 0.02 & 1 & 3 & 5 & NGC 663 14, $^{(a)}$ & late\\ 26.627616 & 61.241449 & -2.08 & 0.02 & 0.22 & 0.03 & 0.78 & 0.04 & -1.29 & 0.02 & -1.27 & 0.03 & -1.36 & 0.02 & 1 & 1 & 5 & V* V984 Cas, $^{(a)}$ & mid\\ 26.615266 & 61.207074 & -2.84 & 0.02 & 0.10 & 0.03 & 0.75 & 0.04 & -2.86 & 0.02 & -2.88 & 0.03 & -2.92 & 0.02 & 1 & 5 & - & V* V983 Cas, $^{(c)}$ * & early\\ 26.648359 & 61.227523 & -1.98 & 0.02 & 0.26 & 0.03 & 1.01 & 0.04 & -1.35 & 0.03 & -1.49 & 0.03 & -1.70 & 0.02 & 0 & 1 & 5 & EM* GGA 101, $^{(a)}$ & mid\\ 26.449515 & 61.273316 & -0.10 & 0.02 & -0.05 & 0.03 & 0.60 & 0.14 & -0.05 & 0.02 & -0.13 & 0.03 & -0.08 & 0.02 & 1 & - & - & 2MASSJ01454789+6116239, $^{(b)}$ & late $\dagger$\\ 26.415063 & 61.216452 & -1.50 & 0.02 & 0.24 & 0.03 & 1.09 & 0.04 & -1.19 & 0.02 & -1.20 & 0.03 & -1.33 & 0.02 & 0 & 2 & 5 & EM* GGA 94, $^{(a)}$ & mid\\ 26.601678 & 61.177033 & 0.17 & 0.03 & -0.07 & 0.03 & 1.42 & 0.14 & 0.11 & 0.02 & 0.13 & 0.04 & 0.10 & 0.02 & 1 & 6 & 5 & NGC 663 61, $^{(a)}$ & late\\ 26.672702 & 61.220864 & -2.13 & 0.02 & -0.07 & 0.03 & 0.10 & 0.07 & -2.22 & 0.03 & -2.15 & 0.03 & -2.12 & 0.02 & 0 & 4 & - & EM* GGA 622, $^{(b)}$ & early\\ 26.612723 & 61.167195 & -3.03 & 0.02 & -0.03 & 0.03 & -0.04 & 0.07 & -3.13 & 0.03 & -3.09 & 0.03 & -3.00 & 0.02 & 1 & 5 & 3 & BD+60 340, $^{(c)}$ & early\\ 26.689256 & 61.199832 & -0.97 & 0.02 & -0.07 & 0.03 & -1.15 & - & -1.05 & 0.03 & -1.01 & 0.03 & -0.93 & 0.02 & 1 & - & - & EM* VES 623, $^{(d)}$ & mid $\dagger$\\ 26.443298 & 61.155813 & -2.14 & 0.02 & 0.24 & 0.03 & 0.87 & 0.03 & -1.79 & 0.02 & 
-1.81 & 0.03 & -1.93 & 0.02 & 0 & 1 & 5 & EM* GGA 95, $^{(a)}$ & mid\\ 26.641521 & 61.150654 & -3.17 & 0.02 & -0.03 & 0.03 & 0.32 & 0.05 & -3.19 & 0.04 & -3.11 & 0.03 & -3.09 & 0.02 & 1 & 2 & 5 & BD+60 334, $^{(b)}$ & early\\ 26.748328 & 61.208206 & -0.43 & 0.02 & 0.18 & 0.03 & 1.41 & 0.09 & -0.08 & 0.02 & -0.13 & 0.03 & -0.23 & 0.02 & 1 & 5 & 5 & EM* VES 624, $^{(a)}$ & late\\ 26.611840 & 61.128259 & -3.51 & 0.02 & 0.26 & 0.03 & 0.85 & 0.03 & -2.89 & 0.02 & -2.92 & 0.04 & -3.15 & 0.02 & 1 & 0.5 & 5 & BD+60 341, $^{(a)}$ & early\\ 26.407547 & 61.133112 & -1.98 & 0.02 & 0.22 & 0.03 & 0.75 & 0.05 & -1.01 & 0.03 & -1.00 & 0.04 & -1.18 & 0.02 & 1 & 2 & 5 & EM* GGA 93, $^{(a)}$ * & mid\\ 26.765592 & 61.292237 & -1.53 & 0.02 & 0.29 & 0.03 & 1.13 & 0.04 & -1.14 & 0.03 & -1.19 & 0.03 & -1.29 & 0.02 & 3 & 2 & 5 & V* V986 Cas, $^{(a)}$ & mid\\ 26.634543 & 61.119229 & -1.74 & 0.02 & -0.05 & 0.03 & 0.29 & 0.07 & -1.81 & 0.03 & -1.78 & 0.03 & -1.73 & 0.02 & 1 & 2 & 5 & Cl* NGC 663 L604$^{(b)}$ & mid\\ 26.645246 & 61.107708 & 0.03 & 0.02 & -0.06 & 0.03 & 1.99 & 0.10 & 0.01 & 0.03 & 0.03 & 0.04 & 0.07 & 0.02 & 1 & 5 & 5 & NGC 663 180, $^{(a)}$ & late\\ 26.822992 & 61.221549 & 0.75 & 0.03 & 0.01 & 0.04 & 0.27 & - & 0.85 & 0.03 & 0.84 & 0.03 & 0.80 & 0.03 & 1 & 6 & 5 & NGC 663 151, $^{(a)}$ & late\\ 26.325104 & 61.115690 & -3.97 & 0.03 & 0.29 & 0.03 & 0.79 & 0.02 & -3.51 & 0.02 & -3.53 & 0.03 & -3.63 & 0.02 & 0 & 1 & 5 & BD+60 325, $^{(a)}$ & early\\ 26.750870 & 61.356541 & -2.43 & 0.02 & 0.13 & 0.03 & 0.63 & 0.03 & -2.22 & 0.03 & -2.23 & 0.04 & -2.34 & 0.02 & 1 & 1 & 3 & EM* GGA 104, $^{(b)}$ & early\\ 26.861504 & 61.145601 & -2.88 & 0.02 & 0.36 & 0.03 & 0.99 & 0.04 & -1.69 & 0.03 & -1.66 & 0.03 & -1.59 & 0.02 & 2 & 0.5 & 5 & EM* GGA 108, $^{(a)}$ & mid\\ 26.914048 & 61.305712 & -2.05 & 0.02 & 0.30 & 0.03 & 0.96 & 0.04 & -1.38 & 0.03 & -1.49 & 0.03 & -1.68 & 0.02 & 1 & 1 & 5 & EM* GGA 109, $^{(a)}$ & mid\\ 26.739843 & 61.027863 & -1.68 & 0.02 & 0.01 & 0.03 & 0.13 & 0.06 & -1.76 & 0.02 & -1.76 & 0.03 & -1.89 & 0.02 & 3 & 3 & - & EM* GGA 103 $^{(b)}$ & mid\\ 34.737496 & 57.128716 & -1.20 & 0.02 & -0.12 & 0.03 & -0.20 & 0.14 & -1.08 & 0.05 & -1.02 & 0.05 & -1.06 & 0.04 & 1 & 3 & - & [KPK99]J021856.82+570742.5, $^{(e)}$ * & mid\\ 34.740807 & 57.138137 & -1.88 & 0.02 & -0.09 & 0.03 & -0.18 & 0.09 & -1.75 & 0.03 & -1.65 & 0.03 & -1.60 & 0.02 & 1 & 2 & 4 & NSV 776 $^{(f)}$& mid\\ 34.750714 & 57.145720 & -3.00 & 0.03 & 0.03 & 0.03 & 0.25 & 0.04 & -2.94 & 0.02 & -2.81 & 0.04 & -2.79 & 0.02 & 1 & 1 & 5 & V* V614 Per, $^{(e)}$ & early\\ 34.724407 & 57.139512 & -3.19 & 0.02 & -0.06 & 0.03 & 0.01 & 0.04 & -3.30 & 0.03 & -3.20 & 0.03 & -3.16 & 0.02 & 1 & 0.5 & 5 & BD+56 515, $^{(e)}$ & early\\ 34.806949 & 57.128771 & -3.63 & 0.02 & 0.25 & 0.03 & 0.71 & 0.03 & -3.12 & 0.03 & -3.14 & 0.03 & -3.29 & 0.02 & 1 & 1 & 5 & BD+56 529, $^{(a)}$ & early\\ 34.864416 & 57.138239 & -3.45 & 0.02 & -0.06 & 0.03 & 0.06 & 0.03 & -3.60 & 0.03 & -3.53 & 0.03 & -3.43 & 0.02 & 1 & 0.5 & 5 & HD 14162, $^{(e)}$ & early\\ 34.870057 & 57.117908 & -3.09 & 0.02 & 0.19 & 0.03 & 0.44 & 0.03 & -2.54 & 0.02 & -2.60 & 0.03 & -2.76 & 0.02 & 2 & 0.5 & 5 & EM* GGA 156, $^{(a)}$ & early\\ 34.699721 & 57.067276 & -4.01 & 0.03 & 0.10 & 0.03 & 0.57 & 0.03 & -3.89 & 0.04 & -3.86 & 0.05 & -3.87 & 0.02 & 1 & 0.5 & 5 & BD+56 511, $^{(a)}$ & early\\ 34.815054 & 57.188465 & 0.09 & 0.02 & -0.02 & 0.03 & 0.01 & - & 0.23 & 0.03 & 0.21 & 0.03 & 0.19 & 0.02 & 1 & 10 & - & 2MASSJ02191561+5711185, $^{(e)}$ & late\\ 34.624291 & 57.150883 & -3.10 & 0.02 & -0.07 & 0.03 & 
0.05 & 0.04 & -3.29 & 0.02 & -3.22 & 0.03 & -3.15 & 0.02 & 1 & 0.5 & 1 & V* V611 Per, Teff=25400K, logg=3.38, $^{(g)}$ & early\\ 34.860474 & 57.078364 & -4.69 & 0.03 & 0.31 & 0.04 & 0.99 & 0.03 & -3.78 & 0.03 & -3.83 & 0.02 & -4.10 & 0.02 & 1 & 3 & 5 & BD+56 534, Teff=26400K, logg= 3.55, $^{(a)}$ & early\\ 34.870366 & 57.190083 & -1.22 & 0.06 & -0.01 & 0.07 & 0.33 & 0.08 & -1.52 & 0.02 & -1.56 & 0.03 & -1.68 & 0.02 & 3 & 0.5 & 5 & NGC 869 1278, Teff=25100K, logg=4.24, $^{(a)}$ & mid\\ 34.636556 & 57.211031 & -3.31 & 0.06 & -0.06 & 0.07 & 0.04 & 0.03 & -3.48 & 0.03 & -3.30 & 0.03 & -3.32 & 0.02 & 1 & 1 & 5 & BD+56 502, Teff=26700K, logg=3.91, $^{(g)}$ & early\\ 34.949225 & 57.111018 & -1.77 & 0.11 & -0.12 & 0.11 & 0.65 & 0.05 & -2.00 & 0.02 & -2.03 & 0.03 & -2.10 & 0.02 & 2 & 2 & - & 2MASSJ02194783+5706395, Teff=20709K, logg=3.94, $^{(f)}$ & early\\ 34.562878 & 57.171130 & -3.06 & 0.02 & 0.10 & 0.03 & 0.60 & 0.03 & -2.89 & 0.03 & -2.83 & 0.03 & -2.89 & 0.02 & 1 & 3 & - & BD+56 489, $^{(e)}$ & early\\ 34.700842 & 57.240094 & -3.73 & 0.09 & 0.01 & 0.09 & 0.30 & 0.03 & -3.10 & 0.03 & -2.95 & 0.05 & -2.91 & 0.04 & 3 & 1 & - & NAME BD+56 509AB, $^{(e)}$, * & early\\ 34.700054 & 57.285527 & -3.43 & 0.02 & -0.04 & 0.03 & 0.00 & 0.03 & -3.59 & 0.02 & -3.55 & 0.03 & -3.43 & 0.02 & 1 & 2 & 5 & V* V665 Per, $^{(e)}$ & early\\ 35.573678 & 57.123500 & -3.69 & 0.02 & -0.01 & 0.03 & 0.55 & 0.03 & -3.77 & 0.02 & -3.67 & 0.07 & -3.69 & 0.02 & 1 & 2 & 3 & V* V622 Per, $^{(h)}$ & early\\ 35.510306 & 57.155664 & -2.49 & 0.02 & 0.21 & 0.03 & 0.67 & 0.04 & -2.16 & 0.02 & -2.20 & 0.03 & -2.35 & 0.02 & 3 & 1 & 5 & NGC 869 2242, $^{(a)}$ & early\\ 35.709491 & 57.147412 & -1.49 & 0.02 & 0.21 & 0.03 & 0.92 & 0.04 & -1.21 & 0.02 & -1.21 & 0.03 & -1.26 & 0.02 & 3 & 3 & - & 2MASSJ02225028+5708506, $^{(e)}$ & mid\\ 35.519004 & 57.177461 & -2.24 & 0.07 & -0.08 & 0.07 & 0.05 & 0.06 & -2.39 & 0.02 & -2.27 & 0.03 & -2.25 & 0.02 & 1 & 2 & 5 & EM* MWC 39, $^{(e)}$ & early\\ 35.481492 & 57.099632 & -1.77 & 0.02 & -0.06 & 0.03 & -0.17 & 0.09 & -1.94 & 0.02 & -1.86 & 0.03 & -1.77 & 0.02 & 1 & 3 & 5 & [CHI2010] h Per M2623, $^{(e)}$ & mid\\ 35.470567 & 57.166364 & -4.00 & 0.03 & 0.19 & 0.04 & 0.51 & 0.03 & -2.79 & 0.02 & -2.69 & 0.03 & -2.61 & 0.02 & 1 & 1 & 5 & BD+56 566, Teff=24800K, logg=3.79, $^{(a)}$ & early\\ 35.430794 & 57.125830 & -3.53 & 0.02 & 0.16 & 0.03 & 1.18 & 0.03 & -3.43 & 0.02 & -3.44 & 0.03 & -3.44 & 0.02 & 3 & 1 & 3 & BD+56 563, Teff=21900, logg=3.72, $^{(g)}$ & early\\ 35.767418 & 57.127469 & -2.65 & 0.02 & 0.04 & 0.03 & 0.66 & 0.03 & -2.57 & 0.02 & -2.52 & 0.03 & -2.53 & 0.02 & 3 & 2 & - & EM* GGA 163, $^{(e)}$ & early\\ 35.700408 & 57.200277 & -3.28 & 0.02 & 0.30 & 0.03 & 0.93 & 0.03 & -2.67 & 0.02 & -2.65 & 0.03 & -2.72 & 0.03 & 2 & 0.5 & 5 & EM* MWC 711, $^{(a)}$, * & early\\ 35.435269 & 57.181195 & -2.09 & 0.17 & 0.15 & 0.17 & 0.89 & 0.04 & -1.57 & 0.02 & -1.62 & 0.03 & -1.72 & 0.02 & 1 & 3 & 5 & EM* GGA 162, $^{(a)}$ & mid\\ 35.353822 & 57.197898 & -1.44 & 0.02 & 0.24 & 0.03 & 0.94 & 0.04 & -1.03 & 0.02 & -1.08 & 0.03 & -1.21 & 0.02 & 1 & 2 & - & EM* GGA 161, $^{(i)}$ & mid\\ 35.594727 & 57.284735 & -4.24 & 0.03 & 0.09 & 0.03 & 0.62 & 0.03 & -3.75 & 0.03 & -3.73 & 0.06 & -3.69 & 0.02 & 1 & 1 & 3 & BD+56 582, $^{(i)}$ & early\\ 35.887723 & 57.075727 & -3.42 & 0.02 & 0.20 & 0.03 & 0.56 & 0.03 & -2.92 & 0.02 & -3.02 & 0.03 & -3.21 & 0.02 & 3 & - & - & TYC 3694-1331-1, Teff=22535K, logg=3.94, $^{(j)}$ & early\\ 174.090981 & -61.608303 & -0.96 & 0.02 & -0.07 & 0.03 & -0.72 & 0.44 & -0.81 & 0.03 & -0.72 & 0.03 
& -0.70 & 0.03 & 1 & - & - & CPD-60 3149, Teff=16890K, logg=3.84, $^{(i)}$ & late\\ 174.091240 & -61.624731 & -1.00 & 0.02 & 0.12 & 0.03 & 0.60 & 0.07 & -0.64 & 0.03 & -0.61 & 0.03 & -0.70 & 0.02 & 1 & - & - & CPD-60 3144, Teff=17687K, logg=3.61, $^{(i)}$ & late\\ 174.058513 & -61.626579 & -1.35 & 0.02 & -0.08 & 0.03 & -0.02 & 0.08 & -1.55 & 0.02 & -1.51 & 0.02 & -1.52 & 0.02 & 1 & 2 & 5 & CPD-60 3133, Teff=17519K, logg=3.69, $^{(i)}$ & mid\\ 174.042312 & -61.627764 & -2.62 & 0.02 & 0.23 & 0.03 & 0.81 & 0.03 & -2.11 & 0.02 & -2.06 & 0.02 & -2.05 & 0.02 & 3 & 1.5 & 5 & CPD-60 3126, Teff=18564K, logg=3.53 , $^{(i)}$ & early\\ 174.049414 & -61.597301 & -2.78 & 0.02 & -0.03 & 0.03 & 0.19 & 0.05 & -2.91 & 0.02 & -2.84 & 0.03 & -2.77 & 0.02 & 1 & 2 & 5 & CPD-60 3128, Teff=18817K, logg=3.31, $^{(i)}$ & mid\\ 174.039818 & -61.593906 & -2.26 & 0.02 & 0.01 & 0.03 & 0.34 & 0.04 & -2.07 & 0.02 & -1.98 & 0.02 & -1.96 & 0.02 & 3 & - & - & CPD-60 3125, Teff=18274K, logg=3.49, $^{(i)}$ & early\\ 174.088804 & -61.587768 & -1.91 & 0.02 & -0.05 & 0.03 & -0.31 & 0.12 & -1.79 & 0.03 & -1.70 & 0.03 & -1.75 & 0.02 & 1 & 2 & 5 & CPD-60 3147, Teff=18883K, logg=3.23, $^{(i)}$ & mid\\ 174.004467 & -61.621609 & -1.77 & 0.02 & 0.15 & 0.03 & 1.07 & 0.03 & -1.28 & 0.02 & -1.19 & 0.03 & -1.15 & 0.03 & 3 & - & - & CPD-60 3108, Teff=17792K, logg=3.75, $^{(i)}$ & mid\\ 174.033412 & -61.643960 & -2.27 & 0.02 & 0.26 & 0.03 & 0.80 & 0.03 & -1.67 & 0.02 & -1.80 & 0.02 & -1.99 & 0.02 & 1 & - & - & CPD-60 3122, Teff=13254K, logg=3.29, $^{(i)}$ & mid\\ 173.981095 & -61.603857 & -2.67 & 0.02 & -0.05 & 0.03 & 0.08 & 0.06 & -2.76 & 0.02 & -2.68 & 0.03 & -2.67 & 0.02 & 1 & 2 & 4 & HD 100856, Teff=18725, logg=3.34, $^{(i)}$ & early\\ 174.131523 & -61.573839 & -3.25 & 0.02 & 0.10 & 0.03 & 0.42 & 0.03 & -3.03 & 0.02 & -3.02 & 0.04 & -3.11 & 0.02 & 3 & - & - & HD 306791,Teff=18399K, logg=3.30, $^{(i)}$ & early\\ 174.174609 & -61.631627 & -0.37 & 0.02 & -0.12 & 0.03 & -0.33 & 0.29 & -0.46 & 0.03 & -0.38 & 0.03 & -0.38 & 0.03 & 1 & - & - & CPD-60 3165, Teff=15945K, logg=3.95, $^{(i)}$ & late\\ 174.205566 & -61.605880 & -0.51 & 0.02 & -0.03 & 0.03 & 0.09 & 0.08 & -0.45 & 0.03 & -0.38 & 0.02 & -0.35 & 0.02 & 3 & - & - & CPD-60 3174, Teff=15650, logg=3.78, $^{(i)}$ & late\\ 174.051578 & -61.545558 & -2.48 & 0.02 & 0.16 & 0.03 & 0.54 & 0.03 & -1.27 & 0.03 & -1.17 & 0.07 & -1.15 & 0.05 & 1 & - & - & CD-60 3626, Teff=17834K, logg=3.82, $^{(i)}$ & mid\\ 174.022890 & -61.701685 & -2.06 & 0.02 & 0.13 & 0.03 & 0.53 & 0.03 & -1.85 & 0.02 & -1.86 & 0.02 & -2.00 & 0.02 & 1 & - & - & HD 306797, Teff=16301K, logg=3.51, $^{(i)}$ & mid\\ 173.842030 & -61.536173 & -0.86 & 0.02 & 0.12 & 0.03 & 0.90 & 0.23 & -1.03 & 0.02 & -1.03 & 0.02 & -1.08 & 0.02 & 3 & - & - & HD 306793, Teff=18995K, logg=4.02, $^{(i)}$ & mid\\ 173.813207 & -61.699876 & -1.85 & 0.02 & 0.25 & 0.03 & 0.94 & 0.04 & -1.38 & 0.02 & -1.41 & 0.03 & -1.57 & 0.02 & 3 & - & - & HD 306657, Teff=19580K, logg=4.00, $^{(i)}$ & mid\\ 174.453112 & -61.751439 & -1.60 & 0.03 & -0.03 & 0.03 & -0.50 & 0.11 & -1.15 & 0.03 & -1.23 & 0.03 & -1.39 & 0.02 & 1 & - & - & CPD-60 3087, $^{(k)}$ & mid\\ 193.432798 & -60.374656 & -2.45 & 0.02 & -0.03 & 0.03 & 1.77 & 0.03 & -2.29 & 0.03 & -2.18 & 0.04 & -2.18 & 0.02 & 2 & 1.5 & 5 & V* CT Cru, $^{(i)}$ & early\\ 193.446492 & -60.372198 & -3.71 & 0.02 & 0.25 & 0.03 & 0.97 & 0.03 & -2.99 & 0.04 & -3.04 & 0.06 & -3.22 & 0.03 & 1 & 1.5 & 5 & 2MASS J12534725-6022200, Teff=25040K, logg=3.91, $^{(i)}$ & early\\ 193.465538 & -60.366237 & -2.01 & 0.03 & -0.18 & 0.03 & 2.34 & 0.04 
& -2.04 & 0.02 & -1.94 & 0.03 & -1.83 & 0.02 & 2 & 1 & 5 & V* CX Cru, $^{(l)}$ & early\\ 193.356909 & -60.366710 & -1.77 & 0.02 & -0.12 & 0.03 & -0.45 & 0.08 & -1.97 & 0.02 & -1.84 & 0.02 & -1.82 & 0.02 & 1 & 1 & 5 & CPD-59 4532, $^{(l)}$ & mid\\ 193.466655 & -60.371088 & -2.58 & 0.03 & -0.12 & 0.04 & 0.40 & 0.05 & -2.66 & 0.02 & -2.57 & 0.02 & -2.53 & 0.02 & 1 & 2 & - & V* EI Cru, $^{(l)}$ & early\\ 193.470570 & -60.358505 & -2.24 & 0.02 & -0.15 & 0.03 & 0.60 & 0.04 & -1.95 & 0.03 & -1.93 & 0.04 & -1.94 & 0.02 & 1 & 2 & 5 & V* CZ Cru, $^{(l)}$ & mid\\ 193.467558 & -60.374383 & -2.49 & 0.02 & -0.07 & 0.03 & 0.47 & 0.03 & -2.45 & 0.02 & -2.31 & 0.02 & -2.30 & 0.02 & 2 & 1.5 & 5 & V* CY Cru, $^{(l)}$ & early\\ 193.412417 & -60.395461 & -3.49 & 0.02 & 0.27 & 0.03 & 0.82 & 0.03 & -2.82 & 0.02 & -2.93 & 0.06 & -3.16 & 0.02 & 1 & 2 & 4 & CPD-59 4546, $^{(m)}$ & early\\ 193.397882 & -60.396305 & -2.19 & 0.02 & -0.08 & 0.03 & -0.01 & 0.04 & -2.38 & 0.02 & -2.30 & 0.02 & -2.20 & 0.02 & 1 & 1.5 & 5 & V* BT Cru, $^{(l)}$ & early\\ 193.464869 & -60.388024 & -3.38 & 0.02 & 0.30 & 0.03 & 0.73 & 0.03 & -2.51 & 0.02 & -2.62 & 0.02 & -2.83 & 0.02 & 1 & 2 & 4 & CPD-59 4559, $^{(l)}$ & early\\ 193.469057 & -60.399304 & -0.63 & 0.03 & -0.04 & 0.04 & -0.59 & 0.25 & -0.15 & 0.02 & -0.15 & 0.03 & -0.18 & 0.02 & 1 & - & - & CPD-59 4561, $^{(i)}$ & late\\ 193.445743 & -60.309989 & -2.23 & 0.02 & -0.07 & 0.03 & 0.15 & 0.04 & -2.31 & 0.02 & -2.21 & 0.02 & -2.18 & 0.02 & 1 & 1.5 & 5 & V* CV Cru, $^{(l)}$ & early\\ 193.489734 & -60.416141 & -2.70 & 0.02 & -0.06 & 0.03 & -0.01 & 0.05 & -2.91 & 0.02 & -2.81 & 0.05 & -2.73 & 0.02 & 1 & 1 & 5 & V* BW Cru, $^{(l)}$ & early\\ 193.220361 & -60.296769 & 0.63 & 0.03 & -0.08 & 0.04 & -0.18 & - & 0.65 & 0.02 & 0.71 & 0.03 & 0.71 & 0.03 & 1 & - & - & Cl* NGC 4755 ESL 101, $^{(n)}$ & late\\ 193.159809 & -60.338676 & -2.32 & 0.02 & 0.27 & 0.03 & 0.87 & 0.03 & -1.33 & 0.02 & -1.24 & 0.02 & -1.20 & 0.02 & 1 & - & - & HD 312076, $^{(o)}$ & mid $\dagger$\\ 193.150957 & -60.307060 & -3.00 & 0.02 & 0.30 & 0.03 & 0.90 & 0.03 & -2.46 & 0.02 & -2.55 & 0.02 & -2.75 & 0.02 & 1 & 0 & - & HD 312075, $^{(o)}$ & early\\ \enddata \tablecomments{a. \citet{Mathew2011}, b.\citet{Pigulski2001}, c. \citet{Kohoutek1997}, d. \citet{Coyne1978}, e. \citet{Abad1995}, f. \citet{Slesnick2002}, g. \citet{Marsh2012}, h. \citet{Hog2000}, i. \citet{McSwain2009a}, j. \citet{Huang2010a}, k. \citet{Slettebak1985}, l. \citet{Sanner2001}, m. \citet{Jaschek1982}, n. \citet{McSwain2005b}, o. \citet{Wray1966}. The symbol * in column 18 indicates that there is a source within 8asec of the target, for most of them significantly weaker. The symbol $\dagger$ in column 19 indicates that this is the first time an estimate of the spectral type is given for the star.} \end{deluxetable*} \section{Results} \subsection{WISE colour-magnitude diagram} Similar to \citet{Bonanos2010}, we define {\it photometric Be stars} as B type stars having colour excesses within a certain range. In the case of WISE colours, \citet{Nikutta2014} defined {\it naked} stars as stellar objects without a protostellar dusty disk with W1-W2$<$0.8. This limit prevents contamination from faint IR sources. For the whole group of {\it naked} stars not severely affected by extinction and dominated by MS or {\it normal} stars, objects with no evidence of a circumstellar disk, these authors find a mean value $\mu_{W_{12}}$=-0.04 for the colour W$_{12}$=W1-W2, with a standard deviation $\sigma_{W_{12}}$=0.03. 
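As a minimal illustration of such colour statistics and of the W1-W2 colour cut adopted in the next paragraph (a sketch only; the arrays stand in for measured colours), one could write:
\begin{verbatim}
import numpy as np

NAKED_CUT = 0.05   # W1-W2 limit below which stars behave as naked B stars

def colour_stats(w12):
    # Mean and standard deviation of the W1-W2 colour of a sample
    return np.mean(w12), np.std(w12)

def be_candidates(w12, known_be):
    # Photometric Be candidates: colour excess above the cut but no
    # previous Be classification
    return (np.asarray(w12) >= NAKED_CUT) & (~np.asarray(known_be))
\end{verbatim}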
For our sample of B stars with absolute J magnitude values between -4 and 1, the mean colour is $\mu_{W_{12}}$=-0.066 with a standard deviation of $\sigma_{W_{12}}$=0.002, whereas for the subgroup of early and mid B stars (-4$<$J$<$-1), the values are $\mu_{W_{12}}$=-0.023 and $\sigma_{W_{12}}$=0.006. We chose to use the conservative criterion given by \citet{Nikutta2014}, according to which most {\it naked} or {\it normal} stars have W1-W2$<$0.05. This limit is shown as a vertical black line in Figure \ref{5Clusters_a} (a) and separates stars behaving as the majority (i.e., normal stars) from those B-type stars having an infrared excess. Figure \ref{5Clusters_a} (b) shows all the stars in our sample of five clusters as gray circles and known Be stars as coloured squares. As described before, different colours correspond to different J magnitude bins. We can see that Be stars cluster mainly in two regions of this plot: a large group is found together with {\it naked} or {\it normal} B stars in the region with W1-W2 between -0.25 and 0.05, and another group in the region with W1-W2 between 0.1 and 0.3. Indeed, as can be seen in Figure~\ref{Histo_Be}, we find that $98.8\%$ of the non-Be objects or {\it normal stars} have W1-W2$<$0.05. We define Be star candidates as objects that have no Be star classification but have W1-W2$\geq$0.05. \begin{figure} \figurenum{7} \gridline{\fig{./Histograma_B_01Oct2017.eps}{0.465\textwidth}{}} \caption{Fraction of stars in each W1-W2 colour bin. The black continuous line shows the distribution of all B stars in our sample, the red line shows the distribution of Be stars, and the blue line shows the distribution of objects that are not known to be Be stars.\label{Histo_Be}} \end{figure} \begin{figure} \figurenum{8} \gridline{\fig{./CC_WISE_early_01Oct2017.eps}{0.465\textwidth}{}} \caption{Colour-colour diagram W2-W3 versus W1-W2 for mid and early B stars. The quality of the W3-band photometry of late B-type stars is usually poor, so these objects were not considered. Gray points indicate all early and mid B-type stars. Coloured symbols indicate Be stars. The region occupied by Be stars in an active phase, W1-W2$\geq$0.05, is indicated with a black rectangle. \label{CC_all}} \end{figure} We split our sample of B stars into absolute magnitude bins corresponding to early (-4$\leq$J$<$-2, green and blue points in Figure \ref{5Clusters_a} (b)), mid (-2$\leq$J$<$-1, magenta points) and late B stars (-1$\leq$J$<$1, cyan and yellow points). Red and black points in Figure \ref{5Clusters_a} (b) correspond to O-type stars and supergiants, with J$<$-4. The presence of B stars of luminosity class III could potentially contaminate the different bins. However, as we do not expect a significant number of giants given the ages of the clusters under study, we do not take this possibility into account in the present article. This may be of interest when older clusters are studied. We thus have 144 early B stars, of which 47 are known Be stars (32.6$\%$). Among the 166 mid B stars, 33 are known Be stars (19.9$\%$), and within the 836 late B stars, 15 (1.8$\%$) are known Be stars. If we consider the early and mid B stars together, 25.8$\%$ are Be stars. The fraction of Be stars decreases significantly when the sample contains later type stars.
This dependence of the fraction of Be stars on the spectral type range under consideration has been extensively reported in the literature, and it is the reason why different authors find large Be fractions when observing the upper MS of open clusters \citep[e.g.][]{Grebel1992, Grebel1997, Maeder1999a, Keller2001, McSwain2005b, Martayan2010a, Iqbal2013}. These works do not properly account for the late B and Be-type stars. When looking at our overall sample ($\rm -4<J<1$), the fraction of Be stars is 8.5$\%$. Because of the lack of completeness, in particular of the late B-type sample, the proportion of Be stars is likely even lower for the whole B sample. Of all early Be stars, 49$\%$ have $\rm W1-W2\geq0.05$; among mid Be stars the percentage increases to 60.6$\%$, and for late Be stars it decreases to $26.7\%$. The number of Be candidates in our sample is 1 in the early Be group, 1 in the mid Be group, and 12 in the late Be group. In the mid and early Be star groups, 43 out of the 45 stars with an excess $\rm W1-W2\geq0.05$ are known Be stars (95.6$\%$). We therefore assume that we know all the Be stars in these spectral ranges for the open clusters studied in this article. This indicates that there may remain only a few mid and early Be stars to be discovered within these clusters. In contrast, in the late Be group, the large proportion of new candidates indicates that for late B stars the classification as Be stars can be more difficult, and a significant number of these objects may have eluded previous detection. In Table \ref{Interesting_objects} we list the coordinates, absolute W1 magnitude, W1-W2 (indicated as W12) and W2-W3 (indicated as W23) colours, as well as other relevant data, for all the Be candidates and other interesting objects. \begin{table*}[h] \caption{Be candidates and other interesting objects that deserve further study} \label{Interesting_objects} \centering \begin{tabular}{c c c c c c p{4cm}} \hline\hline Cluster & R.A.
& $\delta$ & M$_{\rm W1}$ & W12 & W23 & SIMBAD Name / Relevant Data\\ \hline \multicolumn{7}{|c|}{Candidate Be stars} \\ \hline &01 45 55.876&+61 12 33.69& 0.573&0.057&-&MV 34016$^{(1)}$\\ NGC 663&01 45 45.559&+61 10 55.47&-1.214&0.070&-&G 111$^{(2)}$\\ &01 47 28.119&+61 22 45.95& 0.622&0.079&-&-\\\hline &02 18 34.121&+57 13 50.24&-2.630&0.427&-&HG 731$^{(3)}$, V$_{\rm proj}$=165\,km\,s$^{-1\,(4)}$, T$_{\rm eff}$=20561\,K$^{(4)}$, logg=3.944$^{(4)}$, SB1$^{(3)}$.\\ &02 19 03.292&+57 08 20.34&-2.111&0.226&0.492&W2$^{(4)}$, B2V, T$_{\rm eff}$=20566\,K$^{(5)}$, logg=3.96$^{(5)}$, V$_{\rm proj}$=171\,km\,s$^{-1\,(5)}$.\\ &02 19 51.091&+57 17 34.14& 0.634&0.093&-&NGC 869 1455\\ NGC 869/884&02 20 24.749&+57 06 53.07&0.952&0.099&-&LAV 1432$^{(6)}$\\ &02 21 33.416&+57 12 01.62&0.783&0.134&-&LAV 1757$^{(6)}$\\ &02 21 44.694&+57 04 09.23&0.192&0.104&-&LAV 1825$^{(6)}$\\ &02 23 17.891&+57 14 23.71&0.417&0.069&-&LAV 2353$^{(6)}$\\\hline NGC 3766&11 36 30.153&-61 39 47.62&0.293&0.184&1.130&MG\,173$^{(7)}$, V$_{\rm proj}$=171\,km\,s$^{-1\,(8)}$, T$_{\rm eff}$=14210\,K$^{(8)}$, logg=4.21$^{(8)}$, He weak.\\\hline &12 53 37.775&-60 17 45.24&-1.340&0.273&2.331&SB 133$^{(10)}$\\ NGC 4755&12 53 29.469&-60 21 16.56&-0.605&0.103&1.562&ESL 56$^{(11)}$, B3Vn.\\ &12 53 52.896&-60 23 06.78&-1.081&0.087&0.451&ESL 51$^{(11)}$, B3Vn.\\ \hline \multicolumn{7}{|c|}{Interesting objects with W12$<$0.05 but large W23} \\ \hline &12 53 43.872&-60 22 28.76&-2.449&-0.030&1.769&CT Cru, variable star of $\beta$ Cep type, V$_{\rm proj}$=195\,km\,s$^{-1\,(12)}$, B1.5V\\ NGC 4755&12 53 48.177&-60 21 54.32&-1.779&-0.033&1.988&CPD-59 4556, eclipsing binary ($\beta$ Lyr type), B1V.\\ &12 53 51.729&-60 21 58.45&-2.010&-0.180&2.340&CX Cru, variable star of $\beta$ Cep type, V$_{\rm proj}$=278\,km\,s$^{-1\,(12)}$, B1V.\\ \hline\hline \end{tabular}\\ References: (1)\citet{Moffat1974}, (2)\citet{Gushee1919}, (3)\citet{Huang2006a}, (4)\citet{Marsh2012},\\ (5)\citet{Huang2010a}, (6)\citet{Lavdovsky1961}, (7)\citet{McSwain2005b}, (8)\citet{McSwain2009a},\\ (9)\citet{Wildey1964}, (10)\citet{Sanner2001}, (11)\citet{Evans2005}, (12)\citet{Hunter2009}.\\ All data available in the SIMBAD astronomical database \citep{Wenger2000}. \end{table*} The presence of a circumstellar disk may not always be evident if the disk is not dense or large enough and/or the spectroscopic observations are not done with sufficient resolution. In this sense, the WISE photometric classification can identify new targets to search for the Be phenomenon, which could later be studied spectroscopically. Figure \ref{CC_all} represents the WISE colour-colour diagram W2-W3 versus W1-W2. We included only early and mid B-type stars because of the low quality of W3 for most late B-type stars. All Be stars with W1-W2$>$0.05 reside within a well-determined region, indicated as a black rectangle in the plot. Most Be stars with W1-W2$<$0.05 behave as normal B stars, most of which have W2-W3$<$0.6. Remarkable exceptions are two Be stars with W2-W3$>$1.5, together with a third, non-Be star. These very red colours are definitely not characteristic of Be stars; interestingly, the two Be stars also have a $\beta$ Cephei classification. The non-Be star that behaves similarly to these two objects is an eclipsing binary. The colours of these objects agree with those presented by \citet{Nikutta2014} for a central star with a temperature of 10\,000\,K surrounded by an optically thin dusty shell with a rather flat density distribution.
This kind of density profile places more dust at large radial distances, where the temperature is lower. In this case, a small increase in optical depth can significantly enhance the long-wavelength emission and therefore increase W2-W3, while keeping a value close to 0 for the warmer colour W1-W2. This observable signature could be connected with the $\beta$ Cephei behaviour, as pulsations could enhance the transport of dust to large radial distances. All three objects deserve further study. The coordinates and other characteristics of both Be stars and the third interesting star are listed in Table~\ref{Interesting_objects}. \begin{figure} \figurenum{9} \gridline{\fig{./CMD_WISE_ALL_variableBe_01Oct2017.eps}{0.465\textwidth}{}} \caption{CMD of Be stars. The different colours indicate different WISE variability flags: red circles indicate stable Be stars, while blue squares correspond to variable objects. The remaining stars either do not have a clear variability signature or have low-quality data, so a variability flag could not be assigned. The arrows indicate the extent of the variability between different observing epochs for the blue squares.\label{Variability_Be}} \end{figure} \subsection{WISE variability of Be stars} In order to probe the variability of our sample of Be stars, we investigate the variability flags given in the AllWISE catalogue \citep{Cutri2013}. Figure \ref{Variability_Be} shows the WISE CMD for all Be stars in our sample. The different symbols and colours represent Be stars that have different variability flags in their WISE magnitudes \citep{Hoffman2012}. As explained in the AllWISE data release\footnote{http://wise2.ipac.caltech.edu/docs/release/allwise/}, the AllWISE source catalogue contains one set of calibrated magnitudes per object. To this end, during the AllWISE science data processing, single-exposure images obtained at different epochs were coadded, allowing the detection of sources and the measurement of the position, apparent motion and photometry for each of them. Profile-fit photometry was performed simultaneously in the four WISE bands, and also for each filter and single-exposure image, so not only was one set of calibrated magnitudes obtained per object, but multiepoch photometry is also available for each object in each band. During this data processing, flux variability in a band was evaluated by analyzing the distribution of the flux measurements of a source on the individual single-exposure frames, and hence a variability flag was assigned to each band. Because the clusters under study are near the Galactic plane, the single-exposure images from which the observations of the Be stars presented in our article originate were obtained in either two or three different epochs, separated by around 180 days. In each epoch, ten to sixteen individual observations were obtained within two to five days, all of them available in the AllWISE Multiepoch Photometry Database\footnote{http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec3$\_$1.html}. In Figure \ref{Variability_Be}, the red circles indicate Be stars with variability flags compatible with no variation (flags 0-5), while blue squares correspond to objects that are very likely variable (flags 8-9). For the remaining objects, identified with black crosses, either the quality of the data did not allow any variability flag to be provided in the W1 or W2 bands, or the variability flags did not provide a clear variability signature (flags 6 or 7).
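A minimal sketch of this flag grouping, and of the two-sample Kolmogorov-Smirnov comparison applied in the next paragraphs, could read as follows (Python with SciPy; the colour arrays are placeholders, not our measurements):
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def classify(flag):
    # Per-band AllWISE variability flag (0-9), grouped as in the text:
    # 0-5 -> stable, 6-7 -> unclear, 8-9 -> variable
    if flag <= 5:
        return "stable"
    return "unclear" if flag <= 7 else "variable"

# Placeholder W1-W2 colours for the stable and variable groups
w12_stable = np.array([-0.10, -0.05, 0.00, 0.02, 0.20, 0.25])
w12_variable = np.array([0.03, 0.05, 0.08, 0.10, 0.12])

# Two-sample KS test: a small p-value (equivalently, a statistic above
# the critical value) rejects a common parent distribution
stat, pvalue = ks_2samp(w12_stable, w12_variable)
\end{verbatim}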
We see that the red and blue symbols occupy different regions of the CMD: while the 61 non-variable Be stars tend to reside either with the {\it normal} B stars or with the stars showing a significant excess, the 17 variable Be stars tend to occupy a transition region between {\it normal} B stars and stars having a significant excess. \begin{figure} \figurenum{10} \gridline{\fig{./Cumulative_01Oct2017.eps}{0.465\textwidth}{}} \caption{Cumulative distributions of stable and variable Be stars. The two samples are unlikely to come from the same distribution, because the Kolmogorov-Smirnov statistic (D$_{\rm max}$) is larger than the corresponding critical value for the two-sample Kolmogorov-Smirnov test (D$_{\rm crit}$).\label{Cumulative_var}} \end{figure} By performing a two-sample Kolmogorov-Smirnov test on these variable and non-variable Be stars, we find that it is unlikely that the two samples come from the same distribution, as shown in Figure \ref{Cumulative_var}. The null hypothesis that both samples come from a population with the same distribution can be rejected at a significance level of $\alpha=0.005$. We interpret the differences between the two samples in the WISE CMD in terms of quiescent ({\it normal} B) and active phases, in which the star hosts a circumstellar disk. A lack of variability is found either for Be stars in a quiescent (diskless) phase, or for stars that have a developed disk that is not changing significantly. On the other hand, the variable stars are Be stars undergoing disk changes. Among the 95 Be stars of our sample, 17 are definitely variable stars, with light and colour changes observed during the 18 months of WISE observations. This constitutes 18\% of the sample. To show the variability of the Be stars flagged as variable, we calculated the mean W1 and W2 magnitudes for each epoch and plotted the corresponding data in Figure \ref{Variability_Be}. The variability between consecutive epochs is indicated with an arrow pointing in the direction of the time evolution. The six variable stars that are in the quiescent state undergo small colour and magnitude changes, indicating that some minor mass-loss episodes could occur in these objects. There is a significant change in colour and magnitude in most of the variable active stars. As described in the following section, the variability observed in the most active variable Be stars is compatible with a disk dissipation phase. In order to interpret the nature of the variability of Be stars, we generated synthetic WISE colours using the {\sc beray} circumstellar disk code \citep{Sigut2011a}. \section{Disk Model Predictions} In order to obtain the IR continuum flux of B1, B3 and B7 spectral type stars\footnote{The stellar parameters for these spectral types are taken from \citet{Cox2000}, consistent with previous work of the authors.} hosting circumstellar disks, we used the codes {\sc bedisk} \citep{Sigut2007} and {\sc beray} \citep{Sigut2011a}. The former computes the temperature structure for a given disk density structure; the statistical equilibrium equations are solved to obtain the atomic level populations used for the computation of the heating and cooling rates, which are balanced to fix the temperature. The latter solves the radiative transfer equation along 10$^5$ rays through the star-plus-disk system for a given inclination angle. Rays that terminate on the stellar surface use an appropriately limb-darkened intensity as the boundary condition for the transfer equation.
In this way, different observables can be computed, such as line profiles, spectral energy distributions (SEDs), or monochromatic images projected on the sky. These codes have been broadly used for the interpretation of H$\alpha$ lines \citep{Silaj2010a,Ahmed2012,Sigut2013,Silaj2014}, near-IR spectroscopy \citep{Jones2009}, and interferometric observations \citep{Jones2008,Tycner2008,Sigut2015} of Be star disks. In the present work, SEDs are the focus of the modelling, as these can be used to compute the required near-IR magnitudes and colours. We use an axisymmetric disk density distribution in which the radial density depends on two parameters: $\rho_{0}$, the density at the base of the disk, and $n$, the power-law exponent that determines how the density decreases with distance from the star in the equatorial plane. The disk density distribution in the cylindrical co-ordinates $(R,Z)$ is \begin{equation} \rho(R,Z)=\rho_0\left(\frac{r_0}{R}\right)^n e^{-\left(\frac{Z}{H}\right)^2}. \end{equation} Here, $r_0$ is the stellar radius and $H$ is the disk scale height, computed assuming the disk is in hydrostatic equilibrium in the vertical (i.e.\ $Z$) direction with an isothermal temperature of $0.6\,T_{\rm eff}$ \citep[see][]{Sigut2009}. This isothermal temperature is used only to fix the disk scale height, and this, coupled with the assumption of vertical hydrostatic equilibrium, produces a flaring disk with $H\propto R^{3/2}$. We computed spectral energy distributions assuming different values of $\rho_{0}$, between $10^{-12}\,{\rm g}\,{\rm cm^{-3}}$ and $10^{-10}\,{\rm g}\,{\rm cm^{-3}}$, and different values of $n$, ranging between 2 and 4. These are typical values often considered for Be stars in the literature \citep{Rivinius2013}. For the present calculations, we considered a disk size of 50 stellar radii and a variety of inclination angles. For each model, we computed WISE magnitudes by convolving our synthetic energy distributions with each WISE filter, as described by \citet{Jarrett2011}. We also obtained 2MASS J magnitudes by using the 2MASS filter definitions and the fluxes for zero magnitude from \citet{Cohen2003}. We present the computed magnitudes (W1, J, H, K) and WISE colours (W1-W2, indicated as W12, and W2-W3, indicated as W23) for three different spectral types, five different values of $n$, eight values of $\rho_0$, and five different inclination angles in Tables~\ref{Tab_n20}, \ref{Tab_n25}, \ref{Tab_n30}, \ref{Tab_n35} and \ref{Tab_n40}. In addition, we also provide for each model the predicted equivalent width of the H$\alpha$ line (EW$_{H\alpha}$), a quantity of special interest for Be stars, as emission in the Balmer series (i.e.\ a negative value of EW$_{H\alpha}$) is the defining characteristic of the Be stars and a signature of disk emission.
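As an illustration of this density prescription (a sketch only, not the {\sc bedisk}/{\sc beray} implementation; the mean molecular weight $\mu$ is an assumed value used solely to evaluate the sound speed), the scale height and Equation (1) can be written as:
\begin{verbatim}
import numpy as np

K_B = 1.380649e-16    # Boltzmann constant [erg/K]
M_H = 1.6735575e-24   # mass of the hydrogen atom [g]
G   = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]

def scale_height(R, M_star, T_eff, mu=0.6):
    # Vertical hydrostatic equilibrium at the isothermal temperature
    # 0.6 T_eff gives H = (c_s / v_K) * R, which grows as R^(3/2)
    c_s = np.sqrt(K_B * 0.6 * T_eff / (mu * M_H))  # isothermal sound speed
    v_K = np.sqrt(G * M_star / R)                  # Keplerian speed at R
    return (c_s / v_K) * R

def disk_density(R, Z, rho0, n, r0, M_star, T_eff):
    # Equation (1): rho(R,Z) = rho0 (r0/R)^n exp(-(Z/H)^2), in cgs units
    H = scale_height(R, M_star, T_eff)
    return rho0 * (r0 / R) ** n * np.exp(-((Z / H) ** 2))
\end{verbatim}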
\floattable \begin{deluxetable}{ c c | c c c c c c c| c c c c c c c | c c c c c c c} \tabletypesize{\tiny} \rotate \tablewidth{0pt} \tablecolumns{23} \tablecaption{Synthetic infrared magnitudes and colours computed for n=2.\label{Tab_n20}} \tablehead{ \colhead{$\rho_{0}/10^{-12}$} & \colhead{i} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} } \startdata (gcm$^{-3}$)&($^o$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)\\ & & & & &B1 & & & & & & & B3 & & & & & & & B7 & & & \\ \hline & 18 & -6.557 & 0.416 & 1.010 & -5.519 & -5.577 & -5.996 & -6.86 & -4.924 & 0.332 & 0.834 & -3.990 & -4.114 & -4.457 & -10.47 & -2.794 & 0.357 & 0.937 & -1.853 & -1.951 & -2.343 & -13.89 \\ & 45 & -6.380 & 0.408 & 0.984 & -5.366 & -5.415 & -5.825 & -9.95 & -4.721 & 0.326 & 0.831 & -3.814 & -3.930 & -4.264 & -18.2 & -2.655 & 0.351 & 0.910 & -1.722 & -1.814 & -2.209 & -19.62 \\ 100 & 60 & -6.143 & 0.400 & 0.965 & -5.163 & -5.201 & -5.596 & -11.53 & -4.474 & 0.323 & 0.836 & -3.596 & -3.703 & -4.024 & -23.92 & -2.493 & 0.343 & 0.878 & -1.577 & -1.660 & -2.055 & -26.98 \\ & 72 & -5.748 & 0.403 & 0.993 & -4.806 & -4.826 & -5.198 & -11.16 & -4.103 & 0.333 & 0.882 & -3.254 & -3.344 & -3.653 & -23.13 & -2.259 & 0.330 & 0.842 & -1.381 & -1.449 & -1.834 & -25.19 \\ & 84 & -4.992 & 0.466 & 1.027 & -3.950 & -3.944 & -4.348 & -8.18 & -3.514 & 0.400 & 0.986 & -2.554 & -2.622 & -2.983 & -20.41 & -1.790 & 0.314 & 0.843 & -0.989 & -1.029 & -1.383 & -22.62 \\\hline & 18 & -6.206 & 0.437 & 1.104 & -5.137 & -5.207 & -5.633 & -9.61 & -4.691 & 0.362 & 0.878 & -3.692 & -3.825 & -4.189 & -12.87 & -2.526 & 0.361 & 0.973 & -1.606 & -1.705 & -2.075 & -14.7 \\ & 45 & -6.046 & 0.430 & 1.074 & -4.996 & -5.058 & -5.477 & -14.21 & -4.503 & 0.352 & 0.865 & -3.529 & -3.657 & -4.011 & -21.95 & -2.383 & 0.357 & 0.950 & -1.472 & -1.565 & -1.937 & -20.75 \\ 75 & 60 & -5.835 & 0.423 & 1.045 & -4.812 & -4.865 & -5.272 & -17.01 & -4.268 & 0.342 & 0.861 & -3.330 & -3.449 & -3.790 & -29.31 & -2.218 & 0.350 & 0.921 & -1.328 & -1.411 & -1.779 & -28.86 \\ & 72 & -5.478 & 0.420 & 1.042 & -4.495 & -4.533 & -4.918 & -16 & -3.908 & 0.342 & 0.895 & -3.019 & -3.122 & -3.441 & -28.24 & -1.981 & 0.338 & 0.886 & -1.142 & -1.209 & -1.560 & -27.31 \\ & 84 & -4.715 & 0.479 & 1.104 & -3.659 & -3.673 & -4.070 & -13.76 & -3.272 & 0.407 & 1.032 & -2.323 & -2.400 & -2.739 & -26.79 & -1.518 & 0.319 & 0.879 & -0.788 & -0.824 & -1.125 & -23.77 \\\hline & 18 & -5.683 & 0.453 & 1.219 & -4.605 & -4.685 & -5.108 & -15.32 & -4.311 & 0.406 & 0.999 & -3.257 & -3.391 & -3.774 & -16.63 & -2.174 & 0.366 & 1.022 & -1.292 & -1.387 & -1.726 & -15.36 \\ & 45 & -5.541 & 0.450 & 1.196 & -4.477 & -4.549 & -4.968 & -22.66 & -4.147 & 0.397 & 0.968 & -3.113 & -3.243 & -3.617 & -27.99 & -2.026 & 0.363 & 1.004 & -1.160 & -1.248 & -1.582 & -21.77 \\ 50 & 60 & -5.359 & 0.445 & 1.168 & -4.315 & -4.379 & -4.789 & -28.09 & -3.942 & 0.385 & 0.939 & -2.940 & -3.063 & -3.424 & -37.79 & -1.857 & 0.357 & 0.980 & -1.025 & -1.101 & -1.423 & -30.47 \\ & 72 & -5.055 & 0.441 & 1.144 & -4.048 & -4.099 & -4.490 & -26.79 & -3.620 & 0.371 & 0.934 & -2.677 & -2.786 & -3.121 & -36.74 & -1.621 & 0.344 & 0.948 & -0.861 & -0.921 & -1.213 & 
-29.42 \\ & 84 & -4.306 & 0.490 & 1.212 & -3.272 & -3.301 & -3.676 & -26.64 & -2.951 & 0.416 & 1.079 & -2.038 & -2.117 & -2.421 & -36.43 & -1.173 & 0.317 & 0.924 & -0.570 & -0.596 & -0.818 & -24.14 \\\hline & 18 & -4.758 & 0.463 & 1.334 & -3.731 & -3.799 & -4.194 & -30.51 & -3.540 & 0.450 & 1.249 & -2.511 & -2.613 & -2.985 & -23.48 & -1.614 & 0.378 & 1.100 & -0.834 & -0.903 & -1.172 & -15.45 \\ & 45 & -4.641 & 0.461 & 1.325 & -3.640 & -3.698 & -4.082 & -46.06 & -3.408 & 0.445 & 1.228 & -2.413 & -2.506 & -2.862 & -38.06 & -1.466 & 0.373 & 1.090 & -0.737 & -0.794 & -1.039 & -22.61 \\ 25 & 60 & -4.496 & 0.458 & 1.312 & -3.528 & -3.574 & -3.945 & -57.27 & -3.249 & 0.439 & 1.199 & -2.296 & -2.379 & -2.715 & -50.81 & -1.300 & 0.363 & 1.074 & -0.639 & -0.684 & -0.899 & -31.29 \\ & 72 & -4.268 & 0.453 & 1.295 & -3.352 & -3.385 & -3.731 & -60.1 & -3.010 & 0.427 & 1.155 & -2.127 & -2.198 & -2.498 & -50.01 & -1.085 & 0.338 & 1.042 & -0.526 & -0.557 & -0.728 & -31.7 \\ & 84 & -3.644 & 0.468 & 1.318 & -2.822 & -2.829 & -3.109 & -54.52 & -2.429 & 0.420 & 1.186 & -1.702 & -1.742 & -1.955 & -46.17 & -0.703 & 0.279 & 0.968 & -0.341 & -0.344 & -0.438 & -23.22 \\\hline & 18 & -3.501 & 0.419 & 1.375 & -2.972 & -2.939 & -3.105 & -39.25 & -2.349 & 0.425 & 1.393 & -1.840 & -1.822 & -1.959 & -27.6 & -0.832 & 0.353 & 1.229 & -0.464 & -0.451 & -0.535 & -14.25 \\ & 45 & -3.425 & 0.408 & 1.361 & -2.934 & -2.893 & -3.047 & -49.49 & -2.264 & 0.411 & 1.378 & -1.802 & -1.776 & -1.896 & -44.24 & -0.741 & 0.331 & 1.199 & -0.430 & -0.410 & -0.473 & -21.62 \\ 10 & 60 & -3.334 & 0.393 & 1.342 & -2.887 & -2.839 & -2.977 & -58.9 & -2.164 & 0.393 & 1.360 & -1.756 & -1.724 & -1.822 & -57.53 & -0.641 & 0.305 & 1.160 & -0.393 & -0.367 & -0.406 & -29.01 \\ & 72 & -3.205 & 0.370 & 1.311 & -2.814 & -2.759 & -2.877 & -64.74 & -2.026 & 0.367 & 1.330 & -1.688 & -1.650 & -1.719 & -59.93 & -0.519 & 0.265 & 1.092 & -0.345 & -0.314 & -0.326 & -31.41 \\ & 84 & -2.870 & 0.325 & 1.245 & -2.592 & -2.526 & -2.599 & -46.9 & -1.723 & 0.311 & 1.235 & -1.512 & -1.466 & -1.483 & -45.67 & -0.319 & 0.188 & 0.898 & -0.252 & -0.216 & -0.193 & -19.92 \\\hline & 18 & -2.794 & 0.236 & 1.202 & -2.759 & -2.662 & -2.659 & -23.17 & -1.802 & 0.237 & 1.199 & -1.716 & -1.642 & -1.656 & -18.95 & -0.459 & 0.196 & 1.095 & -0.395 & -0.342 & -0.344 & -10.29 \\ & 45 & -2.754 & 0.222 & 1.170 & -2.743 & -2.642 & -2.633 & -21.65 & -1.764 & 0.222 & 1.162 & -1.702 & -1.625 & -1.631 & -22.92 & -0.421 & 0.179 & 1.046 & -0.383 & -0.328 & -0.322 & -13.37 \\ 5 & 60 & -2.711 & 0.206 & 1.129 & -2.724 & -2.620 & -2.604 & -22.2 & -1.721 & 0.206 & 1.117 & -1.685 & -1.605 & -1.603 & -28.06 & -0.379 & 0.160 & 0.989 & -0.369 & -0.312 & -0.297 & -17.61 \\ & 72 & -2.653 & 0.188 & 1.065 & -2.693 & -2.586 & -2.562 & -22.65 & -1.665 & 0.185 & 1.050 & -1.658 & -1.577 & -1.565 & -30.02 & -0.328 & 0.136 & 0.904 & -0.349 & -0.291 & -0.264 & -19.61 \\ & 84 & -2.509 & 0.153 & 0.923 & -2.589 & -2.480 & -2.442 & -13.34 & -1.537 & 0.147 & 0.900 & -1.576 & -1.493 & -1.465 & -18.88 & -0.237 & 0.100 & 0.714 & -0.295 & -0.237 & -0.197 & -10.33 \\\hline & 18 & -2.480 & 0.061 & 0.713 & -2.687 & -2.563 & -2.494 & -8.05 & -1.525 & 0.071 & 0.701 & -1.658 & -1.562 & -1.516 & -8.31 & -0.287 & 0.064 & 0.630 & -0.370 & -0.302 & -0.271 & -4.19 \\ & 45 & -2.466 & 0.054 & 0.677 & -2.681 & -2.556 & -2.485 & -5.83 & -1.512 & 0.064 & 0.666 & -1.652 & -1.556 & -1.508 & -7.07 & -0.275 & 0.058 & 0.592 & -0.365 & -0.297 & -0.264 & -3.97 \\ 2.5 & 60 & -2.450 & 0.047 & 0.638 & -2.673 & -2.547 & -2.474 & -4.56 & -1.497 & 0.057 
& 0.626 & -1.646 & -1.548 & -1.498 & -7.11 & -0.262 & 0.050 & 0.549 & -0.360 & -0.292 & -0.256 & -5.05 \\ & 72 & -2.429 & 0.040 & 0.591 & -2.659 & -2.533 & -2.458 & -3.61 & -1.479 & 0.050 & 0.578 & -1.636 & -1.537 & -1.485 & -6.97 & -0.246 & 0.042 & 0.496 & -0.353 & -0.284 & -0.245 & -5.67 \\ & 84 & -2.370 & 0.030 & 0.499 & -2.609 & -2.481 & -2.404 & -0.63 & -1.433 & 0.041 & 0.488 & -1.599 & -1.500 & -1.445 & -2.15 & -0.213 & 0.033 & 0.406 & -0.329 & -0.260 & -0.217 & -0.75 \\\hline & 18 & -2.366 & -0.023 & 0.169 & -2.658 & -2.525 & -2.436 & 0.08 & -1.415 & -0.009 & 0.177 & -1.631 & -1.527 & -1.460 & 0.64 & -0.209 & 0.000 & 0.163 & -0.354 & -0.281 & -0.234 & 2.23 \\ & 45 & -2.363 & -0.024 & 0.156 & -2.656 & -2.523 & -2.433 & 0.74 & -1.412 & -0.010 & 0.164 & -1.630 & -1.526 & -1.458 & 1.49 & -0.207 & -0.001 & 0.152 & -0.352 & -0.280 & -0.233 & 2.97 \\ 1 & 60 & -2.359 & -0.025 & 0.144 & -2.653 & -2.520 & -2.430 & 1.25 & -1.409 & -0.011 & 0.153 & -1.627 & -1.523 & -1.456 & 1.99 & -0.204 & -0.002 & 0.142 & -0.351 & -0.278 & -0.231 & 3.29 \\ & 72 & -2.353 & -0.027 & 0.132 & -2.648 & -2.515 & -2.424 & 1.75 & -1.404 & -0.012 & 0.142 & -1.624 & -1.520 & -1.452 & 2.43 & -0.201 & -0.003 & 0.132 & -0.349 & -0.276 & -0.228 & 3.56 \\ & 84 & -2.331 & -0.028 & 0.114 & -2.628 & -2.494 & -2.404 & 2.69 & -1.389 & -0.014 & 0.127 & -1.610 & -1.506 & -1.437 & 4.07 & -0.192 & -0.004 & 0.120 & -0.340 & -0.267 & -0.220 & 5.16\\ \enddata \end{deluxetable} \floattable \begin{deluxetable}{ c c | c c c c c c c| c c c c c c c | c c c c c c c} \tabletypesize{\tiny} \rotate \tablewidth{0pt} \tablecolumns{23} \tablecaption{Synthetic infrared magnitudes and colours computed for n=2.5\label{Tab_n25}} \tablehead{ \colhead{$\rho_{0}/10^{-12}$} & \colhead{i} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} } \startdata (gcm$^{-3}$)&($^o$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)\\ & & & & &B1 & & & & & & & B3 & & & & & & & B7 & & & \\ \hline & 18 & -5.163 & 0.359 & 1.030 & -4.335 & -4.413 & -4.716 & -15.09 & -4.035 & 0.359 & 0.968 & -3.171 & -3.292 & -3.578 & -10.81 & -2.110 & 0.323 & 0.913 & -1.338 & -1.436 & -1.713 & -8.67 \\ & 45 & -4.981 & 0.360 & 1.033 & -4.153 & -4.229 & -4.531 & -22.28 & -3.845 & 0.359 & 0.953 & -2.980 & -3.100 & -3.384 & -17.35 & -1.931 & 0.321 & 0.903 & -1.167 & -1.260 & -1.536 & -12.13 \\ 100 & 60 & -4.763 & 0.362 & 1.038 & -3.941 & -4.012 & -4.310 & -30.51 & -3.619 & 0.357 & 0.934 & -2.761 & -2.878 & -3.158 & -25.74 & -1.726 & 0.318 & 0.888 & -0.986 & -1.070 & -1.337 & -18.07 \\ & 72 & -4.438 & 0.367 & 1.053 & -3.635 & -3.695 & -3.983 & -36.92 & -3.290 & 0.353 & 0.920 & -2.460 & -2.566 & -2.833 & -29.14 & -1.445 & 0.309 & 0.869 & -0.778 & -0.843 & -1.078 & -20.28 \\ & 84 & -3.686 & 0.419 & 1.157 & -2.893 & -2.925 & -3.183 & -38.32 & -2.585 & 0.385 & 1.033 & -1.815 & -1.887 & -2.111 & -31.34 & -0.952 & 0.282 & 0.852 & -0.465 & -0.490 & -0.645 & -14.59 \\\hline & 18 & -4.867 & 0.353 & 1.038 & -4.076 & -4.147 & -4.436 & -17.29 & -3.765 & 0.362 & 1.029 & -2.927 & -3.038 & -3.317 & -12.32 & -1.915 & 0.323 & 0.936 & -1.172 & -1.264 & -1.524 & -9.01 \\ & 45 & -4.688 & 0.355 & 1.044 & -3.897 & -3.965 & -4.253 & -24.77 & 
-3.580 & 0.364 & 1.023 & -2.741 & -2.851 & -3.128 & -19.01 & -1.734 & 0.322 & 0.930 & -1.003 & -1.089 & -1.344 & -12.71 \\ 75 & 60 & -4.474 & 0.356 & 1.052 & -3.692 & -3.755 & -4.038 & -33.78 & -3.364 & 0.365 & 1.012 & -2.532 & -2.638 & -2.909 & -27.93 & -1.528 & 0.319 & 0.919 & -0.832 & -0.907 & -1.147 & -18.73 \\ & 72 & -4.163 & 0.360 & 1.072 & -3.406 & -3.458 & -3.727 & -41.87 & -3.055 & 0.364 & 0.998 & -2.256 & -2.349 & -2.604 & -32.26 & -1.252 & 0.307 & 0.902 & -0.647 & -0.703 & -0.901 & -21.43 \\ & 84 & -3.450 & 0.398 & 1.169 & -2.762 & -2.779 & -2.996 & -39.82 & -2.384 & 0.379 & 1.070 & -1.703 & -1.756 & -1.943 & -31.32 & -0.786 & 0.267 & 0.866 & -0.386 & -0.402 & -0.513 & -14.18 \\\hline & 18 & -4.464 & 0.342 & 1.033 & -3.726 & -3.787 & -4.057 & -19.92 & -3.376 & 0.353 & 1.064 & -2.597 & -2.688 & -2.953 & -14.25 & -1.652 & 0.322 & 0.965 & -0.957 & -1.037 & -1.272 & -9.1 \\ & 45 & -4.289 & 0.344 & 1.041 & -3.560 & -3.614 & -3.881 & -26.36 & -3.193 & 0.356 & 1.070 & -2.424 & -2.510 & -2.767 & -21.05 & -1.469 & 0.322 & 0.963 & -0.800 & -0.870 & -1.091 & -12.91 \\ 50 & 60 & -4.084 & 0.345 & 1.051 & -3.373 & -3.418 & -3.676 & -34.79 & -2.983 & 0.359 & 1.074 & -2.234 & -2.312 & -2.557 & -30.2 & -1.264 & 0.318 & 0.956 & -0.652 & -0.709 & -0.902 & -18.71 \\ & 72 & -3.792 & 0.347 & 1.071 & -3.131 & -3.159 & -3.393 & -42.79 & -2.693 & 0.360 & 1.075 & -2.001 & -2.063 & -2.276 & -35.43 & -1.002 & 0.298 & 0.940 & -0.502 & -0.540 & -0.683 & -21.85 \\ & 84 & -3.148 & 0.353 & 1.146 & -2.640 & -2.628 & -2.781 & -34.84 & -2.100 & 0.351 & 1.099 & -1.591 & -1.609 & -1.733 & -29.54 & -0.587 & 0.237 & 0.867 & -0.307 & -0.308 & -0.365 & -13.13 \\\hline & 18 & -3.771 & 0.320 & 1.003 & -3.170 & -3.186 & -3.403 & -19.73 & -2.666 & 0.331 & 1.045 & -2.046 & -2.083 & -2.285 & -13.86 & -1.193 & 0.319 & 0.996 & -0.633 & -0.671 & -0.840 & -7.61 \\ & 45 & -3.609 & 0.318 & 1.008 & -3.059 & -3.058 & -3.254 & -20.96 & -2.487 & 0.329 & 1.060 & -1.931 & -1.952 & -2.124 & -18.44 & -1.016 & 0.313 & 0.999 & -0.530 & -0.554 & -0.687 & -10.55 \\ 25 & 60 & -3.425 & 0.313 & 1.013 & -2.941 & -2.925 & -3.092 & -24.35 & -2.287 & 0.324 & 1.076 & -1.811 & -1.817 & -1.951 & -24.96 & -0.830 & 0.297 & 0.995 & -0.439 & -0.449 & -0.541 & -15.05 \\ & 72 & -3.184 & 0.296 & 1.015 & -2.801 & -2.768 & -2.893 & -27.58 & -2.038 & 0.307 & 1.086 & -1.679 & -1.668 & -1.748 & -29.34 & -0.620 & 0.259 & 0.962 & -0.351 & -0.348 & -0.390 & -17.95 \\ & 84 & -2.736 & 0.243 & 0.981 & -2.553 & -2.489 & -2.533 & -17.43 & -1.632 & 0.248 & 1.031 & -1.477 & -1.438 & -1.437 & -19.04 & -0.333 & 0.177 & 0.806 & -0.240 & -0.216 & -0.200 & -9.24 \\\hline & 18 & -2.910 & 0.248 & 0.950 & -2.788 & -2.705 & -2.731 & -8.79 & -1.905 & 0.250 & 0.955 & -1.740 & -1.679 & -1.718 & -7.14 & -0.579 & 0.236 & 0.962 & -0.417 & -0.379 & -0.407 & -3.28 \\ & 45 & -2.811 & 0.222 & 0.925 & -2.749 & -2.657 & -2.664 & -7.36 & -1.806 & 0.222 & 0.923 & -1.704 & -1.635 & -1.653 & -6.98 & -0.484 & 0.206 & 0.925 & -0.386 & -0.341 & -0.348 & -4.08 \\ 10 & 60 & -2.713 & 0.190 & 0.882 & -2.710 & -2.611 & -2.600 & -7.03 & -1.707 & 0.189 & 0.875 & -1.667 & -1.593 & -1.590 & -8.13 & -0.394 & 0.171 & 0.870 & -0.357 & -0.307 & -0.293 & -6.03 \\ & 72 & -2.610 & 0.152 & 0.799 & -2.665 & -2.560 & -2.532 & -6.75 & -1.603 & 0.147 & 0.793 & -1.626 & -1.548 & -1.523 & -8.95 & -0.305 & 0.130 & 0.772 & -0.325 & -0.273 & -0.239 & -7.31 \\ & 84 & -2.448 & 0.096 & 0.624 & -2.579 & -2.468 & -2.418 & -3.27 & -1.459 & 0.090 & 0.618 & -1.559 & -1.475 & -1.427 & -3.52 & -0.204 & 0.079 & 0.567 & -0.280 & -0.224 & 
-0.175 & -1.84 \\\hline & 18 & -2.535 & 0.095 & 0.735 & -2.698 & -2.579 & -2.521 & -3.2 & -1.577 & 0.102 & 0.727 & -1.668 & -1.576 & -1.543 & -2.2 & -0.326 & 0.096 & 0.698 & -0.374 & -0.311 & -0.287 & 0.18 \\ & 45 & -2.493 & 0.074 & 0.667 & -2.683 & -2.561 & -2.496 & -1.95 & -1.538 & 0.082 & 0.656 & -1.655 & -1.561 & -1.519 & -1.35 & -0.290 & 0.076 & 0.622 & -0.363 & -0.298 & -0.266 & 0.43 \\ 5 & 60 & -2.456 & 0.055 & 0.588 & -2.668 & -2.543 & -2.473 & -1.24 & -1.503 & 0.063 & 0.575 & -1.642 & -1.546 & -1.498 & -1.22 & -0.257 & 0.057 & 0.539 & -0.353 & -0.286 & -0.247 & -0.03 \\ & 72 & -2.421 & 0.039 & 0.493 & -2.649 & -2.523 & -2.448 & -0.59 & -1.470 & 0.047 & 0.478 & -1.626 & -1.529 & -1.476 & -1.04 & -0.227 & 0.039 & 0.440 & -0.341 & -0.273 & -0.229 & -0.28 \\ & 84 & -2.362 & 0.018 & 0.352 & -2.608 & -2.481 & -2.402 & 0.9 & -1.422 & 0.027 & 0.341 & -1.597 & -1.499 & -1.441 & 1.42 & -0.192 & 0.023 & 0.310 & -0.321 & -0.253 & -0.204 & 2.45 \\\hline & 18 & -2.397 & 0.000 & 0.343 & -2.666 & -2.536 & -2.452 & -0.09 & -1.444 & 0.013 & 0.342 & -1.638 & -1.537 & -1.475 & 0.86 & -0.233 & 0.020 & 0.325 & -0.359 & -0.288 & -0.246 & 2.63 \\ & 45 & -2.384 & -0.007 & 0.291 & -2.660 & -2.529 & -2.443 & 0.68 & -1.432 & 0.006 & 0.290 & -1.634 & -1.531 & -1.468 & 1.58 & -0.223 & 0.013 & 0.275 & -0.355 & -0.283 & -0.240 & 3.14 \\ 2.5 & 60 & -2.372 & -0.013 & 0.245 & -2.654 & -2.522 & -2.435 & 1.21 & -1.421 & 0.001 & 0.245 & -1.629 & -1.526 & -1.461 & 1.98 & -0.213 & 0.008 & 0.230 & -0.351 & -0.279 & -0.234 & 3.29 \\ & 72 & -2.359 & -0.017 & 0.203 & -2.645 & -2.513 & -2.425 & 1.73 & -1.411 & -0.003 & 0.205 & -1.622 & -1.519 & -1.453 & 2.37 & -0.204 & 0.004 & 0.191 & -0.347 & -0.275 & -0.228 & 3.49 \\ & 84 & -2.336 & -0.021 & 0.147 & -2.626 & -2.493 & -2.404 & 2.56 & -1.394 & -0.007 & 0.154 & -1.609 & -1.505 & -1.438 & 3.7 & -0.193 & 0.001 & 0.146 & -0.338 & -0.266 & -0.218 & 4.86 \\\hline & 18 & -2.349 & -0.035 & 0.051 & -2.652 & -2.518 & -2.426 & 2.06 & -1.400 & -0.020 & 0.069 & -1.626 & -1.521 & -1.452 & 2.91 & -0.197 & -0.009 & 0.075 & -0.350 & -0.277 & -0.228 & 4.47 \\ & 45 & -2.346 & -0.036 & 0.038 & -2.650 & -2.516 & -2.423 & 2.39 & -1.397 & -0.021 & 0.057 & -1.625 & -1.520 & -1.450 & 3.3 & -0.195 & -0.010 & 0.065 & -0.349 & -0.276 & -0.226 & 4.83 \\ 1 & 60 & -2.342 & -0.037 & 0.029 & -2.648 & -2.513 & -2.421 & 2.67 & -1.394 & -0.022 & 0.048 & -1.623 & -1.518 & -1.448 & 3.59 & -0.193 & -0.010 & 0.057 & -0.348 & -0.274 & -0.225 & 5.07 \\ & 72 & -2.339 & -0.037 & 0.021 & -2.644 & -2.510 & -2.417 & 2.97 & -1.392 & -0.022 & 0.042 & -1.621 & -1.516 & -1.445 & 3.91 & -0.191 & -0.011 & 0.052 & -0.347 & -0.273 & -0.224 & 5.33 \\ & 84 & -2.330 & -0.038 & 0.014 & -2.636 & -2.502 & -2.409 & 3.35 & -1.386 & -0.023 & 0.036 & -1.615 & -1.510 & -1.440 & 4.47 & -0.188 & -0.011 & 0.047 & -0.343 & -0.270 & -0.220 & 6.06 \\ \enddata \end{deluxetable} \floattable \begin{deluxetable}{ c c | c c c c c c c| c c c c c c c | c c c c c c c} \tabletypesize{\tiny} \rotate \tablewidth{0pt} \tablecolumns{23} \tablecaption{Synthetic infrared magnitudes and colours computed for n=3.\label{Tab_n30}} \tablehead{ \colhead{$\rho_{0}/10^{-12}$} & \colhead{i} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} } \startdata 
(gcm$^{-3}$)&($^o$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)\\ & & & & &B1 & & & & & & & B3 & & & & & & & B7 & & & \\ \hline & 18 & -4.343 & 0.264 & 0.806 & -3.786 & -3.829 & -4.027 & -9.74 & -3.285 & 0.281 & 0.871 & -2.683 & -2.761 & -2.948 & -6.74 & -1.700 & 0.282 & 0.851 & -1.079 & -1.160 & -1.362 & -4.43 \\ & 45 & -4.132 & 0.266 & 0.815 & -3.579 & -3.618 & -3.814 & -11.52 & -3.067 & 0.285 & 0.883 & -2.467 & -2.543 & -2.726 & -8.8 & -1.495 & 0.282 & 0.853 & -0.887 & -0.963 & -1.159 & -6.34 \\ 100 & 60 & -3.886 & 0.267 & 0.828 & -3.347 & -3.380 & -3.570 & -15.03 & -2.818 & 0.288 & 0.897 & -2.229 & -2.300 & -2.475 & -13.13 & -1.264 & 0.280 & 0.851 & -0.697 & -0.761 & -0.937 & -9.89 \\ & 72 & -3.546 & 0.268 & 0.853 & -3.052 & -3.070 & -3.238 & -19.19 & -2.483 & 0.291 & 0.914 & -1.942 & -1.996 & -2.147 & -17.45 & -0.966 & 0.265 & 0.844 & -0.509 & -0.551 & -0.677 & -12.7 \\ & 84 & -2.882 & 0.258 & 0.917 & -2.579 & -2.548 & -2.628 & -13.18 & -1.877 & 0.275 & 0.940 & -1.537 & -1.540 & -1.602 & -11.31 & -0.531 & 0.206 & 0.767 & -0.298 & -0.299 & -0.337 & -5.47 \\\hline & 18 & -4.132 & 0.255 & 0.792 & -3.613 & -3.645 & -3.832 & -8.97 & -3.062 & 0.267 & 0.846 & -2.507 & -2.570 & -2.746 & -6.21 & -1.543 & 0.279 & 0.858 & -0.954 & -1.027 & -1.215 & -3.77 \\ & 45 & -3.924 & 0.256 & 0.800 & -3.412 & -3.439 & -3.624 & -10.47 & -2.842 & 0.270 & 0.860 & -2.296 & -2.356 & -2.525 & -8.23 & -1.336 & 0.279 & 0.861 & -0.768 & -0.833 & -1.011 & -5.59 \\ 75 & 60 & -3.684 & 0.257 & 0.811 & -3.193 & -3.211 & -3.386 & -13.51 & -2.591 & 0.272 & 0.875 & -2.071 & -2.122 & -2.277 & -12.2 & -1.106 & 0.275 & 0.861 & -0.595 & -0.646 & -0.795 & -8.82 \\ & 72 & -3.356 & 0.254 & 0.831 & -2.933 & -2.931 & -3.075 & -16.73 & -2.261 & 0.271 & 0.895 & -1.818 & -1.849 & -1.964 & -16 & -0.822 & 0.254 & 0.850 & -0.436 & -0.465 & -0.559 & -11.31 \\ & 84 & -2.762 & 0.222 & 0.850 & -2.557 & -2.508 & -2.559 & -10.44 & -1.714 & 0.233 & 0.887 & -1.500 & -1.482 & -1.502 & -8.95 & -0.430 & 0.183 & 0.740 & -0.268 & -0.259 & -0.270 & -4.25 \\\hline & 18 & -3.838 & 0.243 & 0.773 & -3.368 & -3.386 & -3.559 & -8 & -2.762 & 0.248 & 0.805 & -2.266 & -2.311 & -2.474 & -5.27 & -1.324 & 0.270 & 0.853 & -0.791 & -0.847 & -1.014 & -2.61 \\ & 45 & -3.635 & 0.243 & 0.780 & -3.185 & -3.192 & -3.358 & -8.77 & -2.541 & 0.248 & 0.820 & -2.075 & -2.108 & -2.257 & -6.89 & -1.117 & 0.269 & 0.858 & -0.625 & -0.668 & -0.814 & -4.17 \\ 50 & 60 & -3.404 & 0.242 & 0.788 & -2.998 & -2.989 & -3.136 & -10.7 & -2.292 & 0.247 & 0.837 & -1.883 & -1.901 & -2.020 & -9.96 & -0.892 & 0.261 & 0.859 & -0.484 & -0.510 & -0.616 & -6.87 \\ & 72 & -3.103 & 0.230 & 0.797 & -2.799 & -2.768 & -2.870 & -12.43 & -1.980 & 0.235 & 0.855 & -1.691 & -1.687 & -1.749 & -12.6 & -0.636 & 0.228 & 0.837 & -0.362 & -0.370 & -0.420 & -8.84 \\ & 84 & -2.630 & 0.172 & 0.743 & -2.548 & -2.476 & -2.492 & -7.01 & -1.543 & 0.173 & 0.787 & -1.481 & -1.436 & -1.412 & -5.83 & -0.318 & 0.148 & 0.684 & -0.245 & -0.222 & -0.204 & -2.53 \\\hline & 18 & -3.329 & 0.228 & 0.743 & -2.980 & -2.954 & -3.079 & -5.76 & -2.288 & 0.231 & 0.746 & -1.910 & -1.904 & -2.030 & -3.51 & -0.937 & 0.247 & 0.806 & -0.551 & -0.561 & -0.673 & -0.61 \\ & 45 & -3.142 & 0.221 & 0.745 & -2.867 & -2.820 & -2.915 & -5.23 & -2.088 & 0.222 & 0.753 & -1.798 & -1.772 & -1.860 & -3.84 & -0.744 & 0.235 & 0.808 & -0.452 & -0.445 & -0.516 & -1.37 \\ 25 & 60 & -2.945 & 0.204 & 0.741 & -2.765 & -2.700 & -2.759 & -5.41 & -1.881 & 0.200 & 0.755 & -1.698 & 
-1.656 & -1.700 & -5.07 & -0.560 & 0.206 & 0.795 & -0.375 & -0.355 & -0.383 & -2.98 \\ & 72 & -2.736 & 0.165 & 0.700 & -2.669 & -2.589 & -2.606 & -5.39 & -1.665 & 0.157 & 0.722 & -1.608 & -1.552 & -1.547 & -5.93 & -0.383 & 0.156 & 0.730 & -0.311 & -0.281 & -0.268 & -4.06 \\ & 84 & -2.476 & 0.096 & 0.541 & -2.561 & -2.460 & -2.427 & -2.52 & -1.437 & 0.090 & 0.557 & -1.519 & -1.445 & -1.395 & -1.6 & -0.210 & 0.087 & 0.525 & -0.253 & -0.209 & -0.163 & 0.29 \\\hline & 18 & -2.700 & 0.162 & 0.697 & -2.734 & -2.632 & -2.612 & -2.04 & -1.733 & 0.165 & 0.699 & -1.703 & -1.626 & -1.628 & -0.61 & -0.451 & 0.158 & 0.705 & -0.397 & -0.346 & -0.349 & 1.61 \\ & 45 & -2.597 & 0.126 & 0.653 & -2.698 & -2.587 & -2.547 & -1.17 & -1.634 & 0.130 & 0.646 & -1.669 & -1.585 & -1.566 & -0.09 & -0.358 & 0.121 & 0.643 & -0.369 & -0.312 & -0.293 & 1.78 \\ 10 & 60 & -2.514 & 0.090 & 0.574 & -2.668 & -2.551 & -2.496 & -0.7 & -1.553 & 0.093 & 0.561 & -1.641 & -1.552 & -1.517 & -0.06 & -0.283 & 0.084 & 0.553 & -0.346 & -0.285 & -0.250 & 1.45 \\ & 72 & -2.445 & 0.057 & 0.456 & -2.639 & -2.519 & -2.452 & -0.18 & -1.486 & 0.059 & 0.440 & -1.616 & -1.525 & -1.477 & 0.11 & -0.225 & 0.051 & 0.429 & -0.326 & -0.263 & -0.217 & 1.33 \\ & 84 & -2.367 & 0.020 & 0.296 & -2.601 & -2.477 & -2.401 & 1.09 & -1.417 & 0.025 & 0.287 & -1.587 & -1.493 & -1.434 & 1.97 & -0.178 & 0.023 & 0.275 & -0.307 & -0.242 & -0.189 & 3.33 \\\hline & 18 & -2.458 & 0.044 & 0.504 & -2.679 & -2.554 & -2.482 & 0.03 & -1.504 & 0.054 & 0.501 & -1.651 & -1.554 & -1.505 & 1.2 & -0.278 & 0.056 & 0.482 & -0.367 & -0.299 & -0.266 & 3.05 \\ & 45 & -2.419 & 0.023 & 0.415 & -2.665 & -2.537 & -2.459 & 0.75 & -1.468 & 0.033 & 0.411 & -1.639 & -1.540 & -1.484 & 1.77 & -0.244 & 0.036 & 0.391 & -0.356 & -0.287 & -0.247 & 3.42 \\ 5 & 60 & -2.391 & 0.007 & 0.325 & -2.653 & -2.524 & -2.441 & 1.23 & -1.441 & 0.018 & 0.322 & -1.629 & -1.528 & -1.468 & 2.06 & -0.220 & 0.020 & 0.301 & -0.348 & -0.278 & -0.233 & 3.49 \\ & 72 & -2.369 & -0.004 & 0.244 & -2.641 & -2.511 & -2.426 & 1.73 & -1.422 & 0.008 & 0.242 & -1.620 & -1.518 & -1.455 & 2.41 & -0.202 & 0.010 & 0.222 & -0.341 & -0.270 & -0.222 & 3.67 \\ & 84 & -2.342 & -0.015 & 0.149 & -2.623 & -2.492 & -2.405 & 2.5 & -1.400 & -0.003 & 0.155 & -1.607 & -1.505 & -1.439 & 3.34 & -0.187 & 0.001 & 0.146 & -0.332 & -0.261 & -0.211 & 4.8 \\\hline & 18 & -2.373 & -0.017 & 0.208 & -2.659 & -2.527 & -2.439 & 1.43 & -1.423 & -0.002 & 0.216 & -1.633 & -1.529 & -1.464 & 2.43 & -0.217 & 0.007 & 0.212 & -0.355 & -0.283 & -0.238 & 4.11 \\ & 45 & -2.361 & -0.023 & 0.153 & -2.653 & -2.520 & -2.431 & 1.94 & -1.411 & -0.009 & 0.162 & -1.628 & -1.524 & -1.457 & 2.9 & -0.207 & 0.001 & 0.161 & -0.352 & -0.279 & -0.232 & 4.48 \\ 2.5 & 60 & -2.352 & -0.027 & 0.114 & -2.648 & -2.515 & -2.424 & 2.32 & -1.403 & -0.013 & 0.125 & -1.624 & -1.520 & -1.452 & 3.22 & -0.200 & -0.003 & 0.125 & -0.349 & -0.276 & -0.228 & 4.68 \\ & 72 & -2.344 & -0.030 & 0.087 & -2.643 & -2.509 & -2.418 & 2.7 & -1.397 & -0.016 & 0.099 & -1.620 & -1.516 & -1.447 & 3.56 & -0.195 & -0.006 & 0.101 & -0.346 & -0.273 & -0.224 & 4.93 \\ & 84 & -2.334 & -0.032 & 0.054 & -2.634 & -2.500 & -2.409 & 3.16 & -1.389 & -0.018 & 0.070 & -1.614 & -1.509 & -1.440 & 4.13 & -0.190 & -0.007 & 0.076 & -0.342 & -0.269 & -0.220 & 5.64 \\\hline & 18 & -2.344 & -0.038 & 0.019 & -2.650 & -2.515 & -2.423 & 2.5 & -1.395 & -0.023 & 0.040 & -1.624 & -1.519 & -1.449 & 3.4 & -0.194 & -0.011 & 0.051 & -0.349 & -0.276 & -0.226 & 4.97 \\ & 45 & -2.341 & -0.039 & 0.007 & -2.648 & -2.513 & -2.420 & 2.76 & -1.393 & -0.024 
& 0.029 & -1.623 & -1.518 & -1.447 & 3.68 & -0.192 & -0.012 & 0.041 & -0.348 & -0.274 & -0.225 & 5.23 \\ 1 & 60 & -2.338 & -0.039 & 0.000 & -2.646 & -2.511 & -2.418 & 2.97 & -1.391 & -0.024 & 0.022 & -1.622 & -1.517 & -1.446 & 3.9 & -0.190 & -0.012 & 0.036 & -0.347 & -0.274 & -0.224 & 5.42 \\ & 72 & -2.336 & -0.040 & -0.004 & -2.644 & -2.509 & -2.416 & 3.19 & -1.389 & -0.025 & 0.019 & -1.620 & -1.515 & -1.444 & 4.15 & -0.189 & -0.013 & 0.032 & -0.346 & -0.273 & -0.223 & 5.64 \\ & 84 & -2.332 & -0.040 & -0.008 & -2.640 & -2.505 & -2.412 & 3.31 & -1.386 & -0.025 & 0.015 & -1.618 & -1.512 & -1.442 & 4.45 & -0.187 & -0.013 & 0.030 & -0.345 & -0.271 & -0.221 & 6.08 \\ \enddata \end{deluxetable} \floattable \begin{deluxetable}{ c c | c c c c c c c| c c c c c c c | c c c c c c c} \tabletypesize{\tiny} \rotate \tablewidth{0pt} \tablecolumns{23} \tablecaption{Synthetic infrared magnitudes and colours computed for n=3.5\label{Tab_n35}} \tablehead{ \colhead{$\rho_{0}/10^{-12}$} & \colhead{i} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} } \startdata (gcm$^{-3}$)&($^o$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)\\ & & & & &B1 & & & & & & & B3 & & & & & & & B7 & & & \\ \hline & 18 & -3.890 & 0.199 & 0.633 & -3.511 & -3.521 & -3.659 & -4.01 & -2.825 & 0.203 & 0.663 & -2.422 & -2.463 & -2.585 & -2.04 & -1.420 & 0.239 & 0.748 & -0.926 & -0.988 & -1.139 & -0.34 \\ & 45 & -3.664 & 0.200 & 0.640 & -3.294 & -3.299 & -3.433 & -4.62 & -2.581 & 0.203 & 0.676 & -2.194 & -2.230 & -2.344 & -2.97 & -1.197 & 0.238 & 0.752 & -0.725 & -0.779 & -0.920 & -1.35 \\ 100 & 60 & -3.406 & 0.198 & 0.649 & -3.062 & -3.057 & -3.181 & -5.98 & -2.307 & 0.202 & 0.691 & -1.953 & -1.979 & -2.077 & -4.89 & -0.949 & 0.233 & 0.753 & -0.540 & -0.578 & -0.689 & -3.14 \\ & 72 & -3.069 & 0.191 & 0.660 & -2.811 & -2.783 & -2.870 & -7.17 & -1.958 & 0.194 & 0.707 & -1.708 & -1.709 & -1.759 & -6.59 & -0.658 & 0.205 & 0.737 & -0.384 & -0.400 & -0.453 & -4.43 \\ & 84 & -2.582 & 0.135 & 0.596 & -2.547 & -2.473 & -2.475 & -3.81 & -1.508 & 0.134 & 0.627 & -1.483 & -1.438 & -1.405 & -1.91 & -0.318 & 0.129 & 0.586 & -0.252 & -0.231 & -0.213 & 0.33 \\\hline & 18 & -3.726 & 0.192 & 0.622 & -3.378 & -3.378 & -3.506 & -3.64 & -2.663 & 0.193 & 0.639 & -2.293 & -2.322 & -2.439 & -1.67 & -1.286 & 0.230 & 0.731 & -0.829 & -0.880 & -1.020 & 0.2 \\ & 45 & -3.503 & 0.192 & 0.629 & -3.169 & -3.161 & -3.285 & -4.03 & -2.422 & 0.192 & 0.652 & -2.074 & -2.095 & -2.202 & -2.47 & -1.063 & 0.228 & 0.734 & -0.637 & -0.677 & -0.802 & -0.67 \\ 75 & 60 & -3.251 & 0.190 & 0.635 & -2.955 & -2.934 & -3.042 & -4.98 & -2.152 & 0.189 & 0.667 & -1.855 & -1.862 & -1.944 & -4.06 & -0.820 & 0.220 & 0.734 & -0.474 & -0.497 & -0.584 & -2.21 \\ & 72 & -2.935 & 0.175 & 0.638 & -2.747 & -2.702 & -2.765 & -5.59 & -1.823 & 0.171 & 0.678 & -1.654 & -1.636 & -1.662 & -5.24 & -0.551 & 0.183 & 0.706 & -0.346 & -0.349 & -0.378 & -3.22 \\ & 84 & -2.529 & 0.111 & 0.533 & -2.550 & -2.465 & -2.452 & -2.66 & -1.453 & 0.106 & 0.562 & -1.488 & -1.431 & -1.385 & -1.03 & -0.264 & 0.106 & 0.532 & -0.246 & -0.218 & -0.186 & 1.05 \\\hline & 18 & -3.494 & 0.183 & 0.608 & -3.185 & -3.174 & -3.290 & -3.16 & -2.451 & 0.182 & 
0.609 & -2.115 & -2.127 & -2.242 & -1.19 & -1.101 & 0.215 & 0.694 & -0.700 & -0.733 & -0.857 & 0.93 \\ & 45 & -3.275 & 0.182 & 0.613 & -2.999 & -2.972 & -3.077 & -3.16 & -2.216 & 0.179 & 0.621 & -1.922 & -1.920 & -2.015 & -1.66 & -0.879 & 0.211 & 0.697 & -0.534 & -0.550 & -0.647 & 0.36 \\ 50 & 60 & -3.035 & 0.176 & 0.615 & -2.828 & -2.781 & -2.856 & -3.55 & -1.959 & 0.171 & 0.632 & -1.751 & -1.729 & -1.785 & -2.69 & -0.648 & 0.194 & 0.692 & -0.407 & -0.407 & -0.457 & -0.78 \\ & 72 & -2.765 & 0.146 & 0.593 & -2.682 & -2.613 & -2.640 & -3.57 & -1.680 & 0.137 & 0.619 & -1.612 & -1.569 & -1.568 & -3.28 & -0.422 & 0.147 & 0.639 & -0.315 & -0.297 & -0.297 & -1.45 \\ & 84 & -2.468 & 0.080 & 0.445 & -2.558 & -2.460 & -2.427 & -1.12 & -1.416 & 0.074 & 0.465 & -1.509 & -1.438 & -1.384 & 0.08 & -0.212 & 0.076 & 0.444 & -0.250 & -0.210 & -0.167 & 2 \\\hline & 18 & -3.087 & 0.173 & 0.585 & -2.887 & -2.836 & -2.907 & -2 & -2.093 & 0.178 & 0.578 & -1.845 & -1.816 & -1.901 & -0.26 & -0.777 & 0.191 & 0.627 & -0.509 & -0.500 & -0.578 & 1.97 \\ & 45 & -2.888 & 0.162 & 0.584 & -2.779 & -2.705 & -2.741 & -1.47 & -1.888 & 0.163 & 0.576 & -1.739 & -1.689 & -1.734 & -0.09 & -0.578 & 0.172 & 0.622 & -0.416 & -0.389 & -0.423 & 1.94 \\ 25 & 60 & -2.701 & 0.133 & 0.563 & -2.695 & -2.604 & -2.603 & -1.24 & -1.698 & 0.130 & 0.555 & -1.658 & -1.594 & -1.597 & -0.34 & -0.407 & 0.132 & 0.586 & -0.351 & -0.313 & -0.307 & 1.47 \\ & 72 & -2.542 & 0.087 & 0.472 & -2.631 & -2.529 & -2.497 & -0.82 & -1.537 & 0.081 & 0.467 & -1.599 & -1.524 & -1.492 & -0.3 & -0.272 & 0.082 & 0.478 & -0.306 & -0.260 & -0.224 & 1.28 \\ & 84 & -2.394 & 0.035 & 0.305 & -2.579 & -2.464 & -2.403 & 0.72 & -1.407 & 0.034 & 0.308 & -1.555 & -1.469 & -1.411 & 1.65 & -0.174 & 0.037 & 0.299 & -0.277 & -0.222 & -0.169 & 3.34 \\\hline & 18 & -2.600 & 0.113 & 0.547 & -2.710 & -2.599 & -2.557 & -0.04 & -1.645 & 0.119 & 0.551 & -1.684 & -1.599 & -1.581 & 1.26 & -0.390 & 0.116 & 0.551 & -0.390 & -0.332 & -0.322 & 3.19 \\ & 45 & -2.501 & 0.074 & 0.485 & -2.677 & -2.558 & -2.497 & 0.62 & -1.549 & 0.080 & 0.482 & -1.653 & -1.561 & -1.524 & 1.74 & -0.301 & 0.077 & 0.472 & -0.362 & -0.299 & -0.270 & 3.49 \\ 10 & 60 & -2.434 & 0.040 & 0.383 & -2.653 & -2.529 & -2.457 & 1.05 & -1.485 & 0.048 & 0.377 & -1.631 & -1.535 & -1.486 & 1.95 & -0.241 & 0.044 & 0.364 & -0.344 & -0.278 & -0.236 & 3.5 \\ & 72 & -2.389 & 0.016 & 0.273 & -2.634 & -2.508 & -2.429 & 1.54 & -1.442 & 0.024 & 0.267 & -1.614 & -1.517 & -1.460 & 2.27 & -0.203 & 0.021 & 0.254 & -0.331 & -0.264 & -0.214 & 3.67 \\ & 84 & -2.346 & -0.007 & 0.157 & -2.615 & -2.486 & -2.402 & 2.31 & -1.403 & 0.003 & 0.159 & -1.599 & -1.500 & -1.437 & 3.15 & -0.176 & 0.004 & 0.156 & -0.321 & -0.252 & -0.199 & 4.69 \\\hline & 18 & -2.423 & 0.020 & 0.376 & -2.670 & -2.542 & -2.464 & 1.14 & -1.473 & 0.032 & 0.378 & -1.645 & -1.545 & -1.489 & 2.26 & -0.257 & 0.037 & 0.368 & -0.363 & -0.294 & -0.257 & 4.02 \\ & 45 & -2.388 & -0.001 & 0.278 & -2.658 & -2.527 & -2.443 & 1.69 & -1.439 & 0.012 & 0.280 & -1.633 & -1.531 & -1.470 & 2.72 & -0.226 & 0.018 & 0.271 & -0.354 & -0.283 & -0.240 & 4.35 \\ 5 & 60 & -2.366 & -0.013 & 0.196 & -2.648 & -2.517 & -2.429 & 2.09 & -1.418 & 0.000 & 0.199 & -1.625 & -1.522 & -1.457 & 3.01 & -0.207 & 0.007 & 0.191 & -0.347 & -0.276 & -0.229 & 4.5 \\ & 72 & -2.352 & -0.019 & 0.135 & -2.640 & -2.508 & -2.419 & 2.51 & -1.406 & -0.006 & 0.143 & -1.618 & -1.515 & -1.449 & 3.35 & -0.196 & 0.000 & 0.137 & -0.342 & -0.270 & -0.222 & 4.75 \\ & 84 & -2.337 & -0.027 & 0.073 & -2.631 & -2.498 & -2.408 & 2.96 & -1.393 & 
-0.013 & 0.086 & -1.611 & -1.508 & -1.440 & 3.9 & -0.187 & -0.005 & 0.088 & -0.338 & -0.266 & -0.216 & 5.41 \\\hline & 18 & -2.363 & -0.024 & 0.143 & -2.656 & -2.523 & -2.433 & 1.99 & -1.412 & -0.010 & 0.155 & -1.629 & -1.526 & -1.458 & 2.99 & -0.210 & 0.001 & 0.158 & -0.353 & -0.281 & -0.234 & 4.64 \\ & 45 & -2.351 & -0.030 & 0.090 & -2.650 & -2.517 & -2.426 & 2.4 & -1.402 & -0.016 & 0.104 & -1.625 & -1.521 & -1.452 & 3.36 & -0.201 & -0.005 & 0.111 & -0.350 & -0.277 & -0.229 & 4.94 \\ 2.5 & 60 & -2.345 & -0.033 & 0.060 & -2.646 & -2.512 & -2.421 & 2.71 & -1.397 & -0.019 & 0.075 & -1.622 & -1.518 & -1.448 & 3.63 & -0.195 & -0.008 & 0.083 & -0.348 & -0.274 & -0.226 & 5.13 \\ & 72 & -2.339 & -0.035 & 0.041 & -2.642 & -2.508 & -2.416 & 3.01 & -1.393 & -0.020 & 0.058 & -1.620 & -1.515 & -1.445 & 3.92 & -0.192 & -0.009 & 0.067 & -0.346 & -0.272 & -0.223 & 5.37 \\ & 84 & -2.334 & -0.036 & 0.020 & -2.638 & -2.504 & -2.411 & 3.23 & -1.388 & -0.021 & 0.040 & -1.616 & -1.511 & -1.441 & 4.27 & -0.189 & -0.010 & 0.051 & -0.343 & -0.270 & -0.221 & 5.82 \\\hline & 18 & -2.341 & -0.039 & 0.005 & -2.649 & -2.514 & -2.421 & 2.69 & -1.393 & -0.024 & 0.027 & -1.624 & -1.519 & -1.448 & 3.61 & -0.193 & -0.012 & 0.040 & -0.349 & -0.275 & -0.226 & 5.18 \\ & 45 & -2.338 & -0.040 & -0.006 & -2.646 & -2.512 & -2.419 & 2.91 & -1.391 & -0.025 & 0.017 & -1.622 & -1.517 & -1.446 & 3.84 & -0.190 & -0.013 & 0.032 & -0.348 & -0.274 & -0.224 & 5.39 \\ 1 & 60 & -2.337 & -0.040 & -0.011 & -2.645 & -2.511 & -2.417 & 3.08 & -1.390 & -0.025 & 0.012 & -1.621 & -1.516 & -1.445 & 4.03 & -0.190 & -0.013 & 0.027 & -0.347 & -0.273 & -0.223 & 5.55 \\ & 72 & -2.335 & -0.040 & -0.014 & -2.644 & -2.509 & -2.416 & 3.22 & -1.388 & -0.025 & 0.010 & -1.620 & -1.515 & -1.444 & 4.22 & -0.189 & -0.013 & 0.025 & -0.346 & -0.273 & -0.223 & 5.74 \\ & 84 & -2.333 & -0.041 & -0.016 & -2.642 & -2.507 & -2.414 & 3.26 & -1.387 & -0.026 & 0.008 & -1.619 & -1.513 & -1.442 & 4.34 & -0.187 & -0.014 & 0.023 & -0.345 & -0.272 & -0.222 & 6.03 \\ \enddata \end{deluxetable} \floattable \begin{deluxetable}{ c c | c c c c c c c| c c c c c c c | c c c c c c c} \tabletypesize{\tiny} \rotate \tablewidth{0pt} \tablecolumns{23} \tablecaption{Synthetic infrared magnitudes and colours computed for n=4.\label{Tab_n40}} \tablehead{ \colhead{$\rho_{0}/10^{-12}$} & \colhead{i} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} & \colhead{W1} & \colhead{W12} & \colhead{W23} & \colhead{J} & \colhead{H} & \colhead{K} & \colhead{EW$_{H\alpha}$} } \startdata (gcm$^{-3}$)&($^o$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)&(mag)&(mag)&(mag)&(mag)&(mag)&(mag)&($\AA$)\\ & & & & &B1 & & & & & & & B3 & & & & & & & B7 & & & \\ \hline & 18 & -3.601 & 0.157 & 0.520 & -3.341 & -3.327 & -3.424 & -1.85 & -2.555 & 0.154 & 0.523 & -2.274 & -2.289 & -2.376 & -0.11 & -1.220 & 0.196 & 0.621 & -0.832 & -0.875 & -0.991 & 1.59 \\ & 45 & -3.365 & 0.156 & 0.525 & -3.120 & -3.100 & -3.192 & -2.03 & -2.301 & 0.152 & 0.535 & -2.042 & -2.051 & -2.127 & -0.53 & -0.984 & 0.193 & 0.621 & -0.626 & -0.659 & -0.762 & 1.16 \\ 100 & 60 & -3.103 & 0.153 & 0.530 & -2.896 & -2.862 & -2.939 & -2.52 & -2.021 & 0.148 & 0.547 & -1.813 & -1.807 & -1.860 & -1.41 & -0.728 & 0.183 & 0.617 & -0.453 & -0.468 & -0.533 & 0.35 \\ & 72 & -2.790 & 0.132 & 0.520 & -2.700 & -2.640 & -2.670 & -2.61 & -1.698 & 
0.123 & 0.547 & -1.623 & -1.592 & -1.590 & -1.85 & -0.462 & 0.140 & 0.574 & -0.330 & -0.324 & -0.333 & -0.04 \\ & 84 & -2.464 & 0.070 & 0.385 & -2.555 & -2.460 & -2.427 & -0.33 & -1.406 & 0.064 & 0.407 & -1.501 & -1.433 & -1.377 & 0.89 & -0.221 & 0.071 & 0.392 & -0.251 & -0.215 & -0.174 & 2.88 \\\hline & 18 & -3.465 & 0.151 & 0.511 & -3.232 & -3.210 & -3.298 & -1.61 & -2.433 & 0.149 & 0.507 & -2.172 & -2.177 & -2.262 & 0.12 & -1.106 & 0.186 & 0.596 & -0.752 & -0.785 & -0.891 & 1.93 \\ & 45 & -3.233 & 0.150 & 0.516 & -3.019 & -2.988 & -3.070 & -1.67 & -2.184 & 0.145 & 0.517 & -1.952 & -1.947 & -2.021 & -0.19 & -0.871 & 0.182 & 0.596 & -0.559 & -0.578 & -0.666 & 1.62 \\ 75 & 60 & -2.977 & 0.145 & 0.518 & -2.822 & -2.772 & -2.830 & -1.94 & -1.912 & 0.139 & 0.527 & -1.752 & -1.729 & -1.770 & -0.84 & -0.625 & 0.167 & 0.589 & -0.411 & -0.411 & -0.456 & 0.98 \\ & 72 & -2.698 & 0.114 & 0.489 & -2.667 & -2.594 & -2.605 & -1.79 & -1.626 & 0.104 & 0.508 & -1.604 & -1.558 & -1.544 & -1.03 & -0.389 & 0.118 & 0.527 & -0.312 & -0.294 & -0.287 & 0.75 \\ & 84 & -2.435 & 0.054 & 0.339 & -2.561 & -2.459 & -2.417 & 0.28 & -1.394 & 0.049 & 0.357 & -1.515 & -1.440 & -1.381 & 1.32 & -0.196 & 0.055 & 0.343 & -0.255 & -0.213 & -0.166 & 3.26 \\\hline & 18 & -3.273 & 0.144 & 0.500 & -3.074 & -3.042 & -3.120 & -1.26 & -2.268 & 0.144 & 0.489 & -2.028 & -2.020 & -2.106 & 0.44 & -0.950 & 0.172 & 0.560 & -0.645 & -0.661 & -0.755 & 2.37 \\ & 45 & -3.046 & 0.142 & 0.502 & -2.888 & -2.838 & -2.900 & -1.08 & -2.028 & 0.139 & 0.495 & -1.839 & -1.813 & -1.877 & 0.36 & -0.718 & 0.165 & 0.559 & -0.482 & -0.479 & -0.542 & 2.26 \\ 50 & 60 & -2.807 & 0.129 & 0.498 & -2.740 & -2.668 & -2.694 & -1.06 & -1.778 & 0.123 & 0.496 & -1.692 & -1.646 & -1.666 & 0.02 & -0.494 & 0.139 & 0.542 & -0.371 & -0.351 & -0.367 & 1.85 \\ & 72 & -2.588 & 0.087 & 0.433 & -2.636 & -2.547 & -2.531 & -0.69 & -1.555 & 0.079 & 0.438 & -1.594 & -1.531 & -1.504 & 0.08 & -0.308 & 0.088 & 0.452 & -0.302 & -0.268 & -0.243 & 1.77 \\ & 84 & -2.401 & 0.034 & 0.278 & -2.571 & -2.461 & -2.406 & 0.98 & -1.392 & 0.032 & 0.289 & -1.538 & -1.456 & -1.396 & 1.9 & -0.176 & 0.037 & 0.279 & -0.266 & -0.216 & -0.165 & 3.73 \\\hline & 18 & -2.937 & 0.137 & 0.480 & -2.834 & -2.767 & -2.805 & -0.44 & -1.971 & 0.144 & 0.476 & -1.807 & -1.763 & -1.821 & 1.06 & -0.676 & 0.155 & 0.509 & -0.484 & -0.464 & -0.519 & 3.07 \\ & 45 & -2.734 & 0.122 & 0.475 & -2.732 & -2.641 & -2.641 & 0.07 & -1.766 & 0.125 & 0.468 & -1.707 & -1.642 & -1.659 & 1.36 & -0.477 & 0.129 & 0.496 & -0.398 & -0.357 & -0.369 & 3.24 \\ 25 & 60 & -2.568 & 0.084 & 0.431 & -2.664 & -2.560 & -2.526 & 0.4 & -1.599 & 0.085 & 0.422 & -1.642 & -1.565 & -1.546 & 1.41 & -0.327 & 0.086 & 0.437 & -0.343 & -0.295 & -0.272 & 3.14 \\ & 72 & -2.454 & 0.045 & 0.320 & -2.621 & -2.508 & -2.452 & 0.88 & -1.485 & 0.044 & 0.311 & -1.601 & -1.517 & -1.474 & 1.68 & -0.229 & 0.044 & 0.316 & -0.311 & -0.257 & -0.213 & 3.25 \\ & 84 & -2.362 & 0.007 & 0.186 & -2.592 & -2.471 & -2.398 & 1.88 & -1.401 & 0.012 & 0.189 & -1.575 & -1.483 & -1.423 & 2.81 & -0.167 & 0.015 & 0.188 & -0.294 & -0.234 & -0.179 & 4.45 \\\hline & 18 & -2.543 & 0.082 & 0.447 & -2.697 & -2.580 & -2.527 & 0.87 & -1.590 & 0.090 & 0.454 & -1.671 & -1.582 & -1.552 & 2.08 & -0.354 & 0.092 & 0.454 & -0.385 & -0.323 & -0.307 & 3.91 \\ & 45 & -2.449 & 0.041 & 0.371 & -2.666 & -2.542 & -2.471 & 1.42 & -1.501 & 0.051 & 0.372 & -1.644 & -1.547 & -1.500 & 2.51 & -0.270 & 0.052 & 0.363 & -0.359 & -0.292 & -0.257 & 4.22 \\ 10 & 60 & -2.396 & 0.013 & 0.263 & -2.647 & -2.520 & -2.440 & 1.82 & 
-1.451 & 0.023 & 0.263 & -1.626 & -1.527 & -1.470 & 2.77 & -0.223 & 0.024 & 0.253 & -0.344 & -0.276 & -0.231 & 4.33 \\ & 72 & -2.366 & -0.004 & 0.175 & -2.634 & -2.505 & -2.421 & 2.26 & -1.422 & 0.007 & 0.177 & -1.615 & -1.515 & -1.453 & 3.1 & -0.197 & 0.009 & 0.168 & -0.335 & -0.266 & -0.216 & 4.56 \\ & 84 & -2.338 & -0.020 & 0.091 & -2.623 & -2.492 & -2.404 & 2.73 & -1.396 & -0.008 & 0.101 & -1.605 & -1.504 & -1.438 & 3.68 & -0.179 & -0.003 & 0.102 & -0.328 & -0.259 & -0.206 & 5.23 \\\hline & 18 & -2.404 & 0.006 & 0.296 & -2.666 & -2.536 & -2.454 & 1.67 & -1.452 & 0.019 & 0.301 & -1.639 & -1.538 & -1.478 & 2.75 & -0.244 & 0.029 & 0.297 & -0.361 & -0.291 & -0.252 & 4.47 \\ & 45 & -2.371 & -0.013 & 0.195 & -2.654 & -2.522 & -2.434 & 2.14 & -1.422 & 0.000 & 0.202 & -1.629 & -1.526 & -1.461 & 3.15 & -0.216 & 0.009 & 0.201 & -0.353 & -0.281 & -0.235 & 4.77 \\ 5 & 60 & -2.355 & -0.022 & 0.125 & -2.646 & -2.513 & -2.424 & 2.49 & -1.407 & -0.009 & 0.134 & -1.623 & -1.519 & -1.452 & 3.42 & -0.202 & 0.000 & 0.135 & -0.347 & -0.275 & -0.227 & 4.94 \\ & 72 & -2.345 & -0.027 & 0.081 & -2.640 & -2.507 & -2.417 & 2.84 & -1.399 & -0.013 & 0.093 & -1.618 & -1.514 & -1.446 & 3.73 & -0.194 & -0.004 & 0.097 & -0.344 & -0.271 & -0.222 & 5.19 \\ & 84 & -2.335 & -0.032 & 0.038 & -2.635 & -2.501 & -2.410 & 3.08 & -1.391 & -0.018 & 0.054 & -1.614 & -1.510 & -1.441 & 4.08 & -0.188 & -0.008 & 0.062 & -0.341 & -0.268 & -0.218 & 5.63 \\\hline & 18 & -2.357 & -0.028 & 0.106 & -2.654 & -2.521 & -2.430 & 2.27 & -1.408 & -0.013 & 0.120 & -1.629 & -1.524 & -1.456 & 3.26 & -0.206 & -0.003 & 0.127 & -0.353 & -0.280 & -0.232 & 4.9 \\ & 45 & -2.347 & -0.034 & 0.056 & -2.649 & -2.515 & -2.423 & 2.63 & -1.398 & -0.019 & 0.073 & -1.624 & -1.519 & -1.450 & 3.58 & -0.197 & -0.007 & 0.082 & -0.350 & -0.276 & -0.228 & 5.16 \\ 2.5 & 60 & -2.341 & -0.036 & 0.032 & -2.645 & -2.511 & -2.419 & 2.89 & -1.394 & -0.021 & 0.050 & -1.622 & -1.517 & -1.447 & 3.83 & -0.193 & -0.010 & 0.062 & -0.348 & -0.274 & -0.225 & 5.34 \\ & 72 & -2.337 & -0.037 & 0.019 & -2.643 & -2.508 & -2.416 & 3.13 & -1.391 & -0.022 & 0.039 & -1.620 & -1.515 & -1.444 & 4.07 & -0.191 & -0.011 & 0.051 & -0.346 & -0.273 & -0.223 & 5.56 \\ & 84 & -2.334 & -0.038 & 0.004 & -2.640 & -2.505 & -2.413 & 3.22 & -1.388 & -0.023 & 0.025 & -1.618 & -1.513 & -1.442 & 4.24 & -0.189 & -0.011 & 0.039 & -0.344 & -0.271 & -0.221 & 5.87 \\\hline & 18 & -2.341 & -0.040 & -0.003 & -2.649 & -2.514 & -2.421 & 2.79 & -1.393 & -0.025 & 0.020 & -1.624 & -1.519 & -1.448 & 3.72 & -0.192 & -0.012 & 0.034 & -0.349 & -0.276 & -0.226 & 5.29 \\ & 45 & -2.338 & -0.040 & -0.012 & -2.646 & -2.512 & -2.418 & 2.99 & -1.390 & -0.025 & 0.011 & -1.622 & -1.517 & -1.446 & 3.93 & -0.190 & -0.013 & 0.026 & -0.348 & -0.274 & -0.224 & 5.47 \\ 1 & 60 & -2.336 & -0.041 & -0.016 & -2.645 & -2.510 & -2.417 & 3.13 & -1.389 & -0.026 & 0.007 & -1.621 & -1.516 & -1.445 & 4.09 & -0.189 & -0.014 & 0.023 & -0.347 & -0.273 & -0.223 & 5.61 \\ & 72 & -2.335 & -0.041 & -0.018 & -2.644 & -2.509 & -2.416 & 3.21 & -1.388 & -0.026 & 0.006 & -1.621 & -1.515 & -1.444 & 4.23 & -0.189 & -0.014 & 0.022 & -0.347 & -0.273 & -0.223 & 5.77 \\ & 84 & -2.333 & -0.041 & -0.020 & -2.642 & -2.508 & -2.414 & 3.23 & -1.387 & -0.026 & 0.004 & -1.619 & -1.514 & -1.443 & 4.27 & -0.188 & -0.014 & 0.020 & -0.346 & -0.272 & -0.222 & 5.94 \\ \enddata \end{deluxetable} \begin{table} \caption{Disk Masses}\label{Tab_Masses} \centering \begin{tabular}{| c c | c c c|} \hline\hline n &$\rho_0/10^{-12}$&{\small B1 (13.2M$\odot$)} & {\small B3 (7.6M$\odot$)} &
{\small B7 (4.2M$\odot$)} \\ &g\,cm$^{-3}$& \multicolumn{3}{|c|}{M$_d$[M$_{*}$]}\\\hline 2 & 100 & 2.74$\times$10$^{-8}$ & 1.95$\times$10$^{-8}$ & 9.97$\times$10$^{-9}$\\ 2 & 75 & 2.05$\times$10$^{-8}$ & 1.46$\times$10$^{-8}$ & 7.47$\times$10$^{-9}$\\ 2 & 50 & 1.37$\times$10$^{-8}$ & 9.74$\times$10$^{-9}$ & 4.98$\times$10$^{-9}$\\ 2 & 25 & 6.84$\times$10$^{-9}$ & 4.87$\times$10$^{-9}$ & 2.49$\times$10$^{-9}$\\ 2 & 10 & 2.74$\times$10$^{-9}$ & 1.95$\times$10$^{-9}$ & 9.97$\times$10$^{-10}$\\ 2 & 5 & 1.37$\times$10$^{-9}$ & 9.74$\times$10$^{-10}$ & 4.98$\times$10$^{-10}$\\ 2 & 2.5 & 6.84$\times$10$^{-10}$ & 4.87$\times$10$^{-10}$ & 2.49$\times$10$^{-10}$\\ 2 & 1 & 2.74$\times$10$^{-10}$ & 1.95$\times$10$^{-10}$ & 9.97$\times$10$^{-11}$\\\hline 2.5 & 100 & 5.71$\times$10$^{-9}$ & 4.06$\times$10$^{-9}$ & 2.08$\times$10$^{-9}$\\ 2.5 & 75 & 4.28$\times$10$^{-9}$ & 3.05$\times$10$^{-9}$ & 1.56$\times$10$^{-9}$\\ 2.5 & 50 & 2.85$\times$10$^{-9}$ & 2.03$\times$10$^{-9}$ & 1.04$\times$10$^{-9}$\\ 2.5 & 25 & 1.43$\times$10$^{-9}$ & 1.02$\times$10$^{-9}$ & 5.19$\times$10$^{-10}$\\ 2.5 & 10 & 5.71$\times$10$^{-10}$ & 4.06$\times$10$^{-10}$ & 2.08$\times$10$^{-10}$\\ 2.5 & 5 & 2.85$\times$10$^{-10}$ & 2.03$\times$10$^{-10}$ & 1.04$\times$10$^{-10}$\\ 2.5 & 2.5 & 1.43$\times$10$^{-10}$ & 1.02$\times$10$^{-10}$ & 5.19$\times$10$^{-11}$\\ 2.5 & 1 & 5.71$\times$10$^{-11}$ & 4.06$\times$10$^{-11}$ & 2.08$\times$10$^{-11}$\\\hline 3 & 100 & 1.41$\times$10$^{-9}$ & 1.01$\times$10$^{-9}$ & 5.15$\times$10$^{-10}$\\ 3 & 75 & 1.06$\times$10$^{-9}$ & 7.55$\times$10$^{-10}$ & 3.86$\times$10$^{-10}$\\ 3 & 50 & 7.07$\times$10$^{-10}$ & 5.03$\times$10$^{-10}$ & 2.57$\times$10$^{-10}$\\ 3 & 25 & 3.54$\times$10$^{-10}$ & 2.52$\times$10$^{-10}$ & 1.29$\times$10$^{-10}$\\ 3 & 10 & 1.41$\times$10$^{-10}$ & 1.01$\times$10$^{-10}$ & 5.15$\times$10$^{-11}$\\ 3 & 5 & 7.07$\times$10$^{-11}$ & 5.03$\times$10$^{-11}$ & 2.57$\times$10$^{-11}$\\ 3 & 2.5 & 3.54$\times$10$^{-11}$ & 2.52$\times$10$^{-11}$ & 1.29$\times$10$^{-11}$\\ 3 & 1 & 1.41$\times$10$^{-11}$ & 1.01$\times$10$^{-11}$ & 5.15$\times$10$^{-12}$\\\hline 3.5 & 100 & 4.56$\times$10$^{-10}$ & 3.24$\times$10$^{-10}$ & 1.66$\times$10$^{-10}$\\ 3.5 & 75 & 3.42$\times$10$^{-10}$ & 2.43$\times$10$^{-10}$ & 1.24$\times$10$^{-10}$\\ 3.5 & 50 & 2.28$\times$10$^{-10}$ & 1.62$\times$10$^{-10}$ & 8.29$\times$10$^{-11}$\\ 3.5 & 25 & 1.14$\times$10$^{-10}$ & 8.11$\times$10$^{-11}$ & 4.15$\times$10$^{-11}$\\ 3.5 & 10 & 4.56$\times$10$^{-11}$ & 3.24$\times$10$^{-11}$ & 1.66$\times$10$^{-11}$\\ 3.5 & 5 & 2.28$\times$10$^{-11}$ & 1.62$\times$10$^{-11}$ & 8.29$\times$10$^{-12}$\\ 3.5 & 2.5 & 1.14$\times$10$^{-11}$ & 8.11$\times$10$^{-12}$ & 4.15$\times$10$^{-12}$\\ 3.5 & 1 & 4.56$\times$10$^{-12}$ & 3.24$\times$10$^{-12}$ & 1.66$\times$10$^{-12}$\\\hline 4 & 100 & 2.00$\times$10$^{-10}$ & 1.42$\times$10$^{-10}$ & 7.28$\times$10$^{-11}$\\ 4 & 75 & 1.50$\times$10$^{-10}$ & 1.07$\times$10$^{-10}$ & 5.46$\times$10$^{-11}$\\ 4 & 50 & 1.00$\times$10$^{-10}$ & 7.12$\times$10$^{-11}$ & 3.64$\times$10$^{-11}$\\ 4 & 25 & 5.00$\times$10$^{-11}$ & 3.56$\times$10$^{-11}$ & 1.82$\times$10$^{-11}$\\ 4 & 10 & 2.00$\times$10$^{-11}$ & 1.42$\times$10$^{-11}$ & 7.28$\times$10$^{-12}$\\ 4 & 5 & 1.00$\times$10$^{-11}$ & 7.12$\times$10$^{-12}$ & 3.64$\times$10$^{-12}$\\ 4 & 2.5 & 5.00$\times$10$^{-12}$ & 3.56$\times$10$^{-12}$ & 1.82$\times$10$^{-12}$\\ 4 & 1 & 2.00$\times$10$^{-12}$ & 1.42$\times$10$^{-12}$ & 7.28$\times$10$^{-13}$\\ \hline\hline \end{tabular} \end{table} \subsection{Allowed $n$ and $\rho_0$ 
for Be stars} From Figure~\ref{5Clusters_a}, we see that the late-type Be stars and candidates have W1-W2$<$0.25, whereas the mid- and early B-type objects have W1-W2$<$0.35. We now explore our models to see which combinations of $\rho_0$ and $n$ give W1-W2 colours compatible with these limits. Figure~\ref{rho_n} shows, for an inclination of 60$^o$, which combinations of the disk density parameters $\rho_0$ and $n$ are consistent with the above limits. The blue, magenta and black lines describe the limits for early, mid, and late spectral types, respectively. There is a clear trend with spectral type: larger base densities $\rho_0$ are allowed at smaller $n$ for earlier spectral types, indicating that earlier spectral types can have more massive disks. Changing the inclination angle modifies these limits somewhat, particularly at larger inclination angles, where the disk is projected against the stellar surface, but the overall trend with spectral type is preserved. The limits found here agree with the disk density {\it forbidden zone} described by \citet{Vieira2017} (shown in the figure as a gray line), obtained for a large sample of Be stars with different inclination angles. Finally, the full black squares indicate the combinations of parameters that lead to W1-W2$<$0.05, thus appearing as normal B stars. \begin{figure} \figurenum{11} \gridline{\fig{./Rho_n_allowed_23Jun2017.eps}{0.465\textwidth}{} } \caption{$\rho_0$ versus $n$ plot for our disk models for an intermediate inclination angle of 60 degrees. Small gray full circles correspond to models that do not describe the observed W1-W2 colours for any spectral type, blue triangles show the combinations of parameters that describe early-type Be stars, magenta open circles correspond to intermediate-type stars and black open squares correspond to late-type Be stars. The black full squares correspond to models with W1-W2$<$0.05. The blue, magenta and black lines describe the limits for each spectral type. The thin gray line indicates the limit of the {\it forbidden zone} as described by \citet{Vieira2017}. \label{rho_n}} \end{figure} \begin{figure} \figurenum{12} \gridline{\fig{./CMD_WISE_Be_3_28Nov2017.eps}{0.465\textwidth}{}} \caption{Emission-line stars along with synthetic points in the WISE CMD obtained with {\sc bedisk}/{\sc beray}. Full coloured symbols indicate emission-line stars; those surrounded by a black square are classical Be stars. The black line separates active Be stars (with W1-W2$\geq$0.05) from quiescent Be stars. Circles, hexagons, triangles, crosses and diamonds indicate models with different combinations of $n$ and $\rho_0$ computed with {\sc bedisk}/{\sc beray}. The gray symbols correspond to combinations of $n$ and $\rho_0$ that are unlikely to be found in Be stars, following \citet{Vieira2017}. Families of models with different $n$ for early B stars are indicated in the plot. \label{CMD_Models}} \end{figure} In Figure~\ref{CMD_Models}, we plot the known Be stars (squares with a black central point) for the five open clusters, together with the synthetic colours and magnitudes in the W1 versus W1-W2 colour-magnitude diagram (gray symbols and coloured circles). Once more, the different colours indicate different J magnitude ranges. The gray symbols correspond to models with combinations of $n$ and $\rho_{0}$ that are outside the usual range of parameters found when fitting Be star disk models, called the {\it forbidden region} by \citet{Vieira2017}.
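The selection shown in Figure~\ref{rho_n} amounts to a simple scan of the $(\rho_0, n)$ model grid against these colour limits. The following minimal Python sketch illustrates the logic; the function \texttt{model\_colour} is a hypothetical placeholder for the W1-W2 colour interpolated from the {\sc bedisk}/{\sc beray} model grid at $i=60^o$, not the actual model output.
\begin{verbatim}
import numpy as np

def model_colour(rho0, n):
    # hypothetical stand-in: W1-W2 grows with rho0 and drops with n
    return 0.5 * np.log10(rho0) / (n - 1.0)

limits = {"early": 0.35, "mid": 0.35, "late": 0.25}   # from the text
rho0_grid = [1, 2.5, 5, 10, 25, 50, 75, 100]          # 1e-12 g/cm^3
n_grid = [2.0, 2.5, 3.0, 3.5, 4.0]

allowed = {sp: [(r, n) for r in rho0_grid for n in n_grid
                if model_colour(r, n) < lim]
           for sp, lim in limits.items()}
normal_like = [(r, n) for r in rho0_grid for n in n_grid
               if model_colour(r, n) < 0.05]  # appear as normal B stars
\end{verbatim}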
Circles and hexagons correspond to models with early-type central stars, triangles to mid B central stars, and crosses and diamonds to late B-type stars. The different shapes help to distinguish models with different $n$, and different symbol sizes correspond to different viewing angles. We see that there are no Be stars in the region of the CMD occupied by the forbidden models, indicating that these kinds of more massive disks do not correspond to MS Be stars. The computed models nicely cover the region occupied by Be stars and correctly describe their corresponding $J$ magnitudes. The models with the smallest disks (lowest $\rho_{0}$) are located in the region of {\it normal} B stars or very close to it. With increasing $\rho_{0}$, the effect of inclination becomes more prominent, as discussed in the next section. \subsection{Effects of the inclination angle} In Figure~\ref{CMD_Models}, identical disk models viewed at inclination angles of 18$^o$ (smallest symbols), 45$^o$, 60$^o$, 72$^o$ and 84$^o$ (largest symbols) are connected with a continuous line. For a fixed stellar mass, the models with the smallest $n$ have more massive disks for the same $\rho_0$ and have redder colours. Interestingly, the analysis of near-IR photometry of Be stars could be useful to study whether there is a preferential viewing angle in stellar clusters, as has already been found in certain clusters \citep{Corsaro2017}. According to our results, we expect that clusters with preferred pole-on inclinations and Be stars with developed disks (W1-W2$\geq$0.15) have W1 magnitudes around one magnitude brighter than clusters with preferential equator-on inclinations. \subsection{Mass of the disk and its relation to the H$\alpha$ equivalent width} By integrating our model disk densities over the volume of the disk to a radial distance of 50 stellar radii, we obtain an estimate of the mass of the disk (a minimal numerical sketch of this integration is given below). Table~\ref{Tab_Masses} provides the disk mass for each model in units of the stellar mass (M$_{*}$). The assumed stellar mass for each spectral type is indicated at the top of each column, so conversion to solar masses is straightforward. Figure~\ref{EW_M} shows the H$\alpha$ equivalent width (EW) versus the corresponding model disk mass (in M$_{\odot}$) for different spectral types and disk density parameters, all for an inclination angle of 60$^o$. Models with other inclination angles are indicated with dots, for ease of comparison. Small crosses correspond to combinations of parameters that belong to the disk density forbidden region. The colour coding in Figure~\ref{EW_M} is the same as in Figure~\ref{rho_n}: black squares correspond to late B stars, magenta circles to intermediate B stars, and blue triangles to early B stars. The point size is proportional to the value of $n$; the smallest symbols correspond to $n$=2 and the largest to $n$=4. \begin{figure} \figurenum{13} \gridline{\fig{./EWversusMd_06Oct2017.eps}{0.465\textwidth}{} } \caption{H$\alpha$ equivalent width versus disk mass. Small dots indicate models for all five inclinations given in this article. All bigger symbols correspond to an inclination angle of 60$^o$. Blue triangles show the values for early B-type models, magenta circles for intermediate mass, and black squares for late B-type stars. Open symbols indicate models with W1-W2$<$0.05 and small crosses indicate models that produce large colours not observed for cluster Be stars.
Gray symbols indicate those models with W1-W2$\geq$0.15, and the coloured curves are second-order polynomial fits to these points. The gray horizontal line separates emission (negative values) and absorption (positive values) H$\alpha$ lines. The green symbols identify the three stars with available H$\alpha$ spectroscopy.\label{EW_M}} \end{figure} We find that the early-type models can have more massive disks and produce larger H$\alpha$ equivalent widths than the later B-type models, in agreement with \citet{Arcos2017}. Open symbols indicate objects with W1-W2$<$0.05, the region typically occupied by normal B stars. Interestingly, all these models have positive EW, indicating that these objects do not have a significant disk emission contribution to the photospheric H$\alpha$ line. Full symbols correspond to models with W1-W2 below the forbidden-region limit and W1-W2$\geq$0.05. It is remarkable that for a fixed inclination angle and spectral type of the central star, there is a one-to-one relation between EW and the mass of the disk, particularly for those models with a large colour excess. We discuss this later in the context of stars with stable, developed disks. The one-to-one relation for the three spectral types and an inclination angle of 60$^o$ is represented by coloured lines (second-order polynomial fits) in Figure~\ref{EW_M}. We see that some of the models with large $n$ (the largest symbols) and small disk masses are very close to the EW=0 line. Moreover, for late-type Be stars, many of the models with W1-W2$\geq$0.05 have a positive H$\alpha$ EW. Interestingly, for late-type B stars, many objects with a small disk mass (smaller than 10$^{-9.5}$M$_{\sun}$ for an inclination of 60$^o$) show a clear IR excess, indicating the presence of a circumstellar disk, but no significant H$\alpha$ emission: such objects could easily elude the Be star classification. \begin{figure*} \figurenum{14} \gridline{\fig{./CMD_WISE_allBe_loop_28Nov2017.eps}{0.465\textwidth}{(a)} \fig{./CC_WISE_early_loop_01Oct2017.eps}{0.465\textwidth}{(b)} } \caption{Be stars shown together with the modelled loops predicted during disk formation and dissipation phases in the colour-magnitude (a) and colour-colour (b) diagrams. The colour coding is the same as in previous plots. We see in (a) that, except for systems with a large inclination angle, the direction of the loop is clockwise. Our modelling shows that active Be stars with W1-W2$\geq$0.15 are likely to have stable developed disks. The loops shown correspond to the three different spectral types and three inclination angles. Panel (b) shows only early- and mid-type B stars in the colour-colour diagram, and the predicted loop for mid B-type stars with an inclination angle of 60$^o$. In this case, the direction of the loop is counterclockwise. The black crosses indicate all non-Be objects. \label{CMD_loop}} \end{figure*} \subsection{Disk growth and dissipation phases} As suggested by \citet{Vieira2017}, disk growth and dissipation phases can be represented by different combinations of the parameters $n$ and $\rho_0$. During these formation/dissipation phases, the star is expected to describe a loop in the CMD \citep{Dougherty1994,deWit2006,Sigut2013}.
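Since the evolutionary tracks discussed below are parameterized by $(\rho_0, n)$, it is useful to recall how the disk masses of Table~\ref{Tab_Masses} follow from these parameters. The minimal Python sketch below performs the integration to 50 stellar radii described in the previous subsection, assuming a power-law equatorial density with a Gaussian vertical profile and a flaring scale height; this analytic form, together with the stellar radius and scale height used in the example, is an illustrative assumption ({\sc bedisk} solves the vertical disk structure self-consistently), so the resulting numbers are only indicative.
\begin{verbatim}
import numpy as np

def disk_mass(rho0, n, Rstar, H0, rmax=50.0, nr=2000):
    # M_d = integral of 2*pi*r*Sigma(r) dr from R* to rmax*R*, with
    # Sigma(r) = rho0 (R*/r)^n sqrt(2 pi) H(r) the surface density of
    # an assumed Gaussian vertical profile, H(r) = H0 (r/R*)^1.5
    r = np.linspace(1.0, rmax, nr) * Rstar            # cm
    H = H0 * (r / Rstar) ** 1.5                       # cm
    sigma = rho0 * (Rstar / r) ** n * np.sqrt(2.0 * np.pi) * H
    return np.trapz(2.0 * np.pi * r * sigma, r)       # g

Rsun, Msun = 6.96e10, 1.989e33
# illustrative B3-like values: rho0 = 50e-12 g/cm^3, n = 3
Md = disk_mass(rho0=50e-12, n=3.0, Rstar=6.0 * Rsun, H0=0.04 * 6.0 * Rsun)
print("M_d ~ %.1e Msun" % (Md / Msun))
\end{verbatim}
Note that $M_d$ scales linearly with $\rho_0$, as can be verified directly in Table~\ref{Tab_Masses}.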
Figure~\ref{CMD_loop} shows our confirmed Be stars together with models of disk growth and dissipation phases, similar to those presented by \citet{Vieira2017}: first the disk forms and grows in size, represented by a decrease in $n$ from 4 to 3.5 at constant $\rho_0$, and subsequently dissipates, represented by decreasing $\rho_0$ and $n$. For each stellar model, two different values of $\rho_0$ at the beginning of the disk growth were considered, in an attempt to mimic two different disk mass loss rates. Models represented by gray circles correspond to logarithmic mass loss rates (in M$_{\sun}$\,yr$^{-1}$) of -9.1 for the B1 star, -10.1 for the B3 star, and -11.1 for the B7 star, as in \citet{Vieira2017}. Models indicated with black diamonds represent mass loss rates an order of magnitude larger and are consistent with the mass loss rates predicted by \citet{Granada2013a} from stellar angular momentum loss rates for critically rotating B-type stellar models. The models joined with dashed and dotted lines correspond to disk evolutionary tracks through a formation/dissipation phase for three different inclination angles, 18$^{\circ}$, 60$^{\circ}$ and 84$^{\circ}$. The size of the points increases from pole-on to equator-on viewing angles. For most of the models, once the disk appears, the star very quickly reaches a large colour excess and becomes brighter in the W1 band. We see that for the confirmed early and mid Be stars, both sets of disk evolutionary tracks qualitatively describe the location of the stars having an excess in WISE colours. The observed late Be stars are better described by the disks with the larger density presented here. The large number of Be candidates in this spectral type range that have not yet been confirmed (see Figure~\ref{5Clusters_a} (b)) suggests that smaller disks may be frequent but hard to detect, although this conclusion requires further investigation. We can see in Figure~\ref{CMD_loop} that W1-W2$>$0.15 is predicted for models with combinations of $\rho_0$ and $n$ that describe disks being continuously fed by the central star. Therefore, we propose that most Be stars with W1-W2$>$0.15 (observed in particular among mid and early Be stars) host developed, stable disks. Most Be stars with W1-W2$\leq$0.05 are likely in a diskless phase, and objects with W1-W2 between 0.05 and 0.15 are either stars with small mass loss rates or objects with dissipating disks. Following this scheme, and as mentioned before, 53.1$\%$ of the early Be stars in our sample and 39.4$\%$ of the mid Be stars are in a quiescent or diskless phase. Among active stars, 14.3$\%$ of all early Be stars and 9.1$\%$ of all mid Be stars have a dissipating or small disk, and 32.7$\%$ of early Be stars and 51.5$\%$ of mid Be stars have a developed disk. For the overall sample of 80 early and mid Be stars, 46.3$\%$ are in a diskless phase, 12.5$\%$ have a small or dissipating disk, and 41.3$\%$ have a developed stable disk. This last value is consistent with the fraction (37$\%$) of Be stars with long-term photometric variability \citep{Labadie2017}. For objects with W1-W2$\geq$0.15, our modelling shows that the allowed combinations of $\rho_0$ and $n$ that correspond to stable, developed disks lead to a tight relation between the corresponding H$\alpha$ equivalent width and the mass of the disk when the inclination angle and the spectral type of the star are fixed.
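This tight relation can be inverted to estimate a disk mass from a measured equivalent width. The minimal Python sketch below illustrates the procedure with a second-order polynomial fit, as used for the coloured curves in Figure~\ref{EW_M}; the $({\rm EW}, \log M_d)$ pairs are hypothetical placeholders standing in for the mid-B, 60$^o$ models, not the actual grid values.
\begin{verbatim}
import numpy as np

# hypothetical (EW, log10 M_d/Msun) pairs standing in for the
# mid-B, i = 60 deg models of Figure 13 (not the actual grid)
ew   = np.array([ -2.0,  -8.0, -15.0, -24.0, -35.0])   # Angstrom
logm = np.array([-10.0,  -9.2,  -8.6,  -8.0,  -7.4])

coeff = np.polyfit(ew, logm, deg=2)   # second-order polynomial fit
log_md = np.polyval(coeff, -24.0)     # evaluate at a measured EW
print("log10(M_d/Msun) ~ %.2f" % log_md)
\end{verbatim}
With the actual model grid in place of the placeholders, this procedure yields the disk-mass estimates discussed below.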
Therefore, in the frame of this simple disk modelling, we suggest that for very stable disks, the mass of the disk can be estimated spectroscopically using the spectral type of the star, an estimate of the viewing inclination (via the morphology of the H$\alpha$ emission line), and the EW$_{\rm H\alpha}$. For three Be stars in our sample with W1-W2$>$0.15 that are flagged as stable and are likely to host stable developed disks, H$\alpha$ spectroscopy is available from the BeSS catalogue: one early Be star, BD+60 341 (EW$_{\rm H\alpha}$=-35.2$\AA$), and two mid Be stars, V* V984 Cas (EW$_{\rm H\alpha}$=-24.0$\AA$) and EM* GGA 93 (EW$_{\rm H\alpha}$=-22.2$\AA$). The BeSS catalogue gathers information on classical Be stars and Herbig Ae/Be stars, and assembles spectra of these stars obtained by professional and amateur astronomers. For the three objects with available spectra, using the relations shown in Figure~\ref{EW_M}, we obtained disk masses (log(M$_{\rm disk}$)) of -7.302, -8.025, and -8.064, respectively. The green symbols in Figure~\ref{EW_M} indicate the values derived from observations, which in all three cases correspond to the upper limit of the disk mass. \section{Conclusions} Open clusters provide a unique laboratory to study stellar populations and, in particular, the Be stars. The five young open clusters studied in this work, NGC 663, NGC 869, NGC 884, NGC 3766 and NGC 4755, have long been known to host numerous Be stars that have been broadly studied in the literature. WISE near-IR photometry allows the identification of Be stars and the detection of new Be star candidates, which could be confirmed spectroscopically. In these clusters, virtually all mid- and early-type Be stars with W1-W2$>$0.05 are known Be stars, which leads us to conclude that in this spectral range, almost all of the Be stars in these clusters have been identified. Conversely, many late-type Be stars may not yet have been identified as such. {\sc bedisk/beray} models with typical disk density structures derived for Be stars correctly describe the global near-IR photometric characteristics of Be stars in our sample. For small and intermediate inclination angles, we find that stars with W1-W2$\geq$0.15 have values of $n\leq$3.5 and intermediate values of $\rho_0$. Models with very large disk densities do not lead to the colours observed in the Be stars of our sample, and the limits we derive are coincident with the disk density ``forbidden region'' defined by \citet{Vieira2017}. For models with large inclination angles (nearly equator-on), the near-IR excesses are rather small, particularly when the central star is of a late spectral type. The location of the mid- and early-type Be stars with fully-developed disks (W1-W2$\geq$0.15) in the CMD requires mass loss rates in agreement with those of \citet{Vieira2017}, obtained for a large sample of observed Be stars, and those of \citet{Granada2013a}, predicted from the stellar angular momentum loss rates obtained for critically rotating models. We find that for these stars, if the spectral type and inclination of the central star are known, the disk mass can be estimated from its tight relation with the H$\alpha$ EW. The location of Be stars in the WISE CMD and CC diagrams provides a convenient method to separate them into active (Be stars hosting a developed circumstellar disk, with W1-W2$\geq$0.05) and quiescent stages (Be stars in a diskless phase, with W1-W2$<$0.05).
This can be used as a tool to explore the frequency of these different activity states. From the analysis of Be stars in five open clusters between 15 and 30 Myr, if we understand our observed sample as a ``snapshot'' of the behaviour of these objects, we deduce that early Be stars are in an active phase half of the time, while mid Be stars are in an active phase 60\% of the time. In particular, 34\% of the time, early Be stars seem to host a large, developed disk with W1-W2$>$0.15. For mid Be stars the fraction grows to 51.5\%. Among early and mid Be stars, 15\% and 9\% of the objects, respectively, have W1-W2 between 0.05 and 0.15, which suggests that the formation of small, short-lived disks (perhaps from outbursts) is more common among early-type stars, in agreement with other authors \citep[e.g.][]{Hubert1998,Barnsley2013,Labadie2017}. In the late B-type group there is a considerable number of unconfirmed Be candidates. This is why we consider that the sample of late Be stars is not complete enough for an analysis like the one performed for early and mid Be stars: a large fraction of objects with small infrared excesses might have disks that are hard to detect from H$\alpha$ spectroscopy. Near-IR observations in the H band, such as those presented by \citet{Chojnowski2015, Chojnowski2017}, or in the K and L bands \citep{Lenorzer2002,Mennickent2009,Granada2010, Sabogal2017}, trace the innermost regions of the circumstellar disk and are more appropriate for studying the disks around these late Be stars. The WISE colours of the sample of active Be stars show that early-type objects have more massive disks than late-type Be stars. Some of the models of late-type Be stars with low-mass disks in an active phase (W1-W2$>$0.05) have positive equivalent widths, indicating that many late-type stars hosting a disk might be more difficult to detect spectroscopically. This could explain the large number of late Be candidates in our sample. Finally, the analysis of near-IR photometry of Be stars in open clusters could be useful to explore whether there is a preferential viewing angle in open stellar clusters (i.e., a preferential alignment of the stellar rotation axes), as has already been found in certain clusters \citep{Corsaro2017}. Further analysis of WISE photometric observations may prove to be a valuable diagnostic for setting constraints on rotating stellar models. Following this work, we plan to analyze Be stars in open clusters of different ages and metallicities. \vspace{5mm} \acknowledgments AG acknowledges the Swiss National Science Foundation, Advanced Postdoc Mobility Grant number P300P2$\_$158443. CEJ and TAAS wish to acknowledge support through the Natural Sciences and Engineering Research Council of Canada. This work is sponsored by the Swiss National Science Foundation (project number 200020-172505). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work has made use of the BeSS database, operated at LESIA, Observatoire de Meudon, France: {\it http://basebe.obspm.fr}. We thank Christian Buil, the amateur astronomer who obtained the three H$\alpha$ spectra used in this article and made them available through the BeSS database.
We also thank the referee of this paper for his/her careful reading of the manuscript and a very constructive report. \facility{WISE} \software{SYCLIST \citep{Georgy2014}, {\sc Bedisk} \citep{Sigut2007}, {\sc Beray} \citep{Sigut2011a}}
\section{Introduction} There is a wide variety of quantum fluids with internal degrees of freedom, such as superfluid $^3$He~\cite{Vollhardt}, $p$-wave and $d$-wave superconductors~\cite{Norman}, possible superfluids in neutron stars~\cite{Hoffberg,Tama}, and spinor Bose-Einstein condensates (BECs) of atomic gases~\cite{Ho,Ohmi}. In these systems, the order parameters have spin or angular momentum degrees of freedom, and their ground-state phases, dynamics, and topological excitations are richer than those of single-component superfluids. If two or more quantum fluids with internal degrees of freedom are mixed, the order-parameter space is greatly extended and the physics is further enriched. Spinor BECs of ultracold atoms are suitable systems for realizing such a mixture of quantum fluids due to their high controllability. However, in most previous experiments, spinor BECs of spin-1, spin-2, and spin-3 atoms have been realized only individually~\cite{Stamper,Stenger,Schmal,Kuwamoto,Naylor}. The ground state of a spin-1 BEC can be ferromagnetic or antiferromagnetic, and topological excitations, such as monopoles~\cite{Ray}, skyrmions~\cite{Leslie}, half-quantum vortices~\cite{Seo}, and knots~\cite{Hall}, are possible. A spin-2 BEC is more intriguing because of the presence of the cyclic phase and non-Abelian vortices~\cite{Koashi,Ciobanu,Kobayashi}. We expect that a mixture of such spinor BECs will exhibit novel quantum phases and topological excitations. A spin-1/spin-1 mixture has been studied theoretically, and its phase diagrams and many-body properties have been determined~\cite{Luo, Xu, Shi, Xu2, Zhang, Xu3, Shi2, Zhang2, Xu4, Xu5, Zhang3}. The spin dynamics in a mixture of a spin-1 $^{23}$Na BEC and a spin-1 $^{87}$Rb thermal gas have been observed experimentally~\cite{Li}. Recently, a mixture of spin-1 and spin-2 $^{87}$Rb BECs was realized experimentally, and the spin dynamics were observed~\cite{Eto}. Motivated by this experiment, in the present paper we theoretically investigate the ground-state phase diagrams of the mixture of spin-1 and spin-2 BECs at zero magnetic field. The spin-1 and spin-2 BECs have one and two spin-dependent interaction coefficients, respectively. In addition to these intra-spin interactions, in a spinor mixture we must consider the inter-spin interaction, which is described by two spin-dependent interaction coefficients for the spin-1/spin-2 mixture. This gives a total of five spin-dependent interaction coefficients. We therefore study the ground-state phase diagrams by varying these five interaction coefficients. Using the Monte Carlo method, we determine the phase diagrams for various sets of parameters. Unlike the phase diagrams of the individual spin-1 and spin-2 BECs, those of the spinor mixture contain phases that change continuously with respect to the interaction coefficients, including phases in which the spin-1 and spin-2 vectors are tilted from each other, breaking the axial symmetry. According to the interaction coefficients measured in Ref.~\cite{Eto}, the ground state of the mixture of spin-1 and spin-2 $^{87}$Rb BECs is different from those of the individual spin-1 and spin-2 BECs. This paper is organized as follows. Section~\ref{s:form} presents the problem and reviews the ground states of spin-1 and spin-2 BECs. Section~\ref{s:numerical} details the numerical calculations and the various phase diagrams of the spinor mixture. Section~\ref{s:conc} provides the conclusions of this study.
\section{Formulation of the Problem} \label{s:form} The spin states of spin-1 and spin-2 atoms are denoted by $|f, m\rangle$, where $f = 1, 2$ and $m = -f, -f + 1, \cdots, f$. We consider BECs with spin-1 and spin-2 atoms at zero temperature and zero magnetic field in the mean-field approximation. The macroscopic wave function for the BEC of spin state $|f, m\rangle$ is expressed as $\psi^{(f)}_m(\bm{r}) = \sqrt{\rho_f(\bm{r})} \zeta^{(f)}_m(\bm{r})$, where $\rho_f(\bm{r})$ is the density and $\zeta^{(f)}_m(\bm{r})$ is the complex spin vector normalized as $\sum_m |\zeta^{(f)}_m(\bm{r})|^2 = 1$. The energy $E_f$ of a spin-$f$ BEC with atomic mass $M_f$ confined in a trap potential $V_f(\bm{r})$ is given by~\cite{Ho,Ohmi,Koashi,Ciobanu} \begin{eqnarray} \label{E1} E_1 & = & \int d\bm{r} \sum_{m=-1}^1 \psi^{(1)*}_m(\bm{r}) \left[ -\frac{\hbar^2}{2M_1} \nabla^2 + V_1(\bm{r}) \right] \psi^{(1)}_m(\bm{r}) \nonumber \\ & & + \frac{1}{2} \int d\bm{r} \left[ g_0^{(1)} + g_1^{(1)} \bm{F}^{(1)}(\bm{r}) \cdot \bm{F}^{(1)}(\bm{r}) \right] \rho_1^2(\bm{r}) \end{eqnarray} for a spin-1 BEC and \begin{eqnarray} \label{E2} E_2 & = & \int d\bm{r} \sum_{m=-2}^2 \psi^{(2)*}_m(\bm{r}) \left[ -\frac{\hbar^2}{2M_2} \nabla^2 + V_2(\bm{r}) \right] \psi^{(2)}_m(\bm{r}) \nonumber \\ & & + \frac{1}{2} \int d\bm{r} \Bigl[ g_0^{(2)} + g_1^{(2)} \bm{F}^{(2)}(\bm{r}) \cdot \bm{F}^{(2)}(\bm{r}) \nonumber \\ & & + g_2^{(2)} |A_0^{(2)}(\bm{r})|^2 \Bigr] \rho_2^2(\bm{r}) \end{eqnarray} for a spin-2 BEC, where \begin{equation} \bm{F}^{(f)}(\bm{r}) = \sum_{mm'} \zeta_m^{(f)*}(\bm{r}) \bm{S}^{(f)}_{mm'} \zeta_{m'}^{(f)}(\bm{r}) \end{equation} is the mean spin vector, with $\bm{S}^{(f)}$ being the vector of $(2f + 1) \times (2f + 1)$ matrices for spin $f$, and \begin{equation} A_0^{(2)} = \frac{1}{\sqrt{5}} \left( 2 \zeta_2^{(2)} \zeta_{-2}^{(2)} - 2 \zeta_1^{(2)} \zeta_{-1}^{(2)} + \zeta_0^{(2)2} \right) \end{equation} is the spin-singlet scalar for spin 2. The interaction coefficients in Eqs.~(\ref{E1}) and (\ref{E2}) have the forms \begin{eqnarray} g_0^{(1)} & = & \frac{4\pi\hbar^2}{M_1} \frac{a_0^{(1)} + 2 a_2^{(1)}}{3}, \\ g_1^{(1)} & = & \frac{4\pi\hbar^2}{M_1} \frac{a_2^{(1)} - a_0^{(1)}}{3}, \\ g_0^{(2)} & = & \frac{4\pi\hbar^2}{M_2} \frac{4 a_2^{(2)} + 3 a_4^{(2)}}{7}, \\ g_1^{(2)} & = & \frac{4\pi\hbar^2}{M_2} \frac{a_4^{(2)} - a_2^{(2)}}{7}, \\ g_2^{(2)} & = & \frac{4\pi\hbar^2}{M_2} \frac{7 a_0^{(2)} - 10 a_2^{(2)} + 3 a_4^{(2)}}{7}, \end{eqnarray} where $a_{\cal F}^{(f)}$ is the $s$-wave scattering length between spin-$f$ atoms with a colliding channel of total spin ${\cal F}$. We denote the spin vectors as $\bm{\zeta}^{(1)} = (\zeta_1^{(1)}, \zeta_0^{(1)}, \zeta_{-1}^{(1)})$ and $\bm{\zeta}^{(2)} = (\zeta_2^{(2)}, \zeta_1^{(2)}, \zeta_0^{(2)}, \zeta_{-1}^{(2)}, \zeta_{-2}^{(2)})$. Before considering the mixture of spinor BECs, we summarize the ground-state phases for individual spin-1 and spin-2 BECs in a uniform system. The ground state of a spin-1 BEC depends on the sign of $g_1^{(1)}$. When $g_1^{(1)} < 0$, the ground state is the fully-polarized ferromagnetic state \begin{equation} \label{f1} \bm{\zeta}^{(1)}_F \equiv e^{i\chi} \hat R (1, 0, 0), \end{equation} where $\chi$ is an arbitrary phase and $\hat R$ is an arbitrary SO(3) rotation in the spin space. When $g_1^{(1)} > 0$, the ground state is the polar state \begin{equation} \label{p1} \bm{\zeta}^{(1)}_P \equiv e^{i\chi} \hat R (0, 1, 0). \end{equation} The spin-2 BEC has a greater variety of ground states.
When $g_1^{(2)} < 0$ and $g_2^{(2)} > 20 g_1^{(2)}$, the ground state is the ferromagnetic state \begin{equation} \label{f2} \bm{\zeta}^{(2)}_F \equiv e^{i\chi} \hat R (1, 0, 0, 0, 0). \end{equation} When $g_2^{(2)} < 0$ and $g_2^{(2)} < 20 g_1^{(2)}$, the ground state has continuous degeneracy: a linear combination of the uniaxial nematic state \begin{equation} \label{u2} \bm{\zeta}^{(2)}_{\rm UN} \equiv e^{i\chi} \hat R (0, 0, 1, 0, 0) \end{equation} and the biaxial nematic state \begin{equation} \label{b2} \bm{\zeta}^{(2)}_{\rm BN} \equiv e^{i\chi} \hat R (1, 0, 0, 0, 1) / \sqrt{2} \end{equation} is the ground state. When $g_1^{(2)} > 0$ and $g_2^{(2)} > 0$, the ground state is the cyclic state \begin{equation} \label{c2} \bm{\zeta}^{(2)}_C \equiv e^{i\chi} \hat R (1 / 2, 0, i / \sqrt{2}, 0, 1 / 2). \end{equation} For later use, we define the state \begin{equation} \label{f2p} \bm{\zeta}^{(2)}_{F'} \equiv e^{i\chi} \hat R (0, 1, 0, 0, 0), \end{equation} which is not the ground state but a stationary state of the Gross-Pitaevskii equation. The spherical harmonic representation of the spin state is convenient for visualizing the symmetry of the system~\cite{Kawaguchi}, \begin{equation} S(\theta, \phi) = \sum_{m = -f}^f \zeta_m^{(f)} Y_f^m(\theta, \phi), \end{equation} where $Y_f^m$ are the spherical harmonics. The spherical harmonic representations of the above spin states are shown in Fig.~\ref{f:spin12}. \begin{figure}[tb] \includegraphics[width=8.5cm]{fig1.eps} \caption{ (color online) Spherical harmonic representations $S(\theta, \phi)$ of the spin states. (a) spin-1 ferromagnetic state $\bm{\zeta}^{(1)}_F$. (b) spin-1 polar state $\bm{\zeta}^{(1)}_P$. (c) spin-2 ferromagnetic state $\bm{\zeta}^{(2)}_F$. (d) spin-2 state $\bm{\zeta}^{(2)}_{F'}$. (e) spin-2 uniaxial nematic state $\bm{\zeta}^{(2)}_{\rm UN}$. (f) spin-2 biaxial nematic state $\bm{\zeta}^{(2)}_{\rm BN}$. (g) spin-2 cyclic state $\bm{\zeta}^{(2)}_C$. The labels F, P, F$'$, U, B, and C shown for each representation are used to identify the spin state in the phase diagram. } \label{f:spin12} \end{figure} We consider a mixture of spin-1 and spin-2 BECs. The interaction energy between the spin-1 and spin-2 BECs is found to be (see the Appendix for the derivation) \begin{eqnarray} \label{E12} E_{12} & = & \int d\bm{r} \Bigl[ g_0^{(12)} + g_1^{(12)} \bm{F}^{(1)}(\bm{r}) \cdot \bm{F}^{(2)}(\bm{r}) \nonumber \\ & & + g_2^{(12)} P_1^{(12)}(\bm{r}) \Bigr] \rho_1(\bm{r}) \rho_2(\bm{r}), \end{eqnarray} where $P_1^{(12)}$ is defined in Eq.~(\ref{P1}). The interaction coefficients in Eq.~(\ref{E12}) are given by \begin{subequations} \label{g12} \begin{eqnarray} g_0^{(12)} & = & \frac{2\pi\hbar^2}{M_{12}} \frac{2 a_2^{(12)} + a_3^{(12)}}{3}, \\ g_1^{(12)} & = & \frac{2\pi\hbar^2}{M_{12}} \frac{a_3^{(12)} - a_2^{(12)}}{3}, \\ g_2^{(12)} & = & \frac{2\pi\hbar^2}{M_{12}} \frac{3 a_1^{(12)} - 5 a_2^{(12)} + 2 a_3^{(12)}}{3}, \end{eqnarray} \end{subequations} where $M_{12} = (M_1^{-1} + M_2^{-1})^{-1}$ is the reduced mass and $a_{\cal F}^{(12)}$ is the $s$-wave scattering length between spin-1 and spin-2 atoms with a colliding channel of total spin ${\cal F}$. In the following analysis, we assume that the spin healing lengths are much larger than the size of the atomic cloud and we neglect the spatial variation of the spin states $\bm{\zeta}^{(f)}$. The kinetic and potential energy terms in $E_1$ and $E_2$ in Eqs.~(\ref{E1}) and (\ref{E2}) then become independent of the spin states $\bm{\zeta}^{(f)}$.
The spin-dependent part of the total energy $E = E_1 + E_2 + E_{12}$ thus reduces to \begin{eqnarray} \label{Espin} E_{\rm spin} & = & \frac{1}{2} \left( c_1^{(1)} \bm{F}^{(1)} \cdot \bm{F}^{(1)} + c_1^{(2)} \bm{F}^{(2)} \cdot \bm{F}^{(2)} + c_2^{(2)} |A_0^{(2)}|^2 \right) \nonumber \\ & & + c_1^{(12)} \bm{F}^{(1)} \cdot \bm{F}^{(2)} + c_2^{(12)} P_1^{(12)}, \end{eqnarray} where \begin{eqnarray} c_n^{(f)} & = & g_n^{(f)} \int \rho_f^2(\bm{r}) d\bm{r}, \nonumber \\ c_n^{(12)} & = & g_n^{(12)} \int \rho_1(\bm{r}) \rho_2(\bm{r}) d\bm{r} \label{cdef} \end{eqnarray} with $n = 1, 2$. In the rest of this paper, we normalize the interaction coefficients $c_1^{(1)}$, $c_1^{(2)}$, $c_2^{(2)}$, $c_1^{(12)}$, and $c_2^{(12)}$ by $4\pi \hbar^2 a_B \int \rho_1^2 d\bm{r} / M_1$, where $a_B$ is the Bohr radius, and therefore these interaction coefficients are dimensionless. Our purpose is to find the spin states $\bm{\zeta}^{(1)}$ and $\bm{\zeta}^{(2)}$ that minimize the energy $E_{\rm spin}$. We numerically obtain the ground state as follows. First, we assign complex random numbers to $\zeta_m^{(f)}$ and minimize the energy in a stochastic manner; that is, we try a small random change to the spin state, $\zeta_m^{(f)} + \delta\zeta_m^{(f)}$, and adopt the change if the energy is lowered. After sufficiently many steps of this random walk in the spin space, we obtain a metastable state or the ground state. Repeating this procedure many times with different initial random states, we can exclude metastable states and determine the true ground state. \section{Ground states of a spin-1/spin-2 mixture} \label{s:numerical} \begin{figure}[tb] \includegraphics[width=8.5cm]{fig2.eps} \caption{ (color online) Ground-state phase diagram with respect to the inter-spin interactions $c_1^{(12)}$ and $c_2^{(12)}$ without the intra-spin interactions, $c_1^{(1)} = c_1^{(2)} = c_2^{(2)} = 0$. The spherical harmonic representations of the spin states are also shown, where the left- and right-hand figures indicate the spin-1 and spin-2 states, respectively. The letter-pairs specify the spin-1 and spin-2 states, which are defined in Fig.~\ref{f:spin12}, and the subscript $\pm$ indicates the sign of $\bm{F}^{(1)} \cdot \bm{F}^{(2)}$. In the striped region, linear combinations of the polar-ferromagnetic and polar-biaxial nematic states are continuously degenerate. } \label{f:noint} \end{figure} \begin{figure}[tb] \includegraphics[width=7.6cm]{fig3.eps} \caption{ (color online) Ground-state phase diagram for $c_1^{(1)} = -0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = -0.05$. The ground state for $c_1^{(12)} = c_2^{(12)} = 0$ is the ferromagnetic state for spin 1 and the nematic state for spin 2. The region of many phases in (a) is magnified in (b). The upper-case letter-pairs indicate the spin-1 and spin-2 states as defined in Fig.~\ref{f:spin12} and the lower-case letters indicate the intermediate states as defined in Table~\ref{t:abc}. The subscripts $\pm$ denote the sign of $\bm{F}^{(1)} \cdot \bm{F}^{(2)}$. The physical quantities along the dotted line are shown in Fig.~\ref{f:ferro_nem2}(a). The spin states at the black dots are shown in Fig.~\ref{f:ferro_nem2}(b). } \label{f:ferro_nem} \end{figure} To see the effect of the interaction between the spin-1 and spin-2 BECs, we first consider the case without the intra-spin interactions, $c_1^{(1)} = c_1^{(2)} = c_2^{(2)} = 0$. The spin-dependent energy then reduces to $E_{\rm spin} = c_1^{(12)} \bm{F}^{(1)} \cdot \bm{F}^{(2)} + c_2^{(12)} P_1^{(12)}$.
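As an illustration of the stochastic minimization described above, the following minimal Python sketch (our own schematic, not the authors' code) searches for minimizers of $E_{\rm spin}$. For brevity it omits the $c_2^{(12)} P_1^{(12)}$ term, whose definition is deferred to the appendix; the step size and step count are arbitrary choices, and $c_1^{(12)}$ below is an illustrative value (the intra-spin coefficients reuse the $^{87}$Rb values quoted later in this section).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def spin_matrices(f):
    # Spin-f matrices (Sx, Sy, Sz) in the basis m = f, f-1, ..., -f.
    m = np.arange(f, -f - 1, -1)
    sp = np.diag(np.sqrt(f * (f + 1) - m[1:] * (m[1:] + 1)), k=1)  # S_+
    return 0.5 * (sp + sp.T), -0.5j * (sp - sp.T), np.diag(m)

S1, S2 = spin_matrices(1), spin_matrices(2)

def mean_spin(z, S):
    # F^(f)_a = sum_{mm'} conj(zeta_m) (S_a)_{mm'} zeta_{m'}
    return np.array([(z.conj() @ s @ z).real for s in S])

def a0_spin2(z):
    # Spin-singlet scalar A_0^(2); basis order m = 2, 1, 0, -1, -2.
    return (2 * z[0] * z[4] - 2 * z[1] * z[3] + z[2] ** 2) / np.sqrt(5)

def energy(z1, z2, c):
    # E_spin with c11 = c_1^(1), c12 = c_1^(2), c22 = c_2^(2),
    # c112 = c_1^(12); the c_2^(12) P_1^(12) term is omitted here.
    F1, F2 = mean_spin(z1, S1), mean_spin(z2, S2)
    return (0.5 * (c["c11"] * F1 @ F1 + c["c12"] * F2 @ F2
                   + c["c22"] * abs(a0_spin2(z2)) ** 2)
            + c["c112"] * F1 @ F2)

def random_state(n):
    z = rng.normal(size=n) + 1j * rng.normal(size=n)
    return z / np.linalg.norm(z)

def minimize(c, steps=100000, delta=0.02):
    z1, z2 = random_state(3), random_state(5)
    e = energy(z1, z2, c)
    for _ in range(steps):
        # Try a small random change and renormalize the spin vectors.
        t1 = z1 + delta * random_state(3)
        t2 = z2 + delta * random_state(5)
        t1, t2 = t1 / np.linalg.norm(t1), t2 / np.linalg.norm(t2)
        e_try = energy(t1, t2, c)
        if e_try < e:  # adopt the change only if the energy is lowered
            z1, z2, e = t1, t2, e_try
    return z1, z2, e

# Repeat from many random initial states and keep the lowest energy,
# which excludes metastable states.
c = {"c11": -0.46, "c12": 1.1, "c22": -0.05, "c112": -1.0}
best = min((minimize(c) for _ in range(5)), key=lambda s: s[2])
print("E_min =", best[2])
\end{verbatim}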
Figure~\ref{f:noint} shows the ground-state phase diagram with respect to $c_1^{(12)}$ and $c_2^{(12)}$. When $c_1^{(12)}$ is sufficiently large and negative, the state in which the spin vectors $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ are fully-polarized in the same direction is energetically favored, and the ground state is $\bm{\zeta}^{(1)} = \bm{\zeta}^{(1)}_F$ and $\bm{\zeta}^{(2)} = \bm{\zeta}^{(2)}_F$. We abbreviate this state as ``FF$_+$'', in which the first and second letters indicate the spin-1 and spin-2 states, respectively, and the subscript $+$ denotes that the two spin vectors are in the same direction. The capital letters indicate the spin states shown in Fig.~\ref{f:spin12}. The energy of the FF$_+$ state is $E_{\rm spin} = 2 c_1^{(12)}$. In a similar manner, when $c_1^{(12)}$ is large and positive, the ground state is the ferromagnetic state with $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ being in opposite directions, whose energy is $E_{\rm spin} = -2 c_1^{(12)} + 3 c_2^{(12)} / 5$. This phase is denoted as ``FF$_-$'', where the subscript $-$ represents that the two spin vectors are in the opposite directions. In general, we define the subscripts $\pm$ to indicate the sign of $\bm{F}^{(1)} \cdot \bm{F}^{(2)}$. As shown in Fig.~\ref{f:noint}, there are two regions between these ferromagnetic phases. When $c_2^{(12)} < 0$ and $c_2^{(12)} / 5 < c_1^{(12)} < c_2^{(12)} / 10$, the ground state is $\bm{\zeta}^{(1)} = \bm{\zeta}^{(1)}_P$ and $\bm{\zeta}^{(2)} = \bm{\zeta}^{(2)}_{\rm UN}$ with an energy $E_{\rm spin} = 2 c_2^{(12)} / 5$, which is denoted as ``PU''. When $c_2^{(12)} > 0$ and $0 < c_1^{(12)} < 3 c_2^{(12)} / 10$, the ground state is continuously degenerate: the linear combination of the ``PF'' (polar-ferromagnetic) and ``PB'' (polar-biaxial nematic) states is the ground state with an energy $E_{\rm spin} = 3 c_2^{(12)} / 10$. \begin{table} \begin{tabular}{l|lllll} & $|\bm{F}^{(1)}|$ & $|\bm{F}^{(2)}|$ & $A_0^{(2)}$ & $\bm{F}^{(1)} \times \bm{F}^{(2)}$ & isotropy group \\ \hline a & nonzero & nonzero & nonzero & 0 & $\mathbb{Z}_2$ \\ b & nonzero & nonzero & nonzero & nonzero & E \\ c & 1 & nonzero & nonzero & 0 & $\mathbb{Z}_4$ \\ d & 1 & nonzero & 0 & 0 & $\mathbb{Z}_3$ \\ e & 0 & 0 & nonzero & 0 & $\mathbb{Z}_2 \times \mathbb{Z}_2$ \end{tabular} \caption{ Classification of the intermediate states that change continuously in the phase diagram. ``nonzero'' indicates that the value depends on $c_1^{(12)}$ and $c_2^{(12)}$. E indicates the trivial group. The subscript $+$ or $-$ is added to a-d to indicate the sign of $\bm{F}^{(1)} \cdot \bm{F}^{(2)}$. } \label{t:abc} \end{table} Next we consider the cases of nonzero intra-spin interaction coefficients $c_1^{(1)}$, $c_1^{(2)}$, and $c_2^{(2)}$. Figure~\ref{f:ferro_nem} shows the ground-state phase diagram for $c_1^{(1)} = -0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = -0.05$, which correspond to the interaction coefficients of $^{87}$Rb for $\rho_1 = \rho_2$ in Eq.~(\ref{cdef}). There is a remarkable number of phases with complicated structures. If the inter-spin interaction is absent, i.e., at the origin of the phase diagram, the ground state for spin 1 is the ferromagnetic state and that for spin 2 is the nematic state. Comparing Fig.~\ref{f:ferro_nem} with Fig.~\ref{f:noint}, we find that the four phases in Fig.~\ref{f:noint}, FF$_+$, FF$_-$, PU, and PB, also appear in Fig.~\ref{f:ferro_nem}, where the continuous degeneracy in Fig.~\ref{f:noint} is removed and the PF state disappears in Fig.~\ref{f:ferro_nem}. 
There are many intermediate states, labeled by lower-case letters classified in Table~\ref{t:abc}. In the regions of these intermediate states, either or both of the spin-1 and spin-2 states continuously change with respect to $c_1^{(12)}$ and $c_2^{(12)}$. \begin{figure}[tb] \includegraphics[width=8.5cm]{fig4.eps} \caption{ (color online) (a) Dependence of $F_z^{(1)}$, $F_z^{(2)}$, and $F_\perp^{(2)}$ on $c_1^{(12)}$ along the dotted line in Fig.~\ref{f:ferro_nem}, where $F_\perp^{(f)} = [(F_x^{(f)})^2 + (F_y^{(f)})^2]^{1/2}$. Here, the spin-1 and spin-2 states are rotated so that $\bm{F}^{(1)}$ is in the $z$ direction, and hence $F_\perp^{(1)}$ is always zero. (b) Spherical-harmonic representations of the spin states marked by the black dots in Fig.~\ref{f:ferro_nem}, where the left- and right-hand figures are the spin-1 and spin-2 states, respectively. } \label{f:ferro_nem2} \end{figure} We now consider the phases along the dotted line in Fig.~\ref{f:ferro_nem}. When $c_1^{(12)}$ is large and negative, the ground state is the FF$_+$ state. When $c_1^{(12)}$ crosses the phase boundary between FF$_+$ and a$_+$, the lengths of the spin-1 and spin-2 vectors begin to decrease, as shown in Fig.~\ref{f:ferro_nem2}(a). In this a$_+$ phase, the spin vectors $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ remain in the same direction. In contrast, in the b$_+$ phase, the directions of the spin vectors $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ become different. This can be regarded as axisymmetry breaking of the magnetization, that is, if we fix the vector $\bm{F}^{(1)}$ to the $z$ direction, the vector $\bm{F}^{(2)}$ has a component $F_\perp^{(2)}$ perpendicular to the $z$ axis. Examples of such axisymmetry breaking states are shown in Fig.~\ref{f:ferro_nem2}(b). Axisymmetry breaking has been found in a spin-1/spin-1 mixture in the presence of an external magnetic field~\cite{Xu3}. In the FF$'_+$ phase, the directions of the spin vectors $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ become the same again. In this phase, the spin-1 state returns to $\bm{\zeta}^{(1)} = \bm{\zeta}^{(1)}_F$ and the spin-2 state is $\bm{\zeta}^{(2)} = \bm{\zeta}^{(2)}_{F'}$, which does not depend on $c_1^{(12)}$ and $c_2^{(12)}$ within the phase, as seen from the plateau in Fig.~\ref{f:ferro_nem2}(a). In the a$_+$, b$_+$, and a$_+$ phases, the spin states continuously change again; the spin vectors $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ are in the same direction in the a$_+$ phase, while they take different directions in the b$_+$ phase. The FU state is connected to the origin of the phase diagram. The phases on the right-hand side of the phase diagram, a$_-$, b$_-$, $\cdots$ are similar to the corresponding phases a$_+$, b$_+$, $\cdots$ where the spin vector $\bm{F}^{(1)}$ or $\bm{F}^{(2)}$ is flipped, i.e., the time-reversal transformation is applied to the spin-1 or spin-2 state. For example, in the FF$'_-$ phase, when the spin-1 state is $\bm{\zeta}^{(1)} = (1, 0, 0)$, the spin-2 state is $\bm{\zeta}^{(2)} = (0, 0, 0, 1, 0)$, which is the time-reversal state of $\bm{\zeta}^{(2)} = (0, 1, 0, 0, 0)$ in the FF$'_+$ phase. For $c_2^{(12)} < 0$, the phase structures are simpler. In the c$_\pm$ phases, the spin-1 state is fixed to the ferromagnetic state, while the spin-2 state continuously changes with $\bm{F}^{(1)}$ and $\bm{F}^{(2)}$ being kept in the same direction. A typical $c$ state is shown in Fig.~\ref{f:ferro_nem2}(b). 
In the experiment in Ref.~\cite{Eto}, the values of the inter-spin scattering lengths of $^{87}$Rb were measured, which correspond to $c_1^{(12)} \simeq 0.83$ and $c_2^{(12)} \simeq 4.8$ in the present case, if $\rho_1 = \rho_2$ in Eq.~(\ref{cdef}), i.e., an almost 1:1 mixture of spin-1 and spin-2 atoms. In the phase diagram in Fig.~\ref{f:ferro_nem}, these values correspond to the PB state, namely, the polar state for spin 1 and the biaxial nematic state for spin 2. The ground state phase of the spin-1 $^{87}$Rb BEC alone is the ferromagnetic state and that for spin-2 is the biaxial or uniaxial nematic state. Thus, the ground state of the 1:1 mixture of spin-1 and spin-2 $^{87}$Rb BECs is different from those of the individual BECs due to the inter-spin interaction. \begin{figure}[tb] \includegraphics[width=8cm]{fig5.eps} \caption{ (color online) Ground-state phase diagram for $c_1^{(1)} = -0.46$, $c_1^{(2)} = -1.1$, and $c_2^{(2)} = 1.5$. The ground state for $c_1^{(12)} = c_2^{(12)} = 0$ is the ferromagnetic state for both spin 1 and spin 2. The spherical harmonic representations of the spin states are also shown, where the left- and right-hand figures represent the spin-1 and spin-2 states, respectively. } \label{f:ferro_ferro} \end{figure} Figure~\ref{f:ferro_ferro} shows the ground-state phase diagram for $c_1^{(1)} = -0.46$, $c_1^{(2)} = -1.1$, and $c_2^{(2)} = 1.5$. If the inter-spin interaction is absent, the ground state is the ferromagnetic state both for spin 1 and spin 2 for these parameters. The phase diagram is much simpler than Fig.~\ref{f:ferro_nem}. Comparing Fig.~\ref{f:ferro_ferro} with Fig.~\ref{f:noint}, we find that the PB and PU states disappear in Fig.~\ref{f:ferro_ferro}. Between the PF and FF$_\pm$ phases, there exists the region of the b state, in which the axisymmetry is broken. For the present parameters, the spin 2 state is almost the ferromagnetic state in the b phase. The angle between the two spin vectors changes from 0 to $\pi$ across the region of the b state. \begin{figure}[tb] \includegraphics[width=7.6cm]{fig6.eps} \caption{ (color online) Ground-state phase diagram for $c_1^{(1)} = -0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = 1.5$. The ground state for $c_1^{(12)} = c_2^{(12)} = 0$ is the ferromagnetic state for spin 1 and the cyclic state for spin 2. The region of many phases in (a) is magnified in (b). The physical quantities along the dotted line are shown in Fig.~\ref{f:ferro_cy2}(a). The spin states at the black dots are shown in Fig.~\ref{f:ferro_cy2}(b). } \label{f:ferro_cy} \end{figure} \begin{figure}[tb] \includegraphics[width=8.5cm]{fig7.eps} \caption{ (color online) (a) Dependence of $F_z^{(1)}$, $F_z^{(2)}$, $F_\perp^{(2)}$, and $|A_0^{(2)}|^2$ on $c_1^{(12)}$ along the dotted line in Fig.~\ref{f:ferro_cy}. Here, the spin-1 and spin-2 states are rotated so that $\bm{F}^{(1)}$ is in the $z$ direction, and hence $F_\perp^{(1)}$ is always zero. The small changes in $F_z^{(1)}$ and $|A_0^{(2)}|$ are magnified in the insets. (b) Spherical-harmonic representations of the spin states marked by the black dots in Fig.~\ref{f:ferro_cy}, where the left- and right-hand figures are the spin-1 and spin-2 states, respectively. } \label{f:ferro_cy2} \end{figure} Figure~\ref{f:ferro_cy} shows the ground-state phase diagram for $c_1^{(1)} = -0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = 1.5$. 
If the inter-spin interaction is absent, i.e., at the origin of the phase diagram, the ground state of the spin-1 BEC is the ferromagnetic state and that of the spin-2 BEC is the cyclic state for these parameters. The phase diagram is again very complicated. Let us examine the phases along the dotted line. As $c_1^{(12)}$ is increased from a large negative value, the ground state changes from the FF$_+$ state to the a$_+$, b$_+$, and FF$'_+$ states, which is similar to the case in Fig.~\ref{f:ferro_nem}. After that, a new phase appears, labeled d$_+$. In this phase, the value of $|A_0^{(2)}|$ in the spin-2 state vanishes, as in the cyclic state, whereas $|\bm{F}^{(2)}|$ is finite, as shown in Fig.~\ref{f:ferro_cy2}(a). The spin-1 state is in the ferromagnetic state $\bm{\zeta}^{(1)} = \bm{\zeta}^{(1)}_F$. From the shape of the spherical harmonic representation in Fig.~\ref{f:ferro_cy2}(b), we find that this state may be regarded as an intermediate state between the FC and FF$'$ states. The d$_\pm$ states also exist in the region $c_2^{(12)} < 0$. The structures of the a$_\pm$, b$_\pm$, and c$_\pm$ regions in Fig.~\ref{f:ferro_cy} appear to be different from those in Fig.~\ref{f:ferro_nem}. \begin{figure}[tb] \includegraphics[width=8cm]{fig8.eps} \caption{ (color online) Ground-state phase diagram for $c_1^{(1)} = 0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = -1$. The ground state for $c_1^{(12)} = c_2^{(12)} = 0$ is the polar state for spin 1 and the nematic state for spin 2. } \label{f:pol_nem} \end{figure} Figure~\ref{f:pol_nem} shows the ground-state phase diagram for $c_1^{(1)} = 0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = -1$. If the inter-spin interaction is absent, the ground state of the spin-1 BEC is the polar state and that of the spin-2 BEC is the nematic state for these parameters. We find from Fig.~\ref{f:pol_nem} that the PB and PU phases extend and contact each other at $c_2^{(12)} = 0$. In this phase diagram there is no symmetry-broken state, such as the b state. \begin{figure}[tb] \includegraphics[width=7.6cm]{fig9.eps} \caption{ (color online) Ground-state phase diagram for $c_1^{(1)} = 0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = 1.5$. The ground state for $c_1^{(12)} = c_2^{(12)} = 0$ is the polar state for spin 1 and the cyclic state for spin 2. The region of many phases in (a) is magnified in (b). The physical quantities along the dotted line in (a) are shown in Fig.~\ref{f:pol_cy2}(a). The spin states at the black dots are shown in Fig.~\ref{f:pol_cy2}(b). } \label{f:pol_cy} \end{figure} \begin{figure}[tb] \includegraphics[width=8.5cm]{fig10.eps} \caption{ (color online) (a) Dependence of $F_z^{(1)}$, $F_z^{(2)}$, $F_\perp^{(2)}$, and $|A_0^{(2)}|^2$ on $c_1^{(12)}$ along the dotted line in Fig.~\ref{f:pol_cy}. Here, the spin-1 and spin-2 states are rotated so that $\bm{F}^{(1)}$ is in the $z$ direction, and hence $F_\perp^{(1)}$ is always zero. The small changes in $F_z^{(1)}$ and $|A_0^{(2)}|$ are magnified in the insets. (b) Spherical-harmonic representations of the spin states marked by the black dots in Fig.~\ref{f:pol_cy}, where the left- and right-hand figures are the spin-1 and spin-2 states, respectively. } \label{f:pol_cy2} \end{figure} Figure~\ref{f:pol_cy} shows the ground-state phase diagram for $c_1^{(1)} = 0.46$, $c_1^{(2)} = 1.1$, and $c_2^{(2)} = 1.5$. If the inter-spin interaction is absent, the ground state of the spin-1 BEC is the polar state and that of the spin-2 BEC is the cyclic state for these parameters.
In this phase diagram, a new state appears, labeled e. The e state has no magnetization in either the spin-1 or the spin-2 component, $\bm{F}^{(1)} = \bm{F}^{(2)} = 0$, as shown in Fig.~\ref{f:pol_cy2}(a). From the shapes of the spherical harmonic representations in Fig.~\ref{f:pol_cy2}(b), the e state is an intermediate state between the cyclic and nematic states. In the phase diagram, the regions of the e state are located at the heads of the PB and PU regions. For the parameters in Fig.~\ref{f:pol_cy}, interestingly, the two regions of the e state are detached from each other near the origin, where the a$_\pm$ states fill the gap. Although in Fig.~\ref{f:pol_cy2} the quantities $\bm{F}^{(1)}$, $\bm{F}^{(2)}$, and $A_0^{(2)}$ seem to jump at the boundary of the e region, they change continuously across the very narrow regions of the a$_\pm$ states. In all of the phase diagrams presented above, these quantities change continuously at the phase boundaries of the intermediate (a, b, c, d, and e) regions. \begin{figure}[tb] \includegraphics[width=8cm]{fig11.eps} \caption{ (color online) Ground-state phase diagram for $c_1^{(1)} = 0.46$, $c_1^{(2)} = -0.0005$, and $c_2^{(2)} = 1$. The ground state for $c_1^{(12)} = c_2^{(12)} = 0$ is the polar state for spin 1 and the ferromagnetic state for spin 2. The spherical harmonic representations of the spin states are also shown, where the left- and right-hand figures represent the spin-1 and spin-2 states, respectively. } \label{f:pol_ferro} \end{figure} Figure~\ref{f:pol_ferro} shows the ground-state phase diagram for $c_1^{(1)} = 0.46$, $c_1^{(2)} = -0.0005$, and $c_2^{(2)} = 1$. If the inter-spin interaction is absent, the ground state of the spin-1 BEC is the polar state and that of the spin-2 BEC is the ferromagnetic state for these parameters. We take this small value of $c_1^{(2)}$ because the PU region lies far from the origin for a larger value of $c_1^{(2)}$. The a$_\pm$ states occupy the region near the origin instead of the PU state. Compared with Fig.~\ref{f:noint}, the degeneracy is removed and the PF state remains in the upper region of Fig.~\ref{f:pol_ferro}. Finally, we mention the order-parameter manifold of the ground state. In the case of the individual spin-1 and spin-2 BECs, the Hamiltonian is invariant with respect to changes in the global phase, U(1), and rotations in the spin space, SO(3). The ground state therefore has continuous degeneracy, with a manifold represented by U(1) $\times$ SO(3). However, for example, the spin-1 ferromagnetic state in Fig.~\ref{f:spin12}(a) is invariant with respect to rotation around the symmetry axis (with a global phase shift due to the spin-gauge symmetry). In other words, the isotropy group of the spin-1 ferromagnetic state is SO(2). The order-parameter manifold of the spin-1 ferromagnetic state is thus U(1) $\times$ SO(3) / SO(2) $\simeq$ SO(3)~\cite{Ho}. The isotropy group of the spin-1 polar state is SO(2) $\times \mathbb{Z}_2$, since Fig.~\ref{f:spin12}(b) is invariant with respect to rotation around the symmetry axis and to an upside-down rotation combined with a global phase of $\pi$. In the case of the spin-1/spin-2 mixture, the Hamiltonian is invariant with respect to changes in the global phase of each of the spin-1 and spin-2 states, in addition to simultaneous spin rotations of the spin-1 and spin-2 states, so the symmetry group of the Hamiltonian is U(1) $\times$ U(1) $\times$ SO(3). For example, the isotropy group of the FF state is SO(2), and therefore the order-parameter manifold of the FF state is U(1) $\times$ SO(3).
Similarly, the FF$'$ and FU states have this manifold. The isotropy groups of the intermediate states, whose symmetries are lower than those of the individual spin states, are summarized in Table~\ref{t:abc}. For example, the symmetry-broken state b in Table~\ref{t:abc} has only the trivial isotropy group (the identity element alone). \section{Conclusions} \label{s:conc} We have investigated the ground-state phase diagrams of a mixture of spin-1 and spin-2 BECs in the mean-field approximation. We obtained two types of ground states. One is a pair of known stationary states of spin-1 and spin-2 BECs, such as the FF and PB states. In the other type of ground state, either or both of the spin states change continuously with respect to the interaction coefficients. The latter type of ground state is classified in Table~\ref{t:abc}. For various choices of the intra-spin interaction coefficients, $c_1^{(1)}$, $c_1^{(2)}$, and $c_2^{(2)}$, we obtained the phase diagrams with respect to the inter-spin interaction coefficients, $c_1^{(12)}$ and $c_2^{(12)}$. These phase diagrams have remarkably rich structures. In all the phase diagrams, the FF$_+$ and FF$_-$ phases occupy the regions of large negative and positive $c_1^{(12)}$, respectively. Also, the PF phase, or the PB and FF$'_\pm$ phases, are located in the $c_2^{(12)} > 0$ region, and the PU phase is located in the $c_2^{(12)} < 0$ region (except in Fig.~\ref{f:ferro_ferro}). Between these phases, there exist various intermediate phases with interesting phase structures. Among them, we found the axisymmetry-broken phase (b in Table~\ref{t:abc}), in which the spin-1 and spin-2 vectors are tilted from each other. We have also determined the ground-state phase of a mixture of spin-1 and spin-2 $^{87}$Rb BECs, using the measured interaction coefficients~\cite{Eto}. It is known that the ground state of the spin-1 $^{87}$Rb BEC alone is the ferromagnetic state and that of the spin-2 BEC is a linear combination of the uniaxial and biaxial nematic states at zero magnetic field. By contrast, for an almost 1:1 mixture, the ground state is the polar state for spin 1 and the biaxial-nematic state for spin 2. The ground state of the spinor mixture of $^{87}$Rb BECs is thus changed by the interaction between the spin-1 and spin-2 BECs. The present study can be extended in various directions. For example, the magnetic-field dependence (linear and quadratic) of the phase diagrams is the next planned extension of this work. Since the ground-state manifolds of the spinor mixture are different from those of single BECs, novel topological excitations will be possible. If phase separation occurs in the spinor mixture, we expect that the interface between domains will present interesting problems. \begin{acknowledgments} This work was supported by JSPS KAKENHI Grant Numbers JP17K05595, JP17K05596, JP25103007, JP16K05505, and JP15K05233. YE acknowledges support by the Leading Initiative for Excellent Young Researchers (LEADER). \end{acknowledgments}
\section{Introduction} \label{intro} Fragmentation in molecular clouds has been well studied over many years \citep{Larson78,Miyama84,Monaghan91,Rodriguez05,Contreras16,Li17}. Fragmentation is a process that produces ``fragments'', or structures, in a molecular cloud. A hierarchy of nested structures is often created by the process of hierarchical fragmentation, as seen in recent observational and simulation studies (see \citealt{Dobbs14} and \citealt{Heyer15} for recent reviews). These studies show that clouds, which are typically $\gtrsim$10 pc in size, host a wide range of structures, from large filaments and clumps to dense cores and disks. \begin{figure}[h] \centering \includegraphics[scale=0.3]{Hierarchy_filled3.pdf} \caption{A cartoon display of a molecular cloud showing hierarchical structures inside the cloud. The figure shows the cloud, clumps, filaments, cores, envelopes, and protostellar systems that we consider in this study. The image is not drawn to scale. \label{hierarchy} } \end{figure} Figure \ref{hierarchy} summarizes the scales and terms we utilize for this analysis in a cartoon of the hierarchical structures in a molecular cloud. We use ``cloud'' to identify the largest structure of our interest, on scales of $\gtrsim$10 pc. A cloud fragments into ``clumps'', which are $\sim$1 pc in size \citep{Ridge06, Sadavoy14}. Inside the clumps, we observe elongated gaseous filaments that are $\sim$0.1 pc wide \citep{Arzoumanian11}. Inside the filaments we find $\sim$0.05-0.1 pc cores \citep{di07}, which are the sites where new stars are able to form. In this paper we report the detection of further dense condensations of size scale $\sim$300-3000 AU, which we term ``envelopes''. Dense inner envelopes or protostellar disks surrounding a central young star are often found inside these envelopes. Disks have a range of sizes, from $<$10 AU (B335; \citealt{Yen15}) to $>$200 AU (L1448IRS3B; \citealt{Looney00,Tobin16}). \begin{figure*} \centering \includegraphics[scale=0.61]{hierarchy5.pdf} \caption{Multi-scale structures in the Perseus molecular cloud. In each panel, the beam size is shown at lower left and the scale bar at lower right. The five panels are explained below.\\ \textbf{ \underline{Cloud:}} The entire Perseus cloud at 350 $\micron$ obtained from \emph{Herschel}. Yellow contours correspond to A$_V$ = 7 mag (see \citealt{Sadavoy14}) and are derived from the opacity map from \cite{Zari16}. The coordinates of the center of the map are $R.A.$(J2000) = 3h35m06.08s \& $Dec$(J2000) = +31d24m10.61s. The FWHM beam size is 24.9$''$.\\ \textbf{ \underline{Clump:}} One of the clumps from the \emph{Herschel} 350 $\micron$ map, L1448, is magnified to show the details. The yellow contour shows A$_V$ = 7 mag (see Panel Cloud). The coordinates of the center of the map are $R.A.$(J2000) = 3h25m25.91s \& $Dec$(J2000) = +30d38m47.91s. The FWHM beam size is 24.9$''$.\\ \textbf{ \underline{Core:}} SCUBA 850 $\micron$ map of one of the cores (J032536.1+304514) that resides in L1448 (map from \citealt{Di08}). The yellow contour represents the 5$\sigma$ level, where $\sigma$ = 0.1 Jy/beam. The coordinates of the center of the map are $R.A.$(J2000) = 3h25m35.77s \& $Dec$(J2000) = +30d45m25.49s. The FWHM beam size is $\sim$23$''$.\\ \textbf{ \underline{Envelopes:}} SMA 1.3 mm map of the region shown by the magenta box in Panel Core. The yellow contours represent the 6$\sigma$ level, where $\sigma$ = 0.012 Jy/beam. The coordinates of the center of the map are $R.A.$(J2000) = 3h25m31.15s \& $Dec$(J2000) = +30d45m23.89s.
The angular resolution of this map is $\sim$ 4$''$ $\times$ 3$''$.\\ \textbf{ \underline{Protostellar object:}} VLA map from the VANDAM survey \citep{Tobin16} for one of the envelopes, `Per-emb-33'. The yellow contours represent the 15$\sigma$ level (see \citealt{Lee15}), where $\sigma$ = 7.25 $\mu$Jy/beam. The coordinates of the center of the map are $R.A.$(J2000) = 3h25m36.34s \& $Dec$(J2000) = +30d45m15.07s. The FWHM beam size is 0.065$''$. \label{hierarchy_perseus} } \end{figure*} Figure \ref{hierarchy_perseus} displays the hierarchical structures in the Perseus molecular cloud from actual observations. The figure includes 5 panels, where each panel represents structures of a different size scale, starting from the largest structure in our study, the whole cloud, and moving subsequently towards smaller structures such as clumps, cores, envelopes, and protostellar objects. The first panel, ``Cloud'', shows the large-scale \emph{Herschel} 350 $\micron$ emission map, where 7 clumps are detected (see \citealt{Sadavoy14,Mercimek17}). In one of the clumps, L1448 \citep{Terebey97, Looney00, Kwon06}, \cite{Sadavoy10} found the presence of four cores (three protostellar and one starless) from SCUBA observations \citep{Di08}. One of the cores inside L1448, J032536.1+304514, revealed three envelope-scale fragments when observed with the SMA. Observations from the VLA show the presence of three protostellar objects in one of the SMA-detected envelopes, Per-emb-33 \citep{Lee15, Tobin16}. The multi-scale structures in a molecular cloud can be produced by a variety of fragmentation processes. Some of these processes include magnetohydrodynamic turbulence (e.g., \citealt{MacLow04}, \citealt{Hennebelle12}), self-gravity of the gas (e.g., \citealt{Heyer09,Ballesteros11,Ballesteros12}), and ionizing radiation (e.g., \citealt{Whitworth94}, \citealt{Dale09}). According to the turbulence-regulated star formation theory \citep{Padoan99,MacLow04,Krumholz05}, supersonic turbulence creates a series of density fluctuations, where long-lasting high-density fluctuations are able to gravitationally collapse. In the self-gravity-regulated star formation theory, cloud fragmentation is dominated by gravity, and gravity rather than turbulence is responsible for the structure hierarchy (e.g., \citealt{Hoyle53}, \citealt{Zinnecker84}, \citealt{Heitsch08}, \citealt{Ballesteros11}). In some cases, colliding clouds produce initial turbulence that creates a non-uniform density distribution, and then gravity takes over (gravo-turbulent fragmentation; \citealt{Klessen04}). Although what controls the fragmentation process is still debated, it is likely some combination of gravitational instability, turbulence, magnetic fields, and stellar feedback (e.g., \citealt{Padoan02}, \citealt{Hosking04}, \citealt{Machida05}, \citealt{Girart13}). In terms of support, gas thermal pressure is expected to be the most important factor against gravitational collapse on the smaller scales relevant to the formation of individual stars \citep{Larson06}. At these scales, cloud fragmentation is expected to follow the classical Jeans instability, obtained by balancing gravity against thermal pressure \citep{Jeans29}. If the actual mass of a cloud is greater than its Jeans mass, self-gravity wins over the thermal support and the cloud fragments. Another prevailing view is that self-gravitating clouds are supported against collapse by non-thermal motions \citep{Heitsch00,Clark05} rather than by thermal support. In this case, the non-thermal motions provide the pressure necessary to balance the inward pull of gravity.
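To make the thermal Jeans criterion concrete, the classical Jeans mass for a uniform, isothermal sphere is $M_J = \left( \frac{5 k_B T}{G \mu m_{\rm H}} \right)^{3/2} \left( \frac{3}{4 \pi \rho} \right)^{1/2}$. The following minimal Python sketch (our own illustration; the fiducial temperature, density, and mean molecular weight are assumed values, not taken from this paper) evaluates it:

\begin{verbatim}
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant [J/K]
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
m_H = 1.6735e-27     # hydrogen atom mass [kg]
M_sun = 1.989e30     # solar mass [kg]

def jeans_mass(T, n, mu=2.33):
    """Jeans mass [M_sun] for temperature T [K], particle number
    density n [cm^-3], and mean molecular weight mu (2.33 assumed
    here for molecular gas)."""
    rho = mu * m_H * n * 1e6  # mass density [kg m^-3]
    return ((5 * k_B * T / (G * mu * m_H)) ** 1.5
            * (3 / (4 * np.pi * rho)) ** 0.5 / M_sun)

# Assumed fiducial dense-gas conditions, for illustration only:
print("M_J ~ %.1f M_sun" % jeans_mass(T=10.0, n=1e4))
\end{verbatim}

For these fiducial values the Jeans mass is a few solar masses; a structure whose mass exceeds this value is unstable to gravitational fragmentation.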
This study stands out from other similar studies of cloud fragmentation for two main reasons. First, we focus on hierarchical fragmentation over multiple scales in the same cloud, rather than combining observations from various different clouds. Thus we have a uniform sampling region and the same physical conditions at each scale. Second, this study covers the entire cloud down to the scale of protostellar objects. Previous analyses were unable to probe these small scales well because of limitations in observational techniques. Hence, this is the first study to investigate a detailed hierarchical fragmentation picture in a single molecular cloud, from the scale of the cloud to the scale of protostellar objects. We explain our observations in \S \ref{obs}, where we describe our new SMA observations as well as the complementary data from the literature. In \S \ref{result} we present the newly identified SMA sources. In \S \ref{jeansanalysis}, we present the Jeans analysis for each level of hierarchy. In \S \ref{combination}, we combine all the hierarchies for a comprehensive study. We discuss our results in \S \ref{conclusions} and finally we present our conclusions in \S \ref{summary}. \subsection{Target selection} The Perseus molecular cloud ($d$ = 230 pc, \citealt{Hirota08,Hirota11}) is an ideal target for this analysis. It is one of the best studied nearby star forming regions, with ample data available in the literature, including observations at mid-IR (\emph{Spitzer}), far-IR (\emph{Herschel}) and sub-mm (JCMT, CSO) wavelengths. These observations probe the warm dust emission from young stars as well as cooler dust from the ambient cloud and its dense clumps and cores. The Perseus protostars have also been probed with the VLA \citep{Tobin16} at the scales of protostellar disks. Finally, Perseus has a relatively large population of young stars compared to other nearby molecular clouds. Since we want to focus on the hierarchical structure, it is advantageous to examine younger populations that are still embedded in their natal environment. Thus, Perseus provides the large, unbiased sample necessary to obtain the statistics for this study. \section{Observations} \label{obs} \subsection{Archival Data} \label{archivaldata} Our study spans spatial scales from $\gtrsim$15 AU to $\gtrsim$10 pc. To observe this multiscale structure, we require data from multiple telescopes, including both single dish telescopes and interferometers. For the cloud scales, we used global properties of Perseus from near-infrared extinction maps in \cite{Sadavoy10}. For clump scales, we used the physical properties determined in \cite{Sadavoy14} from observations with the \emph{Herschel} Space Observatory \citep{Pilbratt10} at far-IR wavelengths. For core scales, we used the source lists provided in \cite{Sadavoy10} and \cite{Mercimek17} at submillimeter wavelengths from the Submillimeter Common-User Bolometer Array (SCUBA; \citealt{Holland99}) at the James Clerk Maxwell Telescope. The cores were initially identified from the SCUBA Legacy Catalogue \citep{Di08} and classified as starless or protostellar using infrared observations from \emph{Spitzer} (see \citealt{Sadavoy10} for details). Finally, for disk scales, we used the results from the ``VLA Nascent Disk and Multiplicity'' survey (VANDAM; PI: J. Tobin) undertaken with the Karl G.
Jansky Very Large Array (VLA; \citealt{Thompson80}) at 8 mm \citep{Tobin16}. These data probed all protostars in Perseus at a common, high resolution of 15 AU. At this spatial resolution, the VANDAM sources probe the dense gas and dust immediately surrounding the protostars. The VLA sources represent scales from the protostellar vicinity to compact dust disks. For the purpose of this study, we term all such VLA sources ``protostellar objects''. Thus by ``protostellar objects'' we encompass the size scales from protostars to compact disks. As noted above, we have literature data at the scales of the entire cloud, clumps, cores and disks for the Perseus molecular cloud. However, we lack data at the envelope scales. The MASSES data from the SMA (see \S \ref{smaobservation}) fill that gap and enable us to study envelope-scale structures. \subsection{SMA observations} \label{smaobservation} \subsubsection{MASSES} We used observations from the large-scale SMA project ($\sim$600 observing hours, 3-4 years) ``Mass Assembly of Stellar Systems and Their Evolution with the SMA'' (MASSES; co-PIs: M. Dunham and I. Stephens). MASSES targeted all 73 known protostars in Perseus in dust continuum and spectral line emission at 230 and 345 GHz. The data were taken in the sub-compact (SUB) and extended (EXT) array configurations. The SUB configuration has an angular resolution of $\sim$ 4$''$ at 230 GHz, which corresponds to a spatial scale of $\sim$1000 AU at the distance of Perseus. The EXT configuration has an angular resolution of $\sim$ 1$''$ at 230 GHz ($\sim$200 AU). The MASSES observations include line emission in $^{12}$CO (2-1), $^{13}$CO (2-1), C$^{18}$O (2-1) \& N$_2$D$^+$ (3-2) at 230 GHz. We do not include the line data in this study. We also do not discuss the 345 GHz (0.87 mm) data at this time and instead focus on the 230 GHz (1.3 mm) results. The VANDAM and MASSES projects target the same protostars in Perseus and complement each other. Nevertheless, the MASSES data at 1.3 mm are better able to trace the envelope emission than the VANDAM data at 8 mm, because thermal dust emission is brighter at 1.3 mm than at 8 mm by two orders of magnitude. Due to this limitation, the VANDAM data will primarily trace material associated with the very inner envelope and disk \citep{Tobin16}, where the densities are highest, rather than the surrounding envelope. Thus, the SMA data presented here are key to tracing the envelope scales of our analysis. For this study, we used only the 230 GHz continuum data observed in the SUB configuration. The data were observed with the ASIC correlator with 2 GHz bandwidth in each of the lower and upper sidebands. Each 2 GHz band has 24 chunks with 82 MHz usable bandwidth. Our correlator setup includes 8 chunks with 64 channels in each chunk for continuum observations. The remaining chunks are used for line observations. We averaged the chunks with 64 channels per chunk to generate the continuum. The continuum thus generated has an effective bandwidth of 1312 MHz (8 chunks $\times$ 82 MHz $\times$ 2 sidebands), considering both the upper and lower sidebands. \subsubsection{SMA Data Reduction} We used the MIR software package\footnote{https://www.cfa.harvard.edu/~cqi/mircook.html} with standard calibration procedures to reduce and calibrate the visibility data. First, we applied the baseline correction to the visibility dataset and flagged bad data points. We then corrected the amplitude and phase data with the system temperature.
We calibrated the bandpass using antenna-based solutions for the bandpass calibrator, followed by the gain calibration and finally the flux calibration using bright quasars or planets. Typically, we used the quasar 3c84 for gain calibration; either 3c84, 1058+015, 3c454.3 or a similar bright quasar for bandpass calibration; and Uranus for flux calibration. The uncertainty in the flux calibration is $\sim$25\% (see \citealt{Lee15}). We used the MIRIAD software package \citep{Sault95} to image the calibrated visibility data. After taking the inverse Fourier transform of the visibility data, we obtained the image with the MIRIAD task $clean$ using a robust parameter of 1. This provides a compromise between natural and uniform weighting, enabling the detection of both small-scale structures and extended emission. The images were cleaned and restored, and finally corrected for primary beam attenuation using an image of the primary beam pattern. \section{SMA Results} \label{result} \subsection{SMA source identification} \label{smasource} For the purpose of this study, we defined an SMA source $(envelope)$ as a source that is detected at $>$ 5$\sigma$, where $\sigma$ is the noise in the background image. Figure \ref{per11} shows an example of SMA sources at the 5$\sigma$ level that are detected in the region of Per-emb-11. We overplotted the higher-resolution VLA sources, shown as purple stars, on the reduced SMA maps. The figure shows two SMA sources, ``IC348 MMS1'' and ``IC348 MMS2''. The first source, IC348 MMS1, contains two VLA sources, Per-emb-11-A and Per-emb-11-B. The second source, IC348 MMS2, contains only one VLA source, Per-emb-11-C. The nomenclatures IC348 MMS1 and MMS2 for the SMA sources are adopted from \cite{Lee16}. Images corresponding to all the SMA-detected sources will be publicly available in FITS (Flexible Image Transport System) format in the online version of this paper. \begin{figure}[tbh] \centering \includegraphics[scale=0.3, angle=270]{paper_Per11.pdf} \caption{The VLA-detected sources (protostellar objects, shown by purple stars) are overplotted on the SMA image (SMA envelopes, shown by 5$\sigma$ orange contours) in the case of Per-emb-11. The two SMA sources are IC348 MMS1 and IC348 MMS2, and the three VLA sources are Per-emb-11-A, Per-emb-11-B and Per-emb-11-C. The beam size is shown at lower left and the scale bar at lower right. The dashed circle represents the primary beam of the pointing. \label{per11} } \end{figure} We found a total of 73 SMA sources in the Perseus molecular cloud. To avoid duplications of the same source from different tracks, we excluded the detections that are far from the center of the primary beam. After excluding the duplicated sources, we had a total of 56 unique SMA sources (53 sources detected at $>$ 5$\sigma$ and 3 at the $>$ 6$\sigma$ level). We list these sources in Table \ref{envelopes}. There are also 3 unique detections at $>$ 4$\sigma$ given in Table \ref{envelopes}, which we consider robust enough for further analysis. Thus, we identify 59 distinct sources with the SMA in the Perseus molecular cloud. \subsection{SMA Source fitting} \label{sourcefitting} We calculated SMA source sizes by fitting models of each source in the visibility plane. The reason we chose to fit in the visibility plane instead of the image plane is that some of the SMA sources have extended structure.
These structures are better seen in the visibilities and in some instances are not adequately recovered after we inverse Fourier transform the visibility data and deconvolve the dirty image from the dirty beam. For example, we found that source sizes were generally underestimated when fit in the image plane rather than the visibility plane, because of spatial filtering. Thus, we calculated the source sizes in the visibility plane. To determine the best-fit model that describes the nature of each source, we inspected plots of the amplitude against $uv$ distance ($amp$ versus $uvdist$). If the variation of amplitude with $uv$ distance showed a Gaussian nature, we fitted a Gaussian model to the source, since the Fourier transform of a Gaussian function is also a Gaussian function (but of inverse width). Similarly, if the visibility amplitude was constant across the range of $uv$ distance, we fitted the source with a point source, as the Fourier transform of a point source is a constant. Finally, if the variation showed a Gaussian nature with a uniform tail, we fitted a combined model of a point source and a Gaussian function. Figure \ref{ampvsuvdist} shows an example of a combined fit in the case of IC348 MMS1 (one of the two SMA-detected sources in Per-emb-11 in Figure \ref{per11}). In the case of multiple sources in the same field, we need to specify the location and flux of each source separately in the visibility plane. To estimate these source properties, we first used the MIRIAD routine $imfit$ to find the source positions and fluxes in the image plane. We then used these as initial guesses for the MIRIAD task $uvfit$, which fits the sources in the visibility plane. Our technique of source fitting works in MIRIAD as long as there are fewer than 20 initial free parameters, because of restrictions in $uvfit$. If there were more than 20 initial free parameters, we reduced the number of sources by subtracting a source in the image plane and again obtained the fits for the residual $u$-$v$ data in the visibility plane. In brief, we first transformed the actual visibility data to the image plane. Then we cleaned the data and restored the clean map by deconvolving with the dirty beam. We identified the source that we wanted to subtract. After subtracting the source, we Fourier transformed the residual image data back to the visibility plane and fitted the remaining continuum sources. We repeated the process by subtracting other sources to cross-check the consistency of the fitted parameter values. We plotted the best-fit models on top of the continuum images and visually confirmed that these were indeed good fits.
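To illustrate this model selection, the following minimal Python sketch (our own illustration, not the MIRIAD $uvfit$ implementation) fits a point-plus-Gaussian model to binned visibility amplitudes; the amplitudes, baselines, and initial guesses below are hypothetical stand-ins for real SMA data.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def point_plus_gaussian(uv_klam, f_point, f_gauss, fwhm_arcsec):
    """Amplitude of a point source (constant in the uv plane) plus a
    circular Gaussian (a Gaussian of inverse width in the uv plane)."""
    theta = np.deg2rad(fwhm_arcsec / 3600.0)  # image-plane FWHM [rad]
    q = uv_klam * 1e3                         # baseline [lambda]
    return f_point + f_gauss * np.exp(
        -(np.pi * theta * q) ** 2 / (4.0 * np.log(2.0)))

# Hypothetical binned amplitudes versus uv distance [kilo-lambda, Jy]
uv = np.array([5.0, 10.0, 20.0, 35.0, 50.0, 70.0])
amp = np.array([0.62, 0.55, 0.42, 0.30, 0.25, 0.21])

popt, pcov = curve_fit(point_plus_gaussian, uv, amp, p0=[0.2, 0.4, 5.0])
print("point flux = %.2f Jy, Gaussian flux = %.2f Jy, FWHM = %.1f arcsec"
      % tuple(popt))
\end{verbatim}

In such a fit, the constant amplitude at long baselines constrains the unresolved (point) component, while the declining amplitude at short baselines constrains the flux and size of the extended (Gaussian) component.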
\startlongtable \begin{deluxetable*}{ccccccccc} \centering \tabletypesize{\scriptsize} \tablecolumns{9} \tablewidth{0pc} \tablecaption{SMA source properties obtained by fitting the source \label{envelopes}} \tablehead{\colhead{SMA source} & \colhead{Fitting model} & \colhead{R.A.\tablenotemark{(b)}} & \colhead{Dec.\tablenotemark{(b)}} & \colhead{Peak flux\tablenotemark{(c)}} & \colhead{Integrated flux\tablenotemark{(c)}} & \colhead{Major axis\tablenotemark{(d)}} & \colhead{Minor axis\tablenotemark{(d)}} & \colhead{Group\tablenotemark{(e)}}\\ \colhead{name\tablenotemark{(a)}} & \colhead{} & \colhead{(J2000)} & \colhead{(J2000)} & \colhead{(mJy)} & \colhead{(mJy)} & \colhead{($''$)} & \colhead{($''$)} & \colhead{} } \startdata B1-bN & Point + Gaussian & 03:33:21.198 & +31:07:43.931 & 152.7 $\pm$ 3.7 & 248.7 $\pm$ 11.4 & 6.41 $\pm$ 1.24 & 6.04 $\pm$ 1.53 & A \\ IC348 MMS1\tablenotemark{(f)} & Point + Gaussian & 03:43:57.055 & +32:03:04.669 & 195.6 $\pm$ 2.6 & 477.9 $\pm$ 7.6 & 6.71 $\pm$ 0.22 & 5.56 $\pm$ 0.22 & A \\ IC348 MMS2\tablenotemark{(f)} & Point + Gaussian & 03:43:57.735 & +32:03:10.098 & 23.1 $\pm$ 3.4 & 78.8 $\pm$ 6.7 & 5.41 $\pm$ 0.72 & 3.13 $\pm$ 0.77 & B \\ IRAS4B$'$ & Point + Gaussian & 03:29:12.825 & +31:13:06.962 & 227.1 $\pm$ 166.0 & 311.6 $\pm$ 232.6 & 1.97 $\pm$ 2.6 & 0.51 $\pm$ 2.6 & B \\ L1448IRS3\tablenotemark{(g)} & Point + Gaussian & 03:25:35.675 & +30:45:35.163 & 51.9 $\pm$ 2.7 & 337.4 $\pm$ 12.2 & 12.96 $\pm$ 0.48 & 4.77 $\pm$ 0.23 & A \\ L1448NW\tablenotemark{(g)} & Point + Gaussian & 03:25:36.464 & +30:45:21.425 & 105.1 $\pm$ 2.8 & 218.8 $\pm$ 9.5 & 10.87 $\pm$ 1.0 & 3.48 $\pm$ 1.0 & A \\ L1451-MMS & Point & 03:25:10.241 & +30:23:55.013 & 39.1 $\pm$ 3.1 & 39.1 $\pm$ 3.1 & ... & ... & B \\ Per-bolo-45-SMM\tablenotemark{(h)} & Point + Gaussian & 03:29:06.764 & +31:17:22.297 & 7.5 $\pm$ 3.9 & 108.6 $\pm$ 24.1 & 14.21 $\pm$ 2.39 & 9.44 $\pm$ 2.39 & A \\ Per-bolo-58 & Gaussian & 03:29:25.417 & +31:28:14.205 & 24.3 $\pm$ 5.2 & 94.5 $\pm$ 24.4 & 14.27 $\pm$ 1.41 & 7.54 $\pm$ 0.99 & A \\ Per-emb-1 & Point + Gaussian & 03:43:56.770 & +32:00:49.865 & 118.7 $\pm$ 2.7 & 331.6 $\pm$ 6.9 & 6.65 $\pm$ 0.25 & 4.64 $\pm$ 0.25 & A \\ Per-emb-2 & Point + Gaussian & 03:32:17.915 & +30:49:48.033 & 350.6 $\pm$ 9.0 & 764.7 $\pm$ 12.0 & 3.54 $\pm$ 0.12 & 2.89 $\pm$ 0.08 & B \\ Per-emb-3 & Point & 03:29:00.554 & +31:11:59.849 & 59.5 $\pm$ 3.0 & 59.5 $\pm$ 3.0 & ... & ... 
& B \\ Per-emb-5 & Point + Gaussian & 03:31:20.931 & +30:45:30.334 & 206.3 $\pm$ 3.5 & 329.0 $\pm$ 6.4 & 5.98 $\pm$ 0.39 & 3.85 $\pm$ 0.39 & A \\ Per-emb-8 & Point + Gaussian & 03:44:43.975 & +32:01:34.968 & 111.2 $\pm$ 2.7 & 183.1 $\pm$ 10.7 & 8.91 $\pm$ 1.88 & 7.71 $\pm$ 1.88 & A \\ Per-emb-9 & Point + Gaussian & 03:29:51.876 & +31:39:05.516 & 15.8 $\pm$ 3.4 & 174.2 $\pm$ 25.0 & 13.3 $\pm$ 1.47 & 10.76 $\pm$ 1.47 & A \\ Per-emb-10 & Point + Gaussian & 03:33:16.412 & +31:06:52.384 & 13.6 $\pm$ 1.9 & 58.8 $\pm$ 8.9 & 9.42 $\pm$ 1.57 & 7.58 $\pm$ 1.57 & A \\ Per-emb-10-SMM & Point + Gaussian & 03:33:18.470 & +31:06:33.629 & 4.9 $\pm$ 1.8 & 19.1 $\pm$ 4.0 & 4.01 $\pm$ 2.43 & 3.99 $\pm$ 2.43 & B \\ Per-emb-12 & Point + Gaussian & 03:29:10.490 & +31:13:31.369 & 1484.0 $\pm$ 14.8 & 4093.0 $\pm$ 22.8 & 4.4 $\pm$ 0.05 & 3.28 $\pm$ 0.04 & B \\ Per-emb-13 & Point + Gaussian & 03:29:11.993 & +31:13:08.137 & 687.0 $\pm$ 13.6 & 1173.5 $\pm$ 18.1 & 4.1 $\pm$ 0.18 & 3.24 $\pm$ 0.15 & B \\ Per-emb-14 & Point + Gaussian & 03:29:13.517 & +31:13:57.754 & 87.0 $\pm$ 4.6 & 123.3 $\pm$ 7.9 & 4.24 $\pm$ 1.35 & 1.22 $\pm$ 1.35 & B \\ Per-emb-15 & Point + Gaussian & 03:29:04.207 & +31:14:48.642 & 8.2 $\pm$ 4.0 & 69.2 $\pm$ 12.1 & 8.34 $\pm$ 1.6 & 5.24 $\pm$ 1.36 & A \\ Per-emb-16 & Point + Gaussian & 03:43:50.999 & +32:03:23.858 & 11.0 $\pm$ 2.3 & 93.6 $\pm$ 10.8 & 9.12 $\pm$ 1.17 & 7.73 $\pm$ 1.17 & A \\ Per-emb-17 & Point + Gaussian & 03:27:39.120 & +30:13:02.526 & 47.7 $\pm$ 3.0 & 116.5 $\pm$ 11.4 & 9.65 $\pm$ 1.6 & 6.24 $\pm$ 1.59 & A \\ Per-emb-18 & Point + Gaussian & 03:29:11.261 & +31:18:31.326 & 117.4 $\pm$ 4.0 & 217.9 $\pm$ 16.6 & 8.71 $\pm$ 1.57 & 7.67 $\pm$ 1.38 & A \\ Per-emb-19 & Point & 03:29:23.476 & +31:33:28.940 & 14.7 $\pm$ 2.5 & 14.7 $\pm$ 2.5 & ... & ... & B \\ Per-emb-19-SMM\tablenotemark{(h)} & Point & 03:29:24.331 & +31:33:22.569 & 8.9 $\pm$ 2.6 & 8.9 $\pm$ 2.6 & ... & ... & B \\ Per-emb-20 & Gaussian & 03:27:43.199 & +30:12:28.962 & 1.1 $\pm$ 1.0 & 53.8 $\pm$ 16.7 & 9.56 $\pm$ 1.25 & 3.93 $\pm$ 0.91 & A \\ Per-emb-20-SMM & Gaussian & 03:27:42.778 & +30:12:25.936 & 7.2 $\pm$ 0.9 & 14.8 $\pm$ 16.4 & 3.59 $\pm$ 1.31 & 0.02 $\pm$ 84.0 & B \\ Per-emb-21 & Point + Gaussian & 03:29:10.688 & +31:18:20.151 & 43.6 $\pm$ 4.1 & 193.6 $\pm$ 14.3 & 7.14 $\pm$ 1.37 & 6.41 $\pm$ 1.28 & A \\ Per-emb-22 & Point + Gaussian & 03:25:22.353 & +30:45:13.213 & 92.8 $\pm$ 3.9 & 400.4 $\pm$ 13.0 & 8.28 $\pm$ 0.48 & 6.2 $\pm$ 0.47 & A \\ Per-emb-23 & Point + Gaussian & 03:29:17.249 & +31:27:46.336 & 12.4 $\pm$ 1.9 & 78.5 $\pm$ 8.8 & 12.57 $\pm$ 1.49 & 7.2 $\pm$ 1.04 & A \\ Per-emb-25 & Point & 03:26:37.492 & +30:15:27.904 & 87.8 $\pm$ 3.7 & 87.8 $\pm$ 3.7 & ... & ... & B \\ Per-emb-26 & Point + Gaussian & 03:25:38.872 & +30:44:05.299 & 180.1 $\pm$ 2.3 & 480.6 $\pm$ 13.0 & 11.62 $\pm$ 0.46 & 7.28 $\pm$ 0.25 & A \\ Per-emb-27 & Point + Gaussian & 03:28:55.562 & +31:14:37.167 & 259.6 $\pm$ 2.8 & 709.7 $\pm$ 7.8 & 6.85 $\pm$ 0.15 & 5.61 $\pm$ 0.14 & A \\ Per-emb-28 & Point + Gaussian & 03:43:50.987 & +32:03:07.967 & 12.0 $\pm$ 2.0 & 58.8 $\pm$ 12.0 & 11.89 $\pm$ 2.89 & 8.12 $\pm$ 2.2 & A \\ Per-emb-29 & Point + Gaussian & 03:33:17.860 & +31:09:32.307 & 144.2 $\pm$ 3.6 & 468.2 $\pm$ 11.7 & 7.88 $\pm$ 0.3 & 5.98 $\pm$ 0.26 & A \\ Per-emb-30 & Point & 03:33:27.302 & +31:07:10.187 & 50.9 $\pm$ 3.9 & 50.9 $\pm$ 3.9 & ... & ... 
& B \\ Per-emb-33\tablenotemark{(g)} & Point + Gaussian & 03:25:36.324 & +30:45:14.771 & 495.1 $\pm$ 5.8 & 1050.7 $\pm$ 8.8 & 5.1 $\pm$ 0.13 & 3.47 $\pm$ 0.13 & A \\ Per-emb-35 & Point + Gaussian & 03:28:37.124 & +31:13:31.236 & 43.6 $\pm$ 2.9 & 127.2 $\pm$ 10.4 & 9.7 $\pm$ 2.13 & 6.14 $\pm$ 1.65 & A \\ Per-emb-36 & Point + Gaussian & 03:28:57.363 & +31:14:15.610 & 129.3 $\pm$ 1.9 & 220.8 $\pm$ 11.5 & 13.69 $\pm$ 1.45 & 6.83 $\pm$ 0.82 & A \\ Per-emb-37 & Point + Gaussian & 03:29:18.936 & +31:23:13.109 & 12.0 $\pm$ 2.0 & 59.1 $\pm$ 7.3 & 10.63 $\pm$ 1.45 & 5.5 $\pm$ 1.21 & A \\ Per-emb-40 & Point & 03:33:16.646 & +31:07:54.808 & 25.3 $\pm$ 13.5 & 25.3 $\pm$ 13.5 & ... & ... & B \\ Per-emb-41 & Point + Gaussian & 03:33:21.338 & +31:07:26.439 & 285.5 $\pm$ 4.1 & 374.6 $\pm$ 11.4 & 5.86 $\pm$ 1.65 & 5.56 $\pm$ 1.65 & A \\ Per-emb-44 & Point + Gaussian & 03:29:03.719 & +31:16:03.295 & 333.4 $\pm$ 3.2 & 759.1 $\pm$ 17.7 & 9.16 $\pm$ 0.44 & 4.56 $\pm$ 0.22 & A \\ Per-emb-47 & Point & 03:28:34.513 & +31:00:50.702 & 9.2 $\pm$ 2.4 & 9.2 $\pm$ 2.4 & ... & ... & B \\ Per-emb-50 & Point & 03:29:07.764 & +31:21:57.162 & 96.4 $\pm$ 2.9 & 96.4 $\pm$ 2.9 & ... & ... & B \\ Per-emb-51 & Point + Gaussian & 03:28:34.521 & +31:07:05.467 & 12.1 $\pm$ 5.4 & 115.4 $\pm$ 10.7 & 5.77 $\pm$ 0.84 & 3.69 $\pm$ 0.71 & A \\ Per-emb-53 & Point + Gaussian & 03:47:41.577 & +32:51:43.745 & 24.9 $\pm$ 4.0 & 74.3 $\pm$ 9.9 & 6.9 $\pm$ 1.67 & 5.02 $\pm$ 1.26 & A \\ Per-emb-54 & Point + Gaussian & 03:29:02.828 & +31:20:41.321 & 21.7 $\pm$ 4.6 & 197.4 $\pm$ 13.0 & 10.48 $\pm$ 0.68 & 5.9 $\pm$ 0.52 & A \\ Per-emb-56 & Point & 03:47:05.422 & +32:43:08.330 & 14.1 $\pm$ 6.1 & 14.1 $\pm$ 6.1 & ... & ... & B \\ Per-emb-57 & Point & 03:29:03.322 & +31:23:14.338 & 23.3 $\pm$ 1.4 & 23.3 $\pm$ 1.4 & ... & ... & B \\ Per-emb-58\tablenotemark{(h)} & Point & 03:28:58.361 & +31:22:16.811 & 7.7 $\pm$ 1.6 & 7.7 $\pm$ 1.6 & ... & ... & B \\ Per-emb-61 & Point & 03:44:21.301 & +31:59:32.526 & 11.6 $\pm$ 3.9 & 11.6 $\pm$ 3.9 & ... & ... & B \\ Per-emb-62 & Point & 03:44:12.973 & +32:01:35.289 & 75.8 $\pm$ 3.1 & 75.8 $\pm$ 3.1 & ... & ... & B \\ Per-emb-63 & Point & 03:28:43.279 & +31:17:33.248 & 18.2 $\pm$ 2.9 & 18.2 $\pm$ 2.9 & ... & ... & B \\ Per-emb-64 & Point & 03:33:12.848 & +31:21:23.950 & 45.7 $\pm$ 24.3 & 45.7 $\pm$ 24.3 & ... & ... & B \\ Per-emb-65 & Point & 03:28:56.301 & +31:22:27.693 & 27.5 $\pm$ 2.6 & 27.5 $\pm$ 2.6 & ... & ... & B \\ SVS13B & Point + Gaussian & 03:29:03.032 & +31:15:51.362 & 248.6 $\pm$ 3.2 & 774.5 $\pm$ 19.3 & 8.97 $\pm$ 0.44 & 6.75 $\pm$ 0.18 & A \\ SVS13C & Point + Gaussian & 03:29:01.969 & +31:15:38.199 & 55.1 $\pm$ 3.1 & 189.0 $\pm$ 11.1 & 12.12 $\pm$ 0.97 & 3.49 $\pm$ 0.97 & A \\ \enddata \tablenotetext{(a)}{ The SMA source names are adopted from \cite{Tobin16} for consistency with previous nomenclature. For some of the Per-emb sources, we detected a secondary source with the SMA that could not be found in literature. For these sources, we added the suffix "SMM" to the end of the name. For example, Per-bolo-45-SMM, does not lie in the same region as Per-bolo-45. All the SMA sources are detected at 5-$\sigma$ contour, unless otherwise stated.} \tablenotetext{(b)}{ R.A. and Dec. 
refer to the peak position of the SMA source obtained by fitting a model to the source (see \S \ref{sourcefitting}).} \tablenotetext{(c)}{ The reported uncertainties are statistical and exclude any calibration/systematic error.} \tablenotetext{(d)}{ FWHM sizes of the fitted model, deconvolved from the synthesized beam.} \tablenotetext{(e)}{ Group "A": size estimates in the image and visibility planes agree and axis size / axis error $>$ 3. Group "B": either one or both of these conditions are not met.} \tablenotetext{(f)}{ Nomenclature adopted from \cite{Lee16}.} \tablenotetext{(g)}{ Source is detected at the 6$\sigma$ contour.} \tablenotetext{(h)}{ Source is detected at the 4$\sigma$ contour.} \end{deluxetable*} \begin{figure}[ht] \centering \vskip -1.0in \includegraphics[scale=0.4, angle=0]{Per11_ampVsuvdist_pg.pdf} \vskip -1.0in \caption{Radial profile of visibility amplitude versus $uv$ distance for IC348 MMS1. The green circles represent the actual visibility data with 3$\sigma$ statistical error bars (before taking the flux calibration error into account). The data are fit by a model that is a combination of a Gaussian function and a point function; this model is shown by the magenta squares. The position of the source is determined by fitting the source in the image plane before fitting it in the visibility plane. \label{ampvsuvdist} } \end{figure} Not all SMA sources are robust even if they are detected at $>$ 5$\sigma$. For example, the sources that we fit with only a point function are unresolved and thus have no size estimates; for these we use the resolution limit as an upper limit on the size. Other sources are not well fit by the models and have large uncertainties in their axis ratios. Based on these possible sources of error, in Table \ref{envelopes} we divided the SMA sources into two groups: "A", where the fitting results are trustworthy and can be considered for further analyses, and "B", where the fitting results may have systematic errors and are not robust. There are 34 SMA sources in group "A" and 25 SMA sources in group "B". For the sources in group "A", the sizes estimated in the image plane and the visibility plane are within 10 percent of each other. For our main analyses, we focus on the group "A" sources. To calculate the peak and integrated flux of an SMA source, we fitted the same model (that we obtained for that source in the visibility plane) to the primary-beam-corrected SMA map. These flux estimates are used to determine the masses of the SMA sources in \S \ref{mass estimation}. \subsection{SMA versus VLA multiplicity} \label{sourceproperties} Figure \ref{per11} shows an example where multiplicity is seen at the scales of both SMA envelopes and VLA protostellar objects. The observed multiplicity at different scales raises an important question: is the multiplicity seen at the larger scales in the previous generation (envelopes) transferred to the smaller scales in the next generation (disk scales and protostellar objects)? To study this, we counted the multiplicity of both SMA envelopes and VLA protostellar objects for all the available samples. The scales over which the SMA and VLA sources are counted are set by the resolution limit and the primary beam of each observation. Hence the SMA sources are counted between 1,000 AU and 10,000 AU and the VLA sources are counted between 15 AU and 1,000 AU. For the purpose of counting sources, each SMA field is centered at the center of the primary beam (cf. Figure \ref{per11}).
To ensure consistency in the number of sources, we consider only those SMA and VLA sources that lie within the primary beam of the SMA image. For sources that lie in more than one primary beam (overlapping beams), we include only the detection closest to a primary beam center and discard the ones farther from the center, as the outer regions of the beam are less sensitive and noisier. In this way we do not count the same source more than once and we obtain a consistent sample of sources. The multiplicity at the scales of both the SMA and VLA sources is shown in Figure \ref{smavdm}. In Figure \ref{smavdm}, we have differentiated the SMA and VLA sources into four categories. The first category contains isolated SMA sources that have a single VLA source inside; we found 25 such cases. The second category includes isolated SMA sources that have multiple or grouped ($>$1) VLA sources; we found 9 such cases. The third category contains SMA sources that are grouped within 1,000-10,000 AU but have a single isolated VLA source in them; we found 12 such cases. The fourth category contains grouped SMA sources that have multiple VLA sources; we found 5 such cases. For an isolated SMA source, there is an average of 1.32 VLA sources, and for the grouped SMA sources there is an average of 1.47 VLA sources (shown by the green crosses in Figure \ref{smavdm}). The isolated and grouped SMA objects show relatively equal numbers of VLA objects (within the errors), although there are hints that the number could be increasing. Hence, the trend in Figure \ref{smavdm} is limited by statistical uncertainty. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4, angle=0]{ovrplt_vdm_stats1_SMAvsVdm1_coll.pdf} \vskip -1.0in \caption{The x-axis shows the number of SMA sources between 1,000 AU and 10,000 AU that are either single (isolated) or multiple (grouped), and the y-axis shows the number of VLA sources between 10 AU and 1,000 AU that are either isolated or grouped. The SMA sources are detected at the $\geq$5$\sigma$ level. The scale on each axis is determined by the resolution limit and the primary beam of the respective telescope array. The sizes of the yellow circles are proportional to the number of SMA envelopes (written inside the yellow markers). The dashed green line connects the average number of VLA sources per SMA source. \label{smavdm} } \end{figure} \section{Multi-scale Jeans Analysis} \label{jeansanalysis} As discussed in \S \ref{intro}, the most widely accepted means of support of a cloud structure against the gravitational pull are thermal pressure, turbulent motions, and magnetic fields. For the foregoing SMA and VLA sources, and for the larger regions which enclose them, we tested the observed hierarchical structures under two possible Jeans fragmentation cases. First, we assume that the structures are supported entirely by thermal gas motions. Next, we assume that the structures are supported by the combined effect of both thermal and non-thermal motions. These two cases are useful because they may be considered simple lower and upper limits to the true level of support against gravitational fragmentation and collapse. The non-thermal motions adopted here from observed line widths are simpler than those in numerical simulations of MHD turbulent fragmentation, which are more anisotropic, time-varying, and scale-dependent (\citealt{Padoan99,Padoan02,Hennebelle11,hopkins13}).
Although the terms "non-thermal" and "turbulent" are often used interchangeably, to avoid confusion in this paper we refer to the motions inferred from line widths as "non-thermal" motions. Our non-thermal Jeans analysis simply tests whether turbulence can act as an isotropic pressure, rather than testing turbulent fragmentation models. A gas cloud is said to be Jeans stable against fragmentation when the outward thermal pressure exerted by gas motions balances the inward gravitational pull of the cloud. If the inward gravitational force overcomes the outward pressure support, the system becomes Jeans unstable and can fragment. The critical mass above which the cloud becomes unstable is called the Jeans mass (M$_{\rm{J}}$). We used Equation \ref{eq:1} to calculate the Jeans mass, assuming a spherical geometry at all levels of the cloud hierarchy and assuming that the Jeans length represents the diameter of the sphere \citep{Binney87}, i.e., \begin{equation} \label{eq:1} M_{\rm{J}} = \frac{\pi^{5/2}}{6G^{3/2}}c_{\rm{eff}}^3\rho_{\rm{eff}}^{-1/2}, \end{equation} where $c_{\rm{eff}}$ is the `effective sound speed', $G$ is the universal gravitational constant, and $\rho_{\rm{eff}}$ is the average density of the region assuming spherical geometry. For the first case of pure thermal support of a cloud structure, the thermal Jeans mass $M_{\rm{J}}^{\rm{th}}$ is calculated by setting $c_{\rm{eff}}$ equal to the thermal sound speed, $c_{\rm{s}}$, which is calculated as \begin{equation}\label{soundspeed} c_{\rm{s}} = \sqrt{\frac{\gamma k_{\rm{B}} T}{\mu_{H_2} m_{\rm{H}}}}, \end{equation} where $\gamma$ is the adiabatic index, which is unity for an isothermal medium, $k_{\rm{B}}$ is the Boltzmann constant, $T$ is the average temperature of the region, $\mu_{H_2}$ is the mean molecular weight per hydrogen molecule ($\sim$2.8 for a cloud with 71\% molecular hydrogen, 27\% helium and 2\% metals; \citealt{Kauffmann08}), and $m_{\rm{H}}$ is the mass of a hydrogen atom. For the second case, we applied an upper limit to the thermal fragmentation. Here, we adopted an effective sound speed based on the combined support from both thermal and non-thermal gas motions. We used different molecular line tracers from the literature to trace gas motions at all scales. For each tracer, we used the observed velocity dispersion of the line ($\sigma^{\rm{obs}}$), which comprises both thermal ($\sigma^{\rm{th}}$) and non-thermal ($\sigma^{\rm{nth}}$) components. We then calculated the non-thermal component of the lines by subtracting out the thermal velocity dispersion, i.e., $\sigma^{\rm{nth}}$ = $\sqrt{(\sigma^{\rm{obs}})^2 - (\sigma^{\rm{th}})^2}$, using $\sqrt{k_{\rm{B}} T/\mu m_{\rm{H}}}$ for the thermal velocity dispersion and the appropriate molecular weight, $\mu$, for each tracer (for example, 29 for $^{13}$CO and 17 for NH$_3$). Finally, we added, in quadrature, the non-thermal line widths to the thermal sound speed, i.e., $\sigma^{\rm{th,nth}}$ = $\sqrt{c_{\rm{s}}^2 + (\sigma^{\rm{nth}})^2}$, and used this combined velocity dispersion ($\sigma^{\rm{th,nth}}$) to calculate the Jeans mass in Equation \ref{eq:5}.
For a system that is supported by both thermal and non-thermal motions, the Jeans mass is given as (see \citealt{Palau14,Palau15}) \begin{equation}\label{eq:5} \Bigg[\frac{M_{\rm{J}}^{\rm{th,nth}}}{M_{\odot}} \Bigg] = 0.8 \Bigg[ \frac{\sigma^{\rm{th,nth}}}{0.19~\rm{km~s^{-1}}} \Bigg]^3 \Bigg[\frac{n_{\rm{H_2}}}{10^5~\rm{cm^{-3}}} \Bigg]^{-1/2}. \end{equation} For both conditions of support, the expected number of fragments produced in a structure in any generation is given by the ratio of the total mass of the structure to its Jeans mass. This ratio is also called the Jeans number and is calculated as \begin{equation} N_{\rm{J}} = \frac{M_{\rm{total}}}{M_{\rm{J}}}. \end{equation} We have studied the possibility of Jeans fragmentation for the observed multi-scale substructures in the Perseus molecular cloud. We performed this analysis in a hierarchical fashion from the cloud scale to the scale of protostellar objects in Perseus (the approximate size scale of each structure is shown in Figure \ref{hierarchy}). The fragmenting scale is hereafter called the parent structure and its subsequent fragments are called child structures. For example, if the cloud is the parent structure then the clumps are the child structures, and so on. We define the formation efficiency of fragments as the ratio of the number of child structures to the Jeans number of the parent structure. This definition is similar to the core formation efficiency (CFE), which is defined as the ratio of the number of cores detected in a clump to the Jeans number of that particular clump (\citealt{Bontemps10, Palau15}). Since the children are formed from the available mass of the parent structure, the formation efficiency of a child structure cannot be greater than one. \subsection{Cloud to Clump} \label{cloudtoclump} For the Perseus molecular cloud, the largest scale fragmentation is the cloud to clump scale. Perseus has a mass of 3.3 $\times$ 10$^4$ M$_{\odot}$ and covers an area of roughly 66 deg$^2$ above an extinction of $A_V$ = 1 mag \citep{Sadavoy10}. These measurements originally assumed a different distance to the cloud; for consistency, we corrected them to a distance of 230 pc. The cloud has been studied extensively in dust and molecular line emission to identify its clumps \citep{Ridge06,Sadavoy14,Zari16}. Clumps are relatively dense parsec-scale structures that are often defined as the regions in which most stars form (regions within the $A_V$ $\sim$ 7 mag contour, \citealt{Andre10,Lada10,Evans14}). Based on this definition, there are seven clumps in the Perseus cloud \citep{Sadavoy14,Mercimek17}. For our Jeans analysis of the Perseus cloud, we first assumed that only thermal pressure is supporting the cloud against its self-gravity. \cite{Zari16} give a line-of-sight average temperature map for the Perseus cloud from modified blackbody fits to thermal dust emission. Based on this temperature map, we adopted an average dust temperature of 18 K for our Jeans analysis. The transition between the atomic and molecular phases takes place between $A_V$ $\sim$ 1 and 2 mag, so we perform the Jeans analysis for the cloud material at $A_V$ > 2. The corresponding density at $A_V$ = 2 in the Perseus molecular cloud is 200 cm$^{-3}$ \citep{Evans09}. Using these parameters, we obtain a thermal Jeans mass of $\sim$35 M$_{\odot}$ for the Perseus cloud. The corresponding cloud mass at $A_V$ $>$ 2 is $\sim$4000 M$_{\odot}$, which gives a thermal Jeans number of $\sim$120.
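As a numerical check of these cloud-scale estimates, the short Python sketch below evaluates Equations \ref{eq:1}, \ref{soundspeed}, and \ref{eq:5} with the values quoted above ($T$ = 18 K, $n_{\rm{H_2}}$ = 200 cm$^{-3}$, $\mu_{H_2}$ = 2.8, $\sigma^{\rm{th,nth}}$ = 0.9 km s$^{-1}$); it is included only to make the arithmetic reproducible.

\begin{verbatim}
# Jeans mass from the equation above, with gamma = 1; cgs units.
import numpy as np

G, k_B, m_H, M_sun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33
mu_H2 = 2.8

def jeans_mass(c_eff, n_H2):
    """Jeans mass [g] for effective sound speed c_eff [cm/s]
    and H2 number density n_H2 [cm^-3]."""
    rho = mu_H2 * m_H * n_H2
    return np.pi**2.5 / (6.0 * G**1.5) * c_eff**3 / np.sqrt(rho)

# Thermal-only support: sound speed for T = 18 K.
c_s = np.sqrt(k_B * 18.0 / (mu_H2 * m_H))     # ~0.23 km/s
M_J = jeans_mass(c_s, 200.0) / M_sun
print(M_J)                                    # ~35 M_sun
print(4000.0 / M_J)                           # Jeans number ~120

# Combined support: sigma_th,nth = 0.9 km/s as c_eff.
print(jeans_mass(0.9e5, 200.0) / M_sun)       # ~2000 M_sun
\end{verbatim}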
This Jeans number far exceeds the observed number of clumps (7), and leads to a clump formation efficiency in Perseus of only 0.06. Molecular clouds, however, are unlikely to be supported against fragmentation by thermal pressure alone. In particular, clouds show substantial non-thermal motions that can provide additional support. For example, $^{13}$CO observations in Perseus \citep{Ridge03, Kirk10} show a typical velocity dispersion of 0.9 km s$^{-1}$, whereas the thermal line width of this molecule is expected to be $<$ 0.1 km s$^{-1}$; non-thermal motions therefore dominate the line widths at the cloud scale. The total Jeans mass using $\sigma^{\rm{th,nth}}$ = 0.9 km s$^{-1}$ is $\sim$2000 M$_{\odot}$, assuming a typical cloud density of 200 cm$^{-3}$ for material at $A_V$ $>$ 2, which is appropriate for material traced by $^{13}$CO \citep{Evans09}. With this Jeans mass, we find a Jeans number of 2 and a Jeans efficiency of 3.8. An efficiency greater than unity is not physical. Additional factors not considered in this analysis, such as magnetic fields, can provide support in the low-density environments of clouds. \subsection{Clump to Core} \label{clumptocore} For the second level of the hierarchy, we explored the scale from clumps to cores. Cores, with size scales of $\sim$0.1 pc, reside in the clumps. \cite{Sadavoy10} used SCUBA (850 $\micron$) and the \emph{Spitzer Space Telescope} (3.6-70 $\micron$) to explore the dense cores in Perseus. They classified the sub-mm cores that were found with SCUBA as starless or protostellar using point source photometry from $Spitzer$ wide field surveys (see \citealt{Sadavoy10} for details). The details of the individual starless and protostellar cores in each clump are presented in \cite{Sadavoy10}. \cite{Mercimek17} characterized the distribution of these cores inside the clumps. As for the previous level of the hierarchy, we first tested the expected number of thermal Jeans fragments against the observed number of fragments. To calculate the Jeans number of the clumps, we used the line-of-sight averaged temperatures and masses derived in \cite{Sadavoy14}. Table \ref{clumps} gives the Jeans masses, numbers, and efficiencies for each clump assuming pure thermal support. We used the masses and areas from \cite{Mercimek17} to determine the average density of each clump at $A_{\rm{V}}$ $>$ 7 mag and the dust temperatures from \cite{Sadavoy14} to estimate the thermal support. The velocity dispersion at the scales where $A_{\rm{V}}$ $>$ 7 mag can be probed using the C$^{18}$O line width; the typical C$^{18}$O line width in Perseus is 0.4 km s$^{-1}$ \citep{Hatchell05}. We used this average velocity dispersion to find $\sigma^{\rm{th,nth}}$ and estimate the Jeans parameters assuming that both thermal and non-thermal motions support the clumps. Table \ref{clumps} also gives an estimate of the Jeans mass, Jeans number, and Jeans efficiency for each clump assuming this combined thermal and non-thermal support. We find values of $\epsilon^{\rm{th}}$ between 0.06 and 0.6, similar to the independent estimates of the CFE by \cite{Palau15} using a different sample of objects and observations. We find an average $\epsilon^{\rm{th}}$ of 0.2. For the combined support, the CFE is $>$ 1 for most of the clumps. Figure \ref{NjVsN_cl} compares the number of enclosed cores in each clump ($Num_{\rm{CORE}}$) with the corresponding Jeans number of the clumps ($N_{\rm{J, CLUMP}}$).
The plot shows that the number of cores increases with the Jeans number of the clumps (Pearson's correlation coefficient = 0.8). This agreement suggests that thermal Jeans fragmentation may play a significant role in forming cores. Nevertheless, there are systematically fewer cores than predicted, which suggests that thermal pressure alone does not provide the full support. In Figure \ref{NjVsN_cl}, we assume Poisson statistics in estimating the uncertainty in the number of cores. Thus the uncertainty in the number of cores is given by the square root of that number, which is an upper limit on the uncertainty. For the Jeans number of the clumps, the sources of uncertainty are the mass, temperature, and area of the clump. However, the uncertainty in the mass is the dominant source of error (correct within a factor of a few). Since $M_{\rm{J}} \propto \rho_{\rm{eff}}^{-1/2} \propto M^{-1/2}$ at fixed temperature and area, the Jeans number scales as $N_{\rm{J}} = M/M_{\rm{J}} \propto M^{3/2}$, so a factor of 2 uncertainty in the mass translates into a factor of $2^{3/2} \approx 3$ in the Jeans number. We propagated the uncertainties in this way and found that the Jeans number is uncertain up to a factor of 3, if we take a factor of 2 as the lower limit on the mass uncertainty. This holds at all other levels of the hierarchy as well, so we applied the same technique for the error estimates in the other hierarchies. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4]{paper_jeansNumVsNum_clump1_log.pdf} \vskip -1.0in \caption{Comparison between the number of enclosed cores and the Jeans number of the clumps. The errors in the number of cores assume Poisson statistics, and the Jeans numbers are correct within a factor of 3. \label{NjVsN_cl} } \end{figure} \begin{deluxetable*}{lccccccccc} \centering \tablecolumns{10} \tablecaption{Jeans analysis in the clumps \label{clumps}} \tablehead{\colhead{Clump} & \colhead{Mass\tablenotemark{(a)}} & \colhead{Area\tablenotemark{(a)}} & \colhead{M$_{\rm{J}}$$^{\rm{th}}$} & \colhead{M$_{\rm{J}}$$^{\rm{th,nth}}$} & \colhead{N$_{\rm{J}}$$^{\rm{th}}$} & \colhead{N$_{\rm{J}}$$^{\rm{th,nth}}$} & \colhead{Num$_{\rm{CORE}}$} & \colhead{$\epsilon^{\rm{th}}$\tablenotemark{(b)}} & \colhead{$\epsilon^{\rm{th,nth}}\tablenotemark{(b)}$}\\ \colhead{} & \colhead{[M$_{\odot}$]} & \colhead{[pc$^2$]} & \colhead{[M$_{\odot}$]} & \colhead{[M$_{\odot}$]} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{}} \startdata B5 & 62 & 0.32 & 3.8 & 41.7 & 16.2 & 1.5 & 1 & 0.06 & 0.67 \\ B1-E & 88 & 0.57 & 5.1 & 54.3 & 17.2 & 1.6 & 0 & 0.0 & 0.0 \\ L1448 & 159 & 0.48 & 2.9 & 34.6 & 55.1 & 4.6 & 4 & 0.07 & 0.87 \\ L1455 & 251 & 1.3 & 4.7 & 57.9 & 53.1 & 4.3 & 7 & 0.13 & 1.61 \\ IC 348 & 511 & 2.9 & 8.7 & 79.3 & 58.6 & 6.4 & 35 & 0.6 & 5.43 \\ NGC1333 & 568 & 2.0 & 4.8 & 54.0 & 119.0 & 10.5 & 42 & 0.35 & 4.0 \\ B1 & 598 & 2.5 & 5.8 & 62.8 & 103.9 & 9.5 & 23 & 0.22 & 2.41 \\ \enddata \tablenotetext{(a)}{ For the regions that are contoured by an equivalent of A$_{V}$ $>$ 7 mag in $Herschel$-derived column density maps \citep{Mercimek17}.} \tablenotetext{(b)}{ Efficiency is calculated by taking the ratio of the number of cores to the Jeans number of the clumps considering both thermal ($\epsilon^{\rm{th}}$) and combined ($\epsilon^{\rm{th,nth}}$) support.} \end{deluxetable*} \subsection{Core to envelope} \label{coretoenvelope} At the next level of the hierarchy, we explored the scales of cores to envelopes (see Figures \ref{hierarchy} and \ref{hierarchy_perseus} for the difference between the core and envelope scales). The properties of the cores are discussed in \S \ref{clumptocore}. For the envelopes, we used the SMA observations from MASSES discussed in \S \ref{sourcefitting} and \ref{sourceproperties}.
To estimate the number of envelopes present in each core, we examined the spatial correspondence between the SMA envelopes and the SCUBA cores. Figure \ref{envMapping} shows the distribution of cores and envelopes in IC 348. The masses and areas of the cores are taken from \cite{Sadavoy10}. The positions of the SMA envelopes are the peak positions obtained by fitting the sources as explained in \S \ref{sourcefitting}. To determine whether or not the envelopes are spatially coincident with the dense cores, we used a set of criteria as outlined below. \begin{figure}[tbh] \centering \includegraphics[scale=0.32, angle = 270]{M16_coresVsSma_ic348e.pdf} \caption{Positions of cores and envelopes in IC 348. The cyan circles are protostellar cores, the yellow circles are starless cores, and the magenta stars are the SMA envelopes. The background image is the 350 $\micron$ $Herschel$ dust emission map. \label{envMapping} } \end{figure} First, we found the core that is closest to a given envelope. Second, we used a minimum distance criterion to decide whether or not the envelope is associated with its nearest core. For simplicity, we consider an envelope associated with a core if it lies within one core radius of the core center, where the radius is taken to be the effective radius. This effective radius is calculated from the area of the core, $\sqrt{A/\pi}$. Applying these selection criteria, we found either 0, 1, 2, or 3 envelopes inside a single core by counting the number of SMA sources. If an envelope is expected in a core from pre-existing data \citep{Enoch09} but is not detected with the SMA, we consider that core to have 0 envelopes for consistency. The minimum envelope distance is calculated in terms of core radii by dividing the distance between the centers of an SMA envelope and its nearest core by the radius of that core. Figure \ref{nnd} shows the histogram of the minimum envelope distance. The mean and median of the histogram are $\sim$0.2 and $\sim$0.15, showing that the envelopes lie mostly near the core centers. This degree of central concentration is highly significant compared to a random distribution of envelopes within cores. It is also consistent with \cite{Jorgensen07}, who find that young stars are primarily found in the interiors of dense cores. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4]{M16_hist_nnd.pdf} \vskip -1.0in \caption{Distribution of the nearest envelope distance between the envelopes and the cores in terms of the core radii. \label{nnd} } \end{figure} \cite{Rosolowsky08} measured the velocity dispersion in cores and core candidates in the Perseus molecular cloud using ammonia observations with the Green Bank Telescope (GBT). They find a typical gas kinetic temperature of $\sim$11 K and a median velocity dispersion of $\sim$0.18 km s$^{-1}$. We used these values and the core properties from \cite{Sadavoy10} to perform the Jeans analysis at the core and envelope scales. Table \ref{cores} summarizes the Jeans instability of the Perseus cores for both thermal support and combined thermal and non-thermal support. The table lists only the cores where envelopes were sampled by the SMA, so it does not represent all the cores in Perseus (see \citealt{Sadavoy10} for all the SCUBA detected cores in Perseus). The average envelope formation efficiency for a thermally supported core ($\epsilon^{\rm{th}}$) is $\sim$0.4, and for the combined support ($\epsilon^{\rm{th,nth}}$) it is $\sim$1.
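A minimal sketch of this association procedure is given below, where the core and envelope positions and core areas are hypothetical placeholders; the real analysis used the measured SCUBA core areas and SMA peak positions.

\begin{verbatim}
# Associate each envelope with its nearest core if the separation is
# within one effective core radius, R_eff = sqrt(A / pi).
import numpy as np

rng = np.random.default_rng(1)
core_xy = rng.uniform(0.0, 1.0, (5, 2))       # core centers [pc]
core_area = rng.uniform(1e-3, 1e-2, 5)        # core areas [pc^2]
env_xy = core_xy[rng.integers(0, 5, 8)] \
         + 0.01 * rng.standard_normal((8, 2)) # envelope positions [pc]

def associate(env_xy, core_xy, core_area):
    """Index of the associated core for each envelope, or -1 if the
    envelope is farther than one core radius from its nearest core."""
    r_eff = np.sqrt(core_area / np.pi)
    owners = np.full(len(env_xy), -1)
    for i, pos in enumerate(env_xy):
        d = np.hypot(*(core_xy - pos).T)      # distances to all cores
        j = np.argmin(d)                      # nearest core
        if d[j] <= r_eff[j]:
            owners[i] = j
    return owners

owners = associate(env_xy, core_xy, core_area)
n_env = np.bincount(owners[owners >= 0], minlength=len(core_xy))
print(n_env)                                  # envelopes per core
\end{verbatim}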
\begin{deluxetable*}{lccccccccc} \centering \tabletypesize{\scriptsize} \tablecolumns{10} \tablecaption{Jeans analysis in the cores \label{cores}} \tablehead{\colhead{Core} & \colhead{Mass\tablenotemark{(a)}} & \colhead{Area\tablenotemark{(a)}} & \colhead{M$_{\rm{J}}$$^{\rm{th}}$} & \colhead{M$_{\rm{J}}$$^{\rm{th,nth}}$} & \colhead{N$_{\rm{J}}$$^{\rm{th}}$} & \colhead{N$_{\rm{J}}$$^{\rm{th,nth}}$} & \colhead{Num$_{\rm{ENVELOPE}}$} & \colhead{$\epsilon^{\rm{th}}$\tablenotemark{(b)}} & \colhead{$\epsilon^{\rm{th,nth}}\tablenotemark{(b)}$}\\ \colhead{} & \colhead{[M$_{\odot}$]} & \colhead{[pc$^2$]} & \colhead{[M$_{\odot}$]} & \colhead{[M$_{\odot}$]} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{}} \startdata J032522.2+304514 & 3.6 & 0.00478 & 0.5 & 1.2 & 7.4 & 3.0 & 1 & 0.14 & 0.34 \\ J032536.1+304514 & 17.3 & 0.00985 & 0.4 & 1.0 & 44.1 & 17.8 & 3 & 0.07 & 0.17 \\ J032538.9+304402 & 4.9 & 0.0043 & 0.4 & 1.0 & 12.5 & 5.0 & 1 & 0.08 & 0.2 \\ J032739.2+301259 & 2.0 & 0.00283 & 0.5 & 1.1 & 4.3 & 1.7 & 1 & 0.23 & 0.57 \\ J032742.9+301228 & 2.1 & 0.00385 & 0.6 & 1.4 & 3.8 & 1.5 & 2 & 0.53 & 1.31 \\ J032832.2+311108 & 2.0 & 0.00694 & 0.9 & 2.2 & 2.3 & 0.9 & 0 & 0.0 & 0.0 \\ J032834.5+310702 & 0.4 & 0.00102 & 0.5 & 1.2 & 0.7 & 0.3 & 1 & 1.38 & 3.42 \\ J032836.9+311326 & 3.7 & 0.00724 & 0.7 & 1.7 & 5.4 & 2.2 & 1 & 0.18 & 0.46 \\ J032839.2+310556 & 3.1 & 0.00785 & 0.8 & 1.9 & 4.0 & 1.6 & 0 & 0.0 & 0.0 \\ J032845.2+310549 & 1.4 & 0.00454 & 0.8 & 1.9 & 1.7 & 0.7 & 0 & 0.0 & 0.0 \\ J032855.2+311437 & 12.4 & 0.01208 & 0.5 & 1.3 & 23.0 & 9.3 & 2 & 0.09 & 0.22 \\ J032900.3+311201 & 0.8 & 0.00212 & 0.6 & 1.5 & 1.3 & 0.5 & 1 & 0.79 & 1.96 \\ J032901.3+312031 & 15.1 & 0.01094 & 0.5 & 1.1 & 33.3 & 13.4 & 1 & 0.03 & 0.07 \\ J032903.6+311455 & 7.5 & 0.00817 & 0.5 & 1.3 & 14.5 & 5.8 & 1 & 0.07 & 0.17 \\ J032906.9+311725 & 1.5 & 0.00229 & 0.4 & 1.1 & 3.5 & 1.4 & 1 & 0.28 & 0.71 \\ J032907.4+312155 & 5.9 & 0.00817 & 0.6 & 1.4 & 10.1 & 4.1 & 1 & 0.1 & 0.24 \\ J032910.1+311331 & 24.5 & 0.00636 & 0.2 & 0.6 & 103.6 & 41.7 & 1 & 0.01 & 0.02 \\ J032910.7+311824 & 10.8 & 0.01169 & 0.6 & 1.4 & 19.2 & 7.7 & 2 & 0.1 & 0.26 \\ J032912.0+311306 & 15.2 & 0.00754 & 0.3 & 0.8 & 44.5 & 17.9 & 2 & 0.04 & 0.11 \\ J032913.4+311354 & 4.6 & 0.00528 & 0.5 & 1.2 & 9.7 & 3.9 & 1 & 0.1 & 0.26 \\ J032917.4+312748 & 3.3 & 0.0095 & 0.9 & 2.2 & 3.8 & 1.5 & 1 & 0.27 & 0.66 \\ J032918.7+312312 & 1.4 & 0.00264 & 0.5 & 1.3 & 2.7 & 1.1 & 1 & 0.37 & 0.92 \\ J032925.4+312818 & 0.9 & 0.00246 & 0.6 & 1.5 & 1.6 & 0.6 & 1 & 0.63 & 1.56 \\ J032951.4+313904 & 1.4 & 0.00342 & 0.6 & 1.5 & 2.3 & 0.9 & 1 & 0.44 & 1.1 \\ J033120.7+304531 & 3.4 & 0.00608 & 0.6 & 1.5 & 5.4 & 2.2 & 1 & 0.18 & 0.46 \\ J033217.6+304947 & 7.2 & 0.0095 & 0.6 & 1.5 & 12.3 & 4.9 & 1 & 0.08 & 0.2 \\ J033313.2+311956 & 2.1 & 0.00581 & 0.7 & 1.9 & 2.9 & 1.2 & 0 & 0.0 & 0.0 \\ J033315.9+310656 & 14.6 & 0.01496 & 0.6 & 1.4 & 25.1 & 10.1 & 1 & 0.04 & 0.1 \\ J033316.4+310750 & 6.2 & 0.00916 & 0.6 & 1.5 & 9.9 & 4.0 & 1 & 0.1 & 0.25 \\ J033317.8+310932 & 17.8 & 0.02488 & 0.8 & 1.9 & 23.0 & 9.3 & 1 & 0.04 & 0.11 \\ J033318.2+310608 & 1.4 & 0.00478 & 0.8 & 2.0 & 1.7 & 0.7 & 1 & 0.6 & 1.49 \\ J033321.0+310732 & 17.5 & 0.01327 & 0.5 & 1.2 & 36.0 & 14.5 & 2 & 0.06 & 0.14 \\ J033327.1+310707 & 3.0 & 0.00785 & 0.8 & 2.0 & 3.8 & 1.5 & 1 & 0.26 & 0.65 \\ J034351.0+320321 & 6.1 & 0.01057 & 0.7 & 1.7 & 8.7 & 3.5 & 2 & 0.23 & 0.57 \\ J034356.7+320051 & 10.0 & 0.01094 & 0.6 & 1.4 & 18.0 & 7.3 & 1 & 0.06 & 0.14 \\ J034357.2+320303 & 6.9 & 0.00694 & 0.5 & 1.2 & 14.5 & 5.8 & 2 & 0.14 & 0.34 \\ 
J034401.4+320157 & 3.4 & 0.00554 & 0.6 & 1.4 & 5.9 & 2.4 & 0 & 0.0 & 0.0 \\ J034412.7+320133 & 0.1 & 0.00021 & 0.4 & 1.0 & 0.1 & 0.1 & 1 & 7.92 & 19.65 \\ J034421.0+315923 & 2.3 & 0.00882 & 1.0 & 2.5 & 2.3 & 0.9 & 1 & 0.44 & 1.08 \\ J034443.9+320132 & 3.5 & 0.00754 & 0.7 & 1.8 & 4.9 & 2.0 & 1 & 0.2 & 0.5 \\ \enddata \tablenotetext{(a)}{ \cite{Sadavoy10}.} \tablenotetext{(b)}{ Efficiency is calculated by taking the ratio of the number of envelopes to the Jeans number of the cores considering both thermal ($\epsilon^{\rm{th}}$) and combined ($\epsilon^{\rm{th,nth}}$) support.} \end{deluxetable*} Figure \ref{NjVsN_core} compares the number of enclosed envelopes with the thermal Jeans number of their parent cores, in the same format as Figure \ref{NjVsN_cl} for cores in clumps. The magenta dashed line represents the $\epsilon^{\rm{th}} = 1$ relation, where thermal Jeans fragmentation predicts the exact number of fragments. The relation between the number of enclosed envelopes and the Jeans number of the cores is hard to constrain because of the large uncertainties. Nevertheless, the average number of envelopes is less than that predicted by the thermal Jeans analysis of the cores. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4]{paper_jeansNumVsNum_core1_log.pdf} \vskip -1.0in \caption{Comparison of the number of enclosed envelopes with the Jeans number of the parent cores considering pure thermal Jeans analysis. The green circles have Jeans number > 1 and the hollow squares have Jeans number < 1. The magenta dashed line represents the $\epsilon^{\rm{th}}$ = 1 relation. The uncertainties in the number of enclosed envelopes follow Poisson statistics, which gives an upper limit on the uncertainty. The Jeans numbers of the cores are uncertain within a factor of 3. \label{NjVsN_core} } \end{figure} Figure \ref{box_core} shows a box-and-whisker plot of the distribution of the Jeans numbers of the cores. The plot is shown for two different populations of cores. The first population consists of the cores that have either no envelopes or one envelope. The second population corresponds to cores with two or three envelopes. A Kolmogorov-Smirnov (K-S) test comparing these two populations gives a p-value of $\sim$2 percent, so the distributions are significantly different at the 95 percent confidence level. Overall, Figure \ref{box_core} shows an increase in the number of enclosed envelopes with increasing Jeans number of the cores. \begin{figure}[tbh] \centering \includegraphics[scale=0.42]{coreToEnvelope_jeansnumhist2pop_box.pdf} \caption{Box-and-whisker plot showing the distributions of the Jeans number of cores for two different populations of enclosed envelopes. The first population constitutes the cores that have either 0 or 1 envelopes inside them. The second population constitutes the cores that have either 2 or 3 envelopes inside them. The numbers at the right side of the box-and-whisker diagram represent the 95$^{th}$ percentile, 3$^{rd}$ quartile, mean, median, 1$^{st}$ quartile, and the 5$^{th}$ percentile, going from top to bottom respectively. Inside the box plot, the red square shows the value of the mean and the red line shows the value of the median. \label{box_core} } \end{figure}
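The two-sample K-S comparison used here is standard; a minimal sketch with \emph{scipy} is shown below, where the two lognormal samples are synthetic stand-ins for the measured Jeans-number populations.

\begin{verbatim}
# Two-sample K-S test between the Jeans numbers of cores with 0-1
# enclosed envelopes and cores with 2-3 enclosed envelopes.
# The lognormal samples below are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
nj_single = rng.lognormal(mean=0.0, sigma=1.0, size=30)
nj_multi = rng.lognormal(mean=1.0, sigma=1.0, size=10)

stat, p_value = ks_2samp(nj_single, nj_multi)
print(stat, p_value)   # p < 0.05 -> significantly different at 95%
\end{verbatim}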
\subsection{Envelope to protostellar objects} \label{envelopetodisk} The envelope scale structures were probed with the SMA as part of the MASSES project, while the protostellar objects were probed with the VLA as part of the VANDAM project. Below we explain the procedure for estimating the masses and temperatures of the SMA envelopes, which are used to perform the Jeans analysis at the envelope scale. \subsubsection{Envelope mass estimation} \label{mass estimation} The masses of the SMA envelopes are estimated from the integrated fluxes of the SMA sources using Equation \ref{eq:2} \citep{Jorgensen07,Lee15}, which converts the 1.3 mm thermal dust emission into a mass assuming that the emission is optically thin at 1.3 mm: \begin{equation} \label{eq:2} \begin{split} M_{\rm{1.3~mm}} = 1.3~M_{\odot} \Bigg(\frac{F_{\rm{1.3~mm}}}{1~\rm{Jy}} \Bigg)\Bigg(\frac{D}{200~\rm{pc}} \Bigg)^2 \\ \times \Bigg \{\rm{exp} \Bigg[ 0.36 \Bigg( \frac{30~\rm{K}}{{\it T}_{\rm{d}}} \Bigg) \Bigg] - 1\Bigg\} \end{split} \end{equation} where $F_{1.3 \rm{mm}}$ is the integrated flux density emitted by the source at 1.3 mm, $D$ is the distance to the source (230 pc), and $T_{\rm{d}}$ is the dust temperature of the envelope. Equation \ref{eq:2} assumes a power-law dust opacity calculated from the \cite{Ossenkopf94} models with thin ice mantles coagulated at a density of 10$^6$ cm$^{-3}$. We also assume the canonical gas-to-dust ratio of 100 \citep{Predehl95}. To estimate $T_{\rm{d}}$, we used the model described in Equation 2 of \cite{Chandler00}. In brief, this model assumes a spherically symmetric envelope surrounding a central protostar, with a temperature profile that follows a power law, $T \propto r^{-q}$, where the index $q$ is a function of the dust emissivity index ($\beta$), $q = 2/(4 + \beta)$, and $r$ is the distance from the central protostar. If $L_{\rm{bol}}$ is the bolometric luminosity of the protostar, the temperature of the envelope at a distance $r$ is \begin{equation}\label{eq:3} T(r) = 60\Bigg(\frac{r}{2 \times 10^{15}~\rm{m}} \Bigg)^{-q} \Bigg(\frac{L_{\rm{bol}}}{10^5~L_{\odot}} \Bigg)^{q/2} \rm{K}. \end{equation} Limited by the resolution of the SMA data, we calculated the envelope temperature at a distance of 1,000 AU from the central protostar. Consistent with the dust emissivity used in calculating the masses of the SMA sources, we used $q$ = 0.33. For $L_{\rm{bol}}$, we used the values from \cite{Tobin16}. Table \ref{envelopes_jeans} gives the temperature measurements at 1,000 AU and the resulting masses for each envelope. The table also flags the envelopes with unreliable source fits; for these objects, the derived properties such as the mass and the Jeans mass are also unreliable. Such envelopes are designated as ``B'' in Table \ref{envelopes_jeans}. In contrast, the parameter estimates for the envelopes that belong to group ``A'' are more robust. For the further analysis below, we consider only the envelopes that belong to group ``A''. \subsubsection{Jeans Mass of Envelopes} \label{envelope jeans mass} Table \ref{envelopes_jeans} gives the Jeans instability parameters for the envelopes assuming pure thermal support and assuming combined thermal and non-thermal support. For the pure thermal support, we used the mass and temperature estimates given in Table \ref{envelopes_jeans}. For the combined thermal and non-thermal support, we used the N$_2$H$^+$ line width measurements from \cite{Kirk07}. The critical density of N$_2$H$^+$ is $\sim$10$^5$ cm$^{-3}$, so it is well suited to measuring line widths at envelope densities in Perseus. We calculated a typical velocity dispersion at envelope scales of $\sim$0.13 km s$^{-1}$ from the line width measurements presented in Table 3 of \cite{Kirk07}.
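For reference, Equations \ref{eq:2} and \ref{eq:3} are easy to evaluate; the sketch below implements them in Python, where the input luminosity is a hypothetical example rather than a value from \cite{Tobin16}.

\begin{verbatim}
# Envelope mass from the 1.3 mm flux, and dust temperature at a given
# radius from the power-law profile above (q = 0.33).
import numpy as np

def envelope_mass(F_jy, T_d, D_pc=230.0):
    """Mass [M_sun] for integrated flux F_jy [Jy], dust temperature
    T_d [K], and distance D_pc [pc]."""
    return 1.3 * F_jy * (D_pc / 200.0)**2 \
               * (np.exp(0.36 * (30.0 / T_d)) - 1.0)

def envelope_temperature(L_bol, r_au=1000.0, q=0.33):
    """Dust temperature [K] at radius r_au [AU] for bolometric
    luminosity L_bol [L_sun]."""
    r_m = r_au * 1.496e11                     # AU -> m
    return 60.0 * (r_m / 2.0e15)**(-q) * (L_bol / 1.0e5)**(q / 2.0)

T_d = envelope_temperature(1.0)       # ~21 K for 1 L_sun (example)
print(envelope_mass(0.25, T_d))       # ~0.3 M_sun for 250 mJy
\end{verbatim}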
Figure \ref{NjVsN_envelope} compares the number of VLA sources with the Jeans number of the SMA envelopes, assuming pure thermal support. The green solid circles in Figure \ref{NjVsN_envelope} represent the envelopes for which N$_{\rm{J}}$ $>$ 1 and the hollow square markers represent the envelopes for which N$_{\rm{J}}$ $<$ 1. The median Jeans number of the envelopes increases with the number of enclosed protostellar objects. Nevertheless, the robustness of this relation is limited by large uncertainties. There is a significant population of envelopes with N$_{\rm{J}}$ $<$ 1, which are less likely to fragment and form further stars. Hence, for the further analysis we are only interested in the envelopes with N$_{\rm{J}}$ $>$ 1. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4]{paper_jeansNumVsNum_env1_log.pdf} \vskip -1.0in \caption{Comparison of the number of protostellar objects with the Jeans number of the parent envelopes from the thermal Jeans analysis. The green solid circles have N$_{\rm{J}}$ > 1 and the hollow squares have N$_{\rm{J}}$ < 1. The magenta dashed line represents the $\epsilon^{\rm{th}}$ = 1 relation for perfect thermal Jeans fragmentation. Uncertainties on both axes are calculated as in Figure \ref{NjVsN_cl}. \label{NjVsN_envelope} } \end{figure} \startlongtable \begin{deluxetable*}{lccccccccccc} \centering \tabletypesize{\scriptsize} \tablecolumns{12} \tablecaption{Jeans analysis in the envelopes \label{envelopes_jeans}} \tablehead{\colhead{Envelope\tablenotemark{(a)}} & \colhead{Mass} & \colhead{Area} & \colhead{T$_{1000AU}$} & \colhead{M$_{\rm{J}}$$^{\rm{th}}$\tablenotemark{(b)}} & \colhead{M$_{\rm{J}}$$^{\rm{th,nth}}$\tablenotemark{(b)}} & \colhead{N$_{\rm{J}}$$^{\rm{th}}$\tablenotemark{(c)}} & \colhead{N$_{\rm{J}}$$^{\rm{th,nth}}$\tablenotemark{(c)}} & \colhead{Num$_{\rm{PROTOSTAR}}$} & \colhead{$\epsilon^{\rm{th}}$\tablenotemark{(d)}} & \colhead{$\epsilon^{\rm{th,nth}}\tablenotemark{(d)}$} & \colhead{Group\tablenotemark{(e)}}\\ \colhead{} & \colhead{[M$_{\odot}$]} & \colhead{[$10^{-5}$ pc$^2$]} & \colhead{[K]} & \colhead{[M$_{\odot}$]} & \colhead{[M$_{\odot}$]} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{}} \startdata B1-bN & 0.365 & 3.785798 & 17 & 0.08 & 0.11 & 4.3714 & 3.2674 & 1 & 0.23 & 0.31 & A \\ IC348 MMS1\tablenotemark{(f)} & 0.504 & 3.648606 & 22 & 0.1 & 0.12 & 4.9748 & 4.0686 & 2 & 0.4 & 0.49 & A \\ IC348 MMS2\tablenotemark{(f)} & 0.083 & 1.365433 & 22 & 0.12 & 0.15 & 0.6963 & 0.5695 & 1 & 1.44 & 1.76 & B \\ IRAS4B$'$ & 0.269 & 1.299492 & 26 & 0.08 & 0.09 & 3.3 & 2.8329 & 1 & 0.3 & 0.35 & B \\ L1448IRS3\tablenotemark{(g)} & 0.252 & 6.037167 & 29 & 0.32 & 0.36 & 0.7893 & 0.6998 & 1 & 2.53 & 2.86 & A \\ L1448NW\tablenotemark{(g)} & 0.163 & 3.69774 & 29 & 0.27 & 0.31 & 0.5953 & 0.5278 & 1 & 1.68 & 1.89 & A \\ L1451-MMS & 0.088 & 1.240456 & 12 & 0.05 & 0.07 & 1.9028 & 1.2451 & 1 & 0.53 & 0.8 & B \\ Per-bolo-45-SMM\tablenotemark{(h)} & 0.245 & 13.109837 & 12 & 0.16 & 0.25 & 1.5006 & 0.9819 & 0 & 0.0 & 0.0 & A \\ Per-bolo-58 & 0.213 & 10.513288 & 12 & 0.15 & 0.23 & 1.4362 & 0.9398 & 1 & 0.7 & 1.06 & A \\ Per-emb-1 & 0.337 & 3.011857 & 23 & 0.11 & 0.14 & 2.9991 & 2.4762 & 1 & 0.33 & 0.4 & A \\ Per-emb-2 & 0.897 & 1.390417 & 20 & 0.03 & 0.04 & 27.6627 & 22.0026 & 2 & 0.11 & 0.14 & B \\ Per-emb-3 & 0.079 & 2.858335 & 18 & 0.16 & 0.21 & 0.4888 & 0.3756 & 1 & 2.05 & 2.66 & B \\ Per-emb-5 & 0.357 & 2.248882 & 22 & 0.08 & 0.1 & 4.4256 & 3.592 & 2 & 0.68 & 0.84 & A \\ Per-emb-8 & 0.172 & 6.709789 & 24 & 0.31 &
0.37 & 0.5501 & 0.4626 & 1 & 1.82 & 2.16 & A \\ Per-emb-9 & 0.223 & 13.97958 & 19 & 0.33 & 0.43 & 0.6707 & 0.5211 & 1 & 1.49 & 1.92 & A \\ Per-emb-10 & 0.075 & 6.983227 & 19 & 0.34 & 0.44 & 0.2213 & 0.1719 & 1 & 4.52 & 5.82 & A \\ Per-emb-10-SMM & 0.024 & 1.399501 & 19 & 0.18 & 0.23 & 0.1364 & 0.106 & 0 & 0.0 & 0.0 & B \\ Per-emb-12 & 3.16 & 1.586092 & 29 & 0.03 & 0.04 & 99.6786 & 87.7338 & 2 & 0.02 & 0.02 & B \\ Per-emb-13 & 1.013 & 1.299492 & 26 & 0.04 & 0.05 & 24.1226 & 20.7087 & 1 & 0.04 & 0.05 & B \\ Per-emb-14 & 0.153 & 1.590404 & 19 & 0.08 & 0.1 & 1.8672 & 1.464 & 1 & 0.54 & 0.68 & B \\ Per-emb-15 & 0.097 & 4.269399 & 18 & 0.19 & 0.25 & 0.5156 & 0.3909 & 0 & 0.0 & 0.0 & A \\ Per-emb-16 & 0.131 & 6.885163 & 18 & 0.23 & 0.3 & 0.5668 & 0.4297 & 1 & 1.76 & 2.33 & A \\ Per-emb-17 & 0.1 & 5.881426 & 26 & 0.42 & 0.49 & 0.2367 & 0.2037 & 2 & 8.45 & 9.82 & A \\ Per-emb-18 & 0.202 & 6.523647 & 25 & 0.29 & 0.34 & 0.7003 & 0.5911 & 2 & 4.28 & 5.07 & A \\ Per-emb-19 & 0.021 & 1.362548 & 17 & 0.17 & 0.22 & 0.1262 & 0.095 & 1 & 7.93 & 10.52 & B \\ Per-emb-19-SMM\tablenotemark{(h)} & 0.005 & 1.362548 & 35 & 0.92 & 0.99 & 0.0059 & 0.0055 & 0 & 0.0 & 0.0 & B \\ Per-emb-20 & 0.058 & 3.666573 & 22 & 0.3 & 0.36 & 0.1947 & 0.1587 & 1 & 5.13 & 6.3 & A \\ Per-emb-20-SMM & 0.016 & 1.366165 & 22 & 0.27 & 0.33 & 0.0591 & 0.0482 & 0 & 0.0 & 0.0 & B \\ Per-emb-21 & 0.18 & 4.469851 & 25 & 0.23 & 0.27 & 0.7787 & 0.6573 & 1 & 1.28 & 1.52 & A \\ Per-emb-22 & 0.353 & 5.013858 & 26 & 0.19 & 0.22 & 1.8501 & 1.5805 & 2 & 1.08 & 1.27 & A \\ Per-emb-23 & 0.094 & 8.834424 & 20 & 0.39 & 0.49 & 0.2429 & 0.1919 & 1 & 4.12 & 5.21 & A \\ Per-emb-25 & 0.097 & 0.951497 & 21 & 0.08 & 0.1 & 1.2157 & 0.9825 & 1 & 0.82 & 1.02 & B \\ Per-emb-26 & 0.358 & 8.273704 & 30 & 0.34 & 0.38 & 1.0524 & 0.9335 & 1 & 0.95 & 1.07 & A \\ Per-emb-27 & 0.451 & 3.754182 & 34 & 0.2 & 0.22 & 2.2002 & 2.0156 & 2 & 0.91 & 0.99 & A \\ Per-emb-28 & 0.082 & 9.434481 & 18 & 0.37 & 0.49 & 0.223 & 0.1691 & 1 & 4.48 & 5.92 & A \\ Per-emb-29 & 0.41 & 4.606372 & 26 & 0.17 & 0.2 & 2.4561 & 2.1009 & 1 & 0.41 & 0.48 & A \\ Per-emb-30 & 0.052 & 0.673948 & 23 & 0.09 & 0.11 & 0.5724 & 0.4712 & 1 & 1.75 & 2.12 & B \\ Per-emb-33\tablenotemark{(g)} & 0.784 & 1.731601 & 29 & 0.07 & 0.08 & 11.0661 & 9.8107 & 3 & 0.27 & 0.31 & A \\ Per-emb-35 & 0.093 & 5.822 & 30 & 0.52 & 0.59 & 0.1786 & 0.159 & 2 & 11.2 & 12.58 & A \\ Per-emb-36 & 0.18 & 9.135532 & 27 & 0.46 & 0.53 & 0.3908 & 0.3398 & 2 & 5.12 & 5.89 & A \\ Per-emb-37 & 0.079 & 5.711896 & 18 & 0.27 & 0.36 & 0.2876 & 0.221 & 1 & 3.48 & 4.52 & A \\ Per-emb-40 & 0.027 & 1.597658 & 22 & 0.24 & 0.29 & 0.1125 & 0.092 & 2 & 17.78 & 21.74 & B \\ Per-emb-41 & 0.464 & 3.179652 & 19 & 0.08 & 0.1 & 5.8805 & 4.6105 & 1 & 0.17 & 0.22 & A \\ Per-emb-44 & 0.871 & 4.083969 & 21 & 0.08 & 0.09 & 11.4903 & 9.1933 & 3 & 0.26 & 0.33 & A \\ Per-emb-47 & 0.01 & 1.498126 & 21 & 0.35 & 0.43 & 0.0293 & 0.0237 & 1 & 34.07 & 42.16 & B \\ Per-emb-50 & 0.059 & 1.509439 & 35 & 0.3 & 0.33 & 0.196 & 0.1809 & 1 & 5.1 & 5.53 & B \\ Per-emb-51 & 0.24 & 2.081363 & 13 & 0.05 & 0.07 & 5.3183 & 3.5728 & 1 & 0.19 & 0.28 & A \\ Per-emb-53 & 0.062 & 3.384611 & 27 & 0.36 & 0.42 & 0.1717 & 0.1485 & 1 & 5.82 & 6.73 & A \\ Per-emb-54 & 0.128 & 6.045196 & 33 & 0.53 & 0.58 & 0.2412 & 0.2199 & 0 & 0.0 & 0.0 & A \\ Per-emb-56 & 0.018 & 1.137566 & 19 & 0.17 & 0.22 & 0.1073 & 0.0828 & 1 & 9.32 & 12.07 & B \\ Per-emb-57 & 0.046 & 1.517082 & 14 & 0.09 & 0.13 & 0.5252 & 0.3596 & 1 & 1.9 & 2.78 & B \\ Per-emb-58\tablenotemark{(h)} & 0.01 & 1.489637 & 19 & 0.3 & 0.38 & 0.0327 & 
0.0255 & 1 & 30.58 & 39.24 & B \\ Per-emb-61 & 0.018 & 1.440329 & 16 & 0.17 & 0.23 & 0.1079 & 0.0792 & 0 & 0.0 & 0.0 & B \\ Per-emb-62 & 0.077 & 1.473029 & 23 & 0.14 & 0.17 & 0.5601 & 0.4624 & 1 & 1.79 & 2.16 & B \\ Per-emb-63 & 0.018 & 1.557983 & 23 & 0.3 & 0.36 & 0.061 & 0.0505 & 1 & 16.4 & 19.8 & B \\ Per-emb-64 & 0.041 & 1.592964 & 25 & 0.23 & 0.27 & 0.1798 & 0.1527 & 1 & 5.56 & 6.55 & B \\ Per-emb-65 & 0.047 & 1.489332 & 15 & 0.1 & 0.14 & 0.485 & 0.3461 & 1 & 2.06 & 2.89 & B \\ SVS13B & 0.889 & 5.917212 & 21 & 0.1 & 0.12 & 8.9669 & 7.1743 & 1 & 0.11 & 0.14 & A \\ SVS13C & 0.199 & 4.127341 & 22 & 0.18 & 0.22 & 1.1279 & 0.9224 & 1 & 0.89 & 1.08 & A \\ \enddata \tablenotetext{(a)}{ Envelopes are the SMA sources, and their nomenclature is adopted from \cite{Tobin16} for consistency. For the sources that we could not find in the literature, we appended "SMM" to the name; for example, Per-bolo-45-SMM is a new detection that does not lie in the same region as Per-bolo-45. All the envelopes are detected at the 5$\sigma$ contour, unless otherwise stated.} \tablenotetext{(b)}{ Jeans mass considering thermal (M$_{\rm{J}}$$^{\rm{th}}$) and total support (M$_{\rm{J}}$$^{\rm{th,nth}}$).} \tablenotetext{(c)}{ Jeans number considering thermal (N$_{\rm{J}}$$^{\rm{th}}$) and total support (N$_{\rm{J}}$$^{\rm{th,nth}}$).} \tablenotetext{(d)}{ Efficiency is calculated by taking the ratio of the number of protostellar objects to the Jeans number of the envelopes considering both thermal ($\epsilon^{\rm{th}}$) and combined ($\epsilon^{\rm{th,nth}}$) support.} \tablenotetext{(e)}{ See Table \ref{envelopes}. A: reliable fits; B: unreliable fits.} \tablenotetext{(f)}{ Nomenclature adopted from \cite{Lee16}.} \tablenotetext{(g)}{ SMA envelope is detected at the 6$\sigma$ contour.} \tablenotetext{(h)}{ SMA envelope is detected at the 4$\sigma$ contour.} \end{deluxetable*} Figure \ref{box_envelope} shows the distribution of the Jeans numbers of the group "A" envelopes in a box-and-whisker plot. For this plot, we have two populations of envelopes: the first population has 0 or 1 protostellar objects, while the second population has 2 or 3 protostellar objects. The median values are $\sim$0.67 for the first population and $\sim$1.85 for the second. The statistics at the right of the box diagram are presented as in Figure \ref{box_core}. The p-value obtained from the K-S test for these two populations is $\sim$50 percent. Thus, unlike the previous level of the hierarchy, we cannot statistically distinguish between the distributions of the Jeans numbers for the two envelope populations. \begin{figure}[tbh] \centering \vskip -1.1in \includegraphics[scale=0.4]{EnvelopeToDisk_jeansnumhist2pop_box.pdf} \vskip -1.0in \caption{Box-and-whisker plot showing the distribution of the Jeans number of envelopes for two different populations of enclosed disk scale objects. The first population constitutes the envelopes that have either 0 or 1 disk scale objects inside them. The second population constitutes the envelopes that have either 2 or 3 disk scale objects inside them. The numbers at the right side of the box-and-whisker diagram represent the 95$^{th}$ percentile, 3$^{rd}$ quartile, mean, median, 1$^{st}$ quartile, and the 5$^{th}$ percentile, going from top to bottom respectively. Inside the box plot, the red square shows the value of the mean and the red line shows the value of the median. The p-value obtained from the K-S test is $\sim$50\%, implying that these two populations are not significantly different.
\label{box_envelope} } \end{figure} \section{Combining all hierarchies} \label{combination} We examined the hierarchical structure in Perseus from cloud scales to protostellar objects in \S \ref{jeansanalysis}. In general, we find a correlation between the Jeans number and the number of child objects, where parent structures with higher Jeans numbers have more substructure. To illustrate this multi-scale correlation, Figure \ref{surfden_th} combines the results for each level of the hierarchy in a single plot. Figure \ref{surfden_th} compares the Jeans number of each parent structure with its number of child objects, with both values shown as surface densities. If we instead plot the number of child objects against the Jeans number of the parent objects for all the scales without dividing by area, the data overlap because of the small ranges of the Jeans number of the parent objects and the number of child objects (see Figures \ref{NjVsN_cl}, \ref{NjVsN_core}, and \ref{NjVsN_envelope}). In such a plot, the different physical scales from cloud to protostellar objects cannot be visualized, which motivated separating the scales by dividing by the area. Since the five levels of the hierarchy vary widely in physical scale, the surface density plot displays each scale distinctly. In Figure \ref{surfden_th}, the solid circles represent structures with $N_{\rm{J}}$ $>$ 1, and the hollow squares show the data for which $N_{\rm{J}}$ $<$ 1 for the parent population. The typical uncertainty is shown in the lower right region of Figure \ref{surfden_th}. The dashed line shows the $\epsilon^{\rm{th}} = 1$ relation for perfect thermal fragmentation. The solid line represents the best fit to all the data at the cloud, clump, and core scales, and to the $N_{\rm{J}}$ $>$ 1 data for the envelopes, as noted in \S \ref{envelope jeans mass}. The best fit results do not change if we impose the $N_{\rm{J}}$ $>$ 1 criterion at all scales, as there are only two cores that have $N_{\rm{J}}$ $<$ 1. We used the Markov Chain Monte Carlo (MCMC) method \citep{van03} to fit a linear model to the data. MCMC draws samples from a Markov chain to characterize a probability distribution. We fit the relation within the uncertainties of the different scales. For details of the use of MCMC in fitting astronomical data, see \cite{pokhrel16}. For the underlying assumptions and choice of priors, we followed \cite{pokhrel16}, and we used the PYTHON package $pymc$ \citep{patil10} to apply the MCMC method.
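For illustration, the sketch below implements a generic random-walk Metropolis sampler for a straight-line fit. It is a minimal stand-in for the actual $pymc$ model (it uses a Gaussian likelihood with flat priors, whereas our priors follow \citealt{pokhrel16}), and the data arrays are synthetic placeholders.

\begin{verbatim}
# Random-walk Metropolis fit of y = m*x + b to data with errors yerr.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1.0, 3.0, 30)                # synthetic log data
y = 1.0 * x - 0.5 + 0.2 * rng.standard_normal(x.size)
yerr = np.full_like(y, 0.2)

def log_post(theta):
    """Gaussian log-likelihood; flat priors on slope and intercept."""
    m, b = theta
    return -0.5 * np.sum(((y - (m * x + b)) / yerr)**2)

theta, lp = np.zeros(2), log_post(np.zeros(2))
chain = []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)
samples = np.array(chain[5000:])              # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0))  # m, b and errors
\end{verbatim}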
Figure \ref{surfden_th} assumes that thermal gas motions alone are responsible for the stability of the structures against gravitational collapse. The slope of the best fit line is 1.03 $\pm$ 0.02, with a Pearson correlation coefficient of 0.95. The best fit line is close to but offset from the $\epsilon^{\rm{th}} = 1$ relation, which implies that only a fraction ($<$ 1) of the mass in the parent structures has been converted into child structures. This lower efficiency is similar to the result seen for the individual hierarchy levels in Figures \ref{NjVsN_cl}, \ref{NjVsN_core} and \ref{NjVsN_envelope}, and is similar to the result of \cite{Palau15} for cores. We estimated the formation efficiency of child objects for each level of the hierarchy ($\epsilon^{\rm{th}}$) using the process described in \S \ref{jeansanalysis}. We found average values of $\epsilon^{\rm{th}}$ of 0.06, 0.2, 0.4 and 0.5 for the formation of clumps, cores, envelopes and protostars, respectively (see Figure \ref{surfden_th}), although each scale can also span a broad range. For example, we found CFEs between 0.06 and 0.6 assuming that the parent clumps are thermally supported (see \S \ref{clumptocore}), which is similar to the values of the CFE from other independent measurements \citep{Bontemps10,Palau13,Palau15}. If we exclude the two cores for which $N_{\rm{J}}$ $<$ 1, the $\epsilon^{\rm{th}}$ for cores is $\sim$0.2, and the power-law relation stays the same. Thus, we find that thermal support alone cannot predict the amount of fragmentation detected on cloud or clump scales, while there is better agreement on the scales of cores and envelopes. Nonetheless, the tendencies for $\epsilon^{\rm{th}} < 1$ and for $\epsilon^{\rm{th}}$ to increase with decreasing size scale remain to be explained. Figure \ref{surfden_th} shows an increasing trend in the thermal efficiencies towards smaller scales. To test the robustness of this trend, we calculated the uncertainty in the typical thermal efficiency with a Monte Carlo approach. Since the efficiency is the ratio of the number of child objects to the Jeans number of the parent object, the uncertainty in the efficiency is dominated by the uncertainty in the Jeans number. The Jeans number is certain within a factor of $\sim$3, whereas the number counts of child objects follow Poisson statistics as an upper limit uncertainty. Thus the efficiencies were varied randomly within a factor of 3-4 to simulate a range of datasets within the errors. We used 5000 iterations for each level, and for each iteration we calculated the average efficiency. Finally, we computed the standard deviation of all the simulated average efficiencies to find the uncertainty at each scale. The thermal efficiencies have uncertainties of 0.05, 0.08, 0.16 and 0.11 for the clumps, cores, envelopes and protostars, respectively. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4]{paper_surden_th.pdf} \vskip -1.0in \caption{Surface density plot combining all the hierarchies. The x-axis shows the Jeans number surface density of the parent structure (Jeans number of parent / area of parent) and the y-axis shows the number surface density of the child structures (number of child objects / area of parent). The data shown in different colors represent clumps in the cloud, cores in clumps, envelopes in cores, and protostellar objects in envelopes. The solid circles have $N_{\rm{J}}$ > 1 and the hollow squares have $N_{\rm{J}}$ < 1. The dashed line represents the $\epsilon^{\rm{th}} = 1$ relation, where the number of child objects equals the Jeans number of the parent objects. The solid line shows the linear best fit to all the data for the cloud, clumps and cores, and to the $N_{\rm{J}}$ > 1 data for the envelopes. The slope of the best fit line is $\sim$1. \label{surfden_th} } \end{figure}
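The Monte Carlo error estimate just described amounts to a few lines of Python; the sketch below assumes log-uniform perturbations within a factor of 3 (the exact form of the perturbation distribution is our choice) and takes the clump $\epsilon^{\rm{th}}$ values of Table \ref{clumps} as the example input.

\begin{verbatim}
# Monte Carlo uncertainty on the mean thermal efficiency at one level.
import numpy as np

rng = np.random.default_rng(0)
eff = np.array([0.06, 0.0, 0.07, 0.13, 0.6, 0.35, 0.22])  # clumps

means = []
for _ in range(5000):                         # 5000 iterations
    factor = 3.0 ** rng.uniform(-1.0, 1.0, eff.size)  # within x3
    means.append(np.mean(eff * factor))
print(np.mean(means), np.std(means))          # mean and its spread
\end{verbatim}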
We found $\epsilon^{\rm{th,nth}}$ values of 3.8 $\pm$ 2.9, 2.1 $\pm$ 0.8, 1.0 $\pm$ 0.4 and 0.5 $\pm$ 0.1 for the formation of clumps, cores, envelopes and protostellar objects, respectively. The uncertainties in $\epsilon^{\rm{th,nth}}$ are obtained using a Monte Carlo method similar to that of the pure thermal case. Thus the combined thermal and non-thermal support follows a different (and opposing) trend from the thermal-only case. It is interesting to note that for the formation of protostellar objects inside envelopes, the case with combined thermal and non-thermal support gives a very similar efficiency to the thermal-only case. This implies that the non-thermal motions are relatively insignificant at these scales and fragmentation is entirely driven by the competition between gravity and thermal support. As we move towards the larger scales, the combined efficiency is greater than unity and hence unphysical. Finally, we performed the best fit in Figures \ref{surfden_th} and \ref{surfden_tot} including all the data, i.e., the data with $N_{\rm{J}}$ $>$ 1 as well as $N_{\rm{J}}$ $<$ 1. The hierarchy level relating envelopes to protostellar objects has the most data with $N_{\rm{J}}$ $<$ 1. Thus the values of $\epsilon^{\rm{th}}$ and $\epsilon^{\rm{th,nth}}$ change for the envelope scale: $\epsilon^{\rm{th}}$ changes from 0.5 (with $N_{\rm{J}}$ $>$ 1 data only) to 2.1 (including all data), and $\epsilon^{\rm{th,nth}}$ changes from 0.5 to 2.6. These values are similar within their uncertainties. However, efficiencies greater than unity are unphysical. \begin{figure}[tbh] \centering \vskip -1.0in \includegraphics[scale=0.4]{paper_surden_tot.pdf} \vskip -1.0in \caption{Surface density plot similar to Figure \ref{surfden_th}, now considering the combined support of both the thermal and non-thermal motions against gravitational collapse. \label{surfden_tot} } \end{figure} The good correlation in Figures \ref{surfden_th} and \ref{surfden_tot} is due largely to the fact that the range of $\epsilon^{\rm{th}}$ is much smaller than the range of area or surface density. We stress that the point of making these surface density plots is not to claim any kind of correlation between the Jeans number of parent objects and the number of child objects. Rather, we include these plots only to show that there is sub-thermal efficiency at each scale. The dependence of $\epsilon^{\rm{th}}$ on size scale, without normalization by area, is shown in Figure \ref{effVsReff}. To remove the possible degeneracy introduced by area in the surface density plot, in Figure \ref{effVsReff} we plotted the thermal efficiency of each parent object against its effective radius. For the cloud, clump and core scales we calculated the effective radius by assuming spherical geometry for the structures. For the envelope scale the effective radius is the geometric mean of the major and minor axes of the source. The solid line in Figure \ref{effVsReff} represents the best fit line, with a power-law slope of -0.26 $\pm$ 0.08. The data used for the best fit are the same as in Figure \ref{surfden_th}. If we fit the data with $N_{\rm{J}}$ $>$ 1 for all the scales, the slope is -0.23 $\pm$ 0.07, which is within the uncertainty range of -0.26 $\pm$ 0.08. The plot explicitly depicts the increasing trend of the thermal efficiency for smaller objects: $\epsilon^{\rm{th}}$ is maximum for protostars and gradually decreases as we probe larger scales.
Thus we conclude that as the size scale of structures in a molecular cloud decreases, the efficiency of thermal fragmentation increases. \begin{figure}[tbh] \centering \includegraphics[scale=0.4]{paper_effVsReff_envNjgt1.pdf} \caption{Comparison of the thermal efficiency with the size of parent structures. The size is represented in terms of an effective radius assuming a spherical geometry. Symbols are the same as in Figure \ref{surfden_th}. The typical error bar on the data is shown on the lower left side of the plot. \label{effVsReff} } \end{figure} \section{Discussion} \label{conclusions} Our study shows that fragmentation is a scale-dependent process. The masses of the structures at the upper levels of the hierarchy, such as clouds and clumps, are higher than the thermal Jeans mass. In contrast, the masses are around the thermal Jeans mass at the envelope and disk scales, which provides further support to the idea of thermal fragmentation at smaller scales. This provides a further clue that we may be reaching the level of coherence while going from cores to envelopes, where the role of thermal fragmentation starts to dominate. At later stages, the fragmentation process in low mass star forming regions seems to be controlled mostly by gravitational contraction, as the thermal Jeans mass decreases with the increase in density during contraction. However, to confirm this statement we also need to analyze the magnetic field contribution in the future. Nevertheless, our work supports the view that thermal motion can provide support against gravity and stabilize the cloud sub-structure, especially at smaller scales. Our results are consistent with some recent studies of cores \citep{Miettinen12, Palau15,Busquet16} that support the notion of thermal Jeans fragmentation over non-thermal fragmentation. \cite{Miettinen12} detected low mass Class 0 protostellar fragments inside the SMM6 core in the B9 region of the Orion molecular cloud and concluded that the origin of the substructure is thermal Jeans fragmentation. Similarly, \cite{Palau15} studied 19 dense cores in nearby molecular clouds and found that most of the fragments detected in their sample are around the thermal Jeans limit. A more recent study by \cite{Palau17} in the Orion Molecular Cloud 1 South (OMS-1S) shows that fragmentation from 100 AU to 40 AU is also consistent with thermal Jeans processes. Thus, Jeans fragmentation seems to be a viable process in some high mass star forming regions as well (e.g., \citealt{Samal15}). On the other hand, our results do not appear to agree with some studies of higher mass IRDCs (\citealt{Zhang09,Pillai11,Wang14,Lu15}). They find that the fragments have masses much larger than the thermal Jeans mass and are consistent with the non-thermal Jeans mass. However, this is similar to our results in massive clumps. Hence thermal fragmentation may be dominant only in low- to intermediate-mass star forming regions. This suggests that although non-thermal motion seems important for fragmentation and the formation of massive cores in a cluster, the low mass cores may be produced by thermal fragmentation. Indeed, \cite{Zhang15} reported a population of low mass cores in a protocluster using more sensitive observations with ALMA, which appears to be consistent with thermal fragmentation. In another study, \cite{Lu15} find cores more massive than Perseus cores in clumps with $\epsilon^{\rm{th}}$ = 0.01 - 0.02 and $\epsilon^{\rm{th,nth}}$ = 0.2 - 0.3.
In contrast, in their simulation work, \cite{offner16} reported that fragmentation was less common in lower mass cores, where thermal pressure was more important (relative to turbulence and magnetic pressure). It is important to compare Jeans fragmentation in high and low mass clouds in more detail. We performed the Jeans analysis at all the scales in the hierarchy, comparing the Jeans number of each parent object with its number of child objects. An alternative procedure would be to compute the effective critical Bonnor-Ebert (BE) mass \citep{Ebert55,Bonnor56} for each parent structure. This mass has the same dependence on temperature and density as the Jeans mass, but its value is less by a factor of 2.47 \citep{Mckee07}. We calculated the BE efficiencies using the BE mass, and we found $\epsilon^{\rm{th}}$ of 0.02 $\pm$ 0.02, 0.08 $\pm$ 0.03, 0.16 $\pm$ 0.06 and 0.35 $\pm$ 0.08 from the cloud scale to protostellar objects, which are within the uncertainty limits of the $\epsilon^{\rm{th}}$ that we obtained with the Jeans analysis. Moreover, using the BE mass would be less convenient for comparing results with many previous studies which rely on the Jeans mass (for example \citealt{Zhang09,Pillai11,Miettinen12,Lu15,Palau15,Busquet16,Palau17}). Therefore, in this paper we use the Jeans mass rather than the BE mass. Our use of the non-thermal velocity dispersion derived from line widths provides a comparison between the Jeans number based on this velocity dispersion, the Jeans number based on the gas kinetic temperature, and the number of observed fragments. This comparison is a test of which velocity dispersion gives better agreement with the observed fragment numbers, but it is not a test of fragmentation in MHD turbulence-regulated star formation models and simulations (for example by \citealt{Padoan99,Padoan02,MacLow04,Hennebelle11}, etc.). Such numerical models represent motions which are more anisotropic, time-varying, magnetized, and scale-dependent than those analyzed here with simple models of Jeans fragmentation. Similar observational works on hierarchical fragmentation in other nearby molecular clouds are needed to further test our results. These works should be further extended to massive star forming regions, where the relative importance of non-thermal motions may be different from Perseus. Also, a detailed comparison with simulations of low mass star forming regions is necessary to further constrain the role of thermal and non-thermal support at both the smaller scales, such as protostars and disks, and the larger scales, such as clouds and clumps. \section{Conclusion} \label{summary} In this study, we examined the multiscale structure of the Perseus molecular cloud from the scale of the cloud ($\geq$ 10 pc) to the scale of dust and ionized gas around protostars ($\sim$15 AU). For the cloud, clump, core and disk scales, the data are derived from the available literature, and for the scale of envelopes we used new SMA data from the MASSES project. This breadth of scale is unique to this study and reveals how clouds themselves are structured from large to small scales. We traced five distinct scales and compared the number of fragments seen in each child structure with the expected number that could be produced by the parent structure according to Jeans fragmentation. We first considered purely thermal Jeans fragmentation. For such a system, we found a positive correlation between the number of child objects and the Jeans number of their parent objects at all scales.
This trend, however, is not one-to-one. The average number of child objects is always less than the Jeans number of the parent object. Under pure thermal support, the efficiency of structure formation is 0.06, 0.2, 0.4 and 0.5 for clumps, cores, envelopes and protostellar objects, respectively. Thus thermal motions are least efficient in providing support at larger scales, such as the whole cloud, and most efficient at smaller scales, such as the protostellar objects. Considering the combined support of both thermal and non-thermal motions, the efficiency of formation is largest and unphysical ($>$1) for the clumps, cores and envelopes, and least for the protostellar objects. We quantified the combined efficiency as 3.8, 2.1, 1.0 and 0.5 for the formation of clumps, cores, envelopes and protostellar objects, respectively. For the protostellar objects, both $\epsilon^{\rm{th,nth}}$ and $\epsilon^{\rm{th}}$ have values of $\sim$0.5, which shows that thermal support is significant at these scales; however, this does not rule out the possibility of other means of support, such as magnetic pressure. \acknowledgments R.P. acknowledges the support from the NASA Grant NNX14AG96G. J.J.T. acknowledges support from the Netherlands Organization for Scientific Research (NWO) from Veni grant 639.041.439. We thank the anonymous referee for their comments and suggestions. We also thank Seyma Mercimek for fruitful discussion regarding the core distribution in the clumps. The SMA is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics, and is funded by the Smithsonian Institution and the Academia Sinica. The authors thank the SMA staff for executing these observations as part of the queue schedule, Charlie Qi and Mark Gurwell for their technical assistance with the SMA data, and Eric Keto for his guidance with SMA large-scale projects. \software{We have used the following software packages for this study: APLpy \citep{Robitaille12}, AstroPy \citep{astropyref}, Matplotlib \citep{Hunter07}, MIR \url{(https://www.cfa.harvard.edu/~cqi/mircook.html)}, MIRIAD \citep{Sault95}, NumPy \url{(https://doi.org/10.1109/MCSE.2011.37)}, pymc \citep{patil10}, SciPy \citep{scipyref}. } \bibliographystyle{yahapj}
{ "timestamp": "2017-12-15T02:00:28", "yymm": "1712", "arxiv_id": "1712.04960", "language": "en", "url": "https://arxiv.org/abs/1712.04960" }
\section{Introduction} \label{sec:intro} General relativity (GR) can be considered among the most successful physical theories. It predicts with formidable accuracy the orbital period of binary pulsar systems \cite{WTF81,TaW82,TaW89,WNT10}, and more importantly, it continues to be corroborated by observational data, \textit{e.g.}, the direct detection of gravitational waves \cite{LIGO16,LIGO16b,LIGO17}. Despite such success, GR is not free of problems. It presents limitations in the large distance regime as well as at the shortest length scales \cite{CaD11}. Alternative theories of gravity have been proposed in order to explain the accelerated expansion and structure formation without invoking dark energy or dark matter components. Quantum gravity, on the other hand, is expected to provide the ultraviolet completion of GR and describe gravity at Planckian energy scales. Among all the predictions of GR, black holes are certainly the most sensitive to the short distance behavior of gravity. Classical black hole solutions display a curvature singularity and an entropy proportional to the event horizon area in Planck units. From this perspective, every black hole event horizon is a holographic surface pixelized in fundamental cells \cite{Bek73}. Since the black hole temperature is inversely proportional to the mass, $T\propto 1/M$, it is natural to consider the case of microscopic black holes to study quantum mechanical effects like the Hawking radiation. Such black holes might have been produced primordially via a variety of mechanisms, including the gravitational collapse of the early Universe fluctuations \cite{Haw71,CaH74} or the quantum mechanical decay of de Sitter space \cite{MaR95,BoH96,MaN11}. Being extremely hot, microscopic black holes undergo a rapid decay and disclose a further inconsistency of GR, namely the end stage of the evaporation. The latter is plagued by a divergent temperature and can no longer be described in semi-classical terms. To solve the problem of black hole spacetimes at short scales, new metrics have been proposed on the basis of quantum gravity arguments. As a general result, quantum mechanical effects can improve the divergent behavior \cite{BoR00,IMN13} or even replace the curvature singularity with a regular spacetime region \cite{Bar68,AyG98,AyG99,AyG99b,AyG00,AyG05,Dym92,Dym02,Dym03,Hay06,BDD03,BrF06,BrD07,BMD07,Mod04, Mod06,Nic05,NSS06a,NSS06b,ANS07,Nic09,SSN09,NiS10,SmS10,MoN10b,MMN11,Nic12,SpS14,SpS15,SpS17a,SpS17b,NSS17}. In the context of Planckian black holes there exist even more radical proposals. Rather than assuming gravity as a starting point, one can consider a purely quantum mechanical formulation for a microscopic black hole, provided one recovers the GR description in the large distance/large particle number limit. Such models include the horizon wave function formalism \cite{CaS14,CMS14,CMN15b,CGM16}, the particle-like black hole description \cite{SpS16,SpS17book}, the quantum $N$-portrait \cite{DvGo12,DvG13b,DvG14,DvGo14}, the quantum bound state description \cite{HoR16,GHM15}, the black hole precursor \cite{Cal14,CaC15} and the fuzzball model \cite{Mat05,SkT08}. Such pregeometrical models can be roughly grouped under an umbrella paradigm called ``gravity ultraviolet self-completeness''. This term refers to a special character of gravity.
Contrary to ordinary non-renormalizable theories, gravity does not admit an obvious separation between a perturbative regime and a non-perturbative one. Gravity is problematic just at the Planck scale. Above the Planck scale, gravity works classically and no quantum gravity description has to be invoked \cite{DGG11,MuN12,AuS13,Car14}. In support of this claim, it can be shown that the scattering of two objects at trans-Planckian energies leads to a classical state, namely a black hole \cite{DFG11,DvG12,DGI15,DGL15}. As a result, sub-Planckian distances are no longer accessible and gravity is ultraviolet self-complete \cite{DvG10,DFG12}. This is equivalent to saying that curvature singularities are just an artifact of the geometric description of gravity. The latter virtually ceases to exist below the Planck length, where only a quantum pregeometry is available. The transition between the quantum and the geometrical description is often termed ``classicalization'' \cite{Dva17}. From a black hole physics perspective, classicalization produces a black hole that is approximately classical. In \cite{NiS12,FKN16}, an effective metric has been proposed to smoothly interpolate between the two opposite regimes. The transition, however, might be non-analytic. In this case, deviations from classical black hole geometries might be even smaller and negligible around the Planck scale too. The Schwarzschild metric is, however, inadequate to fit in the self-completeness paradigm. One of the major drawbacks is the huge quantum back-reaction the metric suffers when the black hole mass approaches the Planck mass, $M\gtrsim M_{\rm P}$. More seriously, the Schwarzschild metric admits event horizons at any length scale, even below the Planck length, $r_{\mathrm{H}}<L_{\rm P}$, for $M<M_{\rm P}$. Such limitations lead us to consider families of metrics that allow for the horizon extremization close to the Planck scale. From this vantage point, one can employ the semiclassical approximation up to the Planck scale, since the temperature to mass ratio remains small, $T/M\ll 1$, during the whole black hole lifetime \cite{DeS78,BaB88,BaB89}. Interestingly, the extremal configuration corresponds to a threshold mass, $M_\mathrm{e}$, below which no horizon forms. The degenerate horizon is also a zero temperature state. As a result, the Hawking decay is expected to undergo a SCRAM\footnote{This term, previously known as ``black hole switching off'' \cite{DeS78}, has been adopted in \cite{Nic09} by borrowing the terminology for emergency shutdowns of nuclear reactors. SCRAM is a backronym for ``Safety Control Rod Axe Man'', introduced by Enrico Fermi in 1942, during the Manhattan Project at Chicago Pile-1.}, \textit{i.e.}, an asymptotic relaxation at $M\approx M_\mathrm{e}\simeq M_{\rm P}$, that precludes access to length scales smaller than $r_\mathrm{e}=r_\mathrm{H}(M_\mathrm{e})\simeq L_{\rm P}$. Within such a framework, one might be led to consider the case of Planckian charged black holes as privileged models fitting in the gravity self-completeness paradigm. There exist both technical and phenomenological arguments in support of this choice. First, charged black holes naturally encode the sought horizon extremization. This is related to the instability of the Schwarzschild metric in the parameter space of solutions. The $r=0$ surface is turned into a timelike surface for any perturbation of the static neutral black hole.
Second, it has been shown that Planckian charged black holes might have been plentifully created in the early Universe due to relevant cold, ultracold, lukewarm and Nariai instanton contributions \cite{MaR95}. As a result, we present an analysis of the properties of Planckian charged black hole solutions in order to scrutinize the self-complete character of gravity. The paper is organized as follows: in Section \ref{sec:chargedmetric}, we present a general set up for charged black hole solutions in short scale modified gravity and their thermodynamics. In Section \ref{sec:classicalization}, we consider the classicalization of the solutions presented in Section \ref{sec:chargedmetric}. In Section \ref{sec:decay}, we study the evolution of such Planckian black holes, while in Section \ref{sec:concl} we draw our conclusions. \section{Charged black hole metrics at the Planck scale} \label{sec:chargedmetric} We start from a generic charged, spherically symmetric black hole metric of the kind \begin{equation} \d{s}^2=-F(r)\d{t}^2+F^{-1}(r)\d{r}^2+r^2\d{\Omega}^2 \label{eq:metric} \end{equation} where \begin{equation} F(r)=1-2G\ \frac{m(r)}{r}+\frac{q^2(r)}{r^2} \end{equation} Here the functions $m(r)$ and $q(r)$ account for gravity and electrodynamics short scale modifications that are expected to occur in a region of size $L_{\rm P}$ around the origin. For larger radii the above metric has to match the usual Reissner-Nordstr\"{o}m geometry. This implies a condition on the profile of the above functions, \textit{i.e.}, for $r\gg L_{\rm P}$ one has $m(r)\to M$ and $q(r)\to Q$, where $M$ is the ADM mass and $Q$ is the total charge of the system. Solutions like \eqref{eq:metric} can be obtained from short scale improved gravity and electrodynamics actions of the kind \begin{equation} {\cal S}_\mathrm{tot}= \frac{1}{16\pi G}\int \mathfrak{F}({\cal R},\, \Box/\mu^2,\, \dots)\, {\sqrt {-g}}\,\mathrm {d} ^{4}x\; + \int \mathfrak{L}({\cal F}^2,\, \Box/\mu^2)\, {\sqrt {-g}}\,\mathrm {d} ^{4}x\; , \label{eq:action} \end{equation} where $\dots$ stand for higher order corrections. In the weak field limit $\C{R}\approx\Box\approx\C{F}^2\ll \mu^2\sim M_{\rm P}^2$, the functions $\mathfrak{F}$, $\mathfrak{L}$ match the conventional GR values, $\mathfrak{F}\to {\cal R}$ and $\mathfrak{L}\to {\cal F}^2$. The resulting field equations can be derived and, upon some regularity conditions for $\mathfrak{F}$ and $\mathfrak{L}$, can be cast in a form equivalent to Einstein's equations coupled to an effective energy momentum tensor \begin{equation} \frac{1}{8\pi G}\left(\C{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\C{R}\right)= \C{T}_{\mu\nu} \label{eq:einsteineqs} \end{equation} where $\C{T}^{\mu\nu}=\C{T}_{\mathfrak{F}}^{\mu\nu}+\C{T}_\mathfrak{L}^{\mu\nu}$ contains both a gravity and an electromagnetic part. Accordingly, the electromagnetic field equations can be written in Maxwell form, \textit{i.e.}, \begin{equation} \frac{1}{\sqrt{-g}}\, \partial_\mu \left( \sqrt{-g} \C{F}^{\mu\nu}\right)=\C{J}_\mathfrak{L}^\mu \end{equation} where $\C{J}_\mathfrak{L}^\mu$ is an effective current depending on $\C{F}$, $\Box$ and $\mu$. For specific profiles of the functions $\mathfrak{F}$ and $\mathfrak{L}$, one can consider the examples in \cite{ANS07,MMN11,IMN13,Mod12a}.
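As a simple numerical illustration of the horizon structure encoded in \eqref{eq:metric}, one can locate the zeros of $F(r)$ for given profiles $m(r)$ and $q(r)$; the sketch below works in Planck units ($G=1$) and uses the Reissner-Nordstr\"{o}m limit of constant $m$ and $q$, for which $r_\pm=GM\pm\sqrt{G^2M^2-Q^2}$, as a check. The mass and charge values are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

G = 1.0  # Planck units

def F(r, m, q):
    # Metric function F(r) = 1 - 2 G m(r)/r + q(r)^2/r^2
    return 1.0 - 2.0 * G * m(r) / r + q(r)**2 / r**2

# Reissner-Nordstroem limit: m(r) -> M and q(r) -> Q for r >> L_P.
M, Q = 2.48, 2.45
m = lambda r: M
q = lambda r: Q

# Bracket and solve F(r) = 0 for the Cauchy and event horizons:
r_minus = brentq(F, 1e-3, G * M, args=(m, q))
r_plus = brentq(F, G * M, 10 * G * M, args=(m, q))
print(r_minus, r_plus)  # cf. r_pm = G M +/- sqrt(G^2 M^2 - Q^2)
\end{verbatim}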
Spacetime regularity implies conditions for $m(r)$ and $q(r)$, namely \begin{eqnarray} m(r)=O\left(r^3\right)\quad\mathrm{as}\quad r\to 0\nonumber\\ q(r)=O\left(r^2\right)\quad\mathrm{as}\quad r\to 0.\nonumber \end{eqnarray} Within the gravity ultraviolet self-complete paradigm, the above conditions are too restrictive. The Planck scale is dominated by a quantum pregeometry for which the metric description is just a loose concept. As a result, it is sufficient to require that the functions $m(r)$ and $q(r)$ exist in the interval $\left(L_{\rm P}, \infty\right)$. For ease of discussion, we require, however, that $m(r)$ and $q(r)$ be monotonically increasing functions, non-vanishing in the aforementioned interval. This is equivalent to excluding a multi-horizon structure and having, at most, two zeros for $F(r)$, namely an event horizon, $r_+$, and a Cauchy horizon, $r_-$, with $F(r_\pm)=0$. From \eqref{eq:metric} one can calculate the black hole temperature, which reads \begin{equation} T=\frac{1}{4\pi r_+}\left[1-r_+\frac{m^\prime(r_+)}{m(r_+)}-\frac{q^2(r_+)}{r_+^2}\left( 1+ r_+\frac{m^\prime(r_+)}{m(r_+)}-2r_+\frac{q^\prime(r_+)}{q(r_+)}\right)\right]. \label{eq:temperature} \end{equation} Since the functions $m^\prime(r_+)=\left.\frac{\d m(r)}{\d r}\right|_{r=r_+}$ and $q^\prime(r_+)=\left.\frac{\d q(r)}{\d r}\right|_{r=r_+}$ are positive definite, one obtains that there exists a zero temperature state corresponding to the extremal black hole configuration, $r_\mathrm{e} =r_-=r_+$. This is equivalent to saying that event horizon radii are bounded from below, $r_+\geq r_\mathrm{e}$. The metric \eqref{eq:metric} can describe a system that is simultaneously a black hole and an elementary particle if \begin{equation} r_+=\lambda \label{eq:particlebh} \end{equation} where $\lambda$ is the Compton wavelength. The latter is expressed in terms of the black hole internal energy, namely \begin{equation} \lambda=\frac{2\pi}{M(r_+, Q)},\quad \mathrm{with}\quad M(r_+, Q)=\frac{1}{2 r_+\mathbb{H}(r_+) G}\left[r_+^2+q^2(r_+)\right], \end{equation} where $\mathbb{H}(r)=m(r)/M$. The temperature \eqref{eq:temperature} fulfills the first law of thermodynamics \begin{equation} \d M=T\d S+\Phi \d Q \end{equation} from which one obtains the area law $\d S=\d A_+ /4 G(r_+)$, where $\Phi$ is the electrostatic potential. Here $G(r)=G\ \mathbb{H}(r)$ is an effective gravitational constant that accounts for the higher derivative corrections to Einstein gravity in \eqref{eq:action}. The particle-black hole equation \eqref{eq:particlebh} selects a family of parameters $(M_\lambda,\, r_\lambda)$ depending on $Q$. This is in contrast to what is found in the case of neutral black holes, whose particle-black hole equation is satisfied for a unique pair of black hole parameters. The presence of a family of solutions $(M_\lambda,\, r_\lambda)$ resembles the situation of regular neutral black hole solutions in the presence of a minimal length $\ell$ \cite{SpA11}. In the latter case, a unique pair of solutions has been identified by choosing the specific value of $\ell$ such that the particle-black hole condition is satisfied at the extremal configuration only, \textit{i.e.}, for the smallest black hole. Such a procedure can be repeated in the case currently under investigation, since for arbitrary values $(M_\lambda,\, r_\lambda)$ the black hole might be rather unstable. Due to its nonvanishing temperature, the hole will decay and lose charge quickly.
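For general profiles, the temperature \eqref{eq:temperature} is straightforward to evaluate numerically; a minimal sketch with finite-difference derivatives, using the Reissner-Nordstr\"{o}m limit (constant profiles, placeholder values, Planck units) as a consistency check:
\begin{verbatim}
import numpy as np

def temperature(r_p, m, q, h=1e-6):
    # Eq. (temperature) with central finite-difference derivatives.
    mp = (m(r_p + h) - m(r_p - h)) / (2 * h)
    qp = (q(r_p + h) - q(r_p - h)) / (2 * h)
    a = r_p * mp / m(r_p)
    b = q(r_p)**2 / r_p**2
    return (1 - a - b * (1 + a - 2 * r_p * qp / q(r_p))) / (4 * np.pi * r_p)

# Reissner-Nordstroem check (constant m and q, G = 1):
M, Q = 2.48, 2.45
r_plus = M + np.sqrt(M**2 - Q**2)
print(temperature(r_plus, lambda r: M, lambda r: Q))
print((1 - Q**2 / r_plus**2) / (4 * np.pi * r_plus))  # analytic value
\end{verbatim}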
There are, however, two important differences with respect to the neutral regular black hole in \cite{SpA11}. First, the charge $Q$ cannot assume arbitrary values as $\ell$ can. By expressing $Q$ in terms of the elementary charge, $Q=n\ e\ G^{1/2}$ with $n\in\mathbb{N}$, one obtains a natural quantization rule and a consequent pixelization of the event horizon. This is equivalent to saying that the equations \begin{equation} M_\lambda=M_\mathrm{e},\quad r_\lambda=r_\mathrm{e}\quad \Longrightarrow(M_0, r_0) \end{equation} might not have an exact solution but rather determine a quasi-extremal configuration $(M_0, r_0)$. Second, if one assumes the decay of de Sitter space as a formation mechanism for charged Planckian black holes, it was noted in \cite{MaR95} that extremal black hole production rates are suppressed relative to those of non-extremal ones. As a result, the quasi-extremal black hole $(M_0, r_0)$ would turn out to be a favored state despite having a non-vanishing temperature. \section{Classicalization of charged black holes} \label{sec:classicalization} Up to now, our considerations have had a model independent character. We preferred not to specify the profile of the functions $\mathfrak{F}$ and $\mathfrak{L}$ in \eqref{eq:action}. Such a choice is corroborated by the ultraviolet self-completeness scenario, according to which gravity is dominated by a classical state in the trans-Planckian regime. Only tiny modifications with respect to Einstein gravity are allowed around the Planck scale \cite{DFG11}. We will therefore neglect them in the present section by assuming the following conditions: \begin{equation} m^\prime\ll m/r, \quad q^\prime\ll q/r. \label{eq:smallderivatives} \end{equation} As a result, the metric \eqref{eq:metric} actually describes a Reissner-Nordstr\"{o}m geometry up to subleading non-classical corrections. The particle-black hole condition \eqref{eq:particlebh} therefore implies \begin{equation} M_\lambda =\frac{2\pi}{\sqrt{4\pi G-Q^2}}\, . \label{eq:mlambda} \end{equation} From the above formula one obtains a bound for the charge, $Q<\sqrt{4\pi G}$, which we assume positive for ease of discussion. By plugging $M_\lambda$ into $r_+$ one obtains two values for $r_\lambda$. One of these coincides with the corresponding Compton wavelength and holds only for $Q<\sqrt{2\pi G}$. In such an interval, one has $r_\lambda>L_{\rm P}$. By expressing $Q$ in terms of the elementary charge one can write \eqref{eq:mlambda} as \begin{equation} M_\lambda =\sqrt{\frac{\pi}{1-n^2\alpha/4\pi}}\ M_{\rm P} \label{eq:mlambdafinale} \end{equation} where $\alpha=e^2$. The bound on $Q$ implies a condition on the charge number $n$, namely $n\leq n_\mathrm{max}$, where \begin{equation} n_\mathrm{max}=\left\lfloor \sqrt{\frac{2\pi}{\alpha}}\right\rfloor =29 \label{eq:nmax} \end{equation} Here $\lfloor x\rfloor$ indicates the floor function of $x$, namely the greatest integer less than or equal to $x$. In \eqref{eq:nmax} we have not taken into account the increase of the fine structure constant at the Planck scale. The Landau pole occurs at a scale $\Lambda _{\text{Landau}}\simeq 10^{286}$ eV $\gg M_{\rm P}$, but non-perturbative effects might change the scenario drastically. If the running to the Planck scale made $\alpha$ of order unity, the value of $n_\mathrm{max}$ would be reduced to $2$.
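The bound \eqref{eq:nmax} can be verified directly, using the low-energy value of the fine structure constant:
\begin{verbatim}
import math

alpha = 1 / 137.035999  # low-energy fine structure constant
print(math.floor(math.sqrt(2 * math.pi / alpha)))  # n_max = 29

# If the running to the Planck scale made alpha of order unity:
print(math.floor(math.sqrt(2 * math.pi / 1.0)))    # n_max = 2
\end{verbatim}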
Another bound on the charge is due to the electrostatic repulsion. If we consider a probe charge, \textit{e.g.}, a proton, near a charged black hole, the gravitational attraction will overcome the electrostatic repulsion only if the hole charge $Q$ is bounded by the relation \begin{equation} Q<\left(\frac{Gm_{\rm p}}{q_{\rm p}}\right) GM \end{equation} where $m_{\rm p}$ is the proton mass and $q_{\rm p}=G^{1/2}\ \sqrt{\alpha}$. The above relation, based on Newton's law and Coulomb's law arguments, is used to estimate the maximal charge a hole can accumulate \cite{Pag06}. By plugging $M_\lambda$ into the above inequality one gets \begin{equation} Q^2<2\pi G\left(1-\sqrt{1-\alpha^{-1}\left(m_{\rm p}/M_{\rm P}\right)^2}\right)\simeq \pi G\, \alpha^{-1}\left(m_{\rm p}/M_{\rm P}\right)^2\, . \end{equation} Since $m_{\rm p}/M_{\rm P}\simeq 7.68\times 10^{-20}$, particle-black holes cannot accumulate charge unless they are nucleated in the early Universe with a given $Q$. In order to fulfill the condition of an extremal particle-black hole one has to require $M_\lambda=M_\mathrm{e}$, where \begin{equation} M_\mathrm{e}=n\sqrt{\alpha}\ M_{\rm P}. \end{equation} As a result one finds $M_\mathrm{e}=\sqrt{2\pi}M_{\rm P}$ and $r_\mathrm{e}=Q_\mathrm{e}=\sqrt{2\pi}L_{\rm P}$. Clearly this condition cannot be met, since the electric charge is quantized. The closest configuration to the extremal one has a charge $Q_0=\left\lfloor Q_\mathrm{e}\right\rfloor$. Not surprisingly, $Q_0$ corresponds to the charge number $n_0=n_\mathrm{max}$. The corresponding horizon radius and black hole mass are \begin{eqnarray} r_0&=&\sqrt{4\pi-n_0^2\alpha}\ L_{\rm P}\simeq 2.54\ L_{\rm P}\\ M_0&=&\frac{2\pi}{\sqrt{4\pi-n_0^2\alpha}}\ M_{\rm P}\simeq 2.48\ M_{\rm P} \, . \end{eqnarray} From this perspective, $r_0$ represents the minimal horizon radius for a black hole mass $M_0$. For comparison, the neutral particle-black hole, $n=0$, has a radius $r_\mathrm{S}\simeq 3.54\ L_{\rm P}$ and a mass $M_\mathrm{S}\simeq 1.77\ M_{\rm P}$. The quasi-extremality condition implies a non-vanishing temperature, namely \begin{equation} T_0=\frac{M_{\rm P}}{4\pi\sqrt{4\pi-n_0^2\alpha}}\left[1-\frac{n_0^2\alpha}{4\pi-n_0^2\alpha}\right] \simeq \frac{M_{\rm P}}{2\pi\sqrt{2\pi}}\left(1-\frac{n_0^2\alpha}{2\pi}\right) \label{eq:quasiexttemp} \end{equation} which corresponds to $T_0\simeq 1.48\times 10^{-3}\ M_{\rm P}$ or, equivalently, $T_0\simeq 2.09\times 10^{29}$ K. Such black holes would be very hot and bright, but they are not expected to suffer from relevant quantum back-reaction, since $T_0/M_0\simeq 5.98\times 10^{-4}$. In contrast to Planckian neutral black holes, they can safely be described in terms of semiclassical gravity, despite their minuscule size. Another difference with respect to the neutral case is the sign of the heat capacity \begin{equation} C = 2\pi\ \frac{r_+^2-Q^2}{\left(3Q^2r_+^{-2}-1\right)}. \end{equation} Since $Q_\mathrm{e}\lesssim r_0< \sqrt{3}Q_\mathrm{e}$, the heat capacity is positive. As a result, charged quasi-extremal particle-black holes enjoy thermodynamic stability. For the entropy one obtains \begin{equation} S_0=\pi \left(4\pi-n_0^2 \alpha\right) \simeq 20.2\, . \end{equation} Since the production rate of extremal black holes is suppressed relative to quasi-extremal ones by a factor $e^{S_0}\simeq 5.92 \times 10^{8}$, the production of such charged particle-black holes in the early Universe would be highly favored with respect to short scale modified neutral black hole remnants with a degenerate horizon \cite{NSS06b,MaN11,SpA11,NiS12}.
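The numbers quoted in this section follow from $n_0=29$ and can be checked in a few lines; a sketch in Planck units (the exact and expanded forms of \eqref{eq:quasiexttemp} differ slightly, and the last digits depend on the adopted value of $\alpha$):
\begin{verbatim}
import numpy as np

alpha = 1 / 137.035999
n0 = 29  # n_max

r0 = np.sqrt(4 * np.pi - n0**2 * alpha)   # horizon radius in L_P
M0 = 2 * np.pi / r0                       # mass in M_P
T0_exact = (1 - n0**2 * alpha / r0**2) / (4 * np.pi * r0)
T0_approx = (1 - n0**2 * alpha / (2 * np.pi)) \
            / (2 * np.pi * np.sqrt(2 * np.pi))
S0 = np.pi * (4 * np.pi - n0**2 * alpha)  # entropy

print(r0, M0)               # ~2.54 L_P and ~2.48 M_P
print(T0_exact, T0_approx)  # ~1.4e-3 and ~1.48e-3 M_P
print(T0_approx / M0)       # back-reaction ratio ~6e-4
print(S0, np.exp(S0))       # ~20.2 and suppression factor ~5.9e8
\end{verbatim}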
\section{Planckian charged black hole decay} \label{sec:decay} The main result of the previous section is that the presence of charge prevents a clear separation between particles and black holes by means of a stable particle-black hole state at the Planck scale. The charge allows for quasi-extremal particle-black holes at best. A natural question now concerns the nature of their evolution. The Hawking temperature \eqref{eq:quasiexttemp} implies a rapid decay that is accompanied by a loss of mass and charge. If we now consider the emission of a positron $(m_e, |e|)$, the charge variation is $\delta Q_0=-|\delta Q_0|= -\sqrt{\alpha}\ L_{\rm P}$, while the mass variation \begin{equation} \delta M_0=-|\delta M_0|= -m_e +\frac{2\pi Q_0\ \delta Q_0}{\left(4\pi G-Q_0^2\right)^{3/2}}\ \simeq -\frac{2\pi n_0\ \alpha}{\left(4\pi-n_0^2\ \alpha\right)^{3/2}}\ M_{\rm P} \end{equation} is controlled by the charge variation, since $m_e\ll M_{\rm P}$. From the comparison of the first order corrections to the Compton wavelength and the horizon radius, \begin{eqnarray} &&\frac{2\pi}{M_0}\left(1+\frac{|\delta M_0|}{M_0}\right)\nonumber\\ &&\neq GM_0\left(1-\frac{|\delta M_0|}{M_0}\right)\left[1+\sqrt{1-\frac{Q_0^2}{G^2 M_0^2}\left(1-2\ \frac{|\delta Q_0|}{Q_0}+2\ \frac{|\delta M_0|}{M_0}\right)}\right], \end{eqnarray} it turns out that the evaporation will disrupt the particle-black hole condition. Since $|\delta Q_0|/Q_0\simeq |\delta M_0|/M_0$ and $Q_0^2/(G^2 M_0^2)\simeq 1$, the Compton wavelength increases while the horizon radius decreases. This means that the state $(r_0, M_0)$ will not evolve to a state $(r_\lambda, M_\lambda)$ but rather to a generic quasi-extremal charged black hole configuration $(r_+, M)$. From here on, such a black hole will keep radiating towards a neutral or quasi-neutral black hole configuration. In the process the black hole will progressively decrease its temperature. In such a phase the condition \eqref{eq:smallderivatives} can no longer be valid and the temperature will be described by \eqref{eq:temperature}. The final fate of the black hole could be either an extremal neutral particle-black hole configuration or an extremal charged black hole configuration. We underline here that the presence of charge would prevent an extremal particle-black hole configuration, as discussed in the previous section. This would represent a serious limitation to the gravity self-completeness paradigm when the charge is taken into account. To address this ambiguity about the final fate of the quasi-extremal charged particle-black hole, we recall that the Hawking emission is accompanied by another emission process, namely the Schwinger $e^\pm$ pair production. The latter occurs in the presence of intense electric fields, $E\geq E_\mathrm{c}=\pi m_e^2/|e|$, near the event horizon. In our case the condition leads to \begin{equation} \frac{Q}{r^2_+}\ M_{\rm P}\ge \pi\, \frac{m^2_e}{\sqrt{\alpha}} \label{eq:schwinger} \end{equation} Since $r_+\sim L_{\rm P}$, the field at the horizon is overcritical, $E(r_+)\geq E_\mathrm{c}$, if $n\geq (\pi/\alpha)(m_e/M_{\rm P})^2$. This means that it is sufficient to have just one elementary charge to trigger the $e^\pm$ pair production at the event horizon. One particle of the pair will be ejected while the other is absorbed, leading to a rapid discharge of the hole. Such a scenario is in agreement with early findings about the spontaneous loss of charge of black holes \cite{Gib75}.
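As a cross-check of the last statement, the critical charge number $(\pi/\alpha)(m_e/M_{\rm P})^2$ can be evaluated explicitly (the electron-to-Planck mass ratio below is the standard value):
\begin{verbatim}
import math

alpha = 1 / 137.035999
m_e_over_MP = 4.19e-23  # electron mass in units of the Planck mass

n_min = (math.pi / alpha) * m_e_over_MP**2
print(n_min)  # ~7.5e-43 << 1, so n >= 1 triggers pair production
\end{verbatim}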
In conclusion, the final stages of the evolution of the quasi-extremal particle-black hole will be described by a neutral configuration asymptotically relaxing to a zero temperature, stable particle-black hole state. As a final comment, we want to address particle-black holes in the opposite regime, \textit{i.e.}, quasi-neutral particle-black holes $(r_\lambda, M_\lambda)$, which one obtains for $n\sim 1$. From \eqref{eq:mlambdafinale} one obtains $M_\lambda\simeq M_\mathrm{S}$ and $r_\lambda\simeq r_\mathrm{S}$, since $\alpha/4\pi\ll 1$. From the calculation of their entropy one can estimate their production rate in the early Universe relative to extremal black holes. It turns out that the relative rate is $1.37\times 10^{17}$, making them even more probable de Sitter space decay products than the quasi-extremal particle-black holes. This is in agreement with the fact that they have lighter masses. As mentioned in Section \ref{sec:chargedmetric}, they decay more violently, since their temperature $T\sim 2.24\times 10^{-2}\ M_{\rm P}\simeq 3.19\times 10^{30}$ K is higher than \eqref{eq:quasiexttemp}. The Hawking emission will be accompanied by a Schwinger discharge. As a result, we expect a quick formation of a neutral black hole configuration. The latter might approach a stable zero temperature particle-black hole configuration provided the subleading corrections of the function $m(r)$ are taken into account. \section{Conclusions} \label{sec:concl} In this paper we analyzed the role of the charge within the self-complete gravity paradigm. We presented the thermodynamics for a general short scale modified gravity Planckian black hole solution. We showed that the particle-black hole condition is satisfied by a family of black hole parameters. By assuming an approximately classical black hole state, we showed that the charge prevents the formation of a zero temperature particle-black hole configuration. Only quasi-extremal particle-black holes are allowed, which could have been plentifully produced during the quantum mechanical decay of de Sitter space in the early Universe. We showed that, due to their temperature, they can decay into lighter black holes but not into particle-black holes. We also studied the Schwinger pair production mechanism and concluded that at the event horizon the electric field is always overcritical, provided one elementary charge is left on the hole. This fact allowed us to resolve the potential ambiguity about the fate of such quasi-extremal particle-black holes. Rather than cooling down to an extremal charged configuration, they will slowly relax towards a neutral configuration. We also showed that the latter can fulfill the condition of an extremal particle-black hole provided corrections to the classical metrics are taken into account. We also commented on the evolution of quasi-neutral particle-black holes, showing that they will rapidly decay via the Hawking and Schwinger mechanisms towards an extremal neutral particle-black hole configuration. In conclusion, we have shown that the charge seriously disturbs the stability of the particle-black hole phase diagram at the Planck scale. The self-completeness is, however, safely restored thanks to the rapid discharge that the charged particle-black hole is expected to undergo after its formation. \subsection*{Acknowledgments} The work of P.N. has been supported by the project ``Evaporation of the microscopic black holes'' of the German Research Foundation (DFG) under the grant NI 1282/2-2.
{ "timestamp": "2018-01-09T02:17:03", "yymm": "1712", "arxiv_id": "1712.05062", "language": "en", "url": "https://arxiv.org/abs/1712.05062" }
\section{Introduction}\label{CH1:Introduction} The support for the Internet of Things (IoT) plays a major role in the evolution of wireless/cellular systems, including the upcoming 5G communication systems~\cite{Palattella2016JSAC}. There are two general classes of IoT transmission technologies: (1) Low-power wide-area (LPWA) communication technologies (e.g., LoRa~\cite{lora}, Sigfox~\cite{sigfox}, IEEE 802.11ah) that operate in unlicensed spectra, and offer low-cost devices and ease of network deployment; (2) Cellular IoT technologies (e.g., LTE Cat-1, LTE Cat-0, LTE-M Cat-M1, NB-IoT) that operate in licensed spectra. Among these, NB-IoT is a recent technology that has gained significant momentum, as evidenced by its fast standardization during 2016~\cite{TS36211} and the increasing number of deployments. NB-IoT is designed to accommodate a massive number of low-throughput, low-cost, and delay-tolerant devices. Similarly to LTE networks, each device registers with the network through an Access Reservation Protocol (ARP), i.e., random access, in which time and frequency misalignments can be adjusted. The preamble sequence is transmitted at the first step of the ARP; however, the preamble structure is no longer based on a Zadoff-Chu sequence. The preamble design and detection algorithm for the NB-IoT ARP were presented in~\cite{Lin2016WCL}, while an overview of the NB-IoT air interface was given in~\cite{Wang2017CM}. The performance of the ARP significantly degrades due to preamble mis-detections and collisions. The preambles in NB-IoT systems were designed with the goal of extending coverage and reducing the occurrence of mis-detections. Yet, the number of devices within cell coverage is expected to grow, leading to an increasing number of preamble collisions. This motivates us to configure the preamble structure by considering the effect of collisions. This letter presents an enhanced ARP with a partial preamble transmission (PPT) mechanism that leverages the trade-off between mis-detections and collisions. We can significantly improve the performance of the ARP by puncturing the preamble sequence through the proposed PPT mechanism. It is worth noting that the proposed ARP requires only minor modifications to how the preambles are transmitted and detected, which can be easily implemented in NB-IoT systems. \vspace{-0.2cm} \section{Access Reservation Protocol in NB-IoT}\label{Backgrounds} In NB-IoT systems, the ARP consists of 5 steps, and the details of each step are as follows: \begin{itemize} \item \textbf{Step 1}: The device selects one of the available preamble sequences, and transmits it in the Narrowband Physical Random Access Channel (NPRACH); \item \textbf{Step 2}: The eNodeB detects the preamble sequences and responds to the detected preamble sequences by sending a Random Access Response (RAR), which includes the index of the preamble sequence, the time alignment (TA) offset and an uplink grant; \item \textbf{Step 3}: The device proceeds with the signaling information exchange on the resources indicated by the RAR, termed the \emph{RRC Connection Request}; \item \textbf{Step 4}: The eNodeB acknowledges the signaling information received from the device with the \emph{RRC Connection Setup} message; \item \textbf{Step 5}: Finally, the device transmits its data concatenated with the \emph{RRC Connection Setup Complete} message.
\end{itemize} NB-IoT can be implemented with a $180$kHz bandwidth, composed of $48$ sub-carriers with a sub-carrier spacing of $3.75$kHz, which can be configured for the NPRACH, allowing 12, 24, 36, or 48 orthogonal RA preamble sequences to be available. A preamble sequence is composed of multiple \emph{symbol groups}, as shown in Fig.~\ref{fig:preamble_structure}, where a single \emph{symbol group} consists of a cyclic prefix (CP) and $\xi$ symbols. The rationale behind this structure is to reduce the relative CP overhead; at the same time, $\xi$ should be set sufficiently small so that the channel condition remains the same within each of the symbol groups. All symbols have the same value of ``1''. $\nu$ symbol groups form \emph{a basic unit} for repetition, which can be repeated $M$ times, where $M\!=\!2^q$, $q\!=\!0,...,7$. Therefore, the length of a preamble sequence, $L$, can be represented as $L\!=\!\nu\!\cdot\!M$. $\xi$ and $\nu$ are commonly set to 5 and 4, respectively. \begin{figure}[t] \centering \includegraphics[width=9.0cm]{Kim1.eps} \vspace{-0.3cm} \caption{Structure of a preamble sequence in a single NPRACH.} \label{fig:preamble_structure} \vspace{-0.5cm} \end{figure} Each symbol group uses a single sub-carrier, and hops across frequency to facilitate the estimation of the uplink timing alignment at the eNodeB. Thus, selecting a preamble sequence implies that each device selects a hopping pattern, and $\Omega(\cdot)$ represents a mapping function from a preamble index to the set of sub-carrier indices which are used by the corresponding preamble sequence. The sub-carrier index used by the $l$-th symbol group of the $i$-th preamble sequence is denoted as $\omega_l^i$, $l\!=\!1,...,L$, and, thus, $\Omega(i)\!=\!\left[\omega_1^i,...,\omega_L^i \right]$. Finally, NB-IoT networks can support up to 3 coverage classes, which can be configured with different values of $M$, in order to support specific coverage requirements. However, in this letter, we focus on a single coverage class. \section{Proposed Access Reservation Protocol with a Partial Preamble Transmission Mechanism}\label{Proposed_Scheme} \subsection{Partial Preamble Transmission Mechanism}\label{PPT_mechanism} The key idea of the partial preamble transmission (PPT) mechanism is to allow each device to transmit \emph{a fraction of a preamble sequence}, which is called \emph{a partial preamble sequence} (PPS). According to the PPS configuration, the NPRACH can be virtually divided into multiple sub-NPRACHs, each of which is named \emph{a partial unit} and carries a single PPS. We note that this approach does not alter the intrinsic structure of the protocol, and is instead a reconfiguration of the protocol. In the baseline ARP, when the amount of NPRACH resources is determined, the length of the preamble sequence is configured as $L_b=\nu \cdot M_b$, where $M_b$ represents the number of repetitions. However, in the proposed scheme, a PPS with a length of $L_p$ can be configured as $L_p=\nu \cdot M_p=\nu \cdot 2^{q_p}$, with $q_p=0,1,...,7$, where $M_p$ represents the number of repetitions in the PPSs. In this case, the eNodeB can configure $G$ non-overlapping PPSs, where $G=L_b/L_p$. In Fig.~\ref{fig:PPT_mechanism}, we show three examples of the configuration of PPSs when $L_b=16$; a small enumeration sketch is also given below. Note that when $L_p=L_b$, the proposed ARP utilizes the assigned NPRACH in the same way as in the baseline ARP.
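The enumeration of the admissible PPS configurations is simple bookkeeping; the following sketch (Python, purely illustrative) reproduces the examples of Fig.~\ref{fig:PPT_mechanism}:
\begin{verbatim}
def pps_configurations(M_b, nu=4):
    """Enumerate (M_p, L_p, G) with L_p = nu * 2^{q_p} and
    G = L_b / L_p, for a baseline preamble of length L_b = nu * M_b."""
    L_b = nu * M_b
    configs, q_p = [], 0
    while nu * 2**q_p <= L_b:
        L_p = nu * 2**q_p
        configs.append((2**q_p, L_p, L_b // L_p))
        q_p += 1
    return configs

# Fig. 2 example, L_b = 16 (M_b = 4):
print(pps_configurations(M_b=4))  # [(1, 4, 4), (2, 8, 2), (4, 16, 1)]
\end{verbatim}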
This partitioning of the preamble sequences may lead to a degradation in the detection performance, while allowing the same preamble sequence to be reused up to $G$ times by multiple devices, thus reducing the occurrence of collisions. In essence, we establish a trade-off relationship between the occurrence of mis-detections and collisions. A decrease in the detection performance of the PPSs can be compensated for, to some extent, by suitably increasing the PPS transmit power. \begin{figure}[t] \centering \subfigure[$L_p=4$ ($M_p=1$), $G=4$.] {\includegraphics[width=7.5cm]{Kim2a.eps}\label{fig:pt_1}}\\ \vspace{-0.2cm} \subfigure[$L_p=8$ ($M_p=2$), $G=2$.] {\includegraphics[width=7.5cm]{Kim2b.eps}\label{fig:pt_2}}\\ \vspace{-0.3cm} \subfigure[$L_p=16$ ($M_p=4$), $G=1$.] {\includegraphics[width=7.5cm]{Kim2c.eps}\label{fig:pt_3}}\\ \vspace{-0.1cm} \caption[]{Configuration of partial preamble sequences when $L_b=16$ ($M_b$=4). } \label{fig:PPT_mechanism} \vspace{-0.3cm} \end{figure} \subsection{An Enhanced Access Reservation Protocol with a Partial Preamble Transmission Mechanism}\label{ARP_PPT} The proposed ARP with the PPT mechanism mainly differs from the baseline NB-IoT ARP in the first two steps, as follows: \begin{itemize} \item {{\bf{Step 1}}: Each device randomly selects a preamble sequence index among the $N_\mathrm{P}$ preambles, and randomly selects \emph{a partial unit} among the $G$ available partial units. Each device transmits a PPS on the selected partial unit. } \item {{\bf{Step 2}}: The eNodeB determines which PPSs have been received. The eNodeB accumulates the received power spread over each of the partial units, and compares it with the pre-defined detection threshold, $d_\mathrm{TH}$, at every partial unit\footnote{The number of detection events is increased by a factor of $G$; however, the number of correlations per detection event remains the same, and, thus, this is not a severe burden for the eNodeB from the detection complexity perspective.}. If a certain PPS is detected, then the eNodeB transmits the RAR, which consists of a preamble index, {\emph{a partial unit index}}, a TA offset, and an uplink grant. Each device uses both the preamble index and the partial unit index to identify the destination of the RAR. } \end{itemize} \section{Performance Analysis}\label{CH3:Performance_Analysis} We now mathematically characterize the detection and collision probabilities associated with the proposed ARP, and formulate an optimization problem whose objective is to maximize the ARP success probability. \subsection{System Model} \label{sub:system_model} We focus on a single NB-IoT cell, consisting of an eNodeB and IoT devices which attempt to access the network through the ARP. Let $N_\mathrm{M}$ denote the number of IoT devices which attempt the ARP in a single ARP session. Let $N_\mathrm{P}$ denote the number of preambles configured in a single NPRACH. We assume that each device performs open-loop power control to compensate for the path loss. Thus, the channel between each device and the eNodeB can be modeled as a single-tap flat fading channel, where the channel coefficient follows a Rayleigh distribution, i.e., $h\sim\mathcal{CN}(0,1)$.
For simplicity, we assume that the channel does not vary within a block of $\nu$ symbol groups, i.e., a single basic unit, but varies independently across blocks.\footnote{Note that under a block fading channel model where $\nu$ is any positive integer, our mathematical analysis is still applicable.} Let $\mathbf{y}^{k}$ denote the received signal of a tagged preamble sequence, which is simultaneously utilized by $k$ devices. It can be expressed as $\mathbf{y}^{k}=\left[ y_{m,j,i}^{k} \right]$, for $m=1,...,M$, $j=1,...,\nu$, and $i=1,...,\xi$, where $y_{m,j,i}^{k}$ represents the $i$-th received symbol in the $j$-th symbol group at the $m$-th repetition. $y_{m,j,i}^{k}$ can be represented as: \begin{equation} y_{m,j,i}^{k} = \sum\nolimits_{k' = 1}^{k} {\sqrt P \cdot h_{m,j,i}^{k'} \cdot x_{m,j,i} + w_{m,j,i}}, \end{equation} where $P$, $h_{m,j,i}^{k'}$, $x_{m,j,i}$, and $w_{m,j,i}$ represent the received power per symbol at the eNodeB, the channel coefficient between the eNodeB and the $k'$-th device among the $k$ devices which use the tagged preamble sequence, the $i$-th transmitted symbol in the $j$-th symbol group at the $m$-th repetition, and the Gaussian noise with \emph{zero} mean and variance $2\sigma^2$, respectively. \subsection{Normalized Received Power} To decide whether a certain PPS has been transmitted or not, the eNodeB needs to accumulate the received power of the PPS spread over the multiple symbol groups corresponding to the sequence. \emph{The normalized received power} of \emph{a tagged PPS}, which is transmitted simultaneously by $k$ IoT devices, $J_n^k$, is represented as: \begin{equation} J_n^k \!=\! \frac{1}{M_p} \sum\limits_{m = 1}^{M_p} {\left| {R_m } \right|^2 } \!=\! \frac{1}{M_p} \sum\limits_{m = 1}^{M_p} {\left| {\sum\limits_{j = 1}^\nu {\sum\limits_{i = 1}^\xi {r_{y_{m,j,i}^{k} x_{m,j,i}} } } } \right|^2 }, \end{equation} where $r_{yx}$ represents the correlation between $y$ and $x$, given by $r_{y x}=y \cdot x^*$, with $(\cdot)^{*}$ denoting the complex conjugate. The term $r_{y_{m,j,i}^{k} x_{m,j,i}}$ can be expressed as: \begin{equation} r_{y_{m,j,i}^{k} x_{m,j,i}} = \sum\nolimits_{k' = 1}^{k} {\sqrt P \cdot h_{m,j,i}^{k'} + \tilde{w}_{m,j,i}}, \end{equation} where $\tilde{w}_{m,j,i}$ follows the same distribution as ${w}_{m,j,i}$, and, thus, $r_{y_{m,j,i}^{k} x_{m,j,i}} \sim \mathcal{CN}(0, 2(kP+1)\sigma^2)$~\cite{Kim2016ICC}. Therefore, \begin{equation} J_n^k \sim \Gamma\left( {M_p,\frac{M_p}{{2(kP(\nu \xi )^2 + \nu \xi )\sigma^2 }}} \right), \end{equation} where $\Gamma\left( \alpha, \beta\right)$ represents a gamma distribution with shape $\alpha$ and rate $\beta$. \subsection{False-Alarm and Mis-Detection Probabilities} A \emph{false-alarm} occurs when a PPS that was not transmitted by any device is detected as active at the eNodeB. The false-alarm probability, $p_\mathrm{fa}$, is expressed as: \begin{align}\label{eq_pfa} p_\mathrm{fa} &= \Pr \left\{ {J_n^0 > d_\mathrm{TH}} \right\} \\ &= 1 - F\left( {d_\mathrm{TH};M_p,\frac{M_p}{{2\nu \xi \sigma ^2 }}} \right) \nonumber\\ &= 1 - \frac{1}{{\Gamma \left( M_p \right)}} \cdot \gamma \left( {M_p,\frac{{d_\mathrm{TH}\cdot M_p}}{{2\nu \xi \sigma ^2 }}} \right), \nonumber \end{align} where $F(x; \alpha, \beta)$ represents the cumulative distribution function (CDF) of a gamma distribution, $\Gamma\left( \alpha, \beta\right)$.
Note that $F(x; \alpha, \beta)$ can be expressed as $F(x; \alpha, \beta)= \frac{{\gamma (\alpha ,\beta x)}}{{\Gamma (\alpha )}}$, where $\gamma (\alpha ,\beta x)$ and $\Gamma(\cdot)$ represent the lower incomplete gamma function and the gamma function, respectively. A \emph{mis-detection} occurs when a desired PPS is not detected, and its probability, $p_\mathrm{md}$, can be expressed as: \begin{align}\label{eq_pmd} p_\mathrm{md} &= \frac{1}{\delta }\sum\nolimits_{k = 1}^{N_\mathrm{M} } {p_k } \cdot \Pr \left\{ {J_n^k < d_\mathrm{TH} } \right\} \\ &= \frac{1}{\delta }\sum\nolimits_{k = 1}^{N_\mathrm{M} } {p_k } \cdot F\left( {d_\mathrm{TH} ;M_p ,\frac{M_p}{{(kP(\nu \xi )^2 + \nu \xi )2\sigma ^2 }}} \right) \nonumber\\ &= \frac{1}{\delta }\sum\nolimits_{k = 1}^{N_\mathrm{M} } {\frac{{p_k }}{{\Gamma \left( {M_p } \right)}}\cdot\gamma \left( {M_p ,\frac{{d_\mathrm{TH} \cdot M_p }}{{(kP(\nu \xi )^2 + \nu \xi )2\sigma ^2 }}} \right)}, \nonumber \end{align} where $\delta$ represents a scaling factor defined as $\delta=1-p_0$, while $p_k$ represents the probability that $k$ IoT devices simultaneously utilize the same tagged PPS, which is modeled by a binomial distribution as: \begin{equation} p_k = \binom{N_\mathrm{M}}{k}\left( {\frac{1}{{N_\mathrm{P} \cdot G}}} \right)^k \left( {1 - \frac{1}{{N_\mathrm{P} \cdot G}}} \right)^{N_\mathrm{M}- k }. \end{equation} \subsection{Collision Probability} A collision occurs when two or more IoT devices select the same PPS. When a tagged IoT device selects a PPS, the average collision probability of the tagged IoT device, $p_\mathrm{c}$, can be expressed as: \begin{equation} p_\mathrm{c} = 1 - \left(1 - \frac{1}{N_\mathrm{P}\cdot G} \right)^{N_\mathrm{M}-1}. \end{equation} \subsection{Optimal PPS Configuration} The occurrence of mis-detections and collisions affects the ARP performance\footnote{The limited amount of available uplink/downlink resources can also affect the ARP performance, yet the same level of performance degradation would be observed in the baseline protocol. Therefore, in this letter, our focus is solely on the effects associated with the expansion of the contention space through the PPT mechanism.}. We evaluate this performance through the ARP success probability, $p_\mathrm{s}$, which is expressed as: \begin{equation} p_\mathrm{s}=(1-p_\mathrm{c})(1-p_\mathrm{md}), \end{equation} where both $p_\mathrm{c}$ and $p_\mathrm{md}$ are functions of $M_p$; thus, we can formulate an optimization problem to find the optimal configuration of the PPSs as follows: \begin{equation}\label{eq:opt_problem} M_{p}^\star = \mathop { \mathrm{argmax} }\limits_{M_p} p_\mathrm{s}(M_p). \end{equation} The optimal value of $M_p$ can be found numerically, assuming that $M_b$, $P$, $\nu$, $\xi$ and an estimate of the number of contending devices are available.
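For illustration, the following sketch evaluates $p_\mathrm{s}(M_p)$ numerically by combining Eqs. (\ref{eq_pfa}) and (\ref{eq_pmd}) with the collision probability, with $d_\mathrm{TH}$ set for a target false-alarm rate via the inverse gamma CDF. The noise normalization $2\sigma^2=1$ is an assumption, the parameter values follow Section \ref{CH4:Numerical_results}, and Python is used here purely for illustration (the evaluation in Section \ref{CH4:Numerical_results} was performed in Matlab):
\begin{verbatim}
import numpy as np
from scipy.stats import gamma, binom

nu, xi, sigma2 = 4, 5, 0.5      # 2*sigma^2 = 1 (assumed)
N_P, M_b, N_M = 12, 64, 5
P = 10**(-5 / 10)               # received SNR of -5 dB
p_fa_target = 1e-4
S = nu * xi                     # symbols per basic unit

def p_success(M_p):
    G = M_b // M_p
    # Threshold for the target false-alarm probability:
    d_th = gamma.ppf(1 - p_fa_target, a=M_p, scale=2 * S * sigma2 / M_p)
    # Mis-detection probability, summed over k >= 1:
    k = np.arange(1, N_M + 1)
    p_k = binom.pmf(k, N_M, 1 / (N_P * G))
    delta = 1 - binom.pmf(0, N_M, 1 / (N_P * G))
    scale = 2 * (k * P * S**2 + S) * sigma2 / M_p
    p_md = (p_k * gamma.cdf(d_th, a=M_p, scale=scale)).sum() / delta
    # Collision probability:
    p_c = 1 - (1 - 1 / (N_P * G))**(N_M - 1)
    return (1 - p_c) * (1 - p_md)

for q_p in range(7):            # M_p = 1, 2, ..., 64
    print(2**q_p, p_success(2**q_p))
\end{verbatim}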
\begin{table} \centering \caption{Simulation parameters and values}\label{tb:simulation_env} \vspace{-0.2cm} \begin{tabular} {>{\centering}m{5.9cm}|c} \Xhline{1.8\arrayrulewidth} Parameters & Values \\ \Xhline{1.8\arrayrulewidth} $\xi$, $\nu$ & 5, 4\\ Number of preambles ($N_\mathrm{P}$) & 12 \\ Number of repetitions in baseline ARP ($M_b$) & $64$ \\ Number of repetitions in proposed ARP ($M_p$) & $2^{q_p}$, $q_p=0,\ldots,7$\\ Number of devices per ARP session ($N_\mathrm{M}$) & 1 $\sim$ 10 \\ Detection threshold ($d_\mathrm{TH}$) & -5 $\sim$ 15 dB \\ SNR ($\rho$) & -10, -5 dB\\ \Xhline{1.8\arrayrulewidth} \end{tabular} \vspace{-0.2cm} \end{table} We first evaluate the impact of the PPT mechanism on the detection performance metrics for a single device, i.e., $N_\mathrm{M}=1$. In Fig.~\ref{fig:detection_performance}, we depict $p_\mathrm{fa}$ and $p_\mathrm{md}$ for varying $d_\mathrm{TH}$ values. For a given $M_p$, $p_\mathrm{fa}$ decreases and $p_\mathrm{md}$ increases as $d_\mathrm{TH}$ increases. Furthermore, a higher $M_p$ leads to both a lower $p_\mathrm{fa}$ and a lower $p_\mathrm{md}$. The former is due to noise averaging; the latter is due to a higher diversity gain. In the proposed ARP, only a fraction of a preamble sequence is transmitted by each device; therefore, the eNodeB is not able to fully exploit the diversity associated with the original preamble sequence, and, thus, the detection performance degrades (i.e., a higher $p_\mathrm{md}$ is observed). However, if each device transmits the PPS with a higher transmit power, then $p_\mathrm{md}$ can be decreased. In practice, the preamble detector aims to provide a constant $p_\mathrm{fa}$; e.g., if we set a target $p_\mathrm{fa}$, i.e., $p_\mathrm{fa}^\mathrm{target}$, to $10^{-4}$, then the detection thresholds should be set to $1.86$ dB and $3.44$ dB for $M_p=64$ and $M_p=16$, respectively. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{Kim3.eps} \vspace{-0.3cm} \caption{Detection performance for varying the detection threshold, $d_\mathrm{TH}$. (Marker: Simulation, Line: Analysis)} \label{fig:detection_performance} \vspace{-0.5cm} \end{figure} Fig.~\ref{fig:optimization_result} shows the performance of the proposed ARP as the $M_p$ value of the PPSs varies, when $M_b\!=\!64$, $P\!=\!-5$~dB, and $N_\mathrm{M}\!=\!5$. Decreasing $M_p$ implies that the eNodeB can generate more contention resources from a single original preamble sequence while slightly sacrificing the detection performance. As $M_p$ increases, $p_\mathrm{md}$ decreases; however, $p_\mathrm{c}$ increases. When $M_p$ is equal to $M_b$, the performance becomes the same as that of the baseline ARP. Due to this trade-off in how the given NPRACH resources are utilized, there exists an optimal point for maximizing the ARP success probability. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{Kim4.eps} \vspace{-0.3cm} \caption{Collision probability, mis-detection probability, and ARP success probability for varying $M_p$ values of the PPSs.} \label{fig:optimization_result} \vspace{-0.2cm} \end{figure} Table~\ref{tb:optimization_solution} shows the solutions of the optimization problem in Eq. (\ref{eq:opt_problem}) for varying $N_\mathrm{M}$ and $P$ when the parameters are given as $N_\mathrm{P}=12$, $M_b=64$, and $p_\mathrm{fa}^\mathrm{target}=10^{-4}$.
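The constant-$p_\mathrm{fa}$ thresholds and the solutions of Eq. (\ref{eq:opt_problem}) can be reproduced with the helper functions of the previous sketch; again, this is our illustrative reconstruction under the same normalization assumptions, not the original evaluation code.
\begin{verbatim}
def d_th_for_target_pfa(M_p, pfa_target=1e-4):
    """Threshold (linear scale) achieving the target false-alarm rate."""
    return gamma.isf(pfa_target, a=M_p, scale=1.0 / M_p)

def p_c(M_p, N_M):
    """Collision probability with N_P*G contention resources."""
    G = M_b // M_p
    return 1.0 - (1.0 - 1.0 / (N_P * G)) ** (N_M - 1)

def optimal_Mp(P_dB, N_M, pfa_target=1e-4):
    """Maximize p_s = (1 - p_c)(1 - p_md) over M_p in {1, 2, ..., M_b}."""
    P_lin = 10.0 ** (P_dB / 10.0)
    def p_s(Mp):
        d_th = d_th_for_target_pfa(Mp, pfa_target)
        return (1.0 - p_c(Mp, N_M)) * (1.0 - p_md(d_th, Mp, P_lin, N_M))
    return max((2 ** q for q in range(int(np.log2(M_b)) + 1)), key=p_s)

# Examples: d_th_for_target_pfa(64) ~ 1.53 (about 1.86 dB), and
# optimal_Mp(-5, 5) returns 8, in line with the reported values.
\end{verbatim}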
The baseline ARP can guarantee extremely low mis-detection probabilities, since a sufficiently large number of repetitions is used to improve the detection performance. As a result, its ARP success probability is affected mostly by the collision probability. On the other hand, in the proposed ARP, we can adjust the configuration of the PPSs. Therefore, when the system load is light, the chosen configuration mitigates the mis-detection probability; when the system load becomes heavy, it mitigates the collision probability even though the detection performance degrades, and, thus, the ARP success probability can be drastically improved compared to that of the baseline ARP. Note that if additional constraints are imposed on either $p_\mathrm{c}$ or $p_\mathrm{md}$, then $M_p^{\star}$ may change. \begin{table}[t] \caption{The solutions of the optimization problem} \label{tb:optimization_solution} \vspace{-0.7cm} \begin{center} \setlength\tabcolsep{2pt} \begin{tabular}{c|c||c|c|c|c||c|c|c|c} \Xhline{2.0\arrayrulewidth} \multicolumn{10}{c}{ $M_b=64$, $N_\mathrm{P}=12$, $p_\mathrm{fa}^\mathrm{target}=10^{-4}$} \\ \Xhline{2.0\arrayrulewidth} \multicolumn{2}{c||}{Parameters} & \multicolumn{4}{c||}{Conventional ARP} & \multicolumn{4}{c}{Proposed ARP}\\ \Xhline{1.5\arrayrulewidth} $N_\mathrm{M}$ & $P(\mathrm{dB})$ & $M_{b}$ & $p_\mathrm{c}(\%)$ & $p_\mathrm{md}$ & $p_\mathrm{s}(\%)$ & $M_{p}^{\star}$ & $p_\mathrm{c}(\%)$ & $p_\mathrm{md}$ & $p_\mathrm{s}(\%)$ \\ \hline {\multirow{2}{*}{1}} & $-5$ & {\multirow{8}{*}{64}} & {\multirow{2}{*}{0}} & 0 & 100 & 64 & 0 & 0 & 100 \\ & $-10$ & & & $8.41\times 10^{-7}$ & 100 & 64 & 0 & $8.41\times 10^{-7}$ & 100 \\ \cline{1-2} \cline{4-10} {\multirow{2}{*}{2}} & $-5$ & & {\multirow{2}{*}{8.3}} & 0 & 91.7 & 16 & 2.08 & $4.46\times 10^{-6}$ & 97.9 \\ & $-10$ & & & $8.04\times 10^{-7}$ & 91.7 & 32 & 4.2 & $4.34\times 10^{-3}$ & 95.4\\ \cline{1-2} \cline{4-10} {\multirow{2}{*}{5}} & $-5$ & & {\multirow{2}{*}{29.4}} & 0 & 70.6 & 8 & 4.10 & $1.48\times 10^{-2}$ & 94.5 \\ & $-10$ & & & $7.01\times 10^{-7}$ & 70.6 & 32 & 15.7 & $4.06\times 10^{-3}$ & 84.0 \\ \cline{1-2} \cline{4-10} {\multirow{2}{*}{10}} & $-5$ & & {\multirow{2}{*}{54.3}} & 0 & 45.7 & 8 & 8.99 & $1.44\times 10^{-2}$ & 89.7\\ & $-10$ & & & $5.51\times 10^{-7}$ & 45.7 & 16 & 17.3 & $1.26\times 10^{-1}$ & 72.3\\ \Xhline{2.0\arrayrulewidth} \end{tabular} \end{center} \vspace{-0.6cm} \end{table} \vspace{-0.2cm} \section{Conclusions} \label{conclusions} We proposed an enhanced access reservation protocol (ARP) with a partial preamble transmission (PPT) mechanism for NB-IoT systems. The proposed ARP can mitigate the collision probability while slightly sacrificing the detection performance. We mathematically analyzed our proposed ARP in terms of the false-alarm, mis-detection, and collision probabilities. We also investigated the trade-off relationship between the mis-detection probability and the collision probability, and found an optimal resource utilization strategy according to the system load. Through extensive simulations, we verified that the proposed ARP outperforms the conventional NB-IoT ARP. \vspace{-0.2cm} \input{nbiot_arxiv.bbl} \end{document}
{ "timestamp": "2017-12-15T02:05:12", "yymm": "1712", "arxiv_id": "1712.05133", "language": "en", "url": "https://arxiv.org/abs/1712.05133" }
\section{Introduction} \subsection{History} Motivated by arithmetic considerations, there has recently been much work in the setting of functional transcendence, and specifically on generalizations of the famous Ax--Schanuel theorem on the exponential function to the context of hyperbolic uniformizations. Indeed, the strategy of Pila and Zannier for proving the Andr\'e--Oort conjecture is reliant on a functional transcendence result dubbed the `Ax--Lindemann theorem' by Pila. The approach originates in the celebrated paper \cite{P}, where Pila used the counting theorem he proved with Wilkie to establish the result in the case of the Shimura variety $X(1)^n$, for $n\geq 1$. The Ax--Lindemann theorem was finally established in full generality for Shimura varieties in \cite{KUY} by Klingler, Ullmo, and Yafaev, and for mixed Shimura varieties by Gao \cite{Gao}. Motivated by an analogous (though much more difficult to carry out) approach to the more general Zilber--Pink conjectures, Mok, Pila, and the second author recently proved the full Ax--Schanuel conjecture for general Shimura varieties \cite{MPT}. In this paper we prove the Ax--Schanuel conjecture in the more general setting of variations of (pure) Hodge structures (formulated recently by Klingler \cite[Conjecture 7.5]{klingler}). This is motivated largely by a recent approach of Lawrence--Venkatesh to establishing arithmetic Shafarevich-like theorems for large classes of varieties, which seems to require the theorem we prove in its full generality. \subsection{Statement of Results} Let $\mathbf{S}$ be the Deligne torus $\textrm{Res}_{\mathbb{C}/\mathbb{R}}\mathbb{G}_m$. Given a pure polarized Hodge structure $h:\mathbf{S}\to {\bf Aut}(H_\mathbb{Z},Q_\mathbb{Z})$, the Mumford--Tate group $\mathbf{MT}_h\subset{\bf Aut}(H_\mathbb{Z},Q_\mathbb{Z})$ is the $\mathbb{Q}$-Zariski closure of $h(\mathbf{S})$. The associated Mumford--Tate domain $D(\mathbf{MT}_h)$ is the $\mathbf{MT}_h(\mathbb{R})$-orbit of $h$ in the full period domain of polarized Hodge structures on $(H_\mathbb{Z},Q_\mathbb{Z})$. By a \emph{weak Mumford--Tate domain} $D(\mathbf{M})$ we mean the $\mathbf{M}(\mathbb{R})$-orbit of $h$ for some normal $\mathbb{Q}$-algebraic subgroup $\mathbf{M}$ of $\mathbf{MT}_h$. Let $X$ be a smooth algebraic variety over $\mathbb{C}$ of dimension $n$ supporting a pure polarized integral variation of Hodge structures $\mathscr{H}_\mathbb{Z}$. Let $\mathbf{MT}_{\mathscr{H}_\mathbb{Z}}$ be the generic Mumford--Tate group, and let $\Gamma\subset \mathbf{MT}_{\mathscr{H}_\mathbb{Z}}(\mathbb{Z})$ be the image of the monodromy representation $\pi_1(X)\to \mathbf{MT}_{\mathscr{H}_\mathbb{Z}}(\mathbb{Z})$ after possibly passing to a finite cover. Let $\mathbf{G}$ be the identity component of the $\mathbb{Q}$-Zariski closure of $\Gamma$. Let $D=D(\mathbf{G})$ be the associated weak Mumford--Tate domain and $\phi: X\to \Gamma\backslash D$ the period map of $\mathscr{H}_\mathbb{Z}$. The compact dual $\check D$ of $D$ is a projective variety containing $D$ as an open set in the archimedean topology. Consider the fiber product \[ \xymatrix{ X\times D&W\ar@{}[l]|{\supset\hspace{-.7em}}\ar[r]^{\tilde \phi}\ar[d]&D\ar[d]^\pi\\ &X\ar[r]_\phi&\Gamma\backslash D.
} \] In this situation, for any weak Mumford--Tate subdomain $D'=D(\mathbf{M}')\subset D$ such that $\Gamma\cap \mathbf{M}'(\mathbb{Q})$ is $\mathbb{Q}$-Zariski dense, $\phi^{-1}\pi(D')$ is an algebraic subvariety of $X$ by a result of Cattani--Deligne--Kaplan \cite{alghodge}, and we refer to such subvarieties as \emph{weak Mumford--Tate subvarieties} of $X$. \begin{thm}[Ax--Schanuel for variations of Hodge structures]\label{main}In the above setup, let $V\subset X\times \check D$ be an algebraic subvariety, and let $U$ be an irreducible analytic component of $V\cap W$ such that \[\operatorname{codim}_{X\times \check D}(U)<\operatorname{codim}_{X\times\check D}(V)+\operatorname{codim}_{X\times\check D}(W).\] Then the projection of $U$ to $X$ is contained in a proper weak Mumford--Tate subvariety. \end{thm} The theorem for example implies that the (analytic) locus in $X$ where the periods satisfy a given set of algebraic relations must be of the expected codimension unless there is a reduction in the generic Mumford--Tate group. See \cite{klingler} for some related discussions. \subsection{Outline of the proof} We follow closely the proof in \cite{MPT}. There are two serious complications that have to be addressed, which are as follows: First, we need to find a suitable fundamental domain in $D$ for the image of $X$ in $\Gamma\backslash D$. This domain has to be definable in the o-minimal structure $\mathbb{R}_{\mathrm{an,exp}}$, and have certain growth properties. In the Shimura case, this is done by using a Siegel set. In our current setup this seems more difficult, due to the absence of toroidal coordinates. Instead, we use Schmid's theory of degenerations of Hodge structures to define our fundamental domain, which also provides a new approach in the setting of Shimura varieties. For more details on this, see \S\ref{definable}. Second, the proof of Theorem \ref{main} requires a volume bound on Griffiths transverse\footnote{It is essential to restrict to Griffiths transverse subvarieties, as the general statement is false since, for example, $D$ contains compact subvarieties.} subvarieties $X\subset D$ analogous to those proven by Hwang--To for hermitian symmetric domains \cite{hwangto1}. We prove this in \S\ref{volume} and the result is as follows: \begin{thm}\label{volumebound}There are constants $\beta,\rho>0$ (only depending on $D$) such that for any $R>\rho$, any $x\in D$, and any positive-dimensional Griffiths transverse global analytic subvariety $Z\subset B_x(R)\subset D$, \[\operatorname{vol}(Z)\gg e^{\beta R}\operatorname{mult}_xZ\] where $B_x(R)$ is the radius $R$ ball centered at $x$ and $\operatorname{vol}(Z)$ the volume with respect to the natural left-invariant metric on $D$. \end{thm} In \S\ref{heights} we establish all the required comparisons between the various height and distance functions that arise, and \S\ref{proof} completes the proof. \subsection*{Acknowledgements} The first author was partially supported by NSF grant DMS-1702149. \section{Volume estimates}\label{volume} In this section we prove Theorem \ref{volumebound}; we begin with some general remarks. Without loss of generality, we may clearly assume $D$ is a full period domain. Further, letting $\mathbb{H}$ be the upper half-plane, $D\times \mathbb{H}$ embeds isometrically into a period domain $D'$ of weight one larger by tensoring with the weight one Hodge structure of an elliptic curve, and it therefore suffices to consider $D$ of odd weight. We make both of these assumptions for the remainder of this section.
For general background on period domains and Hodge structures, see for example \cite{perioddomain}. \subsection{Hodge norms} A point $x\in D$ yields a Hodge structure $H_x$ on $H_\mathbb{Z}$ polarized by $Q_\mathbb{Z}$. Recall that the Hodge metric $h_x(v,w)=Q_\mathbb{Z}(v,C_x\bar w)$ is positive-definite, where $C_x$ is the Weil operator of $H_x$. For any $w\in H_\mathbb{C}$ we can define a function $h_x(w):=h_x(w,w)$ on $D$. Note that $g^*h_x(w)=h_x(g^{-1}w)$ for $g\in\mathbf{G}(\mathbb{R})$. Recall also that a choice of point $x\in D$ naturally endows the Lie algebra $\frak{g}_\mathbb{R}$ of $\mathbf{G}(\mathbb{R})$ with a weight zero Hodge structure $\frak{g}_x$ polarized by the Killing form, and that the holomorphic tangent space at $x$ is naturally identified with $\frak{g}_x^-$, where as usual we give $\frak{g}^{p,-p}$ grading $p$. We refer to the odd part of $\frak{g}_x^-$ as the horizontal directions, and to $\frak{g}_x^{-1,1}$ as the Griffiths transverse directions. \begin{lemma}\label{derivatives} For Hodge-pure horizontal (in particular Griffiths transverse) directions $X\in \frak{g}_x^-$, we have \begin{align*} \d h_x(w)(X) &= -2 h_x(Xw,w)\\ \partial\bar{\partial} h_x(w)(X,\bar X)&= 2h_x(Xw)+2h_x(\bar X w) \end{align*} \end{lemma} \begin{proof} Note that in $\mathbb{C}[z,\bar{z}]/(z^2,\bar{z}^2)$ we have \begin{align*} &\exp(-zX)\exp\left(zX+\bar{z}\bar{X}+\frac{|z|^2}{2}\left([X,\bar X]^{<0}+[\bar X,X]^{>0}\right)\right)=\\ &=\left(1-zX\right)\left(1+zX+\bar{z}\bar{X}+\frac{|z|^2}{2}\left([X,\bar X]^{<0}+[\bar X,X]^{>0}\right)+\frac{|z|^2}{2}\left(X\bar{X}+\bar{X}X\right)\right)\\ &=1+\bar{z}\bar{X}+|z|^2\left(-X\bar{X}+\frac{1}{2}\left([X,\bar X]^{<0}+[\bar X,X]^{>0}\right)+\frac{1}{2}\left(X\bar{X}+\bar{X}X\right)\right)\\ &=1+\bar{z}\bar{X}+\frac{1}{2}\left(-[X,\bar{X}] +[X,\bar X]^{<0}+[\bar X,X]^{>0}\right)\\ &=1+\bar{z}\bar{X}+\frac{1}{2}\left(-[X,\bar X]^{\geq 0}+[\bar X,X]^{>0}\right) \end{align*} which is in the parabolic stabilizing the Hodge flag at $x$. Thus, modulo $(z^2,\bar z^2)$ we have \[\exp(zX).x= \exp\left(M(z X,\bar z\bar X)\right).x\] where $M(zX,\bar z\bar X)=zX+\bar{z}\bar{X}+\frac{|z|^2}{2}\left([X,\bar X]^{<0}+[\bar X,X]^{>0}\right)\in \frak{g}$. Thus, \begin{align*} \d h_x(w)(X) &= \frac{\d}{\d z} \exp(zX)^*h_x(w)|_{z=0}\\ &= \frac{\d}{\d z} h_0\left(\exp\left(-M(z X,\bar z\bar X)\right).w\right)|_{z=0}\\ &=h_x(-Xw,w)+h_x(w,-\bar X w)\\ &=-2h_x(Xw,w) \end{align*} where we have used that $X$ is horizontal and thus conjugate self-adjoint with respect to $h_x$. Likewise, \begin{align*} \partial\bar{\partial} h_x(w)(X,\bar X) &= \frac{\d^2}{\d z\d \bar z} \exp(zX)^*h_x(w)|_{z=0}\\ &= \frac{\d^2}{\d z\d \bar z} h_0\left(\exp\left(-M(z X,\bar z\bar X)\right).w\right)|_{z=0}\\ &=h_x(-Xw,-Xw)+h_x(-\bar X w,-\bar X w)\\ &+\operatorname{Re} h_x(-[X,\bar X]^{<0}w,w)+\operatorname{Re} h_x(-[\bar X,X]^{>0}w,w)\\ &+\operatorname{Re} h_x((X\bar X+\bar X X)w,w)\\ &=2h_x(Xw)+2h_x(\bar Xw) \end{align*} where we have used that $[X,\bar X]^{<0}=[\bar X,X]^{>0}=0$ since $X$ is Hodge-pure, as well as the conjugate self-adjointness of $X$. \end{proof} \subsection{Distance functions} Let $\pi: D\to D_W$ be the projection to the associated symmetric space by taking the Weil Jacobian. For every $x\in D$, we denote the Weil Hodge structure on $\frak{g}_\mathbb{C}$ by $\frak{h}_x$. Note that both Hodge structures $\frak{g}_x$ and $\frak{h}_x$ induce the same Hodge metric on $\frak{g}_\mathbb{C}$.
Given $x_0\in D$, $\pi$ is identified with $\mathbf{G}(\mathbb{R})/V\to \mathbf{G}(\mathbb{R})/K$, where $V$ is the stabilizer of $x_0$ under $\mathbf{G}(\mathbb{R})$ and $K$ is the unitary subgroup of $\mathbf{G}(\mathbb{R})$ with respect to $h_{x_0}$. $K$ is a maximal compact subgroup of $\mathbf{G}(\mathbb{R})$. Let $v_0$ be a unit-length generator of $\det \frak{h}_{x_0}^{+}$ in $\bigwedge^{\dim D_W}\frak{h}_{x_0}$, and define a function $\phi_0:D\to \mathbb{R}$ by \[\phi_0(x):=\log h_{x}(v_0). \] The function $\phi_0$ factors through the projection $\pi$. If $F_0$ is the fiber of $\pi$ containing $x_0$, then by the $KAK$ decomposition of $\mathbf{G}(\mathbb{R})$, $\phi_0$ in fact only depends on $F_0$ since $K$ fixes $v_0$ up to a phase. \begin{lemma}\label{nonzero} $i\partial\bar{\partial} \phi_0$ is strictly positive on Griffiths transverse tangent directions at $x_0$. \end{lemma} \begin{proof} Let $X\in \frak{g}_{x_0}^{-1,1}$, and note that $X\in \frak{h}_{x_0}^{-1,1}\oplus \frak{h}_{x_0}^{1,-1}$. Let $X^{-1,1},X^{1,-1}$ be the graded pieces of $X$ with respect to the Weil Hodge structure. Fixing a basis $Y_i$ of $\frak{h}_{x_0}^+$, we see that \[\operatorname{ad}(X)\left(Y_1\wedge\cdots\wedge Y_k\right)=\sum_i (-1)^{i-1}Y_1\wedge\cdots\wedge \operatorname{ad}(X^{-1,1})Y_i\wedge\cdots\wedge Y_k. \] The vectors on the right-hand side are all linearly independent, so if $\operatorname{ad}(X)v_0=0$ then $\operatorname{ad}(X)\frak{h}_{x_0}^+=0$. Likewise, if $\operatorname{ad}(\bar X)v_0=0$, then $\operatorname{ad}(X)\frak{h}_{x_0}^-=0$. Thus, if $i\partial\bar{\partial}\phi_0(X,\bar X)=0$ then by Lemma \ref{derivatives} $\operatorname{ad}(X)$ kills $\frak{h}_{x_0}^{\mathrm{odd}}$ and in particular $\bar X$, but this implies $X=0$ \cite[Corollary 12.6.3(iii)]{perioddomain}. \end{proof} Define the horizontal distance from $x$ to $x_0$, denoted $d_{0}^\mathrm{horiz}(x)$, to be the geodesic distance between $y:=\pi(x)$ and $y_0:=\pi(x_0)$ with respect to the natural $\mathbf{G}(\mathbb{R})$-invariant metric on the symmetric space $D_W$. Let $A$ be an $\mathbb{R}$-split torus of $\mathbf{G}(\mathbb{R})$ that is Killing-orthogonal to $K$. By the $KAK$ decomposition of $\mathbf{G}(\mathbb{R})$, the distance $d^{D_W}_{0}(y)$ and $\phi_0(x)$ are both determined by $d^{D_W}_{0}(a y_0)$ and $\phi_0(ax_0)$ for $a\in A$. $A y_0$ is evidently a totally geodesic submanifold of $D_W$, and the restriction of the invariant metric is a Euclidean metric in exponential coordinates, so \begin{equation}\label{dist} d^{D_W}_{0}(ay_0)^2\sim\sum_{i}t^2_i \end{equation} where $a=\exp(\sum_i t_iT_i)$ for some chosen basis $T_i$ of the Lie algebra $\frak{a}$ of $A$. The main result of this subsection is the following comparison. Note that both $d_0^{\mathrm{horiz}}$ and $\phi_0$ vanish exactly on $F_0$. \begin{prop}\label{comparison} $d^\mathrm{horiz}_0(x)\ll \phi_0(x)+O(1)$ and $\phi_0(x)\ll d^\mathrm{horiz}_0(x)+O(1)$. \end{prop} \begin{proof} Griffiths--Schmid \cite{GS} show that a function closely related to $\phi_0$ is an exhaustion function of $D$. For $D_W$, their function is \[\phi_0'(gy_0) = h_{x_0}(gv_0)\] and their result implies $\phi_0\to \infty$ at the boundary of $D$. Now, consider the decomposition $$v_0=\sum_\alpha v_\alpha $$ by $\frak{a}$-weights. Note that as $A$ is Killing-orthogonal to $K$, $\frak{a}$ is odd and therefore self-adjoint with respect to $h_{x_0}$.
It follows then that the decomposition of $\bigwedge^{\dim D_W}\frak{g}_{\mathbb{C}}$ into $\frak{a}$-weight spaces is orthogonal with respect to $h_{x_0}$, and thus for $T\in \frak{a}$, \[h_{x_0}(\exp(-T)v_0)=\sum_{\alpha}e^{-2\alpha(T)}h_{x_0}(v_\alpha).\] Let $\Xi\subset\frak{a}$ be the convex hull of the $\alpha$ for which $v_\alpha\neq 0$. Since $\phi_0\to\infty$ at the boundary, we must have $0\in\Xi$, for otherwise there would be a direction in which $\phi_0$ is bounded. It then follows that \[\log\sum_i \left(e^{T_i^\vee}(a)+e^{-T_i^\vee}(a)\right)\ll\phi_0\left(ax_0\right)\ll \log\sum_i \left(e^{T_i^\vee}(a)+e^{-T_i^\vee}(a)\right)\] which implies the claim by \eqref{dist}. \end{proof} \subsection{Multiplicity bounds} For any $r>0$ and $x_0\in D$, denote by \[B^{\phi_{0}}(r):=\{x\in D\mid \phi_0(x)<r\}\] and for any Griffiths transverse analytic subvariety $Z\subset D$, \[\operatorname{vol}^{\phi_0}(Z):=\int_{Z}i\partial\bar\partial\phi_0.\] \begin{prop}\label{formbounds} Let $\omega$ be the K\"ahler form of the natural left-invariant hermitian metric on $D$. \begin{enumerate} \item $i\partial\bar{\partial}\phi_0\geq_{\operatorname{trans}} 0$ and $i\partial\bar{\partial} \phi_0=O_{\operatorname{trans}}(\omega)$; \item $|\d \phi_0|^2 =O_{\operatorname{trans}}(i\partial\bar{\partial} \phi_0)$. \end{enumerate} \end{prop} In the statement of the proposition, the notations $O_{\operatorname{trans}}(\cdot)$ and $\geq_{\operatorname{trans}}$ mean the bound holds in Griffiths transverse tangent directions. \begin{proof} By definition, $\omega_x(X,\bar X) \sim h_x(X)$. For horizontal $X$, $\operatorname{tr}(X\bar X)\sim h_x(X)$ is larger than the maximum eigenvalue of $X^*h_x$ with respect to $h_x$. For $X\in \frak{g}^-$ pure, by Lemma \ref{derivatives} we have \[i\partial\bar{\partial}\phi_{0}(X,\bar X)=2\left (\frac{h_x(Xv_0)}{h_x(v_0)}+\frac{h_x(\bar X v_0)}{h_x(v_0)}\right)-\left|\frac{h_x(Xv_0,v_0)+h_x(v_0,\bar X v_0)}{h_x(v_0)}\right|^2\] which is nonnegative by the triangle inequality and bounded by the maximal eigenvalue of $X^*h_x$ with respect to $h_x$, so (1) follows. The second claim follows by Lemma \ref{derivatives} and the following lemma: \begin{lemma}\label{CS}There is a $\beta>0$ (only depending on $D$) such that for any $x\in D$, $w\in H_\mathbb{C}$, and $X\in\frak{g}_x^{-1,1}$, \[h_x(w)\cdot \frac{h_x(Xw)+h_x(\bar X w)}{2}\geq (1+\beta)\left|h_x(Xw,w)\right|^2.\] \end{lemma} \begin{proof} Let $w=\sum_i w^{i,n-i}$ be the decomposition into Hodge components at $x$, so that we have Hodge decompositions $Xw=\sum_i Xw^{i,n-i}$, $\bar Xw=\sum_i \bar Xw^{i,n-i}$. Now let $$ a_i^2=h_x(w^{i,n-i}),\quad b_{i-1}^2=h_x(Xw^{i,n-i}), \quad c_{i+1}^2=h_x(\bar X w^{i,n-i}),$$ and we also set $b_n=c_0=0$. Note that since $X$ and $\bar X$ are adjoint we have $$ h_x(Xw,w)=\sum_i h_x(Xw^{i+1,n-i-1},w^{i,n-i})$$ and $$|h_x(Xw^{i+1,n-i-1},w^{i,n-i})|=|h_x(w^{i+1,n-i-1},\bar Xw^{i,n-i})|\leq \min(a_ib_i,a_{i+1}c_{i+1}).$$ Thus it is sufficient to show that $$ \left(\sum_{i=0}^n a_i^2\right)\left(\sum_{i=0}^{n-1} b_i^2 + \sum_{i=1}^n c_i^2 \right)\geq (2 + \delta)\left(\sum_{i=0}^n a_i(r_ib_i+s_ic_i) \right)^2$$ for some choice of nonnegative $r_i,s_i$ with $r_i+s_{i+1}=1$ for $0\leq i\leq n-1$. By the Cauchy--Schwarz inequality, the left-hand side is greater than or equal to $\left(\sum_{i=0}^na_i\sqrt{b_i^2+c_i^2}\right)^2$.
Thus, it suffices to show for each $i$, \[b_i^2+c_i^2\geq (2+\delta)\left(r_ib_i+s_ic_i\right)^2.\] Note that $x^2+y^2 - 2(rx+sy)^2$ is positive definite if $(1-2r^2)(1-2s^2)>4r^2s^2$. \begin{lemma} There exist non-negative real numbers $r_0,s_1,r_1,s_2,\dots,s_{n-1},r_{n-1},s_n$, with $r_i+s_{i+1}=1$ for $0\leq i\leq n-1$, $\max(r_0,s_n)<\frac{1}{\sqrt{2}}$, and $(1-2r_i^2)(1-2s_i^2)> 4r_i^2s_i^2$ for all $1\leq i \leq n-1$. \end{lemma} \begin{proof} Note that at $r_i=s_i=\frac12$ we get exact equality, in that $(1-2r_i^2)(1-2s_i^2)= 4r_i^2s_i^2$. Thus, we set $r_j=\frac12+\delta_j$, where $\delta_0=\frac19$ and $\delta_{j+1}$ is sufficiently small in terms of $\delta_j$ to ensure $(1-2r_{j+1}^2)(1-2s_{j+1}^2) > 4r_{j+1}^2s_{j+1}^2$. \end{proof} The statement now follows by picking the $r_i,s_i$ from the previous lemma, and setting $(2+\delta)$ to be the largest number such that $x^2+y^2 - (2+\delta)(r_ix+s_iy)^2$ is positive semi-definite for $1\leq i\leq n-1$ and $1-(2+\delta)\max(r_0,s_n)^2$ is nonnegative. \end{proof} \end{proof} The previous proposition implies that the volume of Griffiths transverse subvarieties of $D$ grows at least exponentially in the radius: \begin{prop} There is a constant $\beta>0$ such that for any $R>0$ and any positive-dimensional Griffiths transverse global analytic subvariety $Z\subset B^{\phi_0}(R)$, \[e^{-\beta r}\operatorname{vol}^{\phi_0}(Z\cap B^{\phi_0}(r))\] is a nondecreasing function in $r\in[0,R]$. \end{prop} \begin{proof} Let $\psi_0 =-e^{-\beta\phi_0}$ for $\beta$ the constant from Lemma \ref{CS}. We have \[i\partial\bar{\partial} \psi_0=\beta e^{-\beta\phi_0}\left(i\partial\bar{\partial} \phi_0-\beta|\d\phi_0|^2\right)\] which is nonnegative in Griffiths transverse directions by the proof of Proposition \ref{formbounds}(2). By Stokes' theorem we have \begin{align*} \operatorname{vol}^{\phi_0} (Z\cap B^{\phi_{0}}(r))&= \int_{Z\cap B^{\phi_{0}}(r)}(i\partial\bar{\partial} \phi_0)^d\\ &=\int_{Z\cap\partial B^{\phi_{0}}(r)}d^c \phi_0\wedge (i\partial\bar{\partial} \phi_0)^{d-1} \\ &=\beta^{-1}e^{\beta r}\int_{Z\cap\partial B^{\phi_{0}}(r)}d^c \psi_0\wedge (i\partial\bar{\partial} \phi_0)^{d-1}\\ &=\beta^{-1}e^{\beta r}\int_{Z\cap B^{\phi_{0}}(r)}i\partial\bar\partial \psi_0\wedge (i\partial\bar{\partial} \phi_0)^{d-1}\\ &=\beta^{-d}e^{\beta dr} \int_{Z\cap B^{\phi_{0}}(r)}(i\partial\bar{\partial} \psi_0)^d \end{align*} which implies the claim, as $\psi_0|_Z$ is plurisubharmonic. \end{proof} \begin{proof}[Proof of Theorem \ref{volumebound}] Choose a fixed euclidean ball $B$ centered around $x_0$. By a classical result of Federer (see for example \cite{Stolzenberg}), we have an inequality of the form $\operatorname{vol}^{\mathrm{eucl}}(Z\cap B)\gg \operatorname{mult}_{x_0}Z$. Choose a fixed radius $r_0$ such that $B\subset B^{\phi_0}(r_0)$. After possibly shrinking $B$, $i\partial\bar{\partial}\phi_0$ is comparable to the euclidean K\"ahler form on $B$ in Griffiths transverse directions by Lemma \ref{nonzero}, and combining this with the above proposition we have $$\operatorname{vol}^{\phi_0}(Z\cap B^{\phi_0}(r))\gg e^{\beta r}\operatorname{vol}^{\phi_0}(Z\cap B^{\phi_0}(r_0))\gg e^{\beta r}\operatorname{mult}_{x_0}Z$$ for $r>r_0$. By Proposition \ref{comparison}, the balls $B^{\phi_{0}}(r)$ are comparable to the balls $B^\mathrm{horiz}_{x_0}(r)$, which are in turn comparable to the balls $B_{x_0}(r)$ with respect to a left-invariant metric on $D$ for $r\gg0$, so we obtain the bound in Theorem \ref{volumebound}.
\end{proof} \section{Definable fundamental sets}\label{definable} Throughout the following, by definable we mean definable with respect to the o-minimal structure $\mathbb{R}_{\mathrm{an,exp}}$. Let $X$ be a smooth algebraic variety supporting a pure polarized integral variation of Hodge structures $\mathscr{H}_\mathbb{Z}$, and let $(\bar X,E)$ be a proper log-smooth compactification of $X$. For simplicity we may assume that $\mathscr{H}_\mathbb{Z}$ has unipotent local monodromy and that the associated period map $\phi:X\to \Gamma\backslash D$ is proper, although the argument carries through without making these assumptions. We may also assume that the monodromy $\Gamma$ is torsion-free. The structure of $X$ as an algebraic variety canonically endows it with the structure of a definable manifold, and the choice of compactification $(\bar X,E)$ allows us to choose a definable atlas of $X$ of finitely many polydisks $\Delta^k\times(\Delta^*)^\ell$. Note that any polydisk chart $P$ in such an atlas $\{P_i\}$ can be shrunk to yield a new such atlas, as the complement of $\bigcup_{P_i\neq P}P_i$ is contained in $P$ and has compact closure contained in the interior of the closure of $P$ in $\bar X$. Let \[\exp:\Delta^k\times\mathbb{H}^\ell\to \Delta^k\times(\Delta^*)^\ell\] be the standard universal cover, and choose a bounded vertical strip $\Sigma\subset \mathbb{H}$ such that $\Delta^k\times\Sigma^\ell$ is a fundamental set for the action of covering transformations. By the above remark, by shrinking a polydisk we may always restrict to a region in $\Delta^k\times\Sigma^\ell$ where $|z_i|$ is bounded away from $1$ on the $\Delta$ factors and $\operatorname{Im} z_i$ is bounded away from $0$ on the $\Sigma$ factors. Choose lifts $\tilde\phi:\Delta^k\times\mathbb{H}^\ell\to D$ of the period map restricted to each chart, and let $\mathcal{F}$ be the disjoint union of $\Delta^k\times \Sigma^\ell$ over all charts. We then have a diagram \begin{equation} \xymatrix{ \mathcal{F}\ar[r]^{\tilde\phi}\ar[d]_\exp& D \\ X&}\label{diagram}\end{equation} and $\mathcal{F}$ has a natural definable structure. Note that the embedding $D\subset\check{D}$ gives $D$ a canonical definable structure. \begin{lemma}\label{definablelift} Both maps in \eqref{diagram} are definable. \end{lemma} \begin{proof} The claim for the vertical map is obvious. By the nilpotent orbit theorem, for each polydisk we have $\tilde\phi=e^{z\cdot N}\tilde\psi$ where $\tilde\psi=\psi\circ\exp$ for some extendable holomorphic function $\psi:\Delta^{n}\to D$ (after shrinking the polydisks). The action of $\mathbf{G}(\mathbb{R})$ on $D$ is definable, and $e^{z\cdot N}$ is polynomial in $z$, so $\tilde\phi:\Delta^k\times\Sigma^\ell\to D$ is definable. \end{proof} Fix a left-invariant metric $h_D$ on $D$ and let $\Phi = \tilde\phi(\mathcal{F})$. \begin{prop}\label{smallvolume} Let $Z\subset \check{D}$ be algebraic. For all $\gamma\in \mathbf{G}(\mathbb{Z})$, $\operatorname{vol}(Z\cap\gamma\Phi)= O(1)$. \end{prop} \begin{proof}Evidently it is enough to show $\operatorname{vol}(Z'\cap\Phi)= O(1)$ for all $Z'$ in the same connected component of the Hilbert scheme of $\check{D}$ as $Z$. Further, it suffices to show $\operatorname{vol}(\tilde\phi^{-1}(Z')\cap\Delta^k\times\Sigma^\ell)= O(1)$ for each lifted polydisk chart $\tilde\phi:\Delta^k\times\mathbb{H}^\ell\to D$, where the volume is computed with respect to $\tilde\phi^*h_D$.
For any holomorphic horizontal map $f:M\to \Gamma\backslash D$ we have $f^*h_D \ll \kappa_M$ where $\kappa_M$ is the Kobayashi metric of $M$. In particular, for $M=\Delta^k\times\mathbb{H}^\ell$ the metric $\kappa_M$ is the maximum over the coordinate-wise Poincar\'e metrics. After shrinking the polydisk, the factors in $\Delta^k\times \Sigma^\ell$ have finite volume with respect to the Kobayashi metric of the larger polydisk, and thus it is enough to uniformly bound the degree of the projection of $\tilde\phi^{-1}(Z')$ to any subset of coordinates. By definable cell decomposition, for any definable subset $L\subset\mathbb{R}^N$ and any coordinate projection $\mathbb{R}^N\to\mathbb{R}^M$, the number of connected components of the fibers of $L$ is uniformly bounded. Applying this to the universal family of $\tilde\phi^{-1}(Z')\subset \Delta^k \times\Sigma^\ell$, the claim follows. \end{proof} \section{Heights}\label{heights} Fix a basepoint $x_0\in \Phi$ so that we have an identification $D \cong \mathbf{G}(\mathbb{R})/V$ for a compact subgroup $V\subset \mathbf{G}(\mathbb{R})$. Thinking of $D$ as a space of Hodge structures on the fixed integral lattice $(H_\mathbb{Z},Q_\mathbb{Z})$, as before we denote by $h_x$ the induced Hodge metric on $H_\mathbb{C}$ corresponding to $x\in D$. \begin{defn} For $\gamma\in \mathbf{G}(\mathbb{Z})$ let $H(\gamma)$ be the height of $\gamma$ with respect to the representation $\rho_\mathbb{Z}:\mathbf{G}(\mathbb{Z})\to \operatorname{GL}(H_\mathbb{Z})$. For $g\in \mathbf{G}(\mathbb{R})$, we denote by $||\rho_\mathbb{R}(g)||$ the maximum archimedean size of the entries of $\rho_\mathbb{R}(g)$, so that if $\gamma\in\mathbf{G}(\mathbb{Z})$ we have $H(\gamma)=||\rho_\mathbb{R}(\gamma)||$. \end{defn} For any $R>0$ let $B_{x_0}(R)\subset D$ be the ball of radius $R$ centered at $x_0$. The main goal of this section is to establish the following: \begin{thm}\label{smallheight} For any $R>0$, every element $\gamma$ of \[\{\gamma\in \mathbf{G}(\mathbb{Z})\mid B_{x_0}(R)\cap \gamma^{-1}\Phi\neq \varnothing\}\] has height $H(\gamma)=e^{O(R)}$. \end{thm} Define $d_0(x)=d(x,x_0)$. We write $f\preceq g$ if $|f|\ll |g|^{O(1)}+O(1)$, and $f\asymp g$ if $f\preceq g$ and $g\preceq f$. \begin{lemma}\label{distanceheight}Let $\lambda(x,x')$ be the maximal eigenvalue of $h_x$ with respect to $h_{x'}$. Then \begin{enumerate} \item For all $g\in \mathbf{G}(\mathbb{R})$ we have $||\rho_\mathbb{R}(g)||\asymp e^{d_0(gx_0)}$; \item $\lambda(x,x')\asymp e^{d(x,x')}$. \end{enumerate} \end{lemma} \begin{proof}Choose a maximal compact subgroup $K\subset \mathbf{G}(\mathbb{R})$ containing $V$ and a left-invariant metric on the associated symmetric space $\mathbf{G}(\mathbb{R})/K$. Note that the diameters of the fibers of $\mathbf{G}(\mathbb{R})/V\to \mathbf{G}(\mathbb{R})/K$ are bounded. Choosing a maximal split torus $A\subset \mathbf{G}(\mathbb{R})$ and a basis $A_i$ of the Lie algebra $\mathfrak{a}$ of $A$, we have for any $g\in \mathbf{G}(\mathbb{R})$ with $KAK$ decomposition $g=k_1a k_2$ \[\sqrt{\sum_i t_i^2}\ll d_0(gx_0)=d_0(ax_0)+O(1)\ll\sqrt{\sum_i t_i^2}+O(1)\] where $a=\exp(\sum_it_iA_i)$. As $$\max_i \exp(|t_i|)\preceq||\rho_\mathbb{R}(g)||\preceq \max_i \exp(|t_i|)$$ part (1) follows. For part (2), note that by $\mathbf{G}(\mathbb{R})$-invariance we may restrict to the case $x'=x_0$.
Setting $\rho=\rho_\mathbb{R}$ for convenience, note that $\operatorname{tr}(\rho(g)^*\rho(g))$ is a sum of the eigenvalues of $h_{gx_0}$ with respect to $h_{x_0}$, where $\rho(g)^*$ is the adjoint of $\rho(g)$ with respect to $h_{x_0}$. Thus $\operatorname{tr}(\rho(g)^*\rho(g))\asymp \lambda(gx_0,x_0)$. As $\operatorname{tr}(\rho(g)^*\rho(g))$ is the sum of the squares of the entries of $\rho(g)$, part (2) follows from part (1). \end{proof} We define a proximity function to the boundary via the minimal period length: \[\mu(x)=\min_{v\in H_\mathbb{Z}\backslash\{0\}} h_x(v).\] For any $v\in H_\mathbb{C}$ we have $\log\frac{h_{x_0}(v)}{h_x(v)}\ll d_0(x)+O(1) $ by part (2) of Lemma \ref{distanceheight}, and so we deduce the following: \begin{cor}\label{bound1} $-\log\mu \ll d_0+O(1)$. \end{cor} \begin{proof} There is some $v\in H_\mathbb{Z}$ with $\log\mu=\log h_x(v)$ and thus \[-\log\mu=-\log h_x(v)\ll\log \frac{h_{x_0}(v)}{h_x(v)}+O(1)\ll d_0(x) +O(1).\] \end{proof} When restricted to the fundamental set $\Phi$, we in fact have a comparison in the other direction: \begin{lemma}\label{bound2} For $x\in\Phi$ we have $d_0(x)\ll -\log\mu(x)+O(1)$. \end{lemma} \begin{proof}We may assume $\mathcal{F}$ is a single $\Delta^k\times\Sigma^\ell$. After choosing logarithms $N_1,\dots,N_\ell$ of the local monodromy operators of the variation over $\Delta^k\times (\Delta^*)^\ell$, let $v_i$ be a fixed basis of $H_\mathbb{Z}$ descending to a basis of the multi-graded module associated to the $\ell$ weight filtrations, where we take each grading centered at $0$. Let $w_i^{(j)}$ for $j=1,\ldots,\ell$ be the weights of $v_i$ with respect to $N_j$. By \cite{hodgeasymp}, for every permutation $\pi$ and on each region $S_{\pi}\subset\Delta^k\times \mathbb{H}^\ell$ of the form $\operatorname{Im} z_{\pi(1)}\gg \cdots \gg \operatorname{Im} z_{\pi(\ell)}\gg 1$ we have \[h_{\tilde\phi(z)}(v_i)\sim \left(\frac{\operatorname{Im} z_{\pi(1)}}{\operatorname{Im} z_{\pi(2)}}\right)^{w_i^{(1)}}\cdots\left(\frac{\operatorname{Im} z_{\pi(\ell-1)}}{\operatorname{Im} z_{\pi(\ell)}}\right)^{w_i^{(\ell-1)}}\cdot(\operatorname{Im} z_{\pi(\ell)})^{w_i^{(\ell)}},\] where ``$\sim$'' means ``within a bounded function of''. As the set of weights is preserved under negation, it follows that $\max_i h_{\tilde\phi(z)}(v_i)\sim (\min_i h_{\tilde{\phi}(z)}(v_i))^{-1}$, and so by Lemma \ref{distanceheight}, \[d_0(\tilde{\phi}(z))\ll \max_i\log h_{\tilde{\phi}(z)}(v_i)\ll -\log\mu(\tilde{\phi}(z))+O(1)\] uniformly on every such region. The $S_{\pi}$ can be made to cover the region $\Delta^k\times\Sigma^\ell$ after shrinking $\Sigma$, and the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{smallheight}] Suppose $x\in B_{x_0}(R)\cap \gamma^{-1}\Phi$ for $\gamma\in \mathbf{G}(\mathbb{Z})$. Putting together Lemma \ref{bound2} and Corollary \ref{bound1} we have \[d_0(\gamma x)\ll -\log\mu(\gamma x)+O(1)=-\log\mu(x)+O(1)\ll d_0(x)+O(1)\] and since \[d_0(\gamma x_0)\leq d(\gamma x,\gamma x_0 )+d(\gamma x,x_0)\leq d_0(x)+d_0(\gamma x)\] we are finished by part (1) of Lemma \ref{distanceheight}. \end{proof} \section{The proof of Theorem \ref{main}}\label{proof} The remainder of the proof follows the same general strategy as \cite{MPT}. There are sufficiently many differences, however, that we include the necessary modifications. Recall that $D$ sits naturally as an open subset in its compact dual $\check{D}$, which has the structure of a projective variety.
Let $M$ be the Hilbert scheme of all subvarieties of $X\times\check{D}$ with the same Hilbert polynomial as $V$. Moreover, let $\mathcal{V}\rightarrow M$ be the universal family over $M$, with a natural embedding $\mathcal{V}\hookrightarrow(X\times\check{D})\times M$. Let $\mathcal{V}_W$ be the base-change to $W\times M$. The action of $\Gamma$ on $X\times D$ lifts to $\mathcal{V}_W$, and we define $\mathcal{V}_X:=\Gamma\backslash \mathcal{V}_{W}$, which is naturally an analytic variety. Note that as $M$ is proper, $\mathcal{V}_W$ is proper over $W$, and likewise $\mathcal{V}_X$ is proper over $X$. We endow $\mathcal{V}_X$ with a definable structure as follows. $\mathcal{V}$ is algebraic and has an induced definable structure. By Lemma \ref{definablelift}, pulling back to $\mathcal{F}\times M$ and quotienting out by the definable equivalence relation $\mathcal{F}\rightarrow X$ we obtain the desired definable structure on $\mathcal{V}_X$. Suppose, for the sake of contradiction, that the theorem is false. Moreover, suppose that $\dim X$ is minimal, and subject to that assumption, $\operatorname{codim} V+\operatorname{codim} W-\operatorname{codim} U$ is as large as possible, and subject to that assumption, that $\dim U$ is maximal. Define a closed analytic subvariety $T\subset \mathcal{V}_W$ consisting of all pairs $(p,V')$ such that $V'\cap W$ has dimension at least $\dim U$ around $p$, and let $T_0$ be the irreducible component containing $(p,V)$ for some (hence any) point $p\in U$. Let $Y:=\Gamma\backslash T_0\subset \mathcal{V}_X$, which is a closed definable analytic subvariety. Now, the projection $q:Y\to X$ is definable and proper, so the image $Z$ is a closed complex analytic definable subvariety of $X$ by Remmert's theorem, and therefore it is also algebraic by definable Chow \cite{definechow} (see also \cite{MPT}). Moreover, it contains $\operatorname{pr}_X(U)$, and thus it contains the smallest algebraic variety containing $\operatorname{pr}_X(U)$, so we may assume $Z=X$. Consider the family $\mathscr{F}$ of algebraic varieties parametrized by $T_0$. Let $\Gamma_\mathscr{F}\subset\Gamma$ be the subgroup of elements $\gamma$ such that a very general\footnote{Recall that very general means in the complement of countably many proper closed subvarieties.} fiber of $\mathscr{F}$ is stable under $\gamma$. The stabilizer of a very general fiber of $\mathscr{F}$ in $\Gamma$ is then exactly $\Gamma_\mathscr{F}$. Let $\mathbf{\Theta}$ be the identity component of the $\mathbb{Q}$-Zariski closure of $\Gamma_\mathscr{F}$ in $\mathbf{G}$. \begin{lemma} $\mathbf{\Theta}$ is a normal subgroup of $\mathbf{G}$. \end{lemma} \begin{proof} Let $W'$ be a connected component of $W$ which intersects $X\times\Phi$. Note that $W'$ is stable under the monodromy group $\Gamma$ of $X$. Clearly $\mathscr{F}$ is stable under the image $\Gamma_Y$ of $\pi_1(Y)\to\pi_1(X)\to \mathbf{G}(\mathbb{Z})$, which is of finite index in $\Gamma$, and therefore $\Gamma_Y$ is Zariski-dense in $\mathbf{G}$ by Andr\'e--Deligne. Each element of $\Gamma_Y$ sends a very general fiber of $\mathscr{F}$ to a very general fiber, so by the above remark $\Gamma_\mathscr{F}=\gamma\cdot \Gamma_\mathscr{F}\cdot\gamma^{-1}$ for all $\gamma\in\Gamma_Y$. It follows that $\mathbf{\Theta}$ is invariant under conjugation by $\Gamma_Y$ and hence by the Zariski closure of $\Gamma_Y$ as well, which is all of $\mathbf{G}$. \end{proof} \begin{prop} $\mathbf{\Theta}$ is the identity subgroup.
\end{prop} \begin{proof} Without loss of generality $V$ is a very general fiber of $\mathscr{F}$, and hence is invariant under exactly $\mathbf{\Theta}$. Since $\mathbf{\Theta}$ is a $\mathbb{Q}$-group by construction, it follows that $\mathbf{G}$ is isogenous to $\mathbf{\Theta}_1\times\mathbf{\Theta}_2$ with $\mathbf{\Theta}_2=\mathbf{\Theta}$ and we have a splitting of weak Mumford--Tate domains $D=D_1\times D_2$ with $D_i=D(\mathbf{\Theta}_i)$. Replacing $X$ by a finite cover we also have a splitting of the period map \cite[Theorem III.A.1]{GGK} \[\phi = \phi_1\times\phi_2:X\to \Gamma_1\backslash D_1\times \Gamma_2\backslash D_2.\] Moreover, $\phi_1,\phi_2$ satisfy Griffiths transversality (see the proof of \cite[Theorem III.A.1]{GGK}). Note that $V\subset X\times D$ by assumption, and as $V$ is invariant under $\mathbf{\Theta}_2$ it is of the form $V_1\times D_2$ where $V_1\subset X\times D_1$. Consider the period map $X\to \Gamma_1\backslash D_1$, the resulting $W_1\subset X\times D_1$, and the subvariety $V_1\subset X\times D_1$. Let $U_1$ be the component of $V_1\cap W_1$ onto which $U$ projects. By assumption the theorem applies in this situation, and as $U_1$ cannot be contained in a proper weak Mumford--Tate subdomain (for then $U$ would as well), we must have \[\operatorname{codim}_{X\times D_1} (U_1)=\operatorname{codim}_{X\times D_1} (V_1)+\operatorname{codim}_{X\times D_1} (W_1).\] Note that the projection $W\to W_1$ has discrete fibers, so $\dim W=\dim W_1$ and $\dim U=\dim U_1$, whereas $\operatorname{codim} V_1=\operatorname{codim} V$, which is a contradiction if $\phi_2$ is non-constant. \end{proof} It follows that $V$ is not invariant under any infinite subgroup of $\Gamma$. The proof of Theorem \ref{main} is then completed by the following lemma, which produces a contradiction: \begin{lemma} $V$ is invariant under an infinite subgroup of $\Gamma$. \end{lemma} \begin{proof} Consider the definable set $$I:=\{g\in\mathbf{G}(\mathbb{R}) \mid \dim \left(g\cdot V\cap W\cap(X\times\Phi)\right)=\dim U\}.$$ Clearly, $I$ contains $\gamma\in\Gamma$ whenever $U$ intersects $X\times \gamma^{-1}\Phi$. We may assume $1\in I$, and take $x_0\in\Phi$ to be the second coordinate of a point of intersection of $U$ and $X\times\Phi$. For any $R>0$, consider the ball $B_{x_0}(R)$ centered at $x_0$. On the one hand, by Theorem \ref{volumebound} we have \[\operatorname{vol}\left(U\cap \left(X\times B_{x_0}(R)\right)\right)\gg e^{\beta R}.\] $U$ is covered with bounded overlaps by $U\cap(X\times\gamma^{-1}\Phi)$ for $\gamma\in \mathbf{G}(\mathbb{Z})$, so by Proposition \ref{smallvolume} it follows that $I$ has $e^{\omega (R)}$ integer points. On the other hand, by Theorem \ref{smallheight} each of these points has height $e^{O(R)}$, and it follows by the Pila--Wilkie theorem that $I$ contains a real algebraic curve $C$ containing arbitrarily many integer points, in particular at least two integer points. If $V_c$ is constant in $c$, then $V$ is stable under $C\cdot C^{-1}$. Since $C$ contains at least two integer points, it follows that $V$ is stabilized by a non-identity integer point, completing the proof (since $\Gamma$ is torsion-free). So we assume that $V_c$ varies with $c\in C$. Note that, since $C$ contains an integer point, $\tilde{\phi}(V_c\cap W)$ is not contained in a weak Mumford--Tate subdomain for at least one $c\in C$, and thus for all but a countable subset of $C$ (since there are only countably many families of weak Mumford--Tate subdomains). We now have two cases to consider.
First, suppose that $U\subset V_c$ for all $c\in C$. Then we may replace $V$ by $V_c\cap V_{c'}$ for generic $c,c'\in C$ and lower $\dim V$, contradicting our induction hypothesis on $\dim V-\dim U$. On the other hand, if it is not true that $U\subset V_c$ for all $c\in C$, then $V_c\cap W$ varies with $c\in C$, and so we may set $V'$ to be the Zariski closure of $C\cdot V$. This increases the dimension of $V$ by $1$, but then $\dim (V'\cap W) = \dim U + 1$ as well, and thus we again contradict our induction hypothesis, this time on $\dim U$. This completes the proof. \end{proof}
{ "timestamp": "2017-12-15T02:03:30", "yymm": "1712", "arxiv_id": "1712.05088", "language": "en", "url": "https://arxiv.org/abs/1712.05088" }
\section{Introduction} Despite all the phenomenological success of the Standard Model (SM), certain theoretical and experimental issues like neutrino masses, dark matter, charge quantization, the hierarchy problem, etc., seem to indicate the need to go beyond its well-established framework. Ultraviolet completions motivated by neutrino mass models may address these open questions and pave the road beyond the Standard Model (BSM). For example, the neutrino masses in canonical type-I~\cite{Minkowski:1977sc,Yanagida:1979as,GellMann:1980vs,Glashow:1979nm,Mohapatra:1979ia}, type-II~\cite{Konetschny:1977bn,Magg:1980ut,Schechter:1980gr,Cheng:1980qt,Lazarides:1980nt,Mohapatra:1980yp}, and type-III~\cite{Foot:1988aq} tree-level seesaw models percolate down from a single scale that may be linked to the unification point of the SM gauge couplings, hinted at first within the \SU{5} Grand Unified Theory (GUT) of Georgi and Glashow~\cite{Georgi:1974sy}. After it was realized that there is no single crossing point of the gauge couplings in this simplest GUT, it was noticed that augmenting the SM by a second Higgs doublet and the corresponding supersymmetric (SUSY) partners enables a successful minimal SUSY SM (MSSM) unification~\cite{Amaldi:1991cn}. The decisive role~\cite{Li:2003zh} played by incomplete (or {\em split}) irreducible representations ({\em irreps}) \irrepsub{5}{H} in the MSSM unification success motivated corresponding non-SUSY attempts to cure the crossing problem~\cite{Krasnikov:1993sc,Willenbrock:2003ca} with just six copies of the SM Higgs doublet field and nothing more. Still, the scale of such unifications would be too low. Further studies of unification in the context of non-SUSY \SU{5} GUTs employed incomplete \SU{5} irreps which contain the new states introduced by tree-level seesaw models. The studies in~\cite{Bajc:2006ia,Bajc:2007zf,Dorsner:2006fx} employed the adjoint \SU{5} representation \irrepsub{24}{F}, which contains both the fermion singlet and the $\mathrm{TeV}$-scale fermion triplet fields, providing a low-scale hybrid of type-I and type-III seesaw models. Similarly, Refs.~\cite{Dorsner:2005ii,Dorsner:2005fq,Dorsner:2007fy} employed the \irrepsub{15}{S} \SU{5} representation with the $\mathrm{TeV}$-scale complex scalar triplet, used in the type-II seesaw mechanism. When considering possible GUT embeddings of a radiative neutrino mass generating mechanism, we opt for genuine radiative Zee-type models, {\em genuine} in the sense that no additional symmetries are required to make them the dominant contribution to the neutrino mass. At the same time, by avoiding fermion singlets we choose the \SU{5} embedding and discard the $SO(10)$ one. The first one-loop model proposed by Zee~\cite{Zee:1980ai} introduced only new \emph{scalar} fields, the charged singlet and a second complex doublet, which do not lead to competing tree-level seesaw mechanisms. The embedding of the original Zee model in a renormalizable non-SUSY \SU{5} setup has been studied in~\cite{Perez:2016qbo}. Our focus here will be on the variant of the Zee model presented in~\cite{Brdar:2013iea}, which in the following we call the BPR model. It keeps Zee's charged scalar singlet, but a real scalar triplet replaces Zee's second Higgs doublet. Finally, the BPR model introduces three copies of vector-like lepton doublet fields which, if embedded in a split \irrepsub{5}{F}, may influence the gauge running in the same way as twelve Higgs doublets.
Let us note that besides the genuine one-loop model~\cite{Brdar:2013iea} there exist three-loop radiative neutrino models~\cite{Culjak:2015qja,Ahriche:2015wha}, where an automatic protection from tree-level or lower-loop contributions has been achieved by introducing appropriate larger weak multiplets. However, the appearance of a $\sim 10^6\, \mathrm{GeV}$ Landau pole (LP) for the $\SU{2}_L$ gauge coupling $g_2$ \cite{Sierra:2016qfa} eliminates these models from a unification framework. In contrast, as demonstrated in~\cite{Antipin:2017wiz}, the BPR one-loop model with the scalar triplet as the largest weak representation exhibits, in addition to the absence of an LP, perturbativity and stability up to the Planck scale. Therefore, we proceed here with the study of gauge coupling unification in the context of the BPR loop model~\cite{Brdar:2013iea}, for which the above requirements with respect to Yukawa and quartic couplings may remain valid when including extra color octet or color sextet scalar fields~\cite{Heikinheimo:2017nth}. As it turns out, adding these fields may be crucial to achieve proper gauge unification. In Sec.~\ref{sec:BPR} we first present the set of BSM particles from the neutrino model~\cite{Brdar:2013iea}, dubbed BPR particles, and then formulate the gauge-unification conditions which the newly introduced states have to satisfy. In Section~\ref{sec:completions}, we will study the conditions under which the gauge couplings unify, and the particle spectra which make a proper unification possible. Then in Section~\ref{sec:spectrum} we will show that the appropriate particle spectra are consistent with the scalar potential of our \SU{5} GUT scenarios. We conclude in Sec.~\ref{sec:Conclusions}. The details of the search algorithm are given in App.~A and the details of the \SU{5} representations in App.~B. \section{BPR model from GUT perspective}\label{sec:BPR} \subsection{BPR-model states}\label{sec:BPR-model} We adopt a simple and predictive TeV-scale radiative model~\cite{Brdar:2013iea} in which the loop contribution is genuine, i.e., self-protected as in the original Zee model. In its present variant, the color singlet, weak triplet, hypercharge zero scalar field $\Delta \sim (1,3,0)$, \begin{equation} \Delta=\frac{1}{\sqrt{2}}\sum_{j}\sigma_{j}\Delta^{j}= \left( \begin{array}{cc} \frac{1}{\sqrt{2}} \Delta^0 & \Delta^+\\ \Delta^- & -\frac{1}{\sqrt{2}} \Delta^0 \end{array} \right) \; , \end{equation} is supplemented by a charged scalar singlet \begin{equation} h^+ \sim (1,1,1)\ , \end{equation} and by three additional generations of vector-like lepton doublets \begin{equation} E_R \equiv (E_R^0, E_R^-)^T \sim (1,2,-1/2) \ , \ \ E_L \equiv (E_L^0,E_L^-)^T \sim (1,2,-1/2)\ , \end{equation} which are needed to close the neutrino mass loop diagram displayed in Fig.~\ref{fig:bpr}. The corresponding vertices in the loop diagram are provided by the Yukawa and quartic couplings in \begin{equation} -\mathcal{L} \supset y_1 \overline{(L_L)^c} E_L h^+ + y_2 \overline{L_L} \Delta E_R + \lambda_7 H^\dag\Delta H^c h^+ + \mathrm{h.c.} \ . \end{equation} The vacuum expectation value $v_{\rm SM}$ of the SM Higgs doublet $H$ leads to the neutrino mass matrix \begin{equation} \mathcal{M}_{ij}=\sum_{k=1}^3\frac{[(y_1)_{ik} (y_2)_{jk} + (y_2)_{ik}(y_1)_{jk}]} {8\pi^{2}} \ \lambda_7 \: v_{\rm SM}^2 \: M_{E_k} \; f(M_{E_k}, m_{\Delta^+}, m_{h^+}) \; , \label{effective} \end{equation} where $f(m_1, m_2, m_3)$ is a loop function specified in \cite{Brdar:2013iea}.
Assuming, as in~\cite{Brdar:2013iea}, the mass values $M_E \sim m_{\Delta^+} \sim m_{h^+} \sim 200\!-\!500$ GeV, Eq.~(\ref{effective}) leads to $m_\nu \sim 0.1\,\mathrm{eV}$ for couplings $y_{1,2}$ and $\lambda_7$ of the order of $10^{-4}$. For definiteness, we will in most of this work keep the masses of these new states fixed at $500 \, \mathrm{GeV}$. In principle, even much larger masses would lead to a viable neutrino mass model, with larger but still perturbative values of $y_{1,2}$ and $\lambda_7$. Still, as we shall discuss later, such scenarios would not bring much additional insight from the GUT perspective. \begin{figure}[t] \centering \centerline{\includegraphics[scale=0.45]{1loop-figure}} \caption{The one-loop BPR \protect\cite{Brdar:2013iea} neutrino mass mechanism.} \label{fig:bpr} \end{figure} We display in Table~\ref{tab:SU(5)-particle-content} the \SU{5} embedding of the SM extended by the states of the neutrino mass model at hand. We note that the additional potentially light BPR particles are described by the same SM group representations as those already populated by the SM states: the new vector-like fermionic doublets $E_{R,L}$ belong to the same representation as the Higgs $H^c$, and similarly for the charged scalar singlet $h^+$ and the SM lepton singlet $e^{c}_R$, or the scalar adjoint triplet $\Delta$ and the spin one triplet $W_{\mu}^i$. Understanding the quantum numbers of the SM particles was one of the main motivations which led to the development of GUTs. The fact that the BPR states populate already established SM representations could be viewed as an additional motive to study them in the GUT setup. \subsection{Matching BPR states with SU(5) irreps} \label{sec:BPR-embedding} One of the strongest arguments in favor of the original Georgi--Glashow GUT scenario is the neat embedding of all SM fermion representations, with apparently arbitrary quantum numbers, into the sum $\irrepbarsub{5}{F}\oplus \irrepsub{10}{F}$ of just the two lowest complete \SU{5} representations. Since the gauge bosons have to belong to the adjoint multiplet of the \SU{5} group, essentially the only remaining unknown has been the structure of the scalar sector. In the present study, this generalizes to the question of incorporating the well-motivated BPR set of particles into the same GUT context. Following the pattern of the SM states, and the general principles of economy and elegance, the BPR states at hand may be expected to belong to the lowest possible representations of the SU(5) group. This would put the scalar $\Delta$ in an adjoint \irrep{24}, the scalar $h^+$ in a \irrep{10}, and the vector-like leptons $E_{R,L}$ in an appropriate number of \irrep{5}+\irrepbar{5}, which is the choice displayed in the right column of Table~\ref{tab:SU(5)-particle-content}. These new \SU{5} irreps contain additional states, displayed in Table~\ref{tab:SU(5)-particle-content}, that are not needed for the BPR neutrino mass mechanism. Some among these additional states will prove crucial in obtaining the desired gauge coupling unification. \begin{table}[t] \caption{Particle content and the \SU{5} embedding options of the BPR neutrino mass model~\cite{Brdar:2013iea}.
} \label{tab:SU(5)-particle-content} \centering \vskip 0.2cm \renewcommand{\arraystretch}{1.2} \begin{tabular}{*{3}{c}} \toprule & SM + BPR & $\subset SU(5)$ \\ \addlinespace[0.2em]\midrule scalar & $H=(1,2,+\tfrac{1}{2})$ & $\irrep{5}=(1,2,+\tfrac{1}{2})\oplus(3,1,-\tfrac{1}{3})$ ; or \irrep{45}, \irrep{70} \\\addlinespace[0.2em] \cline{2-3} \addlinespace[0.2em] & $\Delta=(1,3,0)$ & $ \begin{aligned}[m] \irrep{24} &= (1,3,0)\oplus(8,1,0)\oplus (1,1,0) \\ &\qquad{}\oplus(3,2,-\tfrac{5}{6})\oplus(\xbar{3},2,+\tfrac{5}{6}) \end{aligned} $ \\[3ex] & $h^+=(1,1,+1)$ & $\irrep{10}=(1,1,+1)\oplus(\xbar{3},1,-\tfrac{2}{3})\oplus (3,2,+\tfrac{1}{6})$ \\ \addlinespace[0.2em]\midrule fermion & $3\times Q=(3,2,+\tfrac{1}{6})$ & \multirow{3}{*}{$3\times \irrep{10} = (3,2,+\tfrac{1}{6})\oplus (\xbar{3},1,-\tfrac{2}{3})\oplus (1,1,+1)$} \\ & $3 \times u^c=(\xbar{3},1,-\tfrac{2}{3})$ & \\ & $3 \times e^c=(1,1,+1)$ & \\ & $3\times L=(1,2,-\tfrac{1}{2})$ & \multirow{2}{*}{$3\times \irrepbar{5} = (1,2,-\tfrac{1}{2})\oplus (\xbar{3},1,+\tfrac{1}{3})$} \\ & $3 \times d^c=(\xbar{3},1,+\tfrac{1}{3})$ & \\\addlinespace[0.2em] \cline{2-3} \addlinespace[0.2em] & $3 \times E_{R,L}=(1,2,-\tfrac{1}{2})$ & $3\times \irrepbar{5}= (1,2,-\tfrac{1}{2})\oplus (\xbar{3},1,+\tfrac{1}{3})$ ; or \irrepbar{45}, \irrepbar{70} \\ \addlinespace[0.2em]\midrule gauge & $G_{\mu}=(8,1,0)$ & \multirow{3}{*}{$ \begin{aligned}[m] \irrep{24} &= (1,3,0)\oplus(8,1,0)\oplus (1,1,0) \\ &\qquad{}\oplus(3,2,-\tfrac{5}{6})\oplus(\xbar{3},2,+\tfrac{5}{6}) \end{aligned} $} \\ & $W_{\mu}=(1,3,0)$ & \\ & $B_{\mu}=(1,1,0)$ & \\ \addlinespace[0.2em] \bottomrule \end{tabular} \end{table} To completely specify the structure of our model Lagrangian, we need to choose the \SU{5} irreps that will contain the Standard Model Higgs $H$. Here the most economical choice would be \irrep{5}, but, as we shall see, this would not lead to a viable GUT model. Thus, we will also consider the options where the Higgs state belongs to the \irrep{45} or \irrep{70} irreps\footnote{Using even higher \SU{5} irreps would expose us to the danger of low Landau poles.}, and we complete the $\SU{5} \supset $ $\,\SU{3}_c \times \protect\linebreak[0]\SU{2}_L \times \protect\linebreak[0]\U{1}_Y\,$ branching rules from Table~\ref{tab:SU(5)-particle-content} with: \begin{align} \irrep{45} = & \:(1,2,+\tfrac{1}{2}) \oplus (3,1,-\tfrac{1}{3}) \oplus (3,3,-\tfrac{1}{3}) \oplus (\xbar{3},1,+\tfrac{4}{3}) \oplus (\xbar{3},2,-\tfrac{7}{6}) \,\oplus \nonumber \\ & \oplus (\xbar{6},1,-\tfrac{1}{3}) \oplus (8,2,+\tfrac{1}{2}) \; , \label{eq:45}\\ \irrep{70} = & \:(1,2,+\tfrac{1}{2}) \oplus (1,4,+\tfrac{1}{2}) \oplus (3,1,-\tfrac{1}{3}) \oplus (3,3,-\tfrac{1}{3}) \oplus (\xbar{3},3,+\tfrac{4}{3}) \, \oplus \nonumber \\ & \oplus (6,2,-\tfrac{7}{6}) \oplus (8,2,+\tfrac{1}{2}) \oplus (15,1,-\tfrac{1}{3}) \; . \label{eq:70} \end{align} As is known, a Higgs in a mixture of \irrep{5} and \irrep{45} can improve the GUT fermion mass relations, as in the Georgi--Jarlskog mechanism~\cite{Georgi:1979df}. However, in the present work we will not study the pattern of SM fermion masses. On the other hand, the pattern of scalar masses and of new vector-like lepton masses \emph{is} important for our considerations, since these particles decisively affect the running of the gauge couplings.
We then need to check, for each of these possibilities, whether unification can be achieved for a large enough $M_{\rm GUT}$, whether the scalar Lagrangian at the renormalizable level allows for the required masses of particles and, finally, whether such a scenario is in compliance with phenomenological constraints and the general theoretical requirement of perturbativity~\cite{Heikinheimo:2017nth,Kopp:2009xt}. \subsection{Gauge coupling unification criteria}\label{sec:unif-criteria} The unification of gauge couplings is controlled by the renormalization group equations (RGE) which govern the running of the gauge couplings, with the one-loop $\beta$ coefficients given by \begin{align} b_i & = -\tfrac{11}{3} \sum_G T(G_i) D(G_i) + \tfrac{4}{3} \sum_F T(F_i) D(F_i) \kappa + \tfrac{1}{3} \sum_S T(S_i) D(S_i) \eta \; . \end{align} Here, the Dynkin indices $T(R_i)$ are defined as $T(R_i)\delta_{ab}\equiv \Tr[\hat{T}_a(R_i)\hat{T}_b(R_i)]$ for generators $\hat{T}_{a}$ in gauge, fermion and scalar representations $G_i$, $F_i$ and $S_i$, respectively, and are conventionally normalized to $\tfrac{1}{2}$ for fundamental representations of \SU{N} groups (and thus to $\tfrac{3}{5} \hat{Y}^2$ for $U(1)_Y$). Further, $D(R_i) \equiv \prod_{j\neq i} \mathrm{dim}(R_j)$, $\kappa$ is $\tfrac{1}{2}$ ($1$) for Weyl (Dirac) fermions, and $\eta$ is $\tfrac{1}{2}$ ($1$) for real (complex) scalars. The SM $\beta$ coefficients (including the Higgs doublet) are $b_i^{(SM)} = (\tfrac{41}{10},-\tfrac{19}{6},-7)$, and the RGE have the analytic solution \begin{align} \alpha_{i}^{-1}(M_{\rm GUT}) & = \alpha_i^{-1}(m_Z)-\frac{1}{2\pi} B_i \ln{\tfrac{M_{\rm GUT}}{m_Z}} \;, \quad i = 1, 2, 3\;, \label{eq:definitions} \end{align} with coefficients \begin{equation} B_i = b_i^{\rm(SM)} + \sum_{m_k<M_{\rm GUT}} \Delta b_i^{(k)} {r_k} \;, \label{eq:Bi} \end{equation} and the threshold weight factor of a BSM state $k$, defined as \begin{equation} r_{k} = \frac{ \ln M_{\rm GUT}/m_k }{\ln M_{\rm GUT} /m_{Z}} \;, \label{eq:rk} \end{equation} with a value between $1$ (for $m_k = m_Z$) and $0$ (for $m_k = M_{\rm GUT}$), depending on the mass $m_k$ of the BSM particle. The sum in (\ref{eq:Bi}) goes over all BSM states, with $\Delta b_{i}^{(k)} = b_{i}^{(k)} - b_{i}^{(k-1)}$ being the increase in the $\beta$ coefficients at the threshold of a given BSM state, and $b_{i}^{(0)} = b_{i}^{(\rm SM)}$. First, the unification condition $\alpha_1(M_{\rm GUT}) = \alpha_2(M_{\rm GUT}) = \alpha_3(M_{\rm GUT}) \equiv \alpha_{\rm GUT}$ can be expressed in the form of the so-called B-test~\cite{Giveon:1991zm,Dorsner:2017ufx}: \begin{align} \frac{B_{23}}{B_{12}} & \equiv \frac{B_2-B_3}{B_1-B_2}=\frac{\alpha_2^{-1}(m_Z)-\alpha_3^{-1}(m_Z)}{\alpha_1^{-1}(m_Z)-\alpha_2^{-1}(m_Z)}=\tfrac{5}{8}\,\frac{\sin^2\theta_w(m_Z)-\tfrac{\alpha_{EM}(m_Z)}{\alpha_3(m_Z)}}{\tfrac{3}{8}-\sin^2\theta_w(m_Z)}=0.718 \;, \label{eq:Btest} \end{align} where we used the average numerical values for the constants at the $m_Z$ scale, as given in~\cite{Patrignani:2016xqp}. The comparison to the corresponding SM value of $0.528$ indicates that the couplings do not unify in the SM. Second, the associated GUT scale \begin{align} M_{\rm GUT} & = m_Z \, \exp\left (\frac{2\pi (\alpha_1^{-1}(m_Z)-\alpha_2^{-1}(m_Z))}{B_1-B_2}\right ) = m_Z \, \exp\left (\frac{184.87}{B_{12}}\right ) \label{eq:defMGUT} \end{align} yields for the SM the value $M_{\rm GUT} = 10^{13}\, \mathrm{GeV}$.
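For orientation, both criteria are easy to evaluate numerically. The short Python sketch below (our illustration, not part of the original analysis) reproduces the quoted numbers; the inverse couplings at $m_Z$ are approximate GUT-normalized values consistent with~\cite{Patrignani:2016xqp} and are inputs we assume for this check:
\begin{verbatim}
import math

# Approximate GUT-normalized inverse couplings at m_Z (illustrative inputs).
a1_inv, a2_inv, a3_inv = 59.01, 29.59, 8.47
m_Z = 91.19  # GeV

# Required B-test ratio from the measured couplings.
target = (a2_inv - a3_inv) / (a1_inv - a2_inv)

# SM one-loop coefficients (Higgs doublet included): b_i = (41/10, -19/6, -7).
b1, b2, b3 = 41 / 10, -19 / 6, -7.0
B12, B23 = b1 - b2, b2 - b3

# GUT scale for the pure SM; note 2*pi*(a1_inv - a2_inv) ~ 184.87.
M_GUT = m_Z * math.exp(2 * math.pi * (a1_inv - a2_inv) / B12)

print(f"required B23/B12 = {target:.3f}")    # ~ 0.718
print(f"SM B23/B12       = {B23 / B12:.3f}")  # ~ 0.528
print(f"SM M_GUT         = {M_GUT:.1e} GeV")  # ~ 1e13 GeV
\end{verbatim}
Adding BSM thresholds then amounts to shifting $B_{12}$ and $B_{23}$ by the $\Delta B$ entries of the tables below, weighted by the factors $r_k$.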
Therefore, additional BSM states should improve unification and increase its scale up to at least $5\times 10^{15}\, \mathrm{GeV}$, in agreement with proton lifetime bounds \cite{Miura:2016krn}. Such additional BSM states must therefore provide a negative net contribution to $B_{12}$ and a positive one to $B_{23}$. \section{Possible gauge-unification realizations}\label{sec:completions} \subsection{Effect of BPR states on gauge unification} Before embedding in the \SU{5} GUT framework, we first investigate how the new states, needed for the neutrino mass mechanism, influence the RGE running and to what extent they alone could satisfy the unification criteria from Sect.~\ref{sec:unif-criteria}. \begin{table}[th!] \caption{Contributions of the BPR states to the RGE running, where the threshold weights $r_{k}$ are defined in Eq.~\eqref{eq:rk}. Note that the two Weyl fermion states $E_{L,R}$ come in $N_{\rm vec}=3$ copies.} \label{tab:BPR-beta-coefficents} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{c c| c c} \multicolumn{2}{c|}{$k$} & $\Delta B_{23}$ & $\Delta B_{12}$ \\ \hline $h^+$ & $(1,1,1)$ & $0$& $\hphantom{+}\frac{1}{5} r_k$ \\ $\Delta$ & $(1,3,0)$ & $\hphantom{+}\frac{1}{3} r_k$& $- \frac{1}{3} r_k$ \\ $E_{L,R}$ & $(1,2,-1/2)$ & $\hphantom{+}\frac{1}{3} r_k$& $- \frac{2}{15} r_k$ \\ \end{tabular} \end{table} In Table~\ref{tab:BPR-beta-coefficents} we list the extra BPR states together with their contributions to the pertinent combinations of $\beta$-function coefficients. As already stressed, states with positive $B_{23}$ and negative $B_{12}$ contributions are promising for unification. It can be readily seen that only $h^+$ is not of this kind. Considering the default configuration with all BPR states close to the electroweak scale (i.e. weight factors from Eq.~\eqref{eq:rk} being $r_k \sim 1$), one immediately observes that the B-test combination increases from the SM value $0.528$ to $B_{23}/B_{12} = 0.974$, considerably overshooting the required value of $0.718$ from Eq.~\eqref{eq:Btest}. This is mostly due to the strong effect of the three copies of BPR vector-like lepton doublets. Since they actually double the RGE effect of the previously mentioned six Higgs doublets, one can similarly achieve correct unification if they are set at the intermediate scale with the factor $r_k \sim 0.5$. However, again as in the six-Higgs-doublet case, the unification scale would be too low. Indeed, one observes that there is no way to obtain a high enough unification scale using only the BPR states. Namely, even if the negative effect of the scalar $h^+$ state is avoided by putting it at some very high scale, the total contribution of the $\Delta$ and $E_{L,R}$ states to $B_{12}$ is at most $-17/15$ (for $r_k = 1$), resulting in $M_{\rm GUT}=1.1\times 10^{15}\, \mathrm{GeV}$ as the maximal possible GUT scale, even if one completely disregards the condition of gauge coupling crossing. In conclusion, the BPR states alone cannot lead to a successful unification that would at the same time respect proton decay bounds. To achieve that, some other states below the GUT scale should be invoked. Such states are naturally provided by embedding the BPR model in an \SU{5} unification framework, as we shall show in detail. \subsection{Higgs doublet in \irrepsub{5}{H}}\label{sec:Higgs-in-5_H} As explained in Sect.~\ref{sec:BPR-embedding}, we will first try the simplest possible \SU{5} embedding of the BPR mechanism, where the SM Higgs doublet becomes a member of the \irrepsub{5}{H}.
The scalar sector of this model contains \irrepsub{5}{H}, \irrepsub{10}{S} and \irrepsub{24}{S} multiplets (we use the subscript $H$ on those scalar irreps of \SU{5} which contain the SM Higgs field), and there are $N_{\rm vec}$ generations of vector-like matter in $\irrepsub{5}{F}\oplus\irrepbarsub{5}{F}$, on top of the Standard Model quarks and leptons in $n_{g}=3$ generations of \irrepsub{10}{F} and $\irrepbarsub{5}{F}$, and gauge bosons in the adjoint \irrepsub{24}{g} representation, as displayed in Table~\ref{tab:SU(5)-particle-content}. Their contributions to the RGE running are listed in Table~\ref{tab:5_H-model-beta-coefficents}. \begin{table}[th!] \caption{BSM contributions to the RGE running in the simplest \SU{5} embedding of the BPR mechanism. $H$ stands for the SM Higgs doublet, whose contribution has already been accounted for by $b_i^{(SM)}$. The massless scalar leptoquarks $X$ and $Y$ get absorbed into the longitudinal components of the massive gauge bosons, as dictated by the Nambu-Goldstone mechanism --- the $\beta$ coefficients of these scalars thus enter at the same scale as the heavy vectors (\emph{i.e.} $r_k \approx 0$).} \label{tab:5_H-model-beta-coefficents} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{c c c| c c} \multicolumn{3}{c|}{$k$} & $\Delta B_{23}$ & $\Delta B_{12}$ \\ \hline $H$ & $(1,2,1/2)$ & \multirow{ 2 }{*}{ \irrepsub{5}{H} }& $\hphantom{+}\frac{1}{6} r_k$& $- \frac{1}{15} r_k$ \\ $S_1$ & $(3,1,-1/3)$ & & $- \frac{1}{6} r_k$& $\hphantom{+}\frac{1}{15} r_k$ \\ \hline $h^+$ & $(1,1,1)$ & \multirow{ 3 }{*}{ \irrepsub{10}{S} }& $0$& $\hphantom{+}\frac{1}{5} r_k$ \\ & $(\bar{3},1,-2/3)$ & & $- \frac{1}{6} r_k$& $\hphantom{+}\frac{4}{15} r_k$ \\ & $(3,2,1/6)$ & & $\hphantom{+}\frac{1}{6} r_k$& $- \frac{7}{15} r_k$ \\ \hline & $(1,1,0)$ & \multirow{ 5 }{*}{ \irrepsub{24}{S} }& $0$& $0$ \\ $\Delta$ & $(1,3,0)$ & & $\hphantom{+}\frac{1}{3} r_k$& $- \frac{1}{3} r_k$ \\ & $(8,1,0)$ & & $- \frac{1}{2} r_k$& $0$ \\ $X, Y$ & $(3,2,-5/6)$ & & $\hphantom{+}\frac{1}{12} r_k$& $\hphantom{+}\frac{1}{6} r_k$ \\ $\bar{X}, \bar{Y}$ & $(3,2,5/6)$ & & $\hphantom{+}\frac{1}{12} r_k$& $\hphantom{+}\frac{1}{6} r_k$ \\ \hline $E_{L,R} $ & $(1,2,-1/2)$ & \multirow{ 2 }{*}{ \irrepbarsub{5}{F} }& $\hphantom{+}\frac{1}{3} r_k$& $- \frac{2}{15} r_k$ \\ $$ & $(\bar{3},1,1/3)$ & & $- \frac{1}{3} r_k$& $\hphantom{+}\frac{2}{15} r_k$ \\ \end{tabular} \end{table} It is known that the simplest Georgi-Glashow \SU{5} GUT suffers from the doublet-triplet splitting problem, where the leptoquark $S_1 = (3, 1, -1/3)$, which completes \irrepsub{5}{H} together with the SM Higgs, has to be much heavier than the Higgs so that it doesn't induce fast proton decay. There is nothing preventing the other scalar \SU{5} multiplets from being split, and we will check in Sect.~\ref{sec:spectrum} that our splitting patterns are consistent with the structure of the general scalar potential. Still, whatever the actual mechanism responsible for the multiplet splitting, there is no reason to assume that this mechanism is somehow aligned with the neutrino mass mechanism. Thus, we will be quite general in allowing the splitting of masses within \SU{5} multiplets. With this freedom, and having at our disposal a variety of states from Table~\ref{tab:5_H-model-beta-coefficents} with different RGE behaviour, the unification prospects look promising. Indeed, we have found several scenarios where the coupling constants correctly unify (cross at a single point).
However, we also find that, whatever the masses of the BSM states between $M_Z$ and $M_{\rm GUT}$, the highest possible unification scale in this model is $M_{\rm GUT} < 10^{15}\,\, \mathrm{GeV}$, in violation of experimental bounds on proton decay widths. Thus, this simplest embedding of the proposed neutrino mass model, with the Higgs doublet restricted to the \irrepsub{5}{H} irrep, is ruled out. \subsection{Higgs doublet in \irrepsub{45}{H}}\label{sec:Higgs-in-45_H} Next we consider the scenario where the SM Higgs doublet is embedded into \irrepsub{45}{H} instead of \irrepsub{5}{H}, or in some mixture of both. The larger particle content can help raise the unification scale and, as a bonus, this setup can serve to correct the wrong mass relations between charged leptons and down-type quarks at the renormalizable level which are usually obtained in the simplest \SU{5} models. The $\beta$ coefficients of the extra states from the scalar \irrepsub{45}{H} can be found in Table~\ref{tab:45_H-model-beta-coefficents}, which should be added to the states in Table~\ref{tab:5_H-model-beta-coefficents} to obtain a complete embedding of the SM Higgs and the BPR states into \SU{5} multiplets. \begin{table}[th!] \caption{Contributions of \irrepsub{45}{H} to the running. For a complete model, the multiplets from Table~\ref{tab:5_H-model-beta-coefficents} are to be added. The states $S_{1}$, $S_3$ and $\tilde{S}_1$ are leptoquarks that, if light, would induce too fast proton decay. } \label{tab:45_H-model-beta-coefficents} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{c c c| c c} \multicolumn{3}{c|}{$k$} & $\Delta B_{23}$ & $\Delta B_{12}$ \\ \hline $\Sigma_a$ & $(1,2,1/2)$ & \multirow{ 7 }{*}{ \irrepsub{45}{H} }& $\hphantom{+}\frac{1}{6} r_k$& $- \frac{1}{15} r_k$ \\ $S_1\equiv\Sigma_b$ & $(3,1,-1/3)$ & & $- \frac{1}{6} r_k$& $\hphantom{+}\frac{1}{15} r_k$ \\ $S_3\equiv\Sigma_c$ & $(3,3,-1/3)$ & & $\hphantom{+}\frac{3}{2} r_k$& $- \frac{9}{5} r_k$ \\ $\tilde{S}_1\equiv\Sigma_d$ & $(\bar{3},1,4/3)$ & & $- \frac{1}{6} r_k$& $\hphantom{+}\frac{16}{15} r_k$ \\ $\Sigma_e$ & $(\bar{3},2,-7/6)$ & & $\hphantom{+}\frac{1}{6} r_k$& $\hphantom{+}\frac{17}{15} r_k$ \\ $\Sigma_f$ & $(\bar{6},1,-1/3)$ & & $- \frac{5}{6} r_k$& $\hphantom{+}\frac{2}{15} r_k$ \\ $\Sigma_g$ & $(8,2,1/2)$ & & $- \frac{2}{3} r_k$& $- \frac{8}{15} r_k$ \\ \end{tabular} \end{table} In this more realistic model one finds many ways in which one can achieve a correct unification, so we need to specify some criteria that will lead to a set of models covering all interesting scenarios; let us list those implemented in our study. \begin{itemize} \item First, note that if all states of a given \SU{5} irrep appear at the same mass scale, their effect on the RGE running cancels (contributions to either $B_{23}$ or $B_{12}$ from all states of a given \SU{5} irrep in, \emph{e.g.}, Table~\ref{tab:5_H-model-beta-coefficents}, taking the same $r_k$, add up to zero). Thus, we will fix the BPR states close to the electroweak scale (for definiteness, we put them at $500\,\, \mathrm{GeV}$), and by doing so we don't lose much generality from the standpoint of the RGE, because the effect on unification of making, e.g., the BPR vector-like leptons $E_{L,R}$ heavier is the same as making the rest of the multiplet (in this case the $(\bar{3},1,+1/3)$ states) lighter. (Some generality may be lost if some of these other states cannot be made lighter for other reasons.)
\item Next, since, as discussed before, we allow a general splitting of \SU{5} multiplets, with any experimentally allowed mass for the rest of the BSM states (see Sect.~\ref{sec:spectrum}), we have enough freedom to achieve \emph{exact} gauge coupling unification, \emph{i.e.}\ fulfilment of the B-test, see Eq.~\eqref{eq:Btest}. Then, we require the GUT scale $M_{\rm GUT}$ to be larger than $2\times 10^{15} \, \mathrm{GeV}$. The lowest experimental bound, coming from proton decay searches, is actually about $5\times 10^{15}\,\mathrm{GeV}$; however, it turns out that for most of the scenarios presented here, a simplified analysis (ignoring the Yukawa contributions to the 2-loop RGE) shows that improving the RGE to 2-loop accuracy increases $M_{\rm GUT}$ beyond $5\times 10^{15}\,\, \mathrm{GeV}$. \item We will also exclude scenarios with very heavy new BSM particles, with masses between $\sim 10^{11} \, \mathrm{GeV}$ and $M_{\rm GUT}$. Otherwise, one can always take any successful model, add some particles slightly below $M_{\rm GUT}$ that will have only a small influence on the running, and thus obtain many more models which will be qualitatively the same as the ones presented in this paper, only more complicated. This requirement at the same time excludes from consideration the leptoquarks $S_1 = (3,1,-1/3)$, $\tilde{S}_{1} = (\bar{3},1,4/3)$ and $S_{3}=(3,3,-1/3)$ which, if lighter than $\sim 10^{11} \, \mathrm{GeV}$, would naturally lead to proton decay in violation of experimental limits. \item For all BSM particles we take $500\,\, \mathrm{GeV}$ as a lower bound on their masses. Direct LHC searches sometimes put higher bounds on such states, but these bounds are often obtained only within specific benchmark scenarios. E.g., a recent CMS search \cite{Sirunyan:2016iap} puts a lower bound of $3\,\, \mathrm{TeV}$ on a color octet state like $(8,1,0)$ from \irrepsub{24}{S}, but only within the benchmark model of Refs.~\cite{Han:2010rf,Chivukula:2014pma}, where the couplings to loop fermions in production and decay are taken to be of order one (see also Ref.~\cite{Hayreter:2017wra}). \item We include only particles from single copies of the scalar \SU{5} irreps \irrepsub{5}{H}, \irrepsub{10}{S}, \irrepsub{24}{S} and \irrepsub{45}{H} (or, in the next section, \irrepsub{70}{H}) and $N_{\rm vec} = n_{\rm g} = 3$ copies of vector-like \irrepbarsub{5}{F}, which are all already needed for embedding the BPR neutrino mass mechanism. \end{itemize} Under these conditions, we performed an exhaustive search of the parameter space using the algorithm specified in Appendix~\ref{app:algo}, resulting in the successful scenarios listed in Table \ref{tab:H45}. As explained in Appendix~\ref{app:algo}, when a given set of new BSM states offers a continuum of possible GUT scenarios (with different spectra), we represent this continuum by a specific choice of spectrum with the minimal average mass of particles\footnote{More precisely, the maximal average threshold weight factor $r_k$ defined in Eq.~\eqref{eq:rk}.}. Such a choice is motivated, besides the need for definiteness, by the desire to focus on models which have maximal discovery potential at the LHC and future colliders. \begin{table} \renewcommand{\arraystretch}{1.2} \caption{\label{tab:H45} Seven unification scenarios with the SM Higgs in ${\mathbf{5}}$ and/or ${\mathbf{45}}$ of SU(5) and BPR states fixed at $\sim 0.5\, \mathrm{TeV}$.
} \begin{center} \begin{tabular}{cc|ccccccc} \multicolumn{2}{c|}{irreps} & \multicolumn{7}{c}{$m_{k}$ [TeV]} \\ SM & SU(5) & A1 & A2 & A3 & A4 & A5 & A6 & A7 \\ \hline $(3,1, 1/3)$ & \irrepbarsub{5}{F} & 5000 & & $2.3\times 10^6$ & 450 & & $2\times 10^5$ & \\ \hline $(\bar{3},1,-2/3)$ & \multirow{2}{*}{\irrepsub{10}{S}} & & & & & & & 2.4 \\ $(3,2, 1/6)$ & & 0.5 & & 0.5 & 0.5 & & 0.5 & 0.5 \\ \hline $(8,1,0)$ & \irrepsub{24}{S} & & 0.5 & 0.5 & & 0.5 & 0.5 & \\ \hline $(1,2, 1/2)$ & \multirow{3}{*}{\irrepsub{45}{H}} & & & & 0.5 & 260 & 0.5 & \\ $(\bar{6},1, -1/3)$ & & & 90 & & & 0.5 & & 0.5\\ $(8,2, 1/2)$ & & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ \hline \multicolumn{2}{c|}{$M_{\rm GUT}^{\rm max}/(10^{15}\,\, \mathrm{GeV})$} & 2.8 & 2.5 & 6.2 & 2.8 & 2.8 & 6.2 & 6.5 \end{tabular} \end{center} \end{table} Note that a light $(8,2,1/2)$ is the only other allowed representation in \irrepsub{45}{H} with the negative $B_{12}$ contribution needed to increase $M_{\rm GUT}$ and thus suppress proton decay. Of course, by itself it doesn't help the unification due to its negative $B_{23}$ (acting alone it can decrease ${B_{23}}/{B_{12}}$ to $0.470$), but its strong effect on the unification scale is important for all the models displayed in Table~\ref{tab:H45}. \subsection{Higgs doublet in \irrepsub{70}{H}}\label{sec:Higgs-in-70_H} If we opt for the SM Higgs belonging to \irrepsub{70}{H} instead of \irrepsub{45}{H} (in addition to \irrepsub{5}{H}), the search proceeds under the conditions explicated in the previous subsection, and the $\beta$ coefficients of the extra states can be found in Table~\ref{tab:70_H-model-beta-coefficents}. \begin{table}[th!] \caption{Contributions of \irrepsub{70}{H} to the RGE running. For a complete model, the multiplets from Table~\ref{tab:5_H-model-beta-coefficents} are to be added. The states $S_{1}$ and $S_3$ are leptoquarks that, if light, would naturally induce too fast proton decay. } \label{tab:70_H-model-beta-coefficents} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{c c c| c c} \multicolumn{3}{c|}{$k$} & $\Delta B_{23}$ & $\Delta B_{12}$ \\ \hline $\Omega_a$ & $(1,2,1/2)$ & \multirow{ 8 }{*}{ \irrepsub{70}{H} }& $\hphantom{+}\frac{1}{6} r_k$& $- \frac{1}{15} r_k$ \\ $S_1 \equiv \Omega_b$ & $(3,1,-1/3)$ & & $- \frac{1}{6} r_k$& $\hphantom{+}\frac{1}{15} r_k$ \\ $S_3 \equiv \Omega_c$ & $(3,3,-1/3)$ & & $\hphantom{+}\frac{3}{2} r_k$& $- \frac{9}{5} r_k$ \\ $\Omega_d$ & $(\bar{3},3,4/3)$ & & $\hphantom{+}\frac{3}{2} r_k$& $\hphantom{+}\frac{6}{5} r_k$ \\ $\Omega_e$ & $(6,2,-7/6)$ & & $- \frac{2}{3} r_k$& $\hphantom{+}\frac{34}{15} r_k$ \\ $\Omega_f$ & $(15,1,-1/3)$ & & $- \frac{10}{3} r_k$& $\hphantom{+}\frac{1}{3} r_k$ \\ $\Omega_g$ & $(8,2,1/2)$ & & $- \frac{2}{3} r_k$& $- \frac{8}{15} r_k$ \\ $\Omega_h$ & $(1,4,1/2)$ & & $\hphantom{+}\frac{5}{3} r_k$& $- \frac{22}{15} r_k$ \\ \end{tabular} \end{table} In this setup, we find three unification scenarios, displayed in Table \ref{tab:H70}, to which scenarios A1, A3, A4 and A6 from Table \protect\ref{tab:H45} should be added, since they employ only states from \irrepsub{45}{H} that are also present in \irrepsub{70}{H}. From this search we have explicitly excluded the representation $(15, 1, -1/3)$ which, if light, leads to Landau poles below $M_{\rm GUT}$. To avoid this, it should be heavier than at least $10^7\,\, \mathrm{GeV}$ \cite{Heikinheimo:2017nth}, so that its effect on the RG running is diminished.
Including also this representation leads to 16 additional scenarios beyond those in Tables \ref{tab:H45} and \ref{tab:H70}, which we have chosen not to list. \begin{table} \renewcommand{\arraystretch}{1.3} \caption{\label{tab:H70} Three unification scenarios with the SM Higgs in \irrepsub{5}{H} and \irrepsub{70}{H} of SU(5), and BPR states fixed at $\sim 0.5\, \mathrm{TeV}$. These are at the same time \emph{all} viable scenarios under the assumption of an unsplit vector-like fermion \irrepsub{5}{F}.} \begin{center} \begin{tabular}{cc|ccc} \multicolumn{2}{c|}{irreps} & \multicolumn{3}{c}{$m_{k}$ [TeV]} \\ SM & SU(5) & B1 & B2 & B3 \\ \hline $(3,1, 1/3)$ & \irrepbarsub{5}{F} & 0.5 & 0.5 & 0.5 \\ \hline $(3,2, 1/6)$ & \irrepsub{10}{S} & & & 0.5 \\ \hline $(8,1,0)$ & \irrepsub{24}{S} & & 0.5 & 0.5 \\ \hline $(1,4, 1/2)$ & \multirow{2}{*}{\irrepsub{70}{H}} & $1.8\times 10^6$ & $1.3\times 10^4$& $6.3\times 10^6$ \\ $(8,2, 1/2)$ & & 0.5 & 0.5 & 0.5 \\ \hline \multicolumn{2}{c|}{$M_{\rm GUT}^{\rm max}/(10^{15}\,\, \mathrm{GeV})$} & 2.6 & 10.6 & 32.0 \end{tabular} \end{center} \end{table} Interestingly, all viable scenarios in this setup, as displayed in Table~\ref{tab:H70}, involve the color triplet $(3,1,1/3)$ at the same scale ($500\,\, \mathrm{GeV}$) as the BPR vector-like leptons, making the irrep \irrepsub{5}{F} complete and nullifying its influence on the RGE running. Thus, these states can all be at any other scale as well, without changing the gauge unification properties of the model. We note that there are no viable scenarios with such an unsplit fermion \irrepsub{5}{F} and with the Higgs in \irrepsub{45}{H}, \emph{i.e.} in the framework of Sect.~\ref{sec:Higgs-in-45_H}. \section{Scalar potential and spectrum}\label{sec:spectrum} In the previous Sec.~\ref{sec:Higgs-in-45_H} and Sec.~\ref{sec:Higgs-in-70_H} we singled out viable scenarios in two variants of non-supersymmetric \SU{5} unification. Now we present the relevant expressions for the scalar potentials and the resulting mass spectra, demonstrating the consistency of these scenarios with the appropriate scalar sector extensions. The scalar sector for the scenarios from Sec.~\ref{sec:Higgs-in-45_H} contains: \begin{itemize} \item \irrepsub{24}{S}: an adjoint $24$-dimensional real traceless representation $\chi^{i}_{j}$; \item \irrepsub{5}{H}: a fundamental $5$-dimensional complex representation $H^{i}$; \item \irrepsub{10}{S}: an antisymmetric $10$-dimensional complex representation $\phi^{ij}$; \item \irrepsub{45}{H}: a $45$-dimensional complex $2$-index antisymmetric traceless representation $\Sigma^{ij}_{k}$, \end{itemize} which get decomposed under the SM gauge group as displayed in Table~\ref{tab:SU(5)-particle-content} and Eq. (\ref{eq:45}). For the scenario from Sec. \ref{sec:Higgs-in-70_H}, $\Sigma^{ij}_{k}$ is replaced with \begin{itemize} \item \irrepsub{70}{H}: a $70$-dimensional complex $2$-index symmetric traceless representation $\Omega^{ij}_{k}$, \end{itemize} with the SM decomposition displayed in Eq.~\eqref{eq:70}. The details of the individual representations can be found in Appendix~\ref{app:irreps}.
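The dimensions of the tensor representations just listed follow from simple index counting: an antisymmetric pair of fundamental indices gives $\binom{5}{2}=10$, while the mixed tensors $\Sigma^{ij}_{k}$ and $\Omega^{ij}_{k}$ each lose $5$ components to the tracelessness condition $\Sigma^{ij}_{j}=\Omega^{ij}_{j}=0$. A minimal Python sketch of this counting (an illustration we add here, not part of the model analysis):
\begin{verbatim}
from math import comb

n = 5  # dimension of the fundamental of SU(5)

dim_adjoint = n * n - 1              # chi^i_j: 24, real traceless
dim_fund = n                         # H^i: 5
dim_antisym = comb(n, 2)             # phi^{ij}: 10
# Sigma^{ij}_k: antisymmetric upper pair times a lower index,
# minus the 5 trace conditions Sigma^{ij}_j = 0.
dim_45 = comb(n, 2) * n - n          # 45
# Omega^{ij}_k: symmetric upper pair times a lower index,
# minus the 5 trace conditions Omega^{ij}_j = 0.
dim_70 = (n * (n + 1) // 2) * n - n  # 70

print(dim_adjoint, dim_fund, dim_antisym, dim_45, dim_70)
# -> 24 5 10 45 70
\end{verbatim}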
The following fields from this set can develop potentially non-vanishing VEVs: \begin{itemize} \item the SM singlet field from $\chi^{i}_{j}$ whose GUT scale VEV $\langle (1,1,0)_{\chi} \rangle \equiv v_{GUT}$ results in breaking $SU(5) \to SU(3)_c \times SU(2)_L \times U(1)_Y$; \item the neutral components of the weak doublets from $H^{i}$ and $\Sigma^{ij}_{k}$ whose $SU(2)_L \times U(1)_Y \to U(1)_{Q}$ breaking VEVs $\langle (1,2,+\tfrac{1}{2})_{H} \rangle \equiv v_{5}$ and $\langle (1,2,+\tfrac{1}{2})_{\Sigma} \rangle \equiv v_{45}$ are subject to the condition $v_{5}^2+v_{45}^2 = v_{SM}^2$; \item the neutral components of $(1,2,+\tfrac{1}{2})_{\Omega}$ and $(1,4,+\tfrac{1}{2})_{\Omega}$ from $\Omega^{ij}_{k}$ can also develop VEVs of the order of electroweak scale; \item the neutral component of the weak triplet from $\chi^{i}_{j}$ can develop a tiny (few $\, \mathrm{GeV}$) VEV $\langle (1,3,0)_{\chi} \rangle$ severely constrained by the measured electroweak precision $\rho$ parameter. \end{itemize} \subsection{Scalar potential with $\irrepsub{5}{H}\oplus \irrepsub{10}{S} \oplus \irrepsub{24}{S} \oplus \irrepsub{45}{H}$\label{sec:45-model}} We are only interested in the part of the renormalizable scalar potential that provides the $\,\SU{3}_c \times \protect\linebreak[0]\SU{2}_L \times \protect\linebreak[0]\U{1}_Y\,$ \ invariant contributions to the scalar spectrum proportional to $v_{\rm GUT}$ \begin{align} \label{eq:V} V & = V_{24}(\chi) + V_{5}(H, \chi) + V_{10}(\phi,\chi) + V_{45}(\Sigma,\chi) + V_{\rm mix}(H, \chi, \Sigma) \; . \end{align} Here \begin{align} V_{24} = & -\tfrac{1}{2} \, m_{\chi}^2 \, \chi^{i}_{j} \chi^{j}_{i} + \sqrt{\tfrac{10}{3}} \, \mu_{\chi} \, \chi^{i}_{j} \chi^{j}_{k} \chi^{k}_{i} + \tfrac{1}{8} \, \lambda_1 \, \chi^{i}_{j} \chi^{j}_{i} \chi^{k}_{l} \chi^{l}_{k} + \tfrac{15}{2} \, \lambda_2 \, \chi^{i}_{j} \chi^{j}_{k} \chi^{k}_{l} \chi^{l}_{i} \:, \label{eq:V_24} \\ V_{5\phantom{0}} = & \ m_{H}^2 \, H_{i}^* H^{i} + \sqrt{30} \, \mu_{H} \, H_{i}^* \chi^{i}_{j} H^{j} + \alpha_1 \, H_{i}^* H^{i} \chi^{j}_{k} \chi^{k}_{j} + 30 \, \alpha_2 \, H_{i}^* \chi^{i}_{j} \chi^{j}_{k} H^{k} \:, \label{eq:V_5} \\ V_{10} = & - m_{\phi}^2 \, \phi^*_{ij} \phi^{ji} + 2 \sqrt{30} \, \mu_{\phi} \, \phi_{ij}^* \chi^{j}_{k} \phi^{ki} + \beta_1 \, \phi^*_{ij} \phi^{ji} \chi^{k}_{l} \chi^{l}_{k} + 60 \, \beta_2 \, \phi_{ij}^{*} \chi^{j}_{k} \chi^{k}_{l} \phi^{li} + \nonumber \\ & + 30 \, \beta_3 \, \phi_{ij}^{*} \chi^{j}_{k} \phi^{kl} \chi^{i}_{l} \:, \label{eq:V_10}\\ V_{45} = & \ m_{\Sigma}^2 \, \Sigma^{ij}_{k} \Sigma_{ij}^{*k} + 4 \sqrt{30} \, \mu_{\Sigma} \, \Sigma^{ij}_{k} \Sigma_{ij}^{*l} \chi^{k}_{l} + 8 \sqrt{30} \, \mu_{\Sigma}' \, \Sigma^{ij}_{k} \Sigma_{il}^{*k} \chi^{l}_{j} + \nonumber \\ & + \eta_1 \, \Sigma^{ij}_{k} \Sigma_{ij}^{*k} \chi^{l}_{m} \chi^{m}_{l} + 120 \, \eta_2 \, \Sigma^{ij}_{k} \Sigma_{ij}^{*l} \chi^{k}_{m} \chi^{m}_{l} + 240 \, \eta_3 \, \Sigma^{ij}_{k} \Sigma_{il}^{*k} \chi^{l}_{m} \chi^{m}_{j} + \nonumber \\ & + \tfrac{48}{5} \, \eta_4 \, \Sigma^{ij}_{k} \Sigma_{im}^{*l} \chi^{k}_{j} \chi^{m}_{l} + 120 \, \eta_5 \, \Sigma^{ij}_{k} \Sigma_{im}^{*l} \chi^{m}_{j} \chi^{k}_{l} + 120 \, \eta_6 \, \Sigma^{ij}_{k} \Sigma_{lm}^{*k} \chi^{l}_{i} \chi^{m}_{j} \:, \label{eq:V_45} \\ V_{\rm mix} = & \ \tfrac{12\sqrt{5}}{5} \, \tau \, \Sigma^{ij}_{k} \chi^{k}_{i} H_{j}^* + 12\sqrt{2} \, \kappa_1 \, \Sigma^{ij}_{k} \chi^{k}_{i} \chi^{l}_{j} H_{l}^* + 12\sqrt{2} \, \kappa_2 \, \Sigma^{ij}_{k} \chi^{k}_{l} \chi^{l}_{i} H_{j}^* + \text{h.c.} \:, \label{eq:V_mix} 
\end{align} and the summation over one upper and one lower repeated index is assumed. The potential contains nine real parameters $\{m_{\chi}$, $\mu_{\chi}$, $m_{H}$, $\mu_{H}$, $m_{\phi}$, $\mu_{\phi}$, $m_{\Sigma}$, $\mu_{\Sigma}$, $\mu_{\Sigma}' \}$ and one complex parameter $\{\tau \}$ with a positive dimension of mass. There are an additional thirteen real and two complex dimensionless parameters $\{ \lambda_1$, $\lambda_2$, $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$, $\beta_3$, $\eta_1$, $\eta_2$, $\eta_3$, $\eta_4$, $\eta_5$, $\eta_6 \}$ and $\{ \kappa_1, \kappa_2 \}$, respectively. The signs and various symmetry factors are introduced for convenience. Note that in the unbroken phase the mass terms $\{-m_{\chi}^2$, $m_{H}^2$, $m_{\phi}^2$, $m_{\Sigma}^2 \}$ represent the squared masses of the corresponding $SU(5)$ representations (with the conventional prefactor $\tfrac{1}{2}$ for the real scalar fields and $1$ for complex scalars). \\ The spectrum presented in Tables~\ref{tab:common-scalar-spectrum} and~\ref{tab:45_H-model-scalar-spectrum} is computed in the minimum of the scalar potential (the vacuum state) obtained for \begin{align} \frac{\partial\langle V \rangle}{\partial v_{\rm GUT}} & \equiv 0 \:, \end{align} which yields \begin{align} m_{\chi}^2 & = \tfrac{1}{2} \, v_{\rm GUT} \, (2 \, \mu_{\chi} + (\lambda_1 + 14 \, \lambda_2) v_{\rm GUT}) \:, \end{align} with $v_{\rm GUT}$ kept as a free parameter. The six massless complex scalar states $(3,2,-\tfrac{5}{6})_{\chi}$ are absorbed into the longitudinal components of the twelve heavy gauge bosons. \begin{table}[th!] \caption{The scalar spectrum for the simplest \SU{5} embedding of the BPR model with only the \irrepsub{5}{H}, \irrepsub{10}{S} and \irrepsub{24}{S} multiplets, which corresponds to setting to zero the parameters in $V_{45}$ and $V_{\rm mix}$. These masses remain unchanged even after adding \irrepsub{45}{H} or \irrepsub{70}{H} to the particle content. Note that the parameters $\mu_{H}$, $\mu_{\phi}$ and $\mu_{\chi}$ should be understood as multiplied by $v_{\rm GUT}$ and $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$, $\beta_3$, $\lambda_1$ and $\lambda_2$ by $v_{\rm GUT}^2$, while each of the masses is a sum of the pertinent contributions. For example, $ m^{2}(3, 1, -1/3)_H = m_{H}^2 - 2\, \mu_{H}\, v_{\rm GUT} + (\alpha_1 + 4\alpha_2) v_{\rm GUT}^2 \;.
$ } \label{tab:common-scalar-spectrum} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{5pt} \centering \vskip 0.2cm \begin{tabular}{c|*{12}{c|}} \cline{2-13} \multicolumn{1}{c|}{} & $m^2_{H}$ & $\mu_{H}$ & $\alpha_1$ & $\alpha_2$ & $m^2_{\phi}$ & $\mu_{\phi}$ & $\beta_1$ & $\beta_2$ & $\beta_3$ & $\mu_{\chi}$ & $\lambda_1$ & $\lambda_2$ \\ \hline \multicolumn{1}{|c|}{{$m^2(1,2,+\tfrac{1}{2})_{H}$}} & $1$ & $3$ & $1$ & $9$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(3,1,-\tfrac{1}{3})_{H}$} & $1$ & $-2$ & $1$ & $4$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(3,2,+\tfrac{1}{6})_{\phi}$} & $$ & $$ & $$ & $$ & $1$ & $-1$ & $-1$ & $-13$ & $6$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(\xbar{3},1,-\tfrac{2}{3})_{\phi}$} & $$ & $$ & $$ & $$ & $1$ & $4$ & $-1$ & $-8$ & $-4$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(1,1,+1)_{\phi}$} & $$ & $$ & $$ & $$ & $1$ & $-6$ & $-1$ & $-18$ & $-9$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(1,1,0)_{\chi}$} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $1$ & $1$ & $14$ \\ \hline \multicolumn{1}{|c|}{$m^2(1,3,0)_{\chi}$} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $5$ & $$ & $20$ \\ \hline \multicolumn{1}{|c|}{$m^2(8,1,0)_{\chi}$} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $-5$ & $$ & $5$ \\ \hline \end{tabular} \end{table} \begin{table}[th!] \caption{Additional contribution to the scalar spectrum in the vacuum after adding \irrepsub{45}{H} to the $\irrepsub{5}{H} \oplus \irrepsub{10}{S} \oplus \irrepsub{24}{S}$ model. The last two rows represent the mixing between \irrepsub{5}{H} and \irrepsub{45}{H}. Again, the parameters $\mu_{\Sigma}$, $\mu_{\Sigma}'$ and $\tau$ should be multiplied by $v_{\rm GUT}$ and $\eta_1$, $\eta_2$, $\eta_3$, $\eta_4$, $\eta_5$, $\eta_6$, $\kappa_1$ and $\kappa_2$ by $v_{\rm GUT}^2$.} \label{tab:45_H-model-scalar-spectrum} \centering \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{4pt} \vskip 0.2cm \begin{tabular}{c|*{12}{c|}} \cline{2-13} \multicolumn{1}{c|}{} & $m^2_{\Sigma}$ & $\mu_{\Sigma}$ & $\mu_{\Sigma}'$ & $\eta_1$ & $\eta_2$ & $\eta_3$ & $\eta_4$ & $\eta_5$ & $\eta_6$ & $\tau$ & $\kappa_1$ & $\kappa_2$ \\ \hline \multicolumn{1}{|c|}{{$m^2(1,2,+\tfrac{1}{2})_{\Sigma}$}} & $1$ & $7$ & $19$ & $1$ & $31$ & $67$ & $3$ & $26$ & $21$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(3,1,-\tfrac{1}{3})_{\Sigma}$} & $1$ & $2$ & $-6$ & $1$ & $26$ & $42$ & $4$ & $11$ & $-4$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(3,3,-\tfrac{1}{3})_{\Sigma}$} & $1$ & $12$ & $4$ & $1$ & $36$ & $52$ & $$ & $6$ & $-24$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(\xbar{3},1,+\tfrac{4}{3})_{\Sigma}$} & $1$ & $-8$ & $24$ & $1$ & $16$ & $72$ & $$ & $-24$ & $36$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(\xbar{3},2,-\tfrac{7}{6})_{\Sigma}$} & $1$ & $12$ & $-16$ & $1$ & $36$ & $32$ & $$ & $-24$ & $16$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(\xbar{6},1,-\tfrac{1}{3})_{\Sigma}$} & $1$ & $-8$ & $-16$ & $1$ & $16$ & $32$ & $$ & $16$ & $16$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(8,2,+\tfrac{1}{2})_{\Sigma}$} & $1$ & $-8$ & $4$ & $1$ & $16$ & $52$ & $$ & $-4$ & $-24$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{{$m^2(1,2,+\tfrac{1}{2})_{\rm mix}$}} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $-3$ & \small{$-3\sqrt{3}$} & \small{$-\sqrt{3}$} \\ \hline \multicolumn{1}{|c|}{$m^2(3,1,-\tfrac{1}{3})_{\rm mix}$} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & {\small $2\sqrt{3}$} & $-4$ & $2$ \\ \hline 
\end{tabular} \end{table} The two $(1,2,+\tfrac{1}{2})$ representations from \irrepsub{5}{H} and \irrepsub{45}{H} mix to form the SM Higgs doublet responsible for electroweak symmetry breaking. To compute their physical masses one needs to diagonalize the matrix \begin{align} & \begin{pmatrix} m^2(1,2,+\tfrac{1}{2})_{H} & m^2(1,2,+\tfrac{1}{2})_{\rm mix} \\ (m^2(1,2,+\tfrac{1}{2})_{\rm mix})^* & m^2(1,2,+\tfrac{1}{2})_{\Sigma} \end{pmatrix} . \end{align} A similar diagonalization proceeds for the states $(3,1,-\tfrac{1}{3})$ from $H$ and $\Sigma$. One of the masses needs to be around the weak scale to correspond to the SM Higgs. It may as well be fine-tuned to zero, since we have neglected all the $v_{SM}$ contributions to the spectrum. \\ When both of the Higgs doublets develop non-vanishing VEVs, the Georgi-Jarlskog mechanism can be implemented to account for the observed masses of the light fermions. It is also interesting to note that by excluding the mixing terms ($V_{\rm mix}$ with coefficients $\tau$, $\kappa_1$ and $\kappa_2$), as for example in the scenario without \irrepsub{5}{H} where the Higgs doublet belongs entirely to \irrepsub{45}{H}, the masses of the fields from $\Sigma$ are not linearly independent, and the following relation among them holds \begin{align} & m^2(1,2,+\tfrac{1}{2})_{\Sigma} -\tfrac{3}{4} \, m^2(3,1,-\tfrac{1}{3})_{\Sigma} -\tfrac{9}{8} \, m^2(3,3,-\tfrac{1}{3})_{\Sigma} -\tfrac{3}{4} \, m^2(\xbar{3},1,+\tfrac{4}{3})_{\Sigma} + \nonumber \\ & +\tfrac{3}{4} \, m^2(\xbar{3},2,-\tfrac{7}{6})_{\Sigma} -\tfrac{3}{8} \, m^2(\xbar{6},1,-\tfrac{1}{3})_{\Sigma} +\tfrac{5}{4} \, m^2(8,2,+\tfrac{1}{2})_{\Sigma} = 0 \:. \end{align} However, in the most general case the above expressions for the scalar masses are all linearly independent. One can simplify the spectrum even further by imposing an additional $\mathbb{Z}_2$ symmetry under which $\mu_{\chi} \to 0$ in Eq.~(\ref{eq:V_24}), thus enforcing a strong correlation between the weak triplet and the color octet masses \begin{align} m^2(1,3,0)_{\chi} & = 20 \, \lambda_2 \, v_{\rm GUT}^2 = 4 \, m^2(8,1,0)_{\chi} \:, \\ m^2(1,1,0)_{\chi} & = 2 \, m_{\chi}^2 \:. \end{align} \subsection{Scalar potential with $\irrepsub{5}{H}\oplus \irrepsub{10}{S} \oplus \irrepsub{24}{S} \oplus \irrepsub{70}{H}$\label{sec:70-model}} As long as we are interested only in the $v_{\rm GUT}$-proportional spectrum, the form of the scalar potential remains unaltered upon replacing $\Sigma^{ij}_{k}$ with $\Omega^{ij}_{k}$ in Eqs.~\eqref{eq:V}--\eqref{eq:V_mix}. The corresponding scalar spectrum is shown in Tables~\ref{tab:common-scalar-spectrum} and~\ref{tab:70_H-model-scalar-spectrum}. \begin{table}[th!] \caption{Additional contribution to the scalar spectrum in the vacuum after adding \irrepsub{70}{H} to the $\irrepsub{5}{H} \oplus \irrepsub{10}{S} \oplus \irrepsub{24}{S}$ model. The last two rows represent the mixing between \irrepsub{5}{H} and \irrepsub{70}{H}.
Again, $\mu_{\Omega}$, $\mu_{\Omega}'$ and $\widetilde{\tau}$ should be understood as multiplied by $v_{\rm GUT}$ and $\widetilde{\eta}_1$, $\widetilde{\eta}_2$, $\widetilde{\eta}_3$, $\widetilde{\eta}_4$, $\widetilde{\eta}_5$, $\widetilde{\eta}_6$, $\widetilde{\kappa}_1$ and $\widetilde{\kappa}_2$ by $v_{\rm GUT}^2$.} \label{tab:70_H-model-scalar-spectrum} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{3pt} \centering \vskip 0.2cm \begin{tabular}{c|*{12}{c|}} \cline{2-13} \multicolumn{1}{c|}{} & $m^2_{\Omega}$ & $\mu_{\Omega}$ & $\mu_{\Omega}'$ & $\widetilde{\eta}_1$ & $\widetilde{\eta}_2$ & $\widetilde{\eta}_3$ & $\widetilde{\eta}_4$ & $\widetilde{\eta}_5$ & $\widetilde{\eta}_6$ & $\widetilde{\tau}$ & $\widetilde{\kappa}_1$ & $\widetilde{\kappa}_2$ \\ \hline \multicolumn{1}{|c|}{{$m^2(1,2,+\tfrac{1}{2})_{\Omega}$}} & $1$ & $2$ & $14$ & $1$ & $26$ & $62$ & $6$ & $16$ & $6$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(3,1,-\tfrac{1}{3})_{\Omega}$} & $1$ & $\tfrac{16}{3}$ & $-\tfrac{8}{3}$ & $1$ & $\tfrac{88}{3}$ & $\tfrac{136}{3}$ & $\tfrac{16}{3}$ & $\tfrac{28}{3}$ & $-\tfrac{32}{3}$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(3,3,-\tfrac{1}{3})_{\Omega}$} & $1$ & $12$ & $4$ & $1$ & $36$ & $52$ & $$ & $6$ & $-24$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(\xbar{3},3,+\tfrac{4}{3})_{\Omega}$} & $1$ & $-8$ & $24$ & $1$ & $16$ & $72$ & $$ & $-24$ & $36$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(6,2,-\tfrac{7}{6})_{\Omega}$} & $1$ & $12$ & $-16$ & $1$ & $36$ & $32$ & $$ & $-24$ & $16$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(15,1,-\tfrac{1}{3})_{\Omega}$} & $1$ & $-8$ & $-16$ & $1$ & $16$ & $32$ & $$ & $16$ & $16$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(8,2,+\tfrac{1}{2})_{\Omega}$} & $1$ & $-8$ & $4$ & $1$ & $16$ & $52$ & $$ & $-4$ & $-24$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{$m^2(1,4,+\tfrac{1}{2})_{\Omega}$} & $1$ & $12$ & $24$ & $1$ & $36$ & $72$ & $$ & $36$ & $36$ & $$ & $$ & $$ \\ \hline \multicolumn{1}{|c|}{{$m^2(1,2,+\tfrac{1}{2})_{\rm mix}$}} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & {\small$ -3\sqrt{2}$} & \small{$-3\sqrt{6}$} & \small{$-\sqrt{6}$} \\ \hline \multicolumn{1}{|c|}{$m^2(3,1,-\tfrac{1}{3})_{\rm mix}$} & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $$ & $-4$ & $\tfrac{8}{\sqrt{3}}$ & $-\tfrac{4}{\sqrt{3}}$ \\ \hline \end{tabular} \end{table} There are two major differences from the previous case with \irrepsub{45}{H}. As before, disabling the mixing terms in the scalar potential introduces the linear dependence among the masses of \irrepsub{70}{H} \begin{align} & m^2(1,2,+\tfrac{1}{2})_{\Omega} - \tfrac{9}{8} \, m^2(3,1,-\tfrac{1}{3})_{\Omega} - \tfrac{3}{8} \, m^2(\xbar{3},3,+\tfrac{4}{3})_{\Omega} + \tfrac{3}{8} \, m^2(6,2,-\tfrac{7}{6})_{\Omega} + \nonumber \\ & + \tfrac{1}{4} \, m^2(8,2,+\tfrac{1}{2})_{\Omega} - \tfrac{1}{8} \, m^2(1,4,+\tfrac{1}{2})_{\Omega} = 0 , \\ & m^2(3,3,-\tfrac{1}{3})_{\Omega} + \tfrac{1}{2} \, m^2(\xbar{3},3,+\tfrac{4}{3})_{\Omega} - \tfrac{1}{2} \, m^2(6,2,-\tfrac{7}{6})_{\Omega} + \tfrac{1}{2} \, m^2(15,1,-\tfrac{1}{3})_{\Omega} - \nonumber \\ & - m^2(8,2,+\tfrac{1}{2})_{\Omega} - \tfrac{1}{2} \, m^2(1,4,+\tfrac{1}{2})_{\Omega} = 0 , \label{eq:linear_dependency_70_eq2} \end{align} but now this dependence is preserved even after the states $(1,2,+\tfrac{1}{2})_{\Omega}$ and $(3,1,-\tfrac{1}{3})_{\Omega}$ effectively decouple through the mixing with \irrepsub{5}{H}. 
As can be seen from Eq.~\eqref{eq:linear_dependency_70_eq2}, the rest of the states remain linearly dependent, and since some of them must be heavy (e.g.\ the leptoquarks), a certain fine-tuning is needed to make a particular state light, as required by unification. \\ The second difference comes from the fact that, when considering the full scalar potential for \irrepsub{45}{H} and \irrepsub{70}{H}, the two potentials are no longer of the same form, due to the different symmetry properties of $\Sigma^{ij}_{k}$ and $\Omega^{ij}_{k}$. Namely, since $\phi^{ij}$ is antisymmetric and $\Omega^{ij}_{k}$ is symmetric, all terms contracting $\Omega^{ij}_{k}$ with $\phi_{ij}^*$ vanish. Consequently, the Georgi-Jarlskog mechanism cannot be used in this case and we have to rely on either non-renormalizable Yukawa terms or some other mechanism to explain the pattern of SM fermion masses. \section{Conclusions}\label{sec:Conclusions} Although the SM particle set has been completed with the discovery of the Higgs boson in 2012, it is far from being established as a unique, isolated set~\cite{Wells:2017aoy}. Our search for possible additional particles proceeds with the aim of both explaining the neutrino masses and achieving the unification of gauge couplings. With this in mind, we rely on the BSM states employed in the selected Zee-type BPR neutrino model~\cite{Brdar:2013iea}. This set of states allows us to introduce incomplete \SU{5} representations, which have the potential to improve the gauge coupling crossing. Still, this set alone leads to a unification scale $M_{\rm GUT} < 10^{15}\,\, \mathrm{GeV}$ that is too low, if the Higgs doublet is restricted to the \irrepsub{5}{H} irrep. In contrast, there are immense possibilities to achieve successful unification if the SM Higgs doublet is embedded into \irrepsub{45}{H}. Therefore we specify a plausible set of criteria under which our search algorithm shrinks the number of possibilities to the seven successful scenarios listed in Table \ref{tab:H45}. In all of them, a light colored scalar $(8,2,1/2)$ provided by \irrepsub{45}{H} plays a decisive role. If we choose the SM Higgs to belong to \irrepsub{70}{H} instead of \irrepsub{45}{H}, our search algorithm selects four scenarios (A1, A3, A4 and A6) from Table \protect\ref{tab:H45}, and allows for three additional scenarios displayed in Table \ref{tab:H70}. Notably, in these new scenarios the BPR vector-like leptons are assigned to a complete irrep \irrepsub{5}{F}, which does not affect the RGE running. Since in these latter scenarios only the scalar \SU{5} irreps are incomplete, an eventual verification of them would support the conjecture~\cite{Cox:2016epl} that only scalar irreps may be split. To conclude, in our procedure of renormalizable \SU{5} embedding, the colorless BPR particles employed in the neutrino mass model are accompanied by colored partners to enable a successful unification. We keep sufficiently heavy those among the colored leptoquark scalars which present a threat to proton stability, while the other colored states may play a model-monitoring role both through LHC phenomenology~\cite{Giveon:1991zm,Dorsner:2017ufx} and through tests at the Super- (and future Hyper-) Kamiokande \cite{Miura:2016krn} experiments. We also point out that in most of the allowed parameter space the color octet scalar $(8,2,1/2)$ is the most promising BSM state for LHC searches, and as such has already been studied in \cite{Perez:2016qbo}.
The additional colored states in the specific gauge unification scenarios in Tables~\ref{tab:H45} and \ref{tab:H70} call for a study of characteristic exotic signals at the LHC, which may render some of these specific models falsifiable.
{ "timestamp": "2017-12-15T02:08:31", "yymm": "1712", "arxiv_id": "1712.05246", "language": "en", "url": "https://arxiv.org/abs/1712.05246" }
\section{Introduction}\label{introduction} The discussion of apparent paradoxes in physical theories has proven to be a powerful means of elucidating key theoretical concepts, and Einstein's relativity is no exception. While there has been significant focus upon the `twin paradox' and `barn-and-pole paradox', both occupying a considerable number of pages in the literature and textbooks, more recent discussions have considered uniformly accelerating spaceships in what has become known as `Bell's spaceship paradox' \citep{1959AmJPh..27..517D,Bell:111272}. In this paper, we re-examine the question of uniform acceleration in special relativity, with a focus upon the view from the two spaceships in Bell's paradox. We consider the exchange of light signals during their flights, recovering established results, as well as uncovering novel and intriguing outcomes in terms of the view from the spaceships. The layout of this paper is as follows: in Section~\ref{paradox} we review in detail the discussion of Bell's spaceship paradox in the literature, while in Section~\ref{approach} we outline the approach adopted in this paper. The results and discussion are presented in Section~\ref{results}, and the paper concludes in Section~\ref{conclusions}. \section{Bell's Spaceship Paradox}\label{paradox} Bell's spaceship paradox has a long, and sometimes contradictory, history in the story of special relativity; see \cite{ 1972AmJPh..40.1170E, doi:10.1119/1.19161, 2000physics...4024N, 0295-5075-71-5-699, doi:10.1119/1.2733691, 0143-0807-29-3-N02, 0143-0807-31-2-006, Franklin2013, 2014PhRvA..89f2103M, 2014SerAJ.188...55R} for several examples of contributions to the literature, and the reader is directed to the recent review by \citet{Flores2011} for a more complete discussion. The starting point is to consider two spaceships initially at rest with respect to each other. At a time that is synchronous in the rest frame of the spaceships, they fire their engines and undergo uniform acceleration; here, uniform implies that the crews of the spaceships experience identical and constant `g-forces' as the spaceships accelerate. Given the identical nature of the acceleration experienced by the spaceships, their separation in the original rest frame remains constant. Their world-lines in the coordinates of this frame differ only by an offset: $x_\textrm{leading}(t) = x_\textrm{trailing}(t) + L$. The paradoxical aspect of Bell's scenario comes from considering a taut thread strung between the two spaceships while they are initially sitting at rest. Once the spaceships begin to accelerate, what happens to the thread? There are three options, namely that it remains taut, sags, or stretches until it eventually breaks. A cursory examination of the problem might suggest that, due to relativistic length contraction, the distance between the two spaceships decreases and so the thread will sag. Or that the thread and the distance between the spaceships will contract equally, so the thread will be unaffected. However, when examined in detail, it is found that the distance between the spaceships increases, so the string snaps. Perhaps the clearest way to see that the string snaps is presented by \citet{maudlinspace-time}. Suppose that after some time, $t_\textrm{end}$, both spaceships shut off their engines and return to inertial motion at coordinate velocity $v_\textrm{end}$ in the original rest frame.
Note that this can be achieved by the spaceship pilots agreeing to burn their engines for exactly the same amount of proper time before shutting them down. While the engine shut-downs are simultaneous in the original rest frame, these events will not be simultaneous to the pilots on board the spaceships \citep[e.g.][]{1989AmJPh..57..791B}. However, the resultant world-lines after the acceleration are straight and parallel. If the string returns to the same equilibrium state as before the acceleration, with the same forces balanced between the atoms that make up the string, then a beam of light bounced off the far end of the string will return in the same amount of time as before the acceleration. If the light takes longer to return, then this can only mean that the distance is greater and the string must have stretched and snapped; we will return to this question of `radar-ranging' through the exchange of photons in detail in Section~\ref{radar}, but here we give an outline of the results. Figure \ref{paths} presents a schematic representation of the two spaceships under consideration in Bell's paradox, with the red line indicating the leading spaceship, while the blue line is the trailing spaceship. The grey line denotes the path of a light ray exchanged between the leading and trailing spaceship. Before the acceleration, the bouncing light returns to the stationary spaceship after $\Delta t_\textrm{before} = 2$ (in units where $c = 1$ and the initial separation is $L = 1$). We can similarly trace a radar-ranging photon on a space-time diagram after the acceleration has ceased and the spaceships are in uniform motion, and calculate the proper time between its emission and reception. We find that the time taken for the photon to return is $\Delta t_\textrm{after} = 2 / \sqrt{1 - v^2_\textrm{end}/c^2}$. The distance between the spaceships has expanded, and the string has snapped. We can also reconstruct the series of events in the instantaneous reference frames of each of the spaceships. Consider the trailing spaceship at some time during the accelerating phase. In the coordinates of the original rest frame, the simultaneity slice of the rocket is tilted upwards with respect to the $x$-axis. Given an event at time $t$ on the world-line of the trailing rocket, the simultaneous event (according to the trailing rocket) on the world-line of the leading rocket is at a larger value of $t$. While the acceleration is symmetric in the original frame, the trailing rocket concludes (having reconstructed the series of events in its instantaneous frame) that the leading ship is in fact accelerating more rapidly, as it is travelling faster at any given time. Conversely, the leading ship concludes that the trailing ship accelerated more slowly; thus, the string snaps. This is not merely a consequence of the finite speed of light. That is, it is not about what the trailing ship \emph{sees}. It is a consequence of the relativity of simultaneity, and the changing slices of simultaneity as the spaceships accelerate. The majority of the literature on Bell's spaceship paradox has focused upon this question of what happens to the taut string between the ships, with examinations of the question of relativistic stress in accelerating objects. However, in the remainder of this paper, we will turn to less-explored questions regarding the relativistic influence on their view of each other.
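This radar-ranging argument can be verified with a few lines of arithmetic. The following minimal Python sketch (our illustration; the velocity value is an arbitrary choice) computes the out-and-back light travel time in the original frame after the engines shut off and converts it to the proper time aboard the now-inertial ships, recovering $\Delta t_\textrm{after} = 2L/\sqrt{1 - v_\textrm{end}^2}$ with $c=1$:
\begin{verbatim}
import math

L = 1.0  # separation in the original frame (constant throughout)
v = 0.6  # illustrative coordinate velocity after engine shut-off (c = 1)

# Coordinate time for the out-and-back photon exchange after shut-off:
# L/(1 - v) to chase the leading ship, L/(1 + v) to return.
dt_coord = L / (1 - v) + L / (1 + v)   # = 2 L / (1 - v**2)

# Convert to proper time aboard the (now inertial) emitting ship.
gamma = 1 / math.sqrt(1 - v**2)
dtau = dt_coord / gamma                # = 2 L * gamma

print(dtau, 2 * L * gamma)             # both 2.5 for v = 0.6, versus 2 before
\end{verbatim}
For $v_\textrm{end} = 0.6$ the round trip takes a proper time of $2.5$, longer than the pre-acceleration value of $2$: this is precisely the sense in which the string must have stretched.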
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figure1_paths.png} \caption{An illustrative space-time diagram of the paths of the two spaceships under consideration for this paper, with the leading spaceship shown in red and the trailing spaceship in blue. Example exchanges of photons are shown in grey.} \label{paths} \end{center} \end{figure} \section{Approach}\label{approach} To understand the view from each spaceship, we will consider acceleration within special relativity, a topic which has been discussed in detail elsewhere [see Chapter 5 of \citet{hartle2003gravity} for an excellent discussion], and here we provide a summary of the key points. The motion of an accelerating spaceship through space-time is described by a 4-velocity, $u^\alpha$, and a 4-acceleration, $a^\alpha$; this motion obeys key normalization relationships, namely $$\setlength\arraycolsep{0.4em} \begin{array}{ccccc} u \cdot u &=& \eta_{\alpha\beta}\ u^\alpha\ u^\beta &=& -c^2 \nonumber \\ a \cdot a &=& \eta_{\alpha\beta}\ a^\alpha\ a^\beta &=& g^2 \nonumber \\ u \cdot a &=& \eta_{\alpha\beta}\ u^\alpha\ a^\beta &=& 0 \end{array} $$ where $c$ is the speed of light. Here, $\eta_{\alpha\beta}$ are the components of the Minkowski metric of flat space-time, and $g$ is the magnitude of the acceleration, experienced as a g-force within the spaceship. The final expression here demonstrates that the 4-velocity and 4-acceleration remain orthogonal during the motion; see the work of \citet{1960PhRv..119.2082R} for a detailed discussion of orthogonality and the hyperbolic nature of relativistic motion. In this paper, we will consider uniform acceleration, such that the magnitude of the acceleration is the same in the instantaneous reference frame aligned with the spaceship. For motion purely in the $x-$direction, the world-line of a spaceship has an analytic form, namely \begin{align} x^\alpha & = \left( c t(\tau) , x( \tau ) \right) \nonumber \\ & = \frac{c^2}{g} \left( \sinh\left(\frac{g \tau}{c}\right) , \cosh\left(\frac{g \tau}{c} \right) + C \right) \label{analyticpaths} \end{align} where $\tau$ is the proper time experienced by an observer on the accelerating spaceship. The acceleration begins at the proper time of $\tau=0$, corresponding to a coordinate time of $t=0$, and $C$ is a constant set by the starting spatial location of the spaceship. The components of the 4-velocity are given by \begin{equation} u^\alpha = \left( c \frac{dt}{d\tau} , \frac{dx}{d\tau}\right) = c \left( \cosh\left(\frac{g \tau}{c}\right) , \sinh\left(\frac{g \tau}{c} \right) \right) \end{equation} In examining the situation as described in Bell's spaceship paradox, the two spaceships are initially considered to be at rest within a particular coordinate system, with one spaceship at the origin, $x=0$, while the other is located at $x=L$; clearly, in this reference frame, the two spaceships are separated by $L$. At a coordinate time of $t=0$, the two spaceships fire their engines to produce an identical uniform acceleration, and the motion is described by the above equations. The space-time diagram shown in Figure~\ref{paths} demonstrates these paths, with the leading spaceship shown in red, while the trailing spaceship is shown in blue. Again, note that the separation between the two spaceships in the initial reference frame remains $L$ for the duration of the journey.
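The normalization relationships above can be confirmed directly from the world-line of Equation~\ref{analyticpaths}. The brief SymPy sketch below is a check we include for illustration, not part of the original derivation:
\begin{verbatim}
import sympy as sp

tau, g, c, C = sp.symbols('tau g c C', positive=True)

# World-line of Equation (analyticpaths): x^0 = c t(tau), x^1 = x(tau).
x0 = (c**2 / g) * sp.sinh(g * tau / c)
x1 = (c**2 / g) * (sp.cosh(g * tau / c) + C)

# 4-velocity and 4-acceleration by differentiation w.r.t. proper time.
u = [sp.diff(x0, tau), sp.diff(x1, tau)]
a = [sp.diff(ui, tau) for ui in u]

# Minkowski products with signature (-, +).
dot = lambda p, q: -p[0] * q[0] + p[1] * q[1]

print(sp.simplify(dot(u, u)))  # -c**2
print(sp.simplify(dot(a, a)))  #  g**2
print(sp.simplify(dot(u, a)))  #  0
\end{verbatim}
The three printed results are $-c^2$, $g^2$ and $0$, as required.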
Within the flat space-time of special relativity described by the Minkowski metric, the null paths traced by photons are lines at $45^\circ$ in $(ct,x)$ coordinates, and therefore the trajectories of photons exchanged between the two spaceships can be determined from purely geometric considerations; two example paths of photons that are exchanged between the two spaceships are shown in Figure~\ref{paths}. To determine the relative energies of the exchanged photons, we can use the fact that the energy of a photon with a null 4-momentum of $k^\alpha = (k^t , k^x)$ as seen by an observer with a 4-velocity of $u^\alpha = (u^t , u^x)$ is given by \begin{equation} E = - k \cdot u = -\eta_{\alpha\beta} k^\alpha u^\beta \end{equation} [see \citet{1994AmJPh..62..903N} for a generalised discussion of redshifts in relativity]. With this, the energy of a photon as observed by one spaceship, $E_o$, compared to the energy with which it was emitted by the other spaceship, $E_e$, is given by \begin{equation} \frac{E_o}{E_e} = \frac{ \eta_{\alpha\beta} k^\alpha u_o^\beta }{\eta_{\alpha\beta} k^\alpha u_e^\beta} \end{equation} Given that photons follow null paths, in the Minkowski space-time $k^t = | k^x |$, where the absolute value in the spatial component accounts for photon paths in either the positive or negative $x-$direction. With this, the relative energies of the photons are given by \begin{equation} \frac{E_o}{E_e} = \frac{u^t_o \mp u^x_o}{u^t_e \mp u^x_e} \end{equation} where the positive term is for photons travelling in the negative $x-$direction, while the negative term is for photons travelling in the positive $x-$direction. \section{Results}\label{results} Given the analytic form of the 4-velocity in Equation~\ref{analyticpaths}, and the geometric form of the photon path, we are able to derive expressions for the relative emitted and observed photon energies between the two spaceships. In the following, the subscript $l$ refers to the leading spaceship, while $t$ refers to the trailing spaceship. Additionally, we adopt units in which the speed of light $c=1$, and we employ a normalized acceleration given by $a = \frac{g}{c}$. \subsection{Radar Distance}\label{radar} The concept of radar distance is well established in relativistic physics, with an observer measuring the time taken for a photon to travel to an object of interest and back again. In Figure~\ref{paths}, the grey photon path runs from the leading spaceship to the trailing spaceship and back again to the leading spaceship, representing a single `ping' in the leading spaceship's determination of the radar distance to the trailing spaceship \citep{2008MNRAS.388..960L,2008ASSL..349..131P}. Given the analytic form of the spaceship paths in Equation~\ref{analyticpaths}, we can relate the proper times for the emission and receipt of light rays during the accelerated portion of the spaceships' journeys.
For a light ray travelling in the positive $x-$direction, this is given by \begin{equation} \exp( -a \tau_l ) = \exp( -a \tau_t ) - a\ L \end{equation} whereas for a light ray travelling in the negative $x-$direction, the corresponding expression is \begin{equation} \exp( a \tau_l ) = \exp( a \tau_t ) - a\ L \end{equation} Using these expressions, it is straightforward to calculate the radar distances determined by the two spaceships, and these are presented in Figure~\ref{radardistance} for an acceleration of $a=0.5$ and $L=1$; we note that there is a degeneracy between the separation and the acceleration, with the equations above depending only upon their (dimensionless) product $a\,L$. Here, the solid lines denote the light travel time as a function of the proper time at which a photon is emitted, whereas the dashed lines show it as a function of the proper time at which a photon is received. The red curves represent the view from the leading spaceship, whereas the blue curves are for the trailing spaceship. \begin{figure} \begin{center}\includegraphics[width=\columnwidth]{./Figure2_radar.png} \caption{The radar distance as measured by the leading spaceship in red as a function of the proper time photons are emitted (solid), or subsequently received (dashed). The corresponding radar distance for the trailing spaceship is presented in blue. As for the previous figure, this is presented for a fiducial case where $a=0.5$ and $L=1$.} \label{radardistance} \end{center} \end{figure} In examining the solid curves in Figure~\ref{radardistance}, it is immediately apparent that the radar distance for both spaceships diverges at a finite proper time. This is understandable due to the presence of the Rindler horizon for the leading spaceship: eventually it will outrun any photon emitted or reflected from the trailing spaceship, and so the return journey required for the radar `pings' becomes impossible. However, while the leading spaceship loses sight of the trailing spaceship as the latter slips behind the Rindler horizon (as we shall see in the following section), the trailing ship continues to view the leading spaceship, even though it can no longer measure a finite radar distance. So it would be able to see the leading ship's tail-lights, but would be unable to illuminate the ship with its own headlights. This also confirms the resolution of the original Bell's paradox: as the ships accelerate, the (radar-ranging) distance between them increases, snapping the string. In the final section of this paper, we will consider the apparent angular size of the two spaceships as determined by those travelling on them. \subsection{Blueshift and Redshift} \subsubsection{View from the leading spaceship:}\label{leading} When considering the view from the leading spaceship, it must be remembered that when it begins its acceleration, it is still receiving photons that the trailing ship emitted before the acceleration began. So, for the initial stages of its journey, the 4-velocity for the photon emitter is given by $u_e^\alpha = (u_e^t,u_e^x) = (1,0)$. Eventually, the leading spaceship will receive photons emitted after the trailing spaceship begins its acceleration, for which the emitter 4-velocity is given by the above relations.
\subsection{Blueshift and Redshift} \subsubsection{View from the leading spaceship:}\label{leading} When considering the view from the leading spaceship, it must be remembered that when it begins its acceleration, it is still receiving photons from the trailing ship that were emitted before the acceleration began. So, for the initial stages of its journey, the 4-velocity of the photon emitter is given by $u_e^\alpha = (u_e^t,u_e^x) = (1,0)$. Eventually, the leading spaceship will receive photons emitted after the trailing spaceship begins its acceleration, for which the 4-velocity is given by the above relations. With this, for the first stage of the journey, the ratio of the photon energies is given by \begin{equation} \frac{E_l}{E_t} = \exp( { - a \tau_l } ) \end{equation} while for the second stage of the path it is given by \begin{equation} \frac{E_l}{E_t} = \frac{1}{1 + a\ L\ \exp({a \tau_l})} \label{eq1} \end{equation} where $\tau_l$ is the proper time recorded on a clock on the leading spaceship. The transition between these two relations occurs at \begin{equation} \tau_l = -\frac{1}{a} \log_e \left( 1 - a\ L \right) \end{equation} corresponding to the receipt of the photon emitted by the trailing spaceship at the moment it began to accelerate. \subsubsection{View from the trailing spaceship:}\label{trailing} As with the view from the leading spaceship, when the trailing spaceship begins its acceleration it is still receiving photons that the leading spaceship emitted while at rest, so again $u_e^\alpha = (u_e^t,u_e^x) = (1,0)$; eventually, the trailing spaceship begins to receive photons emitted after the leading spaceship began its acceleration. In this circumstance, the ratio of the photon energies for the first stage of the accelerated path is given by \begin{equation} \frac{E_t}{E_l} = \exp( { a \tau_t } ) \end{equation} and the corresponding value for the second part of the journey is given by \begin{equation} \frac{E_t}{E_l} = \frac{1}{1 - a\ L\ \exp({- a \tau_t})} \label{eq2} \end{equation} where $\tau_t$ is the proper time recorded on a clock on the trailing spaceship. The transition between these two relations occurs at \begin{equation} \tau_t = \frac{1}{a} \log_e \left( 1 + a\ L \right) \end{equation} \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{./Figure3_redshifts.png} \caption{The observed blue/redshift of photons as seen by the leading spaceship (red curve) and trailing spaceship (blue curve) as a function of the proper time of the observer. The distinct breaks in each of the curves delineate the point in the journey where there is a transition between observing the emitting spaceship at rest and observing it accelerating. As noted in the text, the leading spaceship loses sight of the trailing ship, with the observed energy tending to zero. The trailing spaceship initially sees an increase in the blueshifting of the leading spaceship, before it decreases back towards unity.} \label{redshift} \end{center} \end{figure}
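These piecewise relations are trivial to evaluate numerically. The following Python fragment (an illustrative addition; note that the transition for the leading spaceship requires $a\ L < 1$, otherwise the trailing ship lies beyond the Rindler horizon from the outset) reproduces the two curves of Figure~\ref{redshift}:
\begin{verbatim}
import numpy as np

def shift_seen_by_leading(tau_l, a=0.5, L=1.0):
    # E_l/E_t as a function of the leading ship's proper time
    # (requires a*L < 1, otherwise the transition never occurs).
    tau_switch = -np.log(1.0 - a * L) / a
    if tau_l < tau_switch:  # photon emitted before the acceleration
        return np.exp(-a * tau_l)
    return 1.0 / (1.0 + a * L * np.exp(a * tau_l))

def shift_seen_by_trailing(tau_t, a=0.5, L=1.0):
    # E_t/E_l as a function of the trailing ship's proper time.
    tau_switch = np.log(1.0 + a * L) / a
    if tau_t < tau_switch:
        return np.exp(a * tau_t)
    return 1.0 / (1.0 - a * L * np.exp(-a * tau_t))

taus = np.linspace(0.0, 5.0, 6)
print([shift_seen_by_leading(t) for t in taus])   # tends to 0
print([shift_seen_by_trailing(t) for t in taus])  # tends to 1
\end{verbatim}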
\subsubsection{Interpretation:}\label{interpretation} Figure~\ref{redshift} presents a graphical illustration of these results, again for an initial separation of $L=1.0$ and an acceleration of $a=0.5$, with the red curve representing the view from the leading spaceship, while the blue curve is the view from the trailing ship; again, we note that the results depend only upon the product of these two quantities. As per the above expressions, the leading spaceship views the trailing ship as being increasingly redshifted, with the observed energy dropping to zero as the image of the trailing ship is frozen on what is known as the Rindler horizon \citep{1966AmJPh..34.1174R}. This freezing on the Rindler horizon can also be seen in Figure~\ref{time}, which presents the proper time on the emitting spaceship when a photon is emitted, and the corresponding time on the receiving spaceship when the photon is observed. Again, the red curve corresponds to the case where photons are emitted from the trailing spaceship and observed by the leading spaceship, whereas the blue curve is the case where photons are emitted from the leading spaceship and received by the trailing spaceship. Considering the red curve, it is clear that the proper time of the emitter asymptotes to a particular value as the redshift increases, demonstrating the freezing of the image of the trailing spaceship on the Rindler horizon. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./Figure4_times.png} \caption{The relationship between the proper time of the emitting spaceship when a photon is emitted, compared to the proper time on the observing spaceship when the photon is received. The red curve represents the case of a photon being emitted from the trailing spaceship and being observed by the leading spaceship, while the blue curve corresponds to the emission of photons from the leading spaceship and being observed by the trailing spaceship. } \label{time} \end{center} \end{figure} Examining the blue curves in Figures~\ref{redshift} and \ref{time} reveals that the view from the trailing spaceship is quite different, with the asymptotic behaviour of the relative energies tending to unity, so that there is no net redshifting or blueshifting, and with the clocks returning to ticking at the same relative rate. Hence, the trailing ship appears to settle back into the view it had before the acceleration began. We will return to the asymptotic behaviour of the emitted and observed photon energies in the next section. In re-examining the results presented in Equations \ref{eq1} and \ref{eq2}, in the limit where the acceleration and separation are small, we note that the exponential terms tend to unity and the ratio of the observed photon energies depends upon $a\ L$. The form is equivalent to that of the photon redshifting and blueshifting observed in a uniform weak gravitational field, which depends upon the potential difference between the emitter and the observer, with $a\ L$ being equivalent to $\Delta \Phi = g h$; such a result is expected from the relativistic equivalence of uniform acceleration and a uniform gravitational field, and the reader is directed to Chapter 6 of \citet{hartle2003gravity} for more details. \subsection{Angular Size}\label{angular} Unlike radar ranging, which requires a two-way exchange of light, ``seeing'' only requires light to travel in one direction. For the purposes of this study, we will consider the angular size of each spaceship as determined from the other. In calculating this, we assume each spaceship is a disk of radius $d$ orientated perpendicular to the $x$-direction. It is straightforward to connect light paths leaving the edge of one of the spaceships to a camera at the centre of the other by using the fact that these paths must be null; \begin{equation} -(t(\tau_i) - t(\tau_j))^2 + (x(\tau_i) - x(\tau_j))^2 + d^2 = 0 \label{null} \end{equation} where $\tau_i$ and $\tau_j$ are the proper times experienced on each of the spaceships. For the purposes of this study, we employ a numerical root-finder to identify the null paths. Once the light paths are identified in space-time, their orientation relative to the camera can be determined. However, it is essential to transform these into the observer's frame to determine the angle as discerned by the camera. For this, we identify an orthonormal frame (see \citet{hartle2003gravity}, Chapter 8, for details) with the observer, and can transform the $x$-component of the photon 4-momentum to be \begin{equation} p'^x = p^x u^t - p^t u^x \end{equation} where $u^t$ and $u^x$ are the components of the 4-velocity of the observer, and $p^t$ is determined from the fact that photon paths are null.
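The numerical procedure is easily sketched. The Python fragment below (an illustrative reconstruction, assuming the hyperbolic paths $t(\tau) = \sinh(a\tau)/a$ and $x(\tau) = x_0 + (\cosh(a\tau) - 1)/a$ implied by the relations above, and assuming that the observed photon was emitted during the accelerated phase, which for the fiducial parameters requires $\tau_{\rm obs} \gtrsim 1.39$) finds the null emission time and the apparent half-angle of the trailing ship as seen by the leading ship's camera:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

a, L, d = 0.5, 1.0, 0.01  # fiducial values used in the figures

t_of = lambda tau: np.sinh(a * tau) / a
x_lead = lambda tau: L + (np.cosh(a * tau) - 1.0) / a
x_trail = lambda tau: (np.cosh(a * tau) - 1.0) / a

def half_angle_seen_by_leading(tau_obs):
    # Null condition above, for a photon leaving the edge of the
    # trailing ship at tau_e, reaching the leading camera at tau_obs.
    f = lambda tau_e: (-(t_of(tau_obs) - t_of(tau_e))**2
                       + (x_lead(tau_obs) - x_trail(tau_e))**2 + d**2)
    tau_e = brentq(f, 0.0, tau_obs)  # emission in accelerated phase
    # Photon 4-momentum (up to scale), boosted into the camera frame:
    p_t = t_of(tau_obs) - t_of(tau_e)
    p_x = x_lead(tau_obs) - x_trail(tau_e)
    u_t, u_x = np.cosh(a * tau_obs), np.sinh(a * tau_obs)
    p_x_frame = p_x * u_t - p_t * u_x  # p'^x = p^x u^t - p^t u^x
    return np.arctan2(d, p_x_frame)

print(half_angle_seen_by_leading(2.0))
\end{verbatim}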
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{./Figure5_angular.png} \caption{The apparent angular size of the two spaceships as viewed from each other, with the red being the view from the leading spaceship, and the blue being the view from the trailing spaceship. As in previous examples, $L=1$, $a=0.5$ and the radius of the spaceships is $d=0.01$. } \label{angularfig} \end{center} \end{figure} For the purposes of this study, we solve this problem numerically, and in Figure~\ref{angularfig} we present the apparent angular size of the two spaceships. For this, we assume that $d=0.01$, with $L=1$ and $a=0.5$ as previously, with the red curve denoting the view from the leading spaceship, while the blue curve represents the view from the trailing spaceship. The view from each spaceship is starkly different. The trailing spaceship sees the leading spaceship steadily decrease in angular size, getting smaller and smaller as the two spaceships accelerate. However, the leading spaceship initially sees the trailing spaceship grow in size; this, of course, is a well-known special relativistic result, where an observer at relativistic speeds sees objects, such as a distant star-field, apparently pile up in the direction of motion, a relativistic aberration effect\footnote{e.g. {\tt math.ucr.edu/home/baez/physics/Relativity... /SR/Spaceship/spaceship.html}}. This is reflected in the linear change in the apparent angular size of the spaceships once the acceleration starts. In the low velocity limit, we can approximate the velocity to be $v \sim a \tau$, and the apparent angular size given by the special relativistic aberration relation is \begin{equation} \frac{\theta'}{\theta} \sim \left( 1 \pm v \right) \sim \left( 1 \pm a \tau \right) \end{equation} which accurately describes this initial linear behaviour at the start of the acceleration. But, as the acceleration continues, the angular size deviates from the linear relationship, and the size seen by the leading spaceship decreases, tending back towards the angular size seen before the acceleration starts. An exploration of the magnitude of the acceleration and the separation of the spaceships reveals these views to be generic, with the trailing spaceship seeing a diminishing size for the leading spaceship, whereas the leading spaceship sees the angular size of the trailing ship asymptote to some fixed value. This is generally not the size of the spaceship as seen before the acceleration starts, and the particular asymptote seen in Figure~\ref{angularfig} is due to the particular choice of $a$ and $L$. We illustrate this in Figure~\ref{angularfig2}, which presents the case of the previous figure, together with the cases where $L=0.5$ and $d=0.005$ (thick solid line) and where $L=1.5$ and $d=0.015$ (thick dashed line). With these, the apparent size of the spaceships before the acceleration is the same; after the acceleration, the trailing ship sees the size of the leading spaceship continue to decrease, whereas the leading spaceship sees the size of the trailing spaceship asymptote to a constant value, although the precise value of this asymptote depends explicitly upon the chosen values of $L$ and $d$.
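As a quick numerical check of this low-velocity limit (an illustrative addition, using the exact velocity $v = \tanh(a\tau)$ of hyperbolic motion and the standard aberration factor $\sqrt{(1+v)/(1-v)}$ for a small object directly behind the observer):
\begin{verbatim}
import numpy as np

a = 0.5
tau = np.array([0.01, 0.05, 0.1, 0.2])
v = np.tanh(a * tau)                # exact velocity, hyperbolic motion
exact = np.sqrt((1 + v) / (1 - v))  # aberration factor, object behind
linear = 1 + a * tau                # the approximation quoted above
print(np.round(exact, 4))
print(np.round(linear, 4))          # agreement to first order in a*tau
\end{verbatim}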
\section{Conclusions}\label{conclusions} In this paper, we have revisited Bell's spaceship paradox, expanding the problem to consider the view from both the leading and trailing spaceships, with the exchange of photons between the two during their journeys, a previously unexplored aspect of the scenario of identically accelerating spaceships. \begin{figure} \begin{center}\includegraphics[width=\columnwidth]{./Figure6_angular2.png} \caption{As Figure~\ref{angularfig}, with an acceleration of $a=0.5$, but different values of the initial separation ($L$) and angular size ($d$). Again, the red lines correspond to the view from the leading spaceship, while the blue is the view from the trailing spaceship. The thin line represents the situation where $L=1$ and $d=0.01$ (as in the previous figure), while the thicker solid line corresponds to $L=0.5$ and $d=0.005$, and the thicker dashed line is for $L=1.5$ and $d=0.015$. In each case, the apparent angular size before the period of acceleration is the same, but it is clear that the asymptotic angular size of the trailing spaceship as seen by the leading spaceship depends upon the chosen values of $L$ and $d$.} \label{angularfig2} \end{center} \end{figure} As well as recovering established results for Bell's spaceships, we find that, due to the presence of the Rindler horizon, the distance between the two spaceships as determined by radar ranging diverges for both spaceships. However, the views from the two spaceships do not reflect this divergence, and are distinctly different. The leading spaceship sees the trailing spaceship being progressively redshifted and shrinking as it vanishes behind its Rindler horizon. The view from the trailing spaceship is quite different: there is an initial increase in the blueshifting of photons from the leading spaceship, followed by a subsequent decrease. Similarly, the angular sizes of the two spaceships show some strange behaviour, with the trailing spaceship seeing the leading spaceship shrink in size, whereas the leading spaceship sees the angular size of the trailing spaceship asymptote to some particular value. This behaviour is quite peculiar and unexpected, and, in conclusion, while Bell's spaceship paradox is well studied and discussed in the field of relativity, it still has some surprises yet to yield. \section*{Acknowledgements} We thank the referee for their positive comments and additional relativistic insights that improved the presentation of this paper. LAB is supported by a grant from the John Templeton Foundation. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. \newpage \bibliographystyle{pasa-mnras}
{ "timestamp": "2017-12-15T02:09:07", "yymm": "1712", "arxiv_id": "1712.05276", "language": "en", "url": "https://arxiv.org/abs/1712.05276" }
\section{Introduction}\label{sec:introduction} \IEEEPARstart{C}{omputer vision} has been addressing the problem of head pose estimation for several years.\\ In 2009, Murphy-Chutorian and Trivedi \cite{Trivedi2009} made a first assessment of the proposed techniques. More recently, different approaches have been proposed, together with annotated datasets useful for both training and testing those systems. The interest of the research community is mainly due to the large number of applications that require, or are improved by, a reliable head pose estimation: face recognition with liveness detection, human-computer interaction and people behavior understanding are some examples. Moreover, a large effort has recently been devoted to applications in the automotive field, such as monitoring drivers and passengers. Together with the estimation of the upper-body and shoulder pose, head monitoring is one of the key technologies required to set up (semi-)autonomous driving cars, human-car interaction for entertainment, and driver attention measurement. In the automotive field, vision-based systems are required to cooperate with, or even replace, other traditional sensors, due to the increasing presence of cameras inside the cockpits of new cars and to the ease of capturing images and videos in a completely non-invasive manner. \\ In the past, encouraging results for driver head pose estimation have been achieved using RGB images \cite{Trivedi2009,tran2011,bergasa2006,doshi2012,czuprynski2014} as well as different camera types, such as infrared \cite{ji2004real}, thermal \cite{trivedi2004occupant}, or depth \cite{meyer2015,malassiotis2005,borghi17cvpr}. Among them, the last are particularly promising, since they are robust to strong illumination variations. Moreover, standard techniques based on RGB images are not always feasible, due to poor or absent illumination during the night, or to the continuous illumination changes during the day. \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{images/demofinale7.png} \caption{Visual examples of the proposed framework output in indoor (first row) and automotive (second and third rows) settings. Head pose angles are reported as colored arrows. Depth maps, \textit{Face-from-Depth} and Motion Image inputs are depicted on the left of each frame. } \label{fig:demofinale} \end{figure} Nowadays, the acquisition of depth data is feasible thanks to commercial low-cost, high-quality and small-sized depth sensors, that can be easily placed inside the vehicle.
In this paper, we propose a robust and fast solution for head and shoulder pose estimation, especially devoted to drivers in cars, but one that can be easily generalized to any application where depth images are available. \begin{figure}[t!] \centering \includegraphics[width=0.7\columnwidth]{images/dark.png} \caption{Example of the reliability of the \textit{FfD} network on depth images. Two consecutive frames have been selected from a sequence with an abrupt illumination change (from light to dark). The first column shows the auto-equalized RGB frames, then the corresponding depth maps, and finally the \textit{FfD} reconstruction output. } \label{fig:darkscene} \end{figure} The presented framework provides impressive results, reaching an accuracy higher than 73\% on the new \textit{Pandora} dataset (see Fig. \ref{fig:pandora}) and a low average error on the \textit{Biwi} dataset, thus outperforming all state-of-the-art related works. \begin{figure*}[t!] \centering \includegraphics[width=0.85\linewidth]{images/pandora1.png} \caption{Sample frames from the \textit{Pandora} dataset. As depicted, extreme poses and challenging camouflage can be present.} \label{fig:pandora} \end{figure*} The core of the framework is a Convolutional Neural Network (CNN), called \textit{POSEidon$^+$}, that combines depth, appearance and Motion Images as input to estimate the 3D pose angles via regression. An overview of the model is depicted in Figure \ref{fig:general}. The model is enhanced with a \textit{Face-from-Depth (FfD)} component. \\ This is motivated by recent literature results \cite{ahn2014,drouard2015} that testify to the importance of intensity images for the task. The \textit{FfD} component is able to reconstruct the gray-level appearance of a face directly from the corresponding depth image. Thanks to the insensitivity of the depth image to external illumination conditions, the provided reconstruction is more stable and reliable than gray or color images captured from the same RGB-D sensor. Moreover, the reconstruction can be applied in situations where the depth sensor is exploited alone, without the color stream, for computational or implementation constraints. \\ As an example, in Figure \ref{fig:darkscene} we report two frames captured from an RGB-D sensor at an abrupt illumination change (from light to dark). The depth images are not affected by the illumination change, and thus the corresponding \textit{FfD} reconstructions are identical. The provided output highlights the reliability of the developed network as well as the quality of the results. The overall system is split into two components: the \textit{Face-from-Depth} architecture, followed by the pose estimation module that takes as input the reconstructed gray-level images. At first glance, this approach could seem improper, since we are somehow forcing the \textit{FfD} model to output a human-understandable intermediate representation, \textit{i.e.}, the gray-level image, while training an end-to-end system would enable the network to find the best internal/intermediate representation. However, in addition to a performance improvement, as reported in Section \ref{sec:results}, the introduction of the \textit{Face-from-Depth} component allows the second part of the system to be trained on wider datasets, since more annotated datasets are available for gray-level images than for depth ones. More generally, \textit{FfD} moves input depth images into a domain where more experience is available to understand and process them.
This paper is an improved and extended version of our preliminary work described in \cite{borghi17cvpr}, where the body pose estimation task was carried out through a baseline version of the \textit{POSEidon$^+$} framework, here referred to as \textit{POSEidon}. In this paper we present the overall framework, introducing a new \textit{Face-from-Depth} architecture, which exploits the recent \textit{Deterministic Conditional GAN} models \cite{isola2016image} to reconstruct gray-level face images. To the best of our knowledge, this is one of the first proposals to generate intensity images from depth data for the head pose estimation task with an \textit{adversarial} approach. Moreover, we evaluate the overall quality of the computed face images, and the results confirm their high quality and accuracy.\\ Extensive experiments have been carried out, and the results show that \textit{POSEidon$^+$}, equipped with the improved version of the \textit{Face-from-Depth} architecture, achieves significant improvements in the head pose estimation task. Besides, we show that it is possible to obtain competitive results exploiting a CNN trained on gray-level faces and tested on generated ones. \section{Related Work} \label{sec:related} The complete framework proposed in this paper merges together several modern aspects of computer vision, among others the detection, localization and pose estimation of the head and the shoulders on depth images. In the following, we describe the state of the art of each mentioned topic, including the \textit{Domain Translation} research area related to the \textit{Face-from-Depth} module. \\ \noindent \textbf{Head Pose.} Head pose estimation approaches can rely on different input types: RGB images, depth maps, or both. For this reason, in order to discuss related works, we adopt a classification based on the input data types leveraged by each method. \textit{RGB} methods take monocular or stereo intensity images as input. In \cite{whitehill2008discriminative} a discriminative approach to frame-by-frame head pose tracking is presented, based on the detection of the centers of both eyes, the tip of the nose and the center of the mouth. Also, \cite{vatahska2007,yang2002model,matsumoto2000} leverage well-visible facial features on RGB input images, as does \cite{sun2008} on 3D data. \cite{drouard2017switching} proposed to predict pose parameters from high-dimensional feature vectors, embedding a Gaussian mixture of linear inverse-regression models into a dynamic Bayesian model. However, these methods need facial (\textit{e.g.}, nose and eyes) or pose-dependent features that must always be visible; consequently, these methods fail when such features are not detected.\\ A different approach for head pose estimation involves 3D model registration techniques. Firstly, Blanz and Vetter~\cite{blanz1999} proposed a technique for modeling textured 3D faces, automatically generated from one or more photographs. Cao \textit{et al}. \cite{cao2013} exploited a 3D regression algorithm that learns an accurate, user-specific face alignment model from an easily acquired set of training data, generated from images of the user performing a sequence of predefined facial poses and expressions. Furthermore, \cite{storer20093d} proposed a hybrid approach, which exploits the flexibility of a generative 3D facial model in combination with a fitting algorithm.
However, those techniques often need a manual initialization, which is indeed critical for the effectiveness of the method.\\ A first attempt to use deep learning techniques combined with regression in the head pose estimation problem was performed by Ahn \textit{et al}. \cite{ahn2014}, through a CNN trained on RGB input images. Also, \cite{osadchy2007synergistic} exploits a CNN by mapping images of faces onto a low-dimensional manifold parameterized by pose. In \cite{xu2017joint} a framework to jointly estimate the head pose and the face alignment using global and local CNN features has been presented, while a hybrid approach based on CNNs and Gaussian mixtures was proposed in \cite{lathuiliere2017deep} and \cite{khan2017head}. With deep learning-based approaches, synthetic datasets are often used to train the CNNs, which generally require a huge amount of data \cite{liu3d2016}. \\ Additionally, a number of methods regard head pose estimation as an \textit{optimization} problem: in \cite{bar2012} a multi-template, \textit{Iterative Closest Point} (ICP) \cite{low2004} based gaze tracking system is introduced. Besides, other works use linear or nonlinear regression with extremely low-resolution images \cite{chen2016}. HOG features and a Gaussian locally-linear mapping model were used in \cite{drouard2015} and, finally, recent works produce head pose estimations by performing a face alignment task \cite{zhu2015} using CNNs.\\ In general, RGB-based methods are highly sensitive to illumination, partial occlusions and bad image quality \cite{Trivedi2009}. \begin{figure*}[bt] \centering \includegraphics[width=0.90\linewidth]{images/Schemav4.png} \caption{Overview of the whole \textit{POSEidon$^+$} framework. Depth input images are acquired by depth sensors (black) and provided to a head localization CNN (blue) to suitably crop the images around the upper-body or head regions. The head crop is used to produce the three inputs for the following networks (green), which are then merged to output the head pose (red). In particular, the \textit{Face-from-Depth} architecture reconstructs gray-level face images from the corresponding depth maps, while the Motion Images are obtained by applying the \textit{Farneback} algorithm. Finally, the upper-body crop is used for the shoulder pose estimation (orange). [Best viewed in color]} \label{fig:general} \end{figure*} \textit{Depth} methods, on the other hand, exploit only range data to perform the pose estimation task. A first attempt to localize accurate nose positions in depth maps, in order to perform head tracking and pose estimation, was made in~\cite{malassiotis2005}. Subsequently, \cite{breitenstein2008} used geometric features to identify nose candidates to produce the final pose estimation. A more robust approach was proposed in \cite{fanelli2011,fanelli2011dagm,fanelli2013}, where a Random Regression Forest \cite{liaw2002classification} algorithm is exploited for both head detection and pose estimation purposes.
Furthermore, in \cite{papazov2015} facial point clouds were matched with pose candidates through a novel triangular surface patch descriptor.\\ As previously stated for RGB methods, those techniques require facial attributes, and are thus prone to errors when such features are not detected.\\ The remaining depth methods regard the head pose estimation task as an \textit{optimization} problem: \cite{padeleris2012} used \textit{Particle Swarm Optimization} (PSO) \cite{kennedy2011}, while \cite{meyer2015} performs pose estimation by registering a morphable face model to the measured depth data, combining PSO and ICP techniques. Furthermore, \cite{kondori2011} used a least-squares technique to minimize the difference between the input depth change rate and the predicted rate, to perform 3D head tracking. Finally, in \cite{sheng2017generative} a generative model is proposed that unifies pose tracking and face model adaptation on-the-fly. \\ However, no previous method that uses depth maps as the only input exploits CNNs in an effective way. In this work we propose a method, based on \cite{borghi17cvpr}, which uses depth maps to produce accurate head pose predictions by leveraging CNNs. \textit{RGB-D} methods combine RGB images and depth maps. A first effort to leverage both data types was made in \cite{Seemann2004}, where a Neural Network is exploited to perform head pose predictions. HOG features \cite{dalal2005histograms} were extracted from RGB and depth images in \cite{yang2012,saeed2015}; a \textit{Multi Layer Perceptron} and a linear SVM \cite{hearst1998support} were then used for feature classification, respectively. In \cite{kaymak2012exploiting} Random Forests and tensor regression algorithms are exploited, while \cite{tulyakov2014} used a cascade of tree classifiers to tackle the extreme head pose estimation task. Recently, in \cite{mukherjee2015} a multimodal CNN was proposed to estimate gaze direction: a regression approach was only approximated through a classifier with a granularity of \ang{1} and with 360 classes. As for RGB and depth methods, these appearance-based techniques are not robust enough: they still strongly depend on the detection of visible facial features. \\ Following 3D model registration techniques, \cite{baltruvsaitis2012} leverages intensity and depth data to build a 3D constrained local method for robust facial feature tracking. Furthermore, in \cite{ghiass2015,cai2010,bleiweiss2010,li2016real} a 3D morphable model is fitted, using both RGB and depth data, to predict the head pose. Finally, \cite{rekik2013}, based on a particle filter formalism, presents a new method for 3D face pose tracking in color images and depth data acquired by RGB-D cameras.\\ Several works on head pose estimation, however, do not take the head localization task into consideration. \\ \noindent \textbf{Head Detection and Localization.} To propose a complete head pose estimation framework, head detection is firstly required, to find the complete head or a particular point, for example the head center \cite{shotton2013}. With RGB or intensity images, the Viola and Jones \cite{viola2004} face detector is often exploited, \textit{e.g.}, in \cite{ghiass2015,cai2010,rekik2013,baltruvsaitis2012,Seemann2004}. A different approach delegates the head localization to a classifier, \textit{e.g.,} \cite{tulyakov2014}.
As reported in \cite{meyer2015}, these approaches suffer from the limited generalization capabilities of the exploited models when facing different acquisition devices and scene contexts.\\ Recently, deep learning approaches trained on huge face datasets have reached impressive results \cite{Faceness17,cao2017realtime}. However, very few works in the literature propose methods for head detection or localization using \textit{only} depth images as input. A method based on a novel head descriptor and an LDA classifier is described in \cite{chen2016exploring}: every single pixel is classified as head or non-head, and all pixels are clustered for the final head detection. In \cite{nghiem2012head} a fall detection system is proposed, which includes a module for head detection; heads are detected only on moving objects, through background suppression. In \cite{fanelli2011} patches extracted from depth images are used to compute both the location and the pose of the head, through a regression forest algorithm.\\ \begin{figure}[b!] \centering \includegraphics[width=0.9\columnwidth]{images/localization3.png} \caption{Architecture of the Head Localization network, with the corresponding kernel size (k), number of feature maps (n) and stride (s) indicated for each convolutional layer.} \label{fig:headlocalization} \end{figure} \noindent \textbf{Driver Body Pose.} Only a limited number of works in the literature tackle the problem of driver body pose estimation, focusing only on upper-body parts and taking into account automotive contexts. Ito \textit{et al.} \cite{ito2008}, adopting an intrusive approach, placed six marker points on the driver's body to predict some typical driving operations. A 2D driver body tracking system was proposed in \cite{datta2008}, but a manual initialization of the tracking model is strictly required. In \cite{trivedi2004occupant} a thermal long-wavelength infrared video camera was used to analyze occupant position and posture. In \cite{tran2009introducing} an approach for upper-body tracking using 3D head and hand movements was developed. \\ \noindent \textbf{Domain Translation.} Domain translation is the task of learning a parametric translation function between two domains. Recent works have addressed this problem by exploiting Conditional Generative Adversarial Networks (cGAN) \cite{mirza2014conditional} in order to learn a mapping from input to output images.\\ Isola \textit{et al.} \cite{isola2016image} demonstrated that their model, namely \textit{pix2pix}, is effective at synthesizing photos from label maps, reconstructing objects from edge maps and colorizing images. Moreover, Wang \textit{et al.} \cite{wang2016generative} proposed a method that acts as a rendering engine: given a synthetic scene, their \textit{Style GAN} is able to render a realistic image. In \cite{zhang2017improving} a cGAN is capable of translating an RGB face image to depth data. Recently, a coupled generative adversarial network framework has been proposed \cite{liu2016coupled} to generate pairs of corresponding images in two different domains. In our preliminary work \cite{borghi17cvpr}, we proposed one of the first approaches, based on a traditional CNN sharing aspects with autoencoders \cite{masci2011stacked} and Fully Convolutional Networks \cite{long2015fully}, trained to compute the appearance of a face from the corresponding depth information. \begin{figure}[t!]
\centering \includegraphics[width=0.9\columnwidth]{images/gan.png} \caption{Architecture of the \textit{Face-from-Depth} network.} \label{fig:gan} \end{figure} \section{The POSEidon$^+$ framework} \label{sec:architecture} An overview of the \textit{POSEidon$^+$} framework is depicted in Figure \ref{fig:general}. The final goal is the estimation of the pose of the driver's head and shoulders, defined as the mass center position and the corresponding orientation relative to the reference frame of the acquisition device \cite{Trivedi2009}. The orientation is provided through the three rotation angles \textit{pitch}, \textit{roll} and \textit{yaw}. \textit{POSEidon$^+$} directly processes the stream of depth frames captured in real-time by a commercial sensor. The position and size of the foremost head are estimated by a head localization module based on a regressive CNN (Sect. \ref{sec:headlocalization}). The provided output is used to crop the input frames around the head or shoulder bounding boxes, depending on the subsequent pipeline: frames cropped around the head are fed to the head pose estimation block, while the others are exploited to estimate the shoulder pose.\\ The core components of the system are the \textit{Face-from-Depth} network (Sect.~\ref{sec:Face-from-Depth}) and \textit{POSEidon$^+$} (Sect.~\ref{sec:poseidon}), the network which gives the name to the whole framework. Its trident shape is due to the three included CNNs, each working on a specific source: depth, gray-level (the output of \textit{FfD}) and \textit{Motion Images} data. The first one -- \textit{i.e.}, the CNN directly connected to the input depth data -- plays the main role in the pose estimation, while the others cooperate to reduce the estimation error. \subsection{Head Localization} \label{sec:headlocalization} In this step, we defined and trained a network for head localization, relying on the main assumption that a single person is in the foreground. The image coordinates $(x_H,y_H)$ of the head center are the network outputs, that is, the center of mass of all head points in the frame~\cite{shotton2013}.\\ Details of the adopted deep architecture are reported in Figure \ref{fig:headlocalization}. A limited depth and small-sized filters have been chosen to meet real-time constraints while keeping satisfactory performance. For this reason, input images are firstly resized to $160 \times 132$ pixels. A max-pooling layer ($2 \times 2$) is run after each of the first four convolutional layers, while a dropout regularization ($\sigma=0.5$) is exploited in the fully connected layers. The hyperbolic tangent (\textit{tanh}) activation function is adopted, in order to map continuous output values to a predefined range: $[-\infty, +\infty] \rightarrow [-1, +1]$. The network has been trained with \textit{Stochastic Gradient Descent} (SGD) \cite{krizhevsky2012} and the $L_2$ loss function.\\ Given the head position $(x_H,y_H)$ in the frame, a dynamic size algorithm provides the head bounding box with center $(x_H, y_H)$, width $w_H$ and height $h_H$, around which the frames are cropped: \begin{equation} w_H=\frac{f_x \cdot R_x}{D}, \quad h_H=\frac{f_y \cdot R_y}{D} \label{eq:headBB} \end{equation} \noindent where $f_x, f_y$ are the horizontal and vertical focal lengths in pixels of the acquisition device, respectively, $R_x, R_y$ are the average width and height of a face (for the head pose task, $R_x=R_y=320$), and $D$ is the distance between the head center and the acquisition device, computed by averaging the depth values around the head center.
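A minimal sketch of this dynamic crop is reported below (an illustration with hypothetical helper names; the size of the averaging window around the head center is our assumption, as it is not fixed above):
\begin{verbatim}
import numpy as np

def head_bounding_box(depth, x_h, y_h, fx, fy,
                      Rx=320.0, Ry=320.0, win=5):
    # Dynamic crop: the box size scales inversely with the head
    # distance D, averaged over a small window around the predicted
    # head center (invalid zero-depth pixels are ignored).
    patch = depth[y_h - win:y_h + win + 1, x_h - win:x_h + win + 1]
    D = float(np.mean(patch[patch > 0]))
    w_h = fx * Rx / D
    h_h = fy * Ry / D
    return x_h, y_h, int(round(w_h)), int(round(h_h))
\end{verbatim}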
Some examples of bounding boxes estimated by the network are superimposed on the frames of Figure~\ref{fig:demofinale}. \begin{figure}[b!] \centering \includegraphics[width=0.80\columnwidth]{images/pose3.png} \caption{Architecture of the Head and Shoulder Pose Estimation networks. } \label{fig:networkpose} \end{figure} \section{Face-from-Depth} \label{sec:Face-from-Depth} Due to illumination issues, the appearance of the face is not always available when acquired with an RGB camera, \textit{e.g.}, inside a vehicle at night. On the contrary, depth maps are generally invariant to illumination conditions, but lack texture details. \\ We aim to investigate whether it is possible to imagine the appearance of a face given the corresponding depth data. Metaphorically, we ask the model to mimic the behavior of a blind person trying to figure out the appearance of a friend through touch. \begin{table*}[th!] \caption{Head Pose Estimation Results on \textit{Biwi}. To allow fair comparisons with state of the art methods, \textbf{POSEidon$^+$} has been evaluated using different evaluation protocols.} \centering \small \begin{tabular}{cccccccc} \hline \multirow{2}{*}{\textbf{Validation Procedure}} &\multirow{2}{*}{\textbf{Year}} &\multicolumn{2}{c}{\textbf{Data}} &\multicolumn{3}{c}{\textbf{Head}} &\multirow{2}{*}{\textbf{Avg}} \\ & &Depth &RGB &Pitch &Roll &Yaw & \\ \hline \multicolumn{8}{c}{}\\[-0.6em] \textsc{All sequences used as test set } & & & & & & &\\ \hline Padeleris \cite{padeleris2012} &2012 &$\surd$ & &6.6 & 6.7 & 11.1 &8.1 \\ \hline Rekik \cite{rekik2013} &2013 &$\surd$ &$\surd$ &4.3 & 5.2 & 5.1 &4.9 \\ \hline Martin \cite{Martin2014} &2014 &$\surd$ & & 2.5 & 2.6 & 3.6 &2.9 \\ \hline Papazov \cite{papazov2015} &2015 &$\surd$ & & 2.5 $\pm$ 7.4 & 3.8 $\pm$ 16.0 & 3.0 $\pm$ 9.6 &4.0 $\pm$ 11.0 \\ \hline Meyer \cite{meyer2015} &2015 &$\surd$ & & 2.4 & 2.1 & 2.1 &2.2 \\ \hline Li \cite{li2016real} &2016 &$\surd$ &$\surd$ & 1.7 & 3.2 & 2.2 &2.4 \\ \hline Sheng \cite{sheng2017generative}&2017 &$\surd$ & & 2.0 & 1.9 & 2.3 &2.1 \\ \hline \multicolumn{8}{r}{}\\[-0.5em] \textsc{Leave One Out (LOO) } & & & & & & & \\ \hline Drouard \cite{drouard2015} &2015 & &$\surd$ & 5.9 $\pm$ 4.8 & 4.7 $\pm$ 4.6 & 4.9 $\pm$ 4.1 &5.2 $\pm$ 4.5 \\ \hline Drouard \cite{drouard2017switching} &2017 & &$\surd$ & 10.0 $\pm$ 8.7 & 8.4 $\pm$ 8.0 & 8.6 $\pm$ 7.2 &9.0 $\pm$ 7.9 \\ \hline \textbf{POSEidon$^+$} &2017 &$\surd$ & & \textbf{2.4} $\pm$ \textbf{1.3} & \textbf{2.6} $\pm$ \textbf{1.5} & \textbf{2.9} $\pm$ \textbf{1.5} &\textbf{2.6} $\pm$ \textbf{1.4} \\ \hline \multicolumn{8}{c}{}\\[-0.6em] \textsc{4-fold subject cross validation} & & & & & & & \\ \hline Fanelli \cite{fanelli2013} &2011 &$\surd$ & & 3.5 $\pm$ 5.8 & 5.4 $\pm$ 6.0 & 3.8 $\pm$ 6.5 &- $\pm$ - \\ \hline \textbf{POSEidon$^+$} &2017 &$\surd$ & & \textbf{2.8} $\pm$ \textbf{1.7} & \textbf{2.9} $\pm$ \textbf{2.1} & \textbf{3.6} $\pm$ \textbf{2.5} &\textbf{3.1} $\pm$ \textbf{2.1} \\ \hline \multicolumn{8}{c}{}\\[-0.6em] \textsc{5-fold subject cross validation} & & & & & & & \\ \hline Fanelli \cite{fanelli2011dagm} &2011 &$\surd$ & & 8.5 $\pm$ 9.9 & 7.9 $\pm$ 8.3 & 8.9 $\pm$ 13.0 &8.43 $\pm$ 10.4 \\ \hline \textbf{POSEidon$^+$} &2017 &$\surd$ & & \textbf{2.8} $\pm$ \textbf{1.8} & \textbf{2.8} $\pm$ \textbf{2.2} & \textbf{3.6} $\pm$ \textbf{2.2} &\textbf{3.0} $\pm$ \textbf{2.1} \\ \hline
\multicolumn{8}{c}{}\\[-0.6em] \textsc{8-fold subject cross validation} & & & & & & & \\ \hline Lathuiliere \cite{lathuiliere2017deep}&2017 & &$\surd$ & 4.7 & 3.1 & \textbf{3.1} &3.6 \\ \hline \textbf{POSEidon$^+$} &2017 &$\surd$ & & \textbf{2.8} $\pm$ \textbf{1.9} & \textbf{2.8} $\pm$ \textbf{1.8} & 3.3 $\pm$ 2.0 &\textbf{3.0} $\pm$ \textbf{1.9} \\ \hline \multicolumn{8}{c}{}\\[-0.6em] \textsc{Fixed train and test splits} & & & & & & &\\ \hline Yang \cite{yang2012} &2012 &$\surd$ &$\surd$ & 9.1 $\pm$ 7.4 & 7.4 $\pm$ 4.9 & 8.9 $\pm$ 8.3 &8.5 $\pm$ 6.9 \\ \hline Baltrusaitis \cite{baltruvsaitis2012} &2012 &$\surd$ &$\surd$ & 5.1 & 11.3 & 6.3 &7.6 \\ \hline Kaymak \cite{kaymak2012exploiting} &2013 &$\surd$ &$\surd$ &7.4 &6.6 & 5.0 &6.3 \\ \hline Wang \cite{wang2013head} &2013 &$\surd$ &$\surd$ &8.5 $\pm$ 14.3 & 7.4 $\pm$ 10.8 & 8.8 $\pm$ 14.3 & 8.2 $\pm$ 12.0 \\ \hline Ahn \cite{ahn2014} &2014 & &$\surd$ & 3.4 $\pm$ 2.9 & 2.6 $\pm$ 2.5 & 2.8 $\pm$ 2.4 &2.9 $\pm$ 2.6 \\ \hline Saeed \cite{saeed2015} &2015 &$\surd$ &$\surd$ & 5.0 $\pm$ 5.8 & 4.3 $\pm$ 4.6 & 3.9 $\pm$ 4.2 &4.4 $\pm$ 4.9 \\ \hline Liu \cite{liu3d2016} &2016 & &$\surd$ & 6.0 $\pm$ 5.8 & 5.7 $\pm$ 7.3 & 6.1 $\pm$ 5.2 &5.9 $\pm$ 6.1 \\ \hline POSEidon \cite{borghi17cvpr} &2017 &$\surd$ & & 1.6 $\pm$ 1.7 & 1.8 $\pm$ 1.8 & 1.7 $\pm$ 1.5 &1.7 $\pm$ 1.7 \\ \hline \textbf{POSEidon$^+$} &2017 &$\surd$ & & \textbf{1.6} $\pm$ \textbf{1.3} & \textbf{1.7} $\pm$ \textbf{1.7} & \textbf{1.7} $\pm$ \textbf{1.3} &\textbf{1.6} $\pm$ \textbf{1.4} \\ \hline \end{tabular} \label{tab:resBiwiPose} \end{table*} \subsection{Deterministic Conditional GAN} The \textit{Face-from-Depth} network exploits the \textit{Deterministic Conditional GAN} (det-cGAN) paradigm \cite{isola2016image} and is built around a generative network $G$ capable of estimating a gray-level image $I^E$ of a face from the corresponding depth representation $I^D$. The generator $G$ is trained to produce outputs as indistinguishable as possible from \textit{real} images $I$ by an adversarially trained discriminator $D$, which is expressly trained to distinguish the \textit{real} images from the \textit{fake} ones produced by the generator. Unlike a traditional GAN \cite{goodfellow2014generative,radford2015unsupervised}, the generator network of a det-cGAN takes an image as input (hence \textit{Conditional}) rather than a random noise vector (hence \textit{Deterministic}). As a result, a det-cGAN learns a mapping from observed images $x$ to output images $y$: $G:x\rightarrow y$. The objective of a det-cGAN can be expressed as follows: \begin{multline} \label{eq:ldetcgan} L_{det-cGAN}(G,D)=\mathbb{E}_{I \sim p_{data}(I)}[\log{D(I)}] \\ + \mathbb{E}_{I^D \sim p_{data}(I^D)}[\log({1 - D(G(I^D))})] \end{multline} \noindent where $\log{D(I)}$ represents the log probability that $I$ is \textit{real} rather than \textit{fake}, while $\log({1 - D(G(I^D))})$ is the log probability that $G(I^D)$ is \textit{fake} rather than \textit{real}. $G$ tries to minimize $L_{det-cGAN}(G,D)$ in Equation~\ref{eq:ldetcgan}, against $D$, which tries to maximize it. The optimal solution is: \begin{equation} \label{eq:eg} G^*=\arg \min_{G} \max_{D}L_{det-cGAN}(G,D) \end{equation} \noindent As a possible drawback, the images generated by $G$ are forced to be realistic thanks to $D$, but they can be unrelated to the original input.
For instance, the output could be a realistic image of a head with a very different pose with respect to the input depth map. Thus, it is fundamental to mix the GAN objective with a more traditional loss, such as the SSE distance \cite{pathak2016context}. While the discriminator's job remains unchanged, the generator, in addition to fooling the discriminator, tries to emulate the ground truth output in an SSE sense. The pixel-wise SSE is calculated between downsized versions of the generated and target images, obtained by first applying an average pooling layer. We formulate the final objective as the weighted sum of a content loss and an adversarial loss: \begin{equation} \label{eq:final} G^*=\arg \min_{G} \max_{D}L_{det-cGAN}(G,D) + \lambda L_{SSE}(G) \end{equation} \noindent where $\lambda$ is the weight controlling the impact of the content loss. \begin{table*}[t] \caption{Evaluation metrics computed on the reconstructed gray-level face images with the \textit{Biwi} and \textit{Pandora} datasets. Starting from the left, the $L_1$ and $L_2$ distances are reported, then the absolute and the squared differences, the root-mean-square error and, finally, the percentage of pixels under a certain threshold. Further details about the metrics are reported in \cite{NIPS2014}.} \centering \small \begin{tabular}{c|c||cc|cc|ccc|ccc} \hline \multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Method}} &\multicolumn{2}{c|}{\textbf{Norm} $\downarrow$} &\multicolumn{2}{c|}{\textbf{Difference} $\downarrow$} &\multicolumn{3}{c|}{\textbf{RMSE} $\downarrow$} &\multicolumn{3}{c}{\textbf{Threshold} $\uparrow$} \\ \cline{3-12} & &$L_1$ &$L_2$ &Abs &{\footnotesize Squared} &linear &log &scale-inv &1.25 &2.5 &3.75 \\ \hline \hline \multirow{2}{*}{Biwi} & FfD \cite{borghi17cvpr} &33.35 &2586 &0.454 &24.07 &40.55 &\textbf{0.489} & \textbf{0.445} &0.507 &\textbf{0.806} & \textbf{0.878} \\ \cline{2-12} & \textbf{FfD} &\textbf{24.44} &\textbf{2230} &\textbf{0.388} &\textbf{19.81} &\textbf{35.50} &0.653 & 0.610 &\textbf{0.615} &0.764 &0.840 \\ \hline \hline \multirow{5}{*}{Pandora} & FfD \cite{borghi17cvpr} &41.36 &3226 &0.705 &46.00 &50.77 &0.603 &\textbf{0.485} &0.263 &0.725 &0.819 \\ \cline{2-12} & pix2pix \cite{isola2016image} &19.37 &1909 &\textbf{0.468} &24.07 &30.80 &0.568 &0.539 &0.583 &0.722 &0.813 \\ \cline{2-12} & AVSS \cite{matteo2017generative} &23.93 &2226 &0.629 &34.49 &35.46 &0.658 &0.579 &0.541 &0.675 &0.764 \\ \cline{2-12} & FfD + U-Net &23.75 &2123 &0.653 &34.96 &33.89 &0.639 &0.553 &0.555 &0.689 &0.775 \\ \cline{2-12} & \textbf{FfD} &\textbf{18.21} &\textbf{1808} &0.469 &\textbf{22.90} &\textbf{28.90} &\textbf{0.556} &0.501 &\textbf{0.605} &\textbf{0.743} &\textbf{0.828} \\ \hline \end{tabular} \label{tab:metrics} \end{table*} \subsection{Network Architecture} \label{sec:gan_architecture} We propose to modify the classic hourglass generator architecture, performing a limited number of upsampling and downsampling operations. As shown in the following experimental section, the \textit{U-Net} architecture \cite{ronneberger2015u} can be adopted in order to shuttle low-level information between input and output directly across the network \cite{isola2016image}, but it is less convenient in our case.\\ Following the main architectural guidelines for stable Deep Convolutional GANs by Radford \textit{et al.} \cite{radford2015unsupervised}, we instead adopt the architecture illustrated in Figure \ref{fig:gan} for the Generator.
Specifically, in the encoder part, we use three convolutional layers followed by a strided convolutional layer (with stride 2) to halve the image resolution.\\ The decoding stack uses three convolutional layers followed by a transposed convolutional layer (also referred to as a fractionally strided convolutional layer) with stride $1/2$ to double the resolution, and a final convolution. The number of filters follows a power-of-2 pattern, from 128 to 1024 in the encoder and from 512 to 64 in the decoder. \textit{Leaky ReLU} is used as the activation function in the encoding phase, while \textit{ReLU} is used in the decoding phase. \\ We adopt \textit{batch normalization} before each activation (except for the last layer) and a kernel size of $5 \times 5$ for each convolution.\\ The discriminator architecture complies with the generator's encoder in terms of activations and number of filters, but contains only strided convolutional layers (with stride 2) that halve the image resolution each time the number of filters is doubled. The network then outputs a single \textit{sigmoid} activation. In the discriminator, we use batch normalization before every Leaky ReLU activation, except for the first layer. \subsection{Training details} We trained the det-cGAN on depth images, simultaneously providing the network with the original gray-level images associated with the depth data, in order to compute $L_{SSE}$. To optimize the network, we adopted the standard approach of Goodfellow \textit{et al.} \cite{goodfellow2014generative}, alternating the gradient descent updates between the generator and the discriminator with $K=1$. We used mini-batch SGD, applying the \textit{Adam} solver \cite{kingma2014adam} with $\beta_1 = 0.5$ and a batch size of $64$, and we set $\lambda=10^{-1}$ in Equation~\ref{eq:final} for the experiments. Moreover, to encourage the discriminator to estimate soft probabilities rather than extrapolate extremely confident classifications, we used \textit{one-sided label smoothing} \cite{salimans2016improved}, where the targets for the real examples are replaced with a value slightly less than $1$, such as $0.9$. This solution prevents the discriminator from producing extremely confident predictions that could unbalance the adversarial learning.
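One training iteration can be summarised with the following PyTorch-style sketch (an illustrative reconstruction, not the original code: the networks \texttt{G} and \texttt{D}, the data tensors and the optimizers are assumed to exist, and the average-pooling window used for the SSE term is a free choice):
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_step(G, D, depth, gray, opt_g, opt_d, lam=0.1, pool=8):
    fake = G(depth)

    # Discriminator update: real gray-level faces vs. generated ones,
    # with one-sided label smoothing (real targets set to 0.9).
    opt_d.zero_grad()
    d_real = D(gray)
    d_fake = D(fake.detach())
    loss_d = (F.binary_cross_entropy(d_real,
                                     torch.full_like(d_real, 0.9))
              + F.binary_cross_entropy(d_fake,
                                       torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool D, plus the SSE content loss computed
    # on average-pooled (downsized) images, weighted by lambda.
    opt_g.zero_grad()
    d_gen = D(fake)
    adv = F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))
    sse = ((F.avg_pool2d(fake, pool)
            - F.avg_pool2d(gray, pool)) ** 2).sum()
    loss_g = adv + lam * sse
    loss_g.backward()
    opt_g.step()
\end{verbatim}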
\section{Pose Estimation from depth}\label{sec:poseidon} \subsection{POSEidon$^+$ network} \label{sec:poseidonpose} The \textit{POSEidon$^+$} network is a fusion of three CNNs and has been developed to perform a regression on the 3D pose angles. As a result, continuous Euler values -- corresponding to the \textit{yaw}, \textit{pitch} and \textit{roll} angles -- are estimated (right part of Fig. \ref{fig:general}). The three \textit{POSEidon$^+$} components share the same shallow architecture, based on 5 convolutional layers with kernel sizes of $5 \times 5$, $4 \times 4$ and $3 \times 3$; a $2 \times 2$ max-pooling is conducted only on the first three layers, due to the limited size of the input ($64 \times 64$). The first four convolutional layers have 32 filters each, while the last one has 128 filters. \textit{tanh} is exploited as the activation function; we are aware that \textit{ReLU} \cite{nair2010rectified} converges faster, but better prediction accuracy is achieved with \textit{tanh}.\\ The three networks are fed with different input data types: the first directly takes as input the head-cropped depth images; the second is connected to the \textit{Face-from-Depth} output; and the last operates on Motion Images, obtained by applying the standard \textit{Farneback} algorithm \cite{farneback2001} to pairs of consecutive depth frames. The presence of depth discontinuities around the nose and the eyes generates specific motion patterns which are related to the head pose; Motion Images thus provide useful information for the estimation of the pose of a moving head. Frames with motionless heads are very rare in real videos; however, in those cases the common image compression creates artifacts around the face landmarks which allow the estimation of the head pose. A fusion step combines the contributions of the three networks described above. The last fully connected layer of each component is removed, in order to provide the following layers with more data and not only the estimated angles. As a result, the output of the whole \textit{POSEidon$^+$} network is not simply a weighted mean of the three component outputs, but a more complex combination. Different fusion approaches, proposed by Park \textit{et al}. \cite{park2016}, are investigated (a minimal sketch of these operations is reported after the list). Given two feature maps $x^a, x^b$ with a certain width $w$ and height $h$, for every feature channel $d^x_a, d^x_b$ and $y \in R^{w \times h \times d}$: \begin{itemize} \item \textbf{Multiplication}: computes the element-wise product of two feature maps, as $ y^{mul}=x^a \circ x^b, d^y = d^x_a = d^x_b $ \item \textbf{Concatenation}: stacks two feature maps, without any blend: $y^{cat} = [x^a | x^b], d^y = d^x_a + d^x_b$ \item \textbf{Convolution}: stacks and convolves the feature maps with a filter $k$ of size $1 \times 1 \times (d^x_a + d^x_b)/2$ and a bias term $\beta$: $ y^{conv} = y^{cat} \ast k + \beta, \quad d^y = (d^x_a+d^x_b) / 2 $ \end{itemize} \begin{figure*}[t!] \centering \subfigure[]{\includegraphics[width=0.607\linewidth]{images/A.png}} \quad \subfigure[]{\includegraphics[width=0.3\linewidth]{images/B.png}} \subfigure[]{\includegraphics[width=0.607\linewidth]{images/C.png}} \quad \subfigure[]{\includegraphics[width=0.3\linewidth]{images/D.png}} \caption{Test (a) and train (c) images on the \textit{Pandora} dataset; test (b) and train (d) images on the \textit{Biwi} dataset. For each block, the gray-level images and the corresponding depth faces are depicted in the first columns; face images obtained with the method described in \cite{borghi17cvpr} are reported in the third column; finally, the output of the \textit{Face-from-Depth} network proposed in this paper is depicted in the last column. } \label{fig:facefromdepth} \end{figure*}
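These three fusion operations can be sketched in a few lines of PyTorch (an illustrative addition; the channel counts are arbitrary):
\begin{verbatim}
import torch
import torch.nn as nn

def fuse_mul(xa, xb):        # y = xa o xb, with d_y = d_a = d_b
    return xa * xb

def fuse_cat(xa, xb):        # y = [xa | xb], with d_y = d_a + d_b
    return torch.cat([xa, xb], dim=1)

def make_fuse_conv(da, db):  # y = [xa | xb] * k + beta
    return nn.Conv2d(da + db, (da + db) // 2, kernel_size=1)

xa, xb = torch.randn(1, 128, 8, 8), torch.randn(1, 128, 8, 8)
conv = make_fuse_conv(128, 128)
y = conv(fuse_cat(xa, xb))   # halves the channel dimension
\end{verbatim}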
The final \textit{POSEidon$^+$} framework exploits a combination of two fusion methods, in particular a convolution followed by a concatenation. After the fusion step, three fully connected layers, composed of 128, 84 and 3 activations respectively, and two dropout regularizations ($\sigma=0.5$) complete the architecture. \textit{POSEidon$^+$} is trained with a double-step procedure. First, each individual network described above is trained with the following weighted loss $L_2^w$: \begin{equation} L_2^w = \sum_{i=1}^3 \big \lVert w_i \cdot \left(y_i - f(x_i)\right) \big \rVert _2 \label{eq:l2_pesata} \end{equation} where $ w_i \in \{0.2, 0.35, 0.45\}$. This weight distribution gives more importance to the yaw angle, which is preponderant in the selected automotive context. During the individual training step, the last fully connected layer of each network is preserved; it is then removed to perform the second training phase. Holding the weights learned for the trident components, the new training phase is carried out only on the last three fully connected layers of \textit{POSEidon$^+$}, with the loss function $L^w_2$ reported in Equation \ref{eq:l2_pesata}. In all training steps, the SGD optimizer \cite{krizhevsky2012} is exploited; the learning rate is initially set to $10^{-1}$ and is then reduced by a factor of 2 every 15 epochs. \subsection{Shoulder Pose Estimation} \label{sec:shoulderpose} The framework is completed with an additional network for the estimation of the shoulder pose. We employ the same architecture adopted for the head (Sect. \ref{sec:poseidonpose}), performing regression on the three pose angles.\\ Starting from the head center position (Sect. \ref{sec:headlocalization}), the depth input images are cropped around the driver's neck, using a bounding box $\{x_S, y_S, w_S, h_S\}$ with center $(x_S = x_H, y_S = y_H - (h_H/4))$, and width and height obtained as described in Equation \ref{eq:headBB}, but with different values of $R_x, R_y$ to produce a rectangular crop: these values are tested and discussed in Section \ref{sec:results}. The network is trained with the SGD optimizer \cite{krizhevsky2012}, using the weighted $L_2^w$ loss function described above (see Eq.~\ref{eq:l2_pesata}). As usual, the hyperbolic tangent is exploited as the activation function.
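One plausible reading of the weighted loss of Equation~\ref{eq:l2_pesata}, used for both the head and shoulder networks, is sketched below (an illustrative addition; the batch reduction and the association of the largest weight with the yaw angle follow the description above, while the exact ordering of the angles is our assumption):
\begin{verbatim}
import torch

# Per-angle weights; the largest weight targets the yaw angle.
W = torch.tensor([0.20, 0.35, 0.45])

def weighted_l2(pred, target):
    # pred, target: (batch, 3) tensors of pose angles.
    return ((W * (target - pred)) ** 2).sum(dim=1).mean()
\end{verbatim}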
Twenty subjects have been involved in the recordings: four of them were recorded twice, for a total of 24 sequences. The ground truth of yaw, pitch and roll angles is reported together with the head center and the calibration matrix. The original paper does not report the adopted split between training and testing sets, so fair comparisons are not guaranteed. To avoid this issue, we clearly report the adopted split in the following. \subsubsection{ICT-3DHP dataset} The \textit{ICT-3DHP} dataset has been introduced by Baltrusaitis \textit{et al.} in 2012 \cite{baltruvsaitis2012}. It has been collected using a \textit{Microsoft Kinect} sensor and contains RGB images and depth maps for about 14k frames, divided into 10 sequences. The image resolution is $640 \times 480$ pixels. An additional hardware sensor (\textit{Polhemus Fastrack}) is exploited to generate the ground-truth annotation. The device is placed on a white cap worn by each subject, visible in both RGB and depth frames. The small number of subjects and the limited number of frames make this dataset unsuitable for training deep learning approaches. \subsubsection{Pandora dataset} \label{sec:dataset} In addition to the publicly available datasets, we have also collected and used a new challenging dataset, called \textit{Pandora}. It has been specifically created for head center localization, head pose and shoulder pose estimation in automotive contexts (see Fig.~\ref{fig:pandora}). A frontal and fixed device acquires the upper body of the subjects, simulating the point of view of a camera placed inside the dashboard. The subjects mainly perform driving-like actions, such as holding the steering wheel, looking at the rear-view or lateral mirrors, shifting gears and so on. \textit{Pandora} contains 110 annotated sequences of 10 male and 12 female actors. Each subject has been recorded five times. \textit{Pandora} is the first publicly available dataset which combines the following features: \begin{itemize} \item \textbf{Shoulder angles}: in addition to the head pose annotation, \textit{Pandora} contains the ground-truth data of the shoulder pose expressed as yaw, pitch, and roll. \item \textbf{Wide angle ranges}: subjects perform wide head ($\pm \ang{70}$ roll, $\pm \ang{100}$ pitch and $\pm \ang{125}$ yaw) and shoulder ($\pm \ang{70}$ roll, $\pm \ang{60}$ pitch and $\pm \ang{60}$ yaw) movements. For each subject, two sequences are performed with constrained movements, changing the yaw, pitch and roll angles separately, while three additional sequences are completely unconstrained. \item \textbf{Challenging camouflage}: garments as well as various objects are worn or used by the subjects to create head and/or shoulder occlusions. For example, people wear prescription glasses, sunglasses, scarves and caps, and manipulate smartphones, tablets or plastic bottles. \item \textbf{Deep-learning oriented}: the dataset contains more than 250k full resolution RGB ($1920 \times 1080$) and depth images ($512 \times 424$) with the corresponding annotation. \item \textbf{Time-of-Flight (ToF) data}: a \textit{Microsoft Kinect One} device is used to acquire depth data, with better quality than other datasets created with the first \textit{Kinect} version~\cite{sarbolandi2015}. \end{itemize} Each frame of the dataset is composed of an RGB appearance image, the corresponding depth map, and the 3D coordinates of the upper-body skeleton joints, including the head center and the shoulder positions.
For convenience, the 2D coordinates of the joints on both color and depth frames are provided, as well as the head and shoulder pose angles with respect to the camera reference frame. Shoulder angles are obtained through the conversion to Euler angles of a corresponding rotation matrix, obtained from a user-centered system~\cite{papadopoulos2014} and defined by the following unit vectors $(N_1,N_2,N_3)$: \begin{equation} \begin{array}{cc} N_1=\frac{p_{RS}-p_{LS}}{\| p_{RS}-p_{LS} \|} & U=\frac{p_{RS}-p_{SB}}{\|p_{RS}-p_{SB}\|}\\ \\ N_3=\frac{N_1 \times U}{\| N_1 \times U \|} & N_2=N_{1} \times N_{3} \end{array} \label{eq:refframe} \end{equation} where $p_{LS}$, $p_{RS}$ and $p_{SB}$ are the 3D coordinates of the left shoulder, right shoulder and spine base joints. The annotation of the head pose angles has been collected using a wearable \textit{Inertial Measurement Unit} (IMU) sensor. To avoid distracting artifacts on both color and depth images, the sensor has been placed in a non-visible position, \textit{i.e.}, on the back of the subject's head. The IMU sensor has been calibrated and aligned at the beginning of each sequence, ensuring the reliability of the provided angles. The dataset is publicly available (\url{http://aimagelab.ing.unimore.it/pandora/}). \begin{table}[b] \caption{Results obtained on the \textit{Pandora} dataset with the head pose network trained on gray-level images and tested on the original gray-level images and on reconstructed ones. } \centering \small \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Testing input}} &\multicolumn{3}{c}{\textbf{Head}} &\multirow{2}{*}{\textbf{Acc.}} \\ &Pitch &Roll &Yaw & \\ \hline \hline \textbf{gray-level} &\textbf{7.1} $ \pm$ \textbf{5.6} &\textbf{5.6} $\pm$ \textbf{5.8} &\textbf{9.0} $\pm$ \textbf{10.9} &\textbf{0.613} \\ \hline \hline pix2pix \cite{isola2016image} &7.9 $ \pm$ 8.0 &5.9 $\pm$ 6.3 &12.8 $\pm$ 21.4 &0.581 \\ \hline AVSS \cite{matteo2017generative} &8.9 $ \pm$ 8.5 &6.2 $\pm$ 6.4 &13.4 $\pm$ 20.4 &0.543 \\ \hline FfD \cite{borghi17cvpr} &8.5 $ \pm$ 8.9 &6.1 $\pm$ 6.2 &12.4 $\pm$ 17.3 &0.559 \\ \hline FfD + U-Net &8.7 $ \pm$ 8.4 &6.4 $\pm$ 6.6 &13.5 $\pm$ 19.9 &0.552 \\ \hline \textbf{FfD} &\textbf{7.6} $ \pm$\textbf{ 6.9} &\textbf{5.8} $\pm$ \textbf{6.0} &\textbf{10.1} $\pm$\textbf{ 12.6} &\textbf{0.613} \\ \hline \end{tabular} \label{tab:testgenerated} \end{table} \begin{table*}[th!] \caption{Results of the head pose estimation on \textit{Pandora} comparing different system architectures. The baseline is a single CNN working on the source depth map (Row 1). The accuracy is the percentage of correct estimations ($err<$\ang{15}).
FfD: \textit{Face-from-Depth}, MI: \textit{Motion Images}.} \centering \small \begin{tabular}{c cccc c ccccc} \multicolumn{10}{c}{\textsc{Head Pose estimation error [euler angles]}}\\ \hline \multirow{2}{*}{\textbf{\#}} & \multicolumn{4}{c}{\textbf{Input}} &\multirow{2}{*}{\textbf{Crop}} &\multirow{2}{*}{\textbf{Fusion}} &\multicolumn{3}{c}{\textbf{Head}} &\multirow{2}{*}{\textbf{Accuracy}} \\ & Depth &FfD &MI &Gray & & &Pitch &Roll &Yaw & \\ \hline \hline 1& $\surd$ & & & & &- &8.1 $\pm$ 7.1 &6.2 $\pm$ 6.3 &11.7 $\pm$ 12.2 &0.553 \\ \hline 2&$\surd$ & & & &$\surd$ &- &6.5 $\pm$ 6.6 &5.4 $\pm$ 5.1 &10.4 $\pm$ 11.8 &0.646 \\ \hline 3& &$\surd$ & & &$\surd$ &- &6.8 $\pm$ 6.1 &5.8 $\pm$ 5.0 &10.1 $\pm$ 12.6 &0.658 \\ \hline 4& & &$\surd$& &$\surd$ &- &7.7 $\pm$ 7.5 &5.3 $\pm$ 5.7 &10.0 $\pm$ 12.5 &0.609 \\ \hline 5& & & &$\surd$&$\surd$ &- &7.1 $\pm$ 6.6 &5.6 $\pm$ 5.8 &9.0 $\pm$ 10.9 &0.639 \\ \hline 6&$\surd$ &$\surd$ & & &$\surd$ &concat &5.6 $\pm$ 5.0 &4.9 $\pm$ 5.0 &9.7 $\pm$ 12.1 &0.698 \\ \hline 7&$\surd$ & &$\surd$& &$\surd$ &concat &6.0 $\pm$ 6.1 &4.5 $\pm$ 4.8 &9.2 $\pm$11.5 &0.690 \\ \hline 8&$\surd$ &$\surd$ &$\surd$& &$\surd$ &conv+concat &\textbf{5.6} $\pm$ \textbf{5.2} &\textbf{4.8} $\pm$ \textbf{5.0} &\textbf{8.2} $\pm$ \textbf{9.8} &\textbf{0.736} \\ \hline \end{tabular} \label{tab:resOurDataset} \end{table*} \subsection{Quantitative Results} The proposed framework has been thoroughly tested using the datasets described in Section \ref{sec:dataset}. For the evaluation on the \textit{Pandora} dataset, sequences of subjects 10, 14, 16 and 20 have been used for testing, and the remaining ones for training and validation. With the \textit{Biwi} dataset, test subjects are determined by the adopted validation procedure. Finally, we tested the system on all the sequences contained in the \textit{ICT-3DHP} dataset.\\ \noindent \textbf{Domain Translation. } First, we check the capabilities of the \textit{Face-from-Depth} network alone. Some visual examples of input, output, and ground-truth images are reported in Figure \ref{fig:facefromdepth}.\\ With this aim, we propose two types of evaluation. The first is based on metrics related to the reconstruction accuracy. Following the work of Eigen \textit{et al.} \cite{NIPS2014}, Table \ref{tab:metrics} reports the corresponding results. The system is evaluated on both the \textit{Biwi} and \textit{Pandora} datasets. The \textit{FfD} network is compared with other image-to-image methods taken from the recent literature. In particular, we trained from scratch the deep models proposed in \cite{isola2016image,matteo2017generative} (referred to here as \textit{pix2pix} and \textit{AVSS}, respectively), following the procedures reported in the corresponding papers.\\ Moreover, in order to investigate how architectural choices impact the reconstruction quality of \textit{FfD}, we tested a different design, modifying the network by adding the \textit{U-Net} \cite{ronneberger2015u} skip connections between mirrored layers (cf. Sect. \ref{sec:gan_architecture}). \\ We also compared the presented approach with our preliminary version of the \textit{Face-from-Depth} network \cite{borghi17cvpr}, which fuses the key aspects of \textit{encoder-decoder} \cite{masci2011stacked} and \textit{fully convolutional} \cite{long2015fully} neural networks. \\ For the sake of comparison, we report here key details about the preliminary \textit{FfD} version \cite{borghi17cvpr}. It has been trained in a single step, with input head images resized to $64 \times 64$ pixels.
The activation function is the hyperbolic tangent and the best training performance is reached through the self-adaptive \textit{Adadelta} optimizer \cite{zeiler2012adadelta}. A particular loss function is exploited in order to highlight the central area of the image, where the face is supposed to be after the cropping step; it takes into account the distance between the reconstructed image and the corresponding gray-level ground truth: \begin{equation} L = \frac{1}{R\cdot C} \sum_{i=1}^R \sum_{j=1}^C \left( || y_{ij} - \bar{y}_{ij} || ^ 2 _2 \cdot w_{ij}^{\mathcal{N}} \right) \end{equation} \noindent where $R,C$ are the number of rows and columns of the input images, respectively, and $y_{ij}, \bar{y}_{ij} \in \mathbb{R}^{ch}$ are the intensity values of the ground-truth ($ch=1$) and predicted appearance images. Finally, the term $w_{ij}^{\mathcal{N}}$ introduces a bivariate Gaussian prior mask. The best results have been obtained using $\mu=[\frac{R}{2},\frac{C}{2}]^T$ and $\Sigma = \mathbb{I} \cdot [ \left(R / \alpha \right)^2, \left( C / \beta \right)^2 ]^T$, with $\alpha$ and $\beta$ empirically set to $3.5$ and $2.5$ respectively, for square images with $R=C=64$. Other details about the network architecture and training are reported in \cite{borghi17cvpr}. The second set of tests is specific to the head pose estimation task. The head pose network described in Section \ref{sec:poseidon}, trained with gray-level images taken from the \textit{Pandora} dataset, is tested on the reconstructed face images. Since the network has been trained on real gray-level images to output the angles of the head pose, we can assume that the more similar the generated images are to the corresponding gray-level ones, the better the results. The comparison is presented in Table \ref{tab:testgenerated}. In the first row, results obtained using gray-level images as testing input are reported; this is the best case and serves as a reference baseline. Results in the following rows confirm that our \textit{FfD} is able to generate high-quality faces, very similar to the gray-level ones. Moreover, we note that the head pose network generalizes well in cross-dataset evaluations, since we generally obtain good accuracy even with different types of face images as input.\\ The \textit{Face-from-Depth} network has been created for this goal; even if its output is not always realistic and visually pleasing, the promising results confirm its positive contribution to the estimation of the head pose.\\ \begin{table}[b!] \caption{Results for the head pose estimation task on the \textit{Pandora} dataset. In particular, here we compare our preliminary work \cite{borghi17cvpr} with the proposed one. In addition, we include a comparison with the \textit{POSEidon$^+$} framework in which we replace the head pose estimation network trained on reconstructed face images with the same network trained on gray-level images, here referred to as \textit{POSEidon}*.} \centering \small \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Method}} &\multicolumn{3}{c}{\textbf{Head}} &\multirow{2}{*}{\textbf{Acc.}} \\ &Pitch &Roll &Yaw & \\ \hline \hline POSEidon \cite{borghi17cvpr} &5.7 $ \pm$ 5.6 &4.9 $\pm$ 5.1 &9.0 $\pm$ 11.9 &0.715 \\ \hline POSEidon* &5.6 $ \pm$ 5.8 &4.8 $\pm$ 5.0 &8.8 $\pm$ 10.9 &0.720 \\ \hline \textbf{POSEidon$^+$} &\textbf{5.6} $ \pm$ \textbf{5.2} &\textbf{4.8} $\pm$\textbf{ 5.0} &\textbf{8.2} $\pm$ \textbf{9.8} &\textbf{0.736} \\ \hline \end{tabular} \label{tab:testposeidon} \end{table}
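To make the centre-weighted reconstruction loss reported above concrete, the following NumPy fragment computes $L$ for a single image pair. It is only an illustrative sketch under our own naming, not the code of \cite{borghi17cvpr}:
\begin{verbatim}
import numpy as np

def gaussian_mask(R, C, alpha=3.5, beta=2.5):
    # Bivariate Gaussian prior mask centred at (R/2, C/2),
    # with sigma_r = R / alpha and sigma_c = C / beta.
    r = np.arange(R)[:, None]
    c = np.arange(C)[None, :]
    return np.exp(-0.5 * (((r - R / 2) / (R / alpha)) ** 2
                          + ((c - C / 2) / (C / beta)) ** 2))

def ffd_loss(y_true, y_pred):
    # Mean per-pixel squared error, weighted by the Gaussian mask.
    R, C = y_true.shape
    return np.mean((y_true - y_pred) ** 2 * gaussian_mask(R, C))

# 64x64 gray-level images, as in the preliminary FfD version:
y, y_hat = np.random.rand(64, 64), np.random.rand(64, 64)
print(ffd_loss(y, y_hat))
\end{verbatim}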
\noindent \textbf{Head Pose Estimation. } An ablation study of the \textit{POSEidon$^+$} framework on \textit{Pandora} is conducted, and the results are reported in Table \ref{tab:resOurDataset}, providing the mean and standard deviation of the estimation errors obtained on each angle and for each system configuration. Similarly to Fanelli \textit{et al.} \cite{fanelli2013}, we also report the mean accuracy as the percentage of good estimations (\textit{i.e.}, angle error below \ang{15}).\\ The first row of Table \ref{tab:resOurDataset} shows the performance of a baseline system, obtained using the head pose estimation network only, where input depth frames are fed directly to the network without processing and without cropping the input around the head. As expected, results are reasonable, proving the ability of the deep network to extract useful features for head pose estimation from whole images.\\ The cropping step is instead included in the other rows, using the ground-truth head position as the center and the cropping method described in Section \ref{sec:headlocalization}. All three branches (i.e., depth, \textit{FfD}, and Motion Images) of the \textit{POSEidon$^+$} framework are individually evaluated. In particular, the fifth row includes an indirect evaluation of the reconstruction capabilities of the \textit{Face-from-Depth} network: the same network trained and tested on the original gray-level images performs similarly to the one trained and tested on \textit{FfD} outputs (Row 3). The similar results confirm that the image reconstruction quality is sufficiently accurate, at least for the pose estimation task.\\ Results obtained using pairs of networks are shown in rows 6 and 7, exploiting concatenation to merge the final layers of each component. Finally, the last row reports the performance of the complete framework. To merge layers, we use a \textit{conv} fusion of pairs of input types, followed by the \textit{concat} step. We found this to be the best combination, as described in \cite{borghi17cvpr}. Even if the choice of the fusion method has a limited effect (as thoroughly investigated in \cite{park2016,feichtenhofer2016}), the most significant improvement of the system is reached by combining and exploiting the three input types together. Figure \ref{fig:erroriperangolo} shows a comparison of the performance provided by each trident component: each graph plots the error distribution of a specific network with respect to the ground-truth value. Depth data allows reaching the lowest error rates for frontal heads, while the other input data types are better in the presence of rotated poses. The graphs also highlight the averaging capabilities of \textit{POSEidon$^+$}. \\ Furthermore, in Table \ref{tab:testposeidon} we compare the best performance of \textit{POSEidon$^+$} on the \textit{Pandora} dataset, obtained exploiting the \textit{FfD} network proposed in this paper, with the previous one described in \cite{borghi17cvpr}. We also evaluate \textit{POSEidon$^+$} replacing the central CNN (see Fig. \ref{fig:general}) trained on reconstructed face images with the same CNN trained on gray-level images (this experiment is referred to here as \textit{POSEidon}*). Results confirm that the proposed \textit{POSEidon$^+$} outperforms our preliminary work. The overall quality of the reconstructed face images is confirmed, as is the feasibility of training and testing the pose network on different datasets without a significant drop in performance. \begin{table}[t!]
\caption{Estimation errors and mean accuracy of the shoulder pose estimation on \textit{Pandora}.} \centering \small \begin{tabular}{cc|cccc} \hline \multicolumn{2}{c|}{\textbf{Parameters}} &\multicolumn{3}{c}{\textbf{Shoulders}} &\multirow{2}{*}{\textbf{Accuracy}}\\ \cline{3-5} $R_x$ &$R_y$ &Pitch &Roll &Yaw & \\ \hline \hline \multicolumn{2}{c|}{No crop} &2.5 $\pm$ 2.3 &3.0 $\pm$2.6 &3.7 $\pm$ 3.4 &0.877 \\ \hline 700 &250 &2.9 $\pm$ 2.6 &2.6 $\pm$2.5 &4.0 $\pm$ 4.0 &0.845 \\ \hline 850 &250 &2.4 $\pm$ 2.2 &2.5 $\pm$2.2 &3.1 $\pm$ 3.1 &0.911 \\ \hline 850 &500 &\textbf{2.2} $\pm$ \textbf{2.1} &\textbf{2.3} $\pm$\textbf{2.1} &\textbf{2.9} $\pm$ \textbf{2.9} &\textbf{0.924} \\ \hline \end{tabular} \label{tab:res_crop} \end{table} Finally, we compare the results of \textit{POSEidon$^+$} with the state of the art on the \textit{Biwi} dataset. Due to the lack of a common validation and test protocol, Table \ref{tab:resBiwiPose} is split according to the evaluation procedures adopted, in order to allow fair comparisons. For each validation procedure, we report the results of \textit{POSEidon$^+$}. In particular, we implement 2-fold (half of the subjects in training and half in testing), 4-fold, 5-fold (as adopted in the original works \cite{fanelli2011dagm, fanelli2013}, respectively) and 8-fold subject-independent cross validations. We also conduct the \textit{Leave-One-Out} (LOO) validation protocol. We dedicate the last section of Table \ref{tab:resBiwiPose} to those methods that do not follow a standard evaluation procedure, since they create fixed or random~\cite{ahn2014} sets with a limited number of subjects (or sequences) to test their systems. Besides, we note that a fair comparison with the methods reported in the top part of Table \ref{tab:resBiwiPose} is not possible, since they exploit all the sequences of the \textit{Biwi} dataset for testing, while deep learning approaches need a certain amount of training data.\\ Results confirm the excellent performance of \textit{POSEidon$^+$} and its generalization ability across different training and testing subsets with different validation protocols. The system outperforms all the reported methods, including our previous proposal \cite{borghi17cvpr}. The average error is lower than that of other approaches, even those not using all the frames available in the \textit{Biwi} dataset (some works exclude the frames on which the face detection fails \cite{fanelli2011dagm, fanelli2013}). \\ \noindent \textbf{Shoulder Pose Estimation. } The network performing the shoulder pose estimation has been tested on \textit{Pandora} only, due to the lack of the corresponding annotation in the other datasets. Results are reported in Table \ref{tab:res_crop}. \\ In particular, we conduct the evaluation on different input types, varying the values $R_x$ and $R_y$ (cf. Section \ref{sec:shoulderpose}) that affect the head and shoulder crops. We also test the shoulder pose network using the whole input depth frame, without any crop. The reported results are very promising, reaching an accuracy of over $92\%$. \\ \begin{table}[t!]
\centering \small \caption{Results on the \textit{Biwi}, \textit{ICT-3DHP} and \textit{Pandora} datasets of the complete \textit{POSEidon$^+$} pipeline (\textit{i.e.}, head localization, cropping and pose estimation).} \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Dataset}} &\multirow{2}{*}{\textbf{Local.}} &\multicolumn{3}{c}{\textbf{Head}} \\ \cline{3-5} & &Pitch &Roll &Yaw \\ \hline \hline Biwi &3.27$\pm$2.19 & 1.5$\pm$1.4 &1.6$\pm$1.6 &2.2$\pm$2.0 \\ \hline ICT-3DHP &- & 4.9$\pm$4.2 &3.5$\pm$3.4 &6.8$\pm$6.0 \\ \hline Pandora &4.27$\pm$3.25 & 7.3$\pm$8.2 &4.6$\pm$4.5 &10.3$\pm$11.4 \\ \hline \end{tabular} \label{tab:pipeline} \end{table} \begin{figure}[b!] \centering \includegraphics[width=0.9\linewidth]{images/error2.png} \caption{Error distribution of each \textit{POSEidon$^+$} component on the \textit{Pandora} dataset. The $x$-axis reports the ground-truth angles; the $y$-axis the error distribution for each input type. } \label{fig:erroriperangolo} \end{figure} \noindent \textbf{Complete pipeline. } In order to have a fair comparison, the results reported in Tables \ref{tab:resBiwiPose} and \ref{tab:resOurDataset} are obtained using the ground-truth head position as input to the crop procedure. We finally test the whole pipeline, including the head localization network described in Section \ref{sec:headlocalization}, also using the \textit{ICT-3DHP} dataset. The mean error of the head localization (in pixels) and the pose estimation errors are summarized in Table \ref{tab:pipeline}. Sometimes, the estimated position generates a more effective crop of the head and, as a result, the whole pipeline performs better on head pose estimation over the \textit{Biwi} dataset. \textit{POSEidon$^+$} reaches valuable results also on the \textit{ICT-3DHP} dataset, providing results comparable with state-of-the-art methods working on both depth and RGB data (4.9$\pm$5.3, 4.4$\pm$4.6, 5.1$\pm$5.4 \cite{saeed2015}, and 7.06, 10.48, 6.90 \cite{baltruvsaitis2012}, for pitch, roll and yaw respectively). We note that \textit{ICT-3DHP} does not include the head center annotation, but rather the position of the device used to acquire pose data, placed on the back of the head; this partially compromises the performance of our method. Besides, we cannot assume consistency between the annotations obtained with different IMU devices, in particular regarding the definition of the null position (\textit{i.e.}, when the head angles are equal to zero). \noindent The complete framework -- except for the \textit{FfD} module -- has been implemented and tested on a desktop computer equipped with an \textit{NVidia Quadro k2200} GPU board and on a laptop with an \textit{NVidia GTX 860M}. Real-time performance has been obtained in both cases, with a processing rate of more than 30 frames per second. The whole system has instead been tested on an \textit{NVidia GTX 1080}, where it is able to run at more than 50 frames per second. Some examples of the system output are reported in Figure \ref{fig:demofinale}. In addition, the original depth map, the \textit{Face-from-Depth} reconstruction and the motion data given as input to \textit{POSEidon$^+$} are placed on the left of each frame. \section{Conclusions} An end-to-end framework to monitor the driver's body pose, called \textit{POSEidon$^+$}, is presented.
In particular, a new \textit{Face-from-Depth} architecture is proposed, based on a Deterministic Conditional GAN approach, to convert depth faces into gray-level images, supporting the head pose prediction.\\ The system is based only on depth images, requires no previous computation of specific facial features, and has shown impressive real-time results on two public datasets. All these aspects make the proposed framework suitable for particularly challenging contexts, such as the automotive one. Since the system has been developed with a modular architecture, each module can be used alone or in combination, reaching lower but still satisfactory performance. This work also provides a comprehensive comparison of recent state-of-the-art works and can serve as a brief review of the current state of the 3D head pose estimation task. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This work has been carried out within the projects “Città educante” (CTN01-00034-393801) of the National Technological Cluster on Smart Communities funded by MIUR and “\textit{FAR2015 - Monitoring the car drivers attention with multisensory systems, computer vision and machine learning}” funded by the University of Modena and Reggio Emilia. We also acknowledge \textit{Ferrari} SpA and CINECA for the availability of real car equipment and high performance computing resources, respectively. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "timestamp": "2018-09-03T02:11:24", "yymm": "1712", "arxiv_id": "1712.05277", "language": "en", "url": "https://arxiv.org/abs/1712.05277" }
\section{Introduction} Multiple quantum-enhanced algorithms have been proposed to speed up certain machine learning tasks \cite{2017Natur.549..195B}, and existing quantum processors have now reached intermediate sizes. The vast majority of quantum-enhanced algorithms were developed for fault-tolerant quantum computing. However, near-term quantum devices will not be error corrected and will hence inherently experience significant, often high, levels of errors during operation. Can quantum devices with pre-fault-tolerance-threshold noise levels be useful for industrial applications? Recent classical-quantum hybrid algorithms, such as the Quantum Approximate Optimization Algorithm \cite{2014arXiv1411.4028F} and the Variational Quantum Eigensolver \cite{2016NJPh...18b3023M}, provide evidence that for some applications in optimization and quantum chemistry, there might exist a quantum advantage that is partially robust to noise. As for machine learning applications, annealers have been shown to be able to perform certain types of machine learning, but the question remained open whether near-term circuit-model quantum computers would be able to accomplish similar tasks. This is what we demonstrate in this paper: we show how machine learning with energy-based neural network models can be performed on noisy intermediate-scale quantum devices. Our algorithm, which we call the Quantum Approximate Boltzmann Machine (QABoM) algorithm, generates approximate samples of distributions for use in machine learning on a near-term circuit-model device rather than a quantum annealer. We do so by building upon a shallow-depth quantum algorithm, the Quantum Approximate Optimization Algorithm \cite{2014arXiv1411.4028F} (QAOA; which generalizes to the Quantum Alternating Operator Ansatz \cite{2017arXiv170903489H}). QAOA can be seen as a bang-bang control variant of the quantum adiabatic algorithm \cite{2017PhRvX...7b1027Y}. It can be thought of as a coarsely Trotterized simulated adiabatic time evolution, where one optimizes the pulse lengths variationally in order to approximately accomplish the same task as the adiabatic evolution with as few gates as possible \cite{2014arXiv1411.4028F}. Thus far, QAOA has been used to find the ground states of operators which encode problem instances---in this paper we extend its capabilities to generating a quantum distribution which can be sampled in machine learning. An interesting property of QAOA is that one might find optima of cost functions with near-constant classical optimization overhead, e.g.~by keeping the number of pulses and the optimization run-time fixed. {\it Structure.} We will now present background material on Quantum Boltzmann Machines, as they have been used in quantum annealing. Then we will explain the basic building blocks of Quantum Approximate Optimization. In Section \ref{sec:algos} the main algorithms are presented, followed by a section devoted to presenting the results of our numerical experiments. A discussion section precedes the conclusion, which is followed by an appendix providing additional details that can aid in reproducing the results of this study. All source code and data are available open-source and linked to in the references.
\section{Background} \subsection{Quantum Boltzmann Machines} Boltzmann machines are a type of generative neural network learning model in which an interacting collection of spins---representing bits of data---is typically trained to assign a low energy to the spin representations of a training data-set distribution. After thermalization---a process thought to be accelerated by quantum computers \cite{2016arXiv160905537B}---a Boltzmann machine can be sampled to produce and also to recognize patterns. Training a network of quantum spins such that it assigns low energy values to an entire training set is precisely the computational bottleneck of Boltzmann-machine-based deep learning. The approaches to train a Boltzmann machine rely on sampling the distribution which is thermal with respect to the network's energy function. From this procedure, a model which approximates the data and its correlation structure is obtained. The energy or Hamiltonian function---given as a linear matrix representation in the quantum case---is often chosen to be that of an Ising model, i.e.~a symmetric Hamiltonian diagonal in the standard basis and of the form \begin{equation}\label{Ising} \hat{H}\equiv-\sum_{j,k\in u} J_{jk} \hat{Z}_j\hat{Z}_k - \sum_{j\in u} B_j \hat{Z}_j \end{equation} where $u$ is an index set for the vertices of a neural network graph $\mathcal{G}$ and $\hat{Z}$ is a Pauli-Z operator. The subset of spins representing data are called \textit{visible} units, while all the rest are called \textit{hidden} units. Mathematically, the goal of the Boltzmann machine algorithm (quantum as well as classical) is for the reduced thermal state on the visibles $\rho|_v = \text{tr}_h \left(e^{-\beta H}\right)/\text{tr}(e^{-\beta H})$ to approximate the state representing the normalized sum over all the data $\rho_{\text{data}}= |D|^{-1}\sum_{d_j\in D} \ket{d_j}\bra{d_j}$, where the non-empty indexed data set is labelled $D = \{d_j\}_j$. To quantify the statistical distance between the visibles' reduced state and the training data, we can use the quantum relative entropy, the quantum analogue of the Kullback-Leibler divergence. By sampling the Gibbs state of the Ising Hamiltonian for a given choice of parameters $\bm{\theta} = \{J_{jk}, B_j\}_{j,k\in u}$, one can compare the statistics of the reduced state on the visible units to that of the data, and suggest weight updates to reduce the relative entropy (and hence train the network). \begin{figure}[h!] \begin{center} \includegraphics[width=0.75 \columnwidth]{Network_data.pdf} \caption{Representation of a Restricted Boltzmann Machine (RBM) neural network, a distribution over states of the visible units, a data distribution, and the relative entropy density between these distributions. An RBM is a Boltzmann machine with a complete bipartite graph topology. The relative entropy between the data and the marginal Gibbs state on the visible units is the measure of statistical distance to be minimized for effective learning. } \label{fig:rbm} \end{center} \end{figure} Instead of minimizing the relative entropy between the visibles and the data by computing derivatives of the relative entropy, it is simpler to minimize a lower bound on this relative entropy \cite{QRBM}.
This method consists of updating each component of $\bm{\theta}$ by comparing expectation values of $\langle \partial_{\bm{\theta}} \hat{H}\rangle$ for the thermal state of $\hat{H}$ to the corresponding average statistics of a thermal state with so-called \textit{clamped} visibles. The procedures consisting of collecting data points to estimate these expectation values are called \textit{unclamped} and \textit{clamped} Gibbs sampling, respectively. Clamped sampling can be achieved by simulating thermalization with respect to $\hat{H}$ with an added \textit{clamping potential}, $\hat{V}_j = -\log \ket{d_j}\!\bra{d_j}_v$ for each data point $d_j\in D$. The update rule for each of the parameters is $\theta_k \mapsto \theta_k+\delta\theta_k$, where \[\delta \theta_k = \tfrac{1}{|D|}\sum_{x\in D}\braket{\partial_{\theta_k} \hat{H} }_{\hat{H}+\hat{V}_x} -\braket{\partial_{\theta_k} \hat{H} }_{\hat{H}}.\] The above expectations are taken for the state which is thermal with respect to the subscript Hamiltonian, i.e.~$\braket{\ldots}_{\hat{K}} \equiv \text{tr}(e^{-\beta \hat{K}}\ldots)/\text{tr}(e^{-\beta \hat{K}})$. Estimating these expectation values of various observables with respect to Gibbs states is where quantum computers can be used to accelerate the training. We will describe how quantum circuits can be used to achieve this Gibbs sampling approximately in Section \ref{sec:algos}. \subsection{Quantum Approximate Optimization} The Quantum Approximate Optimization Algorithm was originally proposed to solve instances of the MaxCut problem. The QAOA framework has since been extended to encompass multiple problem classes related to finding the low-energy states of Ising Hamiltonians. In general, the goal of the algorithm is to find approximate minima of a pseudo-Boolean function $f$ on $n$ bits, $f(\bm{z})$, $\bm{z}\in\{-1,1\}^{\times n}$. This function is often an $m^{\text{th}}$-order polynomial of binary variables for some positive integer $m$, e.g., $f(\bm{z}) = \sum_{\bm{p}}\alpha_{\bm{p}}\bm{z}^{\bm{p}}$, where $\bm{z}^{\bm{p}}=\prod_{j=1}^n z_j^{p_j}$ and the sum runs over multi-indices $\bm{p}\in\{0,1\}^{\times n}$ with at most $m$ nonzero entries. The case where this polynomial is quadratic ($m=2$) has been extensively explored in the literature, and will be the main focus of this paper. The QAOA approach to optimization starts from an initial product state $\ket{\psi_0}^{\otimes n}$; a tunable gate sequence then produces a wavefunction with a high probability of being measured in a low-energy state (with respect to a cost Hamiltonian). We define the energy to be minimized as the expectation value of the cost Hamiltonian $\hat{H}_C \equiv f(\bm{\hat{\bm{Z}}})$, where $\bm{\hat{Z}} = \{\hat{Z}_j\}_{j=1}^n$. \begin{figure}[h!] \includegraphics[width=0.65\columnwidth]{state_path.pdf} \begin{center} \caption{Conceptual analogy comparing (bottom) QAOA to (top) analog adiabatic evolution and (middle) quantum simulated adiabatic evolution, as paths through state space. } \label{fig:statepath} \end{center} \end{figure} \section{Algorithms}\label{sec:algos} \subsection{Quantum Variational Thermalization} The goal of quantum variational thermalization is to variationally approximate the statistics of the thermal state of a given Hamiltonian. In our case, we would like to approximate the statistics of the thermal state $\hat{\rho}_\beta = e^{-\beta \hat{H}_C}/\text{tr}(e^{-\beta \hat{H}_C})$ of the Ising Hamiltonian ~$\hat{H}_C\equiv-\sum_{j,k} J_{jk} \hat{Z}_j\hat{Z}_k - \sum_{j} B_j \hat{Z}_j$.
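To make this sampling target concrete, the following small-scale NumPy sketch (ours, not part of the original implementation) enumerates the exact Gibbs distribution of an Ising cost Hamiltonian for a few spins; this is the distribution that the QAOA-based procedure described below is meant to approximate:
\begin{verbatim}
import numpy as np
from itertools import product

def ising_energy(z, J, B):
    # H_C(z) = -sum_{j<k} J_jk z_j z_k - sum_j B_j z_j, z_j in {-1, +1}.
    return -0.5 * z @ J @ z - B @ z   # factor 1/2: J is symmetric

def gibbs_distribution(J, B, beta):
    # Exact e^{-beta H_C} / Z over all 2^n spin strings
    # (feasible only for small n).
    n = len(B)
    states = np.array(list(product([-1, 1], repeat=n)))
    E = np.array([ising_energy(z, J, B) for z in states])
    p = np.exp(-beta * E)
    return states, p / p.sum()

# Example: 3 spins with random symmetric couplings (arbitrary values).
rng = np.random.default_rng(0)
J = rng.normal(size=(3, 3)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
B = rng.normal(size=3)
states, p = gibbs_distribution(J, B, beta=1.0)
\end{verbatim}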
Our approach consists of optimizing over a family of states $\hat{\rho}(\bm{\mu})$, where $\bm{\mu}$ is a set of variational parameters for our state preparation ansatz. We use the free energy $F_\beta$ as the error function to be minimized through the variation of parameters. Note that the free energy at inverse temperature $\beta$ equals, up to an additive constant independent of the state, the temperature times the relative entropy \footnote{We use the natural logarithm in our relative entropy for convenience. Hence, our units of information are in nats rather than bits.} of the state relative to the thermal state, $F_\beta(\hat{\rho}) = \tfrac{1}{\beta}D_{K\!L}(\hat{\rho}|| \hat{\rho}_\beta) + F_\beta(\hat{\rho}_\beta)$; hence minimizing the free energy at fixed temperature minimizes the relative entropy to the thermal state. Note that our interest lies in the classical statistics of the samples in the computational basis, hence our free energy is the \textit{Shannon} free energy, namely $F_\beta(\hat{\rho}) = \braket{\hat{H}_C}-\tfrac{1}{\beta}S$. In order to variationally minimize the free energy, we propose two approaches. The first keeps the von Neumann entropy of the system fixed while the energy is minimized with a QAOA loop; the second variationally adapts the input von Neumann entropy. Our algorithm consists of first preparing an exact thermal state of an initial uncoupled Hamiltonian $\hat{H}_I$, e.g.~$\hat{H}_I = \sum_j \hat{Z}_j$, then using QAOA to minimize the energy expectation with respect to a \textit{cost Hamiltonian} $\hat{H}_C$, e.g.~$\hat{H}_C\equiv-\sum_{j,k} J_{jk} \hat{Z}_j\hat{Z}_k - \sum_{j} B_j \hat{Z}_j$, which consists of applying alternating exponentials of the cost and of a \textit{mixer} Hamiltonian, e.g., $\hat{H}_M = \sum_j \hat{X}_j$. This allows the sampling of a state which \textit{approximates} the thermal state statistics of the cost Hamiltonian $e^{-\beta \hat{H}_C}/\text{tr}(e^{-\beta \hat{H}_C})$, when measured in the standard basis. For the fixed-entropy, variational-energy variant, explicitly, we start by preparing the state $ \ket{\psi_0}=\bigotimes_{j} \tfrac{1}{\sqrt{2\cosh(\beta)}} \textstyle\sum_{b\in\{0,1\}} e^{(-1)^{1+b} \beta/2 }\ket{b}_j\ket{b}_{E_j}$ using a set of environment purification registers $E = \bigotimes_{j} E_j$ with as many qubits as the system to be thermalized. This state is efficient to prepare, as its preparation circuit is of low constant depth. Tracing this state over the environment qubits, one recovers the thermal state of $\hat{H}_I$, i.e.\ $e^{-\beta \hat{H}_I}/\text{tr}(e^{-\beta \hat{H}_I})$, which is equal to $\bigotimes_{j} \frac{1}{2}\sum_{b\in\{0,1\}}\left(1-(-1)^b \tanh (\beta)\right) \ket{b}_j\!\bra{b}_j$. Following this initial thermal state preparation, we apply the QAOA to minimize the expectation value $\braket{\hat{H}_C}$, with $\hat{H}_M$ as the mixer Hamiltonian and $\hat{H}_C$ as the cost Hamiltonian. This consists of applying the operations $\prod_{l=1}^P \exp(-i\nu_l \hat{H}_M)\exp(-i\gamma_l \hat{H}_C)$ for some fixed $P$ and some real parameters $\{\bm{\nu},\bm{\gamma}\}$, then measuring the system in the computational basis, repeating this preparation and measurement $N$ times to get an estimate of $\braket{\hat{H}_C}$, and using $M$ steps of a classical optimization algorithm (such as Nelder-Mead \cite{Nelder_1965}) to vary the parameters $\{\bm{\nu},\bm{\gamma}\}$ so as to minimize $\braket{\hat{H}_C}$.
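The inner variational loop can be sketched as follows. This is a simplified, non-authoritative pure-state simulation: it starts from $\ket{+}^{\otimes n}$ and omits the purification registers and finite-shot noise, so it illustrates only the pulse structure and the Nelder-Mead outer loop:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

n, P = 4, 3
rng = np.random.default_rng(1)

# Diagonal of the cost Hamiltonian over the 2^n computational basis states.
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
z = 1 - 2 * bits                              # bit 0/1 -> spin +1/-1
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
B = rng.normal(size=n)
E = -0.5 * np.einsum('sj,jk,sk->s', z, J, z) - z @ B

def apply_mixer(psi, nu):
    # exp(-i nu X) = cos(nu) I - i sin(nu) X, applied to every qubit.
    for q in range(n):
        psi = psi.reshape(2 ** (n - q - 1), 2, 2 ** q)
        psi = np.cos(nu) * psi - 1j * np.sin(nu) * psi[:, ::-1, :]
    return psi.reshape(-1)

def qaoa_energy(params):
    gammas, nus = params[:P], params[P:]
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # |+>^n start
    for g, v in zip(gammas, nus):
        psi = np.exp(-1j * g * E) * psi                  # cost pulse
        psi = apply_mixer(psi, v)                        # mixer pulse
    return float(np.abs(psi) ** 2 @ E)                   # <H_C>

res = minimize(qaoa_energy, rng.uniform(0, 1, 2 * P), method='Nelder-Mead')
print(res.fun)   # approximate minimum of <H_C>
\end{verbatim}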
After this QAOA, we claim that the final state for this set of pulses (even crudely) approximates the thermal state of $\hat{H}_C$, in the sense that measuring local observables of this state yields expectation values approximating those of the state $e^{-\beta \hat{H}_C}/\text{tr}(e^{-\beta \hat{H}_C})$, to an accuracy that is sufficient for sampling in the context of neural network training. We demonstrate this empirically through our numerical experiments. In the appendix we explore in greater detail what the final state after QAOA optimization would look like in the asymptotic large-depth case $P\rightarrow \infty$. It is believed that in this regime the QAOA behaves effectively \cite{2014arXiv1411.4028F} like a gapped adiabatic evolution under an interpolating Hamiltonian. We compare this to analog simulated thermalization, which is the evolution performed by physical quantum annealers (such as D-Wave devices \cite{2016arXiv161104528K}), which can also be used for Gibbs sampling. \subsection{Quantum Approximate Boltzmann Machine} We now describe the main algorithm. There are two variants of the algorithm: (i) a gate-based analogue of the Quantum Boltzmann Machine algorithm \cite{QRBM}, and (ii) a \textit{quantum randomized clamping} (QRC) variant of the same algorithm, where the training is performed with batches of data at a time and the input is randomized, either through classical randomization or through the use of a Quantum Random Access Memory \cite{2008PhRvL.100p0501G}. Let $v$ and $h$ be the sets of indices for the visible and hidden units, and let $u = v\cup h$ be the set of indices for all units. Let $D$ be the dataset, made of bit strings $\bm{d}\in D$ whose length equals the number of visible units $|v|$. In the first variant (i), we begin by initializing the network parameters (weights and biases) randomly, providing the zeroth-epoch parameters $\bm{\theta}^{(0)}$. Alternatively, one might perform a grid search over random weight initializations to select one providing a better loss, or perform any other form of hyperparameter optimization standard in machine learning \cite{LeCun_2015,Hinton_2012,Goodfellow-et-al-2016}. Each weight update depends on computing expectation values of certain terms in the cost Hamiltonian of a given epoch. The averages to be computed are equilibrium averages for the \textit{clamped} and \textit{unclamped} distributions. Both sampling procedures are done via our QAOA-based quantum approximate thermalization; in each case a different cost and mixer Hamiltonian is used. At each epoch, we have a set of network parameters $\bm{\theta}^{(n)}$. Given these network parameters, we can define the \textit{full} and \textit{partial} cost Hamiltonians for epoch $n$; the full Hamiltonian is \begin{equation} \hat{H}^{(n)}_C\equiv-\sum_{j,k\in u} J^{(n)}_{jk} \hat{Z}_j\hat{Z}_k - \sum_{j\in u} B^{(n)}_j \hat{Z}_j \end{equation} while the partial cost Hamiltonian $\hat{H}_{\tilde{C}}^{(n)}$ excludes terms strictly supported on the visibles. See appendix \ref{app:reg} for the full form of this cost Hamiltonian. These Hamiltonians are used to perform QAOA for the unclamped and clamped sampling, respectively. The QAOA mixer Hamiltonians for the unclamped and clamped Gibbs sampling, which we call the full and partial mixer Hamiltonians, are given by $\hat{H}_M = \sum_{j\in u} \hat{X}_j$ and $\hat{H}_{\tilde{M}} = \sum_{j\in h} \hat{X}_j$ respectively. Again, the partial Hamiltonian is like the full Hamiltonian with the terms on the visibles removed.
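For illustration, the bookkeeping for the full and partial Hamiltonians can be sketched as below; the representation of a Hamiltonian as a list of (coefficient, qubit indices) pairs is our own hypothetical convention, not the paper's implementation:
\begin{verbatim}
def build_hamiltonians(J, B, visible, hidden):
    # Full / partial cost terms (Pauli-Z) and mixer terms (Pauli-X).
    units = sorted(visible | hidden)
    cost_full, cost_partial = [], []
    for j in units:
        for k in units:
            if j < k and J[j][k] != 0.0:
                term = (-J[j][k], (j, k))
                cost_full.append(term)
                # Partial Hamiltonian: drop terms strictly on the visibles.
                if not (j in visible and k in visible):
                    cost_partial.append(term)
    for j in units:
        cost_full.append((-B[j], (j,)))
        if j not in visible:
            cost_partial.append((-B[j], (j,)))
    mixer_full = [(1.0, (j,)) for j in units]       # sum_j X_j
    mixer_partial = [(1.0, (j,)) for j in hidden]   # sum_{j in h} X_j
    return cost_full, cost_partial, mixer_full, mixer_partial
\end{verbatim}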
The algorithms for the clamped and unclamped sampling are closely related, as they both rely on a QAOA subroutine with similar Hamiltonians. We begin by describing the process of unclamped sampling. First, using a set of $|u|$ ancillary qubits and creating partially entangled Bell pairs, we prepare the thermal state $\hat{\rho}_I = \mathcal{Z}^{-1}_Ie^{-\beta \hat{H}_I} =\bigotimes_{j} \mathcal{Z}_j^{-1}e^{-\beta \hat{Z}_j}$ where $\mathcal{Z}_I= \prod_j \mathcal{Z}_j$ and $\mathcal{Z}_j = 2\cosh(\beta)$. Following this, for each epoch $n$, we apply the QAOA algorithm with our full cost and full mixer Hamiltonians; the $m^\text{th}$ QAOA iteration of the $n^\text{th}$ epoch consists of applying the pulses \begin{equation}\hat{U}^{(n,m)}_{\bm{\nu},\bm{\gamma}} \equiv \prod_{l=1}^P \exp(-i\nu^{(n,m)}_l \hat{H}_M)\exp(-i\gamma^{(n,m)}_l \hat{H}_C).\end{equation} After the pulses are applied, we measure the cost Hamiltonian expectation value \begin{equation} \braket{\hat{H}^{(n)}_C}_{(n,m)} = \text{tr}(\hat{U}^{(n,m)\dagger}_{\bm{\nu},\bm{\gamma}}\hat{H}_C \hat{U}^{(n,m)}_{\bm{\nu},\bm{\gamma}}\hat{\rho}_I) \end{equation} via the quantum expectation estimation (QEE) algorithm \cite{2016NJPh...18b3023M}. The QEE algorithm consists of estimating expectation values of individual terms in the Hamiltonian via repeated identical state preparations followed by measurements, and classically summing these up to get an estimate for the global expectation value. The pulse parameters are updated using a classical optimizer, such as Nelder-Mead \cite{Nelder_1965}, to minimize $\braket{\hat{H}^{(n)}_C}_{(n,m)}$, for a number of optimization iterations $M$. We then repeat the state preparation and measurement with these new parameters $\bm{\gamma}^{(n,m+1)}$ and $\bm{\nu}^{(n,m+1)}$. The first set of pulse parameters for a given weight epoch $n$, i.e., $\bm{\gamma}^{(n,0)}$ and $\bm{\nu}^{(n,0)}$, is initialized randomly. Once an optimum of $\braket{\hat{H}^{(n)}_C}$ is deemed reached, the optimal $\bm{\gamma}^{(n)}$ and $\bm{\nu}^{(n)}$ QAOA parameters have been found for epoch $n$. At this point we have the full circuit to perform Gibbs sampling for our weight updates. We can then measure the unclamped expectation values $\braket{\hat{Z}_j\hat{Z}_k}_{(n)}$ and $\braket{\hat{Z}_j}_{(n)}$ for this optimal QAOA circuit for epoch $n$. Thus we have explained how to perform \textit{unclamped} Gibbs sampling. For \textit{clamped} Gibbs sampling, the algorithm differs wherever the mixer and cost Hamiltonians were used: they are replaced with the partial mixer and partial cost Hamiltonians. To sample the Gibbs distribution of the clamped Hamiltonian for data point $x\in D$, we initially prepare a thermal state of the hidden units $\sim e^{-\beta H_{\tilde{I}}}$, with $H_{\tilde{I}}\equiv \sum_{j\in h} Z_j$, via partially entangled Bell pairs, which leaves the hidden units in a mixed state, while preparing the visible units in the computational basis state $\ket{x}_v$. The same QAOA routine as for the unclamped sampling is applied, except with the partial mixer and cost Hamiltonians $H_{\tilde{M}}$ and $H_{\tilde{C}}$. At a given epoch $n$ we can sample the expectation values for the optimal partial-cost-minimizing QAOA pulse sequence, $\braket{Z_jZ_k}_{(n),x}$ and $\braket{Z_j}_{(n),x}$. We repeat this QAOA optimization and sampling for each data point.
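The sample statistics entering the weight update below can be estimated from the measured bitstrings as in the following sketch (ours; it assumes an $(N, n)$ array of 0/1 outcomes from computational-basis measurements):
\begin{verbatim}
import numpy as np

def estimate_moments(bitstrings):
    # Sample estimates of <Z_j Z_k> and <Z_j> from 0/1 outcomes,
    # using the Z eigenvalues +1 (bit 0) and -1 (bit 1).
    s = 1 - 2 * np.asarray(bitstrings)    # (N, n), entries in {-1, +1}
    zz = s.T @ s / s.shape[0]             # matrix of <Z_j Z_k> estimates
    z = s.mean(axis=0)                    # vector of <Z_j> estimates
    return zz, z

# Example with N = 500 simulated shots on 6 units:
shots = np.random.randint(0, 2, size=(500, 6))
zz_hat, z_hat = estimate_moments(shots)
\end{verbatim}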
Once the expectation values for the unclamped case and for the clamped case of each data point are estimated, we can update the weights according to Melko \textit{et al.}'s \cite{QRBM} bound-based QBM rule, i.e. \begin{align} \delta J^{(n)}_{jk} &= \overline{\braket{Z_j Z_k}}_D - \braket{Z_j Z_k}\\ \delta B^{(n)}_{j} &= \overline{\braket{Z_j }}_D - \braket{Z_j} \end{align} and the $(n+1)^\text{th}$ epoch's weights are then $J^{(n+1)}_{jk} = J^{(n)}_{jk}+ \delta J^{(n)}_{jk}$, and $B^{(n+1)}_{j} = B^{(n)}_{j}+ \delta B^{(n)}_{j}$. The regular training algorithm can be parallelized over multiple quantum chips aided by classical computers, each running the QAOA optimization for one data point of a given gradient update step. The clamping of each data point is done in a simulated fashion by preparing the initial state of the visible units in the $\ket{\bm{x}}$ state (step 3 (b)). Instead of clamping to a single data point at a time, we can perform \textit{Quantum Randomized Clamping} (QRC); this allows us to train on all data points (or a randomly chosen subset, also known as a \textit{minibatch}) with one QAOA optimization. One option for this Quantum Randomized Clamping is to use a Quantum Random Access Memory \cite{2008PhRvL.100p0501G}. For a dataset $D=\{d_j\}_j$, using a QRAM, in $\mathcal{O}(\log |D|)$ gate depth we can prepare the state $|D|^{-1/2}\sum_{j=0}^{|D|-1} \ket{j}_A\ket{d_j}_V$, where $A$ is a binary address register. We can feed the $V$ register to the visible units and run the rest of the algorithm as before, except that the averaging of expectation values over all data points is done automatically. Another option to the same effect is to classically pick a data point $d_j$ at random to clamp our visibles to, for each measurement iteration of the QEE, for each QAOA update, for each weight update. This is effectively akin to preparing the state $|D|^{-1}\sum_{d_j\in D} \ket{d_j}\! \bra{d_j}_V$ and simulating thermalization with this mixed state clamped on the visibles. Since we have to run QAOA only twice for each gradient update in version (ii), as compared to the $1+|D|$ different QAOA optimizations needed in version (i), this can be seen as a speedup over the traditional clamping algorithm, albeit perhaps at the cost of more difficult QAOA optimizations. In appendix \ref{app:QRC} we examine in greater detail the relation between our two approaches to randomized clamping. \section{Numerical Experiments} Figure \ref{fig:KL_noises} depicts the training of a restricted Boltzmann machine with both variants of the QABoM algorithm. The Kullback-Leibler (KL) divergence is computed by performing inference classically using standard techniques for Restricted Boltzmann Machines \cite{Hinton_2012}, using the weights trained on the quantum computer at each epoch. Note that for the specific case of restricted Boltzmann machines, the clamped sampling can be done efficiently classically, but we opted to perform it using our algorithm, as this would be needed for more general network topologies, such as semi-restricted or full/deep Boltzmann machines \cite{QRBM, LeCun_2015}. The number of measurements per QAOA update was $N=500$, the QAOA depth was $P=3$, and the number of Nelder-Mead optimization steps per QAOA parameter update was $M=100$.
The circuit was compiled with a probability $p$ of applying each Pauli error, i.e.~the depolarizing channel \begin{equation} \mathcal{N}_p(\rho) = (1-3p)\rho+pX\rho X+pY\rho Y +pZ\rho Z\end{equation} is added to each gate. An alternate way to write this channel is $\mathcal{N}_p(\rho) = (1-4p)\rho +4p (I/2)$; this gives average gate fidelities $\bar{F}_1 = (1-2p)$ and $\bar{F}_2=(1-2p)^2$ for 1- and 2-qubit gates, respectively. All cases with $p\leq 1\%$ showed signs of training convergence. In some cases the training updates were terminated once the minimal value of the KL divergence was reached, as tested with new data points. The network consisted of 4 visible units and 2 hidden units. We see that the version of the training with QRC outperforms the regular training algorithm. This shows that randomized clamping provides weight updates that better approximate the KL gradient than the regular \cite{QRBM} bound-based update rule using single-data-point clamping. \begin{figure}[h!] \begin{center} \includegraphics[width=1.02\columnwidth]{KL_plot_alt.pdf} \caption{Kullback-Leibler divergence of the reconstructed distribution relative to the data, versus training epoch, for various levels of depolarizing noise ($\mathcal{N}_p$): (a) for regular clamping training; (b) for quantum randomized clamping (QRC), including a noiseless case with QRAM-aided QRC. Both plots share the same vertical scale. } \label{fig:KL_noises} \end{center} \end{figure} Figure \ref{fig:sampling} examines how the quality of the quantum expectation estimation scales with an increasing number of measurements (right), and how performance scales with the number of QAOA pulses (left), with the number of Nelder-Mead iterations fixed to $M=100$, for various noise levels. For the quantum expectation estimation, we depict the average error in the weight update, measured in the squared Euclidean norm on $\mathbb{R}^T$, where $T=\text{dim}(\bm{\theta})$. We see that in the noisy case the extra depth is detrimental, while in the noiseless case, due to increased optimization difficulty at fixed $M$, we get slightly worse performance. This could perhaps be partially remedied by the use of a different optimization algorithm than Nelder-Mead. \begin{figure}[h!] \begin{center} \includegraphics[width=1.02\columnwidth]{QEE_QAOA_alt.pdf} \caption{For various depolarizing noise levels ($\mathcal{N}_p$), training with hidden mode data \cite{QRBM}: (a) minimum KL divergence achieved versus number of QAOA pulses; the dotted line is the initial KL divergence, and the number of Nelder-Mead steps is fixed to $M=100$. (b) Squared Euclidean norm of the error in the weight update due to quantum expectation estimation, versus the number of measurements. $\delta\theta_\text{est}$ is the weight update calculated from the expectation values estimated through measurements, while $\delta\theta$ is the weight update for the exact expectation values calculated directly from the simulated wavefunctions. } \label{fig:sampling} \end{center} \end{figure} \section{Discussion} Sampling exact thermal states of quantum systems, e.g.~via the quantum Metropolis algorithm \cite{2011Natur.471...87T}, is still futuristic. For near-term quantum computing devices, our algorithm provides a means to train neural networks using noisy devices in an approximate way. This achieves practical levels of learning, as we have demonstrated through numerical simulation of noisy quantum computation. Our algorithm prepared crude pseudo-thermal states rather than exact thermal states, yet still achieved practical levels of learning performance.
A possible extension would be to attribute our near-thermal performance to the Eigenstate Thermalization Hypothesis \cite{2017arXiv171004631B}; we leave this connection to be fleshed out in future work. An extension of this algorithm to improve the proximity of our states to thermality could be to variationally maximize the entropy at fixed energy, assuming one could estimate the entropy efficiently. We focused on restricted Boltzmann machines. Our QABoM algorithm is technically general enough to allow for any sort of quantum Boltzmann machine training, as in \cite{QRBM}, i.e.~supervised, unsupervised, deep, restricted and semi-restricted. Thus, an extension of this work could be to test semi-restricted Boltzmann machines and other network architectures. We chose the restricted case in order to perform the inference classically, and as an initial stepping stone to network architectures of higher complexity. Given that we demonstrated a certain robustness of our training algorithm under levels of noise comparable to those of near-term devices \cite{ 2015arXiv151004375G,reagor2018demonstration}, it is quite feasible that this algorithm will be implemented in the near future. An immediate extension to this work preceding implementation could be to test robustness under different types of gate and measurement noise models (beyond depolarizing) which better reflect the observed noise of an implementation of interest. \section{Acknowledgements} The authors would like to thank the Creative Destruction Lab for hosting the Quantum Machine Learning program, where the idea for this paper was conceived. All circuits in this paper were simulated on the Rigetti Forest API Quantum Virtual Machine and written in PyQuil \cite{2016arXiv160803355S}. We thank Will Zeng and the Rigetti Computing team for providing technical support and computing facilities to run the quantum circuit simulations. We also thank Mohammad Amin, Jason Pye, Nathan Wiebe, Peter Johnson, Jonathan Romero, and Jonathan Olson for useful discussions. GV acknowledges financial support from NSERC. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research and Innovation.
{ "timestamp": "2019-08-13T02:07:05", "yymm": "1712", "arxiv_id": "1712.05304", "language": "en", "url": "https://arxiv.org/abs/1712.05304" }
\section{Introduction} Markov Chain Monte Carlo (MCMC), based on the Metropolis-Hastings class of algorithms \citep{metropolis_etal53,hastings70}, has enjoyed great success in a wide range of fields, from astrophysics and physics \citep[see, e.g.,][for a review]{sharma17} to biology, medicine, and statistics \citep[e.g.,][]{brooks_etal11}. The most common application of the MCMC approach is sampling an $n$-dimensional probability distribution function (pdf), $\pi(x)$, and computing various related statistics, such as the average of a function $f(x)$ with respect to $\pi(x)$: \begin{equation} \langle f \rangle_\pi := \int f(x) \pi(x) \,{\mathrm d}x \label{eqn::avgf} \end{equation} where $x\in\,{\mathbb R}^n$. MCMC's many successes notwithstanding, the computational cost of evaluating $\pi(x)$, the slow convergence of the MCMC estimate, and the high dimensionality of $x$ often make evaluation of $\langle f\rangle_\pi$ a computationally challenging problem. This is particularly true when the low-probability tails of the distribution contribute significantly to the integral of eq. \ref{eqn::avgf} and when $\pi(x)$ is multi-modal, as is often the case in astrophysical applications in multi-dimensional spaces \citep[e.g.,][]{farr_etal14}. For example, in sampling a Bayesian posterior in a statistical analysis, one may be interested in a robust determination of credible regions up to high levels (e.g., $99.99\%$) to evaluate the level of discrepancy with a theoretical prediction or with another measurement. One technique in widespread use in computational chemistry is umbrella sampling (US) -- a variant of the importance sampling approach originally proposed by \citet{torrie_valleau77}. The method has proved crucial in many chemical problems where traditional sampling methods are unable to provide insight \citep[see e.g.,][]{boczko1995first,berneche2001energetics}. In the US algorithm, the sampling of $\pi$ is split or {\it stratified} into several easier sampling problems (see Figure \ref{fig::usfig}). Specifically, a sequence of overlapping window functions, or {\it umbrellas} $\psi_i(x)$, is introduced, and the algorithm samples the corresponding distributions $\pi_i(x)\propto \psi_i(x)\pi(x)$. Selecting windows in low-probability regions of the posterior, and thereby confining samples of $\pi_i$ to these regions, allows one to achieve much more efficient coverage of outlying areas of the posterior and ensures discovery of widely separated peaks in multi-modal distributions. In this sense, the umbrella sampling approach belongs to a large class of MCMC methods that are designed to sample the parameter space more uniformly, such as parallel tempering \citep[see, e.g.,][and references therein]{earl_deem05} and parallel MCMC \citep{vanderwerken_schmidler13,basse_etal16}. \begin{figure} \begin{center} \includegraphics[width=.45\textwidth]{./figs/twopanelv2.pdf} \caption{Umbrella sampling splits the difficult problem of sampling $\pi$ (black curve) into smaller, simpler subproblems $\pi_i$ (colored curves, bottom) by introducing biasing functions $\psi_i$ (colored curves, top) so that $\pi_i \propto \pi\psi_i$. The $\pi_i$ distributions can be sampled independently, with $\pi$ recovered as a weighted sum of samples. \label{fig::usfig} } \end{center} \end{figure} The idea of umbrella sampling is simple enough, but its successful application relies upon a robust way to combine the samples in different umbrellas into a set of samples of $\pi(x)$ \citep[see][and references therein]{dinner_etal17}.
In particular, an efficient iterative method for computing the relative weights of the samples -- the Eigenvalue Method for Umbrella Sampling (EMUS) -- has been developed \citep{thiede_etal16,dinner_etal17}. This re-weighting is cheap and does not require extra evaluations of $\pi$. Moreover, it does not interfere with the sampling of the individual $\pi_i(x)$ distributions. As a result, the sampling of the umbrella distributions $\pi_i(x)$ is independent of the re-weighting procedure and can be done with any MCMC sampling method that is deemed sufficiently effective for the task. In this paper, we present the EMUS method and its public implementation in a python package. The current version of the package uses a parallel implementation of the affine-invariant ensemble sampling algorithm of \citet[][hereafter GW10]{goodman_weare10} in the {\tt emcee} code\footnote{\href{http://dfm.io/emcee/current/}{http://dfm.io/emcee/current/}} \citep{foreman_mackey_etal13} to sample target distributions $\pi_i(x)$. However, the US framework is general and other samplers can be used instead of {\tt emcee}. Given that sampling in different umbrellas can be done in parallel, US combined with {\tt emcee} allows us to exploit parallelism on two levels: both while sampling within individual windows and in sampling different windows independently, with occasional replica exchange communications. The GW10 algorithm and its implementation in {\tt emcee} are themselves quite efficient in sampling degenerate distributions and traversing relatively high-probability valleys between peaks in multi-modal distributions. Umbrella sampling is designed to make sampling of low-probability areas much easier, and thus its combination with a sampler such as {\tt emcee} not only allows for efficient sampling of the low-probability tails of a distribution, but also for efficient traversal of the low-probability valleys between peaks in the distribution. This is illustrated using a distribution with two peaks in Figure \ref{fig::usfig}. In a standard scheme, samples would visit the low-probability valley between the peaks extremely rarely and thus discovery of the peaks or mixing of samples between them would be difficult. Umbrella windows placed between the peaks, on the other hand, restrict samples to these low-probability valleys, ensuring that many samples are available for mixing through the low-probability region. Clearly, the approach is most effective when we have some knowledge about the low-probability regions of the target distribution, as umbrellas can be designed specifically to sample these regions efficiently. Such information is often available either from prior knowledge or from exploratory MCMC runs. We also show that umbrella sampling can be applied efficiently even in cases when no such prior information is available. In this case umbrella windows can be chosen so that the $\pi_i$ are tempered distributions, defined as in the parallel tempering method. US provides a mechanism by which samples from the high temperature distributions (in low probability regions) can be incorporated into more accurate estimates of tail probabilities. The layout of the paper is as follows. In Section \ref{sec::emus} we describe the umbrella sampling method and the EMUS algorithm that re-weights samples to reconstruct the full target distribution (Section \ref{sec::weights}). Sections \ref{sec::cvbias} and \ref{sec::tempbias} detail two specific options for the windows $\psi_i$ for generic applications of umbrella sampling.
Section \ref{sec::evidence} explains how we can use umbrella sampling for evidence calculation, while Section \ref{sec::parallel} describes how the algorithm is parallelized. Section \ref{sec::examples} presents two numerical examples demonstrating the utility of the umbrella sampling approach. We finish with discussion and conclusions in Section \ref{sec::conclusions}. \section{Umbrella sampling method} \label{sec::emus} In the umbrella sampling method the target pdf $\pi(x)$ is reconstructed from $L$ individual distributions $\pi_i(x)$, sampled independently, where \begin{equation} \pi_i(x) := \frac{1}{z_i} \psi_i(x)\pi(x), \label{eq:umbf} \end{equation} with umbrella window functions $\psi_i(x)$, often called the biasing functions or ``umbrellas'', and normalization constants $z_i$ defined by the condition \begin{equation*} z_i := \int \psi_i(x)\pi(x)\,{\mathrm d}x \end{equation*} ensuring that $\pi_i$ is a properly normalized pdf ($\int\pi_i \,{\mathrm d}x=1$), even if the normalization of $\pi(x)$ is unknown. The calculation of these normalization constants, $z_i$, is a key part of the algorithm. Before we discuss their practical calculation, we consider possible strategies for choosing the windows $\psi_i$. The optimal strategy is problem specific, but some general strategies will nevertheless give a robust improvement in most cases. The simplest case is when we know a variable $x_k$ in the vector $x$ along which it is important to sample the low probability regions of $\pi(x)$. The windows can then be defined along the $x_k$ direction: $\psi_i(x)=\psi_i(x_k)$. Similarly, we can define a windowing along two or more directions: $\psi_{i,j}(x)=\psi_i(x_k)\psi_j(x_l)$. However, such a splitting is akin to gridding and the curse of dimensionality usually makes it impractical for more than three dimensions. In practice, many posteriors are difficult to sample along directions that correspond to a function of one or more $x$ components: $\sigma(x_k,\ldots)$. We shall refer to such a function as a collective variable (CV). For example, if the posterior contains a degeneracy ridge, the difficult-to-sample direction is often perpendicular to the ridge, and the projection onto this direction may be a useful choice of CV, with $\psi_i = \psi_i(\sigma)$. In the case where no intuition exists for a choice of CV, it is possible to construct an efficient method by considering $\pi_i(x)$ to be tempered distributions (see Section \ref{sec::tempbias}), similar to those used in the parallel tempering approach \citep[e.g.,][and references therein]{earl_deem05}. In the following subsections we describe in detail the method to estimate the relative normalization constants, or \emph{weights} $z_i$, based upon previous work by \citet[][]{vardi1985empirical,meng1996simulating,shirts2008statistically}. We also give specific illustrations where the collective variable and $\log\pi$ stratification strategies are employed to sample distributions. A more rigorous exposition of the approach, including error analysis and proofs of its consistency, is presented in \citet{thiede_etal16} and \citet{dinner_etal17}. \subsection{Calculation of window weights} \label{sec::weights} We seek to evaluate $\langle f \rangle_\pi$: the average of a function $f(x)$ over some pdf $\pi(x)$ defined in Equation \ref{eqn::avgf}. The goal of umbrella sampling is to recast this equation as a weighted sum over the $L$ umbrella distributions.
For a set of $L$ umbrella windows, $\{\psi_i(x)\}$, which combined cover the entire region of space $x$ relevant for evaluation of the target integral, we can construct MCMC samples from the biased distributions $\pi_i(x)=\psi_i(x)\pi(x)$, with normalizations \begin{equation} z_i = \int \psi_i(x)\pi(x)\, dx = \langle \psi_i\rangle_\pi. \label{eq:zi} \end{equation} From this definition of $z_i$, we can rewrite Equation \ref{eqn::avgf} in a different form: \begin{eqnarray} \langle f \rangle_\pi &=& \int f(x) \pi(x) \, \,{\mathrm d}x\nonumber\\ &=& \int f(x)\, \frac{\sum_{i=1}^{L} \psi_i(x) / z_i }{\sum_{j=1}^L \psi_j(x) / z_j}\,\pi(x) \, \,{\mathrm d}x\nonumber\\ &=& \sum\limits_{i=1}^L \int\,\frac{f(x)}{\sum_{j=1}^L \psi_j(x) / z_j}\,\frac{\psi_i(x)}{z_i}\,\pi(x) \,\,{\mathrm d}x \nonumber\\ &=& \sum\limits_{i=1}^L \int\,\frac{f(x)}{\sum_{j=1}^L \psi_j(x) / z_j}\,\pi_i(x) \,\,{\mathrm d}x\nonumber\\ &=& \sum\limits_{i=1}^L\left\langle \frac{f(x)}{\sum_{j=1}^L \psi_j(x) / z_j} \right\rangle_{\pi_i}. \label{eq:us} \end{eqnarray} Here $\langle\cdot\rangle_\pi$ and $\langle\cdot\rangle_{\pi_i}$ denote averages with respect to the distributions $\pi$ and $\pi_i$, respectively. Thus the average of $f$ over $\pi$ can be computed as the sum of averages of $f(x)/[\sum_{j=1}^{L} \psi_j(x) / z_j]$ over the windows $\pi_i$. The sum can be computed by sampling the $\pi_i$ distributions to evaluate each of the terms. Though we do not need to know the normalizations $\{z_i\}$ in order to sample, they will be needed in the end to define the relative weights of each umbrella sample. These can then be used to reconstruct the target pdf $\pi(x)$. What remains is to determine the weights $\{z_i\}$ for a given set of windows $\{\psi_i\}$ and target pdf $\pi(x)$. To do this, let us define a matrix $[F_{ij}]$ with elements \begin{equation} F_{ij} = \left\langle \frac{\psi_j / z_i }{\sum_{k=1}^{L} \psi_k / z_k} \right\rangle_{\pi_i}. \label{eq:Fij} \end{equation} Thus, evaluation of $F_{ij}$ involves averaging the normalized window functions over the MCMC samples of the $\pi_i$ distributions. It is clear that a particular entry $F_{ij}$ will be zero if there is no overlap between the samples of $\pi_i$ and the support of the window $\psi_j$. We therefore call $F$ the {\it overlap matrix.} The product of the vector $z=[z_1, z_2,\dots,z_{L}]$ and the $j^\textrm{th}$ column of $F$ will then be \begin{equation} \sum\limits_{i=1}^{L}z_i\,F_{ij} = \sum\limits_{i=1}^{L} \left\langle \frac{\psi_j }{\sum_{k=1}^{L} \psi_k / z_k} \right\rangle_{\pi_i} = \langle \psi_j\rangle_{\pi} = z_j, \label{eq:zF} \end{equation} which follows from Equation \ref{eq:us} with $f=\psi_j$, together with the definition of $z_j$ in Equation \ref{eq:zi}. Considering all columns of $F$ in Equation \ref{eq:zF} gives the left eigenvalue problem \begin{equation} z F(z) = z, \label{eq::zF2} \end{equation} the solution of which is the required vector $z$ of normalization constants. Existence of the solution is guaranteed by the Perron-Frobenius theorem if the matrix $F$ is irreducible, i.e. it cannot be transformed into block upper-triangular form by row and column permutations -- a requirement that is satisfied if the umbrella windows overlap \citep[see Section 2 in][for proof]{dinner_etal17}. The solution of Equation \ref{eq::zF2} can be obtained using a fixed point iteration in $z$, and does not require extra sampling of the distribution. As we only need to solve for the $z$ values once, this is an inexpensive additional computation compared to the sampling of the $\pi_i$ distributions.
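To make the procedure concrete, the following minimal Python sketch implements the fixed-point iteration for Equation \ref{eq::zF2}. The function name and data layout are illustrative assumptions, not the interface of any particular package; production implementations such as the EMUS package add convergence checks and error analysis.

\begin{verbatim}
import numpy as np

def emus_weights(psis, n_iter=100):
    # psis[i] is an (N_i, L) array whose entry [n, j] holds psi_j
    # evaluated at the n-th MCMC sample drawn from pi_i.
    L = len(psis)
    z = np.ones(L) / L                   # any positive initial guess
    for _ in range(n_iter):
        F = np.zeros((L, L))
        for i in range(L):
            # denominator sum_k psi_k(x)/z_k for each sample of pi_i
            denom = psis[i].dot(1.0 / z)
            # F_ij = < (psi_j / z_i) / denom >_{pi_i}   (overlap matrix)
            F[i] = (psis[i] / denom[:, None]).mean(axis=0) / z[i]
        z = z.dot(F)                     # left fixed-point update z <- z F(z)
        z /= z.sum()                     # z is defined only up to a constant
    return z
\end{verbatim}

In practice one would monitor the change in $z$ between iterations rather than fixing the iteration count, but the cost of this post-processing step is negligible compared to the sampling itself.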
Note again that for $F$ to be non-degenerate, a sufficient fraction of the windows must overlap. Note also that we only need to obtain the $z_i$ values up to a constant multiple, as if \[ \widehat{z_i} = \alpha z_i \] for some $\alpha>0$ independent of $i$, then \[ \sum_{i=1}^L \left< \frac{f}{\sum_{j=1}^L \psi_j / \widehat{z_j}} \right>_{\pi_i} = \alpha \langle f \rangle_\pi, \] and $\alpha$ can be evaluated by computing the above with $f(x)\equiv1$. Note that for $L=1$ this is equivalent to importance sampling using a biasing function $\psi_1(x)$. Thus, importance sampling can be viewed as a specific case of umbrella sampling. \subsection{Replica exchange} \label{sec::repex} Replica exchange is a technique often used to enhance the rate of exploration in umbrella sampling. In this method multiple copies of the simulation, known as \emph{replicas} or \emph{walkers}, are used to sample the distributions $\pi_i$, with periodic exchanges of walkers between windows to promote faster mixing. Every $K$ steps, we choose a random walker $w_i$ in window $i$ and walker $w_j$ in window $j$, and swap their positions with a probability that leaves the overall target distribution intact. Specifically, if the position of walker $w_i$ is $x_i$ and that of walker $w_j$ is $x_j$, then the probability of accepting the swap is \begin{align*} P(\textrm{accept swap}) &= \min\left(1, \frac{\pi_i(x_j) \pi_j(x_i) }{\pi_i(x_i) \pi_j(x_j) } \right)\\ &= \min\left(1, \frac{\psi_i(x_j) \psi_j(x_i) }{\psi_i(x_i) \psi_j(x_j) } \right) \end{align*} by virtue of $\pi_i\propto\pi\psi_i$. Accepting the swap is equivalent to reassigning which window the walkers belong to, and as we only need to evaluate the bias functions, this process amounts to book-keeping and has negligible computational cost. Exchanging walkers is most likely to succeed where the overlap between the biasing distributions is greatest, so typically we consider only swapping between adjacent windows with $j=i+1$. In the worst case replica exchange should not harm the progress of sampling the $\pi_i$, whereas for many choices of biasing function it becomes critically important. For example, in the case of multi-modal distributions with peaks separated by very low-probability areas, replica exchange can greatly increase the efficacy of the sampling. \subsection{Collective variable stratification} \label{sec::cvbias} When we know the direction in the parameter space in which we want to stratify the target pdf, we can define that direction as a collective variable. CV stratification allows one to make use of prior intuition about the structure of the likelihood, or to focus sampling in one area of interest. For example, if the posterior is expected to have peaks, the collective variable can be defined along the direction connecting the peaks. In the real-world example we will discuss below in Section \ref{sec::OmOl}, we are interested in estimating the probability that the universe is decelerating given type Ia supernova observational data. Given that we know the region in the parameter space corresponding to a decelerating universe, we can define umbrella windows so as to maximize the efficiency and accuracy of sampling this region. Without loss of generality, the CV direction can be defined as a function $\sigma(x)\in [0,1]$. For $L$ umbrella distributions, we define an increasing sequence of centers $c_i \in [0,1]$ where \[ 0\leq c_1 < c_2 < \ldots < c_{L} \leq 1. \]
The biasing functions are then designed to restrain points around the associated center in CV-space. A common choice is to use Gaussians as the bias functions: \[ \psi_i(x) = \exp\left\{-\frac{\kappa_i^2}{2}[\sigma(x) - c_i ]^2 \right\}. \] The parameter $\kappa_i$ defines the strength of the restraint towards the target CV value. If $\kappa_i$ is chosen too small the restraint will be ineffective, but if it is chosen too large there will be poor overlap between windows and thus a possibility of a degenerate $F$ matrix (see \S \ref{sec::weights}). A good balance in experiments is to choose the adjacent windows to be two standard deviations away, so \[ \kappa_i = 2 / \max( c_i-c_{i-1} , c_{i+1}-c_i ), \] where $c_0=0$ and $c_{L+1}=1$. An alternative is to use tent bias functions (sometimes called chapeau functions) defined as \[ \psi_i(x) = \left\{\begin{array}{lc} 1-|\sigma(x) - c_i|/ l_i & \mathrm{if } \,\, |\sigma(x) - c_i| \leq l_i \\ 0 & \mathrm{otherwise} \end{array}\right. \tag{tent} \] where the parameter $l_i>0$ defines the width of the bias' support, with a reasonable choice being $l_i = 2 / \kappa_i$. This gives a sawtooth-like family of bias functions with a steeper log-bias than the harmonic umbrella close to the edges. Tent biases have compact support, which may be beneficial where the log likelihood is particularly steep and would otherwise prevent effective stratification. However, one disadvantage is that the initialization is more difficult without knowing $\sigma^{-1}$, as sample points need to be started inside the support of the tent. Stratification along a collective variable coordinate can give more accurate information about, e.g., the height of a barrier by concentrating samples in particular regions of space. However, the hidden degrees of freedom (i.e. the space orthogonal to $\sigma$) can stymie the progress of the sampling if the CV is chosen poorly. For example, consider a planar ring-shaped likelihood distribution, with small peaks and troughs around the ring. Stratifying in the $x$ or $y$ direction would define a multimodal $\pi_i$, making sampling more difficult. A more sensible CV would be the polar angle $\mathrm{atan2}(y,x)$, which would break the ring into small arcs. \subsection{Temperature stratification} \label{sec::tempbias} If an obvious collective variable choice is not readily available, we can define the biasing functions $\psi_i$ similarly to the modified posteriors in the parallel tempering approach \citep[see][for a review]{earl_deem05}. Namely, for a series of $L$ temperatures $T_i$ \[ 1\leq T_1 < T_2 < \ldots < T_L \] the sequence of biasing window functions is defined as \begin{equation} \psi_i(x) = \exp[(1/T_i-1) \log\pi(x)], \label{eq:psi_temp} \end{equation} ensuring that \begin{equation} \pi_i(x)\propto\pi(x)\psi_i(x)=\exp\left[\frac{1}{T_i}\log\pi(x)\right]=\pi(x)^{1/T_i}, \end{equation} as expected in the parallel tempering approach. Higher temperatures effectively flatten the distribution $\pi(x)$, allowing for exploration of wider ranges of parameters. This can be further improved by incorporating replica exchange into umbrella sampling simulations (see Section \ref{sec::repex} for details). A balance must be struck between the range and number of temperatures that are used. Typically we follow the advice from the parallel tempering literature and choose temperatures that are spaced exponentially ($T_k=\exp(\lambda k)$) to give roughly equal exchange probabilities between windows.
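In code, the tempering bias of Equation \ref{eq:psi_temp} and the replica-exchange acceptance test of Section \ref{sec::repex} reduce to a few lines. The sketch below is illustrative (the function names are ours, not those of any package) and works with $\log\pi$ throughout for numerical stability; the ladder shown matches the sixteen-window schedule used in Section \ref{sec::OmOl}.

\begin{verbatim}
import numpy as np

# exponentially spaced temperature ladder, e.g. L = 16 windows up to T = 50
L = 16
temps = 50.0 ** (np.arange(L) / (L - 1.0))

def log_psi(log_pi_x, T):
    # log of the tempering bias psi_i(x) = exp[(1/T - 1) log pi(x)]
    return (1.0 / T - 1.0) * log_pi_x

def swap_log_prob(log_pi_xi, log_pi_xj, Ti, Tj):
    # log acceptance probability for exchanging walkers at positions
    # x_i and x_j between adjacent windows i and j; the ratio of bias
    # functions collapses to a single difference of log pi values
    return min(0.0, (1.0 / Ti - 1.0 / Tj) * (log_pi_xj - log_pi_xi))
\end{verbatim}

Because only cached values of $\log\pi$ enter the swap test, no extra likelihood evaluations are needed, consistent with the book-keeping argument above.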
Note that although the sampling procedure in this case is identical to the traditional parallel tempering simulation (with $T_1=1$), the re-weighting scheme described in \S \ref{sec::weights} allows us to use samples from all $L$ windows rather than just the $T_1$ window, as in the standard parallel tempering approach. Thus, the umbrella sampling approach greatly enhances the efficiency of the parallel tempering method. This is particularly critical for applications where evaluations of the likelihood are expensive, as is often the case in cosmology. Note also that the US re-weighting scheme differs from the na\"ive importance sampling re-weighting, in which samples from different temperatures are combined with the weights $\pi/\pi^{1/T_i}$. The latter is highly inaccurate, as demonstrated in Section \ref{sec::smile} (see Figure \ref{fig::smileres}). \subsection{Estimating evidence using US MCMC samples} \label{sec::evidence} In Bayesian model comparisons, one needs to evaluate the evidence, or marginal likelihood -- the integral of the posterior over the entire parameter space. Although a number of approaches to estimating the evidence have been explored \citep[see][for a review]{friel_wyse12}, it is particularly convenient to use the MCMC samples themselves to estimate the evidence. However, the MCMC samples in standard MCMC sampling algorithms are biased towards the high-probability regions by construction. For ``diffuse'' prior distributions the contribution of low-probability areas to the evidence integral can be large \citep[e.g.,][]{efstathiou08,trotta17,cousins2017}. US sampling of the low-probability areas can improve the accuracy of the evidence estimates in such cases. For example, estimation of the evidence using samples from multiple biased distributions has been considered by \citet{geyer94}. The marginal likelihood can be estimated from the MCMC samples within umbrella windows and their weights using the estimator of \citet[][see their eq. 23; see also \S 2.2 in \citealt{robert_wraith09}]{gelfand_dey94}. Namely, if $q(x)$ is a normalized pdf with the dimensionality of the posterior, and ${\tilde{\pi}}(x)$ is an unnormalized posterior with normalization constant (the evidence) $Z_\pi = \int{\tilde{\pi}}(x)\,{\mathrm d}x$, so that the normalized posterior is ${\pi}={\tilde{\pi}}/Z_\pi$, we can write: \begin{eqnarray} \int q(x)\,\,{\mathrm d}x &=& \int \frac{q(x)}{{\tilde{\pi}}(x)}{\tilde{\pi}}(x)\,{\mathrm d}x = Z_\pi \int \frac{q(x)}{{\tilde{\pi}}(x)}{\pi(x)}\,\,{\mathrm d}x\nonumber\\ & =&Z_\pi \left\langle \frac{q}{{\tilde{\pi}}}\right\rangle_\pi,\ \ \ \mathrm{so}\ \ \ Z_\pi = \left\langle \frac{q}{{\tilde{\pi}}}\right\rangle^{-1}_\pi. \end{eqnarray} In the context of the umbrella sampling approach presented in this paper (see Equation \ref{eq:us}), the evidence $Z_\pi$ can then be estimated as \begin{eqnarray} Z_\pi = \left\langle \frac{q}{{\tilde{\pi}}}\right\rangle^{-1}_\pi=\left[\sum\limits_{i=1}^{L}\left\langle \frac{q(x)/{\tilde{\pi}}(x)}{\sum_{j=1}^{L} \psi_j(x) / z_j} \right\rangle_{\!\!\pi_i\,\,}\right]^{-1}. \end{eqnarray} For unimodal distributions, a good choice for $q(x)$ is a multivariate Gaussian distribution with a covariance matrix similar to that of $\pi$. In the case of multi-modal distributions one can adopt a Gaussian mixture approximation to $\pi(x)$ as a suitable $q(x)$. The accuracy of this evidence estimator was discussed in \citet[][see their section 2]{robert_wraith09}, who demonstrated that it is competitive in accuracy with other estimators.
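A sketch of this estimator, with the same assumed data layout as in the earlier sketch, is given below. Dividing by the $f(x)\equiv1$ estimate makes the result insensitive to the overall scaling of the weights $z$, as discussed in Section \ref{sec::weights}.

\begin{verbatim}
import numpy as np

def evidence_estimate(log_q, log_pi_tilde, psis, z):
    # log_q[i], log_pi_tilde[i]: per-window arrays of log q(x) and of
    # the unnormalized log posterior; psis[i]: (N_i, L) window values;
    # z: window weights from the EMUS iteration
    num, norm = 0.0, 0.0
    for i in range(len(psis)):
        denom = psis[i].dot(1.0 / z)      # sum_j psi_j(x)/z_j per sample
        num += np.mean(np.exp(log_q[i] - log_pi_tilde[i]) / denom)
        norm += np.mean(1.0 / denom)      # the f = 1 normalization
    return norm / num                     # Z = <q/pi~>_pi^{-1}
\end{verbatim}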
Alternatively, in the case of the temperature stratification (Section \ref{sec::tempbias}), one can use umbrella sampling and the thermodynamic integration method to estimate the evidence, as in the parallel tempering approach \citep{gelman_meng98,neal00}. \subsection{Parallelization} \label{sec::parallel} Given that the sampling in each umbrella window is independent (up to periodic replica exchanges, which require communication), we can exploit this independence to run in parallel on distributed systems. In principle, the sampling within an umbrella can be achieved using any suitable sampler. Many modern sampling methods, such as the GW10 method implemented in the {\tt emcee} code, sample multiple chains (walkers) in parallel. Thus, with umbrella sampling, parallelism both within umbrella windows and between them can be exploited. In the umbrella sampling python package we use in this article, parallel execution and communications are organized using a Message Passing Interface (MPI) library, as implemented in the {\tt mpi4py} python package. The available $C$ computing cores are split into $L$ MPI communicator groups, one for each umbrella window, with $\lfloor C/L\rfloor$ cores in each group. The umbrella windows are sampled simultaneously in parallel, with a parallel sampler within each window making use of the $\lfloor C/L\rfloor$ cores within the window MPI communicator. The method itself is able to utilize all cores efficiently, with no additional computational costs when sampling, compared to conventional parallel samplers. The extra work involved in the umbrella sampling method comes at the end of the simulations and requires no additional likelihood evaluations. \section{Examples of umbrella sampling} \label{sec::examples} We illustrate the umbrella sampling method presented above using two example problems: 1) sampling of a difficult-to-sample, multi-modal synthetic pdf -- the Rosenbrock pdf with two distant Gaussian peaks (Section \ref{sec::smile}) and 2) sampling of the real-world posterior distribution of the mean matter and vacuum energy densities given the existing constraints from type Ia supernova measurements (Section \ref{sec::OmOl}). The umbrella sampling code used in the first example can be found in the {\tt usample} python package.\footnote{Available at \href{https://github.com/c-matthews/usample}{https://github.com/c-matthews/usample}} The second test was run using {\tt usample} within the CosmoSIS cosmological analysis package \citep{zuntz_etal15}.\footnote{The umbrella sampler will be included in the next CosmoSIS version release.} \subsection{Sampling the ``smiley'' pdf} \label{sec::smile} \begin{figure} \begin{center} \includegraphics[width=.45\textwidth]{./figs/smilepdfv3.pdf} \caption{ (\emph{Left}) The marginal posterior density is plotted in $x$ and $y$. Contours are drawn for the logarithm of the density, with the first contour at $-5$ and spaced every 25 units. (\emph{Right}) The marginal distribution is plotted in the $x$ direction. \label{fig::smilepdf} } \end{center} \end{figure} We compare results for a four-dimensional toy problem, with two variables of interest (denoted $x$ and $y$) distributed in a smiley face shape, and two nuisance variables (denoted $u_1$ and $u_2$) that are distributed with independent Gaussian pdfs.
The overall posterior density is \begin{equation} \pi(x,y,u_1,u_2) \propto \pi_\textrm{smile}(x,y) \times \exp(-u_1^2/2 - u_2^2/2), \label{eq::smilepdf1} \end{equation} where $\pi_\textrm{smile}$ is \begin{align} \pi_\textrm{smile}(x,y) \propto & \exp(-8(x-2)^2-8(y-3)^2)\nonumber\\ &+ \exp(-8(x+2)^2-8(y-3)^2) \label{eq::smilepdf2}\\ &+ \exp(-10( y+3.5 - x^2/4)^2 - x^4/100 ).\nonumber \end{align} The addition of the isolated distant Gaussian peaks (the eyes) to the Rosenbrock density makes the distribution multi-modal, with standard MCMC methods requiring a large number of samples to fully converge. For example, \citet{goodman_weare10} show that convergence on the 2D Rosenbrock pdf alone requires of order $10^9$ MCMC samples, even with their affine-invariant algorithm, which has a relatively short autocorrelation length. To test the efficiency of sampling for such a pdf, we will compare the performance of different sampling methods in producing an accurate log-marginal curve in the $x$ direction, plotted in Figure \ref{fig::smilepdf}. However, even though we have good prior knowledge of the properties of $\pi_\textrm{smile}$ in this case, it is still difficult to choose an optimal collective variable $\sigma$ for this pdf along which the sampling could be stratified effectively. For example, the choices of $\sigma(x,y)=x$, $\sigma(x,y)=y$, or any simple combination of $x$ and $y$ do not remove multi-modality from the sampling in individual windows. A good choice for this problem is thus to use umbrella sampling with stratification in the temperature, as discussed in Section \ref{sec::tempbias}. We show results for four windows with the temperature schedule $T_i\in\{1,10,100,1000\}$ and replica exchange between windows every 100 steps. With this choice the replica exchange probabilities during the run were $\approx 15\%$. We use the {\tt emcee} package implementation of the \citet{goodman_weare10} algorithm with 16 walkers to sample the pdf within each window. For performance comparison purposes we have run the algorithm for a fixed 400,000 steps for each walker. It is most natural to compare results of umbrella sampling in this test to sampling using the parallel tempering (PT) approach with the same parameters. In PT only samples from $T=1$ are used in the estimate of the average, but replica exchange through the higher temperatures does allow exploration of the parameter space and discovery of isolated peaks in the pdf better than simple MCMC sampling. In this context US uses the same trajectory data as a PT run, but offers a new way of combining the data from all temperatures to give a more accurate estimate. It does not speed up sampling of the $\pi_i$ tempered distributions, but offers a more efficient post-processing of the data compared to PT. We also compare to the case when samples from all temperatures are used, but the results are combined using a na\"{\i}ve re-weighting of samples with weights $\pi(x)/\pi^{1/T_i}(x)$, where $x$ are samples from $\pi^{1/T_i}(x)$, rather than the US weighting scheme described in Section \ref{sec::weights}. This na\"ive weighting is analogous to the weighting often used in the importance sampling approach. Additionally, we compare against four independent runs of the {\tt emcee} sampler without parallel tempering, with the total number of samples equal to that of the parallel tempering runs. All of the runs we compare thus have the same number of likelihood evaluations and hence comparable computational cost.
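For concreteness, the unnormalized log posterior of Equations \ref{eq::smilepdf1}--\ref{eq::smilepdf2}, as it might be passed to {\tt emcee} within each umbrella window, can be written directly (a sketch; the function name is ours and not part of {\tt usample} or {\tt emcee}):

\begin{verbatim}
import numpy as np

def log_pi_smiley(p):
    # p = [x, y, u1, u2]; returns the unnormalized log posterior
    x, y, u1, u2 = p
    smile = (np.exp(-8.0 * (x - 2.0)**2 - 8.0 * (y - 3.0)**2)    # right eye
             + np.exp(-8.0 * (x + 2.0)**2 - 8.0 * (y - 3.0)**2)  # left eye
             + np.exp(-10.0 * (y + 3.5 - x**2 / 4.0)**2
                      - x**4 / 100.0))                           # mouth
    return np.log(smile) - 0.5 * (u1**2 + u2**2)
\end{verbatim}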
We compute the absolute error in the marginalized posterior $\pi(x)$ at a given value of $x$ by binning the samples in the interval $[0,6.5]$ into 150 equal-sized bins. For each scheme, we obtain the final posterior from ten independent runs and use these runs to estimate the error of the average $\pi(x)$. The 2D histogram in Figure \ref{fig::smileres} shows the $\log_{10}$ of the absolute error in the log-marginal posterior $\pi(x)$ relative to the true value as a function of $x$ and the normalized wall-clock time. For simplicity, we plot the results only for positive $x$, as the results are symmetric around $x=0$. \begin{figure} \begin{center} \includegraphics[width=.45\textwidth]{./figs/newboxes.pdf} \caption{The absolute error in the log-marginal distribution in the $x$ direction (plotted only for $x>0$), as a function of normalized wall-clock time for four sampling methods: ({\emph {clockwise}} from the top left panel) the {\tt emcee} package, parallel tempering, parallel tempering with the importance sampling re-weighting of all samples, and umbrella sampling. The color indicates the average absolute error from ten experiments, with white indicating that no samples were found at that value. \label{fig::smileres}} \end{center} \end{figure} The GW10 scheme implemented in the {\tt emcee} package is unable to resolve the basin at $x=2$ in the allotted time, and this gives a large error that diminishes extremely slowly. This error is also apparent in the other schemes, but disappears quickly due to replica exchange with the higher temperature simulations. The far tails of the distribution at ${x \in [5,6.5]}$ are poorly sampled in the {\tt emcee} and PT simulations. While in regular parallel tempering we do not recover any samples in this tail, re-weighting samples from all the temperatures does give some information in this region. However, this process greatly increases the variance of the result over the entire range of $x$, and this variance does not decrease with time. The umbrella sampling result gives the most efficient and accurate result for the entire range of $x$, even in the tail regions. \subsection{A real-world example: cosmological constraints using type Ia supernovae} \label{sec::OmOl} To illustrate the power of the umbrella sampling algorithm to accurately sample the tails of the marginal posterior distribution, we use cosmological constraints derived from type Ia supernovae observations. Specifically, we sample the marginal posterior of the mean dimensionless matter and vacuum energy densities, $\Omega_{\rm m}$ and $\Omega_\Lambda$, using the SDSS-II/SNLS3 Joint Light-curve Analysis (JLA) supernovae dataset \citep{betoule_etal14} and the associated {\tt JLA v3} likelihood. Type Ia supernovae are one of the key probes of the cosmological parameters governing the expansion of the universe \citep[see, e.g.,][for reviews]{frieman_etal08,freedman_madore10,goobar_leibundgut11} and played the main role in the discovery of the accelerating expansion of the universe \citep{riess_etal98,perlmutter_etal99}. The current supernovae samples, such as the JLA dataset, cover a wide range of redshifts and provide complementary constraints to those derived from the Cosmic Microwave Background and the Baryonic Acoustic Oscillations measurements \citep[e.g.,][]{eisenstein_etal99}. Recently, the significance of the evidence for acceleration from supernovae observations alone was questioned \citep{nielsen_etal16}.
Given that estimating such significance at the $\gtrsim 3{-}5\sigma$ level requires reliable sampling of the posterior tails, this problem is a good target for application of the US algorithm. In this section we show that the US approach allows us to make these estimates accurately and efficiently, even in the regions of the parameter space that contain a tiny fraction of the total integrated probability. We test two different parameterizations of windows in the umbrella sampling approach: temperature stratification only and a combination of the temperature stratification with a collective variable. The latter is possible in this case, because we are interested in estimating the probability that the universe is not accelerating, and we can define the collective variable in the direction perpendicular to the line separating accelerating and non-accelerating universes: $\Omega_{\rm m}/2 - \Omega_\Lambda=0$. When using only temperature windows, we use the following schedule of sixteen temperatures: $T_j=50^{(j-1)/15}$. We believe that this is a robust and flexible choice when little about the overall likelihood surface is known a priori, as it will give thorough sampling in all variables. When using the collective variable we also use sixteen windows, but each window uses one of four temperatures (with $T_i\in\{1,3.7,13.6,50\}$) as well as one of four collective variable centers (with $c_j\in\{0,1/3,2/3,1\}$), divided so that each window has a unique pair $(T_i,c_j)$. The overall bias function for a window is then the product of the bias functions in the collective variable and in the temperature. We use a tent bias function with the collective variable \[ \sigma(x) = \max\{0,\min[1 , (x-p_1) \cdot (p_2-p_1) / \|p_2-p_1\|^2 ]\}, \] which gives the normalized distance of a point along the line connecting two anchor points $p_1$ and $p_2$, when the point $x$ is projected onto that line. The anchor points are chosen so that the line segment they define is perpendicular to, and intersects, the line separating accelerating and decelerating universes, $\Omega_{\rm m}/2 - \Omega_\Lambda=0$: \[ p_1 = (0.55,0.9,0,\ldots)^T,\quad p_2 = (0.85,0.3,0,\ldots)^T. \] This ensures that the level sets of $\sigma(x)$ define strips parallel to the line $\Omega_{\rm m}/2 - \Omega_\Lambda=0$. \begin{figure} \begin{center} \includegraphics[width=.4\textwidth]{./figs/triple2.pdf} \caption{The iso-density contours of the sampled marginal posterior distribution using the {\tt JLA v3} likelihood in the $(\Omega_{\rm m},\Omega_\Lambda)$ plane, marginalized over the 6-dimensional space in which sampling was done. The three panels show results of sampling using the \protect\citet{goodman_weare10} algorithm implemented in the {\tt emcee} code (top panel) and umbrella sampling with different choices of the umbrella partition (middle and bottom panels). In each case, the sampling was done using a similar amount of CPU time. The solid lines show the actual iso-significance contours of the posterior up to fifteen sigma levels, while the dotted lines give the corresponding contours for a Gaussian approximation of the posterior. A Gaussian filter was used to smooth the plotted solid contours, but no filter was applied to the underlying shaded surface. \label{fig::3panel}} \end{center} \end{figure} Using this collective variable $\sigma(x)$ means that some samples will be drawn from within the target region in the tail of the distribution.
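This projection CV is straightforward to implement; a minimal sketch (variable names are illustrative) acting on the first two coordinates, $(\Omega_{\rm m},\Omega_\Lambda)$, of the parameter vector is:

\begin{verbatim}
import numpy as np

p1 = np.array([0.55, 0.9])   # anchor points in the
p2 = np.array([0.85, 0.3])   # (Omega_m, Omega_Lambda) plane

def sigma(x):
    # normalized projection of x onto the segment p1 -> p2; level sets
    # are strips parallel to the line Omega_m/2 - Omega_Lambda = 0
    d = p2 - p1
    t = np.dot(x[:2] - p1, d) / np.dot(d, d)
    return np.clip(t, 0.0, 1.0)
\end{verbatim}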
Drawing samples directly in this region improves the efficiency and accuracy of the estimate of the probability that we are interested in: \begin{equation} p_{\rm dec} = \int \mathbf{1}_\mathrm{dec}(x) \pi(x)\,{\mathrm d}x, \label{eq::pdec1} \end{equation} where $x=\{\Omega_{\rm m},\Omega_\Lambda\}$, $\pi(x)$ is the posterior marginalized over all other parameters, and \begin{equation} \mathbf{1}_\mathrm{dec}(x) = \begin{cases} 1, & \mathrm{if } \, \, \Omega_{\rm m} > 2\,\Omega_\Lambda, \\ 0, & \mathrm{otherwise.} \end{cases} \label{eq::pdec2} \end{equation} By contrast, using pure temperature windows places no such constraint on the sample distribution. In this case samples in each window will explore the entirety of the parameter space. Figure \ref{fig::3panel} shows results of sampling the {\tt JLA v3} likelihood in the 6D parameter space using the \citet{goodman_weare10} algorithm implemented in the {\tt emcee} code \citep[][top panel]{foreman_mackey_etal13} and umbrella sampling with different choices of the umbrella partition (middle and bottom panels). Here we sample the {\tt JLA v3} likelihood using the CosmoSIS package \citep{zuntz_etal15}, to which we have added the US sampler. In each case, the sampling was done using the same number of likelihood evaluations, and thus a similar amount of CPU time. Calculations were carried out using two Intel E5-2670 16-core nodes with MPI communications between and within nodes using MPI pools, as described in Section \ref{sec::parallel}. For the {\tt emcee} package, we used 192 walkers with $10^5$ steps per walker. For the umbrella sampling, we used 16 windows with 32 walkers per window and $3.75\times10^4$ steps per walker. Thus, in both experiments $1.92\times10^7$ samples were generated. The runs using {\tt emcee} used all 32 available cores in parallel. The umbrella sampling runs used sixteen windows sampled in parallel, each with two cores that worked in parallel. This gave the umbrella sampling results the same parallel efficiency as the {\tt emcee} sampler. An MPI pool object was used to na{\"i}vely distribute the tasks evenly, without any load balancing. Figure \ref{fig::3panel} shows the successive sigma level contours (solid black lines). In the middle and bottom panels, which show results of the US method, the contours are shown up to $15\sigma$. A Gaussian filter was used to smooth the plotted contours, but no filter was applied to the underlying shaded surface showing the posterior distribution. The figure shows that for a given CPU time the GW10 algorithm samples the likelihood well only to the $\approx 3\sigma$ contour, while the US sampler samples it with nearly uniform accuracy to the $\approx 15\sigma$ level. The results obtained using umbrella sampling with only temperature show a higher variance than results using collective variables, as is evident from the spikier and less-defined contours at the furthest sigma levels. This is simply because umbrella windows stratified along the collective variable ensure that a fixed number of walkers sample the low-probability areas of the posterior in the $\Omega_{\rm m}-\Omega_\Lambda$ plane, while in the temperature windows walkers explore the entirety of the parameter space without any restraints. However, the contours do demonstrate good agreement even in the tails. In particular, the first three contours show good agreement between all of the methods.
This means that even US sampling with the more flexible temperature umbrellas is as accurate as the affine-invariant MCMC for the $\approx 3\sigma$ credible region, but is far more accurate in the lower-probability regions. Thus, the accuracy gain in these low-probability regions is obtained without significant loss of accuracy in the high-probability region. In Table \ref{tab::area_estimate} we compare estimates of $p_{\rm dec}$ computed using sampling with the estimate using the Gaussian approximation of the posterior. We can see that the value of $p_{\rm dec}$ estimated using the {\tt emcee} sampling is in good agreement with the estimates obtained using the US method, because we have allowed for chains of sufficient length to sample the $\approx 3\sigma$ region. Nevertheless, the US sampling using the collective variable achieves a factor of four smaller error for the same amount of work. Given that Monte Carlo estimates of such quantities carry $\mathcal{O}(N^{-1/2})$ error for $N$ samples, this means that {\tt emcee} would need to be run with 16 times more samples to achieve the same accuracy in this case. \begin{center} \begin{table*} \begin{tabular}{|l|c|c|} \hline Scheme & $p_{\rm dec}$ for $\Omega_{\rm m}>0$& $p_{\rm dec}$ for $\Omega_{\rm m}>0.2$ \\ \hline\hline Gaussian approximation & $7.3\times10^{-7}$ & $1.6\times10^{-10}$ \\ {\tt emcee} & $3.1\times10^{-6}\pm1.3\times10^{-7}$ & not available \\ Umbrella sampling (temperature) & $3.3\times10^{-6}\pm1.5\times10^{-7}$ & $5.4\times10^{-9}\pm1.7\times10^{-10}$ \\ Umbrella sampling (CV) & $3.2\times10^{-6}\pm3.0\times10^{-8}$ & $5.4\times10^{-9}\pm4.1\times10^{-11}$ \\ \hline \end{tabular}\caption{The estimated value of $p_{\rm dec}$ is computed from the mean of five runs of each scheme, with the standard error of the estimate. \label{tab::area_estimate} } \end{table*} \end{center} The faint dotted lines in Figure \ref{fig::3panel} correspond to the contours assuming a Gaussian approximation of the posterior. More precisely, we measured the covariance parameters using the {\tt emcee} samples taken within the second sigma contour. The sampled log likelihood surface was fit to a quadratic form using the Matlab \emph{fit} function, and the corresponding sigma levels were plotted. Although the initial agreement up to the third contour is good, it is clear that the Gaussian approximation fails to accurately describe the tails of the posterior. The sampled surface obtained from umbrella sampling shows that there exists a much ``fatter'' tail compared to the Gaussian. As a consequence, any estimate of the probability of a decelerating universe, $p_{\rm dec}$ (Equations \ref{eq::pdec1}--\ref{eq::pdec2}), using the Gaussian posterior assumption, as done by \citet{nielsen_etal16} for example, will be inaccurate. Indeed, the Gaussian approximation estimate shown in Table \ref{tab::area_estimate} underestimates the probability that we live in a decelerating universe by a factor of six. To be precise, \citet{nielsen_etal16} estimated $p_{\rm dec}$ using the $\chi^2$ approximation ($p_{\rm cov}$ in their Equation 10), which effectively assumes that the posterior is Gaussian, and using the likelihood profile rather than the marginalized posterior distribution. This is not equivalent to integrating the probabilities using the marginalized Gaussian posterior in the $\Omega_{\rm m}-\Omega_\Lambda$ plane.
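For reference, $p_{\rm dec}$ itself is a weighted Monte Carlo average of the indicator in Equation \ref{eq::pdec2}. A sketch using the pooled umbrella samples and EMUS weights (with the same assumed array layout as the earlier sketches) is:

\begin{verbatim}
import numpy as np

def p_dec_estimate(omega_m, omega_l, psis, z):
    # omega_m[i], omega_l[i]: per-window sample arrays; psis[i]:
    # (N_i, L) window-function values; z: EMUS window weights
    num, norm = 0.0, 0.0
    for i in range(len(psis)):
        w = 1.0 / psis[i].dot(1.0 / z)        # per-sample US weights
        dec = omega_m[i] > 2.0 * omega_l[i]   # deceleration indicator
        num += np.mean(w * dec)
        norm += np.mean(w)                    # f = 1 normalization
    return num / norm
\end{verbatim}

Because the ratio cancels the arbitrary scaling of $z$, no separate normalization of the weights is needed.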
\begin{figure} \begin{center} \includegraphics[width=.415\textwidth]{./figs/onepanel.pdf} \caption{Marginalized posterior in the $\Omega_{\rm m}-\Omega_\Lambda$ plane using the same JLA data set but now with the likelihood of \citet{nielsen_etal16}. Here we sampled the posterior varying only three parameters, $M_0$, $\Omega_{\rm m}$, and $\Omega_\Lambda$, while keeping the other parameters of their likelihood fixed at the best values reported in the first row of Table 1 in \citet{nielsen_etal16}. This approximates the ``profile likelihood contours'' used by these authors. Note that the contours shown in this plot are the actual contours of the posterior enclosing a given fraction of the total probability, not the contours of a Gaussian pdf. The dashed line defined by $\Omega_{\Lambda}=\Omega_{\rm m}/2$ is the boundary between accelerating (above) and decelerating (below) universes. The dotted line is the line of geometrically flat universes, $\Omega_{\rm m}+\Omega_\Lambda=1$. \label{fig::nielsen}} \end{center} \end{figure} In Figure \ref{fig::nielsen} we show the result of sampling the likelihood of \citet{nielsen_etal16}\footnote{We include the routine implementing a likelihood similar to that of Nielsen et al. (2016), which we use in this analysis, with the public version of the US code.} varying $M_0$, $\Omega_{\rm m}$, and $\Omega_\Lambda$, while keeping the other parameters of their likelihood fixed at the best values reported in the first row of their Table 1. This procedure approximates the likelihood profile analysis of \citet{nielsen_etal16}. Using this posterior we estimate a probability of deceleration of $p_{\rm dec}\approx 1.03\times 10^{-4}$, corresponding to $\approx 3.85\sigma$ significance, which is close to, albeit somewhat higher than, the significance reported by \citet{nielsen_etal16}. We can see that, compared to the JLA likelihood results in Figure \ref{fig::3panel}, the contours are shifted up and to the right towards the deceleration region, and this region is now close to the $\approx 3\sigma$ contour. This difference is the reason the values of $p_{\rm dec}$ in our Table \ref{tab::area_estimate} are very small and do not show significant evidence for deceleration. The difference is due to our use of the {\tt JLA v3} likelihood, while \citet{nielsen_etal16} used a different likelihood that accounted for the intrinsic scatter of supernova properties, which is done only in post-processing and in an approximate fashion in the JLA likelihood. On the other hand, \citet{nielsen_etal16} did not account for significant survey selection effects in their analysis that affect the apparent properties of supernovae. These differences likely account for the discrepancy in the estimates of the deceleration probability. Regardless of these differences, other cosmological probes indicate that $\Omega_{\rm m}>0.2$ with very high confidence \citep[e.g.,][]{planck16,alam_etal17}. The probability of deceleration with the $\Omega_{\rm m}>0.2$ prior for the posterior derived from the \citet{nielsen_etal16} likelihood (Fig. \ref{fig::nielsen}) is only $p_{\rm dec}\approx 1.9\times 10^{-6}$ or $\approx 4.75\sigma$. For the JLA likelihood $p_{\rm dec}$ is only $\approx 5.4\times 10^{-9}$. Note that we cannot make this estimate with the {\tt emcee} run of the same length because it has no samples in the region of decelerating universes bounded by $\Omega_{\rm m}>0.2$. The US runs, on the other hand, still give a reasonably accurate estimate, as shown in Table \ref{tab::area_estimate}.
The value of the $p_{\rm dec}$ estimate as a function of the number of likelihood evaluations (equivalently, simulation time) is shown in Figure \ref{fig::area_estimate}. The solid lines indicate the mean of the five independent runs, while the shaded regions indicate the standard deviation of the runs themselves, showing the expected behavior of one trajectory. It is clear that umbrella sampling using the collective variable provides a rapid and precise estimate for $p_{\rm dec}$. By contrast, the {\tt emcee} estimate has extremely large variance for a significant portion of the run, with no samples recorded for the first tenth of the run overall. Using US with temperature stratification gives behavior in between these two schemes, providing an accurate approximation early on. However, the variance appears to decay more slowly in this case compared to the CV-defined umbrella windows. \begin{figure} \begin{center} \includegraphics[width=.47\textwidth]{./figs/progress.pdf} \caption{The mean (bold line) and standard deviation (shaded region) of the estimate of $\ln(p_{\rm dec})$ is plotted as a function of the number of likelihood evaluations. Values are compared for the {\tt emcee} package (blue) versus using Umbrella Sampling with either temperature (red) or a collective variable (green). \label{fig::area_estimate}} \end{center} \end{figure} \section{Discussion and conclusions} \label{sec::conclusions} In this paper we presented the umbrella sampling technique and showed that it can be used to sample low-probability areas of the posterior distribution that may be required in statistical analyses of data. In this technique the parameter space is partitioned into umbrella windows by splitting the target likelihood into separate likelihoods given by the original likelihood multiplied by appropriately weighted window functions. Though US has been used successfully in computational chemistry, it has not been used there to compute general averages or tail probabilities of $\pi$, or for parameter estimation. The tempering umbrella sampling approach is, to our knowledge, presented here for the first time. We show that the US method is cheap and can be easily implemented ``on top'' of existing MCMC samplers, such as {\tt emcee}. The method allows the user to capitalize on their own intuition by using collective variables to define umbrella windows, or to make use of a more general technique by stratifying in the temperature. A publicly available standalone python package implementing the scheme can be found at: \href{https://github.com/c-matthews/usample}{https://github.com/c-matthews/usample}. Additionally, we have added umbrella sampling to the CosmoSIS package \citep{zuntz_etal15}\footnote{\href{https://bitbucket.org/joezuntz/cosmosis/}{https://bitbucket.org/joezuntz/cosmosis/}} and the US sampler will be included in the next release of CosmoSIS, as one of the available samplers. We presented a number of tests illustrating the power of the US method in sampling low-probability areas of the posterior. We also showed that this ability allows a considerably more robust sampling of multi-modal distributions compared to direct sampling with {\tt emcee}. For the toy model distribution given by the sum of the Rosenbrock pdf and two isolated multivariate Gaussian pdfs, the umbrella sampling method presented in Section \ref{sec::emus} was shown to be more efficient than parallel tempering, as well as than a na\"{\i}ve recombination of the data, despite using exactly the same set of samples.
This is because in parallel tempering only the subset of samples with $T=1$ is actually used in the final analysis. By contrast, the umbrella sampling approach allows the use of samples from {\it all} of the temperatures. In the supernova cosmological constraints example, umbrella sampling was shown to provide significantly more information about the posterior in the low-probability areas compared to direct sampling with the {\tt emcee} code. In particular, as shown in Figure \ref{fig::3panel}, for the same amount of work {\tt emcee} samples the posterior to the $\approx 3\sigma$ credible region, while the US method samples the same posterior to the $\approx 15\sigma$ region. This ability to sample far into the tail of the posterior distribution may find other applications, such as evaluation of the marginal likelihood, also known as the Bayesian evidence, which requires evaluation of the integral of the posterior over the entire parameter space, as discussed in Section \ref{sec::evidence}. Finally, in the era of precision cosmology, as errors of cosmological parameter estimates shrink, the need to evaluate discrepancies and the significance of tensions at the $\approx 4{-}5\sigma$ level will become commonplace. The umbrella sampling method presented in this work will allow us to do such evaluations efficiently. \section*{Acknowledgments} AK is very grateful to Rick Kessler and Dan Scolnic for many enlightening discussions on the cosmological analyses using type Ia supernova samples and the effects of the survey selection function. AK was supported by a NASA ATP grant NNH12ZDA001N, NSF grant AST-1412107, and by the Kavli Institute for Cosmological Physics at the University of Chicago through grant PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. EJ is supported by the Argonne Leadership Computing Facility, a U.S. Department of Energy, Office of Science User Facility operated under Contract DE-AC02-06CH11357. CM and JW were supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) under Contract DE-AC02-06CH11347. JW was also supported by ASCR through award DE-SC0014205. Numerical experiments presented in this paper were carried out on the {\tt midway} computing cluster maintained by the University of Chicago Research Computing Center. We acknowledge the Center and its staff for support. \bibliographystyle{mn}
\section{Introduction}\label{sec:introduction} Visual object tracking is one of the fundamental problems in computer vision with a variety of real-world applications, such as video surveillance and robotics. Although substantial progress has been achieved during the past decade, it is still difficult to deal with challenging unconstrained environmental variations, such as illumination changes, partial occlusions, motion blur, fast motion, and scale variations. \def0.112\textheight{0.112\textheight} \def0.41{0.41} \begin{figure}[!t] \centering \includegraphics[width=0.41\linewidth]{./figs/videos/48-176} \includegraphics[width=0.41\linewidth]{./figs/videos/48-431} \includegraphics[width=0.41\linewidth]{./figs/v1} \includegraphics[width=0.41\linewidth]{./figs/v3} \includegraphics[width=0.8\linewidth]{./figs/labels.pdf} \caption {The similarity geometric transformation representation achieves more accurate and robust tracking results. } \label{fig:top} \end{figure} \begin{table*}[!t] \centering \scriptsize \caption{Comparison with different kinds of trackers.} \label{tab:all} \begin{tabular}{@{}|l|l|c|c|c|c|c|c|l|@{}} \toprule Type & Trackers & Sample Num. & Scale & Rot. & Pretrain & Performance & GPU & FPS \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Traditional\\Methods \end{tabular}} & Lucas-Kanade based~\cite{Baker2004Lucas} & Depends & $\surd$ & $\surd$ & $\times$ & Fair & $\times$ & Depends \\ & Keypoint based~\cite{cmt} & Depends & $\surd$ & $\surd$ & $\times$ & Fair & $\times$& 1\texttildelow20 \\ & Particle filter-based~\cite{ross2008incremental,Ji2012apga} & 300\texttildelow600 & $\surd$ & $\surd$ & $\times$ & Fair & $\times$& 1\texttildelow20 \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Deep\\ Learning\end{tabular}} & MDNet~\cite{nam2015learning} & 250 & $\surd$ & $\times$ & $\surd$ & Excellent & $\surd$& \texttildelow1 \\ & SiamFC~\cite{BertinettoVHVT16} & 5 & $\surd$ & $\times$ & $\surd$ & Excellent & $\surd$& 15\texttildelow25 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Correlation\\ Filter\end{tabular}} & original CF~\cite{henriques2015high,bolme2010visual} & 1 & $\times$ & $\times$ & $\times$ & Fair & $\times$ & 300+ \\ & DSST~\cite{martin2017fdsst}, SAMF~\cite{li2014scale} & 7\texttildelow33 & $\surd$ & $\times$ & $\times$ & Good & $\times$ & 20\texttildelow80 \\ & Ours & 2\texttildelow8 & $\surd$ & $\surd$ & $\times$ & Excellent & $\times$ & 20\texttildelow30 \\ \hline \end{tabular} \end{table*} Recently, correlation filter-based methods have attracted continuous research attention~\cite{mueller2017ca,Ma-ICCV-2015,ma2015cvpr,zhang2017largedisplacement,li2017cfnn} due to their superior performance and robustness compared with traditional tracking approaches. However, with correlation filters, little attention has been paid to how to efficiently and precisely estimate scale and rotation changes, which are typically represented by a 4-Degree-of-Freedom (DoF) similarity transformation. To deal with scale changes in conventional correlation filter-based trackers, \cite{martin2017fdsst} and~\cite{li2014scale} extended the 2-DoF representation of the original correlation filter-based methods to a 3-DoF space, handling scale changes in object appearance by introducing a pyramid-like scale sampling ensemble. Unfortunately, all these methods have to intensively resample the image in order to estimate the geometric transformation, which incurs a huge computational cost.
In addition, their accuracy is limited by the pre-defined dense sampling of the scale pool. This makes them unable to handle large displacements that fall outside the pre-defined range in the status space. Thus, none of these methods is guaranteed to reach the optimum of the scale estimation. On the other hand, rotation estimation for correlation filter-based methods has not been fully explored yet, since it is very easy for a tracker to drift away due to inaccurate rotation predictions. This greatly limits their scope of application. Table~\ref{tab:all} summarizes the properties of several typical trackers. To address the above limitations, in this paper, we propose a novel visual object tracker to estimate the similarity transformation of the target efficiently and robustly. Unlike existing correlation filter-based trackers, we formulate visual object tracking as a searching problem in a 4-DoF status space, which gives a more appropriate geometric transformation parameterization of the target. As shown in Fig.~\ref{fig:top}, the similarity transformation representation describes the object more faithfully and helps to track the visual object more accurately. To yield real-time tracking performance in the 4-DoF space, we propose to tackle the optimization task of estimating the similarity transformation by applying an efficient Block Coordinate Descent (BCD) solver. Specifically, we employ an efficient phase correlation scheme to deal with both scale and rotation changes simultaneously in log-polar coordinates, and utilize a fast variant of correlation filter to predict the translational motion. This scheme frees our approach from intensive sampling and greatly boosts performance in the 4-DoF space. More importantly, as BCD searches the entire similarity transformation space, the proposed tracker achieves very accurate predictions under large-displacement motion while still retaining the efficiency and simplicity of the conventional correlation filter. Experimental results demonstrate that our approach is robust and accurate for both generic object and planar object tracking. The main contributions of our work are summarized as follows: 1) a novel framework for similarity transformation estimation which samples only once for correlation filter-based trackers; 2) a joint optimization that ensures stability in translation and scale-rotation estimation; 3) a new approach for scale and rotation estimation with an efficient implementation which can improve a family of existing correlation filter-based trackers~(our implementation is available at \url{https://github.com/ihpdep/LDES}). \section{Related Work} Traditionally, there are three families of methods for handling scale and rotation changes. The most widely used approach is to iteratively search an affine status space with gradient descent-based methods~\cite{Baker2004Lucas,wenjie2016rpca}. However, they easily get stuck at local optima and are not robust to large displacements. Trackers based on particle filters~\cite{ross2008incremental,Ji2012apga,zhang2017mcpf,li2015reliable} search the status space stochastically by evaluating samples, which are employed to estimate the global optimum in the status space. Their results depend heavily on the motion model that controls the distribution of the 6-DoF transformation, which makes such trackers perform inconsistently in different situations.
Another choice is to take advantage of keypoint matching to predict the geometric transformation~\cite{cmt,2010ferns}. These keypoint-based trackers first detect feature points, and then find the matched points in the following frames. Naturally, they can handle any kind of transformation with the matched feature points. However, due to the lack of global information on the whole target, these trackers cannot effectively handle generic objects~\cite{kristan2015visual}. Our proposed method is highly related to correlation filter-based trackers~\cite{henriques2015high,bolme2010visual}. \cite{martin2017fdsst} and~\cite{li2014scale} extend the original correlation filter to adapt to scale changes in the sequences. \cite{bertinetto2015staple} combines color information with the correlation filter method in order to build a robust and efficient tracker. Later,~\cite{martin2015srdcf} and~\cite{galoogahi2017bacf} decouple the relationship between the size of the filter and the search range. These approaches enable correlation filter-based methods to have a larger search range while maintaining a relatively compact representation of the learned filters. \cite{mueller2017ca} learns the filter with additional negative samples to enhance robustness. Note that all these approaches emphasize efficacy, and employ either DSST or SAMF to deal with scale changes. However, these methods cannot deal with rotation changes. Fourier-Mellin image registration and its variants~\cite{gopalan1994rotationInvariant,siavash2005logpolarRegistraion} are also highly related to our proposed approach. These methods usually convert both the test image and the template into log-polar coordinates, in which relative scale and rotation changes turn into translational displacements. \cite{gopalan1994rotationInvariant} propose a rotation-invariant correlation filter to detect the same object from a bird's-eye view. \cite{siavash2005logpolarRegistraion} propose an image registration method to recover large similarity transformations in the spatial domain. Recently,~\cite{yan2016logpolar} and~\cite{zhang2015cf-rotation} introduce log-polar coordinates into correlation filter-based methods to estimate rotation and scale. Compared with their approaches, we directly employ the phase correlation operation in log-polar coordinates. Moreover, an efficient Block Coordinate Descent optimization scheme is proposed to deal with large motions at real-time speed. \begin{figure*}[tbp] \centering \includegraphics[width=0.86\linewidth]{./figs/overall.pdf} \caption{Overview of our proposed approach in estimation of similarity transformation.} \label{fig:overview} \end{figure*} \section{Our Approach} In this paper, we aim to investigate robust visual object tracking techniques to deal with challenging scenarios, especially those with large displacements. We propose a novel robust object tracking approach, named ``Large-Displacement tracking via Estimation of Similarity'' (LDES), where the key idea is to equip the tracker with the capability of 2D similarity transformation estimation in order to handle large displacements. Figure~\ref{fig:overview} gives an overview of the proposed LDES approach. In the following, we first formally formulate the problem as an optimization task, and then divide it into two sub-problems: translation estimation and scale-rotation prediction. We solve the two sub-problems iteratively to achieve a global optimum.
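To make the alternation concrete before the formal derivation, the following is a minimal Python sketch of one tracking iteration; it is an illustrative reading of the method, not our actual implementation. The handles \texttt{estimate\_translation} and \texttt{estimate\_scale\_rotation} are hypothetical placeholders for the two sub-problem solvers derived below, the interpolation coefficient \texttt{eta} is introduced in the formulation that follows, and the motion prior is assumed to be folded into the solvers for brevity.
\begin{verbatim}
import numpy as np

def track_frame(frame, t, rho, h_t, h_rho,
                estimate_translation, estimate_scale_rotation,
                eta=0.15, max_iter=5, tol=1e-3):
    # One Block Coordinate Descent pass over the 4-DoF
    # similarity state tau = (tx, ty, theta, s).
    score_prev = -np.inf
    for _ in range(max_iter):
        # Fix (theta, s) and update the translation block.
        t, f_t = estimate_translation(frame, t, rho, h_t)
        # Fix (tx, ty) and update the scale-rotation block.
        rho, f_rho = estimate_scale_rotation(frame, t, rho, h_rho)
        score = eta * f_t + (1.0 - eta) * f_rho
        if score - score_prev < tol:  # joint score has stalled
            break
        score_prev = score
    return t, rho
\end{verbatim}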
\subsection{Problem Formulation} Given an image patch $\mathbf{x}_i$ sampled from the $i$-th frame ${I_i}$ of a video sequence, the key idea of our proposed approach is to estimate the similarity transformation $Sim(2)$ in the 2D image space of the tracked target. To this end, we need to predict a 4-DoF transformation state vector $\mathbf{\tau}_i \in \mathcal{R}^4 $ based on the output of the previous frame. Generally, $\mathbf{\tau}_i$ is obtained by optimizing the following score function: \begin{equation} \tau_i = \arg \max_{\tau \in Sim(2)} f(\mathcal{W}( {I}_i,\tau); \mathcal{\mathbf{h}}_{i-1}), \label{eq:eq1} \end{equation} where $f(\cdot)$ is a score function with the model $\mathbf{h}_{i-1}$ learned from the previous frames $I_{1:i-1}$, and $\mathcal{W}$ is an image warping function that samples the image ${I}_i$ with respect to the similarity transformation state vector $\tau$. The 2D similarity transformation $Sim(2)$ covers the 4-DoF motion $\{t_x,t_y,\theta,s\}$, where $\{t_x,t_y\}$ denotes the 2D translation, $\theta$ denotes the in-plane rotation angle, and $s$ represents the scale change with respect to the template. Obviously, $Sim(2)$ has quite a large search space, which is especially challenging for real-time applications. A typical remedy is to make use of effective sampling techniques to greatly reduce the search space~\cite{particlefilter}. Since the tracking model $\mathbf{h}_{i-1}$ is learned from the previous frames and kept constant during the prediction, the score function $f$ depends only on the state vector $\tau$. We abuse the notation for simplicity: \begin{equation} f_i(\tau) = f(\mathcal{W}( {I}_i,\tau);\mathbf{h}_{i-1}). \end{equation} Typically, conventional correlation filter-based methods only take into account the in-plane translation with 2-DoF, where the score function $f_i$ can be calculated completely and efficiently by taking advantage of the Convolution Theorem. When searching the 4-DoF similarity space, however, the total number of candidate states increases exponentially. Although Eq.~\ref{eq:eq1} is usually non-convex, in object tracking scenarios the optimal translation is close to the one in the previous frame. Thus, we assume that the function is convex and smooth in the nearby region, and split the similarity transformation $Sim(2)$ into two blocks, $\mathbf{t}=\{t_x,t_y\}$ and $\rho = \{\theta,s\}$. We propose a score function $f_i(\tau)$ that is the linear combination of three separate parts: \begin{equation} f_i(\mathbf{\tau};\mathbf{h}_{i-1}) = \eta f_t(\mathbf{t};\mathbf{h_t}) + (1-\eta) f_{\rho}(\mathbf{\rho};\mathbf{h}_{\rho}) + g(\mathbf{t},\mathbf{\rho}), \label{eq:score} \end{equation} where $\eta$ is an interpolation coefficient, $f_t$ is the translational score function, and $f_{\rho}$ denotes the scale and rotation score function. $g(\mathbf{t},\mathbf{\rho})=\exp(\|\tau - \tau_{i-1}\|_2)^{-1}$ is the motion model, which favors states near the previous one. Please note that we omit the subscript $i-1$ of $\mathbf{h_t}$ and $\mathbf{h}_{\rho}$ for simplicity. Eq.~\ref{eq:score} is a canonical form which can be solved by Block Coordinate Descent methods~\cite{peter2014cd,yu2010cd}.
We optimize the following two subproblems alternately to reach a global solution: \begin{equation} \arg \max_{\mathbf{t}} g(\mathbf{t},{\rho}^*) + \eta f_t(\mathbf{t}), \label{eq:eq3} \end{equation} \begin{equation} \arg \max_{{\rho}} g(\mathbf{t}^*,{\rho})+ (1-\eta)f_\rho(\rho), \label{eq:eq4} \end{equation} where ${\rho}^*$ and $\mathbf{t}^*$ denote the local optimal estimates from the previous iteration, which are held fixed for the current subproblem. Since $g$ can be calculated easily, the key to solving Eq.~\ref{eq:eq1} in real time is to find efficient solvers for the two subproblems $f_\rho$ and $f_t$. \subsection{Translation Estimation by Correlation Filter} The translation vector $\mathbf{t}$ can be effectively estimated by Discriminative Correlation Filters (DCF)~\cite{henriques2015high,mueller2017ca}. A large part of their success is due to the Fourier trick and translation-equivariance within a certain range, which allows $f_t$ to be evaluated exactly over the whole spatial domain. According to the property of DCF, the following equation holds: \begin{equation} f_t(\mathcal{W}({I},\mathbf{t});\mathbf{h_t}) = \mathcal{W}( f_t({I};\mathbf{h_t}),\mathbf{t}). \label{eq:eq6} \end{equation} Since the calculation of $\arg \max_{\mathbf{t}} \mathcal{W}( f_t({I};\mathbf{h_t}),\mathbf{t})$ is unrelated to $\mathcal{W}$, we can directly obtain the translation vector $\mathbf{t}$ from the response map. Thus, the overall process is highly efficient. The score function $f_t$ can be obtained by \begin{equation} f_t(\mathbf{z}) = \mathcal{F}^{-1}\sum_{k}\hat{\mathbf{h_t}}^{(k)}\odot\hat\Phi^{(k)}(\mathbf{z}), \label{eq:eq7} \end{equation} where $\mathbf{z}$ indicates a large testing patch, $\mathcal{F}^{-1}$ denotes the inverse Discrete Fourier Transform operator, $\odot$ is the element-wise multiplication, and $\hat{\cdot}$ indicates the Fourier domain. $\mathbf{h_t}^{(k)}$ and $\Phi^{(k)}$ represent the $k$-th channel of the linear model weights and the feature map, respectively. The whole computational cost is $\mathcal{O}(KN\log N)$, where $K$ is the number of channels and $N$ is the dimension of the patch $\mathbf{z}$. To this end, we need to learn a model $\mathbf{h_t}$ in the process. Note that any fast learning method can be used. Without loss of generality, we briefly review a simple correlation filter learning approach~\cite{bolme2010visual} as follows: \begin{equation} \begin{aligned} \min_{\mathbf{h_t}} {\left\| \sum\limits_k {\Phi^{(k)}(\mathbf{x}) \star {\mathbf{{h}_t}^{(k)}}} - {\mathbf{y}} \right\|}_2^2 + \lambda_1{\left\| \mathbf{h_t} \right\|}_2^2, \label{eq:eq8} \end{aligned} \end{equation} where $\star$ indicates the correlation operator and $\lambda_1$ is the regularization coefficient. $\mathbf{y}$ is the desired output, which is typically a Gaussian-like map with a maximum value of one. According to Parseval's theorem, the formulation can be evaluated without the correlation operation. By stacking each channel and vectorizing the matrices, Eq.~\ref{eq:eq8} can be reformulated as a standard ridge regression without the correlation operation. Thus, the solution to Eq.~\ref{eq:eq8} can be expressed as follows: \begin{equation} \mathbf{\hat{h}_t} = (\mathbf{\hat{X}}^T\mathbf{\hat{X}} + \lambda_1\mathbf{I})^{-1}\mathbf{\hat{X}}^T\mathbf{\hat{y}}, \end{equation} where $\mathbf{\hat{X}} = [\mathbf{diag}({\hat\Phi^{(1)}(\mathbf{x})})^T,...,\mathbf{diag}({\hat\Phi^{(K)}(\mathbf{x})})^T]$ and $\mathbf{\hat{h}_t} = [\mathbf{\hat{h}_t}^{(1)T},...,\mathbf{\hat{h}_t}^{(K)T}]^T$.
In this form, we need to solve a $KD \times KD $ linear system, where $D$ is the dimension of the training patch $\mathbf{x}$. \def\stopfig{0.146} \begin{figure*}[tbp] \centering \includegraphics[width=\stopfig\linewidth]{./figs/img2} \includegraphics[width=\stopfig\linewidth]{./figs/img1} \includegraphics[width=\stopfig\linewidth]{./figs/log2} \includegraphics[width=\stopfig\linewidth]{./figs/log1} \includegraphics[width=\stopfig\linewidth]{./figs/r1} \includegraphics[width=\stopfig\linewidth]{./figs/r2.png} \caption{ The 3rd and 4th images are the log-polar representations of the 1st and 2nd images. The 2nd image is a $30^{\circ}$ rotated and $1.2\times$ scaled version of the first image. The last two images are the phase correlation response maps. In log-polar coordinates, the response is a sharp peak, while it is noisy in Cartesian coordinates.} \label{fig:log} \end{figure*} To solve our sub-problem efficiently, we assume that all channels are independent. Thus, by applying Parseval's theorem, the whole system can be simplified into element-wise operations. The final solution can be derived as below: \begin{equation} \begin{aligned} \mathbf{\hat{h}_t}^{(k)} &= \hat\alpha \odot {\hat\Psi}^{(k)*}\\ &=(\mathbf{\hat{y}}\odot^{-1} (\sum_k\hat\Phi^{(k)}(\mathbf{{x}})^* \odot \hat\Phi^{(k)}(\mathbf{{x}}) + \lambda_1)) \odot \hat\Phi^{(k)}(\mathbf{{x}})^* , \label{eq:eq10} \end{aligned} \end{equation} where $\alpha$ denotes the parameters in the dual space and $\Psi$ indicates the model sample in the feature space. $\odot^{-1}$ is the element-wise division. Thus, the solution can be obtained very efficiently with a computational cost of $\mathcal{O}(KD)$. With Eq.~\ref{eq:eq10}, the computational cost of Eq.~\ref{eq:eq8} is $\mathcal{O}(KD\log D)$, which is dominated by the FFT operation. For more details, please refer to the seminal works~\cite{henriques2015high,multichannel2013mccf,mueller2017ca}. \subsection{Scale and Rotation in Log-polar Coordinates} We introduce an efficient method to estimate scale and rotation changes simultaneously in log-polar coordinates. \subsubsection{Log-Polar Coordinates} Given an image $I(x,y)$ in the spatial domain, its log-polar representation $I'(s,\theta)$ can be viewed as a non-linear and non-uniform transformation of the original Cartesian coordinates. Like polar coordinates, log-polar coordinates need a pivot point as the pole and a reference direction as the polar axis in order to set up the coordinate system. One dimension is the angle between the point and the polar axis; the other is the logarithm of the distance between the point and the pole. Given the pivot point $(x_0,y_0)$ and the reference direction $\mathbf{r}$ in Cartesian coordinates, the relationship between Cartesian and log-polar coordinates can be formally expressed as follows: \begin{equation} \begin{aligned} &s = \log(\sqrt{(x-x_0)^2 + (y-y_0)^2})\\ &\theta = \cos^{-1}\left(\frac{\langle\mathbf{r},(x-x_0,y-y_0)\rangle}{||\mathbf{r}||\sqrt{(x-x_0)^2 + (y-y_0)^2}}\right). \label{eq:eq11} \end{aligned} \end{equation} Usually, the polar axis is chosen as the $x$-axis of the Cartesian coordinates, in which case $\theta$ simplifies to $\tan^{-1}(\frac{y-y_0}{x-x_0})$. Suppose two images are related purely by a rotation $\tilde{\theta}$ and a scale $e^{\tilde{s}}$, which can be written as $I_0(e^s\cos\theta,e^s\sin\theta) = I_1(e^{s+\tilde{s}}\cos(\theta+\tilde{\theta}),e^{s+\tilde{s}}\sin(\theta+\tilde{\theta}))$ in Cartesian coordinates.
The log-polar coordinates enjoy an appealing merit: the relationship in the above equation becomes, in log-polar coordinates, \begin{equation} I_0'(s,\theta) = I_1'(s+\tilde{s},\theta+\tilde{\theta}), \end{equation} i.e., pure rotation and scale changes turn into translational shifts along the two axes. As illustrated in Fig.~\ref{fig:log}, this property can naturally be employed to estimate the scale and rotation changes of the tracked target. \subsubsection{Scale and Rotation Changes} By taking advantage of log-polar coordinates, Eq.~\ref{eq:eq4} can be calculated very efficiently. Similarly to Eq.~\ref{eq:eq6}, a scale-rotation equivariance property holds, and the scale-rotation score can be calculated as \begin{equation} f_{\rho}(\mathcal{W}(I_i,\rho);\mathbf{h}_\rho) = \mathcal{W}( f_{\rho}(I_i;\mathbf{h}_\rho),\rho'), \end{equation} where $\rho'=\{\theta',s'\}$ denotes the coordinates of $\rho$ in log-polar space, with $s = e^{s'\log(W/2)/W}$ and $\theta = 2\pi\theta'/H$, where $H$ and $W$ are the height and width of the image $I_i$, respectively. Similarly to estimating the translation vector $\mathbf{t}$ via $f_{t}$, the whole space of $f_{\rho}$ can be computed at once through the Fourier trick: \begin{equation} f_{\rho}(\mathbf{z}) = \mathcal{F}^{-1}\sum_{k}\mathbf{\hat{h}}_{\rho}^{(k)}\odot\hat\Phi^{(k)}(\mathcal{L}(\mathbf{z})), \label{eq:eq14} \end{equation} where $\mathcal{L}(x)$ is the log-polar transformation function, and $\mathbf{h}_{\rho}$ is the linear model weights for scale and rotation estimation. Therefore, the scale and rotation estimates can be obtained very efficiently without any transformation sampling $\mathcal{W}$. Note that the computational cost of Eq.~\ref{eq:eq14} is unrelated to the number of scale or rotation samples, which is extremely efficient compared to the previous enumeration-based methods~\cite{li2014scale,martin2017fdsst}. To obtain $\mathbf{\hat{h}}_{\rho}$ efficiently, we employ phase correlation to conduct the estimation, \begin{equation} \mathbf{\hat{h}}_{\rho} = \hat{\Upsilon}^* \odot^{-1} |\hat{\Upsilon} \odot \hat\Phi(\mathcal{L}(\mathbf{x}))|, \end{equation} where $\Upsilon = \sum_{j}\beta_j\Phi(\mathcal{L}(\mathbf{x}_j))$ is a linear combination of the previous feature patches and $|\cdot|$ is the element-wise magnitude operation. Intuitively, we compute the phase correlation between the current frame and the average of the previous frames to align the image. \subsection{Implementation Details} In this work, we alternately optimize Eq.~\ref{eq:eq3} and Eq.~\ref{eq:eq4} until $f(\mathbf{x})$ no longer increases or the maximal number of iterations is reached. After the optimization, we update the correlation filter model as \begin{equation} \hat\Psi_i = (1-\lambda_{\phi})\hat\Psi_{i-1} +\lambda_{\phi}\hat\Phi(\mathbf{{x_i}}), \end{equation} where $\lambda_\phi$ is the update rate of the feature data model in Eq.~\ref{eq:eq10}. The kernel weights in the dual space are updated as below: \begin{equation} \begin{aligned} \hat{\mathbf{\alpha}}_i =& (1-\lambda_\alpha)\hat{\mathbf{\alpha}}_{i-1} \\ +&\lambda_\alpha (\hat{\mathbf{y}} \odot^{-1} (\sum_k\hat\Phi^{(k)}(\mathbf{{x}}_i)^*\odot \hat\Phi^{(k)}(\mathbf{{x}}_i) + \lambda_1)), \end{aligned} \end{equation} where $\lambda_\alpha$ is the update rate of the kernel parameters in the dual space of Eq.~\ref{eq:eq10}.
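To make the scale-rotation estimator concrete, a minimal Python sketch is given below. It is an illustration of the principle rather than our implementation: it operates on raw grayscale patches instead of the HoG features used in practice, and uses standard phase correlation in place of the model $\mathbf{\hat{h}}_{\rho}$ with its running average $\Upsilon$; the conversion of the peak offset back to $(\theta,s)$ follows the relations $\theta = 2\pi\theta'/H$ and $s = e^{s'\log(W/2)/W}$ given above.
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(img, out_h=64, out_w=64):
    # Resample a square grayscale patch onto a (theta, log r)
    # grid centered at the patch center (the pole).
    h, w = img.shape
    y0, x0 = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.linspace(0.0, 2.0 * np.pi, out_h, endpoint=False)
    r = np.exp(np.linspace(0.0, np.log(w / 2.0), out_w))
    ys = y0 + r[None, :] * np.sin(theta[:, None])
    xs = x0 + r[None, :] * np.cos(theta[:, None])
    return map_coordinates(img, [ys, xs], order=1, mode='nearest')

def estimate_scale_rotation(template, patch, H=64, W=64):
    # Phase correlation of the two log-polar patches; the peak
    # offset is the (rotation, log-scale) shift.
    F = np.fft.fft2(log_polar(template, H, W))
    G = np.fft.fft2(log_polar(patch, H, W))
    cps = F * np.conj(G)
    resp = np.fft.ifft2(cps / (np.abs(cps) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    dy = dy - H if dy > H // 2 else dy  # wrap to signed shift
    dx = dx - W if dx > W // 2 else dx
    theta = 2.0 * np.pi * dy / H          # theta = 2*pi*theta'/H
    s = np.exp(dx * np.log(W / 2.0) / W)  # s = exp(s'*log(W/2)/W)
    return theta, s
\end{verbatim}
A centroid-based interpolation around the peak, as described above, can be added to reach sub-pixel precision.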
Although there exist some theoretically sound updating schemes~\cite{kiani2015cfwlb,martin2017eco,martin2015srdcf}, we use the linear combination above due to its efficiency and comparable performance. Meanwhile, we also update the scale and rotation model as a linear combination, \begin{equation} \Upsilon_i = (1-\lambda_w)\Upsilon_{i-1} + \lambda_w\Phi(\mathcal{L}(\mathbf{x}_i)), \end{equation} where $\lambda_w$ can be interpreted as an exponentially weighted average of the models $\beta_j\Phi(\mathcal{L}(\mathbf{x}_j))$. We update the model upon $\Phi$ instead of $\mathbf{x}_i$ because $\Phi(\sum_i \mathcal{L}(\mathbf{x}_i))$ is not defined. The logarithm in the log-polar transformation tends to blur the image due to the non-uniform sampling, which decreases the visual information in the original images. To alleviate the artifacts caused by discretization, we interpolate $f_{t}$ and $f_{\rho}$ with a centroid-based method to obtain sub-pixel precision. In addition, we use different sizes for $\mathbf{z}$ in testing and $\mathbf{x}$ in training, since a larger search range ($N>D$) helps to improve the robustness of the solutions to the sub-problems. To match the different dimensions $N$ and $D$, we pad $\mathbf{h}$ with zeros in the spatial domain. \begin{figure*}[!t] \centering \subfloat[Overall results on OTB-2013] { \includegraphics[width=0.23\linewidth]{./eps/tb50-overlap} \includegraphics[width=0.23\linewidth]{./eps/tb50-error} } \subfloat[Overall results on OTB-100] { \includegraphics[width=0.23\linewidth]{./eps/tb100-overlap} \includegraphics[width=0.23\linewidth]{./eps/tb100-error} } \caption{The overall performance in precision and success plots on the OTB-2013 and OTB-100 datasets.} \label{fig:tb100} \end{figure*} \section{Experiments} In this section, we conduct four different experiments to evaluate our proposed tracker LDES comprehensively. \subsection{Experimental Settings} All the methods were implemented in Matlab and the experiments were conducted on a PC with an Intel i7-4770 3.40GHz CPU and 16GB RAM. We employ the HoG feature for both translation and scale-rotation estimation, and an extra color histogram is used to estimate translation. Each patch is multiplied by a Hann window, as suggested in~\cite{bolme2010visual}. $\eta$ is 0.15 and $\lambda_1$ is set to $10^{-4}$. $\lambda_\phi$ and $\lambda_\alpha$ are both set to 0.01. $\lambda_w$ is 0.015. The size $D$ of the learning patch is 2.2 times the original target size. Moreover, the search window size $N$ is about 1.5 times the learning patch size $D$. For scale-rotation estimation, the phase correlation sample size is about 1.8 times the original target size. All parameters are fixed in the following experiments. \subsection{Experiments on the Proposed Scale Estimator} As one of the contributions of our work is a fast scale estimator, we first evaluate our proposed log-polar based scale estimator on the OTB-2013 and OTB-100 datasets~\cite{wu2013online,wu2015object}. Three baseline trackers are involved in the scale estimation evaluation: SAMF~\cite{li2014scale}, fDSST~\cite{martin2017fdsst} and ECO~\cite{martin2017eco}. For a fair comparison, we implement three counterpart trackers, fDSST-LP, SAMF-LP and ECO-LP, which replace the original scale algorithm with our proposed scale estimator. In Fig.~\ref{fig:compare}, these variant trackers with our scale component outperform their original implementations.
This indicates that our proposed scale estimator outperforms current state-of-the-art scale estimators. Specifically, ECO-LP achieves 69.1\% and 67.3\% on OTB-2013 and OTB-100 respectively, compared with 67.8\% and 66.8\% for its original CPU implementation. This proves the effectiveness of the proposed scale method, since it improves even a state-of-the-art tracker by a simple replacement of the scale component. Since the proposed scale estimator samples only once per frame, the most significant gain lies in the efficiency of scale estimation. In Table~\ref{tab:speed}, the proposed approach yields a 3.8X+ speedup on SAMF and ECO, which is a significant improvement in efficiency. Even for fDSST, which is designed for efficiency with many tricks, our method still reduces the computation time. This strongly supports that our proposed scale estimator is superior to current state-of-the-art scale estimation approaches. In addition, our method is very easy to implement and to plug into other trackers. \begin{figure} [!t] \normalsize \centering \includegraphics[height=0.13\textheight]{./charts/compare5} \includegraphics[height=0.13\textheight]{./charts/compare6} \caption{Evaluation of tracking success rate improvements over the original implementations of different trackers enhanced by the proposed scale estimator. } \label{fig:compare} \end{figure} \subsection{Comparison with Correlation Filter Trackers} With the efficient and effective scale estimator, our proposed tracker performs very promisingly in different situations. We select seven state-of-the-art correlation filter-based trackers as reference methods, including ECO-HC~\cite{martin2017eco}, SRDCF~\cite{martin2015srdcf}, Staple~\cite{bertinetto2015staple}, SAMF, fDSST, BACF~\cite{galoogahi2017bacf}, and KCF~\cite{henriques2015high}. We initialize the proposed tracker with an axis-aligned bounding box and ignore the rotation parameter of the similarity transformation in the tracking output, since the benchmarks only provide axis-aligned labels. In Fig.~\ref{fig:tb100}, it can be clearly seen that our proposed method outperforms most of the state-of-the-art correlation filter-based trackers, obtaining 67.7\% and 81.0\% in the OTB-2013 success and precision plots, and 63.4\% and 76.0\% in the OTB-100 plots, respectively. ECO-HC achieves better results on OTB-100. However, our method is more accurate above the 0.6 overlap threshold in the success plot and comparable in the precision plot. The reason is that introducing rotation improves the accuracy but also enlarges the search space, which hurts robustness when large deformations occur. In general, our method is very promising for the generic object tracking task. The proposed approach maintains 20 fps with similarity estimation and is easy to implement due to its simplicity. Moreover, our no-BCD version achieves 82 fps on the benchmark while still maintaining comparable performance (67.5\% and 62.2\% accuracy on OTB-2013 and OTB-100, respectively). Please note that our proposed LDES is quite stable in searching the 4-DoF state space. Introducing rotation gives the tracker more candidate states during tracking, while the benchmark only provides axis-aligned labels, which makes the performance appear less robust on OTB-100. However, our proposed tracker still ranks 1st and 2nd on OTB-2013 and OTB-100 respectively, and beats most of the other correlation filter-based trackers. \begin{table}[!t]
\centering \footnotesize \caption{Evaluation of the speedup achieved on different trackers by applying the proposed Log-Polar~(LP) based scale estimator.} \label{tab:speed} \begin{tabular}{|l|c||l|c||c|} \hline Trackers & FPS & Trackers & FPS & Speedup \\ \hline fDSST & 101.31& fDSST-LP & 112.77 & 1.11X \\ \hline SAMF & 20.95 & SAMF-LP & 86.75 & 4.14X \\ \hline ECO & 2.58 & ECO-LP & 9.88 & 3.82X\\ \hline \end{tabular} \end{table} \def\poth{0.13\textheight} \begin{figure*}[!t] \centering \subfloat[All 210 sequences] { \includegraphics[height=\poth]{./pfigs/All-seq-precision} \includegraphics[height=\poth]{./pfigs/All-seq-homography} \label{fig:pot:a} } \subfloat[30 videos with unconstrained changes] { \includegraphics[height=\poth]{./pfigs/UC-precision} \includegraphics[height=\poth]{./pfigs/UC-homography} \label{fig:pot:b} } \subfloat[30 videos with pure rotation changes] { \includegraphics[height=\poth]{./pfigs/RT-precision} \includegraphics[height=\poth]{./pfigs/RT-homography} \label{fig:pot:c} } \subfloat[30 videos with pure scale changes] { \includegraphics[height=\poth]{./pfigs/SC-precision} \includegraphics[height=\poth]{./pfigs/SC-homography} \label{fig:pot:d} } \caption{Precision and success plots on the POT dataset.} \label{fig:pot} \end{figure*} \subsection{Comparison with State-of-the-Art Trackers on POT} \begin{table*}[!t] \centering \footnotesize \begin{tabular}{|c|c|c|c|c|c|c|c||c|} \hline Precision & Scale & Rotation & Persp. Dist. & Motion Blur & Occlusions & Out of View & Unconstrained & ALL\\ \hline LDES & \textbf{0.7858} & \textbf{0.6986} & \textbf{0.2807} & \textbf{0.1699} & 0.6277 & \textbf{0.5663} & \textbf{0.4159} & \textbf{0.5064}\\ \hline LDES-NoBCD & 0.6461 & 0.6898 & 0.2528 & 0.1595 & \textbf{0.6679} & 0.5507 & 0.3759 & 0.4775\\ \hline \hline Success & Scale & Rotation & Persp. Dist. & Motion Blur & Occlusions & Out of View & Unconstrained & ALL\\ \hline LDES & \textbf{0.5298} & 0.6778 & \textbf{0.3339} & \textbf{0.44} & 0.5947 & \textbf{0.5629} & \textbf{0.4526} & \textbf{0.5131}\\ \hline LDES-NoBCD & 0.4724 & \textbf{0.6849} & 0.3215 & 0.43 & \textbf{0.6392} & 0.5615 & 0.4353 & 0.5064\\ \hline \end{tabular} \caption{Comparison on all 7 attribute categories of the POT benchmark.} \label{tab:pot} \end{table*} To better evaluate our proposed rotation estimation, we conduct an additional experiment on the POT benchmark~\cite{lian2017pot}, which is designed to evaluate planar transformation tracking methods. The POT dataset contains 30 objects in 7 different attribute categories, yielding 210 videos in total. The alignment error and the homography discrepancy are employed as evaluation metrics. In addition, six state-of-the-art trackers and two rotation-enabled trackers are involved: ECO-HC, ECO~\cite{martin2017eco}, MDNet~\cite{nam2015learning}, BACF~\cite{galoogahi2017bacf}, ADNet~\cite{yun2017adnet}, SiameseFC~\cite{BertinettoVHVT16}, IVT~\cite{ross2008incremental} and L1APG~\cite{Ji2012apga}. To illustrate the POT plots appropriately, we extend the maximal value of the alignment error axis from 20 to 50 pixels in the precision plot, and utilize the AUC as the ranking metric in both the precision and homography discrepancy plots, as in OTB~\cite{wu2015object}. Fig.~\ref{fig:pot} shows that our proposed tracker, with hand-crafted features only, performs extremely well in the all-sequence plots and even outperforms deep learning based methods by a large margin.
In Fig.~\ref{fig:pot:a}, our LDES achieves 50.64\% and 51.31\%, compared with 35.99\% and 37.79\% for the second-ranked tracker ECO, in the precision and success rate plots over all 210 sequences, which is a 13\%+ performance improvement. Since the POT sequences are quite different from OTB, this indicates that our proposed method has better generalization capability than pure deep learning based approaches in wide scenarios. It also shows that our proposed method is able to search the 4-DoF similarity state space jointly, efficiently and precisely. Moreover, our method ranks 1st in almost all the other plots. This not only validates the effectiveness of our proposed rotation estimation but also shows the superiority of our method compared with traditional approaches. From Fig.~\ref{fig:pot:d}, we argue that our proposed log-polar based scale estimation is at least comparable in performance with mainstream methods. \subsection{BCD Framework Evaluation on POT} To verify the proposed framework with Block Coordinate Descent (BCD), we implement an additional variant, named LDES-NoBCD, which turns off the BCD framework and estimates the object state only once per frame. We conduct comparison experiments on the POT benchmark with LDES and LDES-NoBCD. In Table~\ref{tab:pot}, LDES performs better than its no-BCD version in most of the categories. Specifically, BCD contributes more performance in scale-attributed videos and unconstrained videos. LDES achieves 0.7858 and 0.5298 in the scale category, compared with LDES-NoBCD's 0.6461 and 0.4724, which is about a 14\% improvement in the precision plot and 5\% in the success plot, respectively. This indicates that the proposed framework ensures stable searching in the 4-DoF space. In the rotation column, the rankings under the precision and success rate metrics are inconsistent. The reason is that the rotation-attributed videos contain pure rotation changes, which gives rotation estimation a proper condition to achieve a promising result even without BCD. The only category in which LDES performs worse is the occlusion-attributed videos. When occlusion occurs, the BCD framework tries to find the best state of the templated object while the original object is occluded and cannot be seen properly, which leads the algorithm to an inferior state. In contrast, the no-BCD version does not search for an optimal point in the similarity state space. \section{Conclusion} In this paper, we proposed a novel visual object tracker for robust estimation of similarity transformations with correlation filters. We formulated the 4-DoF search problem as two 2-DoF sub-problems and applied a Block Coordinate Descent solver to search such a large 4-DoF space with real-time performance on a standard PC. Specifically, we employed an efficient phase correlation scheme to deal with both scale and rotation changes simultaneously in log-polar coordinates, and utilized a fast variant of correlation filter to predict the translational motion. Experimental results demonstrated that the proposed tracker achieves very promising performance compared with state-of-the-art visual object tracking methods. \section*{Acknowledgments} This work is supported by the National Key Research and Development Program of China (No. 2016YFB1001501) and by the National Natural Science Foundation of China under Grant 61831015.
This research is also supported by the National Research Foundation Singapore under its AI Singapore Programme [AISG-RP-2018-001], and the National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centres in Singapore Funding Initiative.
{ "timestamp": "2018-11-08T02:08:58", "yymm": "1712", "arxiv_id": "1712.05231", "language": "en", "url": "https://arxiv.org/abs/1712.05231" }
\section{Introduction}\label{intro} Light propagation in complex and random media is a very general problem in many disciplines of science and engineering, such as radiative heat transfer \cite{tien1987thermal,modest2013radiative}, optics and photonics \cite{wiersma2013disordered,Rotter2017}, astrophysics and remote sensing \cite{tsangRS2000,mishchenko2006multiple}, soft matter physics and chemistry \cite{xiaoSciAdv2017}, biomedical engineering \cite{horstmeyer2015guidestar} and so on. Generally, in such media, light is scattered and absorbed in a very complicated way, which not only depends on the features of the incident light (frequency, polarization state, etc.), but also depends heavily on the properties of the media, including the permittivity and permeability of the composing materials together with their frequency and spatial dispersion (if any), the morphology (size, shape and topology) of the inclusions, and their time- and temperature-dependence (if any). Conventional parameters describing light propagation in random media, including the scattering coefficient $\mu_s$, the absorption coefficient $\mu_a$ and the scattering phase function $P(\mathbf{\Omega}',\mathbf{\Omega})$, are defined in the framework of the radiative transfer equation (RTE) \cite{VanRossum1998,tsang2004scattering}. The RTE, although derived phenomenologically in its initial stage, is actually an approximate form of the Bethe-Salpeter equation for classical photons \cite{lagendijk1996resonant,VanRossum1998,tsang2004scattering,sheng2006introduction}. The latter is a rigorous equation accounting for all interference phenomena in the transport of the electromagnetic field correlation function $\langle\mathbf{E}\mathbf{E}^*\rangle$ in random media; it was originally taken from quantum field theory and is exactly equivalent to the Maxwell equations for electromagnetic waves. In particular, for discrete random media composed of a disordered distribution of scatterers, the RTE is valid when the following two conditions are simultaneously satisfied: (1) the scatterers are far apart from each other and each scatterer scatters light as if no other scatterers existed, which is called the independent scattering approximation (ISA); (2) the interference between each multiple scattering trajectory and its time-reversal counterpart is neglected \cite{VanRossum1998,mishchenko2006multiple,sheng2006introduction}. The latter condition is called the ladder approximation because it results in a Feynman diagrammatic representation of the field correlation function that resembles a series of ladders \cite{VanRossum1998,mishchenko2006multiple,sheng2006introduction}. The approximate nature of the RTE has motivated a large body of work on the effect of ``dependent scattering'' over the last few decades, which usually denotes the deviations of RTE (combined with ISA) predictions from experimental and exact numerical results due to electromagnetic wave interference \cite{tien1987thermal,kumar1990dependent,leeJTHT1992,ivezicIJHMT1996,durantJOSAA2007,nguyenOE2013,wangIJHMT2015,maJQSRT2017}. Note that here the ``dependent scattering effect'' is used as a general term for those interference effects that cannot be explained under the independent scattering approximation (ISA) \cite{yamadaJHT1986,aernoutsOE2014}.
This is a very broad definition and differs from the usage of van Tiggelen et al. \cite{Vantiggelen1990JPCM}, who classified as the dependent scattering mechanism those multiple scattering trajectories that visit the same particle more than once and result in a closed loop, i.e., ``recurrent scattering''. Several dependent scattering mechanisms have already been recognized both theoretically and experimentally, including recurrent scattering \cite{Vantiggelen1990JPCM, Cherroret2016}, coherent backscattering (also called weak localization) \cite{mishchenko2006multiple} and the well-known Anderson (strong) localization of light \cite{wiersma1997localization}. Note that these mechanisms are not entirely independent of each other. Typically, the dependent scattering mechanisms are treated in a renormalized way, i.e., by correcting the conventional radiative parameters into effective parameters that include dependent scattering effects, while retaining the form of the RTE and the diffusion equation (an approximate form of the RTE) to solve the transport problem \cite{tsang2004scattering}. Most research up to now has focused on the role of the dependent scattering mechanism in the scattering properties of random media, because its impact on scattering is more prominent and can result in the attractive phenomena mentioned above. Regarding its role in absorption, very few studies exist, to the best of our knowledge \cite{kumar1990dependent,ma1990enhanced,prasherJAP2007,weiAO2012}. Actually, when the particle density is small (typically volume fraction $f_v<0.05$) and the absorption coefficient is much larger than the scattering coefficient, i.e., $\mu_a\gg\mu_s$, this neglect gives rise to no substantial discrepancies, because multiple and dependent scattering is weak. However, when the particle density continues to increase, or $\mu_s$ is comparable with or much larger than $\mu_a$, a careful consideration of the dependent scattering effect on total absorption is necessary, because the interparticle interference of scattered waves may lead to a redistribution of particle absorption. This issue is becoming important with the recent growing interest in nanofluids, as well as other nanoparticle-based solar absorbers, which usually utilize the plasmonic resonances of metallic particles to enhance solar absorption \cite{taylorJAP2013,saidICHMT2013,xuanRSCA2014,hoganNL2014,liuNanoscale2017,gaoJAP2017}, finding applications in concentrating solar power and direct steam generation. In this situation, the scattering coefficient can be comparable with the absorption coefficient for plasmonic particles with diameters approaching 100~nm. Actually, this feature is exploited by some authors, because multiple scattering of light can lengthen the light path in the medium and thus improve the absorption efficiency \cite{hoganNL2014}. Note that ``multiple scattering'' in this context is meant within the framework of the RTE, i.e., the light \textit{intensity} is scattered many times, without any consideration of the wave aspect of the coherent scattering problem. Throughout the rest of this paper, we use ``multiple scattering'' to mean the multiple scattering of \textit{electromagnetic waves}. In fact, in this circumstance, the strong scattering can also lead to a strong modification of the local electromagnetic field away from the external incident field, as a manifestation of the dependent scattering mechanism \cite{kumar1990dependent,ma1990enhanced}.
Wei et al. \cite{weiAO2012} recently investigated the effect of dependent scattering on the absorption coefficient of very dense nanofluids ($f_v$ up to 0.74) containing very small metallic particles (radius $a=15\,\mathrm{nm}$), using several different dependent-scattering models as well as a modified quasicrystalline approximation (QCA) model that they developed. However, their model, as well as the model proposed by Prasher et al. \cite{prasherJAP2007}, can only treat the case in which particle absorption is much larger than scattering, which is only valid for very small particles. Another active field is optofluidics, a combination of micro/nanofluidics and photonics, which exploits the easy reconfigurability and compactness of nanofluids, through flow rate, viscosity, and so on, to obtain desirable optical properties for integrated photonics applications like lasers and sensors, as well as for photocatalysis, solar thermochemistry and solar desalination. One of the most popular optofluidic systems is the nanoparticle colloidal system controlled by microfluidic chips, which shows the necessity of understanding the interplay between multiple scattering and absorption \cite{psaltisNature2006,ericksonNPhoton2011}. In some studies on structural color based on disordered photonic structures, predictions based on the ISA interpret the experimental results poorly, partly due to the extremely high absorption predicted by the ISA for short-range ordered, densely packed nanostructures, which would smear out the reflectance peak that is actually observed in the experiments \cite{xiaoSciAdv2017}. An in-depth physical interpretation of the scattering and absorption mechanisms would also be helpful for these applications. Here we perform a rigorous study of the effect of dependent scattering on absorption for highly scattering particles, attempting to explore its role in detail. A deep understanding of the coherent phenomena is obtained by comparing the (incoherent) radiative transfer model with the coherent coupled dipole model. We also develop a theoretical model using the quasicrystalline approximation (QCA) to obtain dependent-scattering corrected radiative properties, based on the path-integral diagrammatic technique in multiple scattering theory. This model results in more reasonable agreement with the numerical simulations. The present study can provide physical insights for the applications of nanofluids in solar energy concentration, vapor generation using localized light-induced heating, and heat transfer enhancement \cite{zielinskiNL2016}. Our results also have implications for the effect of dipole-dipole interactions on absorption in other highly scattering random media. \section{The Coupled Dipole Method} We consider a slab geometry containing randomly distributed metallic particles in air as the investigated disordered medium, with side lengths $L_x$ and $L_y$ and a thickness $L_z$. The incident light propagates along the $z$ direction. Treating the matrix as air does not lose the essential physics of multiple and dependent scattering, and the treatment can easily be extended to any kind of realistic matrix, such as gelatin, water, etc. We further assume that the scatterers are not too small, i.e., $a$ is larger than a few nanometers, so that the bulk permittivity can be used to describe the small particles and nonlocal (or quantum) effects in the permittivity can be neglected.
Moreover, we work in the long-wavelength limit, where the wavelength $\lambda$ is much larger than the particle size. In this regime, the EM response of an individual dipole is described by the dipole polarizability $\alpha$, which can be expressed through the first-order electric Mie coefficient as \cite{bohrenandhuffman}: \begin{equation}\label{alphamie} \alpha=\frac{6\pi i}{k^3}\frac{m^2j_1(mx)[xj_1(x)]'-j_1(x)[mxj_1(mx)]'}{m^2j_1(mx)[xh_1(x)]'-h_1(x)[mxj_1(mx)]'} \end{equation} where $k=2\pi/\lambda$ is the wavenumber in vacuum, $x=ka=2\pi a/\lambda$ is the size parameter, $a$ is the radius of the spherical particle, and $m=\sqrt{\varepsilon}$ is the complex refractive index of the metal. $j_1$ and $h_1$ are the first-order spherical Bessel and Hankel functions, respectively. \begin{figure} \includegraphics[width=\linewidth]{system_config.jpg} \caption{A random medium with a slab geometry consisting of randomly distributed spherical particles. Incident radiation is denoted by the large, solid arrow.}\label{system_config} \end{figure} Since the EM response of an individual dipole is described by the dipole polarizability $\alpha$, the coupled dipole method (CDM), as an exact solution for dipole scatterers distributed in the investigated random medium, is used to calculate the exact total absorption of all particles. In vacuum, or in any homogeneous, isotropic host medium, the CDM takes the following form \cite{markel1993}: \begin{equation}\label{coupled-dipole} \mathbf{p}_j=\alpha\left[\mathbf{E}_{inc}(\mathbf{r}_j)+k^2\sum_{i=1,i\neq j}^{N}\mathbf{G}_0(\mathbf{r}_j,\mathbf{r}_i)\mathbf{p}_i\right] \end{equation} where $\mathbf{E}_{inc}(\mathbf{r}_j)$ is the incident field impinging on the $j$-th particle. Here we model the incident light as a plane wave, leading to $\mathbf{E}_{inc}(\mathbf{r}_j)=\mathbf{E}_0\exp{(i\mathbf{k}\cdot\mathbf{r}_j)}$ with $\mathbf{k}=k\hat{\mathbf{z}}$. $\mathbf{p}_i$ is the excited dipole moment of the $i$-th particle. $\mathbf{G}_{0}(\mathbf{r}_j,\mathbf{r}_i)$ is the free-space dyadic Green's function, which describes the propagation of the scattered field from the $i$-th dipole to the $j$-th dipole as \cite{markel1993,Cherroret2016} \begin{equation} \begin{split} \mathbf{G}_{0}(\mathbf{r}_j,\mathbf{r}_i)&=\frac{\exp{(ikr)}}{4\pi r}\left(\frac{i}{kr}-\frac{1}{k^2r^2}+1\right)\mathbf{I}\\ &+\frac{\exp{(ikr)}}{4\pi r}\left(-\frac{3i}{kr}+\frac{3}{k^2r^2}-1\right)\mathbf{\hat{r}}\mathbf{\hat{r}}+\frac{\delta({\mathbf{r}})}{3k^2} \end{split} \end{equation} where the Dirac delta term $\delta(\mathbf{r})$ is responsible for the so-called local field inside the scatterers \cite{Cherroret2016}, $\mathbf{I}$ is the identity matrix, and $\hat{\mathbf{r}}$ is the unit vector along $\mathbf{r}=\mathbf{r}_j-\mathbf{r}_i$. After calculating the EM responses of all scatterers from the above multiple scattering equations, the total scattered field of the random cluster of particles at an arbitrary position $\mathbf{r}\neq\mathbf{r}_j$, where $\mathbf{r}_j$ denotes the positions of the scatterers, is computed as \begin{equation} \mathbf{E}_s(\mathbf{r})=k^2\sum_{i=1}^{N}\mathbf{G}_0(\mathbf{r},\mathbf{r}_i)\mathbf{p}_i \end{equation} The total absorption cross section is given by \begin{equation} C_{abs}=k\sum_{j=1}^N\left[\mathrm{Im}(\mathbf{p}_j\cdot\mathbf{E}_{exc,j}^*)-\frac{k^3}{6\pi}|\mathbf{p}_j|^2\right] \end{equation} where $\mathbf{E}_{exc,j}$ is the exciting field impinging on the $j$-th particle, given by $\mathbf{E}_{exc,j}=\mathbf{p}_j/\alpha$.
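For concreteness, a minimal Python sketch of this procedure is given below. It assembles the $3N\times3N$ coupled dipole system by brute force, which is a sketch for small $N$ rather than an optimized solver, and, as a simplifying assumption, uses the long-wavelength polarizability $\alpha=\alpha_0/(1-ik^3\alpha_0/6\pi)$ with $\alpha_0=4\pi a^3(\varepsilon-1)/(\varepsilon+2)$ in place of the Mie-based Eq.~(\ref{alphamie}).
\begin{verbatim}
import numpy as np

def polarizability(eps, a, k):
    # Long-wavelength dipole polarizability with radiative
    # correction (an assumed stand-in for the Mie form above).
    a0 = 4.0 * np.pi * a**3 * (eps - 1.0) / (eps + 2.0)
    return a0 / (1.0 - 1j * k**3 * a0 / (6.0 * np.pi))

def green0(k, rvec):
    # Free-space dyadic Green's function for r != 0
    # (the delta-function local-field term is excluded).
    r = np.linalg.norm(rvec)
    rr = np.outer(rvec, rvec) / r**2
    kr = k * r
    pre = np.exp(1j * kr) / (4.0 * np.pi * r)
    return pre * ((1.0 + 1j / kr - 1.0 / kr**2) * np.eye(3)
                  + (-1.0 - 3j / kr + 3.0 / kr**2) * rr)

def cdm_absorption(pos, eps, a, k):
    # Solve the coupled dipole equations for N particles at
    # positions pos (N x 3 array) and return C_abs; an
    # x-polarized plane wave along z with |E0| = 1 is assumed.
    N = len(pos)
    alpha = polarizability(eps, a, k)
    M = np.zeros((3 * N, 3 * N), dtype=complex)
    E = np.zeros(3 * N, dtype=complex)
    for j in range(N):
        M[3*j:3*j+3, 3*j:3*j+3] = np.eye(3) / alpha
        E[3*j:3*j+3] = np.array([1.0, 0.0, 0.0]) \
            * np.exp(1j * k * pos[j, 2])
        for i in range(N):
            if i != j:
                M[3*j:3*j+3, 3*i:3*i+3] = \
                    -k**2 * green0(k, pos[j] - pos[i])
    p = np.linalg.solve(M, E).reshape(N, 3)
    exc = p / alpha  # exciting field on each particle
    return k * np.sum(np.imag(np.sum(p * exc.conj(), axis=1))
                      - k**3 / (6.0 * np.pi)
                      * np.sum(np.abs(p)**2, axis=1))
\end{verbatim}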
The total absorption of a slab geometry with cross section $S$, whose normal vector is parallel to the propagation direction of the incident light, is then given by $A=C_{abs}/S$ under plane wave illumination. In this study, we regard the CDM as an exact, standard method for solving the absorptance of random media containing dipolar particles, and examine the validity of the theoretical models by comparing their results with it. \section{Independent Scattering Approximation and Radiative Transfer Equation}\label{RTE} For comparison with the exact full-wave model for point scatterers, we present an incoherent Monte Carlo simulation based on the radiative transfer equation. The conventional RTE relies on the radiative properties defined from the independent scattering approximation (ISA), as mentioned in Sec.~\ref{intro}. Under this approximation, the extinction, scattering and absorption coefficients for randomly distributed dipolar particle systems are \begin{equation} \mu_{e}=n_0k\mathrm{Im}(\alpha) \end{equation} \begin{equation} \mu_{s}=n_0\frac{k^4|\alpha|^2}{6\pi} \end{equation} \begin{equation} \mu_{a}=n_0\left[k\mathrm{Im}(\alpha)-\frac{k^4|\alpha|^2}{6\pi}\right] \end{equation} where $n_0=N/V$ is the number density of the nanoparticles. Using these parameters and the RTE, we can determine the total absorption of plasmonic systems without considering dependent scattering effects. Since the RTE is derived under the independent scattering approximation (ISA) for the field and the ladder approximation for the intensity, which together neglect all interference (i.e., dependent scattering) effects, it is typically valid for dilute, uncorrelated media. Here we compare the results from the RTE with the numerically exact results to extract the role of dependent scattering in the absorption of a slab geometry. The calculated radiative properties, including the scattering coefficient, absorption coefficient and phase function, are substituted into the RTE to solve for the hemispherical reflectance and transmittance of the random media slabs shown in Fig.~\ref{system_config}. The RTE for unpolarized light can be expressed as \cite{tsang2000scattering1}: \begin{equation} \frac{dI_\lambda}{ds}=\mu_aI_{b\lambda}-\mu_eI_\lambda+\frac{\mu_s}{4\pi}\int_{4\pi}I_\lambda(\Omega_i)P(\Omega,\Omega_i)d\Omega_i \end{equation} where $I_\lambda$ is the spectral intensity, $I_{b\lambda}$ is the spectral blackbody intensity, $s$ is the transport path length, and $P(\Omega,\Omega_i)$ is the phase function representing the scattering probability from direction $\Omega_i$ to direction $\Omega$. In the current study, the thermal emission term $\mu_aI_{b\lambda}$ can be omitted, since the reflectance and transmittance measurements are carried out at room temperature, where the sample emission is far smaller than the incident light energy. The random medium boundary is treated here as a specular surface. Since the introduction of scatterers always changes the effective refractive index of the random medium, when calculating the specular reflectivity using the Fresnel equations at the geometry boundary, the refractive index is not 1 but the effective index of the random medium. The real part $n_{eff}$ of the effective complex refractive index $m_{eff}=\sqrt{\varepsilon_{eff}}$ is calculated with the Maxwell-Garnett (MG) formula.
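For completeness, with an air host this standard mixing rule for spherical inclusions reads
\begin{equation}
\varepsilon_{eff}=1+\frac{3f_v\beta}{1-f_v\beta},\qquad \beta=\frac{\varepsilon-1}{\varepsilon+2},
\end{equation}
where $f_v$ is the particle volume fraction, and $n_{eff}=\mathrm{Re}(\sqrt{\varepsilon_{eff}})$.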
The MG formula is, to some extent, adequate for determining the real part of the refractive index, even in some photonic crystals and highly scattering media \cite{schuurmansScience1999}. The boundary effects are important in the simulation, especially for highly scattering samples. The scattering phase function for unpolarized radiation is of the Rayleigh type \cite{bohrenandhuffman} \begin{equation}\label{rayleigh_pf} P(\Omega,\Omega_i)=\frac{3}{4}(1+\cos^2{\theta}) \end{equation} where the scattering angle $\theta$ is the angle between the incident direction $\Omega_i$ and the scattering direction $\Omega$. The Monte Carlo (MC) method, a widely used technique that provides a simple but excellent treatment of very complex transfer problems in arbitrary geometries, is implemented here to trace the reflection, scattering and absorption processes of a large number of light energy bundles and thereby solve the RTE. The total absorptance of the medium is then determined by simply adding up the absorbed weights of the energy bundles inside the medium. A popular Monte Carlo code developed by Wang et al. is used in this study \cite{wang1995mcml}, with some modifications to handle the specific phase function based on rejection sampling \cite{wangIJHMT2015}; a minimal sketch of this sampling step is given below. Details of the Monte Carlo method can be found in Ref.~\cite{wang1995mcml}.
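As a concrete illustration of the rejection step just mentioned, consider the following minimal Python sketch; it is an assumed, simplified form of the sampling routine, and our actual modification of the MCML code may differ in its details.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_rayleigh_costheta():
    # Draw cos(theta) from the Rayleigh phase function
    # P(theta) ~ 1 + cos(theta)^2 by rejection sampling:
    # propose mu uniformly on [-1, 1] and accept against
    # the envelope max(1 + mu^2) = 2.
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 2.0) <= 1.0 + mu**2:
            return mu
# The azimuthal angle is drawn uniformly on [0, 2*pi).
\end{verbatim}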
\section{Dependent Scattering Model} \subsection{General theory of multiple scattering}\label{general_theory} When a large number of identical dipolar particles are randomly packed into a disordered medium, it is of great interest to investigate whether the dependent scattering effect can lead to a modification of the absorption behavior that the ISA-RTE cannot predict. There are various models accounting for interference effects in disordered systems containing randomly distributed small particles, such as colloidal media, nanofluids and suspensions, including the well-known Maxwell-Garnett approximation (MGA) derived from the Lorentz-Lorenz relation (LLR), i.e., the local field correction, the quasicrystalline approximation (QCA) \cite{laxPR1952}, the effective medium approximation (EMA) of Roth and of Davis and Schwartz \cite{rothPRB1974,davisPRB1985}, and Persson and Liebsch's lattice-gas coherent potential approximation (LG-CPA) \cite{liebschJPC1983,liebschPRB1984}; other models taking nonlocal, correlation and multipole effects into account were also developed in Refs.~\cite{Felderhof1983,torquatoJCP1984,barreraPRB1988,barreraPRB1989,Claro1991,vasilevskiyPRB1996,felderhofPRE1998}. However, few of these models dealt with the scattering coefficient induced by random fluctuations of the particle density in disordered media, partly because those studies focused on very small particles, for which scattering is much weaker than absorption, especially metallic particles with radii $\sim5\,\mathrm{nm}$. It is now of interest that, for larger metallic particles, the scattering cross section is usually comparable with or even larger than the absorption cross section, so that the dependent scattering effect on absorption is appreciable. In this section, we give analytical expressions for the multiple scattering problem in a random medium containing dipolar particles, based on the quasicrystalline approximation (QCA), to account for the dependent scattering effect on the radiative properties of the present random medium \cite{laxPR1952,tsang2004scattering}. Before proceeding to the theoretical derivation of the radiative properties, let us give a brief overview of multiple scattering theory, and demonstrate how this theoretical framework leads to the quasicrystalline approximation that we use to predict transport properties throughout the rest of the paper \cite{laxRMP1951,laxPR1952,lagendijk1996resonant,VanRossum1998,tsang2004scattering,sheng2006introduction}. In the general case of an infinite nonmagnetic three-dimensional (3D) medium, the spatial distribution of the permittivity $\varepsilon(\mathbf{r})$ is inhomogeneous and can generally be written as $\varepsilon(\mathbf{r})=1+\delta \varepsilon(\mathbf{r})$, where $\delta \varepsilon(\mathbf{r})$ is the fluctuating part of the permittivity due to the random morphology of the inhomogeneous medium; electromagnetic wave propagation in such media is described by the vector Helmholtz equation \cite{lagendijk1996resonant,tsang2004scattering}: \begin{equation} \nabla \times \nabla \times \mathbf{E}(\mathbf{r})-k^{2}\varepsilon(\mathbf{r}) \mathbf{E}(\mathbf{r})=0 \end{equation} Let $k=\omega/c_0$ be the wavenumber in the background medium and $V(\mathbf{r})=k^2\delta \varepsilon(\mathbf{r})=\omega^2\delta \varepsilon(\mathbf{r})/c_0^2$ be the disorder potential inducing electromagnetic scattering, where $c_0$ is the speed of light in the background medium. Then we have an alternative form of the vector Helmholtz equation convenient for random media problems, \begin{equation} \nabla \times \nabla \times \mathbf{E}(\mathbf{r})-k^{2} \mathbf{E}(\mathbf{r})=V(\mathbf{r})\mathbf{E}(\mathbf{r}) \end{equation} To solve this equation, we introduce the dyadic Green's function of the random medium, which satisfies \begin{equation} \nabla \times \nabla \times \mathbf{G}(\mathbf{r},\mathbf{r}')-k^{2} \mathbf{G}(\mathbf{r},\mathbf{r}')=V(\mathbf{r})\mathbf{G}(\mathbf{r},\mathbf{r}')+\mathbf{I}\delta(\mathbf{r},\mathbf{r}') \end{equation} Meanwhile, the Green's function of the homogeneous background medium satisfies \begin{equation} \nabla \times \nabla \times \mathbf{G}_0(\mathbf{r},\mathbf{r}')-k^{2} \mathbf{G}_0(\mathbf{r},\mathbf{r}')=\mathbf{I}\delta(\mathbf{r},\mathbf{r}') \end{equation} Taking the Fourier transform from $\mathbf{r}$ and $\mathbf{r}'$ to the reciprocal-space vectors $\mathbf{p}$ and $\mathbf{p}'$, and writing $V(\mathbf{r},\mathbf{r}')=k^2\delta \varepsilon(\mathbf{r})\delta(\mathbf{r}-\mathbf{r}')$ using the Dirac delta function, we can write down the solution for the dyadic Green's function in the disordered medium as \begin{equation}\label{lp_eq1} \mathbf{G}(\mathbf{p},\mathbf{p}')=\mathbf{G}_0(\mathbf{p},\mathbf{p}')+\mathbf{G}_0(\mathbf{p},\mathbf{p}')\mathbf{V}(\mathbf{p},\mathbf{p}')\mathbf{G}(\mathbf{p},\mathbf{p}') \end{equation} which is known as the Lippmann-Schwinger equation \cite{VanRossum1998,mishchenko2006multiple,sheng2006introduction}.
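Iterating Eq.~(\ref{lp_eq1}) makes its structure transparent: substituting the equation into itself generates the Born series
\begin{equation}
\mathbf{G}=\mathbf{G}_0+\mathbf{G}_0\mathbf{V}\mathbf{G}_0+\mathbf{G}_0\mathbf{V}\mathbf{G}_0\mathbf{V}\mathbf{G}_0+\cdots
\end{equation}
(momentum arguments suppressed for brevity), and resumming all powers of $\mathbf{V}$ into a single operator defines the $T$-matrix.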
Introducing the $T$-matrix $\mathbf{T}$, Eq.~(\ref{lp_eq1}) is transformed into the following form \begin{equation}\label{lp_eq2} \mathbf{G}(\mathbf{p},\mathbf{p}')=\mathbf{G}_0(\mathbf{p},\mathbf{p}')+\mathbf{G}_0(\mathbf{p},\mathbf{p}')\mathbf{T}(\mathbf{p},\mathbf{p}')\mathbf{G}_0(\mathbf{p},\mathbf{p}') \end{equation} It is easily shown that \begin{equation} \mathbf{T}(\mathbf{p},\mathbf{p}')=[\mathbf{I}-\mathbf{V}(\mathbf{p},\mathbf{p}')\mathbf{G}_0(\mathbf{p},\mathbf{p}')]^{-1}\mathbf{V}(\mathbf{p},\mathbf{p}') \end{equation} If the medium contains only one discrete scatterer, $\mathbf{T}(\mathbf{p},\mathbf{p}')$ is the $T$-matrix of that single scatterer. Obviously, for a random medium composed of many scatterers, Eq.~(\ref{lp_eq1}) still applies. However, if each scatterer can be described by its own $T$-matrix, it is more convenient to transform Eq.~(\ref{lp_eq1}) into a form involving only the $T$-matrices of the particles, rather than the ``scattering potential'' $\mathbf{V}$. In this manner, the Lippmann-Schwinger equation for a medium consisting of $N$ discrete scatterers is rewritten as \begin{equation}\label{lp_eq3} \mathbf{G}(\mathbf{p},\mathbf{p}')=\mathbf{G}_0(\mathbf{p},\mathbf{p}')+\mathbf{G}_0(\mathbf{p},\mathbf{p}')\sum_{j=1}^N\mathbf{T}_j(\mathbf{p},\mathbf{p}')\mathbf{G}_0(\mathbf{p},\mathbf{p}') \end{equation} where $\mathbf{T}_j(\mathbf{p},\mathbf{p}')=[\mathbf{I}-\mathbf{V}_j(\mathbf{p},\mathbf{p}')\mathbf{G}_0(\mathbf{p},\mathbf{p}')]^{-1}\mathbf{V}_j(\mathbf{p},\mathbf{p}')$ is the $T$-matrix of the $j$-th scatterer. This equation is also known as the Foldy-Lax equation for the multiple scattering of classical waves \cite{foldyPR1945,laxRMP1951,mishchenko2006multiple}. To obtain a statistically meaningful description of a random medium, it is necessary to take the ensemble average over the full system to eliminate the impact of any specific configuration. Taking the ensemble average of Eq.~(\ref{lp_eq3}), we obtain \begin{equation}\label{lp_eq4} \langle\mathbf{G}(\mathbf{p},\mathbf{p}')\rangle=\mathbf{G}_0(\mathbf{p},\mathbf{p}')+\mathbf{G}_0(\mathbf{p},\mathbf{p}')\langle\mathbf{T}(\mathbf{p},\mathbf{p}')\rangle\mathbf{G}_0(\mathbf{p},\mathbf{p}') \end{equation} where $\langle\mathbf{G}(\mathbf{p},\mathbf{p}')\rangle$ denotes the ensemble-averaged amplitude Green's function, and $\langle\mathbf{T}(\mathbf{p},\mathbf{p}')\rangle$ is the ensemble-averaged $T$-matrix of the full system, which is related to the individual $T$-matrices of the scatterers through an infinite multiple scattering expansion as \begin{equation}\label{lp_eq5} \begin{split} \langle\mathbf{T}(\mathbf{p},\mathbf{p}')\rangle&=\langle\sum_{j=1}^N\mathbf{T}_j(\mathbf{p},\mathbf{p}')\rangle+\langle\sum_{i=1}^{N}\sum_{j\neq i}^{N}\mathbf{T}_i(\mathbf{p},\mathbf{p}_1)\mathbf{G}_0(\mathbf{p}_1,\mathbf{p}_2)\\&\times\mathbf{T}_j(\mathbf{p}_2,\mathbf{p}')\rangle+... \end{split} \end{equation} where the dummy variables $\mathbf{p}_1$ and $\mathbf{p}_2$ are integrated over, a procedure we do not write out explicitly.
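Grouping the terms of this expansion into irreducible parts provides a compact resummation: writing $\bm{\Sigma}$ for the sum of all irreducible diagrams (defined below), the averaged $T$-matrix takes the schematic form
\begin{equation}
\langle\mathbf{T}\rangle=\bm{\Sigma}+\bm{\Sigma}\mathbf{G}_0\bm{\Sigma}+\bm{\Sigma}\mathbf{G}_0\bm{\Sigma}\mathbf{G}_0\bm{\Sigma}+\cdots=\bm{\Sigma}\left[\mathbf{I}-\mathbf{G}_0\bm{\Sigma}\right]^{-1}
\end{equation}
(again with momentum arguments suppressed), and substituting this into Eq.~(\ref{lp_eq4}) resums the series.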
We then obtain the well-known Dyson equation for the coherent (or mean) component of the field \cite{lagendijk1996resonant,VanRossum1998,tsang2004scattering} \begin{equation} \langle\mathbf{G}(\mathbf{p},\mathbf{p}')\rangle=\mathbf{G}_0(\mathbf{p},\mathbf{p}')+\mathbf{G}_0(\mathbf{p},\mathbf{p}')\bm{\Sigma}(\mathbf{p},\mathbf{p}')\langle\mathbf{G}(\mathbf{p},\mathbf{p}')\rangle \end{equation} where $\bm{\Sigma}(\mathbf{p},\mathbf{p}')$ is the so-called self-energy (or mass operator), which contains all irreducible multiple scattering terms in the expansion of the $T$-matrix $\langle\mathbf{T}(\mathbf{p},\mathbf{p}')\rangle$. The irreducible terms are those multiple scattering diagrams that cannot be divided without breaking the particle connections, i.e., visits to the same particle or particle correlations \cite{VanRossum1998}. For a statistically homogeneous medium with translational symmetry, $\bm{\Sigma}(\mathbf{p},\mathbf{p}')=\bm{\Sigma}(\mathbf{p})\delta(\mathbf{p}-\mathbf{p}')$ and $\langle\mathbf{G}(\mathbf{p},\mathbf{p}')\rangle=\langle\mathbf{G}(\mathbf{p})\rangle\delta(\mathbf{p}-\mathbf{p}')$ \cite{sheng2006introduction}. In the momentum representation the free-space dyadic Green's function is $ \mathbf{G}_{0}(\mathbf{p})=-{1}/(k^{2}\mathbf{I}-p^{2}(\mathbf{I}-\mathbf{\hat{p}}\mathbf{\hat{p}}))$, and the averaged amplitude Green's function is then \begin{equation} \langle\mathbf{G}(\mathbf{p})\rangle=-\frac{1}{k^{2}\mathbf{I}-p^{2}(\mathbf{I}-\mathbf{\hat{p}}\mathbf{\hat{p}})-\bm{\Sigma}(\mathbf{p})} \end{equation} where $\mathbf{\hat{p}}=\mathbf{p}/p$ is the unit vector in momentum space. Through this equation, the self-energy $\bm{\Sigma}(\mathbf{p})$ provides a renormalization for electromagnetic wave propagation in random media and determines the effective (renormalized) permittivity as \begin{equation}\label{epsilon_eff} \bm{\varepsilon}_{eff}(\mathbf{p})=\mathbf{I}-\frac{\bm{\Sigma}(\mathbf{p})}{k^2} \end{equation} For a statistically isotropic random medium, the momentum-dependent effective permittivity tensor can be decomposed into a transverse part and a longitudinal part as $ \bm{\varepsilon}(\mathbf{p})=\varepsilon^{\bot}(\mathbf{p})(\mathbf{I}-\mathbf{\hat{p}}\mathbf{\hat{p}})+\varepsilon^{\parallel}(\mathbf{p})\mathbf{\hat{p}}\mathbf{\hat{p}}$, where $\varepsilon^{\bot}(\mathbf{p})=1-\Sigma^{\bot}(\mathbf{p})/{k^{2}}$ and $\varepsilon^{\parallel}(\mathbf{p})=1-\Sigma^{\parallel}(\mathbf{p})/{k^{2}}$ determine the effective permittivities of the transverse and longitudinal modes in momentum space. Therefore, by determining the poles of the amplitude Green's function we can obtain the dispersion relation, which corresponds to the collective excitations of the disordered medium. For transverse waves, the dispersion relation is $K^{ 2}=\varepsilon^{\bot}(K)k^{2}$, where $\mathbf{K}$ is viewed as the effective propagation wave vector in the disordered medium. It is noted that the Dyson equation and the self-energy only characterize coherent electromagnetic field propagation in random media, i.e., the first moment of the electromagnetic field, while a quantity more relevant to our concern is the radiation intensity in random media, which directly determines the phase function of each scattering event in terms of energy transport. This is governed by the Bethe-Salpeter equation \cite{lagendijk1996resonant,VanRossum1998,tsang2004scattering}, which describes the second moment of the electromagnetic field, $\langle \mathbf{G}\mathbf{G}^*\rangle$, in random media.
In operator notation, the Bethe-Salpeter equation is written as \begin{equation} \langle\mathbf{G}\mathbf{G}^*\rangle=\langle\mathbf{G}\rangle\langle\mathbf{G}^*\rangle+\langle\mathbf{G}\rangle\langle\mathbf{G^*}\rangle\bm{\Gamma}\langle\mathbf{G}\mathbf{G}^*\rangle \end{equation} where $\bm{\Gamma}$ is the irreducible vertex, representing the renormalized scattering center for the incoherent part of the radiation intensity due to the random fluctuations of the disordered medium. It can be understood as being related to the differential scattering cross section, and hence to the scattering phase function, in radiative transfer. However, it should be noted that the irreducible vertex is a much more complex quantity, describing all interference phenomena in multiple scattering. \subsection{Quasicrystalline approximation} The primary task at hand is to evaluate the self-energy in order to obtain a sufficiently accurate ensemble-averaged Green's function. As a first-order perturbative approximation, under the independent scattering approximation (ISA) the self-energy is simply the configurational sum of the $t$-matrices of all scatterers, \begin{equation} \bm{\Sigma}^{(1)}(\mathbf{p})=n_0t\mathbf{I} \end{equation} where $n_0$ is the number density of scatterers and $t=-k^2\alpha$ is the \textit{t}-matrix of a dipole scatterer. In the ISA, all particles scatter light independently, as if no other particles existed. This approximation is only valid for very dilutely distributed scatterers. In denser random media, the dependent scattering mechanism originating from wave interference arises. Since the scatterers are randomly distributed in the medium and have finite sizes comparable with the wavelength, it is pivotal to take the inter-particle correlations into account in the dependent scattering mechanism \cite{fradenPRL1990,rojasochoaPRL2004}. This is because, if we regard the particles as hard spheres, the presence of one particle creates an exclusion volume into which other particles cannot penetrate, leading to definite phase differences among the scattered waves that are preserved under the ensemble average. Here we only consider the correlation between pairs of particles, which is described by the pair distribution function $g_2(\mathbf{r}_1,\mathbf{r}_2)=g_2(|\mathbf{r}_1-\mathbf{r}_2|)$ of the hard-sphere type inter-particle correlation, where we have implicitly assumed that the random medium is statistically homogeneous and isotropic. $g_2(r)$ describes the probability of finding a particle at a distance $r$ from a fixed particle. Higher-order positional correlations involving three or more particles are treated as a hierarchy of pair distribution functions, e.g., $g_3(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)=g_2(\mathbf{r}_1,\mathbf{r}_2)g_2(\mathbf{r}_2,\mathbf{r}_3)$, $g_4(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3,\mathbf{r}_4)=g_2(\mathbf{r}_1,\mathbf{r}_2)g_2(\mathbf{r}_2,\mathbf{r}_3)g_2(\mathbf{r}_3,\mathbf{r}_4)$, and so forth. This approximation allows us to solve the propagation problem of the coherent electromagnetic field (or mean field) in random media and is called the quasicrystalline approximation (QCA) \cite{laxPR1952,tsangJAP1982}. The diagrammatic representation of the self-energy under QCA is presented in Fig.\ref{diagram_qca}, which only contains irreducible diagrams of multiple scattering, as mentioned in Section \ref{general_theory}. \begin{figure}[htbp] \flushleft \subfloat{ \includegraphics[width=\linewidth]{diagrammatic_sigma_qca.jpg} } \caption{Diagrammatic representation of $\bm{\Sigma}$ under the quasicrystalline approximation (QCA).
The hollow circle denotes the single-particle $t$-matrix, the solid line denotes the free-space Green's function (propagator), the dotted line denotes the particle correlation function $h_2(r)=g_2(r)-1$, and the large circle with a $\times$ inside denotes $\bm{\Sigma}$.}\label{diagram_qca} \end{figure} In the low-frequency limit for dipole particles, the self-energy under QCA is expressed self-consistently in reciprocal space, according to Fig.\ref{diagram_qca}, as \begin{equation} \bm{\Sigma}\left(\mathbf{p} \right)=n_0t \mathbf{I}+n_0t\int\frac{d\mathbf{p}'}{(2\pi)^3}\,\mathbf{G}_0\left( \mathbf{p}' \right)H_2\left( \mathbf{p}-\mathbf{p}'\right)\bm{\Sigma}\left(\mathbf{p}' \right) \end{equation} Let $\bm{\Sigma}\left(\mathbf{p} \right)=\Sigma\mathbf{I}$ be independent of $\mathbf{p}$, which is applicable for small particles, meaning that the scattering properties are local with no spatial dispersion; we then have \begin{equation}\label{qca1} \Sigma\mathbf{I}=n_0t\mathbf{I}-\frac{n_0t\Sigma}{3k^2}\mathbf{I}+n_0t\Sigma\int{d\mathbf{r}\,\mathrm{PV}\mathbf{G}_0\left( \mathbf{r} \right)h_2(r)} \end{equation} where we have split off the singular part of the Green's function and denote the principal value of the Green's function by $\mathrm{PV}\mathbf{G}_0\left( \mathbf{r} \right)$. The integral has also been transformed from the reciprocal domain to the spatial domain. The self-energy is then solved as \begin{equation} \Sigma=\frac{n_0t}{1+n_0t/(3k^2)+2n_0t\int{drr\exp(ikr)\left[g_2(r)-1\right]}/3} \end{equation} Therefore, according to Eq.(\ref{epsilon_eff}), the effective propagation constant $K$ is given by \begin{equation} K^2=k^2-\frac{1}{1/(n_0t)+1/(3k^2)+2\int{drr\exp(ikr)\left[g_2(r)-1\right]}/3} \end{equation} \begin{figure}[htbp] \flushleft \subfloat{ \includegraphics[width=\linewidth]{diagrammatic_gamma_qca.jpg} } \caption{Diagrammatic representation of the irreducible intensity vertex $\bm{\Gamma}$ under QCA. The dashed lines denote that the connected particles are the same one.}\label{gamma_diagram} \end{figure} After calculating the effective propagation constant for the coherent wave, we proceed to derive the differential scattering cross section for the incoherent waves. It is determined by the irreducible intensity vertex $\bm{\Gamma}$, whose diagrammatic representation under QCA is given in Fig.\ref{gamma_diagram}. Again, here we only consider two-particle statistics. Note that the simplification of the derivation using $\Sigma$ in the second line of Fig.\ref{gamma_diagram} is due to our local assumption for the scattering process, which reduces the correlation function to only two particle positions. This simplification is important and is, to some extent, equivalent to the so-called ``distorted Born approximation'' \cite{maAO1988,tsangRS2000,tsang2004scattering}. The irreducible (intensity) vertex is solved as \begin{equation}\label{gamma_qca} \Gamma(\mathbf{p},\mathbf{p}')=n_0|c|^2+n_0^2|c|^2H_2(\mathbf{p}-\mathbf{p}') \end{equation} where $c=\Sigma/n_0$, and $H_2(\mathbf{q})$ is defined as the Fourier transform of the pair correlation function $h_2(r)=g_2(r)-1$, \begin{equation}\label{Hq_eq} H_2(\mathbf{q})=\int_{-\infty}^{\infty}d^3\mathbf{r}\,[g_2(r)-1]\exp{(-i\mathbf{q}\cdot\mathbf{r})} \end{equation} Afterwards, we take the on-shell approximation, which implies that photons propagate with a fixed momentum value $p=K$ and that excitations with other momentum values are negligible, like a shell in momentum (reciprocal) space. This approximation also corresponds to a mean Green's function with a sharp Dirac-type peak at $p=K$.
It gives \begin{equation}\label{gamma_qca2} \Gamma(K\hat{\mathbf{p}},K\hat{\mathbf{p}}')=n_0|c|^2+n_0^2|c|^2H_2(K\hat{\mathbf{p}}-K\hat{\mathbf{p}}') \end{equation} Since we only consider transverse electromagnetic waves, the transverse component of the irreducible vertex plays the role of the renormalized scattering center for the transverse electromagnetic intensity, which is given by \cite{Cherroret2016} \begin{equation}\label{gamma_qca3} \Gamma^{\bot}(K\hat{\mathbf{p}},K\hat{\mathbf{p}}')=(\mathbf{I}-\hat{\mathbf{p}}\hat{\mathbf{p}})\Gamma(K\hat{\mathbf{p}},K\hat{\mathbf{p}}')(\mathbf{I}-\hat{\mathbf{p}}'\hat{\mathbf{p}}') \end{equation} For isotropic media, the scattering properties (scattering coefficient, phase function) do not depend on the incident direction but only on the angle between the incident and scattering directions, i.e., the polar scattering angle $\theta_s=\arccos(\hat{\mathbf{p}}\cdot\hat{\mathbf{p}}')$ and the azimuthal angle $\varphi_s$, which depends on the definition of the local spherical coordinate frame with respect to the incident direction $\hat{\mathbf{p}}'$. Here the isotropy is manifested in the expression of the pair correlation function $H_2(K\hat{\mathbf{p}}-K\hat{\mathbf{p}}')$, which only depends on the difference between $\hat{\mathbf{p}}$ and $\hat{\mathbf{p}}'$. For unpolarized light transport in random media, azimuthal symmetry is preserved. Therefore, the differential scattering cross section can be obtained by integrating over $\varphi_s$ with the incident direction $\hat{\mathbf{p}}'$ fixed \cite{barabanenkov1975multiple}, \begin{equation}\label{pf_qca1} \begin{split} \frac{d\sigma_s'}{d\theta_s}=\frac{1}{(4\pi)^2}\int_{0}^{2\pi}\Gamma^{\bot}(K\hat{\mathbf{p}},K\hat{\mathbf{p}}')d\varphi_s \end{split} \end{equation} This leads to the following differential scattering cross section, which does not depend on the specific incident direction $\hat{\mathbf{p}}'$ but only on the relative polar scattering angle $\theta_s$ between $\hat{\mathbf{p}}$ and $\hat{\mathbf{p}}'$: \begin{equation}\label{pf_qca2} \begin{split} \frac{d\sigma_s'}{d\theta_s}=\frac{n_0|c|^2(1+\cos^2\theta_s)}{16\pi n_{eff}}\left[1+n_0H_2(2K\sin(\theta_s/2))\right] \end{split} \end{equation} The effective refractive index $n_{eff}$ in the denominator indicates that the differential scattering coefficient is defined with respect to the coherent transmitted energy flux, rather than the incident energy flux. It is given by the effective propagation constant as \begin{equation} n_{eff}=\frac{\mathrm{Re}K}{k} \end{equation} Therefore, the scattering coefficient, defined as the scattering cross section per unit volume, is given by\footnote{In the long-wavelength limit, the structure factor can be replaced by its $\mathbf{q}=0$ value, leading to $\mu_s'=n_0|c|^2S(\mathbf{q}=0)/(6\pi)$. However, this is not valid for the present case with $k_0a\sim 0.6$.} \begin{equation}\label{kappas_qca} \begin{split} \mu_s'&=\int_0^\pi\frac{d\sigma_s'}{d\theta_s}\sin{\theta_s}\,d\theta_s=\int_0^\pi d\theta_s\sin{\theta_s}\frac{n_0|c|^2(1+\cos^2\theta_s)}{16\pi n_{eff}}\\&\times\left[1+n_0H_2(2K\sin(\theta_s/2))\right] \end{split} \end{equation} Since the spheres are hard and impenetrable, without any other inter-particle potentials, we can use the Percus-Yevick hard-sphere model for the pair correlation function \cite{Baxter1968,tsang2004scattering2}.
In this model, $H_2(\mathbf{q})$ is calculated from the Fourier transform $\tilde{c}(\mathbf{q})$ of the Percus-Yevick direct correlation function as \begin{equation} H_2(\mathbf{q})=\frac{\tilde{c}(\mathbf{q})}{1-n_0\tilde{c}(\mathbf{q})} \end{equation} where \begin{equation} \begin{split} n_0\tilde{c}(\mathbf{q})&=n_0\tilde{c}(q)=24f_v\Big[\frac{\alpha+\beta+\delta}{u^2}\cos u-\frac{\alpha+2\beta+4\delta}{u^3}\sin u\\&-2\frac{\beta+6\delta}{u^4}\cos u+\frac{2\beta}{u^4}+\frac{24\delta}{u^5}\sin u+\frac{24\delta}{u^6}(\cos u-1)\Big] \end{split} \end{equation} in which $u=2qa$, $\alpha=(1+2f_v)^2/(1-f_v)^4$, $\beta=-6f_v(1+f_v/2)^2/(1-f_v)^4$, $\delta=f_v(1+2f_v)^2/[2(1-f_v)^4]$, and $f_v$ is the volume fraction of the identical spherical particles. However, the self-energy and the irreducible vertex under QCA do not exactly fulfill the so-called Ward-Takahashi identity for energy conservation \cite{tsang2004scattering}; hence we use another method to determine the absorption coefficient. Since in the present case the self-energy is local and independent of the wavevector, the quantity $c$ can be understood as an effective scattering center, or a renormalized scatterer. This quantity gives a modification of the exciting field impinging on the scatterers by the factor $c/t$. In this way, the absorption coefficient is calculated from the single-particle absorption cross section as \begin{equation} \mu_a'=\frac{n_0}{n_{eff}}\Big|\frac{c}{t}\Big|^2\left(-\frac{\mathrm{Im}t}{k}-\frac{|t|^2}{6\pi}\right) \end{equation} The quantity in the parentheses is the absorption cross section of a single scatterer. Again, the effective refractive index $n_{eff}$ in the denominator plays the same role as in Eq.(\ref{pf_qca1}). We have therefore established a dependent scattering model for the radiative properties of randomly distributed dipole scatterers, based on QCA and multiple scattering theory; this is the main theoretical contribution of the present paper. This model is not equivalent to the conventional QCA in the low-frequency limit used by many authors \cite{ma1990enhanced,wen1990dense,prasherJAP2007,weiAO2012}. In this model, thanks to the diagrammatic technique of multiple scattering theory, we can formally and rigorously derive the irreducible intensity vertex under QCA as well as the dependent absorption coefficient, and take the anisotropic scattering phase function into account. It is thus still applicable to larger dipolar particles with size parameter $x=ka$ approaching 1, for which particle correlations play a more important role in the radiative properties than in the case of very small particles. \section{Results and Discussions}\label{results} \subsection{Dependent scattering effect on radiative properties} \begin{figure}[htbp] \includegraphics[width=0.8\linewidth]{single_particle.jpg} \caption{Extinction efficiency $Q_{ext}$ of a single silver spherical particle with radius $a=50\,\mathrm{nm}$ at different wavelengths, in which the full Mie theory result (black solid line) is compared with the electric dipole approximation (red solid line).}\label{singleqext} \end{figure} Here we investigate a random medium consisting of identical silver spheres whose radius is set to $a=50\,$nm, a typical size of silver nanoparticles. The particles are made of Ag, whose permittivity is described by the Drude formula \begin{equation} \varepsilon(\omega)=\varepsilon_r-0.73\frac{\omega_p^2}{\omega(\omega+i\gamma)} \end{equation} with $\varepsilon_r=5.45$, $\omega_p=1.72\times10^{16}\,\text{rad/s}$, and decay rate $\gamma=8.35\times10^{13}\,\text{rad/s}$ \cite{shore2012complex}.
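To make the above model concrete, the following minimal numerical sketch (our own illustration; all function and variable names are ours) assembles the Drude permittivity, the dipole $t$-matrix, and the QCA effective propagation constant. Two simplifications are assumed and not taken from the paper: the radiatively corrected quasistatic polarizability stands in for the Mie-based Eq.(\ref{alphamie}), and the simple hole correction $h_2(r)=-1$ for $r<2a$ replaces the full Percus-Yevick $g_2(r)$ in the pair-correlation integral:
\begin{verbatim}
import numpy as np

C0 = 2.998e8                          # speed of light [m/s]

def drude_eps(lam):
    # Drude permittivity of Ag with the parameters quoted above
    w = 2 * np.pi * C0 / lam
    return 5.45 - 0.73 * (1.72e16)**2 / (w * (w + 1j * 8.35e13))

def polarizability(lam, a):
    # quasistatic dipole polarizability with radiative correction
    # (an assumption; the paper uses a Mie-based alpha instead)
    k = 2 * np.pi / lam
    eps = drude_eps(lam)
    a0 = 4 * np.pi * a**3 * (eps - 1) / (eps + 2)
    return a0 / (1 - 1j * k**3 * a0 / (6 * np.pi))

def qca_K(lam, a, fv, nr=4000):
    # effective propagation constant K under QCA with the
    # hole correction h2(r) = -1 for r < 2a
    k = 2 * np.pi / lam
    n0 = fv / (4 * np.pi * a**3 / 3)  # particle number density
    t = -k**2 * polarizability(lam, a)
    r = np.linspace(1e-12, 2 * a, nr)
    J = np.sum(r * np.exp(1j * k * r) * (-1.0)) * (r[1] - r[0])
    K2 = k**2 - 1 / (1 / (n0 * t) + 1 / (3 * k**2) + 2 * J / 3)
    return np.sqrt(K2)

K = qca_K(0.5e-6, 50e-9, 0.2)
print("n_eff =", K.real / (2 * np.pi / 0.5e-6))
\end{verbatim}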
The total extinction efficiency, defined as $Q_{ext}=C_{ext}/(\pi a^2)$, is calculated through Mie theory to take higher-order multipoles into account, as shown in Fig.\ref{singleqext}. The extinction efficiency under the electric dipole approximation is also calculated, as $Q_{ext}=k\mathrm{Im}\alpha/(\pi a^2)$, where $\alpha$ is the electric dipole polarizability obtained from Eq.(\ref{alphamie}). It can be observed that for wavelengths $\lambda>0.35\,\mathrm{\mu m}$, the electric dipole approximation is able to describe the single scattering properties of the silver nanoparticle. Therefore, except for very dense systems, where near-field electromagnetic interactions can induce the coupling of higher-order multipole modes between different particles, the present medium can be well described by a cluster of electric dipoles. Under this condition, the numerical CDM can be regarded as an exact treatment of electromagnetic wave propagation in such a medium, and our theoretical model can be applied. Here we focus on the operating wavelength $\lambda=0.5\,\mathrm{\mu m}$, where the size parameter of the nanoparticle is $x=2\pi a/\lambda=0.628$. A single silver particle has scattering and absorption efficiencies of $Q_{sca}=1.4596$ and $Q_{abs}=0.0929$, respectively, i.e., it is highly scattering and weakly absorbing. The scattering and absorption coefficients of the random media with different nanoparticle volume fractions, calculated using ISA and QCA, are presented in Fig.\ref{model_properties}. Even though $x$ is not much smaller than 1, as assumed in our model, we expect our theoretical model assuming local radiative properties to remain applicable. In Fig.\ref{model_properties}, it is evident that ISA exhibits a linear increase of the scattering and absorption coefficients with volume fraction, while QCA predicts a much smaller scattering coefficient as the volume fraction increases. The two models only overlap at very low volume fractions ($f_v\sim0.01$). We only investigate volume fractions up to $0.2$, since for higher densities two-particle statistics alone cannot correctly capture the multi-particle correlations in random media, rendering QCA invalid \cite{westJOSAA1994}. When $f_v=0.2$, the scattering coefficient under ISA is almost four times larger than that under QCA, demonstrating that dependent scattering greatly reduces the scattering coefficient in the present case. On the other hand, the difference in the absorption coefficient between the two models is less significant: the dependent scattering mechanism only slightly reduces the absorption coefficient. \begin{figure}[htbp] \subfloat{ \label{musca} \includegraphics[width=0.8\linewidth]{mus.jpg} } \hspace{0.01in} \subfloat{ \label{muabs} \includegraphics[width=0.8\linewidth]{mua} } \caption{ Radiative properties: (a) scattering coefficient $\mu_s$ and (b) absorption coefficient $\mu_a$, calculated from the independent scattering approximation (ISA) and the quasicrystalline approximation (QCA) for different volume fractions $f_v$ of nanoparticles.}\label{model_properties} \end{figure} \begin{figure}[htbp] \subfloat{ \label{asymfac} \includegraphics[width=0.8\linewidth]{asymfac.jpg} } \hspace{0.01in} \subfloat{ \label{phasefunc} \includegraphics[width=0.8\linewidth]{phasefunc} } \caption{(a) Asymmetry factor $g$ calculated by QCA for different volume fractions $f_v$ of nanoparticles, where for ISA $g$ is constantly zero. (b) Phase function under ISA and QCA for the $f_v=0.2$ case.}\label{anisotropy} \end{figure} Another feature of dependent scattering is that it alters the scattering phase function, which can be quantified by the asymmetry factor evaluated in the sketch below.
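A minimal sketch of this evaluation (our illustration, reusing \texttt{qca\_K} from the previous sketch and implementing the Percus-Yevick structure factor as written in the previous subsection):
\begin{verbatim}
def py_S(q, a, fv):
    # Percus-Yevick structure factor S(q) = 1 + n0*H2(q)
    u = np.maximum(2 * q * a, 5e-2)   # guard cancellation as u -> 0
    al = (1 + 2 * fv)**2 / (1 - fv)**4
    be = -6 * fv * (1 + fv / 2)**2 / (1 - fv)**4
    de = fv * (1 + 2 * fv)**2 / (2 * (1 - fv)**4)
    nc = 24 * fv * ((al + be + de) * np.cos(u) / u**2
                    - (al + 2 * be + 4 * de) * np.sin(u) / u**3
                    - 2 * (be + 6 * de) * np.cos(u) / u**4
                    + 2 * be / u**4
                    + 24 * de * np.sin(u) / u**5
                    + 24 * de * (np.cos(u) - 1) / u**6)
    return 1 / (1 - nc)               # = 1 + n0*H2(q)

def asymmetry_factor(lam, a, fv):
    # g = <cos(theta_s)> of the QCA phase function, Eq. (pf_qca2);
    # the quadrature measure cancels in the ratio of discrete sums
    K = qca_K(lam, a, fv).real
    th = np.linspace(0, np.pi, 2000)
    w = (1 + np.cos(th)**2) * py_S(2 * K * np.sin(th / 2), a, fv)
    w *= np.sin(th)
    return np.sum(w * np.cos(th)) / np.sum(w)

print(asymmetry_factor(0.5e-6, 50e-9, 0.2))   # negative g:
                                              # enhanced backscattering
\end{verbatim}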
In Fig.\ref{asymfac} we show the scattering asymmetry factor $g$, defined as the mean cosine of the scattering phase function, predicted by ISA and QCA as a function of volume fraction. In ISA, the scattering phase function is always of the Rayleigh type, and thus $g$ remains zero. Meanwhile, QCA gives rise to a negative $g$, which decreases further as the volume fraction of particles grows. This suggests that dependent scattering enhances the backscattering of radiation in the present medium. This feature arises from the structural correlations between particles, which enter the phase function in the form of the structure factor $S(\theta_s)=1+n_0H_2(2K\sin(\theta_s/2))$ in Eq.(\ref{pf_qca2}). It is usually referred to by many authors as ``short-range order induced Bragg scattering'' in the backscattering direction \cite{rojasochoaPRL2004}, which has recently found application in structural coloration \cite{magkiriadou2012disordered}. In Fig.\ref{phasefunc}, we further plot the phase functions under ISA and QCA for $f_v=0.2$, where the backscattering enhancement is appreciable. \subsection{Dependent scattering effect on total absorptance} \begin{figure}[htbp] \includegraphics[width=0.8\linewidth]{Lz1abs.jpg} \caption{Variation of absorptance with nanoparticle volume fraction $f_v$ under a fixed slab thickness $L_z=1\,\mathrm{\mu m}$.}\label{absorptance1} \end{figure} To verify the effectiveness of the QCA model for radiative properties, we insert the calculated scattering and absorption coefficients and phase function into the RTE to obtain the total absorptance of the random medium slab. We denote the combined models as ISA-RTE and QCA-RTE. In the following calculations we fix $L_x=L_y=5\,\mathrm{\mu m}$, which is sufficiently large compared with the operating wavelength ($\lambda=0.5\,\mathrm{\mu m}$) to reduce side diffraction effects. The thickness of the slab, $L_z$, is varied from $0.1\,\mathrm{\mu m}$ to $2\,\mathrm{\mu m}$ to cover a sufficiently broad range of optical thicknesses, from a monolayer of particles to optically thick slabs. The incident light propagates along the $z$ direction, as shown in Fig.\ref{system_config}. The results are then compared with the full-wave CDM simulations. We first fix the thickness of the slab at $L_z=1\,\mathrm{\mu m}$ and vary the volume fraction of nanoparticles; the comparison is shown in Fig.\ref{absorptance1}. It can be observed that when $f_v<0.05$, the three methods give almost the same result. As $f_v$ continues to grow, ISA starts to deviate, while QCA still predicts the absorptance correctly. However, when $f_v>0.1$, neither method is capable of exactly reproducing the CDM results. Nevertheless, in high-density random media, QCA gives much more reasonable predictions of the absorptance than ISA. In Fig.\ref{absorptance2}, we show the thickness dependence of the absorptance for the three methods, where the thickness $L_z$ varies from $0.1\,\mathrm{\mu m}$ to $1.2\,\mathrm{\mu m}$ and the volume fraction of particles is fixed at $f_v=0.2$. It is found that when $L_z=0.1\,\mathrm{\mu m}$, ISA-RTE converges to CDM while QCA-RTE deviates. This is because in a particle monolayer, multiple scattering of electromagnetic waves is weak and the absorptance is dominated by the single scattering regime. On the other hand, our QCA model is established for a sufficiently large random medium in which the Dyson and Bethe-Salpeter equations describing multi-particle scattering processes apply; it is therefore actually invalid, and leads to errors, for such a monolayer.
When $L_z$ increases, it is seen that QCA yields an absorptance closer to CDM than ISA over the entire thickness range, especially for $L_z<0.4\,\mathrm{\mu m}$. However, the deviations of both models are still very large for thicker slabs, and they grow further with thickness. This indicates that for very dense, highly scattering random media of appreciable optical thickness (under QCA, for $L_z=0.4\,\mathrm{\mu m}$, the optical thickness is $\tau=L_z(\mu_a+\mu_s)=0.591$), where the multiple scattering mechanism prevails, the QCA model is still not sufficient to predict the total absorptance, although it already performs much better than ISA. \begin{figure}[htbp] \includegraphics[width=0.8\linewidth]{fv02abs.jpg} \caption{Variation of absorptance with slab thickness $L_z$ under a fixed nanoparticle volume fraction $f_v=0.2$.}\label{absorptance2} \end{figure} In particular, in such thick and dense random media, the high particle density introduces several mechanisms that QCA cannot capture or simply neglects. The first mechanism is the high-order correlations among three or more particles, which QCA treats approximately through a cascading multiplication of two-particle statistics. The second is recurrent scattering, in which multiple scattering trajectories visit the same particle more than once; this is ignored in QCA. Moreover, a high optical thickness may also lead to very long multiple scattering paths, which are responsible for position-dependent radiative properties, for example the position-dependent diffusion constant investigated by many authors \cite{tiggelenPRL2000,yamilovPRL2014}. The effects of these mechanisms on light absorption in random media need further exploration and are beyond the scope of the present paper. \section{Conclusions} In this paper we have theoretically and numerically demonstrated the effect of dependent scattering on absorption in disordered media consisting of highly scattering scatterers. Based on the diagrammatic technique of multiple scattering theory, we developed a theoretical model using the quasicrystalline approximation (QCA) to derive dependent-scattering corrected radiative properties. We investigated a disordered medium composed of randomly distributed silver nanoparticles with radius $a=50\,\text{nm}$ operating at $\lambda=0.5\,\mathrm{\mu m}$, which have large single scattering cross sections and very small absorption cross sections. Compared to the independent scattering approximation (ISA), we theoretically find that the dependent scattering mechanism strongly reduces the overall scattering coefficient and slightly lowers the absorption coefficient. This reduction grows as the volume fraction of nanoparticles increases. Moreover, we demonstrate that dependent scattering leads to a backscattering enhancement in the scattering phase function, due to the local short-range correlations between particles. We further reveal that all these effects of dependent scattering on radiative properties are reflected in the total absorptance of the random media. By comparing the full-wave coupled-dipole method (CDM) (regarded as an exact benchmark in the present case), QCA-RTE, and ISA-RTE, we find that the dependent scattering effect has a strong impact on the total absorptance. When $f_v<0.05$, the three methods give almost the same result. As $f_v$ continues to grow, ISA-RTE starts to deviate, while QCA-RTE still predicts the absorptance correctly. When $f_v>0.1$, neither method is capable of exactly reproducing the CDM results.
However, QCA-RTE still performs much better than ISA-RTE in predicting the absorptance in high-density random media. We thus conclude that QCA-RTE is a feasible method for correctly treating absorption in random media in the dependent scattering regime, provided the particle density is not too high. \section*{Acknowledgments} We acknowledge the financial support of the National Natural Science Foundation of China (Grant Nos. 51636004 and 51476097), the Shanghai Key Fundamental Research Grant (Grant No. 16JC1403200), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51521004).
{ "timestamp": "2017-12-15T02:04:00", "yymm": "1712", "arxiv_id": "1712.05102", "language": "en", "url": "https://arxiv.org/abs/1712.05102" }
\section{Introduction} Recent advances in \ac{ML} have paved the way for implementing systems that compute compact and fixed-size embeddings of music data \cite{hu2014,wang2015,choi2017,lee2017a,lee2017b,park2017}. The design of these systems is usually motivated by the pursuit of automatic inference of music semantics from audio, by describing it in a learned semantic space. However, most of these systems are limited by the availability of labeled datasets and, more importantly, are limited to learning patterns in data solely from the artifacts themselves, i.e., solely from fixed (objective) descriptions of the object of the subjective experience. Although audio content is important and, to a certain extent, empirically proven to be effective in representing music semantics, it does not account for all factors involved in music cognition. Therefore, since music is ultimately in the mind, understanding the process of its perception by focusing on the listener is necessary to effectively model music semantics \cite{widmer2016}. In order to address the lack of attention to the listener in previous \ac{MIR} approaches to music semantics, we focus on the neural firing patterns that are manifested by the human brain during the perception of music artifacts. These patterns can be recorded using \ac{EEG} technology and effectively employed to study music semantics. Previous research has applied \acp{EEG} to study the correlations between neural activity and music, yielding important insights, namely regarding appropriate electrode positions and spectral frequency bands \cite{schmidt2001,altenmuller2002,sammler2007,lin2010,duan2012,hadjidimitriou2012,daly2014,thammasan2014,thammasan2016}. We present a generic framework to model multimedia semantics. As an implementation, we leverage multi-view models that learn a space of shared embeddings between \acp{EEG} and the chosen medium. We instantiate this framework in the context of music semantics by proposing a novel end-to-end \ac{NN} architecture for processing audio and \acp{EEG}, making use of the \ac{DCCA} loss objective. The learned space is capable of capturing the semantics of music audio by using subjective \ac{EEG} signals as regularizers during its training. In this sense, the framework defines music semantics as a by-product of the interplay between audio artifacts and the perception of listeners, being only theoretically limited by the measuring precision of the \acp{EEG}. We evaluate the effectiveness of this model in a transfer learning setting, using it as a feature extractor in a proxy task: music audio-lyrics cross-modal retrieval. We show that the proposed framework achieves very promising results when compared against standard features and a state-of-the-art model, while using much less data during training. We also discuss extensions to this specific instance of the framework that can further improve its performance.
This paper is organized as follows: Sections \ref{sec:audio} and \ref{sec:eeg} review related work on modeling audio semantics and on \ac{EEG}-based \ac{MIR}, respectively; Section \ref{sec:dcca} introduces \ac{DCCA}, and Section \ref{sec:architecture} proposes our novel \ac{NN} architecture for modeling audio and \ac{EEG} correlations; Section \ref{sec:dataset} explains the \ac{EEG} data collection process; Section \ref{sec:experiments} details the experimental setup; Section \ref{sec:results} presents and discusses the results, as well as the advantages of this approach to modeling music semantics; and Section \ref{sec:conclusions} draws conclusions and proposes future work. \section{Music Audio Semantics} \label{sec:audio} Several proposed approaches can be used for modeling music by estimating an audio latent space. Gaussian-\ac{LDA} \cite{hu2014}, proposed as a continuous-data extension of \ac{LDA} \cite{blei2003}, has been successfully applied in an audio classification scenario. This unsupervised approach describes each document by a mixture of latent Gaussian topics that are shared among a collection of documents. Even though this approach requires no labeling, it has yet to be proven able to infer robust music features. Music audio has also been modeled with Gaussian mixtures in the context of \ac{MER} \cite{wang2015}, where the affective content of music is described by a probability distribution in the continuous space of the \ac{AV} plane \cite{russell1980,thayer1989}. This probabilistic approach is motivated by the fact that emotion is subjective in nature. However, this study focuses only on the prediction of affective content and requires expensive annotation data. In order to overcome the issue of expensive data annotation, a \ac{CNN} was trained using only artist labels \cite{park2017}, which are usually available and require no annotation. This system was shown to produce robust features in transfer learning contexts. However, even though the assumption that artist information guides the learning of a meaningful semantic space is usually valid, it is not powerful enough, since it breaks down in the presence of polyvalent musicians. Even when using expensive labeling, such as in \cite{choi2017}, where ``semantic'' tags were used to learn the semantic space, there are still problems, such as the granularity and abstraction level of the tags not being consistent or aligned with the corresponding audio that is responsible for the presence of those tags. Heuristic attempts to solve the problem of granularity and abstraction level were proposed in \cite{lee2017b}, where several models are trained, each operating on a different time-scale, and the final embeddings consist of an aggregation of the embeddings from all models. However, the label alignment issue remains unresolved, the feature aggregation step is far from optimal, and it is virtually impossible to find and cover every appropriate time-scale. Our framework differs from these related works, which suffer from the previously mentioned drawbacks. As opposed to relying on explicit labels, we rely on measurements of the perception of listeners. We can think of this paradigm as automatic and direct ``labeling'' by the brain, bypassing faulty conscious labeling decisions and the ``tyranny'' of words or categories. Thus, we no longer have the labeling taxonomy issue of choosing between too coarse or too granular categories, which lead to insufficiently rich or ambiguous categories, respectively \cite{posner2005,yang2011}.
We also do not need to resort to dimensional models of emotion and, thus, to specify which psychological dimensions are worth modeling \cite{russell1980,thayer1989}. Furthermore, since both audio and \ac{EEG} signals unfold in time, we have a natural and precise time alignment between the two and, thus, a more fine-grained and reliable ``annotation'' of music audio. \section{\ac{EEG}-based \ac{MIR}} \label{sec:eeg} The link between brain signals and music perception has been previously explored in \ac{MER} using \ac{EEG} data. Several studies reduce this problem to finding correlations between music emotion annotations and the time-frequency representation of the \acp{EEG} in five frequency bands (in Hz): $\delta$ ($<4$), $\theta$ ($\ge4$ and $<8$), $\alpha$ ($\ge8$ and $<14$), $\beta$ ($\ge14$ and $<32$), and $\gamma$ ($\ge32$). In \cite{thammasan2014}, 3 subjects annotated 6 clips on a 2D emotion space and had their 12-channel \acp{EEG} recorded; \ac{SVM} classification achieved accuracies of 90\% and 86\% for arousal and valence, respectively (binary classification). In \cite{altenmuller2002}, 12-channel \acp{EEG} were recorded from 16 subjects and 160 clips, revealing correlations of lateralised and bilateralised patterns with positive and negative emotions, respectively. In \cite{duan2012}, 62-channel \ac{LDS}-smoothed \ac{DASM} features extracted from 5 subjects and 16 tracks achieved 82\% classification accuracy. In \cite{daly2014}, the pre-frontal and parietal cortices were correlated with emotion distinction in an experiment involving 31 subjects and 110 excerpts, using 19-channel \acp{EEG}. 82\% accuracy was achieved in 4-way classification with 32-channel \ac{DASM} features extracted from 26 subjects and 16 clips in \cite{lin2010}. Correlations were also found between mid-frontal activation and dissonant music excerpts in a 24-channel \ac{EEG} experiment involving 18 subjects and 10 clips in \cite{sammler2007}. In \cite{schmidt2001}, 59 subjects listening to 4 excerpts provided the 4-channel \ac{EEG} data, which revealed that asymmetrical frontal activation and overall frontal activation are correlated with valence and arousal perception, respectively. 14-channel \acp{EEG} extracted from 9 subjects who listened to 75 clips showed correlations with emotion recognition in the frontal cortex in \cite{hadjidimitriou2012}. Binary emotion classification over time was performed in \cite{thammasan2016}, where average accuracies of 82.8\% and 87.2\% were achieved for arousal and valence, respectively. Not all studies report the same correlations nor use the same experimental setup, but common and relevant conclusions can be found regarding features and electrode locations relevant for music perception. Power density in the frontal and parietal regions has been observed to correlate with emotion detection in music \cite{schmidt2001,altenmuller2002,sammler2007,lin2010,duan2012,hadjidimitriou2012,daly2014,thammasan2014,thammasan2016}. Asymmetrical power density in the frontal region has been linked to music valence perception \cite{schmidt2001,altenmuller2002,daly2014,thammasan2014,thammasan2016}. A link has also been revealed between overall frontal activity and music arousal perception \cite{schmidt2001}. In our work, we follow the previously mentioned major conclusions regarding electrode positioning but not frequency bands, since our proposed architecture is end-to-end, thereby bypassing handcrafted feature selection.
Furthermore, the focus of this paper is on using \ac{EEG} responses as regularizers in the estimation of a generic semantic audio embedding space, as opposed to using \acp{EEG} for studying specific aspects of music. Note that these previous works build systems that can predict those aspects (emotion), given new \ac{EEG} input. Our approach is able to predict generic semantic embeddings given new audio input, as it needs \ac{EEG} data only during training. \section{Deep Canonical Correlation Analysis} \label{sec:dcca} \ac{DCCA} \cite{andrew2013} is a model that learns maximally correlated embeddings between two views of data and is effective at estimating a music audio semantic space by leveraging \ac{EEG} data from several human subjects as regularizers. It is a non-linear extension of \ac{CCA} \cite{hotelling1936} and has previously been applied to learn a correlated space between the audio and lyrics views of music in order to perform cross-modal retrieval \cite{yu2017}. It jointly learns non-linear mappings and canonical weights for each view: \begin{small} \begin{equation} \left(w_x^*,w_y^*,\varphi_x^*,\varphi_y^*\right)=\underset{\left(w_x,w_y,\varphi_x,\varphi_y\right)}{\operatorname{argmax}}\operatorname{corr}\left(w_x^{\bf{T}}\varphi_x\left(\bf{x}\right),w_y^{\bf{T}}\varphi_y\left(\bf{y}\right)\right) \end{equation} \end{small} where $\bf{x}\in{\rm I\!R}^m$ and $\bf{y}\in{\rm I\!R}^n$ are the zero-mean observations for each view, with covariances $C_{xx}$ and $C_{yy}$, respectively, and cross-covariance $C_{xy}$; $\varphi_x$ and $\varphi_y$ are non-linear mappings for each view, and $w_x$ and $w_y$ are the canonical weights for each view. We use backpropagation and minimize: \begin{equation} -\sqrt{\operatorname{tr}\left(\left(C_{XX}^{-1/2}C_{XY}C_{YY}^{-1/2}\right)^{\bf{T}}\left(C_{XX}^{-1/2}C_{XY}C_{YY}^{-1/2}\right)\right)}\ \end{equation} \begin{equation} C_{XX}^{-1/2}=Q_{XX}\Lambda_{XX}^{-1/2} Q_{XX}^{\bf{T}} \end{equation} where $X$ and $Y$ are the non-linear projections of each view, $C_{XX}$ and $C_{YY}$ are the regularized, zero-centered covariances, and $C_{XY}$ is the zero-centered cross-covariance. $Q_{XX}$ are the eigenvectors of $C_{XX}$ and $\Lambda_{XX}$ are the eigenvalues of $C_{XX}$; $C_{YY}^{-1/2}$ is computed analogously. We finish training by computing a forward pass with the training data and fitting a linear \ac{CCA} model on those non-linear mappings. The canonical components of these deep non-linear mappings implement our semantic embedding space. \section{Neural Network Architecture} \label{sec:architecture} Following the success of sample-level \acp{CNN} in music audio modeling \cite{lee2017a}, we propose a novel fully end-to-end architecture for both views/branches of our model: audio and \ac{EEG}. It takes as input 1.5s signal chunks of 22050Hz-sampled mono audio and 250Hz-sampled 16-channel \acp{EEG}, and outputs embeddings that are maximally correlated through their \ac{CCA} projections. We use 1D convolutional layers with ReLU non-linearities, followed by max-pooling layers. We also use batch normalization layers before each convolutional layer \cite{ioffe2015}. Window sizes were chosen so that the remainder of the integer division of the input stream size by the output stream size is 0. We refer to a convolutional layer with filter width $x$, stride length $y$, and $z$ channels as conv-$x$-$y$-$z$, and to a maxpool layer with window and stride length $x$ as mp-$x$.
The audio branch is composed of the following sequence of layers: conv-3-3-128, conv-3-1-128, mp-3, conv-3-1-256, mp-3, conv-5-1-256, mp-5, conv-5-1-512, mp-5, conv-7-1-512, mp-7, conv-7-1-1024, mp-7, conv-1-1-128. The \ac{EEG} branch is: conv-3-3-128, conv-5-1-256, mp-5, conv-5-1-512, mp-5, conv-5-1-1024, mp-5, conv-1-1-128. Figure \ref{fig:model} illustrates the high-level architecture of our model. \begin{figure}[htbp] \begin{center} \includegraphics[width=\columnwidth]{architecture} \caption{High-level deep audio-\ac{EEG} model architecture.} \label{fig:model} \end{center} \end{figure} \section{\ac{EEG} Dataset Collection} \label{sec:dataset} The \ac{EEG} data used in these experiments consist of two out of three subsets belonging to the same dataset, whose collection process is described in this section. All of the 18 subjects listened to 60 music segments and 2 baseline segments (noise and silence), selected by us for further research, in a randomized order. Then, each subject listened to 2 self-chosen full songs in a fixed order. Segments and full songs were separated by 5-second silence intervals. Each listening session took place in a quiet room, with dim light and a comfortable armchair. The subjects were asked to sit and find a relaxed position while the setup was being prepared. Then, the electrodes were placed and the subjects were asked to close their eyes and to move as little as possible, in order to avoid \ac{EOG} and \ac{EMG} artifacts. The headphones were placed and the listening session started when the subjects signaled they were ready. Subjects were informed of this setup beforehand, in order to avoid surprising them. We detail the selections for each subset below. The first subset was built on top of a subset of a \ac{MER} dataset \cite{eerola2011}. This dataset consists of continuous clips (11.13 to 18.08 seconds, average 15.13 seconds) that were chosen in terms of dimensional and discrete emotion models. This subset consists of 60 clips, but it is not used in this paper. The second and third subsets consist of the 2 self-chosen songs, selected according to the following criteria: one favorite song and one song that the subject does not like or does not appreciate as much, as long as the latter belongs to the same artist and album as the first. The favorite song was listened to before the second one. We use the union of both subsets (36 audio-\ac{EEG} pairs) in the experiments of this paper. To record the \acp{EEG}, we used the OpenBCI 32bit Board with the OpenBCI Daisy Module, which provides 16 channels and up to a 16kHz sampling rate. We used the default 250Hz sampling rate. The 16 electrodes were placed according to the Extended International 10-20 system on three regions of interest: frontal, central, and parietal. The locations were chosen based on the results obtained in previous studies described in Section \ref{sec:eeg}. For the frontal region we used the Fp1, Fpz, Fp2, F7, F3, Fz, F4, and F8 locations; for the central region we used the C3, Cz, and C4 locations; and for the parietal region we used the P7, P3, Pz, P4, and P8 locations. \section{Experimental Setup} \label{sec:experiments} We evaluate the semantics learned by our proposed model in a transfer learning context through a music cross-modal audio-lyrics retrieval task, using an independent dataset and model \cite{yu2017}.
We compare the instance- and class-based \ac{MRR} performance of the embeddings produced by our model against a feature set available for crawling from Spotify, and also against state-of-the-art embeddings. Instance-based \ac{MRR} considers only the corresponding cross-modal object as relevant, whereas in class-based \ac{MRR} any cross-modal object of the same class is considered a relevant object for retrieval. Note that we first train our proposed model with the \ac{EEG} dataset and then use this trained model as an audio feature extractor for the independent audio-lyrics dataset when performing cross-modal retrieval. The next sections present the details of these experiments. \subsection{Preprocessing} We applied some preprocessing to the \ac{EEG} signals: namely, we remove the \ac{DC} offset as well as power supply noise, with a $>$ 0.5Hz bandpass filter and a 50Hz notch filter, respectively. We perform \ac{WAR} by decomposing the signal into wavelets; then, for each wavelet independently, we remove coefficients that deviate from the mean value by more than a specific multiplier (5 in our experiments) of the standard deviation; finally, we reconstruct the signals from the modified wavelets. We also use a technique called \ac{WSD} in order to remove \ac{EEG} recording noise \cite{saavedra2013}, which removes coefficients in the wavelet domain when the channels are not correlated enough, i.e., below a threshold between 0 and 1 (0.5 in our experiments). Furthermore, no matter how careful the setup, the overall power of the \ac{EEG} recordings will differ across subjects, across stimuli for the same subject, and even across channels for the same subject and stimulus. This is due to loose contact between the electrodes and the scalp, mainly caused by differences in hair and head shape across people. In order to circumvent this issue, we scale every \ac{EEG} signal between the values of -1 and 1 for each stimulus and channel, independently, after artifact removal but before \ac{WSD}. We also preprocess the audio signals by scaling them to fit between -1 and 1. \subsection{Music Audio-Lyrics Dataset and Model} We use the audio-lyrics dataset of \cite{yu2017}, implement its model, and follow its lyrics feature extraction. The \ac{NN} performing cross-modal retrieval is a 4-layer fully-connected \ac{DCCA}-based model. Layer dimensionalities for both branches are: 512, 256, 128, and 64. We use 32 canonical components. Figure \ref{fig:setup} illustrates how this model is used in the experiments. \begin{figure}[htbp] \begin{center} \includegraphics[width=\columnwidth]{setup} \caption{Audio-lyrics cross-modal task setup.} \label{fig:setup} \end{center} \end{figure} \subsection{Baselines} We compare the performance of our 128-dimensional embeddings against two baselines: a 65-dimensional feature vector provided by Spotify and a 160-dimensional embedding vector from the pre-trained model of \cite{choi2017}. The Spotify set, used before in \cite{mcvicar2012}, consists of rhythmic, harmonic, high-level structure, energy, and timbre features. The pre-trained model features are computed by a \ac{CNN}-based model which was trained on supervised music tags, yet produces embeddings that have been shown to be state-of-the-art in several tasks \cite{choi2017}. Hereafter, we refer to these sets as Spotify and Choi. \subsection{Setup} As detailed before, our end-to-end architecture takes 1.5s of aligned audio and \acp{EEG} as input.
Therefore, we segment every song and the corresponding \ac{EEG} recording into 1.5s chunks for training. When predicting embeddings from this model for a new audio file, we take the average of the embeddings of all 1.5s chunks of audio as the final song-level embedding. We partition each dataset (audio-\ac{EEG} and audio-lyrics) into 5 balanced folds. We train our model for 20 epochs, using batches of size 102, with 5 runs for each fold, leaving the test set out for loss function validation. This means that we have 25 different converged model instances to be used for feature extraction. Then, we run the cross-modal retrieval experiments 5 times for each feature set: our proposed embeddings, the Choi embeddings, and the Spotify features. Thus, we end up running 25$\times$5 cross-modal retrieval experiments for our proposed model. The cross-modal retrieval model is trained for 500 epochs, using batches of size 1000. We report the average instance- and class-based \ac{MRR}. \section{Results and Discussion} \label{sec:results} Table \ref{tab:mrr} shows the \ac{MRR} results. Our proposed embeddings outperform Spotify, which consists of typical handcrafted features, by 1.2 \ac{pp} for instance-based \ac{MRR} and 1.1 \ac{pp} for class-based \ac{MRR}, while performing comparably to Choi, the state-of-the-art embeddings. This is very promising, because Choi's model is trained on more than 2083 hours of music, whereas our model was trained on less than 3 hours of both music and \acp{EEG}. This also means that our model trains faster: in fact, it finishes training in about 20 minutes, using an NVIDIA GeForce GTX 1080 graphics card. Qualitatively, the main contribution of this approach is two-fold: (1) it provides a fine-grained and precise time alignment between the audio and the \ac{EEG} regularizer data; and (2) it bypasses any fixed taxonomy selection for defining music semantics, i.e., it learns about music semantics through observation and modeling of the human brain correlates of music perception. \begin{table}[htbp] \normalsize \centering \caption{Audio-Lyrics Cross-modal retrieval results (\ac{MRR})} \begin{tabular}{c|cc|cc} \multirow{2}{*}{Features} & \multicolumn{2}{c|}{Instance} & \multicolumn{2}{c}{Class}\\ & Audio & Lyrics & Audio & Lyrics\\ \hline Spotify & 23.4\% & 23.4\% & 35.1\% & 35.1\%\\ Choi & 24.7\% & 24.8\% & 36.5\% & 36.4\%\\ Proposed model & 24.6\% & 24.6\% & 36.2\% & 36.2\%\\ \end{tabular} \label{tab:mrr} \end{table} Although we already obtain good results using a simple model, they can be further improved. It is possible to learn an optimal aggregation of the embeddings of each segment using \acsp{LSTM} \cite{hochreiter1997}. Taking a personalized view of each subject is also very likely to improve the estimation of the semantic space, since having a specific set of parameters for the brain activity of each subject is, intuitively, a more realistic model. The recent success of residual learning in \acp{NN} \cite{he2016} suggests that our approach may also benefit from it. Furthermore, different loss functions for constraining the topology of the semantic space can be experimented with, including ones that impose intra-modal constraints on the embeddings to avoid destroying too much structure in each view \cite{hong2017}. When applying this framework to music discovery/recommendation, based on either an audio or an \ac{EEG} query, deep hashing techniques can be leveraged to design a scalable real-world system \cite{cao2017}.
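For completeness, the two retrieval metrics reported above can be computed as in the following minimal sketch (our own illustration; function and variable names are assumptions, not the authors' code):
\begin{verbatim}
import numpy as np

def mrr(sim, query_labels=None, gallery_labels=None):
    # Mean reciprocal rank from a (num_queries x num_gallery)
    # similarity matrix. Instance-based MRR when labels are None
    # (the matching item is assumed to share the query's index);
    # class-based MRR when per-item label arrays are given.
    ranks = []
    for i, row in enumerate(sim):
        order = np.argsort(-row)                 # best match first
        if query_labels is None:
            rank = np.where(order == i)[0][0] + 1
        else:
            hits = gallery_labels[order] == query_labels[i]
            rank = np.argmax(hits) + 1           # first relevant item
        ranks.append(1.0 / rank)
    return float(np.mean(ranks))

sim = np.array([[0.9, 0.1], [0.2, 0.8]])
print(mrr(sim))   # 1.0: each query ranks its own pair first
\end{verbatim}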
\section{Conclusions and Future Work} \label{sec:conclusions} We proposed a novel generic framework that sets up a new approach to music semantics, together with a concrete architecture that implements it. We use \acp{EEG} as regularizers to learn a maximally correlated audio-\ac{EEG} space, which outperforms handcrafted features and performs comparably to a state-of-the-art model that was trained with 700 times more audio data. Music embeddings can be predicted for new objects given an audio file and used for general-purpose tasks, such as classification, regression, and retrieval. Future work includes a validation of these semantic spaces for music discovery as well as in other transfer learning settings. The model can be improved through several extensions, such as \acs{LSTM}, residual connections, personalized views, and other loss functions that model intra-modal constraints. Finally, it is worth studying this framework in the context of other multimedia domains. \bibliographystyle{IEEEtran}
{ "timestamp": "2017-12-18T02:08:14", "yymm": "1712", "arxiv_id": "1712.05197", "language": "en", "url": "https://arxiv.org/abs/1712.05197" }
\section{Introduction} \label{sec:intro} \subsection{Motivation} Measuring the dissimilarities of objects is of particular importance in many areas of mathematics and mathematical physics. From our viewpoint, the most important area with the above property is the field of classical and quantum information theory. \par The measure of the dissimilarity of elements of a certain space may be a true metric, but there are several important examples where this is not the case. In classical information theory, the most fundamental quantity measuring the dissimilarity of probability distributions is the \emph{relative entropy} (or \emph{Kullback-Leibler divergence} \cite{kullback-leibler}), which is far from being a true metric. Indeed, the relative entropy is not symmetric, and it does not satisfy the triangle inequality either. Despite these ``defects'', the Kullback-Leibler divergence (and its non-commutative analogue, the \emph{Umegaki relative entropy}) has several very useful properties which make it a central quantity of classical (respectively, quantum) information theory. One of these useful properties is \emph{joint convexity}. (A two-variable function $h$ defined on a convex set $C$ is said to be jointly convex if $h\ler{\sum_{j=1}^m \alpha_j x_j, \sum_{j=1}^m \alpha_j y_j} \leq \sum_{j=1}^m \alpha_j h \ler{x_j, y_j}$ holds for any $x_1, \dots, x_m, y_1, \dots, y_m \in C$ and $\alpha_1, \dots, \alpha_m \in [0,1]$ with $\sum_{j=1}^m \alpha_j =1.$) For \emph{homogeneous} relative entropy-type quantities, joint convexity is equivalent to monotonicity under stochastic maps (both in classical and in quantum information theory; the non-commutative counterpart of a stochastic map is the \emph{completely positive trace preserving} (CPTP) map). For details, see \cite[remarks after Def. 2.3]{les-rus}. \par The aim of this paper is to investigate \emph{quantum Jensen divergences} from the viewpoint of joint convexity. Given a convex function $\varphi$ on a convex set $C$ and a number $\lambda \in (0,1),$ the Jensen divergence (which will be denoted by $j_{\varphi, \l}(.,.)$) of the objects $a,b \in C$ is nothing else than the difference between the two sides of the Jensen inequality, that is, $$ j_{\varphi, \l}(a,b)=(1-\l)\varphi(a) + \l \varphi(b) -\varphi\ler{(1-\l)a+\l b}. $$ By quantum Jensen divergences we mean Jensen divergences of self-adjoint matrices which are generated by functions of the form $A \mapsto \tr f(A).$ (Note that the convexity of the function $f: \R \rightarrow \R$ ensures the convexity of the map $A \mapsto \tr f(A)$ on the set of self-adjoint matrices; see, e.g., \cite[Thm. 2.10.]{carlen}.) For the precise definition of quantum Jensen divergences, see Definition \ref{def:jensen-div}. \par In quantum information theory, we are mainly interested in Jensen divergences of positive matrices (or, even more restrictively, density matrices). Certain special quantum Jensen divergences (e.g., the quantum Jensen-Shannon divergence) have interesting metric properties as well (see \cite{briet-harremoes} and \cite{lamberti}). The generalized isometries of the Jensen divergences on positive definite matrices have been determined in \cite{mpv-15}.
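As a simple numerical illustration of the objects studied below, the quantum Jensen divergence of Definition \ref{def:jensen-div} can be evaluated directly via the spectral decomposition. The following is a minimal sketch (our own illustration, not part of the original paper), with $f(x)=x\log x$, for which $J_{f,1/2}$ is essentially the quantum Jensen-Shannon divergence:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def tr_f(A, f):
    # trace functional calculus: tr f(A) for self-adjoint A
    return np.sum(f(eigh(A, eigvals_only=True)))

def quantum_jensen_div(A, B, f, lam=0.5):
    # J_{f,lam}(A,B) = (1-lam) tr f(A) + lam tr f(B)
    #                  - tr f((1-lam) A + lam B)
    return ((1 - lam) * tr_f(A, f) + lam * tr_f(B, f)
            - tr_f((1 - lam) * A + lam * B, f))

f = lambda x: x * np.log(x + 1e-300)   # f(x) = x log x, guarded at 0
rng = np.random.default_rng(0)

def random_density(n=3):
    # random positive definite matrix with unit trace
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = X @ X.conj().T
    return A / np.trace(A).real

A, B = random_density(), random_density()
print(quantum_jensen_div(A, B, f))     # nonnegative, by convexity
\end{verbatim}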
\par The main result of our paper is that for a convex function $f$ defined on the positive half-axis, the quantum Jensen divergence induced by the map $A \mapsto \tr f(A)$ is jointly convex on the set of positive definite matrices if and only if the generating function $f$ belongs to the \emph{Matrix Entropy Class}, a function class introduced recently by Chen and Tropp \cite{chen-tropp}. This result may be considered a continuation of our previous work on the joint convexity of \emph{Bregman divergences} \cite{pv15}. \subsection{Basic notions, notation} We denote the set of all complex $n \times n$ matrices by $M_n(\C),$ and the sets of all self-adjoint, positive semidefinite, and positive definite complex $n \times n$ matrices are denoted by $M_n^{sa}(\C), \, M_n^{+}(\C),$ and $M_n^{++}(\C),$ respectively. The spectrum of a matrix $A \in M_n(\C)$ is denoted by $\sigma(A).$ We denote the interior of a set $X$ by $\mathrm{int}(X).$ Throughout this paper, the symbol $\cH$ stands for a Hilbert space (either real or complex), and $\bh$ denotes the set of all bounded linear operators on $\cH.$ We denote the sets of all self-adjoint, positive semidefinite, and positive definite bounded linear operators on $\cH$ by $\bh^{sa}, \bh^{+},$ and $\bh^{++},$ respectively. There is a natural partial order on the set of self-adjoint operators, the \emph{L\"owner order.} This is the partial order defined by the cone of the positive semidefinite operators, that is, $A \leq B$ if and only if $B-A \in \bh^{+}.$ Recall that the set $M_n^{sa}(\C)$ is a real Hilbert space of dimension $n^2$ with the \emph{Hilbert-Schmidt} inner product $\inner{A}{B}_{HS}=\tr A B.$ \par If $f: \R \supseteq \mathrm{dom} f \rightarrow \R$ is a function and $A \in M_n^{sa}(\C)$ is such that $\sigma(A) \subset \mathrm{dom} f,$ then the expression $f(A)$ is defined by the functional calculus, that is, $f(A)=\sum_k f\ler{\lambda_k} P_k,$ where the $\lambda_k$'s are the eigenvalues and the $P_k$'s are the eigenprojections of $A=\sum_k \lambda_k P_k.$ If $\mathrm{dom} f$ is an open interval $(a,b) \subseteq \R$ and $f \in C^1(a,b),$ then the map \be \label{eq:func-calc} \left\{A \in M_n^{sa}(\C) \, \middle| \, \sigma(A) \subset (a,b)\right\} \rightarrow M_n^{sa}(\C); \, A \mapsto f(A) \ee is \emph{Fr\'echet differentiable} at every point of its domain. The Fr\'echet derivative of the map \eqref{eq:func-calc} at the point $A$ is denoted by $\mathbf{D}f [A]\{.\}$ (or simply $\mathbf{D}f[A]$), and $\mathbf{D}f [A]\{.\} \in \cB\ler{M_n^{sa}(\C)}^{sa}.$ For every $B \in M_n^{sa}(\C),$ the equation $$ \lim_{t \to 0} \frac{1}{t}\ler{f\ler{A+tB}-f(A)}=\mathbf{D}f[A]\{B\} $$ holds. Moreover, $\mathbf{D}f [A]\{.\}$ can be given in a very concrete form. Assume that $A \in M_n^{sa}(\C)$ admits the spectral decomposition $A=\sum_{k=1}^n \lambda_k v_k \otimes v_k,$ where $v_1, \dots, v_n$ is an orthonormal basis in $\C^n$ and for $u,v \in \C^n$ the symbol $u \otimes v$ denotes the linear map $u \otimes v: \C^n \rightarrow \C^n; \, w \mapsto u \otimes v (w):=\inner{w}{v} u.$ Let us introduce the short notation $E_{jk}:=v_j \otimes v_k \, (j,k \in \{1, \dots, n\}).$ The set of ``matrix units'' $\left\{E_{jk}\right\}_{j,k=1}^n$ forms an orthonormal basis of $M_n^{sa}(\C)$ with respect to the Hilbert-Schmidt inner product.
The map $\mathbf{D}f [A]\{.\} \in \cB\ler{M_n^{sa}(\C)}^{sa}$ is given by \be \label{eq:frech-diff} M_n^{sa}(\C) \ni \sum_{j,k=1}^n \alpha_{jk} E_{jk} \mapsto \sum_{j,k=1}^n \alpha_{jk} f^{[1]}\left[\l_j, \l_k\right] E_{jk}, \text{ where } f^{[1]}\left[x,y\right]=\begin{cases}\frac{f(x)-f(y)}{x-y} \text{ if } x \neq y, \\ f'(x) \text{ if } x=y.\end{cases} \ee For details, the reader is advised to consult \cite[Thm. 3.33]{HP-book} and also \cite[Thm. 3.25]{HP-book}. \begin{definition} \label{def:jensen-div} Let $I \subseteq \R$ be an interval and let $f$ be a continuous convex function defined on $I.$ Fix $\l \in (0,1).$ The \emph{quantum Jensen $(f,\l)$-divergence} of the self-adjoint matrices $A$ and $B$ with $\sigma(A) \cup \sigma(B) \subseteq I$ is denoted by $J_{f,\l}(A,B)$ and is defined by \be \label{eq:q-jensen-div} J_{f, \l}(A,B):=(1-\l)\tr f(A)+\l \tr f(B) -\tr f \ler{(1-\l)A+\l B}. \ee \end{definition} \section{The main result} In the paper \cite{chen-tropp}, Chen and Tropp introduced a function class which they called the \emph{Matrix Entropy Class.} Let us recall its definition. \begin{definition} \label{def:MEC} The \emph{Matrix Entropy Class} consists of the functions $f: [0, \infty) \rightarrow \R$ which are either affine or satisfy the following conditions: \bit \item $f$ is continuous and convex, and $f \in C^2((0,\infty)),$ \item for every $n \in \N$ and for every $A \in M_n^{++}(\C),$ the linear map $\mathbf{D}f'[A] \in \cB\ler{M_n^{sa}(\C)}^{sa}$ is invertible, and the map $M_n^{++}(\C) \rightarrow \cB\ler{M_n^{sa}(\C)}^{sa};\ A \mapsto \ler{\mathbf{D}f'[A]}^{-1}$ is concave with respect to the L\"owner order. \eit \end{definition} \begin{remark} \label{rem:of} Note that for a convex function $f \in C^2((0,\infty)),$ the existence of $\ler{\mathbf{D}f'[A]}^{-1}$ for every $A \in M_n^{++}(\C)$ is equivalent to the property $f''>0$ on $(0,\infty).$ Indeed, convexity implies that $f''\geq 0$ on $(0, \infty),$ and if $f''(c)=0$ for some $c \in (0,\infty),$ then formula \eqref{eq:frech-diff} clearly shows that $\mathbf{D}f'[c I_n]=0 \in \cB\ler{M_n^{sa}(\C)}^{sa},$ which is not invertible. (Here the symbol $I_n$ denotes the identity element of $M_n(\C).$) On the other hand, formula \eqref{eq:frech-diff} also shows that the property $f''>0$ ensures that $\mathbf{D}f'[A]$ is a positive definite and hence invertible operator for every $A \in M_n^{++}(\C).$ \end{remark} \begin{remark} \label{rem:kernel} Let us introduce the notation $\mc{C}:=\left\{f \,\middle|\, f: [0, \infty) \rightarrow \R, \, f \text{ is continuous and convex} \right\}.$ Clearly, $\mc{C}$ is a convex cone. Let $\lambda \in (0,1)$ be fixed. The map $$ \mc{C} \rightarrow [0,\infty)^{M_n^{+}(\C) \times M_n^{+}(\C)}; \, f \mapsto J_{f, \l}(.,.) $$ is additive and positively homogeneous, and its kernel coincides with the set of affine functions. That is, $J_{f, \l}(.,.)=0$ if and only if $f(x)=\alpha x + \beta$ for some $\alpha, \beta \in \R.$ This means that affine functions are of no interest from the viewpoint of Jensen divergences. \end{remark} Now we are in a position to state our main result, which characterizes the jointly convex quantum Jensen divergences. \begin{theorem} \label{thm:main} Let $f$ be a continuous convex function on $[0, \infty)$ such that $f \in C^2((0, \infty))$ and $f''>0$ on $(0, \infty).$ The following are equivalent. \ben [label=(\Roman*)] \item \label{prop:mec} The function $f$ belongs to the Matrix Entropy Class.
\item \label{prop:jc-jen-div} The map $$ J_{f,\l}(.,.): \, M_n^{++}(\C) \times M_n^{++}(\C) \rightarrow [0,\infty); \, (A,B) \mapsto J_{f, \l}(A,B) $$ is jointly convex. \een \end{theorem} \section{The proof of the main result} The proof of Theorem \ref{thm:main} has essentially two ingredients. The first one is described in Subsection \ref{subsec:int-rep}, while Subsection \ref{subsec:conc-conv} is devoted to the second one. Subsection \ref{subsec:proof} contains the proof itself, which refers to the results of the previous subsections. \subsection{An integral representation of Jensen divergences} \label{subsec:int-rep} The following integral representation of quantum Jensen divergences turns out to be useful in the sequel, although it looks a bit more complicated than the definition itself. \begin{claim} \label{claim:int-rep} Assume that $I \subseteq \R$ is an interval and $f: I\rightarrow \R$ is continuous and convex. Furthermore, assume that $f \in C^2\ler{\mathrm{int}(I)}$ and $A, B \in M_n^{sa}(\C)$ with $\sigma(A) \cup \sigma(B) \subset \mathrm{int}(I).$ Then the quantum Jensen $(f, \l)$-divergence $J_{f, \l}(A,B)$ (defined in \eqref{eq:q-jensen-div}) admits the following integral representation: \be \label{eq:jen_int_repr} J_{f,\l}(A,B)=\l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[\xi_{\l,t,s}(A,B) \right]\left\{B-A\right\}}{B-A}_{HS} \ds \dt, \ee where \be \label{eq:xi-def} \xi_{\l,t,s}(A,B)=A+t \l (B-A)+s\ler{(1-t)\l+t(1-\l)}(B-A). \ee \end{claim} \begin{proof} Clearly, $$ J_{f, \l}(A,B)=\l \ler{\tr f(B)-\tr f\ler{(1-\l)A+\l B}}+(1-\l)\ler{\tr f(A)-\tr f\ler{(1-\l)A+\l B}}. $$ By the Newton-Leibniz formula, the above equation can be written as follows: $$ J_{f, \l}(A,B)=\l \int_0^1 \frac{\mathrm{d}}{\dt}\left\{\tr f\ler{A+\l(B-A)+t(1-\l)(B-A)}\right\} \dt -(1-\l) \int_0^1 \frac{\mathrm{d}}{\dt} \left\{\tr f\ler{A+t \l (B-A)}\right\} \dt. $$ Now we use the well-known identity (see, e.g., \cite[Thm. 3.23]{HP-book}) $$ \frac{\mathrm{d}}{\mathrm{d}\tau} \tr f\ler{X+\tau Y}_{|\tau=0}=\tr f'(X)Y $$ to deduce that $$ J_{f, \l}(A,B)=\l \int_0^1 \tr f'\ler{A+\l(B-A)+t(1-\l)(B-A)} (1-\l)(B-A) \dt - $$ $$ -(1-\l) \int_0^1 \tr f'\ler{A+t \l (B-A)}\l (B-A) \dt, $$ that is, \be \label{eq:jlf1} J_{f, \l}(A,B)=\l (1-\l)\int_0^1\tr \left\{f'\ler{A+\l(B-A)+t(1-\l)(B-A)}-f'\ler{A+t \l (B-A)}\right\}(B-A) \dt. \ee The (appropriate version of the) Newton-Leibniz formula tells us that $$ f'\ler{A+\l(B-A)+t(1-\l)(B-A)}-f'\ler{A+t \l (B-A)} $$ $$ =\int_0^1 \frac{\mathrm{d}}{\ds} \left\{f'\ler{A+t\l (B-A)+s\ler{(1-t)\l +t(1-\l)}(B-A)}\right\}\ds. $$ Clearly, this latter expression is equal to \be \label{eq:belso-int-rep} \int_0^1 \mathbf{D}f' \left[A+t\l (B-A)+s\ler{(1-t)\l +t(1-\l)}(B-A)\right]\left\{\ler{(1-t)\l +t(1-\l)}(B-A)\right\}\ds. \ee Now let us use the notation $\xi_{\l,t,s}(A,B)=A+t \l (B-A)+s\ler{(1-t)\l+t(1-\l)}(B-A)$ and substitute the expression \eqref{eq:belso-int-rep} into \eqref{eq:jlf1}. We get that $$ J_{f, \l}(A,B)=\l (1-\l)\int_0^1\tr \left\{\int_0^1 \mathbf{D}f' \left[\xi_{\l,t,s}(A,B)\right]\left\{\ler{(1-t)\l +t(1-\l)}(B-A)\right\}\ds\right\}(B-A) \dt $$ $$ =\l (1-\l)\int_0^1 \ler{(1-t)\l +t(1-\l)} \int_0^1 \tr \mathbf{D}f' \left[\xi_{\l,t,s}(A,B)\right]\left\{B-A\right\}(B-A)\ds \dt $$ $$ =\l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[\xi_{\l,t,s}(A,B) \right]\left\{B-A\right\}}{B-A}_{HS} \ds \dt, $$ which completes the proof.
\end{proof} \subsection{The equivalence of a certain concavity property and a certain convexity property} \label{subsec:conc-conv} The following claim is an essential part of our argument. This statement appeared in the proof of Thm. 1 in \cite{pv15} in a somewhat hidden way, and it also appeared in \cite[Thm. 2.1]{hansen-zhang} in a slightly more concrete form and with a different proof. \begin{claim} \label{claim:conc-conv} Let $\ler{\cH, \inner{.}{.}}$ be a Hilbert space, let $C$ be a convex set, and let $\varphi: C \rightarrow \bh^{++}$ be a map. The following are equivalent. \ben[label=(\Alph*)] \item \label{prop:conc} The map $\psi: C \rightarrow \bh^{++}; \, x \mapsto \psi(x):=\ler{\varphi(x)}^{-1}$ is concave with respect to the L\"owner order. \item \label{prop:conv} The map $\eta: C \times \cH \rightarrow [0,\infty); \, (x,v) \mapsto\eta(x,v):=\inner{\varphi(x)v}{v}$ is convex. \een \end{claim} \begin{proof} Let us show the direction \ref{prop:conc} $\Longrightarrow$ \ref{prop:conv} first. Let $x_1, \dots, x_m \in C$ and $v_1, \dots, v_m \in \cH$ be arbitrary, and let $\alpha_1,\dots, \alpha_m \in [0,1]$ be such that $\sum_{j=1}^m \alpha_j=1.$ By the concavity assumption \ref{prop:conc} we have \be \label{eq:1conc} \ler{\varphi\ler{\sum_{j=1}^m \alpha_j x_j}}^{-1} \geq \sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}. \ee It is well-known that the function $t \mapsto 1/t$ is operator monotone decreasing on $(0,\infty),$ hence \eqref{eq:1conc} is equivalent to \be \label{eq:2conc} \varphi\ler{\sum_{j=1}^m \alpha_j x_j} \leq \ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1}. \ee By the definition of the L\"owner order, \eqref{eq:2conc} implies that \be \label{eq:ingr1} \inner{\varphi\ler{\sum_{j=1}^m \alpha_j x_j} \ler{\sum_{j=1}^m \alpha_j v_j}}{\sum_{j=1}^m \alpha_j v_j} \leq \inner{\ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1}\ler{\sum_{j=1}^m \alpha_j v_j}}{\sum_{j=1}^m \alpha_j v_j}. \ee It is well-known that for any Hilbert space $\cH$ the map \be \label{eq:lieb-conv} \zeta: \bh^{++} \times \cH \rightarrow [0,\infty);\, (T,v) \mapsto \zeta(T,v):=\inner{T^{-1} v}{v} \ee is convex. (This statement can be obtained easily as a consequence of \cite[Thm. 1]{lieb-rusk-schwarz}, and it appears in this form in, e.g., \cite[Prop. 4.3]{hansen-j-stat}.) By \eqref{eq:lieb-conv}, we obtain that \be \label{eq:ingr2} \inner{\ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1}\ler{\sum_{j=1}^m \alpha_j v_j}}{\sum_{j=1}^m \alpha_j v_j} \leq \sum_{j=1}^m \alpha_j \inner{\ler{\ler{\varphi\ler{x_j}}^{-1}}^{-1}v_j}{v_j}. \ee Inequalities \eqref{eq:ingr1} and \eqref{eq:ingr2} together imply the desired convexity property $$ \inner{\varphi\ler{\sum_{j=1}^m \alpha_j x_j} \ler{\sum_{j=1}^m \alpha_j v_j}}{\sum_{j=1}^m \alpha_j v_j} \leq \sum_{j=1}^m \alpha_j \inner{\varphi\ler{x_j} v_j}{v_j}. $$ To prove the direction \ref{prop:conv} $\Longrightarrow$ \ref{prop:conc}, let us consider an arbitrary vector $u \in \cH.$ Let us define \be \label{eq:wi-def} w_i:=\ler{\varphi\ler{x_i}}^{-1} \circ \ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} (u).
\ee Observe that $\sum_{i=1}^m \alpha_i w_i= u$ (indeed, summing the definition \eqref{eq:wi-def} against the weights $\alpha_i$ makes the operator $\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}$ cancel against its inverse). Therefore, the expression $\inner{\varphi\ler{\sum_{i=1}^m \alpha_i x_i} u}{u}$ can be written in the form $\inner{\varphi\ler{\sum_{i=1}^m \alpha_i x_i} \ler{\sum_{i=1}^m \alpha_i w_i}}{\sum_{i=1}^m \alpha_i w_i}$ and $\inner{\ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} (u)}{u}$ can be written as $\sum_{i=1}^m \alpha_i \inner{\varphi\ler{x_i} w_i}{w_i}$ because $$ \sum_{i=1}^m \alpha_i \inner{\varphi\ler{x_i} w_i}{w_i}= \sum_{i=1}^m \alpha_i \inner{\varphi\ler{x_i} \circ \ler{\varphi\ler{x_i}}^{-1} \circ \ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} (u)}{w_i} $$ $$ = \inner{\ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} (u)}{\sum_{i=1}^m \alpha_i w_i} = \inner{\ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} (u)}{u}. $$ By the convexity assumption \ref{prop:conv} we have $$ \inner{\varphi\ler{\sum_{i=1}^m \alpha_i x_i} \ler{\sum_{i=1}^m \alpha_i w_i}}{\sum_{i=1}^m \alpha_i w_i} \leq \sum_{i=1}^m \alpha_i \inner{\varphi\ler{x_i} w_i}{w_i}, $$ which is equivalent to $$ \inner{\varphi\ler{\sum_{i=1}^m \alpha_i x_i} u}{u} \leq \inner{\ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} (u)}{u} $$ by the above observations. The vector $u$ was arbitrary, which means that we have \be \label{eq:majd-conc} \varphi\ler{\sum_{i=1}^m \alpha_i x_i} \leq \ler{\sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}}^{-1} \ee in the L\"owner sense. Taking the inverse reverses the order on positive definite operators, hence \eqref{eq:majd-conc} is equivalent to the desired concavity property $$ \ler{\varphi\ler{\sum_{i=1}^m \alpha_i x_i}}^{-1} \geq \sum_{j=1}^m \alpha_j \ler{\varphi\ler{x_j}}^{-1}. $$ \end{proof} \subsection{The proof of Theorem \ref{thm:main}} \label{subsec:proof} \begin{proof} Let us consider the direction \ref{prop:mec} $\Longrightarrow$ \ref{prop:jc-jen-div}. Assume that the map $M_n^{++}(\C) \rightarrow \cB\ler{M_n^{sa}(\C)}^{sa};\ A \mapsto \ler{\mathbf{D}f'[A]}^{-1}$ is concave with respect to the L\"owner order. Note that by the property $f''>0$ the operator $\mathbf{D}f'[A]$ is positive definite for every $A \in M_n^{++}(\C)$ (see Remark \ref{rem:of}). This means that the image of the map $A \mapsto \mathbf{D}f'[A]$ is contained in $\cB\ler{M_n^{sa}(\C)}^{++}.$ Therefore, we can apply Claim \ref{claim:conc-conv} and deduce that the map \be \label{eq:konkr-konv} M_n^{++}(\C) \times M_n^{sa}(\C) \rightarrow [0, \infty); \, (X,Y) \mapsto \inner{\mathbf{D}f'[X]\{Y\}}{Y}_{HS} \ee is jointly convex. Furthermore, for fixed $\l \in (0,1)$ and $t,s \in [0,1],$ the map $\xi_{\l,t,s}: \, (A, B) \mapsto \xi_{\l,t,s}(A,B)$ is affine, that is, $\xi_{\l,t,s}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j B_j}=\sum_{j=1}^m \alpha_j \xi_{\l,t,s}\ler{A_j,B_j}.$ (For the definition of $\xi_{\l,t,s},$ see formula \eqref{eq:xi-def}.)
The convexity of \eqref{eq:konkr-konv} and the affine property of $\xi_{\l,t,s}$ together imply that $$ \inner{\mathbf{D}f'\left[\xi_{\l,t,s}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j B_j}\right]\left\{\sum_{j=1}^m \alpha_j B_j-\sum_{j=1}^m \alpha_j A_j\right\}}{\sum_{j=1}^m \alpha_j B_j-\sum_{j=1}^m \alpha_j A_j}_{HS} $$ $$ =\inner{\mathbf{D}f'\left[\sum_{j=1}^m \alpha_j \xi_{\l,t,s}\ler{A_j,B_j}\right]\left\{\sum_{j=1}^m \alpha_j \ler{B_j-A_j}\right\}}{\sum_{j=1}^m \alpha_j \ler{B_j-A_j}}_{HS} $$ $$ \leq \sum_{j=1}^m \alpha_j \inner{\mathbf{D}f'\left[\xi_{\l,t,s}\ler{A_j,B_j}\right]\left\{B_j-A_j\right\}}{B_j-A_j}_{HS} $$ holds for all $A_1, \dots, A_m, B_1, \dots, B_m \in M_n^{++}(\C)$ and $\alpha_1, \dots, \alpha_m \in [0,1]$ with $\sum_{j=1}^m \alpha_j =1.$ Consequently, \footnotesize $$ \l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[\xi_{\l,t,s}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j B_j}\right]\left\{\sum_{j=1}^m \alpha_j B_j-\sum_{j=1}^m \alpha_j A_j\right\}}{\sum_{j=1}^m \alpha_j B_j-\sum_{j=1}^m \alpha_j A_j}_{HS}\ds \dt $$ \normalsize $$ \leq \l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \ler{\sum_{j=1}^m \alpha_j \inner{\mathbf{D}f'\left[\xi_{\l,t,s}\ler{A_j,B_j}\right]\left\{B_j-A_j\right\}}{B_j-A_j}_{HS} }\ds \dt. $$ By the result of Claim \ref{claim:int-rep}, the above inequality is nothing but $J_{f,\l}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j B_j} \leq \sum_{j=1}^m \alpha_j J_{f,\l} \ler{A_j, B_j},$ which is the desired joint convexity property. \par Now we prove the direction \ref{prop:jc-jen-div} $\Longrightarrow$ \ref{prop:mec}. Let $A_1, \dots, A_m \in M_n^{++}(\C), \, B_1, \dots, B_m \in M_n^{sa}(\C),$ and $\alpha_1, \dots, \alpha_m \in [0,1]$ with $\sum_{j=1}^m \alpha_j =1$ be arbitrary but fixed. Let us choose $\eps_0>0$ such that $A_j+\eps B_j \in M_n^{++}(\C)$ for every $0 < \eps <\eps_0$ and $j \in \{1,\dots, m\}.$ By assumption, we have $$ J_{f,\l}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j A_j + \eps \sum_{j=1}^m \alpha_j B_j }\leq \sum_{j=1}^m \alpha_j J_{f,\l}\ler{A_j, A_j+\eps B_j}. $$ By the integral representation of the Jensen divergences (Claim \ref{claim:int-rep}), the above inequality is equivalent to \small $$ \eps^2 \l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[\xi_{\l,t,s}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j A_j + \eps \sum_{j=1}^m \alpha_j B_j } \right]\left\{\sum_{j=1}^m \alpha_j B_j\right\}}{\sum_{j=1}^m \alpha_j B_j}_{HS} \ds \dt $$ \normalsize \be \label{eq:egyen} \leq \eps^2 \sum_{j=1}^m \alpha_j \l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[\xi_{\l,t,s}\ler{A_j,A_j+\eps B_j} \right]\left\{B_j\right\}}{B_j}_{HS} \ds \dt. \ee The continuity of $f''$ ensures that the map $X \mapsto \mathbf{D} f' [X]$ is continuous.
It is clear from formula \eqref{eq:xi-def} that $\lim_{\eps \to 0} \xi_{\l,t,s}\ler{\sum_{j=1}^m \alpha_j A_j, \sum_{j=1}^m \alpha_j A_j + \eps \sum_{j=1}^m \alpha_j B_j }=\sum_{j=1}^m \alpha_j A_j$ and $\lim_{\eps \to 0} \xi_{\l,t,s}\ler{A_j,A_j+\eps B_j}=A_j.$ Therefore, if we divide the inequality \eqref{eq:egyen} by $\eps^2$ and take the limit $\eps \to 0,$ then we get that $$ \l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[\sum_{j=1}^m \alpha_j A_j \right]\left\{\sum_{j=1}^m \alpha_j B_j\right\}}{\sum_{j=1}^m \alpha_j B_j}_{HS} \ds \dt $$ $$ \leq \sum_{j=1}^m \alpha_j \l (1-\l)\int_{0}^{1}\ler{(1-t)\l+t(1-\l)} \int_{0}^1 \inner{\mathbf{D}f'\left[A_j \right]\left\{B_j\right\}}{B_j}_{HS} \ds \dt, $$ that is, $$ \inner{\mathbf{D}f'\left[\sum_{j=1}^m \alpha_j A_j \right]\left\{\sum_{j=1}^m \alpha_j B_j\right\}}{\sum_{j=1}^m \alpha_j B_j}_{HS} \leq \sum_{j=1}^m \alpha_j \inner{\mathbf{D}f'\left[A_j \right]\left\{B_j\right\}}{B_j}_{HS}. $$ We have thus deduced that the map given by \eqref{eq:konkr-konv} is jointly convex. By Claim \ref{claim:conc-conv}, this means that the map $A \mapsto \ler{\mathbf{D}f'[A]}^{-1}$ is concave with respect to the L\"owner order, and hence the function $f$ belongs to the Matrix Entropy Class. The proof is complete. \end{proof}
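\par Although the proof above is entirely analytic, Theorem \ref{thm:main} is easy to test numerically. The following Python/NumPy sketch (all helper names are ours and merely illustrative) takes the standard entropy $f(x)=x\log x,$ which belongs to the Matrix Entropy Class \cite{chen-tropp}, evaluates $J_{f,\l}$ through the spectral decomposition, and checks midpoint joint convexity on random positive definite matrices, in accordance with the implication \ref{prop:mec} $\Longrightarrow$ \ref{prop:jc-jen-div}.
\begin{verbatim}
import numpy as np

def tr_f(A):
    # tr f(A) for f(x) = x log x, computed via the eigenvalues
    # (the functional calculus for Hermitian matrices)
    w = np.linalg.eigvalsh(A)
    return float(np.sum(w * np.log(w)))

def J(A, B, lam):
    # the quantum Jensen (f, lam)-divergence J_{f,lam}(A, B)
    return (1 - lam) * tr_f(A) + lam * tr_f(B) - tr_f((1 - lam) * A + lam * B)

def random_pd(n, rng):
    # random positive definite matrix, bounded away from zero
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return X @ X.conj().T + 0.1 * np.eye(n)

rng = np.random.default_rng(0)
lam, n = 0.3, 4
for _ in range(1000):
    A1, A2, B1, B2 = (random_pd(n, rng) for _ in range(4))
    lhs = J(0.5 * (A1 + A2), 0.5 * (B1 + B2), lam)
    rhs = 0.5 * J(A1, B1, lam) + 0.5 * J(A2, B2, lam)
    assert lhs <= rhs + 1e-10  # midpoint joint convexity
print("no violation of joint convexity found")
\end{verbatim}
Conversely, for a convex $f$ with $f''>0$ on $(0,\infty)$ that lies outside the Matrix Entropy Class, the implication \ref{prop:jc-jen-div} $\Longrightarrow$ \ref{prop:mec} guarantees that some quadruple of matrices violates the above inequality. \bibliographystyle{amsplain}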
{ "timestamp": "2018-03-09T02:06:41", "yymm": "1712", "arxiv_id": "1712.05324", "language": "en", "url": "https://arxiv.org/abs/1712.05324" }